About the position
We are looking for Data Engineers to support one of our best-in-class clients in financial services. You'll work on a range of projects, from on-premises solutions to cloud migration. Our client's Big Data Lake is the largest aggregation of data ever assembled within financial services, with over 300 sources and a rapidly growing book of work.
TECH: Java/Scala/Spark/Hadoop/Python and GCP, OOP (object-oriented programming).
English at C1 level or above is a must.
THINGS YOU WILL DO
- Transform the business and system requirements into solution designs and functional requirements
- Deliver an ecosystem of curated, enriched, and protected sets of data – created from global, raw, structured, and unstructured sources
- Own the data development process: design, build, and test complex, large-scale data products
- Promote development standards through code reviews, mentoring, testing, and Scrum story writing
- Cooperate with customers/stakeholders, product owners, business users and other subject matter experts
TECH STACK: ETL, Hadoop-based analytics (HBase, Hive, MapReduce, Kafka, Spark, BI, databases, etc.), Java/Scala/Python, Spark, GCP (Cloud Storage, BigQuery, Pub/Sub, Dataflow), Jenkins, GitHub, SQL
SKILLS & EXPERIENCES YOU NEED TO GET THE JOB DONE
- Experience in Scala, Python, or Java
- Experience in on-premises Unix/Linux environments
- Experience with developing RESTful APIs
- Experience in Elasticsearch
- Experience building data pipelines using Hadoop components (Apache Hadoop, Scala, Apache Spark, YARN, Hive, SQL)
- Knowledge of industry-standard version control tools (Git, GitHub) and automated deployment tools (Ansible & Jenkins)
- Knowledge of SDLC and SQL
- Ability to understand user requirements and functional specifications
- Excellent communication, interpersonal, and decision-making skills
- Good command of English
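For illustration only (not part of the role description): the pipeline work above typically follows an extract-transform-load pattern. Below is a minimal Python sketch using a toy in-memory dataset with hypothetical field names; in a real project the extract and load steps would run against Spark, Hive, or BigQuery rather than plain Python structures.

```python
import csv
import io
from collections import defaultdict

# Toy raw source; in practice this would be files on HDFS/GCS read via Spark.
RAW = """source,amount,currency
trades,100.5,USD
trades,200.0,USD
payments,50.0,EUR
payments,bad,EUR
"""

def extract(text):
    """Parse CSV rows from a raw text source into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Curate the data: drop malformed rows and cast amounts to float."""
    clean = []
    for row in rows:
        try:
            clean.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # skip rows whose amount cannot be parsed
    return clean

def load(rows):
    """Aggregate amounts per source, standing in for a write to a curated table."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["source"]] += row["amount"]
    return dict(totals)

curated = load(transform(extract(RAW)))
```

The same extract/curate/aggregate shape carries over to Spark or Dataflow jobs; only the execution engine and the storage layers change.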
Location: Kraków