About the position
As a recruitment company, DCG understands that every business is powered by experienced professionals. Our management style and partnership approach enable us to meet your needs and provide continuous support. Due to our ongoing growth and the large number of recruitment projects we undertake for our partners, we are currently looking for:
Senior Data Engineer
Offer:
- Private medical care
- Co-financed sports card
- Ongoing support from a dedicated consultant
- Employee referral program
Requirements:
- Minimum 5 years of hands-on work with Spark & Scala
- At least 7 years of professional Python development
- Minimum 5 years of working with Linux environments
- Solid background in distributed data processing engines (e.g., Spark)
- Practical knowledge of Hadoop ecosystem components (Hive, Oozie, MapReduce, etc.)
- Strong to advanced SQL proficiency
- Proven track record of designing and implementing data flows
- Expertise in Bitbucket and Git
- AWS certification or practical experience with AWS services
- Experience with unit testing frameworks and tools (JUnit 5, Mockito, Spark testing frameworks)
- Understanding of code versioning and branching strategies
- Advanced level of English (written and spoken)
Responsibilities:
- Building distributed, highly parallelized Big Data processing pipelines that handle massive amounts of structured and unstructured data in near real time
- Leveraging Spark to enrich and transform corporate data to enable search, data visualization, and advanced analytics
- Working closely with analysts and business stakeholders to develop analytics models
- Continuous delivery on Hadoop and other Big Data platforms
- Automating processes where possible so they are repeatable and reliable
- Working closely with the QA team
Tech stack: Scala, Python, Spark, SQL, AWS, Hadoop
Location: Gdynia