We run a variety of projects in which our sweepmasters can excel: Advanced Analytics, Data Platforms, Streaming Analytics Platforms, Machine Learning Models, Generative AI, and more. We like working with top technologies and open-source solutions for Data & AI and ML. Our portfolio includes clients from many industries, e.g., media, e-commerce, retail, fintech, banking, and telcos, such as Truecaller, Spotify, ING, Acast, Volt, Play, and Allegro. You can read some of our customer stories here.
What else do we do besides working on projects?
We run many knowledge-sharing initiatives, such as Guilds and Labs. We build a community around Data & AI through our Big Data Technology Warsaw Summit conference, the Warsaw Data Tech Talks meetup, the Radio Data podcast, and the DATA Pill newsletter.
The Data & AI projects that we run and the company's philosophy of sharing knowledge and ideas in this field make GetInData | Part of Xebia not only a great place to work but also a place that gives you a real opportunity to boost your career.
If you want to stay up to date with the latest news from us, please follow our LinkedIn profile.
About the role
A Data Engineer's role involves designing, building, and maintaining the architecture, tools, and processes an organization needs to collect, store, transform, and analyze large amounts of data. The position involves building data platforms on infrastructure that is typically already provided and establishing a clear path for the Analytics Engineers who use the system.
Responsibilities:
Development and maintenance of ETL and data platforms (Python, Scala, Spark, HDFS, Hive)
Development and maintenance of access applications for business users and user support (Airflow, JupyterHub, Trino, Superset, MLflow) in the context of Kubernetes, Docker, and ArgoCD
Automation and CI/CD (GitLab CI)
Monitoring (Prometheus)
R&D, maintenance, and monitoring of the platform's components
Implementing and executing policies aligned with the company's strategic plans concerning the technologies used, work organization, etc.
Requirements:
Proficiency in a programming language such as Python or Scala
Experience working with Spark and messaging systems
Experience with Hadoop
Hands-on experience with Kubernetes
Strong programming skills with a solid understanding of software engineering principles, best practices, and solutions
Experience with version control systems, preferably Git
Ability to actively participate in or lead discussions with clients to identify and assess concrete and ambitious avenues for improvement
We offer:
Salary: 140-185 PLN net + VAT per hour (B2B contract), depending on knowledge and experience
100% remote work
Flexible working hours
Possibility to work from the office located in the heart of Warsaw
Opportunity to learn and develop with the best Big Data experts