We are #VLteam – tech enthusiasts constantly striving for growth. The team is our foundation, that’s why we care the most about the friendly atmosphere, a lot of self-development opportunities and good working conditions. Trust and autonomy are two essential qualities that drive our performance. We simply believe in the idea of “measuring outcomes, not hours”. Join us & see for yourself!
About the role
Join our team to develop heavy data pipelines in cooperation with data scientists and other engineers. You will work with distributed data processing tools such as Spark to parallelise computation for machine learning and data pipelines; diagnose and resolve technical issues, ensuring the availability of high-quality solutions that can be adapted and reused; collaborate closely with different engineering and data science teams, providing advice and technical guidance to streamline daily work; champion best practices in code quality, security, and scalability by leading by example; and make your own informed decisions that move the business forward.
Project
STORE OPS
Project Scope
The project aims at constructing, scaling and maintaining data pipelines for a simulation platform. You will work on a solution that provides connectivity between AWS S3 and Cloudian S3. A previously completed Proof of Concept used Airflow to spin up a Spark job for data extraction and then exposed the collected data via Airflow's built-in XComs feature. Further work involves productionizing the PoC solution and testing it at scale, or proposing an alternative solution.
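Cloudian exposes an S3-compatible API, so the same client code can usually target both stores by swapping the endpoint. A minimal sketch of that idea, assuming a boto3-style client and a hypothetical on-prem endpoint URL:

```python
def s3_client_kwargs(target: str) -> dict:
    """Return connection kwargs for an S3-compatible object store.

    For AWS S3 the default endpoint suffices; for Cloudian the
    on-prem endpoint must be supplied explicitly. The region and
    endpoint URL below are hypothetical placeholders.
    """
    if target == "aws":
        return {"service_name": "s3", "region_name": "eu-west-1"}
    if target == "cloudian":
        return {
            "service_name": "s3",
            # Hypothetical internal endpoint for the Cloudian cluster.
            "endpoint_url": "https://cloudian.example.internal",
        }
    raise ValueError(f"unknown target: {target}")
```

The kwargs could then feed `boto3.client(**s3_client_kwargs("cloudian"))`, keeping the rest of the pipeline code identical for either store.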
As a Data Engineer in Store Ops, you will dive into projects that streamline retail operations through analytics and ML, applying your Python, Spark, Airflow, and Kubernetes skills.
Responsibilities
Tech Stack
Python, PySpark, Airflow, Docker, Kubernetes, Dask, xgboost, pandas, scikit-learn, numpy, GitHub Actions, Azure DevOps, Terraform, Git @ GitHub
Project Challenges
Team
5 engineers
A few perks of being with us
And a lot more!
What we expect in general
Seems like lots of expectations, huh? Don’t worry! You don’t have to meet all the requirements.
What matters most is your passion and willingness to develop. Apply and find out!
Requirements
Python, PySpark, pandas, NumPy, scikit-learn, Apache Airflow, ETL, ELT, Docker, Kubernetes, xgboost, Azure DevOps, GitHub Actions
Additionally
Building tech community, Flexible hybrid work model, Home office reimbursement, Language lessons, MyBenefit points, Training Package, Virtusity / in-house training