As an ML Engineer in Forecasting and Commodities, you will be involved in projects that support critical decision-making processes by applying your Python, PySpark, Kubernetes, and Cloud (Azure) skills. You will be working in a technically mature ecosystem, implementing new features and covering new use cases. Part of your responsibilities will be the design and implementation of a data science innovation framework, as well as contributing to the organization's overall engineering best practices.
StoreOps
As an ML Engineer in StoreOps, you will dive into projects that streamline retail operations through the use of analytics and ML, by applying your Python, Spark, Kubernetes, and Cloud (Azure) skills. You will be contributing to a mix of mature and new projects by bringing machine learning pipelines into production, building and maintaining robust Azure infrastructure, and fostering the technical culture of the organization.
responsibilities:
Forecasting & Commodities
- Developing libraries, tools, and frameworks that standardise and accelerate development and deployment of machine learning models.
- Working in an Azure cloud environment, developing model training code in AzureML. Building and maintaining cloud infrastructure with IaC (infrastructure as code).
- Working with distributed data processing tools such as Spark to parallelise computation for machine learning.
- Diagnosing and resolving technical issues, ensuring availability of high-quality solutions that can be adapted and reused.
- Collaborating closely with different engineering and data science teams, providing advice and technical guidance to streamline daily work.
- Championing best practices in code quality, security, and scalability by leading by example.
- Making your own, informed decisions that move the business forward.
StoreOps
- Developing machine learning models and feature engineering pipelines in cooperation with data scientists.
- Working in an Azure cloud environment, developing model training code in AzureML.
- Building and maintaining cloud infrastructure with IaC (infrastructure as code).
- Working with distributed data processing tools such as Spark to parallelise computation for machine learning.
- Diagnosing and resolving technical issues, ensuring availability of high-quality solutions that can be adapted and reused.
- Collaborating closely with different engineering and data science teams, providing advice and technical guidance to streamline daily work.
- Championing best practices in code quality, security, and scalability by leading by example.
- Making your own, informed decisions that move the business forward.
requirements-expected:
Readiness to work from the Kraków office in a hybrid model
Hands-on experience with deployment of Python projects
Strong experience writing high-quality Python code
Experience with developing CI/CD components and a good understanding of the software development lifecycle
Experience with developing in the cloud
Experience with Azure and AzureML is an advantage
Basic knowledge of orchestration tools such as Airflow
Basic knowledge of Spark or other distributed data processing tools
Ability to dive into the Kubernetes ecosystem as a user
Ability to work in a team and take part in the design process
Good command of English (B2 / C1)
benefits:
sharing the costs of sports activities
private medical care
sharing the costs of foreign language classes
sharing the costs of professional training & courses