Data Engineer with Azure

We are expanding our Supply Chain Analytics team and looking for a skilled Data Engineer to develop and maintain data ingestion, processing, and delivery pipelines built on Microsoft Azure services. The role focuses on supporting data-driven decision-making through efficient and reliable data infrastructure.
Your responsibilities
- Design, build, and optimize ELT/ETL pipelines to support statistical modeling and business intelligence
- Monitor and ensure data quality and integrity
- Configure and maintain data storage systems such as data lakes and SQL databases
- Manage production connections between data infrastructure components
- Write secure, scalable, and maintainable code that translates business needs into technical solutions
- Promote engineering best practices, including automation, CI/CD, and code maintainability
- Collaborate closely with BI analysts, data scientists, machine learning engineers, and core IT teams
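To illustrate the data-quality responsibilities above, a validation gate in a pipeline stage might look like the following minimal Python sketch. The field names and rules here are hypothetical examples, not part of the actual stack for this role:

```python
# Illustrative data-quality gate for a batch of records in a pipeline stage.
# The required fields and the quantity rule are hypothetical examples.

def validate_rows(rows, required_fields=("order_id", "quantity")):
    """Split a batch of dict records into (valid_rows, errors).

    Each error is a (row_index, reason) tuple, so failures can be
    logged or routed to a quarantine table instead of silently dropped.
    """
    valid, errors = [], []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            errors.append((i, f"missing fields: {missing}"))
        elif row["quantity"] < 0:
            errors.append((i, "negative quantity"))
        else:
            valid.append(row)
    return valid, errors
```

In practice a check like this would run between ingestion and load, with the error list surfaced through pipeline monitoring rather than returned to the caller.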
Our requirements
- Minimum 2–3 years of hands-on experience in data engineering roles
- Strong proficiency in Python, SQL, and Azure data services (Azure Data Factory, Blob Storage, Key Vault, Logic Apps, Azure SQL Database, Synapse)
- Experience with data workflow orchestration, particularly using Azure Data Factory
- Solid understanding of version control systems and CI/CD pipelines, preferably Azure DevOps or GitHub
- Familiarity with data governance concepts such as metadata management, data lineage, master data, and compliance
- Business-level English communication skills (B2 or higher)
Nice to have
- Experience with Snowflake platform
- Knowledge of Databricks and Azure Machine Learning
- Relevant Azure certifications in Data Engineering or Data Analytics
- Exposure to SAP Business Warehouse as a data source
- Understanding of BI tools such as Power BI
- Experience with PySpark or Scala
What we offer
- Long-term engagement
- Equipment provided by the client
- B2B contract
- Remote work