We are looking for a Data Engineer to design and maintain modern data solutions built on Databricks, PySpark, and Microsoft Azure. The role focuses on developing scalable data platforms and supporting business use cases with reliable data pipelines.
Responsibilities:
Design, build, and maintain scalable data pipelines
Develop data processing and transformation workflows
Support data ingestion from multiple sources into cloud environments
Ensure performance, reliability, and data quality across solutions
Collaborate with business and technical stakeholders to translate requirements into data solutions
Contribute to documentation and continuous improvement of data processes
Work in a distributed team environment, focusing on product and business goals
Requirements:
Minimum 5 years of experience in Data Engineering
Strong hands-on experience with PySpark (DataFrames, Spark SQL optimization, partitioning)
Practical experience with Databricks and Azure Data Factory
Knowledge of Azure SQL and core Azure services (Storage Accounts, Key Vault, VNet, Application Gateway, Azure Portal)
Experience working with Delta Lake, Parquet, and CSV file formats
Experience with CI/CD processes
Ability to work independently and solve problems with minimal supervision
Strong written and spoken English
Experience with Power BI
What we offer:
Work on modern data solutions in an Azure and Databricks environment
Flexible working model and stable, long-term cooperation
Exposure to international projects and stakeholders
Support for professional growth and continuous learning