Responsible for at-scale infrastructure design, build and deployment with a focus on distributed systems,
Building and maintaining architecture patterns for data processing, workflow definitions, and system to system integrations using Big Data and Cloud technologies,
Evaluating and translating technical design into workable technical solutions/code and technical specifications on par with industry standards,
Driving the creation of reusable artifacts,
Establishing scalable, efficient, automated processes for data analysis, data model development, validation, and implementation,
Working closely with analysts/data scientists to understand the impact on downstream data models,
Writing efficient and well-organized software to ship products in an iterative, continual release environment,
Contributing to and promoting good software engineering practices across the team,
Communicating clearly and effectively to technical and non-technical audiences,
Defining data retention policies,
Monitoring performance and advising on any necessary infrastructure changes.
requirements-expected:
3+ years’ experience with Azure Data Factory and Databricks,
5+ years’ experience with data engineering or backend/fullstack software development,
Strong SQL skills,
Python scripting proficiency,
Experience with data transformation tools (Databricks and Spark),
Experience in structuring and modelling data in both relational and non-relational forms,
Experience with CI/CD tooling,
Working knowledge of Git,
Experience with Azure Event Hubs, CosmosDB, Spark Streaming,
Experience with Airflow,
Experience in the aviation industry and with Copilot,
Openness to work between 7 a.m. and 3 p.m. CET,
Good verbal and written communication skills in English.
benefits:
sharing the costs of foreign language classes
sharing the costs of professional training & courses