Join the Data Engineering Team to design and optimize reliable data pipelines that deliver trusted datasets for analytics and our applications.
Responsibilities:
- Own the day-to-day running of data pipelines, ensuring reliability, scalability, and cost efficiency.
- Build and maintain pipelines using Databricks (PySpark/Spark SQL) and Azure Data Factory; manage workflows and deployments.
- Implement data-quality validation checks, monitoring, and alerting; troubleshoot issues and drive root-cause analysis.
- Protect data flows with secure access and ensure compliance with platform standards.
- Work closely with analysts, engineers, and architects to align datasets and data models with business needs.
- Document datasets and processes, and continuously improve frameworks and practices.

Requirements: SQL, Python, Spark, PySpark, Azure Databricks, Azure Data Factory, Azure DevOps, YAML, ADLS, Spark SQL, Unity Catalog, Key Vault, data engineering, testing, CD, cloud, AI.

Tools: GitHub, Azure DevOps, Git, Agile.

Additionally: Private healthcare.