Project: e-purchasing, ranging from data about buyers, sellers, and material prices, through KPI calculations, to the entire parts-ordering process. The project is divided into two teams. The business uses our SAP BO solutions and PBI dashboards. The project processes truly massive amounts of data, measured in terabytes.
Team: every team in our company works in Scrum. Each team is international and includes several Data Engineers, Data Analysts, and DPOs.
Tech stack you’ll meet: Azure, Databricks (PySpark/Spark SQL, Unity Catalog, Workflows), ADF, ADLS/Delta, Key Vault, Azure DevOps (Repos/Pipelines YAML), Python, SQL
Responsibilities:
- Own day-to-day operations of Picto data pipelines (ingest → transform → publish), ensuring reliability, performance, and cost efficiency.
- Develop and maintain Databricks notebooks (PySpark/Spark SQL) and ADF pipelines and triggers; manage Jobs/Workflows and CI/CD.
- Implement data quality checks, monitoring, and alerting (SLA/SLO); troubleshoot incidents and perform root-cause analysis.
- Secure pipelines (Key Vault, identities, secrets) and follow platform standards (Unity Catalog, environments, branching).
- Collaborate with BI Analysts and Architects to align data models and outputs with business needs.
- Document datasets, flows, and runbooks; contribute to continuous improvement of the Ingestion Framework.

Requirements: Azure (Databricks, Data Factory, Storage/ADLS, Key Vault, networking), PySpark, Spark SQL, SQL, Python, data pipelines and data engineering, data modeling, CI/CD, SLAs, Unity Catalog, BI and data models, Azure DevOps (Git, YAML pipelines), AI.

Additionally: international projects, cafeteria system, Multisport card, integration events, insurance, friendly atmosphere, free coffee, canteen, bike parking, free beverages, no dress code, free parking, modern office.
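The responsibilities above mention implementing data quality checks. As a flavor of that kind of work, here is a minimal, self-contained sketch in plain Python; the column names and thresholds are invented for illustration, and in the actual pipelines such checks would run on Spark DataFrames in Databricks rather than on Python dicts.

```python
# Minimal sketch of batch-level data quality checks: a row-count floor
# and a per-column null-rate ceiling. All names and thresholds are
# illustrative, not taken from the project.

def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if r.get(column) is None) / len(rows)

def run_checks(rows, min_rows=1, max_null_rate=0.05,
               required=("buyer_id", "price")):
    """Return a list of failed checks; an empty list means the batch passes."""
    failures = []
    if len(rows) < min_rows:
        failures.append(f"row count {len(rows)} below minimum {min_rows}")
    for col in required:
        rate = null_rate(rows, col)
        if rate > max_null_rate:
            failures.append(
                f"{col}: null rate {rate:.2%} exceeds {max_null_rate:.2%}"
            )
    return failures

batch = [
    {"buyer_id": 1, "price": 9.5},
    {"buyer_id": 2, "price": None},
    {"buyer_id": 3, "price": 10.0},
]
print(run_checks(batch))  # one failure: price null rate above the threshold
```

In a real pipeline the failure list would feed the monitoring and alerting step (SLA/SLO) rather than just being printed.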