Senior Data Engineer @ AVENGA (Agencja Pracy, nr KRAZ: 8448)
Wrocław, Lower Silesian Voivodeship, Poland
21.10.2025
About the position

Project: The project is about e-purchasing, ranging from data about buyers, sellers and material prices, through KPI calculations, to the entire parts-ordering process. The project is divided into two teams. The business uses our SAP BO solutions and Power BI dashboards. The project processes truly massive amounts of data, measured in terabytes.

Team: Every team in our company works in Scrum. Each team includes several Data Engineers, Data Analysts and DPOs, and every team is international.

Tech stack you’ll meet: Azure, Databricks (PySpark/Spark SQL, Unity Catalog, Workflows), ADF, ADLS/Delta, Key Vault, Azure DevOps (Repos/Pipelines YAML), Python, SQL
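
For orientation, the sketch below shows the kind of Databricks-style PySpark/Delta work this stack implies. It is a minimal illustration only: the ADLS path, catalog/schema/table names and columns are assumptions, not project specifics, and it assumes a Spark session with Delta Lake support (as on a Databricks cluster).

    # Minimal PySpark/Delta sketch (illustrative names only, not project code).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Read raw purchase-order data landed in ADLS (path is a placeholder).
    raw = spark.read.format("json").load(
        "abfss://raw@storageaccount.dfs.core.windows.net/purchase_orders/"
    )

    # Basic cleansing and derivations with Spark SQL functions.
    orders = (
        raw
        .withColumn("order_date", F.to_date("order_date"))
        .withColumn("net_value", F.col("quantity") * F.col("unit_price"))
        .dropDuplicates(["order_id"])
    )

    # Publish to a Delta table; the three-level Unity Catalog name is an assumption.
    (
        orders.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("purchasing.silver.purchase_orders")
    )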


Must-have competences & skills:

  • Azure Databricks (PySpark, Spark SQL; Unity Catalog; Jobs/Workflows).
  • Azure data services: Azure Data Factory, Azure Key Vault, storage (ADLS), fundamentals of networking/identities.
  • Advanced SQL and solid data modeling in a lakehouse/Delta setup.
  • Python for data engineering (APIs, utilities, tests).
  • Azure DevOps (Repos, Pipelines, YAML) and Git-based workflows.
  • Experience operating production pipelines (monitoring, alerting, incident handling, cost control).
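
As a hedged illustration of the last point (monitoring, alerting and incident handling on production pipelines), a gate like the sketch below could run at the end of a pipeline step and fail the job so that alerting picks it up. The table name, column names and thresholds are assumptions made for illustration.

    # Illustrative data-quality gate (names and thresholds are assumptions).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    def check_freshness_and_nulls(table: str, date_col: str, key_col: str,
                                  max_null_rate: float = 0.01) -> None:
        """Raise (so the job fails and monitoring/alerting reacts) when basic checks break."""
        df = spark.table(table)
        total = df.count()
        if total == 0:
            raise RuntimeError(f"{table}: no rows loaded")

        null_rate = df.filter(F.col(key_col).isNull()).count() / total
        if null_rate > max_null_rate:
            raise RuntimeError(
                f"{table}: {key_col} null rate {null_rate:.2%} exceeds {max_null_rate:.2%}"
            )

        latest = df.agg(F.max(date_col)).first()[0]
        print(f"{table}: {total} rows, latest {date_col} = {latest}")

    # Example usage (table name is a placeholder):
    check_freshness_and_nulls("purchasing.silver.purchase_orders", "order_date", "order_id")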

Nice to have:

  • AI


Responsibilities:

  • Own day-to-day operations of Picto data pipelines (ingest → transform → publish), ensuring reliability, performance and cost efficiency.
  • Develop and maintain Databricks notebooks (PySpark/Spark SQL) and ADF pipelines/triggers; manage Jobs/Workflows and CI/CD.
  • Implement data quality checks, monitoring & alerting (SLA/SLO), troubleshoot incidents, and perform root-cause analysis.
  • Secure pipelines (Key Vault, identities, secrets) and follow platform standards (Unity Catalog, environments, branching).
  • Collaborate with BI Analysts and Architects to align data models and outputs with business needs.
  • Document datasets, flows and runbooks; contribute to continuous improvement of the Ingestion Framework.

Requirements: Azure, Databricks, ADF, PySpark, Spark SQL, data pipelines, CI/CD, SLA, Key Vault, Unity Catalog, BI, data models, Azure Databricks, Azure Data Factory, storage, ADLS, networking, SQL, data modeling, Python, data engineering, Azure DevOps, YAML, Git, AI.

Additionally: international projects, cafeteria system, Multisport card, integration events, insurance, friendly atmosphere, free coffee, canteen, bike parking, free beverages, no dress code, free parking, modern office.
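
To make the "Python for data engineering (APIs, utilities, tests)" and CI/CD expectations above more concrete, the sketch below shows the kind of small utility plus pytest test that an Azure DevOps Pipelines run might execute. The function, module layout and fiscal-quarter rule are hypothetical, not taken from the project.

    # Hypothetical utility + test; pytest would discover and run the test in CI.
    from datetime import date

    def to_fiscal_quarter(d: date) -> str:
        """Map a calendar date to a quarter label (calendar-aligned here for simplicity)."""
        return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

    def test_to_fiscal_quarter():
        assert to_fiscal_quarter(date(2025, 1, 15)) == "2025-Q1"
        assert to_fiscal_quarter(date(2025, 10, 21)) == "2025-Q4"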
