Data Engineer (Azure)
  • Kraków, Lesser Poland Voivodeship, Poland
Motife Sp. z o.o.
21 March 2026
Job details

Technologies expected:
  • Python
  • PySpark
  • SQL
  • Kafka
  • Event Hubs
  • Kinesis

Technologies optional:
  • Terraform
  • Bicep
  • Azure Container Apps
  • K8s
  • Debezium

About the project:
  • We support recruitment for a US-based company that provides mission-critical background screening solutions. They work with Fortune 100 clients, helping them manage risk and hire the best talent. This role offers an outstanding opportunity to work for an industry-leading company: with over 4,500 employees of 30+ nationalities, you will join a diverse team redefining digital background check and verification services across the globe.
  • We’re seeking a self-motivated Data Engineer with strong Python/PySpark skills to join the Data Engineering Team and help build the Azure Data Analytics Platform. The ideal candidate is an independent, collaborative team player who leads projects, identifies process gaps, and continuously develops expertise in Human Capital technology.
  • The role involves developing reusable, metadata-driven data pipelines, automating platform processes, building data integrations, extending ETL libraries, writing unit tests, creating Databricks monitoring solutions, proactively resolving ETL issues, collaborating on cloud resources, updating documentation, conducting code reviews, and enhancing platform architecture.
  • The position is offered on a B2B contract from March to December 2026, with the possibility of continued collaboration afterward.
  • Key takeaways:
  • Stack: Python/PySpark, SQL, Databricks Spark; knowledge of Azure cloud-native solutions

Responsibilities:
  • Build reusable, metadata-driven data pipelines.
  • Automate and optimize data platform processes.
  • Develop integrations with data sources and consumers.
  • Extend shared ETL libraries with transformation methods.
  • Write unit tests.
  • Create monitoring solutions for the Databricks platform.
  • Proactively address ETL performance and quality issues.
  • Collaborate with infrastructure teams on cloud resources.
  • Update data platform wiki and documentation.
  • Conduct code reviews to ensure quality.
  • Initiate and implement architecture improvements.

Requirements expected:
  • Proficiency in Python/PySpark and SQL.
  • Strong experience building robust data pipelines with Databricks Spark.
  • Proven track record handling large, complex datasets.
  • Expertise in developing reusable data transformation libraries (Python packages).
  • Deep knowledge of Databricks Delta optimization (partitioning, Z-ordering, compaction, etc.).
  • Hands-on experience with CI/CD pipeline development.
  • Skilled in event streaming integration using Kafka, Event Hubs, or Kinesis.
  • Solid understanding of fundamental networking concepts.
  • Familiarity with Agile/Scrum methodologies.

Offered:
  • 100% remote work model.
  • Superior co-working and personal development experience in an international setting.
