Senior Data Engineer - Databricks
Kraków, Lesser Poland Voivodeship, Poland
HIQO SOLUTIONS sp. z o.o.
12.12.2025
Job details

Technologies expected:

  • Databricks
  • Delta Lake
  • Spark SQL
  • Python
  • SQL
  • Apache Spark
  • Microsoft Azure
  • AWS
  • Google Cloud Platform

Technologies optional:

  • Azure Data Factory
  • Apache Airflow
  • Snowflake Data Cloud
  • Synapse
  • Red Gate
  • Unity Catalog
  • Purview
  • Microsoft Power BI
  • Tableau

About the project:

  • We are looking for a skilled and proactive Senior Data Engineer with deep expertise in Databricks to join our data platform team. You will be responsible for designing, building, and optimizing scalable data pipelines and lakehouse architectures that power analytics, reporting, and machine learning across the organization.
  • Location: Poland (office in Krakow/Wroclaw or 100% remote)
  • This role is part of a US-based project and follows the US Central Time schedule. We expect candidates to be available from 4 PM to at least 8 PM CET/CEST. Priority will be given to candidates who can meet these hours.

Responsibilities:

  • Develop and maintain robust ETL/ELT pipelines using Databricks and Apache Spark
  • Design and implement Delta Lake architectures for structured and semi-structured data
  • Collaborate with data analysts, scientists, and product teams to deliver clean, reliable datasets
  • Optimize performance of Spark jobs and manage cluster resources efficiently
  • Automate workflows using Databricks Jobs and Workflows
  • Ensure data quality, lineage, and governance using Unity Catalog and monitoring tools
  • Document data models, pipeline logic, and architectural decisions
  • Participate in code reviews and contribute to engineering best practices

Requirements expected:

  • 4+ years of experience as a Data Engineer or in a similar role
  • Strong hands-on experience with Databricks, including Delta Lake and Spark SQL
  • Proficiency in Python and SQL for data manipulation and pipeline development
  • Solid understanding of Apache Spark internals and performance tuning
  • Experience with cloud platforms (Azure, AWS, GCP)
  • Knowledge of data modeling, partitioning, and lakehouse principles
  • Ability to work with large-scale datasets and optimize storage and compute costs
  • Strong communication skills and ability to collaborate across teams
  • English proficiency: at least B2 level
  • Highly desirable: availability to work from 4 PM to at least 8 PM CET/CEST

Offered:

  • Flexible working hours, agreed within the team
  • Necessary tools and equipment
  • Communication in English: foreign customers and international teams only
  • Flat structure and an open-door style of communication
  • Full-time English teachers
  • Medical insurance for employees
  • HiQo University: internal education and training programs
  • HIQO COINS: a system that rewards employees for extracurricular activities

Benefits:

  • private medical care
  • sharing the costs of foreign language classes
  • remote work opportunities
  • flexible working time
  • integration events