Senior Data Engineer
Kraków, Lesser Poland Voivodeship, Poland
ITDS Polska Sp. z o.o.
5. 9. 2025
About the position

Expected technologies:


  • PySpark
  • Scala
  • Hadoop
  • Spark
  • Hive
  • SQL
  • Unix
  • Linux
  • Git
  • Jenkins
  • GitLab
  • Parquet
  • ORC
  • Avro

Optional technologies:


  • Kafka
  • Spark Streaming
  • Apache Flink

About the project:


  • As a Senior Data Engineer, you will be working for our client, a global digital-first bank focused on delivering innovative financial solutions at scale. You will join a dynamic engineering team responsible for building and enhancing data solutions that support critical business applications used by millions of customers. Your role will involve developing robust and fault-tolerant data pipelines, automating processes, and supporting cloud and on-premise deployments. You will collaborate with engineers, data analysts, and business stakeholders to ensure data solutions are efficient, scalable, and aligned with the bank’s digital and data transformation initiatives.
  • Join us and engineer robust systems for millions of users!
  • Kraków-based opportunity with a hybrid work model (6 days/month in the office)

Responsibilities:


  • Designing, developing, and maintaining end-to-end data pipelines across cloud and on-premise systems
  • Implementing robust ETL/ELT processes using PySpark, Hadoop, Hive, and Spark SQL (see the illustrative sketch after this list)
  • Collaborating with engineers and analysts to translate requirements into scalable data solutions
  • Automating workflows and optimizing data engineering processes for efficiency and reliability
  • Ensuring data quality, accuracy, and consistency across pipelines and applications
  • Migrating on-premise data solutions to cloud platforms such as GCP, AWS, or Azure
  • Participating in code reviews, promoting development standards, and sharing knowledge with peers
  • Supporting production environments, troubleshooting issues, and monitoring performance and scale
  • Contributing to system architecture, design discussions, and Agile development processes
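For illustration only, the following is a minimal sketch of the kind of PySpark ETL step described above: reading a Hive table, applying a simple transformation, and writing partitioned Parquet. All table, column, and path names are hypothetical and not taken from the posting.

  # Minimal illustrative PySpark ETL sketch (hypothetical names throughout)
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = (
      SparkSession.builder
      .appName("transactions-etl")   # hypothetical job name
      .enableHiveSupport()           # allows reading/writing Hive tables
      .getOrCreate()
  )

  # Extract: read a raw Hive table (hypothetical database/table)
  raw = spark.table("raw_db.transactions")

  # Transform: basic cleansing and a daily aggregation per account
  daily = (
      raw.filter(F.col("amount").isNotNull())
         .withColumn("txn_date", F.to_date("txn_ts"))
         .groupBy("txn_date", "account_id")
         .agg(F.sum("amount").alias("daily_amount"),
              F.count("*").alias("txn_count"))
  )

  # Load: write partitioned Parquet back to the warehouse (hypothetical path)
  (daily.write
        .mode("overwrite")
        .partitionBy("txn_date")
        .parquet("/data/curated/daily_transactions"))

  spark.stop()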

Expected requirements:


  • Strong experience in PySpark, Scala, or similar data engineering languages
  • Hands-on experience building production data pipelines using Hadoop, Spark, and Hive
  • Knowledge of cloud platforms and migrating on-premise solutions to the cloud
  • Experience with scheduling tools such as Airflow and workflow orchestration
  • Strong SQL skills and experience with data modelling and warehousing principles
  • Familiarity with Unix/Linux platforms and big data distributed systems
  • Experience with version control tools such as Git and CI/CD pipelines (Jenkins, GitHub Actions)
  • Understanding of ETL/ELT frameworks and data formats (Parquet, ORC, Avro)
  • Proven ability to troubleshoot, debug, and optimize data processing workflows
  • Experience working in Agile environments and collaborating across global teams

Offered:


  • Stable, long-term cooperation on very good terms
  • Enhance your skills and develop your expertise in the financial industry
  • Work on the most strategic projects available in the market
  • Define your career roadmap and grow quickly by delivering strategic projects for different ITDS clients over several years
  • Participate in social events and training, and work in an international environment
  • Access to attractive Medical Package
  • Access to Multisport Program
  • Access to Pluralsight
  • Flexible hours

Benefits:


  • sharing the costs of sports activities
  • private medical care
  • flexible working time
  • fruits
  • integration events
  • corporate gym
  • mobile phone available for private use
  • computer available for private use
  • saving & investment scheme
  • no dress code
  • coffee / tea
  • drinks
  • Christmas gifts
  • birthday celebration
  • access to 100+ projects
  • access to Pluralsight
