As a Senior Data Engineer, you will be working for our client, a global digital-first bank focused on delivering innovative financial solutions at scale. You will join a dynamic engineering team responsible for building and enhancing data solutions that support critical business applications used by millions of customers. Your role will involve developing robust and fault-tolerant data pipelines, automating processes, and supporting cloud and on-premise deployments. You will collaborate with engineers, data analysts, and business stakeholders to ensure data solutions are efficient, scalable, and aligned with the bank’s digital and data transformation initiatives.
Join us and engineer robust systems for millions of users!
Kraków-based opportunity with a hybrid work model (6 days/month in the office)
Responsibilities:
Designing, developing, and maintaining end-to-end data pipelines across cloud and on-premise systems
Implementing robust ETL/ELT processes using PySpark, Hadoop, Hive, and Spark SQL (a short PySpark sketch follows this list)
Collaborating with engineers and analysts to translate requirements into scalable data solutions
Automating workflows and optimizing data engineering processes for efficiency and reliability
Ensuring data quality, accuracy, and consistency across pipelines and applications
Migrating on-premise data solutions to cloud platforms such as GCP, AWS, or Azure
Participating in code reviews, promoting development standards, and sharing knowledge with peers
Supporting production environments, troubleshooting issues, and monitoring performance and scale
Contributing to system architecture, design discussions, and Agile development processes
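To give a flavour of the ETL/ELT work described above, here is a minimal PySpark sketch of a batch pipeline: read raw CSV, clean and deduplicate it, and write date-partitioned Parquet for downstream Hive/Spark SQL consumers. All paths, column names, and the job name are hypothetical placeholders, not details of the client's systems.

```python
# Minimal PySpark batch ETL sketch. Paths, columns, and the app name
# are hypothetical placeholders, not real client systems.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("transactions-daily-etl")  # hypothetical job name
    .getOrCreate()
)

# Extract: raw CSV landed by an upstream process (hypothetical path)
raw = spark.read.option("header", True).csv("/data/raw/transactions/")

# Transform: cast types, derive a partition column, drop bad rows, dedupe
clean = (
    raw
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("event_date", F.to_date("event_ts"))
    .dropna(subset=["transaction_id", "amount"])
    .dropDuplicates(["transaction_id"])
)

# Load: columnar, date-partitioned Parquet for Hive/Spark SQL consumers
(
    clean.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/curated/transactions/")
)

spark.stop()
```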
Requirements:
Strong experience with PySpark, Scala, or similar data engineering technologies
Hands-on experience building production data pipelines using Hadoop, Spark, and Hive
Knowledge of cloud platforms and migrating on-premise solutions to the cloud
Experience with scheduling tools such as Airflow and workflow orchestration (a minimal DAG sketch follows this list)
Strong SQL skills and experience with data modelling and warehousing principles
Familiarity with Unix/Linux platforms and big data distributed systems
Experience with version control tools such as Git and CI/CD pipelines (Jenkins, GitHub Actions)
Understanding of ETL/ELT frameworks and data formats (Parquet, ORC, Avro)
Proven ability to troubleshoot, debug, and optimize data processing workflows
Experience working in Agile environments and collaborating across global teams
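Since the requirements call out Airflow for scheduling, below is a minimal sketch of how a pipeline like the one above would typically be orchestrated, assuming Airflow 2.x. The DAG id, schedule, and script path are hypothetical.

```python
# Minimal Airflow 2.x DAG sketch that runs the PySpark job daily.
# DAG id, schedule, and script path are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="transactions_daily_etl",  # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    # Submit the PySpark script via spark-submit (hypothetical path)
    run_etl = BashOperator(
        task_id="spark_submit_etl",
        bash_command="spark-submit /opt/jobs/transactions_daily_etl.py",
    )
```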
Offered:
Stable, long-term cooperation on very good terms
Enhance your skills and develop your expertise in the financial industry
Work on the most strategic projects available on the market
Define your career roadmap and develop quickly by delivering strategic projects for different ITDS clients over several years
Participate in social events and training, and work in an international environment