Big Data Engineer (Spark) @ Integral Solutions
Warsaw, Masovian Voivodeship, Poland
Integral Solutions
21.10.2025
About the position

We are looking for a skilled Big Data Engineer to join an exciting project for a leading banking client, one of the top players in the Polish market. You will work on building and improving Big Data solutions that support key business processes, using modern technologies and best engineering practices.

It's a Warsaw-based opportunity. The team visits the office once every two weeks.


  • Minimum 3 years of experience in Python or Scala programming
  • Commercial experience with Big Data technologies; Spark is a must
  • Familiarity with Data Warehouse principles
  • Knowledge of good engineering practices in Big Data processing, including design standards, data modelling techniques, coding, documentation, testing, and implementation
  • Experience with various data formats: JSON, Parquet, ORC, Avro
  • Understanding of database types and their usage scenarios, e.g. Hive, Kudu, HBase, Iceberg
  • Advanced knowledge of SQL
  • Experience in integrating data from multiple sources
  • Knowledge of build tools, e.g. Maven
  • Advanced knowledge of Polish and good knowledge of English

Your responsibilities:
  • Write and maintain data pipelines and applications using Python or Scala
  • Work with Apache Spark and other Big Data tools to process large amounts of data (a minimal sketch follows this list)
  • Use data warehouse principles to keep data well-structured and reliable
  • Follow good engineering practices: clear code, proper testing, documentation, and clean design
  • Handle different data formats like JSON, Parquet, ORC, and Avro
  • Write and improve SQL queries to work with data efficiently
  • Combine data from different sources into a single system

Requirements: Spark, Python, Hadoop, HDFS, Kafka, Cloud. Additionally: sport subscription, private healthcare, modern office.
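For a sense of the day-to-day work, here is a minimal sketch of the kind of Spark batch job the responsibilities above describe, written in Scala: reading Parquet, transforming with Spark SQL, and writing ORC. The paths, job name, view, and column names (transactions, account_id, event_ts, amount) are hypothetical illustrations, not details from the posting.

    import org.apache.spark.sql.SparkSession

    object DailyTotalsJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-totals") // hypothetical job name
          .getOrCreate()

        // Read one of the formats listed above (Parquet); the path is hypothetical.
        val transactions = spark.read.parquet("hdfs:///data/raw/transactions")
        transactions.createOrReplaceTempView("transactions")

        // Aggregate with Spark SQL; column names are hypothetical.
        val daily = spark.sql(
          """|SELECT account_id, to_date(event_ts) AS day, SUM(amount) AS total
             |FROM transactions
             |GROUP BY account_id, to_date(event_ts)
             |""".stripMargin)

        // Write out in another listed format (ORC); the path is hypothetical.
        daily.write.mode("overwrite").orc("hdfs:///data/curated/daily_totals")

        spark.stop()
      }
    }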
