BigData Engineer (Python + Spark) @ Integral Solutions
Warsaw, Masovian Voivodeship, Poland
Integral Solutions
7 December 2025
About the position

About the company/project

We are looking for a skilled BigData Engineer to join an exciting project for a leading banking client, one of the top players in the Polish market. You will work on building and improving Big Data solutions that support key business processes, using modern technologies and best engineering practices.

It's a Warsaw-based opportunity. The team visits the office once every two weeks.


Requirements

  • Minimum 3 years of experience in Python or Scala programming
  • Commercial experience with Big Data technologies; Spark is a must
  • Familiarity with Data Warehouse principles
  • Knowledge of good engineering practices for Big Data processing, including design standards, data modelling techniques, coding, documentation, testing, and implementation
  • Experience with various data formats: JSON, PARQUET, ORC, AVRO
  • Understanding of database types and their usage scenarios, e.g. Hive, Kudu, HBase, Iceberg
  • Advanced knowledge of SQL
  • Experience in integrating data from multiple data sources
  • Knowledge of tools for building projects/applications, e.g. Maven
  • Advanced knowledge of Polish and good knowledge of English

Nice-to-have

  • Practical knowledge of Agile processes and tools: Jira, Confluence, Kanban, Scrum, etc.
  • Knowledge of the Kubeflow platform
  • Knowledge of streaming technologies and tools such as Kafka, Apache Nifi
  • Practical knowledge of CI/CD automation


Responsibilities

  • Write and maintain data pipelines and applications using Python
  • Work with Apache Spark and other Big Data tools to process large amounts of data
  • Use data warehouse principles to keep data well-structured and reliable
  • Follow good engineering practices: clear code, proper testing, documentation, and clean design
  • Handle different data formats like JSON, Parquet, ORC, and Avro
  • Write and improve SQL queries to work with data efficiently
  • Combine data from different sources into a single system

Benefits

  • International projects
  • Private healthcare
  • Sport subscription
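As a rough illustration of one duty above, combining data from different sources with SQL, here is a minimal Python sketch. It uses the standard-library sqlite3 module as a stand-in for a real data warehouse; all table and column names are invented for the example, not taken from the project.

```python
import sqlite3

# In-memory database as a stand-in for a real data warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two hypothetical "sources": customer records and transaction records.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE transactions (customer_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Alice"), (2, "Bob")])
cur.executemany("INSERT INTO transactions VALUES (?, ?)",
                [(1, 100.0), (1, 50.0), (2, 75.0)])

# Integrate the sources into one result: total spend per customer.
cur.execute("""
    SELECT c.name, SUM(t.amount) AS total
    FROM customers c
    JOIN transactions t ON t.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""")
totals = dict(cur.fetchall())
print(totals)  # {'Alice': 150.0, 'Bob': 75.0}
```

In production this kind of join would typically run in Spark SQL or the warehouse itself; the pattern (join, aggregate, group) is the same.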
