Job Information
Job Description
We are looking for a Senior Data Engineer to join a Sopra Steria Polska team working for a client in the banking sector. You will work in a stable environment, as part of an international team, for one of the most popular Dutch banks.
Note that we can only offer cooperation to people who are located in Poland and are willing to commute to our office in Katowice or Gdańsk, Poland.
Tech stack on the project:
- .NET / Databricks
- Python, API
- Apache Spark
- Azure
- SQL Server
Additional Information
What we offer:
- BENEFITS (UoP): Luxmed, Medicover Sport, Worksmile, educational platforms, language learning platform, referral bonus, copyrights, life insurance, workation
- DEVELOPMENT OPPORTUNITIES (UoP): certifications (paid by the company), conferences, Tech Lunches, possibility to join our Communities (Project Management, Architecture, Security, Process Management, Leadership, AI and Cloud)
All information about the salary range and its additional components will be provided during the first stage of the recruitment process.
Requirements:
- 5+ years of experience in data engineering roles.
- Deep hands-on experience with Databricks, including Unity Catalog, Delta Lake, and Lakehouse architecture.
- Strong proficiency in Apache Spark, including PySpark, Spark SQL, and Spark performance tuning (see the illustrative sketch after this list).
- Experience creating Python libraries for reusable components and API integration.
- Experience with Azure Cloud services (Data Lake, Key Vault, Active Directory, App Services).
- Familiarity with SQL Server and relational database concepts.
- Understanding of data security principles and compliance requirements.
- Experience building CI/CD pipelines in Azure DevOps for data workflows.
- Infrastructure as Code (Bicep or Terraform).
- Good communication skills and a team-player attitude.
- Experience with Agile ways of working.
- English B2/C1.
- Openness to occasional business trips abroad and visits to our office in Katowice or Gdańsk.
- EU Citizenship.
Nice to have requirements:
- Experience with finance or banking systems.
- Knowledge of streaming architectures (though the primary focus is batch).
- Familiarity with Spark Structured Streaming and MLlib.
- Advanced cost optimization strategies for Spark workloads in cloud environments.
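For illustration only: a minimal PySpark sketch of the kind of performance tuning named in the requirements (a broadcast join plus a partitioned Delta Lake write). The table paths, column names, and partition key are hypothetical and not taken from the project.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession already exists; getOrCreate() simply reuses it.
spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Hypothetical Delta tables: a large fact table and a small dimension table.
transactions = spark.read.format("delta").load("/mnt/raw/transactions")
branches = spark.read.format("delta").load("/mnt/raw/branches")

# Broadcasting the small dimension table avoids shuffling the large fact table.
enriched = transactions.join(F.broadcast(branches), on="branch_id", how="left")

# Repartition on the partition key before writing so output files line up with
# the Delta partition layout instead of producing many small files.
(
    enriched.repartition("booking_date")
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("booking_date")
    .save("/mnt/curated/transactions_enriched")
)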
Responsibilities:
- Build and maintain data pipelines and platforms in Databricks and Azure using Apache Spark.
- Ensure data security, governance, and compliance using Unity Catalog and best practices.
- Develop reusable Python libraries for integration across Databricks and Azure Functions.
- Design and implement batch data pipelines in Databricks using Apache Spark (PySpark) and Delta Lake.
- Apply Spark performance tuning techniques (e.g., partitioning, caching, broadcast joins, shuffle optimization) for large-scale data processing.
- Set up and manage Unity Catalog for data governance and security.
- Build Python libraries for reusable business logic and utilities, used in Databricks Spark jobs and Azure Functions to expose APIs (see the sketch after this list).
- Develop CI/CD pipelines in Azure DevOps for data workflows and infrastructure deployment.
- Collaborate with stakeholders to understand data requirements and deliver solutions that create customer value.
Key skills: Databricks, Unity Catalog, Delta Lake, Lakehouse architecture, Apache Spark, PySpark, Spark SQL, Spark performance tuning, Azure Cloud services, SQL Server, CI/CD pipelines, Azure DevOps, Bicep, Terraform, communication skills, Agile, streaming architecture, Spark Streaming, MLlib.
Additionally: modern technologies, sport subscription, training budget, private healthcare, international projects, free coffee, shower, free snacks, no dress code, modern office, in-house trainings.
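Again purely illustrative: a minimal sketch of a reusable Python helper exposed over HTTP, assuming the Azure Functions Python v2 programming model. The function name, route, and masking helper are hypothetical examples, not part of the role description.

import json

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)


def mask_account_number(account_number: str) -> str:
    # Hypothetical helper that would normally live in a shared Python library
    # imported both by Databricks Spark jobs and by this function app.
    return "*" * max(len(account_number) - 4, 0) + account_number[-4:]


@app.route(route="mask-account")
def mask_account(req: func.HttpRequest) -> func.HttpResponse:
    # Expects a JSON body such as {"account_number": "0000111122223333"}.
    payload = req.get_json()
    masked = mask_account_number(payload.get("account_number", ""))
    return func.HttpResponse(
        json.dumps({"masked": masked}),
        mimetype="application/json",
        status_code=200,
    )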