Wrocław, Lower Silesian Voivodeship, Poland
Grape Up Sp. z o.o.
12.02.2026
Job details
technologies-expected :
Python
Databricks
Spark
SQL
PySpark
Airflow
Dagster
Prefect
AWS
Azure
technologies-optional :
Kafka
Azure Event Hubs
Terraform
CloudFormation
Unity Catalog
Delta Lake
Lakehouse
about-project :
At Grape Up, we transform businesses by unlocking the potential of AI and data through innovative software solutions.
We partner with industry leaders in the automotive and finance sectors to build sophisticated Data & Analytics platforms that transform how organizations manage and leverage their data assets. Our solutions provide comprehensive capabilities spanning data storage, management, advanced analytics, machine learning, and AI, enabling enterprises to accelerate innovation and make data-driven decisions.
responsibilities :
Implement a scalable architecture capable of handling high volumes of simulation data
Build flexible, extensible data preprocessing pipelines that can be integrated into the customer's existing platform
Define KPIs to measure the improved reusability and automation of the new pipelines and test their performance in an end-to-end setting with model training
Develop and implement processes and best practices for data management and governance
Optimize and enhance system setup and improve data structures following industry best practices
Collaborate effectively with data engineering team members while partnering closely with analytics and data science teams to meet user needs
Collaborate with business stakeholders and technical teams to understand data requirements and translate business needs into technical solutions
Lead technical discussions and solution design sessions with clients or internal stakeholders, presenting complex data engineering concepts in accessible ways
requirements-expected :
Master’s degree in Computer Science, Data Engineering, AI, or a related field
4+ years of professional experience in Data Engineering and Big Data, building production-grade data platforms and pipelines
Proven experience with Databricks platform (Azure Databricks or Databricks on AWS)
Strong experience with Apache Spark (PySpark), including performance optimization and large-scale data processing
Proficiency in Python and SQL for data transformations and analytics workloads
Experience with data pipeline orchestration (Airflow, Dagster, Prefect or similar)
Experience with data governance and quality frameworks
Strong problem-solving skills and ability to work independently
Fluency in English, both written and spoken
offered :
Cutting-Edge Technology: Drive innovation with the latest tech solutions
Work Your Way: Enjoy complete flexibility in choosing your ideal work environment – office, hybrid, or remote
Non-corporate work environment
Equipment of your choice
Language lessons (English, German, and Polish)
LuxMed private medical care
Weekly Lunch & Learn sessions, where we meet in the office, have lunch together, and share our knowledge
Employee referral program
Rewards for the Success of the Month and Year awarded by our employees
G-Man work anniversary awards
Integration activities
benefits :
private medical care
sharing the costs of foreign language classes
sharing the costs of professional training & courses