Our client is a global leader in fueling and convenience retail solutions, offering advanced fuel-dispensing equipment, clean energy technologies, payment systems, automatic tank gauging, and wet stock management. Headquartered in Austin, Texas, the organization operates technology development and manufacturing facilities across the globe, including locations in Brazil, China, India, Italy, Poland, the Netherlands, the United Kingdom, and the United States.
We seek a Data Engineer to design and implement scalable, cloud-based data processing pipelines, enabling efficient data storage and accessibility for business needs. The ideal candidate will collaborate with stakeholders, mentor team members, and stay at the forefront of data engineering technologies to drive innovative solutions in our SaaS environment.
Stack: Python, SQL, Apache Spark, Azure Data Lake, CosmosDB/MongoDB, Apache Kafka, Azure Databricks (nice to have)
Contract of Employment (UoP): up to 21 000 PLN gross/month
B2B: 150-170 PLN net/h
Working model: Hybrid - 2 days per week from ul. Zielińskiego, Krakow
Responsibilities:
Collaborate with business stakeholders and product owners to gain a deep understanding of the business domain and associated data.
Prepare data for modelling by performing tasks such as data ingestion and cleansing.
Design and implement cloud-based data processing pipelines.
Ensure data is stored and made accessible in a scalable and cost-efficient manner.
Stay informed about the latest data engineering tools and technologies, recommending the most suitable tools for specific tasks.
Effectively communicate and present solutions to both technical and non-technical stakeholders.
Actively participate in demos and reviews to share insights and progress.
Train, coach, and mentor team members, fostering a passion for and proficiency in data engineering.
Seek opportunities to integrate innovative ideas and methodologies into the engineering of SaaS solutions.
Requirements:
A strong passion for addressing real-world challenges with an innovative and proactive mindset.
Demonstrated expertise in designing and developing scalable, efficient data pipelines to address business needs.
Advanced proficiency in Python and SQL programming.
In-depth knowledge of RDBMS and data modelling, with hands-on experience in Oracle or MSSQL/Azure SQL.
Practical experience in distributed Big Data processing using technologies such as Apache Spark, Azure Data Lake, Azure Databricks, CosmosDB, or MongoDB.
Proficiency with ETL and orchestration tools, including Airflow, Luigi, Beam, Azure Synapse, or Azure Data Factory.
Solid experience with cloud computing, preferably in MS Azure, or other leading cloud platforms.
Strong verbal and written communication abilities.
Excellent organizational, time management, and prioritization skills.
We offer:
In this role, you will have the opportunity to upskill and grow into a leadership position in the near future.
For UoP employees:
100% paid medical care
Life insurance
Multisport subsidy
Social benefits fund (holiday and Christmas allowances)