Data Engineer ID47465 @ AgileEngine
Kraków, Lesser Poland Voivodeship, Poland
AgileEngine
15 January 2026

Job details

Hi there! AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.

Important: after confirming your application on this platform, you’ll receive an email with the next step: completing your application on our internal site, LaunchPod. So keep an eye on your inbox and don’t miss this step — without it, the process can’t move forward.

Why join us

If you’re looking for a place to grow, make an impact, and work with people who care, we’d love to meet you! :)

About the role

As a Middle Data Engineer, you will play a pivotal role in evolving our patented CDI™ Platform by transforming massive streams of live data into predictive insights that safeguard global supply chains. This role offers a unique opportunity to directly influence product vision by building models and streaming architectures that address real-world disruptions. You will work in an innovative environment where your expertise in Spark and Python drives meaningful growth and delivers critical intelligence to industry leaders.
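Purely as an illustration of the kind of streaming analysis described above (the event schema, lane names, and delay threshold below are hypothetical examples, not taken from the posting), turning a stream of shipment events into a disruption signal might be sketched in plain Python as:

```python
from collections import defaultdict

# Hypothetical shipment-tracking events: (epoch_seconds, lane, delay_hours).
EVENTS = [
    (0, "SHA-LAX", 2.0),
    (30, "SHA-LAX", 6.0),
    (65, "RTM-NYC", 1.0),
    (70, "SHA-LAX", 7.5),
    (130, "RTM-NYC", 0.5),
]

def tumbling_window_avg_delay(events, window_s=60):
    """Average delay per shipping lane in fixed (tumbling) time windows."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for ts, lane, delay in events:
        key = (ts // window_s, lane)  # window index + lane
        sums[key] += delay
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

def flag_disruptions(window_avgs, threshold=5.0):
    """Flag (window, lane) pairs whose average delay exceeds the threshold."""
    return sorted(key for key, avg in window_avgs.items() if avg > threshold)

averages = tumbling_window_avg_delay(EVENTS)
print(flag_disruptions(averages))  # prints [(1, 'SHA-LAX')]
```

In production this windowed aggregation would run continuously over live data with a framework like Spark Structured Streaming rather than over an in-memory list, but the core idea of per-window, per-key aggregates feeding a predictive alert is the same.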

Perks and benefits

  • Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps
  • Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities
  • A selection of exciting projects: Join projects with modern solutions development and top-tier clients that include Fortune 500 enterprises and leading product brands
  • Flextime: Tailor your schedule for an optimal work-life balance, with the option of working from home or from the office, whichever makes you happiest and most productive.

Meet Our Recruitment Process

Asynchronous stage — An automated, self-paced track that helps us move faster and give you quicker feedback:

  • Short online form to confirm basic requirements
  • 30–60 minute skills assessment
  • 5-minute introduction video

Synchronous stage — Live interviews

  • Technical interview with our engineering team (scheduled at your convenience)
  • Final interview with your future teammates

If it’s a match, you’ll get an offer!


    Must haves

    • 2+ years of experience in cloud-based data parsing and analysis, data manipulation and transformation, and visualization;
    • Programming and scripting experience with Scala or Python;
    • Experience with Apache Spark or similar frameworks;
    • Introductory-level experience with SQL;
    • Ability to explain technical and statistical findings to non-technical users and decision makers;
    • Experience in technical consulting and conceptual solution design;
    • Understanding of Hadoop and Apache-based tools to exploit massive data sets;
    • Bachelor’s degree;
    • Upper-intermediate English level.

    Nice to haves

    • Experience with Java;
    • Experience with Kafka or other streaming architecture frameworks;
    • Domain knowledge in Supply Chain and/or transportation management and visibility technologies.

    Responsibilities

    • Become an expert on platform solutions and how they solve customer challenges within Supply Chain and related arenas;
    • Identify, retrieve, manipulate, relate, and exploit multiple structured and unstructured data sets from thousands of various sources, including building or generating new data sets as appropriate;
    • Create methods, models, and algorithms to understand the meaning of streaming live data and translate it into insightful predictive output for customer applications and data products;
    • Educate internal teams on how data science and resulting predictions can be productized for key industry verticals;
    • Keep up to date on competitive solutions, products, and services.

    Requirements: Scala, Python, Apache Spark, SQL, Hadoop, Degree, Java, Kafka
    Additionally: Training budget, International projects
