Big Data Engineer @ Link Group
Kraków, Lesser Poland Voivodeship, Poland
Link Group
21 May 2025
About the position

Big Data Engineer

Remote or Hybrid (EMEA preferred) | Full-time | AdTech Platform | Python/Java/Scala

We’re looking for an experienced Big Data Engineer to join our high-impact team building the backbone of a global advertising platform that delivers personalized content to millions of media-enabled devices.

Your work will directly influence data-driven decision making, real-time targeting, and analytics pipelines for one of the most advanced AdTech ecosystems in the industry.


✅ What You Bring

  • 5+ years of experience in backend/data engineering using Python, Java, or Scala
  • Strong experience with Big Data frameworks (Hadoop, Spark, MapReduce)
  • Solid knowledge of SQL/NoSQL technologies (e.g. Snowflake, PostgreSQL, DynamoDB)
  • Hands-on with Kubernetes, Airflow, and AWS (or similar cloud platforms)
  • Stream processing experience: Flink, Ignite
  • Experience with large-scale dataset processing and performance optimization
  • Familiarity with modern software practices: Git, CI/CD, clean code, Design Patterns
  • Fluent in English (B2+)
  • Degree in Computer Science, Telecommunications, or related technical field

⭐ Bonus Points

  • Experience with GoLang or GraphQL
  • Hands-on with microservices or serverless solutions
  • Experience in container technologies (Docker, Kubernetes)
  • Previous work in AdTech, streaming media, or real-time data systems

Your responsibilities:

  • Build and maintain robust, scalable data pipelines to support attribution, targeting, and analytics
  • Collaborate closely with Data Scientists, Engineers, and Product Managers
  • Design and implement efficient storage and retrieval layers for massive datasets
  • Optimize data infrastructure and stream processing systems (e.g. Flink, Apache Ignite)
  • Drive quality through unit tests, integration tests, and code reviews
  • Develop and maintain Airflow DAGs and other orchestration pipelines
  • Translate business needs into robust technical data solutions
  • Lead or support A/B testing and data-driven model validation
  • Contribute to R&D initiatives around cloud services and architecture

Requirements: Data engineering, Python, Java, Scala, Big Data, Hadoop, Spark, SQL, NoSQL, Snowflake, PostgreSQL, AWS DynamoDB, Kubernetes, Airflow, AWS, Cloud platform, Flink, Git, CI/CD, Design Patterns, Degree, Golang, GraphQL, Microservices, Docker, AdTech

Tools: Agile, Scrum.

Additionally: Sport subscription, Private healthcare, International projects.
