Senior Data Engineer
Warsaw, Masovian Voivodeship, Poland
DEVTALENTS sp. z o.o.
20.12.2024
Job details

Expected technologies:


  • SQL
  • PySpark
  • Python
  • Apache Airflow
  • AWS
  • Kafka
  • Redshift
  • Lambda
  • Terraform
  • Ansible

Optional technologies:


  • Microsoft Azure

About the project:


  • We are looking for a Senior Data Engineer to be a part of the DEVTALENTS team and contribute to the development of truly amazing solutions for businesses all over the world.
  • Joining DEVTALENTS can be a life-changing decision, with many benefits along the way.
  • Thanks to our wide network of partners, we provide top projects that are not open to external recruitment.
  • We value open and transparent communication, and we use tools like Slack and generative AI to improve collaboration and efficiency.
  • We support continuous growth, encouraging you to step beyond your comfort zone and develop both technically and personally.
  • In this role you will also provide technical leadership and mentorship to the team.

Responsibilities:


  • Lead the design, development, and maintenance of data pipelines and ETL/ELT processes to handle large-scale, diverse datasets.
  • Optimize data ingestion, transformation, and delivery using SQL, PySpark, and Python.
  • Leverage frameworks like Apache Airflow, AWS Glue, Kafka, and Redshift to ensure efficient data orchestration, batch/stream processing, and high-performance analytics (see the pipeline sketch after this list).
  • Drive best practices in version control (Git), infrastructure as code (Terraform, Ansible), and CI/CD pipelines to ensure robust, repeatable, and scalable deployments.
  • Collaborate closely with cross-functional teams (Data Scientists, Analysts, Product Managers) to design data models and architectures that meet business objectives.
  • Monitor, debug, and optimize ETL pipelines, ensuring high reliability, low latency, and cost efficiency.
  • Mentor mid-level and junior engineers, fostering a culture of knowledge sharing, continuous improvement, and innovation.
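
As a hedged illustration of the pipeline work above, here is a minimal sketch of an Airflow DAG that transforms one day of raw events with PySpark. The DAG id, S3 paths, and column names are hypothetical assumptions for illustration, not details from this posting.

# Minimal illustrative Airflow DAG; dag_id, S3 paths, and column names
# are hypothetical assumptions, not taken from this job posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_events(ds: str, **_) -> None:
    """Aggregate one day of raw events with PySpark and write curated output."""
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_events").getOrCreate()
    # Read the day's raw JSON events (assumed bucket layout).
    events = spark.read.json(f"s3://example-raw-bucket/events/{ds}/")
    # Simple per-user aggregation standing in for real transformation logic.
    daily = events.groupBy("user_id").agg(F.count("*").alias("event_count"))
    daily.write.mode("overwrite").parquet(f"s3://example-curated-bucket/daily/{ds}/")
    spark.stop()


with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 12, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="transform_events", python_callable=transform_events)

A production pipeline would typically add a COPY step into Redshift and data-quality checks; this sketch only shows the orchestration shape.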

Requirements:


  • Strong proficiency in SQL, PySpark, and Python for data transformations and scalable pipeline development (6+ years of commercial experience).
  • Hands-on experience with Apache Airflow, AWS Glue, Kafka, and Redshift. Familiarity with handling large volumes of structured and semi-structured data. Experience with DBT is a bonus.
  • Proficiency with Git for version control; strong command of Airflow is crucial for orchestration.
  • Solid experience working with AWS (Lambda, S3, CloudWatch, SNS/SQS, Kinesis) and exposure to serverless architectures.
  • Experience with Terraform and Ansible to automate and manage infrastructure.
  • Strong skills in monitoring ETL pipelines, troubleshooting performance bottlenecks, and maintaining high operational reliability.
  • Familiarity with CI/CD processes to automate testing, deployment, and versioning of data pipelines.
  • Ability to design distributed systems that scale horizontally for large data volumes. Knowledge of Lambda (combined batch and real-time) and Kappa (stream-only) processing architectures is a plus.
  • Experience building APIs (REST, GraphQL, OpenAPI, FastAPI) for data exchange (see the sketch after this list).
  • Exposure to Data Mesh principles and self-service data tools is highly desirable. Previous experience in building scalable data platforms and transforming large datasets is a strong plus.
  • A degree in Computer Science or a related field.
  • English at a minimum B2 level.
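
The requirements above mention building APIs for data exchange; below is a minimal, hedged FastAPI sketch of such a service. The endpoint path, model fields, and in-memory store are illustrative assumptions only.

# Minimal illustrative FastAPI data-exchange service; the path, fields,
# and in-memory store are hypothetical assumptions.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example data-exchange API")

_metrics: dict[str, float] = {}  # toy store standing in for a warehouse table


class Metric(BaseModel):
    name: str
    value: float


@app.post("/metrics", status_code=201)
def upsert_metric(metric: Metric) -> Metric:
    # Create or update a metric; FastAPI derives the OpenAPI schema automatically.
    _metrics[metric.name] = metric.value
    return metric


@app.get("/metrics/{name}")
def read_metric(name: str) -> Metric:
    if name not in _metrics:
        raise HTTPException(status_code=404, detail="metric not found")
    return Metric(name=name, value=_metrics[name])

Run it with, for example, uvicorn app:app; the generated OpenAPI docs are served at /docs.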

What we offer:


  • Influence over data architecture and platform decisions, playing a key role in shaping our data strategy.
  • A transparent, supportive culture that fosters professional growth, learning, and innovation.
  • Ongoing opportunities for training, workshops, and engagement with the broader data engineering community.
