The person in this role will collaborate with project teams responsible for developing advanced Data Lakehouse, Business Intelligence, and Advanced Analytics solutions in cloud environments. Working in an international setting, they will specialize in the latest technologies in this field.
Warsaw (Hybrid/Remote)
Minimum 5 years of experience in designing and building Business Intelligence, ETL/ELT, Data Warehouse, Data Lake, Data Lakehouse, Big Data, and OLAP-class solutions
Practical knowledge of various relational (e.g., SQL Server, Oracle, Redshift, PostgreSQL, Teradata) and non-relational database engines (e.g., MongoDB, Cosmos DB, DynamoDB, Neo4j, HBase, Redis, InfluxDB)
Strong proficiency in SQL and Python (minimum 5 years of experience)
Familiarity with data engineering and orchestration tools, particularly Spark/Databricks (including structured streaming mechanisms, DLT, etc.), Hadoop/CDP, Azure/Fabric Data Factory, Apache Flink, Apache Kafka, Apache Airflow, dbt, Debezium, and more
Understanding of data governance, data quality, and batch/streaming data processing challenges
Knowledge of architectural patterns in data, including Data Mesh, Data Vault, Dimensional Modeling, Medallion Architecture, and Lambda/Kappa Architectures
Proficiency in working with Git repositories (Bitbucket, GitHub, GitLab)
Experience with data services on the Azure and/or AWS platforms
Flexibility, self-reliance, and efficiency, with a strong sense of responsibility for assigned tasks
Practical knowledge of English at a minimum B2 level (C1+ preferred)
Designing new solutions and proposing improvements to existing data platforms, driven both by business requests (functional changes) and by technology needs (architectural changes)
Developing data platforms and ETL/ELT processes: providing technical support and actively participating in platform development; building and optimizing ETL/ELT processes that handle large data sets, applying data engineering best practices to ensure optimal processing
Standardizing and streamlining technical processes: implementing and optimizing standards for code, testing, and documentation management; selecting and configuring tools and development environments that support data engineering work, maintain code quality, and make solutions easier to scale
Ensuring standards compliance and code review: applying existing platform development standards, initiating new guidelines where improvements are needed, and monitoring the quality of delivered solutions through regular code reviews
Working hands-on with technology as a Data Engineer and Data Analyst to maintain a high level of technical proficiency, understand current challenges, and drive improvements based on real technical needs
Acting as a mentor to the team, providing subject-matter support in solution design, code standardization, process optimization, and best-practice implementation
Global cloud projects - we work with clients from all over the world using modern cloud technologies
Certification reimbursement - we fund exams and certifications from Microsoft, AWS, and Databricks
Learning time - 60 paid hours per year
Flexible approach - choose between working from home and meeting at our offices
Personalised benefits - medical care, subsidised sports packages, language tuition, a referral bonus for recommending new employees (up to PLN 15,000), and annual and media bonuses
I believe that clear and honest communication is the key to successful cooperation. Through it, we build a strong and cohesive team. If you are a candidate who values open and direct communication, we would love to hear your questions about the company!
Technical competence, initiative, and the ability to innovate and solve problems - these are qualities important in every Senior Data Engineer and what I look for in my team. Our job is to make sure everything runs smoothly and to the highest standard. If you think you can do that, join us!