We are looking for an experienced Data Engineer to design, implement, and maintain scalable data pipelines and infrastructure, ensuring seamless data ingestion, storage, and transformation across multiple international environments.
This role operates within a global project context, collaborating with cross-functional teams located in different regions to deliver reliable and high-performance data solutions.
The position emphasizes the technical aspects of data engineering, including optimizing system performance, ensuring data consistency across geographies, and supporting robust, scalable, and compliant data workflows on an international scale.
Responsibilities:
Build and maintain data ingestion processes from various sources into the Data Lake.
Design, develop, and optimize complex data pipelines for reliable data flow.
Develop and maintain frameworks that streamline the construction of data pipelines.
Implement end-to-end testing frameworks for data pipelines.
Collaborate with data analysts and scientists to ensure the delivery of quality data.
Ensure robust data governance, security, and compliance practices.
Explore and implement emerging technologies to improve data pipeline performance.
Utilize and integrate data from various source system types, including Kafka, MQ, SFTP, databases, APIs, and file shares.
Expected requirements:
Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Economics, or a related field.
Minimum 3 years of experience as a Data Analyst or Data Quality Analyst in a data-driven organization.
Proven experience in data quality management and data governance practices.
Strong database expertise, including advanced SQL and PL/SQL.
Proficiency in Python; experience with Scala is a plus.
Proficiency in cloud platforms and services (preferably GCP).