The client is a global leader in the energy industry, specializing in subsea and surface technologies. The project involves building and maintaining advanced data architecture on both cloud and on-premises platforms using technologies such as Snowflake, dbt, SQL, SAP BW, and AWS S3.
Responsibilities:
Design, implement, and maintain data architecture across cloud and on-premises platforms.
Develop and optimize ETL/ELT pipelines using tools like Informatica, Talend, Attunity, and WebMethods.
Collaborate with data scientists, analysts, and business teams to ensure data availability, usability, and governance.
Define and enforce best practices in data modeling, version control (GitHub), and CI/CD workflows.
Integrate complex data sources from systems like SAP, Oracle, Microsoft BizTalk Server, and REST APIs.
Drive improvements in data quality, metadata management, and lineage tracking using tools such as Alation.
Ensure data security and regulatory compliance in collaboration with information security teams, utilizing tools like Azure Information Protection, OneTrust, and Vanta.
Participate in the evaluation and implementation of emerging data technologies and tools.
Expected requirements:
A minimum of 6 years of experience in data engineering roles, including experience leading technical teams.
Experience building ETL pipelines.
Experience with Databricks (preferred).
Knowledge and hands-on experience with Generative AI (nice to have).
Background in the energy, engineering, or manufacturing sector. Oil and gas experience is preferred, followed by a background in physics, mathematics, or energy; a purely data engineering background is also acceptable.