Currently, for our client, we are looking for a talented and dedicated Software or Data Engineer to join a team focused on developing capabilities that enable Data Mesh, offering expertise in leveraging contemporary technologies. This involves advising on the use of services available on public clouds or developing in-house services on an internal Kubernetes-based platform.
Project information:
Location: Remote
Type of employment: B2B contract
Project language: English
Project length: February 2025 - April 2027
Responsibilities:
Data Architecture and Data Mesh Development: Design, develop, and maintain scalable data architectures including databases, data lakes, and data warehouses. Implement best practices for data storage, retrieval, and processing. Drive the adoption of Data Mesh principles to promote decentralized data ownership and architecture.
Data Product Development: Develop and maintain reusable data products that serve various business units. Collaborate closely with data scientists and analysts to understand data needs and design data models that meet business requirements. Develop and maintain ETL (Extract, Transform, Load) processes that move and transform data from various sources into the data infrastructure, in alignment with Data Mesh principles.
Data Quality and Governance: Implement and enforce data quality standards and governance policies. Develop and maintain metadata documentation, data lineage, and data dictionaries for all data assets to ensure they are discoverable and accessible across the organization.
EDP Platform Adaptation: Design and implement Kubernetes-based deployment strategies for scalable, reliable, and manageable data technologies. Collaborate with DevOps and infrastructure teams to optimize data technology deployment processes within a Kubernetes environment.
Documentation: Document Data Mesh implementations, Proof of Concept (PoC) results, and best practices to share knowledge and create reference materials for future use.
Expected requirements:
5+ years of general IT experience and 3+ years of Big Data experience.
Proven experience as a Data Engineer with expertise in designing and implementing scalable data architectures.
Extensive experience in developing and maintaining databases, data lakes, and data warehouses.
Hands-on experience with ETL processes and data integration from various sources.
Familiarity with modern data technologies and cloud services (AWS, GCP, Azure).
Experience in designing and implementing data models to meet business requirements.
Proficient in data processing languages such as SQL, Java, Python, or Scala.
Experience with Data Catalog technology in conjunction with Data Mesh.
Familiarity with Big Data architectures (Data Warehouse, Data Lake, Data Lakehouse) and their implementation.
Knowledge of and experience with at least some of the following data technologies/frameworks: