Join a new, strategic data transformation project, where we're moving analytics from on-premises to GCP and building our data architecture and data model from the ground up, with a strong focus on creating business value and improving customer experience.
We work with technologies like GCP, Spark, Python, Kubernetes, BigQuery, Vertex AI, Terraform, and Looker. We integrate diverse, high-volume data sources, design streaming and batch processing layers, implement data governance, lineage, data quality, and data security, and set up CI/CD and monitoring/SLOs to shorten the path from question to answer for the business and create a solid foundation for AI/LLM-driven solutions.
We're looking for people who combine architecture with hands-on engineering, understand business needs, bring proactivity, energy, and fresh ideas, and want to actively shape the standards, patterns, and long-term direction of our data platform.
Responsibilities:
- Develop reusable frameworks for data processing and testing on GCP (e.g., BigQuery, Dataflow/Dataproc, Composer).
- Build and maintain batch and streaming data ingestion pipelines from various sources (databases, Kafka/MQ, APIs, files) into GCP.
- Implement automated tests and data quality checks for data pipelines.
- Collaborate with analysts and data scientists to deliver reliable, well-documented datasets.
- Monitor, optimize, and secure data pipelines in line with data governance and compliance standards.

Requirements: Data migration, Cloud, Data Lake, GCP, Storage, BigQuery, Pub/Sub, Looker, AI, Infrastructure as Code, Terraform, SQL, Spark, Python, Scala, Java, Linux, Docker, Communication skills, Kubernetes, Degree, Data science