Our client is a global IT consulting company specializing in software development, agile transformation, and digital innovation. With a strong focus on delivering high-quality solutions, they empower businesses to achieve technological excellence through cutting-edge technologies and methodologies. Known for fostering a culture of continuous learning and collaboration, they partner with leading organizations to drive their digital transformation journeys. The company operates across multiple industries, offering expertise in areas such as cloud computing, data engineering, and DevOps practices.
Responsibilities:
Design and implement data pipelines using Spark and GCP services
Maintain data quality and governance, and optimize processes for efficient data management
Utilize GCP services such as BigQuery, Cloud Pub/Sub, Cloud Storage, and Cloud Dataflow, as well as Databricks, to support cloud infrastructure
Leverage Infrastructure as Code (IaC) using Terraform to automate cloud infrastructure
Implement CI/CD pipelines for data workflows to enable automated testing and deployment
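As an illustration of the data-quality work this role involves, the sketch below shows a minimal row-level validation check. The record layout, field names, and rules are hypothetical; in practice such logic would typically run inside a Spark or Dataflow job against real pipeline data.

```python
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    total: int = 0                                  # rows inspected
    errors: list = field(default_factory=list)      # (row_index, field_name) pairs

def validate_rows(rows, required_fields):
    """Flag rows that are missing required fields or carry empty values.

    `rows` is an iterable of dicts; `required_fields` names the columns
    that must be present and non-empty. Returns a QualityReport.
    """
    report = QualityReport()
    for i, row in enumerate(rows):
        report.total += 1
        for name in required_fields:
            value = row.get(name)
            if value is None or value == "":
                report.errors.append((i, name))
    return report

# Hypothetical sample records
records = [
    {"user_id": "u1", "event": "click"},
    {"user_id": "", "event": "view"},   # empty user_id
    {"event": "purchase"},              # missing user_id
]
report = validate_rows(records, ["user_id", "event"])
print(report.total)    # 3
print(report.errors)   # [(1, 'user_id'), (2, 'user_id')]
```

A check like this is easy to wire into a CI/CD pipeline as a gating step: failing rows can be quarantined or the deployment halted when the error count exceeds a threshold.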
Requirements:
At least 4 years of experience in a similar role
Strong understanding of cloud computing concepts with hands-on experience in GCP
Proficiency in Python and SQL for data manipulation and processing
Experience with data modeling and using BigQuery for data analytics
Analytical mindset, capable of solving complex technical challenges
Familiarity with DevOps practices, particularly in data engineering environments