Our client is a global IT consulting company specializing in software development, agile transformation, and digital innovation. With a strong focus on delivering high-quality solutions, they empower businesses to achieve technological excellence through cutting-edge technologies and methodologies. Known for fostering a culture of continuous learning and collaboration, they partner with leading organizations to drive their digital transformation journeys. The company operates across multiple industries, offering expertise in areas such as cloud computing, data engineering, and DevOps practices.
Responsibilities:
Design and develop batch data processing pipelines using Azure Data Factory, Databricks/pySpark, Python, and SQL
Work with streaming data processing solutions such as Azure Event Hubs, Cosmos DB, and Spark Streaming
Build and maintain scalable data architectures and pipelines on Azure cloud, ensuring efficient data migration and integration
Collaborate with DevOps teams to implement CI/CD pipelines and workflow automation using tools such as Azure DevOps, Jenkins, or Airflow
Ensure data quality, performance, and structure by applying strong analytical and data modeling skills
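To give a flavor of the batch-pipeline work described above, here is a minimal, tool-agnostic sketch of an extract-transform-load step in plain Python and SQL (using the standard-library sqlite3 module in place of Databricks/PySpark and Azure Data Factory, so the example stays self-contained; the table name and data are illustrative only):

```python
import sqlite3

def run_batch_pipeline(rows):
    """Load raw rows, apply an aggregation in SQL, and return the result.

    A stand-in for a batch ETL step; in this role the equivalent logic
    would typically run as a Databricks/PySpark job orchestrated by
    Azure Data Factory.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    # Transform step: total sales per region, largest first
    result = conn.execute(
        "SELECT region, SUM(amount) FROM sales "
        "GROUP BY region ORDER BY SUM(amount) DESC"
    ).fetchall()
    conn.close()
    return result

totals = run_batch_pipeline([("EU", 100.0), ("US", 250.0), ("EU", 50.0)])
# → [('US', 250.0), ('EU', 150.0)]
```

In production the same pattern scales out by swapping sqlite3 for a Spark DataFrame and the in-memory table for data landed in Azure Data Lake.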
Requirements:
At least 4 years of experience in a similar role
Expertise in Azure services, including provisioning, configuring, and developing solutions in Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics (formerly Azure SQL Data Warehouse)
Strong understanding of database principles and hands-on experience with MS SQL Server, Oracle, or similar RDBMS platforms
Experience with distributed data processing, both batch (priority) and streaming, using tools like Kafka or similar
Familiarity with data visualization tools such as Power BI or Tableau
Proficiency in DevOps practices and tools such as Azure DevOps, Jenkins, and Airflow
Strong problem-solving and analytical skills, with a self-motivated and detail-oriented attitude