Senovo IT is looking for an experienced Data Engineer with a strong background in AWS, Python, PySpark, and Databricks. You will be responsible for building and managing data pipelines and performing transformations for both batch and streaming data. The ideal candidate will have hands-on experience with AWS cloud environments and be proficient with tools such as EMR, EC2, Airflow, AWS Lambda, and AWS Step Functions.
Responsibilities:
Build, manage, and optimize data pipelines for batch and streaming data.
Perform complex data transformations using Python, PySpark, and Databricks.
Work in an AWS Cloud PaaS environment utilizing services such as EMR, EC2, Airflow, AWS Lambda, and AWS Step Functions.
Ensure the reliable operation of data processing pipelines in cloud environments.
Expected requirements:
4–6 years of hands-on experience in data engineering.
Expertise in AWS Cloud technologies and Databricks.
Strong proficiency in Python and PySpark.
Experience building data pipelines and handling batch and streaming data.