We are looking for an experienced Scala/Spark Developer to join a project in the banking sector. The main responsibilities include developing and optimizing Scala/Spark jobs for data transformation and aggregation within the Hadoop ecosystem, as well as implementing CI/CD processes. The candidate will work in an agile environment, collaborating with the team on designing, testing, and deploying new solutions.
Hybrid work - one day per week from the office
Responsibilities:
Develop and optimize Scala/Spark jobs for data transformation and aggregation (see the sketch after this list)
Implement CI/CD processes and manage code versioning (Jenkins, Git)
Work within the Hadoop ecosystem and write HQL (Hive Query Language) queries
Automate workflows using Oozie and shell scripting
Conduct code reviews, unit testing, and root cause analysis (RCA)
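To give a flavor of the day-to-day work, here is a minimal sketch of the kind of Spark aggregation job the role involves, written in Scala against the DataFrame API. The table and column names (staging.transactions, account_id, amount, and so on) are purely illustrative, not taken from the project.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyAggregationJob {
  def main(args: Array[String]): Unit = {
    // Hive support lets the job read and write managed Hive tables directly.
    val spark = SparkSession.builder()
      .appName("daily-aggregation")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical source table: one row per transaction.
    val transactions = spark.table("staging.transactions")

    // Transformation + aggregation: total amount and count per account per day.
    val daily = transactions
      .filter(col("status") === "POSTED")
      .groupBy(col("account_id"), to_date(col("booked_at")).as("booking_date"))
      .agg(
        sum("amount").as("total_amount"),
        count("*").as("txn_count")
      )

    // Hypothetical target table, overwritten on each run.
    daily.write.mode("overwrite").saveAsTable("analytics.daily_account_totals")

    spark.stop()
  }
}
```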
Expected requirements:
Minimum 5 years of experience with Scala, Spark, and Hadoop
Proficiency in Jenkins, Oozie, Git, Splunk, and Hive queries (HQL; see the sketch below)
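On the Hive side, the same kind of aggregation can be expressed in HQL and run through Spark's SQL interface. Again, this is only a sketch with hypothetical table and column names:

```scala
import org.apache.spark.sql.SparkSession

object HqlAggregationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hql-aggregation")
      .enableHiveSupport()
      .getOrCreate()

    // The aggregation expressed as an HQL query against a hypothetical table.
    val daily = spark.sql(
      """
        |SELECT account_id,
        |       to_date(booked_at) AS booking_date,
        |       SUM(amount)        AS total_amount,
        |       COUNT(*)           AS txn_count
        |FROM   staging.transactions
        |WHERE  status = 'POSTED'
        |GROUP BY account_id, to_date(booked_at)
      """.stripMargin)

    // Hypothetical target table, overwritten on each run.
    daily.write.mode("overwrite").saveAsTable("analytics.daily_account_totals")

    spark.stop()
  }
}
```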