We are looking for Data Engineers to join the IT team within the Environmental, Social & Governance (ESG) department of the Data and Analytics Office (DAO). Here in ESG DAO we create ESG domain data assets for consumption elsewhere in the bank.
The engineering team is responsible for taking business logic and proof-of-concept asset designs and using them to create robust data pipelines in Spark with Scala. Our pipelines are orchestrated through Airflow and deployed through a Jenkins-based CI/CD pipeline. We operate on a private GCP instance and an on-premises Hadoop cluster. Engineers are embedded in multi-disciplinary teams including business analysts, data analysts, data engineers, software engineers, and architects.
Requirements (expected):
Strong Scala programming skills.
Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark, and Tableau.
Proven ability to define and build architecturally sound solution designs.
Demonstrated ability to rapidly build relationships with key stakeholders.
Experience with automated unit testing, automated integration testing, and a fully automated build and deployment process as part of DevOps tooling.
Ability to understand and develop the logical flow of applications at the code level.
Strong interpersonal skills and the ability to work in a team and in a global environment.
Proactive, with a learning mindset and the ability to adapt to dynamic work environments.
Exposure to Enterprise Data Warehouse technologies.
Experience in a customer-facing role working with enterprise clients.
Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins), and requirements management in Jira.