Job Information
Detailed Description of the Work Tasks to Be Carried Out
Key Responsibilities:
- Develop Scala/Spark programs, scripts, and macros for data extraction, transformation, and analysis (see the illustrative sketch after this list).
- Design and implement solutions to meet business requirements.
- Support and maintain existing Hadoop applications and related technologies.
- Develop and maintain metadata, user access, and security controls.
- Develop and maintain technical documentation, including data models, process flows, and system diagrams.
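To give a concrete sense of the day-to-day work, the sketch below shows a minimal Spark job in Scala that reads data, applies a transformation, and writes an aggregated result. The object name `TransactionSummaryJob`, the HDFS paths, and the column names are hypothetical and serve only as an illustration, not as a description of the employer's actual codebase.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Hypothetical example: aggregate transaction amounts per customer.
// Paths and column names are illustrative only.
object TransactionSummaryJob {

  def summarize(transactions: DataFrame): DataFrame =
    transactions
      .filter(col("amount") > 0) // drop non-positive (invalid) rows
      .groupBy("customer_id")
      .agg(
        sum("amount").as("total_amount"),
        count("*").as("transaction_count")
      )

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TransactionSummaryJob")
      .getOrCreate()

    val input  = spark.read.parquet("hdfs:///data/in/transactions")
    val result = summarize(input)

    result.write.mode("overwrite").parquet("hdfs:///data/out/transaction_summary")
    spark.stop()
  }
}
```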
Description of Knowledge and Experience:
- Minimum 3-5 years of experience in Scala/Spark-related projects and/or engagements.
- Create Scala/Spark jobs for data transformation and aggregation according to complex business requirements.
- Should be able to work in a challenging and agile environment with quick turnaround times and strict deadlines.
- Perform unit tests of the Scala code (a minimal test sketch follows this list).
- Raise PRs, trigger builds, and release JAR versions for deployment via the Jenkins pipeline.
- Should be familiar with CI/CD concepts and processes.
- Peer review the code.
- Perform root cause analysis (RCA) of raised bugs.
- Should have an excellent understanding of the Hadoop ecosystem.
- Should be well versed with the following technologies:
• Jenkins
• HQL (Hive Query Language)
• Shell scripting
• Git
• Splunk
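As a sketch of the unit-testing expectation, the example below tests the hypothetical `TransactionSummaryJob.summarize` transformation from the earlier sketch. ScalaTest is assumed as the test framework; the posting does not name one.

```scala
import org.apache.spark.sql.SparkSession
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical unit test for the summarize transformation sketched above.
class TransactionSummaryJobTest extends AnyFunSuite {

  private val spark = SparkSession.builder()
    .appName("TransactionSummaryJobTest")
    .master("local[1]") // local mode so the test runs without a cluster
    .getOrCreate()

  import spark.implicits._

  test("summarize aggregates amounts per customer and drops non-positive rows") {
    val input = Seq(
      ("c1", 10.0),
      ("c1", 5.0),
      ("c2", -3.0) // should be filtered out
    ).toDF("customer_id", "amount")

    val result = TransactionSummaryJob.summarize(input)
      .collect()
      .map(r => (r.getString(0), r.getDouble(1), r.getLong(2)))
      .toSet

    assert(result == Set(("c1", 15.0, 2L)))
  }
}
```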
Preferred Qualifications:
- Relevant certifications (e.g., Scala, Spark, Hadoop, Performance).
- Knowledge of other programming languages (e.g., Python, R).
- Insight into cloud-based solutions such as Snowflake.
- Experience in Financial Services, preferably in the Credit risk domain.