Technologies expected: Scala, Apache Spark, Hadoop, Jenkins, HQL, Oozie, Shell, Git, Splunk

Technologies optional: Python, R, Snowflake Data Cloud

Responsibilities:
- Develop Scala/Spark programs, scripts, and macros for data extraction, transformation, and analysis
- Design and implement solutions that meet business requirements
- Support and maintain existing Hadoop applications and related technologies
- Develop and maintain metadata, user access, and security controls
- Develop and maintain technical documentation, including data models, process flows, and system diagrams

Requirements expected:
- Minimum 3-5 years of experience on Scala/Spark projects and/or engagements
- Create Scala/Spark jobs for data transformation and aggregation per complex business requirements (an illustrative sketch follows at the end of this posting)
- Ability to work in a challenging, agile environment with quick turnaround times and strict deadlines
- Perform unit tests of the Scala code
- Raise PRs, trigger builds, and release JAR versions for deployment via the Jenkins pipeline
- Familiarity with CI/CD concepts and processes
- Peer review code
- Perform root cause analysis (RCA) of raised bugs
- Excellent understanding of the Hadoop ecosystem

Offered:
- Physical location: Poland – Warsaw

Benefits:
- Sharing the costs of sports activities
- Private medical care
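
For illustration only, below is a minimal sketch of the kind of Scala/Spark batch job the role involves: reading raw data, aggregating it, and writing the result back out. The object name, input/output paths, and column names are all hypothetical placeholders, not part of any actual project in this posting.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    // Minimal sketch of a Spark transformation/aggregation job.
    // All paths and column names below are hypothetical.
    object SalesAggregationJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("SalesAggregationJob")
          .getOrCreate()

        // Extract: read raw input data (placeholder path)
        val sales = spark.read.parquet("/data/raw/sales")

        // Transform: aggregate revenue per region and day
        val dailyRevenue = sales
          .groupBy(col("region"), col("sale_date"))
          .agg(sum(col("amount")).as("total_revenue"))

        // Load: write the aggregated result back out (placeholder path)
        dailyRevenue.write.mode("overwrite").parquet("/data/curated/daily_revenue")

        spark.stop()
      }
    }

In practice, a job like this would be packaged as a JAR via the Jenkins pipeline, covered by unit tests of the transformation logic, and scheduled with a tool such as Oozie, as the requirements above describe.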