About the role
Correct Context is looking for a Scala/Spark Big Data Developer for Comscore, to work from Poland or nearby.
The start of this role is estimated for the beginning of March, but it might shift a bit.
Comscore is a global leader in media analytics, revolutionizing insights into consumer behavior, media consumption, and digital engagement.
Comscore leads in measuring and analyzing audiences across diverse digital platforms. You will thrive on cutting-edge technology, play a vital role as a trusted partner delivering accurate data to global businesses, and collaborate with industry leaders like Facebook, Disney, and Amazon, contributing to empowering businesses in the digital era across the media, advertising, e-commerce, and technology sectors.
We offer:
- Real big data projects - petabyte scale, thousands of servers, billions of events
- An international team (US/PL/IE/IN/CL/NL) - Slack + Zoom + English is the standard set
- Hands-on experience - you have the power to execute your ideas and improve stuff
- Quite a lot of autonomy in how to execute things
- A working environment of small, independent teams
- Flexible work time ⏰
- Fully remote or in-office work in Wroclaw, Poland
- 14,000 to 20,000 PLN net/month on B2B (other contract forms optional)
- Private healthcare
- Multikafeteria / Multisport
- Free parking
If you don't have all the qualifications, but you're interested in what we do and you have a solid understanding of Linux -> let's talk!
The recruitment process for the Scala/Spark Big Data Developer position has the following steps:
- Technical survey - 10 min
- Technical screening - 30 min
- Technical interview - 60 min video call
- Technical/Managerial interview - 60 min video call
- Final interview - Technical/Managerial - 30 min video call
The candidate must have:
- 2+ years of experience with Linux
- Solid knowledge of Linux (bash, threads, IPC, filesystems); being a power user is strongly desired, and understanding how the OS works lets you benefit from performance optimizations both in production and in daily workflows
- 1+ years of experience with Spark, primarily using Scala for Big data processing (including an understanding of how Spark works and why; see the sketch after this list)
- A strong drive to push projects forward, improve things, and take risks - backed by examples
- Great communication skills (you can drive end-to-end projects and guide dev team members)
- Professional working proficiency in English (both oral and written)
- Understanding of HTTP API communication patterns (REST/RPC) and the HTTP protocol itself
- Good software debugging skills - not only print statements, but also using a debugger
- Deep understanding of at least one technical area (please let us know which one it is and prepare your biggest battle story about it)
- A good working knowledge of Git
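As a minimal illustration of "how Spark works and why", the sketch below (a local, self-contained Scala example; the object name and dataset are made up for illustration) shows Spark's lazy evaluation: transformations only build an execution plan, and nothing runs until an action is called.

```scala
import org.apache.spark.sql.SparkSession

object LazyEvalSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("lazy-eval-sketch")
      .master("local[*]") // local mode, for illustration only
      .getOrCreate()

    import spark.implicits._

    // Transformations (filter, map) are lazy: they only extend the plan.
    val numbers = (1 to 1000000).toDS()
    val transformed = numbers.filter(_ % 2 == 0).map(_ * 3)

    // An action (count) triggers the actual distributed computation.
    println(transformed.count())

    spark.stop()
  }
}
```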
Your responsibilities:
- Design, implement, and maintain petabyte-scale Big data pipelines using Scala, Spark, Kubernetes, and a lot of other tech
- Optimize - working with Big data is very specific: sometimes a process is IO-bound, sometimes CPU-bound, and we need to figure out a faster way of doing things. At least empirical knowledge of computational complexity helps, because in Big data even simple operations become costly once multiplied by the size of the dataset (see the sketch after this list)
- Conduct Proof of Concept (PoC) work for enhancements
- Write great and performant Big Data Scala code
- Cooperate with other Big data teams
- Work with technologies like AWS, Kubernetes, Airflow, EMR, Hadoop, Linux/Ubuntu, Kafka, and Spark
- Use Slack and Zoom for communication
Requirements: Scala, Spark, Linux, API, AWS
Tools: Jira, Bitbucket, Git, Jenkins
Additionally: Sport subscription, private healthcare, remote work, flexible working hours, free coffee, playroom, modern office, no dress code, in-house trainings.
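A concrete example of how dataset size changes the cost calculus - a minimal Scala/Spark sketch (the input path is hypothetical, used only for illustration): both pipelines below compute the same word counts, but groupByKey shuffles every raw (word, 1) record across the network before summing, while reduceByKey pre-aggregates per partition first, which is far cheaper at petabyte scale.

```scala
import org.apache.spark.sql.SparkSession

object WordCountCostSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("word-count-cost-sketch")
      .getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical input location, for illustration only.
    val lines = sc.textFile("s3://example-bucket/events/*.log")
    val pairs = lines.flatMap(_.split("\\s+")).map(word => (word, 1L))

    // groupByKey: every (word, 1) pair crosses the network before summing,
    // so shuffle volume grows with the raw event count.
    // (Left unused here; shown only for comparison.)
    val costly = pairs.groupByKey().mapValues(_.sum)

    // reduceByKey: partial sums are computed per partition first,
    // so only one record per key per partition is shuffled.
    val cheaper = pairs.reduceByKey(_ + _)

    cheaper.take(10).foreach(println)
    spark.stop()
  }
}
```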