About the project
Samsung Ads is an advanced advertising ecosystem spanning hundreds of millions of smart devices across TV, mobile, desktop, and beyond. The project we are recruiting for focuses on enabling brands to connect with Samsung TV audiences by building the world’s smartest advertising platform. We apply machine learning algorithms to advertising campaigns to enhance targeting, personalization, and optimization. The goal is to deliver the right message to the right audience at the right time, resulting in higher engagement and conversion rates.
Audience building is a crucial aspect of effective marketing, especially in today’s digital landscape, where targeting specific groups of people is essential for success.
During the project onboarding process you will get to know our products and services so you can easily identify the ideal customer persona, considering factors such as demographics, psychographics, and purchasing power.
As part of an international company such as Samsung, you will get to work on the most challenging projects with stakeholders and teams located around the globe.
You will dive deep into the Samsung Advertising Galaxy, working in exciting domains such as bidding, pacing, and performance-based advertising, as well as recommendations and churn prediction and prevention.
As an MLOps engineer on the Samsung Ads team, you will have access to unique Samsung proprietary data to address existing product challenges and build end-to-end solutions with real-world impact. You will also work with talented engineers and top-notch machine learning researchers on exciting projects and state-of-the-art technologies.
In short, you will be responsible for designing, setting up, and administering the infrastructure for deploying, monitoring, and maintaining ML models.
Role and Responsibilities
- Design and develop highly scalable machine learning infrastructure to support high throughput and low latency.
- Serve ML models to downstream applications, ensuring that they are accessible, scalable, and secure.
- Manage model versions and ensure that the correct version is served to clients. Implement a rollback mechanism in case of issues with the current model version (a minimal sketch of this pattern follows the list).
- Implement monitoring and observability tools to track the performance, health, and usage of the platform and its components. Monitor the performance of the deployed models, addressing issues such as concept drift, data drift, and model degradation over time. Identify and resolve issues promptly, ensuring that the system remains stable and responsive.
- Develop, test, deploy, and maintain data and model training pipelines to support our ML products.
- Integrate the serving infrastructure with other systems, such as data pipelines, monitoring tools, and alerting systems. Ensure seamless communication and coordination among these systems.
- Constantly review and optimize the ML serving system. Strive to improve efficiency, reliability, and speed, looking for opportunities to simplify and automate tasks while maintaining high standards of quality.
- Research the latest machine learning serving technologies (e.g., model compilers, GPU deployment, and inference as a service), and keep up-to-date with industry trends and developments.
- Experiment with new scalable machine learning serving architectures tailored to our environment and create quick prototypes / proof-of-concepts.
- Streamline model deployment, unit testing, integration testing, stress testing and shadow testing.
- Enhance the online A/B testing framework.
- Work with ML engineers to deploy and serve production-grade, state-of-the-art machine learning models at scale.
- Depending on your skills and experience, you will have the chance to take on technical leadership responsibilities.
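To make the versioning and rollback responsibility above more concrete, here is a minimal Python sketch of a versioned serving endpoint with a rollback switch. FastAPI, the /predict and /rollback routes, and the in-memory registry are illustrative assumptions, not a description of the actual Samsung Ads stack.

```python
# Illustrative only: a tiny versioned model-serving endpoint with rollback.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In production, model artifacts would come from a model registry / object store;
# here a dict of callables stands in for loaded model versions.
MODEL_REGISTRY = {
    "v1": lambda features: sum(features) * 0.1,
    "v2": lambda features: sum(features) * 0.2,
}
active_version = "v2"      # version currently served to clients
previous_version = "v1"    # kept around so we can roll back quickly


class PredictRequest(BaseModel):
    features: list[float]


@app.post("/predict")
def predict(request: PredictRequest):
    # Always serve the currently active model version and report it to the client.
    model = MODEL_REGISTRY[active_version]
    return {"model_version": active_version, "score": model(request.features)}


@app.post("/rollback")
def rollback():
    # Swap back to the previous version if the new model misbehaves.
    global active_version, previous_version
    if previous_version is None:
        raise HTTPException(status_code=409, detail="no version to roll back to")
    active_version, previous_version = previous_version, active_version
    return {"active_version": active_version}
```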
Technologies in use
- Python
- REST
- AWS
- Spark
- Snowflake
- Snowpark
- GitHub Actions
- Airflow
- Kubernetes
- Grafana
- Terraform
- TensorFlow
- PyTorch
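To give a feel for how some of these tools combine in the pipeline work described in the responsibilities, below is a minimal, hypothetical Airflow DAG for a daily training pipeline. The DAG name, task functions, and schedule are assumptions for illustration only, not an existing pipeline.

```python
# Illustrative only: a minimal Airflow DAG chaining feature extraction,
# training, and model registration tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_features():
    # Placeholder: in a real pipeline this might run a Snowflake/Snowpark or Spark job.
    print("extracting features")


def train_model():
    # Placeholder: e.g. a TensorFlow/PyTorch training job.
    print("training model")


def register_model():
    # Placeholder: push the trained artifact to a model registry for serving.
    print("registering model")


with DAG(
    dag_id="daily_model_training",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ scheduling argument
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    register = PythonOperator(task_id="register_model", python_callable=register_model)

    extract >> train >> register
```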
Skills and Qualifications
- Degree in Computer Science or a related field.
- At least 2 years of proven industry experience in microservices.
- Experience with Infrastructure as Code (Terraform), cloud solutions, and orchestration tools (e.g., AWS, SageMaker, Airflow, AWS Step Functions/Lambda).
- Familiarity with CI/CD (e.g., GitHub Actions), ETL, big data tools, and mainstream ML libraries (e.g., MapReduce, Spark, Flink, Kafka, Unix/Linux with shell, Docker, Kubernetes, TensorFlow, PyTorch, Spark ML).
- Experience working with real-time monitoring/alerting components (e.g., Prometheus/Grafana).
- Experience in Python, Go, or other OOP languages.
- Experience with distributed cache systems, e.g., Redis/Aerospike.
Nice to have
- At least 5 years of industry experience with low-latency, high-throughput distributed microservices and integrations (e.g., WS/REST).
- Extensive experience with system architecture design for machine learning.
- Knowledge of testing frameworks for online A/B testing, canary, and blue-green deployments.
- Knowledge of ML serving technologies such as Seldon, Triton, ONNX, ONCL, and TensorRT.
- Experience in the advertising industry, recommendation systems, or the real-time bidding (RTB) ecosystem.
We offer
Team:
- Friendly atmosphere focused on teamwork
- A wide range of training courses and strong support in developing algorithmic skills
- Opportunity to work on multiple projects
- Working with the latest technologies on the market
- Monthly integration budget
- Possibility to attend local and foreign conferences
- Flexible working hours
Equipment:
- PC workstation/Laptop + 2 external monitors
- OS: Windows or Linux
Benefits:
- Private medical care (possibility to add family members for free)
- Multisport card
- Life insurance
- Lunch card
- Variety of discounts (Samsung products, theaters, restaurants)
- Unlimited free access to Copernicus Science Center for you and your friends
- Possibility to test new Samsung products
Location:
- Office in Warsaw Spire, near a Metro station
- Working in a hybrid model: 3 days from the office per week
- Attractive relocation package