Samsung Ads is an advanced advertising ecosystem spanning hundreds of millions of smart devices across TV, mobile, desktop, and beyond. The project we are recruiting for focuses on enabling brands to connect with Samsung TV audiences while building the world’s smartest advertising platform. We use machine learning algorithms to enhance the targeting, personalization, and optimization of advertising campaigns. The goal is to deliver the right message to the right audience at the right time, resulting in higher engagement and conversion rates.
Audience building is a crucial aspect of effective marketing, especially in today’s digital landscape, where targeting specific groups of people is essential for success.
During the project onboarding process you will get to know our products and services so you can identify the ideal customer persona, considering factors such as demographics, psychographics, and purchasing power.
As part of an international company such as Samsung, you will work on challenging projects with stakeholders and teams located around the globe.
You will dive deep into the Samsung Advertising Galaxy, working with exciting domains such as bidding, pacing, and performance-based advertising, as well as recommendations and churn prediction/prevention.
As an MLOps engineer on the Samsung Ads team, you will have access to unique Samsung proprietary data to address existing product challenges and build end-to-end solutions with real-world impact. You will also work with talented engineers and top-notch machine learning researchers on exciting projects and state-of-the-art technologies.
In short, you will be responsible for designing, setting up, and administering the infrastructure used to deploy, monitor, and maintain ML models.
Technologies in use
● Python
● Golang
● REST
● AWS
● Spark
● Snowflake / Snowpark
● GitHub Actions
● ArgoCD
● Airflow
● Kubernetes
● Grafana / Prometheus
● Terraform
● TensorFlow
● PyTorch
● Hadoop
● Aerospike / Redis
Responsibilities:
Design and develop highly scalable machine learning infrastructure to support high throughput and low latency.
Serve ML models to downstream applications, ensuring that they are accessible, scalable, and secure.
Manage model versions and ensure that the correct version is served to clients. Implement a rollback mechanism in case of issues with the current model version.
Implement monitoring and observability tools to track the performance, health, and usage of the platform and its components. Monitor the performance of the deployed models, addressing issues such as concept drift, data drift, and model degradation over time. Identify and resolve issues promptly, ensuring that the system remains stable and responsive.
Develop, test, deploy, and maintain data and model training pipelines to support our ML products.
Integrate the serving infrastructure with other systems, such as data pipelines, monitoring tools, and alerting systems. Ensure seamless communication and coordination among these systems.
Constantly review and optimize the ML serving system. Strive to improve efficiency, reliability, and speed, looking for opportunities to simplify and automate tasks while maintaining high standards of quality.
Research the latest machine learning serving technologies (e.g., model compilers, GPU deployment, and inference as a service), and keep up-to-date with industry trends and developments.
Experiment with new scalable machine learning serving architectures tailored to our environment and create quick prototypes / proof-of-concepts.
Streamline model deployment, unit testing, integration testing, stress testing and shadow testing.
Enhance the online A/B testing framework.
Work with ML engineers to deploy and serve production-grade, state-of-the-art machine learning models at scale.
Depending on your skills and experience, you will have the opportunity to take on a technical leadership role.
Expected requirements:
Degree in Computer Science or related fields.
At least 2 years of proven industry experience working with microservices.
Experience with Infrastructure as Code (Terraform) and with cloud and orchestration solutions on AWS (e.g., SageMaker, Airflow/MWAA, Step Functions, Lambda, EC2, EMR).
Familiarity with CI/CD (e.g., GitHub Actions, ArgoCD), ETL and big data tools (e.g., MapReduce, Spark, Flink, Kafka), Unix/Linux shell, Docker, Kubernetes, mainstream ML frameworks (e.g., TensorFlow, PyTorch), and communication protocols (gRPC, HTTP/2).
Experience working with real-time monitoring/alerting components (e.g., Prometheus, Grafana, AWS QuickSight).
Experience in Python and, preferably, Go.
Experience with distributed cache systems, e.g., Redis/Aerospike.
Offered:
Friendly atmosphere focused on teamwork
Wide range of training courses and strong support in developing algorithmic skills
Opportunity to work on multiple projects
Working with the latest technologies on the market
Monthly integration budget
Possibility to attend local and foreign conferences
Flexible working hours
PC workstation/Laptop + 2 external monitors
OS: Windows or Linux
Private medical care (possibility to add family members for free)
Multisport card
Life insurance
Lunch card
Variety of discounts (Samsung products, theaters, restaurants)
Unlimited free access to Copernicus Science Center for you and your friends
Possibility to test new Samsung products
Office in Warsaw Spire / Quattro Business Park
Very attractive relocation package
Benefits:
sharing the costs of sports activities
private medical care
sharing the costs of foreign language classes
life insurance
corporate products and services at discounted prices
integration events
dental care
no dress code
leisure zone
pre-paid cards
baby layette
charity initiatives
unlimited free access to Copernicus Science Center