Collaborate with data scientists, engineers, and analysts to understand business requirements and design machine learning solutions
Participate in the end-to-end lifecycle of data science projects, applying DevOps practices, code, experiment, and model management, CI/CD, and other industry best practices
Write well-designed, testable, efficient code
Work closely with engineering to continuously improve how models are deployed to production
Collaborate with cross-functional teams to integrate machine learning models into existing systems and processes, ensuring scalability and efficiency
Our requirements
3+ years Python - Strong experience with Python, especially in writing clean, modular, and maintainable code following OOP principles (see the Python sketch after this list)
AWS - Cloud understanding (experience with S3, EC2, and Lambda; see the boto3 sketch after this list)
Docker - Experience in containerizing applications using Docker and optimizing Docker images for various environments
GitLab/GitHub - Proficiency in source control management with GitLab or GitHub, including CI/CD pipeline configuration and management
REST APIs - Knowledge of and hands-on experience in developing, consuming, and managing RESTful APIs (see the FastAPI sketch after this list)
Machine Learning
Experience with Argo Workflows or Argo CD for orchestrating complex workflows and managing Kubernetes-based deployments
Proficiency in writing Bash scripts for automation tasks, system configuration, and maintenance
Kubeflow, Airflow, Dagster - Experience with Kubeflow or a similar tool for running machine learning workflows on Kubernetes (see the Airflow sketch after this list)
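As a point of reference for the Python expectation above, here is a minimal sketch of the kind of clean, modular, object-oriented code we mean; the Model interface and LinearModel class are illustrative assumptions, not part of any existing codebase.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


class Model(ABC):
    """Minimal interface that every model implementation must satisfy."""

    @abstractmethod
    def predict(self, features: list[float]) -> float:
        ...


@dataclass
class LinearModel(Model):
    """Toy linear model: parameters are plain data, logic is isolated."""

    weights: list[float]
    bias: float = 0.0

    def predict(self, features: list[float]) -> float:
        # Dot product of weights and features, plus bias.
        return sum(w * x for w, x in zip(self.weights, features)) + self.bias


if __name__ == "__main__":
    model = LinearModel(weights=[0.5, 2.0], bias=1.0)
    print(model.predict([4.0, 3.0]))  # -> 9.0
```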
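For the AWS item, a minimal boto3 sketch of two typical interactions, uploading an artifact to S3 and invoking a Lambda function; the function names, bucket, key, and payload shape are hypothetical, and the code assumes AWS credentials are already configured in the environment.

```python
import json

import boto3


def upload_model_artifact(path: str, bucket: str, key: str) -> None:
    """Upload a local file (e.g. a serialized model) to S3."""
    s3 = boto3.client("s3")
    s3.upload_file(path, bucket, key)


def invoke_inference_lambda(function_name: str, payload: dict) -> dict:
    """Synchronously invoke a Lambda function and decode its JSON response."""
    client = boto3.client("lambda")
    response = client.invoke(
        FunctionName=function_name,
        Payload=json.dumps(payload).encode("utf-8"),
    )
    return json.loads(response["Payload"].read())
```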
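For the REST APIs item, a minimal sketch of a model-serving endpoint using FastAPI, one common Python framework for this; the /predict route, the request and response schemas, and the placeholder scoring logic are all illustrative assumptions.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PredictionRequest(BaseModel):
    features: list[float]


class PredictionResponse(BaseModel):
    score: float


@app.post("/predict", response_model=PredictionResponse)
def predict(request: PredictionRequest) -> PredictionResponse:
    # Placeholder scoring logic; a real service would call a loaded model here.
    score = sum(request.features) / max(len(request.features), 1)
    return PredictionResponse(score=score)
```

Run locally with, for example, `uvicorn main:app` and POST a JSON body like {"features": [1.0, 2.0]} to /predict.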
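For the workflow-orchestration item, a minimal Airflow DAG sketch with two dependent tasks; the DAG id, schedule, and task bodies are illustrative, and a Kubeflow or Dagster pipeline would express the same train-then-evaluate dependency with that tool's own API.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def train_model() -> None:
    print("training...")


def evaluate_model() -> None:
    print("evaluating...")


# `schedule` is the Airflow 2.4+ parameter; older versions use `schedule_interval`.
with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate", python_callable=evaluate_model)
    train >> evaluate  # evaluation runs only after training succeeds
```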