The MLOps Engineer is responsible for streamlining the machine learning project lifecycle by designing and automating workflows, implementing CI/CD pipelines, ensuring reproducibility, and providing reliable experiment tracking. They collaborate with stakeholders and platform engineers to set up infrastructure, automate model deployment, monitor models in production, and scale training. MLOps Engineers bring a wide range of technical skills, including orchestration, storage, containerization, observability, SQL, programming languages, cloud platforms, and data processing. Their expertise also covers ML algorithms and distributed training in frameworks such as Spark, PyTorch, TensorFlow, Dask, and Ray. MLOps Engineers are essential for building and maintaining efficient ML processes within an organization.
Responsibilities:
Creating, configuring, and managing GCP and K8s resources
Managing Kubeflow and/or Vertex AI and their various components
Collaborating and contributing to various GitHub repositories: infrastructure, pipelines, Python apps, and libraries
Containerizing and orchestrating Python DS/ML applications: data pipelines (Airflow) and ML pipelines (Kubeflow)
Setting up logging, monitoring, and alerting
Profiling Python code for performance
Scaling, configuring, and reconfiguring all the components based on metrics
Working with Data (BigQuery, GCS, Airflow), ML (Kubeflow/Vertex), and GCP infrastructure
Streamlining processes and making Data Scientists' work more effective
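One of the responsibilities above, profiling Python code for performance, can be sketched with the standard library's cProfile and pstats modules. This is a minimal illustration only; the workload functions (`transform_rows`, `run_pipeline`) are hypothetical stand-ins for a real DS/ML task:

```python
import cProfile
import io
import pstats

def transform_rows(n):
    # Hypothetical data-transformation step standing in for a DS/ML workload.
    return [i * i for i in range(n)]

def run_pipeline(n=100_000):
    # Hypothetical pipeline entry point to be profiled.
    return sum(transform_rows(n))

profiler = cProfile.Profile()
profiler.enable()
result = run_pipeline()
profiler.disable()

# Summarize the 5 most time-consuming calls, sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

In practice the same approach can be pointed at an Airflow task callable or a Kubeflow component entry point to find hot spots before reaching for heavier tools.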
Expected requirements:
Proficiency in Python, as well as experience with scripting languages like Bash or PowerShell
Knowledge of at least one orchestration and scheduling tool, for example, Airflow, Prefect, Dagster, etc.
Understanding of ML algorithms and distributed training, e.g., Spark / PyTorch / TensorFlow / Dask / Ray
Experience with GCP and the BigQuery data warehouse (DWH) platform
Hands-on experience with Kubeflow and Vertex AI
Familiarity with tools like MLflow from the operations perspective
Experience with containerization technologies like Docker and knowledge of container orchestration platforms like Kubernetes
Understanding of continuous integration and continuous deployment (CI/CD) practices
Ability to identify and analyze problems in the workflow across all the teams involved, propose solutions, and navigate complex technical challenges
Offered:
Rate: 160-200 PLN net + VAT per hour (B2B contract), depending on knowledge and experience
100% remote work
Flexible working hours
Possibility to work from the office located in the heart of Warsaw
Opportunity to learn and develop with the best Big Data experts