We are happy to announce that we are currently looking for an MLOps Engineer! This role is crucial to our company, and we are seeking candidates with outstanding skills and experience. Although there isn’t an immediate project available, we invite you to connect with us to discuss potential future opportunities.
An MLOps Engineer is responsible for streamlining machine learning project lifecycles by designing and automating workflows, implementing CI/CD pipelines, ensuring reproducibility, and providing reliable experiment tracking. They collaborate with stakeholders and platform engineers to set up infrastructure, automate model deployment, monitor models, and scale training. MLOps Engineers draw on a wide range of technical skills, including orchestration, storage, containerization, observability, SQL, programming languages, cloud platforms, and data processing. Their expertise also covers various ML algorithms and distributed training in environments such as Spark, PyTorch, TensorFlow, Dask, and Ray. MLOps Engineers are essential for optimizing and maintaining efficient ML processes in organizations.
Responsibilities:
Collaborating with Platform Engineers to set up the infrastructure required to run MLOps processes efficiently
Implementing ML workflows and automating CI/CD pipelines
Automating model deployment and implementing model monitoring
Collaborating with Platform Engineers to implement backup and disaster recovery processes for ML workflows, especially models and experiments
Collaborating with stakeholders to understand the key challenges and inefficiencies of Machine Learning project lifecycles within the company
Keeping abreast of the latest trends and advancements in data engineering and machine learning
Requirements:
Proficiency in Python, as well as experience with scripting languages like Bash or PowerShell
Knowledge of at least one orchestration and scheduling tool, for example Airflow, Prefect, or Dagster
Understanding of ML algorithms and distributed training, e.g., Spark / PyTorch / TensorFlow / Dask / Ray
Experience with cloud services (Azure / AWS / GCP)
Experience with platforms such as Databricks
Familiarity with tools like MLflow, W&B, and Neptune AI from an operations perspective
Experience with containerization technologies like Docker and basic knowledge of container orchestration platforms like Kubernetes
Understanding of continuous integration and continuous deployment (CI/CD) practices, as well as experience with related tools like GitHub Actions or GitLab CI
Offered:
Salary: 160-200 PLN net + VAT per hour on a B2B contract (depending on knowledge and experience)
100% remote work
Flexible working hours
Possibility to work from our office in the heart of Warsaw
Opportunity to learn and develop with the best Big Data experts