Our project is a groundbreaking initiative in Artificial Intelligence (AI), aimed at enabling high-value AI use cases through cutting-edge platforms. The goal is to establish a Center of Excellence in AI and GenAI: an innovative hub where top AI professionals collaborate, share best practices, and explore new ideas. This center will play a crucial role in advancing the company's Ten-Year Ambitions, fostering innovation and excellence in AI across the organization.
This role presents an exciting opportunity to contribute to the pharmaceutical sector’s future through AI-driven solutions. Building on the latest advancements, the project’s strategy includes enhancing conversational AI applications, boosting developer productivity, and leveraging enterprise search technologies with trusted partners. By using low-code/no-code frameworks and established AI workbenches (Dataiku, AWS SageMaker, Kamino), the project seeks to push the limits of AI in delivering solutions across complex use cases.
AI Consultant - MLOps Engineer
Your responsibilities
- Design, implement, and fine-tune GenAI models and machine learning pipelines.
- Deploy and manage models in production environments using best practices in MLOps (see the sketch after this list).
- Continuously monitor and optimize model performance, ensuring scalability and efficiency.
- Evaluate large language models within specific domains and adapt them as necessary.
- Work alongside data scientists, software engineers, and business stakeholders to identify requirements and deliver effective solutions.
- Communicate complex technical concepts clearly to non-technical audiences.
- Integrate GenAI models seamlessly with existing systems and workflows.
- Support a range of consulting projects, from short proofs of concept to comprehensive production solutions.
- Develop scalable MLOps infrastructure, including CI/CD pipelines, version control, and automated testing processes.
- Oversee cloud-based resources and infrastructure for efficient model training and deployment.
- Stay informed on the latest GenAI and MLOps advancements.
- Implement best practices for model testing, deployment, and monitoring.
- Conduct post-deployment evaluations and propose enhancements.
- Ensure models and data pipelines adhere to regulatory standards and data security guidelines.
- Address potential biases and maintain ethical AI practices in accordance with industry standards.
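
To give a flavour of the day-to-day work, the sketch below shows one common MLOps pattern: logging parameters and metrics for a training run and registering the resulting model. This is a minimal illustration only, assuming an MLflow tracking server with a database-backed model registry; the experiment and model names (`genai-poc`, `demo-classifier`) and the scikit-learn toy data are purely hypothetical.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical experiment name; requires a tracking server that supports the model registry.
mlflow.set_experiment("genai-poc")

with mlflow.start_run():
    # Synthetic data stands in for a real feature pipeline.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=42)
    model.fit(X_train, y_train)
    test_accuracy = accuracy_score(y_test, model.predict(X_test))

    # Track hyperparameters and metrics so runs are reproducible and comparable.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_param("max_depth", 5)
    mlflow.log_metric("test_accuracy", test_accuracy)

    # Register the trained model so it can be promoted through deployment stages.
    mlflow.sklearn.log_model(
        model, artifact_path="model", registered_model_name="demo-classifier"
    )
```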
Our requirements
- Strong programming proficiency, particularly in Python (R is also a plus).
- Experience with MLOps, CI/CD, version control, and automated testing.
- Familiarity with cloud platforms (AWS, GCP, Azure) and MLOps tools (SageMaker, Vertex AI, Azure ML Studio).
- Hands-on experience with Docker, Kubernetes, and containerization.
- Knowledge of vector databases and LLM application frameworks such as LangChain or LlamaIndex.
- Solid background in data engineering, infrastructure automation (e.g., Terraform, AWS CloudFormation), and cloud-native Kubernetes services.
- Over two years of experience with Git, Linux fundamentals, and Bash scripting.
- Experience designing and implementing CI/CD pipelines (e.g., GitLab CI/CD, Argo CD).
- Experience working in complex consulting environments, especially on enterprise-level AI or MLOps projects.
- A strong understanding of data science principles, such as train/test data management, overfitting, and classification (see the brief illustration after this list).
- Knowledge of DevOps and Agile methodologies.
- Strong problem-solving abilities, excellent attention to detail, and superior troubleshooting skills.
- Effective communication skills and the ability to work well both independently and in team environments.
- English proficiency at B2 level or higher.
- Hands-on experience with GenAI technologies and applications.
- Exposure to MLOps tools (e.g., MLflow, Kubeflow) and familiarity with cloud infrastructure and services across major platforms.
- An interest or experience in fields such as computer vision, NLP, or predictive modeling.
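
As a brief illustration of the train/test and overfitting principles mentioned above, the sketch below compares training and test accuracy for two models; a large gap between the two is the classic symptom of overfitting. The synthetic data and model choices are purely for demonstration.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data; in practice this would come from project pipelines.
X, y = make_classification(n_samples=2000, n_features=25, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# An unconstrained tree tends to memorize the training data.
deep_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# A depth-limited tree trades training fit for better generalization.
shallow_tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

for name, model in [("unconstrained", deep_tree), ("max_depth=4", shallow_tree)]:
    train_acc = accuracy_score(y_train, model.predict(X_train))
    test_acc = accuracy_score(y_test, model.predict(X_test))
    # A large train/test gap signals overfitting.
    print(f"{name}: train={train_acc:.3f} test={test_acc:.3f} gap={train_acc - test_acc:.3f}")
```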