For a pharmaceutical company, we are looking for an Automated QA Engineer to join our team, focusing on end-to-end quality assurance — both within OutSystems applications and across the backend RAG integration layer. The ideal candidate will have a strong background in software quality assurance and test automation, with a proven track record in testing AI-powered applications, particularly those utilizing Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) pipelines. This role will be critical to ensuring the robustness, reliability, and correctness of AI-driven responses through automated testing, regression validation, and quality benchmarking of RAG outputs.
Responsibilities:
- Automation Test Solution Development: Design and implement automated test suites for both OutSystems front-end modules and backend microservices interacting with RaaS from the Galileo platform
- Regression Testing: Build and maintain automated regression pipelines to ensure continuous platform stability after deployments and LLM model updates
- CI/CD Integration: Embed automated tests into CI/CD pipelines (GitLab CI preferred) to enable consistent, repeatable quality gates before releases
- Performance & Reliability Testing: Monitor and test system performance and reliability across environments, identifying performance bottlenecks or RAG degradation issues
- RAG Evaluation: Design and curate datasets for RAG evaluation; develop and execute RAG benchmarking tests, evaluating the accuracy and relevance of AI-generated content and information retrieval
- Collaboration: Work closely with developers, AI engineers, and platform teams to identify automation opportunities and improve overall software quality

Requirements: Python, Testing, API, Integration testing, Automated testing, Selenium, JavaScript, TypeScript, REST API, Microservices, API testing, Postman, pytest, GitLab CI, GitHub, Test automation, AWS, AWS Lambda, Gateway, AWS S3, AWS DynamoDB, Cloud, LLM, RAG, OutSystems

Additionally: Sport subscription, Private healthcare, International projects.
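To illustrate the kind of work involved, the RAG benchmarking responsibility above could be sketched as a pytest-style quality gate. This is a minimal, hypothetical example: the `query_rag` stub, the `EVAL_CASES` dataset, the keyword-recall metric, and the 0.8 threshold are all illustrative assumptions, not part of the actual Galileo/RaaS API or the team's real evaluation suite.

```python
# Minimal sketch of an automated RAG regression check (pytest style).
# query_rag(), EVAL_CASES, and the 0.8 threshold are illustrative
# assumptions, not the real Galileo/RaaS integration.

def keyword_recall(answer: str, expected_keywords: list) -> float:
    """Fraction of expected keywords present in the generated answer."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer_lower)
    return hits / len(expected_keywords)

# Curated evaluation dataset: query plus keywords a correct answer must mention.
EVAL_CASES = [
    ("What is the max daily dose?", ["dose", "mg"]),
    ("List known contraindications.", ["contraindication"]),
]

def query_rag(query: str) -> str:
    """Stand-in for the real RAG endpoint; in practice this would be
    an HTTP call to the backend service."""
    canned = {
        "What is the max daily dose?": "The maximum daily dose is 40 mg.",
        "List known contraindications.": "Known contraindications include ...",
    }
    return canned[query]

def test_rag_keyword_recall():
    # Quality gate: every curated case must reach at least 0.8 keyword recall.
    for query, keywords in EVAL_CASES:
        score = keyword_recall(query_rag(query), keywords)
        assert score >= 0.8, f"Low recall ({score:.2f}) for query: {query!r}"
```

Run in CI (e.g. a GitLab CI job invoking `pytest`), such a check turns RAG output quality into a repeatable release gate rather than a manual review step.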