RAG-LLM Evaluation & Test Automation for Beginners

LLMs are everywhere! Every business is building its own custom RAG-based LLM to improve customer service. But how are engineers testing them? Unlike traditional software, AI-based systems call for a dedicated evaluation methodology. This course starts from the ground up, explaining the architecture behind how LLMs work. It then dives deep into LLM evaluation metrics and shows you how to use the RAGAS framework to score those metrics through scripted examples, so you can check metric benchmark scores with Pytest assertions and design a robust LLM test/evaluation automation framework.

What will you learn from the course?

- A high-level overview of Large Language Models (LLMs)
- How custom LLMs are built using the Retrieval-Augmented Generation (RAG) architecture
- Common benchmarks/metrics used to evaluate RAG-based LLMs
- An introduction to the RAGAS framework for evaluating and testing LLMs (see the sketch below)
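To make the RAGAS-plus-Pytest approach concrete, here is a minimal sketch of the kind of test the course builds toward. It assumes ragas 0.1.x with the datasets package installed and an OPENAI_API_KEY in the environment (RAGAS uses an LLM judge by default); the sample data, file name, and threshold values are hypothetical illustrations, not course material.

```python
# test_rag_metrics.py -- minimal sketch, assuming ragas 0.1.x + datasets + pytest,
# with OPENAI_API_KEY set (RAGAS scores metrics via an LLM judge by default).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# Hypothetical sample; in a real suite, answer and contexts come from your RAG pipeline.
samples = {
    "question": ["What is the refund window for annual plans?"],
    "answer": ["Annual plans can be refunded within 30 days of purchase."],
    "contexts": [["Refunds for annual subscriptions are available within 30 days."]],
}

def test_rag_metrics_meet_benchmarks():
    dataset = Dataset.from_dict(samples)
    # evaluate() calls out to the judge LLM, so this test needs network access.
    result = evaluate(dataset, metrics=[faithfulness, answer_relevancy])
    # Thresholds are illustrative benchmarks; tune them for your own system.
    assert result["faithfulness"] >= 0.8
    assert result["answer_relevancy"] >= 0.7
```

Running pytest on this file fails the build whenever a metric drops below its benchmark, the same way a functional test flags a broken feature.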
one-time purchase
Price expires on 06/29/2026

Course content

12 sections | 57 lessons