Course Description

LLMs are everywhere! Businesses everywhere are building custom RAG-based LLM applications to improve customer service.
But how do engineers test them? Unlike traditional software, AI-based systems need a dedicated evaluation methodology.
This course starts from the ground up, explaining the architecture of how LLM-based AI systems work behind the scenes.
Then it dives deep into LLM evaluation metrics.
The course shows you how to use the RAGAS framework library effectively to evaluate LLMs against these metrics through scripted examples.
You will then use Pytest assertions to check metric benchmark scores and design a robust LLM test/evaluation automation framework.
What will you learn from the course?

- High-level overview of Large Language Models (LLMs)
- Understand how custom LLMs are built using the Retrieval-Augmented Generation (RAG) architecture
- Common benchmarks/metrics used in evaluating RAG-based LLMs
- Introduction to the RAGAS framework for evaluating/testing LLMs
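The Pytest-assertion approach described above can be sketched in plain Python. This is an illustrative pattern only: the `check_benchmarks` helper, the metric names, the scores, and the 0.80/0.75 thresholds are all assumptions for the example, with the scores dict standing in for the per-metric results a real RAGAS evaluation run would produce.

```python
# Illustrative pattern for benchmarking LLM evaluation metrics with
# assertions, as one would inside a Pytest test. The `scores` dict is a
# hypothetical stand-in for the output of a real RAGAS evaluation run.

# Assumed benchmark thresholds (illustrative values, not course material).
THRESHOLDS = {"faithfulness": 0.80, "answer_relevancy": 0.75}


def check_benchmarks(scores: dict, thresholds: dict) -> list:
    """Return the names of metrics that fall below their benchmark."""
    return [name for name, floor in thresholds.items()
            if scores.get(name, 0.0) < floor]


# Hypothetical metric scores from an evaluation run.
scores = {"faithfulness": 0.92, "answer_relevancy": 0.68}

failures = check_benchmarks(scores, THRESHOLDS)
# A Pytest test would then assert on the result, e.g.:
# assert not failures, f"Metrics below benchmark: {failures}"
```

Collecting all failing metrics before asserting gives a single, readable failure message instead of stopping at the first metric that misses its benchmark.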

Course Curriculum

  Section 1: Introduction to AI Concepts - LLMs & RAG LLMs
  Section 2: Understand RAG (Retrieval-Augmented Generation) LLM Architecture with a Use Case
  Section 3: Getting Started with Practice LLMs and the Approach to Evaluate/Test Them
  Section 4: Set Up the Python & Pytest Environment with RAGAS LLM Evaluation Package Libraries
  Section 5: Programmatic Solution to Evaluate LLM Metrics with LangChain and RAGAS Libraries
  Section 6: Optimize LLM Evaluation Tests with Pytest Fixtures & Parameterization Techniques
  Section 7: Evaluate LLM Core Metrics and the Importance of the EvalDataSet in the RAGAS Framework
  Section 8: Upload LLM Evaluation Results & Test the LLM for Multi-Turn Conversational Chat History
  Section 9: Create Test Data Dynamically to Evaluate the LLM & Generate a Rubrics Evaluation Score
  Section 10: Conclusion and Next Steps!
  Section 11: Optional - Learn Python Fundamentals with Examples
  Section 12: Optional - Overview of Pytest Framework Basics with Examples

Frequently Asked Questions
When does the course start and finish?
The course starts now and never ends! It is a completely self-paced online course - you decide when you start and when you finish.
How long do I have access to the course?
How does lifetime access sound? After enrolling, you have unlimited access to this course for as long as you like - across any and all devices you own.
What if I am unhappy with the course?
We would never want you to be unhappy! If you are unsatisfied with your purchase, contact us in the first 30 days and we will give you a full refund.