<?xml version="1.0" encoding="UTF-8"?>
<oembed>
  <type>video</type>
  <version>1.0</version>
  <html>&lt;iframe src=&quot;https://www.loom.com/embed/1dd375ec4b0d458fabdfc2b841089031&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html>
  <height>1440</height>
  <width>1920</width>
  <provider_name>Loom</provider_name>
  <provider_url>https://www.loom.com</provider_url>
  <thumbnail_height>1440</thumbnail_height>
  <thumbnail_width>1920</thumbnail_width>
  <thumbnail_url>https://cdn.loom.com/sessions/thumbnails/1dd375ec4b0d458fabdfc2b841089031-a0032cf8884f496e.gif</thumbnail_url>
  <duration>991.013977</duration>
  <title>LLM Zoomcamp - Overview of LLM Evaluation (Retrieval + AG)</title>
  <description>In this video, I explain how to generate a ground truth dataset, evaluate retrieval, and assess answer quality using Elasticsearch. I demonstrate how the generated questions are used to measure retrieval quality and discuss why evaluating the retrieval system matters. The video provides an overview of knowledge retrieval and evaluation processes.</description>
</oembed>