<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/6b08d60d1b8a4c2786b2bb998fd7e959&quot; frameborder=&quot;0&quot; width=&quot;1728&quot; height=&quot;1296&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1296</height><width>1728</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1296</thumbnail_height><thumbnail_width>1728</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/6b08d60d1b8a4c2786b2bb998fd7e959-3cb643cacd4487e3.gif</thumbnail_url><duration>318.546</duration><title>Building a Reliable Retrieval AI System</title><description>In this Loom, I explain how I turned my Week 7 Music Recommender into a full Applied AI system with retrieval, validation, and logging. The system retrieves relevant documents from a small knowledge base and generates answers from that context; a validator then assigns a confidence score and checks whether sources were found. Every interaction is logged as a JSON file for debugging and evaluation. In-scope questions, such as ones about prompt engineering and retrieval-augmented generation, were answered successfully, while an out-of-scope question about overfitting triggered the fallback response instead of a hallucinated answer.</description></oembed>