<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/54d8c75bfae7444ba1a8fcfc74fab59a&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1440</height><width>1920</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1440</thumbnail_height><thumbnail_width>1920</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/54d8c75bfae7444ba1a8fcfc74fab59a-3ebb2b39314e4ae3.gif</thumbnail_url><duration>507.67</duration><title>VibeFinder 2.0 RAG and Guardrails Demo 🎵</title><description>Hi everybody, I am Nabyu, and I am presenting VibeFinder 2.0 for my Data, NAS, CodePath AI final project. It extends my Module 3 music recommendation project by adding a retrieval-augmented generation (RAG) pipeline, grounded explanations built on 24 custom Markdown documents, and hallucination guardrails that enforce confidence, lexical grounding, and length. The pipeline has four stages, including TF-IDF scoring, grounded generation with Gemini 2.5 Flash, and deterministic guardrail scoring. I built two interfaces, a CLI and a Streamlit UI, and in the demo you can see per-song score breakdowns, confidence, and the retrieved chunks. No action is requested from viewers; everything is documented in my public README and model card.</description></oembed>