<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/d24b08c8d1f448fc95192203f3230c3a&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1440</height><width>1920</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1440</thumbnail_height><thumbnail_width>1920</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/d24b08c8d1f448fc95192203f3230c3a-5659d5a70153be84.gif</thumbnail_url><duration>888.912</duration><title>StudyMind AI, Trustworthy RAG Study Assistant</title><description>In my Loom, I demo StudyMind AI, a RAG-powered study assistant built to improve reliability and prevent hallucinations by grounding every answer in a student’s own notes. I built it with Python, GPT-4.1 mini, Chroma, and local sentence-transformer embeddings, and added guardrails plus an automated evaluation harness. The pipeline runs guardrails first, then a StudyMind agent that classifies the task, retrieves the top four chunks, generates an answer, and self-verifies with a confidence score and reasoning trace. I also ran 27 guardrail and test-chunk cases and 11 predefined harness test cases, and showed a prompt injection being blocked. No action was requested from viewers.</description></oembed>