<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/8d979fe7fa3b43889f9e18b86b7446e4&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1440</height><width>1920</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1440</thumbnail_height><thumbnail_width>1920</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/8d979fe7fa3b43889f9e18b86b7446e4-b58f50e74bcd3288.gif</thumbnail_url><duration>243.4766</duration><title>Cursor - demo_trust_weight_rag.py - CORE - Cursor - 21 December 2025</title><description>Subject: RAG contradiction fix (Silent Demo - 3:40)

&quot;The video is silent. I included a full, slow scroll of the source so you can verify there are no hardcoded logic paths or &apos;cheat&apos; if/else blocks.

Navigation Map:

0:00 - 0:50: AI Integrity Check (I ask Cursor to audit the script for &apos;cheats&apos; live).

0:50 - 3:08: The Source Code (Slow scroll of the trust logic; skip this if you trust the AI audit).

3:08 - End: The Payoff (Live run: Baseline vs. Trust Layer).

Watching at 1.5x speed is recommended.&quot;</description></oembed>