Video: Conducting a Reasoning Audit for AI Agents (Loom, 3:24)
https://www.loom.com/embed/77a63fb32c834deb8d18dd10b6e89faf

In this video, I demonstrate the audit process for AI agents, focusing on reasoning checks that prevent issues such as hallucinations and FOMO-driven decisions. We ran several audits: the first, a FOMO strategy, received a fail verdict; the second also failed because it relied on hallucinated data. I then presented a more structured strategy that passed the audit. The results showed low to medium risk levels. I encourage you to review the data and base scans provided, and to pay close attention to these findings as we refine our strategies.