{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/77a63fb32c834deb8d18dd10b6e89faf\" frameborder=\"0\" width=\"1920\" height=\"1440\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1440,"width":1920,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1440,"thumbnail_width":1920,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/77a63fb32c834deb8d18dd10b6e89faf-5a8a90d179d6caed.gif","duration":204.502,"title":"Conducting a Reasoning Audit for AI Agents","description":"In this video, I demonstrate the audit process for AI agents, focusing on reasoning to prevent issues such as hallucinations and FOMO. We ran several audits: the first, a FOMO-driven strategy, received a fail verdict; the second also failed, due to hallucinated data; a third, more structured strategy passed the audit. The passing results showed low-to-medium risk levels. Please review the data and base scans provided, and pay close attention to these findings as we refine our strategies."}