<?xml version="1.0" encoding="UTF-8"?>
<oembed>
  <type>video</type>
  <version>1.0</version>
  <html>&lt;iframe src=&quot;https://www.loom.com/embed/b8a6d28cc1694121ac3869079d76085b&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html>
  <height>1440</height>
  <width>1920</width>
  <provider_name>Loom</provider_name>
  <provider_url>https://www.loom.com</provider_url>
  <thumbnail_height>1440</thumbnail_height>
  <thumbnail_width>1920</thumbnail_width>
  <thumbnail_url>https://cdn.loom.com/sessions/thumbnails/b8a6d28cc1694121ac3869079d76085b-1acb4262bccddfe9.gif</thumbnail_url>
  <duration>849.908</duration>
  <title>AI Hallucinations Are Now an Epidemic in the Legal Field</title>
  <description>Tony DeSimone shares that AI hallucinations are becoming a serious and growing problem in the legal field, with prosecutors and law firms recently facing sanctions and costly consequences for submitting court filings containing false or AI-generated information. He explains that hallucinations aren’t rare mistakes, but a built-in risk of generative AI, and he breaks down the three main factors behind them to help you understand the best ways to avoid them.</description>
</oembed>