<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/06145ba4f0104c2bb2b75b3d1b4464b8&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1440</height><width>1920</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1440</thumbnail_height><thumbnail_width>1920</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/06145ba4f0104c2bb2b75b3d1b4464b8-b6687142abaa1480.gif</thumbnail_url><duration>500.926</duration><title>Legal Risks of Using Public AI Tools in Law Firms ⚖️</title><description>In this video, I discuss a recent ruling by a District Judge in the Southern District of New York in United States v. Bradley Heppner, in which documents generated by an AI tool were deemed not protected by attorney-client privilege. Heppner, who faced serious charges including securities fraud, used a non-enterprise version of an AI tool to draft defense arguments; the resulting sensitive documents were discovered during a federal seizure. The court emphasized that using public AI tools can create discoverable records that lack legal protection. I urge all law firms to educate clients and employees about the risks of entering confidential information into public AI systems and to implement strict governance and compliance measures for AI use. It&apos;s crucial that we preserve confidentiality and privilege in our legal practices moving forward.</description></oembed>