<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/977ee0718d39436a81a5b22632f4e296&quot; frameborder=&quot;0&quot; width=&quot;1280&quot; height=&quot;960&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>960</height><width>1280</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>960</thumbnail_height><thumbnail_width>1280</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/977ee0718d39436a81a5b22632f4e296-90a0101e959e525d.gif</thumbnail_url><duration>182.6</duration><title>General Demo Portcullis Benchmark</title><description>In this video, I discuss the limitations of current AI coding tools, specifically their tendency to generate code that appears correct but can disrupt architectural integrity. I highlight our solution, Portcullis, which not only identifies vulnerabilities but also understands the dependencies and intent behind code changes, as demonstrated through our testing with Django. Unlike Copilot, Portcullis centralizes security checks and provides a risk score, ensuring a more reliable coding process. As we refine this technology for human-in-the-loop development, I emphasize the need for a governance layer to build trust in AI-driven coding. I invite you to consider how Portcullis can enhance your development processes.</description></oembed>