{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/977ee0718d39436a81a5b22632f4e296\" frameborder=\"0\" width=\"1280\" height=\"960\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":960,"width":1280,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":960,"thumbnail_width":1280,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/977ee0718d39436a81a5b22632f4e296-90a0101e959e525d.gif","duration":182.6,"title":"General Demo Portcullis Benchmark","description":"In this video, I discuss the limitations of current AI coding tools, specifically their tendency to generate code that appears correct but can disrupt architectural integrity. I highlight our solution, Portcullis, which not only identifies vulnerabilities but also understands the dependencies and intent behind code changes, as demonstrated through our testing with Django. Unlike Copilot, Portcullis centralizes security checks and provides a risk score, ensuring a more reliable coding process. As we refine this technology for human-in-the-loop development, I emphasize the need for a governance layer to build trust in AI-driven coding. I invite you to consider how Portcullis can enhance your development processes."}