<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/8d65327fcb0649d6b06c0ca028eaf934&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1440</height><width>1920</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1440</thumbnail_height><thumbnail_width>1920</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/8d65327fcb0649d6b06c0ca028eaf934-9a9c1bec473e0219.gif</thumbnail_url><duration>136.587</duration><title>Catching AI Feature Issues with Guarded Rollouts 🚀</title><description>In this video, I share how LaunchDarkly&apos;s Guarded Rollouts helped me catch a regression in an AI feature before it became a major issue. I was rolling out a new model, GPT-4o, and used the Guarded Rollout feature to monitor key metrics like completion error counts and feedback. Thanks to this feature, LaunchDarkly automatically rolled back the change at 9:53 p.m. while I was asleep, preventing a potential incident. I encourage you to try out this feature and let me know your thoughts. Stay tuned for updates as I continue testing!</description></oembed>