<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/2fa1195b76324bbeb2dfea1644605d3c&quot; frameborder=&quot;0&quot; width=&quot;1110&quot; height=&quot;832&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>832</height><width>1110</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>832</thumbnail_height><thumbnail_width>1110</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/2fa1195b76324bbeb2dfea1644605d3c-00001.gif</thumbnail_url><duration>335.7</duration><title>Vellum Prompt Sandboxes || Overview</title><description>In this video, I provide an overview of the Prompt Sandboxes section. I explain how to navigate to the Prompts section in the left side nav and showcase the different prompts available in this workspace. I also demonstrate the comparison mode and chat mode features. The example task is determining whether a given conversation should be escalated to a human call center agent. I show how to compose prompt templates, add chat history, and use variables, and I highlight the different models available for comparison.</description></oembed>