<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/0ef2dd9e0bdd442cafc7a9fb09a73b0a&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1440</height><width>1920</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1440</thumbnail_height><thumbnail_width>1920</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/0ef2dd9e0bdd442cafc7a9fb09a73b0a-cbf8cb35972f6de4.gif</thumbnail_url><duration>457.218</duration><title>Enhancing Agent Evaluations on the Gradient AI Platform</title><description>In this video, I walk you through the agent evaluations experience on the Gradient AI Agentic Cloud platform, which I designed. This experience aims to help users understand their agent&apos;s behavior and improve its performance through qualitative and quantitative metrics. I demonstrate the evaluation process using the Ariel Silks Lesson Planner agent, highlighting key metrics and the importance of avoiding hallucinations in responses. I also discuss upcoming features, such as enhanced metric explanations and actionable insights for improvement. I encourage you to explore the platform and consider how it can enhance your own agent evaluations.</description></oembed>