Embedded Loom video: "Rendition Prompt Evaluating Framework 👍" (https://www.loom.com/embed/523f07131fcd410fb10d7ee7ae193fe1)

Hi, my name is Robert and in this video, I will be talking about the Rendition Prompt Evaluating Framework. The framework aims to evaluate and compare prompts used in our core product, which have become complex over time. We want to ensure that any changes made to the prompts do not introduce regressions and actually improve performance. I will explain how we define success criteria, execute tests, and evaluate the results. Additionally, I will provide examples and discuss the evaluation of grammar correction prompts and prompts for interacting with Kubernetes. This video will help you understand how we evaluate and improve our prompts to reduce fear of changing them in production.