<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/24ba433601de45ba8b63d9fb34c31fd5&quot; frameborder=&quot;0&quot; width=&quot;1110&quot; height=&quot;832&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>832</height><width>1110</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>832</thumbnail_height><thumbnail_width>1110</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/24ba433601de45ba8b63d9fb34c31fd5-fceea1add333597e.gif</thumbnail_url><duration>631.62</duration><title>Getting Started with Eval Protocol for Reinforcement Learning 🚀</title><description>In this video, I&apos;m walking you through the Eval Protocol quickstart. I&apos;ll show you how it saves you from rewriting your eval logic every time you try a new RL trainer. We&apos;ll be working on an agent that generates SVG images, and we&apos;re going to use GPT-4.1 as a visual &apos;judge&apos; to score how well it meets its requirements. First, I&apos;ll show you how we run everything locally, where our test uses a RemoteRolloutProcessor to get the SVG code from a server, renders it, and has it scored by our judge. Once we see that&apos;s all working, I&apos;ll show you the cool part: we run a single command, eval-protocol create rft, which automatically packages up our entire evaluator, secrets, and dataset, and kicks off a real training job on Fireworks. To finish, we&apos;ll hop into the Fireworks dashboard and check out the training graphs, logs, and the before-and-after visual improvements.</description></oembed>