<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/c5f8d612d9e94638a5b86db0f90cdf5b&quot; frameborder=&quot;0&quot; width=&quot;1662&quot; height=&quot;1246&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1246</height><width>1662</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1246</thumbnail_height><thumbnail_width>1662</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/c5f8d612d9e94638a5b86db0f90cdf5b-00001.gif</thumbnail_url><duration>223.13333333333327</duration><title>Roboflow Project Walkthrough</title><description>In this video, I walk through my Roboflow project, recorded asynchronously since the processing takes some time. The project takes a video of me running on a treadmill and performs inference on it; when the runner is no longer detected on the treadmill, the video stops. The trigger events are a stop sign shown on the last frame, a text message sent to me, and a row written to a CSV file. The video covers four main steps: breaking the video into images, labeling the classes, running inference on the images, and piecing the images back together with their predictions. The purpose is to simulate a trigger event when a runner leaves the frame without stopping the treadmill.</description></oembed>