{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/c5f8d612d9e94638a5b86db0f90cdf5b\" frameborder=\"0\" width=\"1662\" height=\"1246\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1246,"width":1662,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1246,"thumbnail_width":1662,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/c5f8d612d9e94638a5b86db0f90cdf5b-00001.gif","duration":223.13333333333327,"title":"Roboflow Project Walkthrough","description":"In this video, I walk through my Roboflow project, explaining the process asynchronously since it takes some time to run. The project involves taking a video of me running on a treadmill and performing inference on it; when the runner is no longer detected on the treadmill, the video stops. As trigger events, I overlay a stop sign on the last frame and send a text message; I also write to a CSV file. The video covers four main steps: breaking the video down into images, labeling the classes, running inference on the images, and piecing the images back together with their predictions. The purpose is to simulate a trigger event when a runner leaves the frame without stopping the treadmill."}