<?xml version="1.0" encoding="UTF-8"?>
<oembed>
  <type>video</type>
  <version>1.0</version>
  <html>&lt;iframe src=&quot;https://www.loom.com/embed/11eb0e08d5b94b519b9a6948a339fd23&quot; frameborder=&quot;0&quot; width=&quot;1840&quot; height=&quot;1380&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html>
  <height>1380</height>
  <width>1840</width>
  <provider_name>Loom</provider_name>
  <provider_url>https://www.loom.com</provider_url>
  <thumbnail_height>1380</thumbnail_height>
  <thumbnail_width>1840</thumbnail_width>
  <thumbnail_url>https://cdn.loom.com/sessions/thumbnails/11eb0e08d5b94b519b9a6948a339fd23-edbafa1733ec4e4a.gif</thumbnail_url>
  <duration>132.7375</duration>
  <title>Hackathon Project Overview</title>
  <description>In this video, I’m excited to share the results of our hackathon project: a Gradio app for analyzing MRI images of the spine. The app lets users upload two medical volumes and view different slices, while our machine learning models identify lesions. I pre-ran the segmentation process, which takes about 300 seconds, to showcase the overlays indicating the detected lesions. I also discuss the metrics we save and how we use a language model to generate user-friendly explanations. Please take a look and let me know your thoughts!</description>
</oembed>