[Video: DSAI Lab Project - Group 4 - Indoor Navigation for Vision Impaired (Loom, 4:06): https://www.loom.com/embed/e44c35ec0c944b599ca8e2982694145c]

Hi everyone, I am Saran Saini, and I am demonstrating my DSAI Lab project on navigation assistance for visually impaired people in an indoor space. My pipeline takes an RGB frame and runs YOLO object detection and depth estimation in parallel, then combines the two into a depth map annotated with bounding boxes. From that, I compute hazard zones (a central walkable zone of 40 percent and lateral zones of 30 percent each) and select the zone with the lowest risk. I generate a navigation command and pass it to a TTS module for audio guidance. I deployed the app on HuggingFace, and you can analyze your own images by clicking Analyze Environment.
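As a rough sketch of the zone-selection step described above: the split is assumed to be 30/40/30 over the frame width, the inverse-depth risk score and all names are hypothetical, and none of this is taken from the project's actual code.

import numpy as np

def pick_direction(depth_map: np.ndarray) -> str:
    """Score three hazard zones (left 30%, center 40%, right 30% of the
    frame width) by mean inverse depth and return the command for the
    lowest-risk zone. Split and scoring are illustrative assumptions."""
    _, w = depth_map.shape
    l, r = w * 30 // 100, w * 70 // 100  # zone boundaries (integer math)
    zones = {
        "move left": depth_map[:, :l],
        "move forward": depth_map[:, l:r],
        "move right": depth_map[:, r:],
    }
    # Risk = average proximity: assuming depth values are distances,
    # nearer pixels (small depths) contribute more risk.
    risk = {cmd: float(np.mean(1.0 / (z + 1e-6))) for cmd, z in zones.items()}
    return min(risk, key=risk.get)

# Synthetic example: far background (5 m), a moderate obstacle on the
# left (2 m) and a close obstacle on the right (0.5 m).
depth = np.full((480, 640), 5.0)
depth[:, :200] = 2.0
depth[:, 450:] = 0.5
print(pick_direction(depth))  # -> "move forward"

The returned command string would then be handed to the TTS module for audio guidance; in practice the risk score could also weight pixels inside YOLO bounding boxes more heavily, but that fusion step is not shown here.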