<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/91bb5bae871749c784251133ee14becc&quot; frameborder=&quot;0&quot; width=&quot;1280&quot; height=&quot;960&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>960</height><width>1280</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>960</thumbnail_height><thumbnail_width>1280</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/91bb5bae871749c784251133ee14becc-82741b41bfa9de1f.gif</thumbnail_url><duration>282.918</duration><title>Exploring the Zero Touch AI Avatar Pipeline 🚀</title><description>Hi team, in this video I walk you through the Zero Touch AI Avatar Pipeline I built, showing how we process a YouTube URL and generate an avatar with a high-resolution thumbnail and cloned audio using Grok and fal.ai. I engineered an asynchronous loop to handle API timeout limits, ensuring smooth GPU rendering. After processing, we log the data into a Supabase SQL database, and I demonstrate the frontend I created to visualize the output. I will also share the video through Dropbox and my GitHub profile for further reference. Please take a look at the demo and let me know your thoughts!</description></oembed>