<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/ee079aca75aa4fa1ba6a5e51302fbd56&quot; frameborder=&quot;0&quot; width=&quot;3840&quot; height=&quot;2880&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>2880</height><width>3840</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>2880</thumbnail_height><thumbnail_width>3840</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/ee079aca75aa4fa1ba6a5e51302fbd56-e4ee3f1f1a14a51d.gif</thumbnail_url><duration>182.233333</duration><title>Streamcore AI Plug-in System Overview 🚀</title><description>Hi, I am Angel from Streamcore AI. I walked through Streamcore AI, a self-hostable, low-latency platform built on a fast Go core for media handling and real-time voice processing. I explained our plug-in system, where long-lived subprocesses are registered via a manifest and callable by the LLM, with SDKs for Python, TypeScript, JavaScript, and native Go. We use it to build real-time voice apps such as voice agents, assistants, support chatbots, and more. I did not ask viewers to take any specific action.</description></oembed>