{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/e0f3ae24367841b58b31b2d53a222e5d\" frameborder=\"0\" width=\"1658\" height=\"1243\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1243,"width":1658,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1243,"thumbnail_width":1658,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/e0f3ae24367841b58b31b2d53a222e5d-87297abb995b6cc5.gif","duration":286.684,"title":"ShopEase Voice Agent Implementation Overview 🚀","description":"Hi, this is my submission for the Cosmo Take-Home Engineering assignment, showcasing a Pipecat implementation. I'm using Groq-hosted Llama 3.1 8B for low-latency LLM inference and Deepgram for both speech-to-text and text-to-speech, with a SQLite database running locally. During the demo, I'll test order updates and refund-policy inquiries while streaming latency metrics in real time. I also have retry logic in place to handle rate limits, and I'm randomizing inputs to demonstrate the system's capabilities. Please take a look at the Kubernetes image specs in the repo for deployment details and let me know your thoughts!"}