<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/0763766b1d7e42fbb094c7a4c1962e99&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1440</height><width>1920</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1440</thumbnail_height><thumbnail_width>1920</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/0763766b1d7e42fbb094c7a4c1962e99-cf06ca258ffecbc8.gif</thumbnail_url><duration>1266.639</duration><title>WideFinderAgent Music Recommender Demo 🟣</title><description>Hey everyone, I demoed my final project, WideFinderAgent, a full-stack agentic music recommender built with LangGraph, FastAPI with GraphQL endpoints, Next.js, and Langfuse for observability. I started from a rule-based Python recommender with about 18 songs and limited inputs, and replaced it with a natural-language, multi-node agent that thinks out loud, audits itself, loops with stop conditions, and learns from your feedback. During the demo you saw real-time SSE streaming of agent steps, feedback buttons, and Langfuse traces with token counts and latency. I did not ask viewers for a specific action, but I shared setup instructions, including the need for Grok and Langfuse accounts to run the app and view traces.</description></oembed>