<?xml version="1.0" encoding="UTF-8"?>
<oembed>
  <type>video</type>
  <version>1.0</version>
  <html>&lt;iframe src=&quot;https://www.loom.com/embed/032fb25afbd94c77b98777797a20aeed&quot; frameborder=&quot;0&quot; width=&quot;1108&quot; height=&quot;831&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html>
  <height>831</height>
  <width>1108</width>
  <provider_name>Loom</provider_name>
  <provider_url>https://www.loom.com</provider_url>
  <thumbnail_height>831</thumbnail_height>
  <thumbnail_width>1108</thumbnail_width>
  <thumbnail_url>https://cdn.loom.com/sessions/thumbnails/032fb25afbd94c77b98777797a20aeed-76761dcf69f91f3a.gif</thumbnail_url>
  <duration>194.391</duration>
  <title>Synigraph AI, Deterministic Multi-Agent Story Video Pipeline</title>
  <description>I built Synigraph AI as my Task 1 submission: a deterministic multi-agent narrative system visualized as an engineered pipeline. When you submit free-form narrative text with a title, a description, and an optional LLM mode and seed, the backend creates a durable compilation job that runs agents in sequence. First it structures characters, locations, timeline, themes, and ambiguity flags; then it plans scenes and adds director decisions such as camera angle and color palette; finally it generates frame-level metadata and an animated storyboard video. The output supports storyboard frames and downloadable video, compiled and deployed on Render or Vercel.</description>
</oembed>