<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/84c413168ba34c0ab794a54c9f552fc3&quot; frameborder=&quot;0&quot; width=&quot;1422&quot; height=&quot;1066&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1066</height><width>1422</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1066</thumbnail_height><thumbnail_width>1422</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/84c413168ba34c0ab794a54c9f552fc3-21f1b4ef14f65ddc.gif</thumbnail_url><duration>88.213</duration><title>Chat Implementation with Streaming Context 🤖</title><description>In this Loom I set out a plan for the chat implementation and then walk through the initial implementation, including the prompt builder built from the context, the endpoint, server-sent events, and a basic skeleton. I tested it by adding an orange item and watching how the streaming chunks build the response on top of the context we provide. I also started another run and stopped the server to demonstrate error handling; there is a known issue with a double error indicator. No specific viewer action was requested.</description></oembed>