{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/84c413168ba34c0ab794a54c9f552fc3\" frameborder=\"0\" width=\"1422\" height=\"1066\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1066,"width":1422,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1066,"thumbnail_width":1422,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/84c413168ba34c0ab794a54c9f552fc3-21f1b4ef14f65ddc.gif","duration":88.213,"title":"Chat Implementation with Streaming Context 🤖","description":"In this Loom I lay out a plan for the ChatGPT implementation and then walk through the initial implementation, including the prompt builder that assembles the context, the endpoint, server-sent events, and a basic skeleton. I tested it by adding an orange item and watching how the streaming chunks build on top of the context we provide. I also started another run and stopped the server to demonstrate error handling. There is a known issue with a double error indicator. No specific viewer action was requested."}