{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/a808c1c6a5034a5cb857b697ed5a8a77\" frameborder=\"0\" width=\"1386\" height=\"1039\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1039,"width":1386,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1039,"thumbnail_width":1386,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/a808c1c6a5034a5cb857b697ed5a8a77-00001.gif","duration":366.9,"title":"Server-Sent Events (SSE) support demonstrated with OpenAI's Codex (Copilot) code-generation model in streaming mode"}