<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/f3889cabf0df410b9fb86dc5f2814304&quot; frameborder=&quot;0&quot; width=&quot;1156&quot; height=&quot;867&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>867</height><width>1156</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>867</thumbnail_height><thumbnail_width>1156</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/f3889cabf0df410b9fb86dc5f2814304-f829fae2179d27b2.gif</thumbnail_url><duration>233.5642</duration><title>Understanding Context Loss in Conversational AI 🤖</title><description>In this video, I discuss how the checkpoint can lose context when clarifying information. I show examples where it asks repeated questions and fails to recall previous responses, which hinders our workflow, and I highlight that it only references what&apos;s currently displayed, breaking continuity in our discussions. I hope this insight helps in understanding the limitations we&apos;re facing as we work on improving our communication processes.</description></oembed>