<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/efd5162382dd4be89fe4bfe30ffce90a&quot; frameborder=&quot;0&quot; width=&quot;1298&quot; height=&quot;973&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>973</height><width>1298</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>973</thumbnail_height><thumbnail_width>1298</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/efd5162382dd4be89fe4bfe30ffce90a-e7e8a05bdeaea4aa.gif</thumbnail_url><duration>121.3762</duration><title>Enhancing LLM Observability for Better User Experience 📊</title><description>In this video, I walk you through how we&apos;ve enhanced LLM observability at LaunchDarkly to make it more discoverable and easier to read. I demonstrate analyzing traces and spans from the last 15 minutes, highlighting LLM attributes and the user interactions that can lead to frustration, such as responses that lack context. I also discuss why tracking metrics like estimated cost and total tokens helps identify issues quickly. My goal is to show how these improvements simplify debugging and deepen our understanding of user experiences. Please take a moment to explore these features and consider how they can be applied in your work.</description></oembed>