<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/29cc930d60c0438eb9174ae90a568051&quot; frameborder=&quot;0&quot; width=&quot;1848&quot; height=&quot;1386&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1386</height><width>1848</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1386</thumbnail_height><thumbnail_width>1848</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/29cc930d60c0438eb9174ae90a568051-b657c9e286fd48e2.gif</thumbnail_url><duration>305.5915</duration><title>Introduction to the LLM Context Handling Problem</title><description>This video introduces a new content series focused on learnings from the &quot;AI Tools experiment.&quot; It explores how to improve the use of AI coding assistants like Cursor and GitHub Copilot. A key takeaway is the importance of prompt engineering—giving better instructions to get more accurate code results.

Beyond prompting, it highlights a bigger challenge: managing the context of large language models (LLMs). As development progresses and more instructions accumulate, the context grows, making the model slower and more prone to errors. To address this, the series advocates keeping the context lightweight and clean throughout the workflow.

One solution being tested is breaking large user stories down into smaller, more manageable tasks. Instead of building an entire feature at once, the work is split—for example, a login feature might be divided into a frontend form and a backend API. This lets the LLM focus on a specific task with less context, improving both speed and accuracy.

This video sets the stage for a series that will go deeper into these learnings and share practical ways to manage context and improve productivity with AI tools.</description></oembed>