<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/d1270097c6c148148bdbaf9a2f7bfe7e&quot; frameborder=&quot;0&quot; width=&quot;1532&quot; height=&quot;1149&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1149</height><width>1532</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1149</thumbnail_height><thumbnail_width>1532</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/d1270097c6c148148bdbaf9a2f7bfe7e-ce345d4365ff587d.gif</thumbnail_url><duration>195.8376</duration><title>Cody IDE - Autocomplete</title><description>In this video, I explain the different interaction modalities in Cody and how to choose the right models for autocomplete. Larger models power chat and commands, while smaller, faster models are optimized for autocomplete. I walk through examples to help you decide which modality best suits your needs. Keep in mind that autocomplete only considers local context because of its latency demands; no remote context is included. Pay attention to when autocomplete makes suggestions and how to accept or dismiss them.</description></oembed>