Video: "Qwen Code x LiteLLM" (Loom, 52s): https://www.loom.com/embed/d7059b059c0f425fb0b8839418adffd6

In this video, I'm going to show you how to use Qwen Code with any model by connecting it to your LiteLLM proxy. To get started, you'll need to set three environment variables: your OpenAI base URL, your OpenAI API key (which is your proxy API key), and your OpenAI model. Once these are configured, LiteLLM provides the routes needed to use any model with Qwen Code. Follow these steps for a smooth setup.
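
As a concrete reference, here is a minimal sketch of the setup the video describes, assuming a LiteLLM proxy running locally on its default port (4000) and a model alias named "gpt-4o" defined in your proxy config; the URL, key, and model name below are placeholders to replace with your own values:

```bash
# Point Qwen Code at the LiteLLM proxy via its OpenAI-compatible variables
export OPENAI_BASE_URL="http://localhost:4000"  # your LiteLLM proxy URL (assumed default port)
export OPENAI_API_KEY="sk-1234"                 # your LiteLLM proxy API key (placeholder)
export OPENAI_MODEL="gpt-4o"                    # any model/alias your proxy routes (assumption)

# Start Qwen Code; its requests now go through the proxy
qwen
```

Because the proxy exposes every configured model behind the same OpenAI-compatible endpoint, switching models is just a matter of changing `OPENAI_MODEL`.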