📹 **Video:** [Running Open Models with Skill Files ✅](https://www.loom.com/embed/aa65b23acaaf419eb6b348500c2a9df3) (Loom, 5:42)

I tried running local models through Ollama and fine-tuning Phi, but I kept hitting compatibility issues with tools like Codex, GitHub CLI, and Claude. So I restyled Adr, pushed my fork to the repo, and bundled some improvements to the skill-invocation UI. With Qwen, a mixture-of-experts model (30B total parameters, 3B active), it now loads my skill files from a connected path and invokes commands like `align`, checking institutional knowledge as needed. I shared the fork link on Signal; if you have a better UX for running skills locally, please speak up.
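For concreteness, here is a minimal sketch of the workflow the video describes: reading a skill file from a directory, injecting it as the system prompt, and running one turn against a locally served model through Ollama's `/api/chat` endpoint. The `align` command, the Qwen model, and Ollama itself come from the video; the skills directory, the file layout, the model tag, and every function name are assumptions for illustration, not the fork's actual API.

```python
import json
from pathlib import Path

import requests

# Assumptions: one markdown skill file per command (e.g. align.md) under a
# single "connected path"; Ollama serving a Qwen MoE model on its default
# port. Paths and the model tag are illustrative.
SKILLS_DIR = Path.home() / ".skills"            # hypothetical connected path
OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's chat endpoint
MODEL = "qwen3:30b-a3b"                         # MoE: 30B total, 3B active


def load_skill(name: str) -> str:
    """Read a skill file by command name, e.g. 'align' -> ~/.skills/align.md."""
    path = SKILLS_DIR / f"{name}.md"
    if not path.exists():
        raise FileNotFoundError(f"no skill file for command '{name}' at {path}")
    return path.read_text()


def invoke_skill(command: str, user_input: str) -> str:
    """Inject the skill file as the system prompt and run a single chat turn."""
    skill = load_skill(command)
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "stream": False,  # return one complete JSON response
            "messages": [
                {"role": "system", "content": skill},
                {"role": "user", "content": user_input},
            ],
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]


if __name__ == "__main__":
    # e.g. the 'align' command from the demo, checking the request against
    # whatever institutional knowledge the skill file encodes
    print(invoke_skill("align", "Review this plan against our conventions."))
```

The design choice worth noting is that the skill file does all the work: the loop stays model-agnostic, so swapping Qwen for another Ollama model is a one-line change to `MODEL`.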