{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/aa65b23acaaf419eb6b348500c2a9df3\" frameborder=\"0\" width=\"722\" height=\"541\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":541,"width":722,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":541,"thumbnail_width":722,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/aa65b23acaaf419eb6b348500c2a9df3-47bdd016e3f1c96b.gif","duration":342.435,"title":"Running Open Models with Skill Files ✅","description":"I tried running local models through Ollama, including fine-tuning Phi, but I kept hitting compatibility issues with tools like Codex, the GitHub CLI, and Claude. So I restyled Adr, pushed my fork to the repo, and bundled some improvements to the skill-invocation UI. With Qwen, a mixture-of-experts model with 30B total and 3B active parameters, it now loads my skill files from a connected path and invokes commands like align, checking institutional knowledge as needed. I shared the fork link on Signal; if you know a better UX for running skills locally, please speak up."}