[Video: "Arrakis" — Loom, ~12 min — https://www.loom.com/embed/dc8b3ca8f19d4e05ad64b52899af396b]

In this video, I delve into the concept of mechanistic interpretability in AI models, focusing on the insights provided by hierarchies and activations within the models. I introduce a tool that bridges the gap between heuristic and open-source approaches, aiming to streamline experimentation for researchers. The video shows how to leverage decomposability hierarchies, experimentation features, and tools like sparsity analysis and graphing to improve the interpretability and efficiency of AI model analysis.