[3. TDB worked example: name mover heads in the IOI circuit](https://www.loom.com/embed/3478057cec484a1b85471585fef10811) (Loom video, 8:31)

In this video, I explore Transformer Debugger and apply it to understanding a circuit. I discuss a well-known prompt from the Interpretability in the Wild paper and highlight the attention heads that matter for the task, then demonstrate how TDB lets us qualitatively reproduce a finding from the paper. I also cover the paper's findings on the effect of ablating attention heads and the explanation it proposes for backup behavior.