{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/3478057cec484a1b85471585fef10811\" frameborder=\"0\" width=\"1920\" height=\"1440\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1440,"width":1920,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1440,"thumbnail_width":1920,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/3478057cec484a1b85471585fef10811-1710191934824.gif","duration":511.14,"title":"3. TDB worked example: name mover heads in the IOI circuit","description":"In this video, I use Transformer Debugger (TDB) to understand a circuit. Starting from a well-known prompt from the Interpretability in the Wild paper, I highlight the attention heads that matter for the task and show how TDB lets us qualitatively reproduce a finding from that paper. I also discuss the paper's results on the effect of ablating attention heads and the explanation it proposes for backup behavior."}