Video: Understanding Problem 960 in Language Models (Loom, 2 min 45 s)
https://www.loom.com/embed/3f2f15f3b1c74605b8f933b880200eb8

In this video, I discuss problem 960, which focuses on studying learned features in language models. The goal is to perform dimensionality reduction on neural activations collected over a large corpus of text. This matters for interpretability: we want to know how interpretable the resulting plots are. I explain how positive and negative comments separate in activation space and how to visualize them with a two-dimensional PCA. The quality of the model directly affects how cleanly the two classes separate. Join me as we explore this fascinating problem!
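As a rough illustration of the pipeline the video describes, here is a minimal sketch: collect one activation vector per comment from a language model, then project the vectors to two dimensions with PCA. The video names neither a model nor a library, so GPT-2 via Hugging Face `transformers`, scikit-learn's PCA, mean-pooling over tokens, and the toy comments below are all assumptions for illustration.

```python
# Minimal sketch of the pipeline from the video. Assumed details (not from
# the source): GPT-2 as the model, mean-pooled last-layer activations as the
# per-comment vector, scikit-learn PCA for the 2-D reduction, toy comments
# in place of a large corpus.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.decomposition import PCA

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

# Hypothetical example comments with sentiment labels (1 = positive).
comments = [
    ("I absolutely loved this movie, a joy from start to finish.", 1),
    ("Fantastic service and a wonderful experience overall.", 1),
    ("This was a complete waste of time and money.", 0),
    ("Terrible plot, wooden acting, deeply disappointing.", 0),
]

activations = []
with torch.no_grad():
    for text, label in comments:
        inputs = tokenizer(text, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, d_model)
        # Mean-pool over tokens to get one activation vector per comment.
        activations.append(hidden.mean(dim=1).squeeze(0).numpy())

# Reduce the activation vectors to two dimensions for plotting.
pca = PCA(n_components=2)
coords = pca.fit_transform(activations)  # shape (n_comments, 2)

for (text, label), (x, y) in zip(comments, coords):
    tag = "positive" if label else "negative"
    print(f"{tag:8s} ({x:+.2f}, {y:+.2f})  {text[:40]}")
```

With a capable model and enough text, the printed 2-D coordinates (or a scatter plot of them) should show the positive and negative comments clustering apart, which is the separation the video examines.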