{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/3f2f15f3b1c74605b8f933b880200eb8\" frameborder=\"0\" width=\"1280\" height=\"960\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":960,"width":1280,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":960,"thumbnail_width":1280,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/3f2f15f3b1c74605b8f933b880200eb8-00001.gif","duration":165.483,"title":"Understanding Problem 960 in Language Models","description":"In this video, I discuss problem 960, which studies learned features in language models. The goal is to perform dimensionality reduction on neural activations collected over a large amount of text. This matters for interpretability: we want to see how interpretable the resulting plots are. I explain how positive and negative comments separate in vector space and visualize them with a two-dimensional PCA. The quality of the model directly affects how cleanly they separate. Join me as we explore this fascinating problem!"}