{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/edd7ef43d536466f8f23228d5f4e6211\" frameborder=\"0\" width=\"1108\" height=\"831\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":831,"width":1108,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":831,"thumbnail_width":1108,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/edd7ef43d536466f8f23228d5f4e6211-36f3eb0904ffa27b.gif","duration":179.349,"title":"Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes - ACL Anthology - 15 March 2026"}