<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/277bdecb284842ffae7b539eb61f1dff&quot; frameborder=&quot;0&quot; width=&quot;1680&quot; height=&quot;1260&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1260</height><width>1680</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1260</thumbnail_height><thumbnail_width>1680</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/277bdecb284842ffae7b539eb61f1dff-f042bc62a5a84140.gif</thumbnail_url><duration>252.167</duration><title>10-Understanding Deep Learning: The Power of Neural Networks and Space Folding</title><description>In this video, I explore why deep networks outperform wide ones, using a geometric view of neural networks as space folding. I demonstrate how deep networks create complex decision boundaries by stacking layers that fold and bend the input space, enabling hierarchical transformations. Although the universal approximation theorem shows that a single hidden layer can approximate any continuous function, deep networks are more efficient and effective in practice. Comparing a shallow, wide network with many neurons against a deep, narrow network, I show that the deep network learns complex boundaries significantly better. I encourage you to consider how depth enhances a network's representational power and problem-solving capability.</description></oembed>