Video: "6-Understanding Probability Theory in Neural Networks 🤖" (Loom, 4:45)
https://www.loom.com/embed/4809c8958f48431cba8e05280c926765

In this video, I explain the probability theory that underlies neural networks, emphasizing that they are essentially probability machines estimating conditional probabilities. I discuss how the softmax function converts logits into a valid probability distribution, ensuring that all outputs sum to 1. We also explore concepts like cross-entropy loss, Bayes' theorem, and the importance of minimizing uncertainty through entropy. Additionally, I touch on key distributions used in classification tasks and the role of batch normalization in stabilizing training. I encourage you to reflect on these concepts as we continue to deepen our understanding of neural networks.
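As a companion to the softmax and cross-entropy ideas the description mentions, here is a minimal sketch (not taken from the video; NumPy assumed) of how logits become a valid probability distribution and how cross-entropy scores it:

```python
# A minimal illustration of the softmax step described above:
# turning raw logits into a valid probability distribution.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Map raw logits to probabilities that are non-negative and sum to 1."""
    # Subtract the max logit for numerical stability; the shift cancels
    # out in the ratio, so the probabilities are unchanged.
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / np.sum(exp)

def cross_entropy(probs: np.ndarray, true_class: int) -> float:
    """Cross-entropy loss: negative log-probability of the true class."""
    return -float(np.log(probs[true_class]))

# Hypothetical three-class logits from a network's final layer.
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs)        # ~[0.659, 0.242, 0.099]
print(probs.sum())  # 1.0 — a valid probability distribution
print(cross_entropy(probs, true_class=0))  # ~0.42, low when the model is confident and correct
```

Note the connection to the entropy discussion in the video: cross-entropy is minimized when the model assigns high probability to the correct class, so training reduces the model's uncertainty about the conditional distribution it is estimating.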