Video: 4-Introduction to JAX for Deep Learning (Loom, 3:18)
https://www.loom.com/embed/9e6c6ca017134c91bc75167c898a39ec

In this video, I introduce JAX and its powerful features for deep learning, including automatic differentiation, just-in-time compilation, and vectorization. I explain how JAX operates on arrays much like NumPy but emphasizes explicit random keys and immutable arrays, which enhance reproducibility and functional programming. We cover essential tensor operations, matrix multiplication, and the importance of shape compatibility for computing gradients in neural networks. I encourage you to explore these concepts further, as they are foundational for building effective neural networks. Overall, JAX combines the simplicity of NumPy with advanced transformations, making it a robust framework for our projects.
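As a companion to the video, here is a minimal sketch of the features it mentions: explicit PRNG keys, immutable arrays with functional updates, shape-compatible matrix multiplication, and the `grad`, `jit`, and `vmap` transformations. The toy `loss` function and the specific shapes are illustrative assumptions, not code from the video.

```python
import jax
import jax.numpy as jnp

# Explicit random keys: JAX threads PRNG state by hand, which makes
# random number generation reproducible by construction.
key = jax.random.PRNGKey(0)
key, w_key, x_key = jax.random.split(key, 3)

# JAX arrays are immutable; functional updates return new arrays.
a = jnp.zeros(3)
b = a.at[0].set(1.0)  # a is unchanged; b is a new array

# Shape compatibility matters for matmul: (2, 3) @ (3,) -> (2,)
W = jax.random.normal(w_key, (2, 3))
x = jax.random.normal(x_key, (3,))

def loss(W, x):
    # A toy scalar loss (hypothetical) so we can take gradients.
    return jnp.sum(W @ x)

# Automatic differentiation: grad gives d(loss)/dW, with the same shape as W.
grad_W = jax.grad(loss)(W, x)

# Just-in-time compilation: jit traces the function and compiles it with XLA.
fast_loss = jax.jit(loss)

# Vectorization: vmap maps loss over a batch of x's without a Python loop.
batched_loss = jax.vmap(loss, in_axes=(None, 0))
xs = jax.random.normal(key, (5, 3))

print(grad_W.shape)              # (2, 3)
print(fast_loss(W, x))           # same value as loss(W, x), compiled
print(batched_loss(W, xs).shape) # (5,)
```

Note the design point the video stresses: because gradients must match parameter shapes, keeping matrix shapes compatible is what makes `jax.grad` usable when training neural networks.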