<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/badb94ad47cf40b7828a5decc5fbd5a9&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1440</height><width>1920</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1440</thumbnail_height><thumbnail_width>1920</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/badb94ad47cf40b7828a5decc5fbd5a9-45219a45e9f55828.gif</thumbnail_url><duration>862.7892</duration><title>CustomGPTs</title><description>In this video, I trace the evolution of my GPTs, from my first language model eight years ago to the latest version, DSLmodels. I discuss the significance of sparse priming representations (SPR) in efficiently representing ideas and memories, demonstrate the process of creating a custom metaprompt assistant, and explain the theory behind SPR techniques. Viewers are encouraged to follow along as we create a new custom GPT for decompression.</description></oembed>