<?xml version="1.0" encoding="UTF-8"?>
<oembed>
  <type>video</type>
  <version>1.0</version>
  <html>&lt;iframe src=&quot;https://www.loom.com/embed/33ba6c7a8c8944f9b3b2a9e50405e8c2&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html>
  <height>1440</height>
  <width>1920</width>
  <provider_name>Loom</provider_name>
  <provider_url>https://www.loom.com</provider_url>
  <thumbnail_height>1440</thumbnail_height>
  <thumbnail_width>1920</thumbnail_width>
  <thumbnail_url>https://cdn.loom.com/sessions/thumbnails/33ba6c7a8c8944f9b3b2a9e50405e8c2-f776be908a190e98.gif</thumbnail_url>
  <duration>1937.58</duration>
  <title>Navigating the Risks of AI: Understanding Large Language Model Induced Psychosis 🤖</title>
  <description>In this episode, I discuss the concept of Large Language Model Induced Psychosis and the psychological risks associated with AI tools, especially for professionals over 50. Drawing on my experience as a former LAPD officer and certified peer counselor, I emphasize the importance of recognizing early signs of psychological distress that can arise from excessive AI interaction. I urge viewers to conduct a self-assessment and implement at least three protective protocols this week to safeguard their mental health. It&apos;s crucial to maintain a healthy relationship with AI technology, ensuring it enhances rather than detracts from our lives. Remember, don&apos;t wait until you&apos;re in crisis to take action; reach out for help if needed.</description>
</oembed>