<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/7818eb6984954b5bb407edbf6ce9bcaf&quot; frameborder=&quot;0&quot; width=&quot;1440&quot; height=&quot;1080&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1080</height><width>1440</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1080</thumbnail_height><thumbnail_width>1440</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/7818eb6984954b5bb407edbf6ce9bcaf-1699647099126.jpg</thumbnail_url><duration>3995.24</duration><title>Understanding AI Bias in Language Models</title><description>S/o to my new friend @srohanahmed! Give Rohan a follow! 

Large Language Models: https://datasci101.com/what-are-llms-part-1
Maven GenAI Course: https://maven.com/britney-muller/generative-ai-fundamentals/
Note: I give Google a hard time in this video, but I want to acknowledge that they&apos;ve done a lot for the field of ML/AI. Heck, they gave us Transformers in 2017! My intention is to show that if these AI mistakes can happen at Google, they can happen anywhere.

In this video, I discuss the importance of understanding AI bias in language models, focusing specifically on Bard and ChatGPT. I explain how these models are trained on internet content that is gathered without curation or consent, which can lead to biased outputs. I also highlight the need for transparency around training data and the risks of relying on these models.</description></oembed>