Video: "Local ChatGPT for Famous People and Places" (Loom, about 5 minutes)
https://www.loom.com/embed/adedbd6566014bb499fd39f118f3c5c6

Hi, I'm Dedemirc, and this Loom is my demo for Project 3. Our goal was to build a ChatGPT-style assistant that answers questions about famous people and famous places, with everything running locally and no external APIs. I ingest 20 people and 20 places from Wikipedia, split each article into overlapping word-based chunks of about 500 words, embed the chunks with a local sentence transformer, and store everything in SQLite and Chroma. At question time, I embed the query, run a basic person-or-place classification, retrieve the most relevant chunks, and generate a grounded answer with a local Llama 3.2 model through Ollama. The main trade-offs are slower local responses and no chat memory; suggested next steps are improving classifier quality, adding streaming, and implementing chat history.
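For reference, the ingestion step described above (chunk, embed, store) could look roughly like this minimal Python sketch. The embedding model name all-MiniLM-L6-v2, the Chroma path ./chroma_db, the collection name wiki_chunks, and the 50-word overlap are assumptions, since the demo only specifies overlapping word-based chunks of about 500 words; the parallel SQLite store is omitted for brevity.

```python
from sentence_transformers import SentenceTransformer
import chromadb

def chunk_words(text, size=500, overlap=50):
    """Split text into overlapping word-based chunks of ~`size` words."""
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        piece = words[start:start + size]
        if piece:
            chunks.append(" ".join(piece))
        if start + size >= len(words):
            break
    return chunks

# Assumed model and storage names; the demo only says "a local sentence transformer" and "Chroma".
model = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("wiki_chunks")

def ingest(title, text, kind):
    """Embed one Wikipedia article's chunks and store them with metadata (kind: 'person' or 'place')."""
    chunks = chunk_words(text)
    embeddings = model.encode(chunks).tolist()
    collection.add(
        ids=[f"{title}-{i}" for i in range(len(chunks))],
        documents=chunks,
        embeddings=embeddings,
        metadatas=[{"title": title, "kind": kind}] * len(chunks),
    )
```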
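The question-answering step could then be sketched as below, under the same assumptions. The keyword heuristic is a crude stand-in for the demo's actual person-or-place classifier, and the Ollama model tag llama3.2 is an assumption based on the model named in the demo.

```python
import chromadb
import ollama
from sentence_transformers import SentenceTransformer

# Must match the names and model used at ingestion time (assumed values).
model = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("wiki_chunks")

def answer(question):
    """Embed the query, classify person vs. place, retrieve chunks, and generate a grounded answer."""
    q_emb = model.encode([question]).tolist()
    # Crude keyword heuristic standing in for the demo's classifier.
    kind = "person" if any(w in question.lower() for w in ("who", "born", "he ", "she ")) else "place"
    results = collection.query(query_embeddings=q_emb, n_results=4, where={"kind": kind})
    context = "\n\n".join(results["documents"][0])
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = ollama.chat(model="llama3.2", messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"]
```

Restricting the prompt to the retrieved chunks is what keeps the local model's answers grounded in the ingested Wikipedia text rather than its own parametric knowledge.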