{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/ecf0342b2d83423a9790e84e82caacfd\" frameborder=\"0\" width=\"1920\" height=\"1440\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1440,"width":1920,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1440,"thumbnail_width":1920,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/ecf0342b2d83423a9790e84e82caacfd-d8001adad62e049f.gif","duration":324.766667,"title":"Local RAG Wikipedia Assistant Setup and Tests","description":"I built and tested a fully local RAG system with Ollama and a local Mistral model, ingesting a Wikipedia playlist of 40 famous people and places in about 2 to 3 minutes. I then ran an embedding store locally using a MiniLM model, saving a mini Chroma vector-based database. I demonstrated queries such as what Mercury discovered, where Now is, and Albert Einstein questions, including chat memory and compression behavior. I also showed a no-answer case with 'who is the president of Mars', returning 'I do not know', and demonstrated the clear chat history button. No action was specifically requested from viewers."}