<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/476c1addac204830bdbc60292ad52d4e&quot; frameborder=&quot;0&quot; width=&quot;1728&quot; height=&quot;1296&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1296</height><width>1728</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1296</thumbnail_height><thumbnail_width>1728</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/476c1addac204830bdbc60292ad52d4e-00001.gif</thumbnail_url><duration>300.6</duration><title>Building Naive King Lear RAG in Python</title><description>In this video, I give a high-level walkthrough of building your first RAG/RAQA application:
- Split documents into chunks
- Create embeddings for each chunk using an OpenAI embedding model
- Store the embeddings in a local vector database
- Wrap the vector store in a Retriever
- Take the user query and compute cosine similarity between the question and the entries in the vector store
- Set up visibility and evaluation using Weights and Biases (wandb) so that we can inspect what went wrong
- Use GPT-4 as a custom evaluator for the RAG application
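The retrieval steps above can be sketched in pure Python. This is an illustrative stand-in, not the notebook's code: the fixed-size chunker, the in-memory VectorStore, and the hand-rolled cosine similarity are assumptions made for the sketch, and in a real build the embeddings would come from an OpenAI embedding model rather than being supplied by hand.

```python
import math

def split_into_chunks(text, size=200):
    # Naive fixed-size character chunking (the "split docs into chunks" step)
    return [text[i:i + size] for i in range(0, len(text), size)]

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (norm(a) * norm(b))
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class VectorStore:
    # Minimal in-memory stand-in for a local vector database
    def __init__(self):
        self.entries = []  # list of (chunk, embedding) pairs

    def add(self, chunk, embedding):
        self.entries.append((chunk, embedding))

    def search(self, query_embedding, k=3):
        # Score every stored chunk against the query and keep the top k
        scored = [(cosine_similarity(query_embedding, emb), chunk)
                  for chunk, emb in self.entries]
        return sorted(scored, reverse=True)[:k]
```

A Retriever wrapper would simply embed the user's question with the same embedding model and call search() on the store, returning the top-scoring chunks as context for the LLM.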

GitHub Notebook Link: https://github.com/rajkstats/AIE2/blob/main/Week%202/Day%201/Pythonic%20RAG%20Assignment_rk.ipynb</description></oembed>