<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/2353227a88fe4de9934431f9ff48b2fc&quot; frameborder=&quot;0&quot; width=&quot;1152&quot; height=&quot;864&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>864</height><width>1152</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>864</thumbnail_height><thumbnail_width>1152</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/2353227a88fe4de9934431f9ff48b2fc-00001.gif</thumbnail_url><duration>272.46</duration><title>LangChain RAG with Hugging Face Inference API Endpoints</title><description>In this video, I&apos;ll walk you through the following:
- Setting up Inference Endpoints on HF for open-source LLMs and an embedding model
- Creating a simple LangChain RAG using the QLoRA docs
- Creating a custom dataset for QLoRA in LangSmith and a custom evaluator using GPT-4 for evaluation</description></oembed>