{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/2353227a88fe4de9934431f9ff48b2fc\" frameborder=\"0\" width=\"1152\" height=\"864\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":864,"width":1152,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":864,"thumbnail_width":1152,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/2353227a88fe4de9934431f9ff48b2fc-00001.gif","duration":272.46,"title":"LangChain RAG with Hugging Face Inference API Endpoints","description":"In this video, I'll walk you through the following:\n- Setting up Inference Endpoints on Hugging Face for open-source LLMs and an embedding model\n- Creating a simple LangChain RAG using the QLoRA docs\n- Creating a custom dataset for QLoRA in LangSmith and a custom evaluator using GPT-4 for evaluation"}