{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/c90b34abc6f04f848be32fdc861a0d86\" frameborder=\"0\" width=\"1440\" height=\"1080\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1080,"width":1440,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1080,"thumbnail_width":1440,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/c90b34abc6f04f848be32fdc861a0d86-99e3e791af53a48e.gif","duration":116.794,"title":"Setup for Efficient AI Prompt Caching 🤖","description":"In this video, I demonstrate a setup for caching AI prompt results efficiently. By preventing duplicate runs for the same company across multiple leads, I show how to build a workflow that analyzes positive news articles about a company while avoiding redundant enrichment and scraping. The key is using a write-to-table step and a lookup column to manage prompt instances effectively."}