<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/c90b34abc6f04f848be32fdc861a0d86&quot; frameborder=&quot;0&quot; width=&quot;1440&quot; height=&quot;1080&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1080</height><width>1440</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1080</thumbnail_height><thumbnail_width>1440</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/c90b34abc6f04f848be32fdc861a0d86-99e3e791af53a48e.gif</thumbnail_url><duration>116.794</duration><title>Setup for Efficient AI Prompt Caching 🤖</title><description>In this video, I demonstrate a setup that caches AI prompt results efficiently. I show how to build a workflow that analyzes positive news articles about a company while preventing duplicate runs for the same company across multiple leads, avoiding redundant enrichment and scraping. The key is using a write-to-table step and a lookup column to manage prompt instances effectively.</description></oembed>