<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/f6c0ce9387eb456896394a81108cbfca&quot; frameborder=&quot;0&quot; width=&quot;1112&quot; height=&quot;834&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>834</height><width>1112</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>834</thumbnail_height><thumbnail_width>1112</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/f6c0ce9387eb456896394a81108cbfca-1714402209722.gif</thumbnail_url><duration>99.367</duration><title>Using Images with Your Prompts (Vision Model)</title><description>This 2Slash video is a concise tutorial on using the vision model: add images to a prompt, select a model with vision capabilities, and run tasks such as transcribing text from an image. The guide keeps the instructions simple and direct, making it easy to apply the model to a variety of image-related tasks.</description></oembed>