<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/ef79d7860b544f689ee962d786e84868&quot; frameborder=&quot;0&quot; width=&quot;1842&quot; height=&quot;1381&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1381</height><width>1842</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1381</thumbnail_height><thumbnail_width>1842</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/ef79d7860b544f689ee962d786e84868-d9c8b7ce0507dc42.gif</thumbnail_url><duration>1189.373</duration><title>Giving Follow-up Instructions (Wondering) - Newsletter</title><description>I demonstrate how specific prompts influence AI follow-up questions in research tasks. I provide detailed instructions for the AI to ask about participants&apos; AI usage in research, focusing on tools and analysis tasks. I ask viewers to note how prompt complexity affects AI responses and why concise instructions matter for effective follow-up questions.</description></oembed>