<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/b4c322552955434a9059c17e22438e30&quot; frameborder=&quot;0&quot; width=&quot;1720&quot; height=&quot;1290&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1290</height><width>1720</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1290</thumbnail_height><thumbnail_width>1720</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/b4c322552955434a9059c17e22438e30-f59c0f23395629df.gif</thumbnail_url><duration>179.2307</duration><title>Improving Document Relevance in Retrieval Systems</title><description>In this video, I discuss the need to measure precision and recall for our retrieval system, which requires labeling each document as relevant or irrelevant. I emphasize the importance of annotating documents to build a golden dataset we can use to evaluate the effectiveness of our search system. Currently, I can only assess a search result as a whole; I want to be able to annotate documents individually and manage relevance metadata. I also highlight the need for a way to add or remove documents to account for false negatives. If anyone has insights on how to streamline this process, I would greatly appreciate your input.</description></oembed>