<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/8652ca1268c94649a77295f4041f72ff&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1440</height><width>1920</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1440</thumbnail_height><thumbnail_width>1920</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/8652ca1268c94649a77295f4041f72ff-29bb3fa5f37855fa.gif</thumbnail_url><duration>874.772</duration><title>Model Evaluation</title><description>In this video, I dive into the crucial aspects of model evaluation, focusing on the precision-recall curve. I explain how to balance precision and recall to optimize fraud detection, targeting a precision of roughly 70-80%. I also discuss the financial impact of false positives and false negatives, including a potential loss of $15 million due to misclassifications. Please take a moment to review the calculations and insights I present, as your feedback will inform our next steps.</description></oembed>