<?xml version="1.0" encoding="UTF-8"?>
<oembed>
  <type>video</type>
  <version>1.0</version>
  <html>&lt;iframe src=&quot;https://www.loom.com/embed/79cb485e56d34676a152ce4b49ad4253&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html>
  <height>1440</height>
  <width>1920</width>
  <provider_name>Loom</provider_name>
  <provider_url>https://www.loom.com</provider_url>
  <thumbnail_height>1440</thumbnail_height>
  <thumbnail_width>1920</thumbnail_width>
  <thumbnail_url>https://cdn.loom.com/sessions/thumbnails/79cb485e56d34676a152ce4b49ad4253-abe69f328e6cf254.gif</thumbnail_url>
  <duration>356.805</duration>
  <title>Model Selection</title>
  <description>In this video, I discuss the process of model selection and hyperparameter tuning, specifically using XGBoost for fraud detection. I highlight the importance of parameter tuning and the strategies I employed, such as random grid search and down-sampling techniques to address class imbalance. I also share the AUC results from my experiments, which show improvements in both validation and test sets. Please take a moment to review the findings and let me know your thoughts on the model&apos;s performance.</description>
</oembed>