<?xml version="1.0" encoding="UTF-8"?>
<oembed>
  <type>video</type>
  <version>1.0</version>
  <html>&lt;iframe src=&quot;https://www.loom.com/embed/6e793e34688a490abcec2042f664325d&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html>
  <height>1440</height>
  <width>1920</width>
  <provider_name>Loom</provider_name>
  <provider_url>https://www.loom.com</provider_url>
  <thumbnail_height>1440</thumbnail_height>
  <thumbnail_width>1920</thumbnail_width>
  <thumbnail_url>https://cdn.loom.com/sessions/thumbnails/6e793e34688a490abcec2042f664325d-a481e073b56a8e90.gif</thumbnail_url>
  <duration>240.027</duration>
  <title>Building a Fisher Recognition Web App with Image Uploads and Predictions 🌟</title>
  <description>Hi everyone! In this video I present my MLO summative project: a Fisher recognition web app built with the V2A mobile app and deployed at a public URL. I demonstrate the image-upload feature by successfully uploading three pictures of &apos;blurry&apos; and three of &apos;Justin&apos;. I then walk through the model retraining process, with a class distribution of 10 images for blurry and 9 for Justin, which achieved an accuracy of 38% after 30 training sessions. I also showcase the prediction feature, where the app classified an uploaded image with 61.41% confidence for Justin and 15.2% for blurry. Thanks for watching, and I encourage you to upload your own images and see the predictions for yourself!</description>
</oembed>