{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/6e793e34688a490abcec2042f664325d\" frameborder=\"0\" width=\"1920\" height=\"1440\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1440,"width":1920,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1440,"thumbnail_width":1920,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/6e793e34688a490abcec2042f664325d-a481e073b56a8e90.gif","duration":240.027,"title":"Building a Fisher Recognition Web App with Image Uploads and Predictions 🌟","description":"Hi everyone! In this video I present my MLO summative project: a Fisher recognition web app built with the V2A mobile app and deployed at a public URL. I demonstrate the image upload flow, successfully uploading three pictures for the 'blurry' class and three for the 'Justin' class. I then highlight the model retraining process, which used a distribution of 10 images for 'blurry' and 9 for 'Justin' and reached an accuracy of 38% after 30 training sessions. Finally, I showcase the prediction feature: the app identified an uploaded image as 'Justin' with 61.41% confidence versus 15.2% for 'blurry'. Thanks for watching, and I encourage you to try uploading images and seeing the predictions for yourself!"}