{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/67b6b7ce415a467ea44863854ddf0c62\" frameborder=\"0\" width=\"1280\" height=\"960\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":960,"width":1280,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":960,"thumbnail_width":1280,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/67b6b7ce415a467ea44863854ddf0c62-d476a7f1d7126b87.gif","duration":109.517,"title":"NeuroSense Multimodal AI - 10 June 2025","description":"Multimodal Affect: a unified deep learning framework for emotion and sentiment recognition from video, audio, and text. Powered by BERT, ResNet3D, and CNNs. End-to-end training and robust evaluation, built for research and real-world affective computing."}