{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/9729bee88cde4da58d30e8d05737c824\" frameborder=\"0\" width=\"1280\" height=\"960\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":960,"width":1280,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":960,"thumbnail_width":1280,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/9729bee88cde4da58d30e8d05737c824-9b2d9bd92eba3523.gif","duration":160.193,"title":"Transforming AI Troubleshooting with Monte Carlo: A Case Study ✈️","description":"In this video, I discuss how Monte Carlo's troubleshooting agent can dramatically improve our ability to diagnose AI failures in production, which often manifest as incorrect outputs rather than obvious errors. For instance, we observed a drop in recommendation validity from 99.2% to 94.6%, leading to over 2,000 invalid recommendations due to stale flight data. The agent quickly traced the issue back to a failing Airflow DAG that had not updated the data for six hours. I emphasize that the root cause was not an AI problem but rather stale source data, and I outline a remediation playbook that includes restarting the DAG and enhancing our data freshness checks. I encourage everyone to be proactive in ensuring our data pipelines are functioning optimally to maintain the quality of our AI outputs."}