{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/6addf8673d4f440ca20964519f5f1047\" frameborder=\"0\" width=\"1416\" height=\"1062\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1062,"width":1416,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1062,"thumbnail_width":1416,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/6addf8673d4f440ca20964519f5f1047-add2fc53b7c8c085.gif","duration":129.3543,"title":"Testing Prompts Against Different Models","description":"In this video, I discuss the importance of evaluating prompts across different AI models to determine which performs best. I demonstrate how to set up an evaluation using an AI Chef prompt, showing how to duplicate and modify models, specifically using GPT-4 and GPT-5. I emphasize that while we can change prompts inline, overriding the model for evaluation is a straightforward approach. I encourage you to experiment with various models and prompts to see how they perform. Please take the time to run these evaluations and share your findings."}