{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/b8a270f1f643477dae44a8d93a8ca0fe\" frameborder=\"0\" width=\"1416\" height=\"1062\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1062,"width":1416,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1062,"thumbnail_width":1416,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/b8a270f1f643477dae44a8d93a8ca0fe-45b328e438dd8733.gif","duration":297.475,"title":"Testing and Evaluating Prompts in PromptLayer","description":"In this video, I walk you through the Evals feature in PromptLayer, which allows for rigorous testing and evaluation of prompts. I demonstrate how to create a dataset of sample cases, using examples like scrambled eggs and sushi, and then set up an evaluation to ensure our AI Chef prompt produces concise and accurate outputs. We run the evaluation on a batch of inputs and achieve a perfect score, indicating that the prompt is functioning well. I encourage you to explore Evals for your own prompts to enhance their effectiveness. Please take the time to familiarize yourself with this feature and consider how it can benefit your work."}