{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/7744d333b75a42a79fbcc672f69e5c67\" frameborder=\"0\" width=\"2446\" height=\"1834\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1834,"width":2446,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1834,"thumbnail_width":2446,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/7744d333b75a42a79fbcc672f69e5c67-e9e47f9e9a023d61.gif","duration":138.941,"title":"Benchmarking model decode rates with Antigravity 🚀","description":"Hey Alex, I wanted to show off a project I did this weekend, using Antigravity for the first time instead of Cloud Code. I built a UI that benchmarks multiple models and logs results like time to first token, prompt processing versus decode time, and tokens per second, with averages saved to history after prompt processing finishes. The runs are stored in a SQLite database, and the project is in Go and Mantean, with requirements in the public README. I also tried serving it on GCP and hit an out-of-memory error to debug later. Let me know if you have any questions."}