{"type":"video","version":"1.0","html":"<iframe src=\"https://www.loom.com/embed/0b4bb303513d44a1bc5d01d2c22f1a85\" frameborder=\"0\" width=\"2208\" height=\"1656\" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>","height":1656,"width":2208,"provider_name":"Loom","provider_url":"https://www.loom.com","thumbnail_height":1656,"thumbnail_width":2208,"thumbnail_url":"https://cdn.loom.com/sessions/thumbnails/0b4bb303513d44a1bc5d01d2c22f1a85-7bc72666732139e2.gif","duration":376.111,"title":"Felafax -- building AI infra for non-NVIDIA GPUs","description":"Hey, we are twin brothers who worked on ML Infra at Google and Meta for the last 5 years, and we built a new AI stack for fine-tuning and serving LLMs. Our platform works on non-NVIDIA chipsets like TPUs, Trainium, and AMD GPUs, as well as on NVIDIA GPUs."}