<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/915d2fc8395a4ff9a32c5bb508ce45c3&quot; frameborder=&quot;0&quot; width=&quot;1660&quot; height=&quot;1245&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1245</height><width>1660</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1245</thumbnail_height><thumbnail_width>1660</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/915d2fc8395a4ff9a32c5bb508ce45c3-a91254c80cf26346.gif</thumbnail_url><duration>1107.817</duration><title>Fixing Apache DataFusion Comet Fallback</title><description>In this video I walk through a bug fix in Apache DataFusion Comet, which runs Spark SQL operators in native Rust instead of on the JVM. While benchmarking TPC-DS, three queries (16, 94, and 95) were silently falling back to the JVM because partial-merge mode for hash aggregates was unsupported. I traced the issue through the Spark-to-Comet pipeline and added partial-merge support across the proto definitions, the Scala serializer, the Rust planner, and expression binding. After these four changes, all three queries ran fully native in Comet with zero JVM fallback.</description></oembed>