Video: Building an End-to-End ETL Pipeline with Databricks and Delta Lake 🚀
(Loom, ~18 min: https://www.loom.com/embed/babdb42fa6ef493b8bfefc547f9689f0)

In this video, I walk you through a hands-on data engineering project where I designed and implemented an end-to-end ETL pipeline for a fictional e-commerce company called Novacard. The project uses Databricks and Delta Lake, following the medallion architecture to transform raw transactional data into meaningful business insights. I detail the steps from ingesting the data, to cleaning and enriching it, to finally aggregating it to compute total sales and average order value. I also highlight the use of Delta Lake features for data governance and optimization. I encourage you to consider how similar approaches could enhance our own data workflows.
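
For readers who want a concrete picture before watching, here is a minimal PySpark sketch of the medallion flow the video describes: a bronze ingest of raw transactions, a silver cleaning/enrichment pass, and a gold aggregation computing total sales and average order value. The table names, column names (order_id, amount, order_timestamp), and raw-data path are hypothetical placeholders, not taken from the video, and the sketch assumes a Databricks-style environment where Delta Lake is available.

```python
from pyspark.sql import SparkSession, functions as F

# In a Databricks notebook `spark` already exists; outside Databricks you
# would also need the delta-spark package configured on the session.
spark = SparkSession.builder.getOrCreate()

# Bronze: ingest the raw transactional data as-is.
# The path is a hypothetical placeholder.
raw = spark.read.json("/mnt/raw/novacard/transactions/")
raw.write.format("delta").mode("overwrite").saveAsTable("bronze_transactions")

# Silver: clean and enrich — drop invalid rows, normalize the amount type,
# and derive an order_date column for downstream grouping.
silver = (
    spark.table("bronze_transactions")
    .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
    .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
    .withColumn("order_date", F.to_date("order_timestamp"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver_transactions")

# Gold: aggregate to the two business metrics mentioned in the video —
# total sales and average order value — here computed per day.
gold = (
    spark.table("silver_transactions")
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_sales"),
        F.avg("amount").alias("avg_order_value"),
    )
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold_daily_sales")
```

The governance and optimization features mentioned in the description would correspond to standard Delta Lake capabilities such as table history and time travel (`DESCRIBE HISTORY`) and file compaction (`OPTIMIZE`), though which specific features the video demonstrates is not stated here.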