<?xml version="1.0" encoding="UTF-8"?><oembed><type>video</type><version>1.0</version><html>&lt;iframe src=&quot;https://www.loom.com/embed/bb861ccecdb84c2c8df73516edde8440&quot; frameborder=&quot;0&quot; width=&quot;1920&quot; height=&quot;1440&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;</html><height>1440</height><width>1920</width><provider_name>Loom</provider_name><provider_url>https://www.loom.com</provider_url><thumbnail_height>1440</thumbnail_height><thumbnail_width>1920</thumbnail_width><thumbnail_url>https://cdn.loom.com/sessions/thumbnails/bb861ccecdb84c2c8df73516edde8440-958ca0f559f61f72.gif</thumbnail_url><duration>103.470033</duration><title>Demo: Wealthsimulate (https://wealthsimulate.vercel.app/)</title><description>https://wealthsimulate.vercel.app/
(https://github.com/pangpangcodes/wealthsimulate)

What the human can now do that they couldn&apos;t before

Wealthsimple shows you what you have. It can&apos;t help you reason about your future. When a 33-year-old user wants to know whether she can afford a career break in 2026, she goes on Reddit. She gets a rule of thumb. She has no way to run that question against her actual accounts, tax bracket, savings rate, and retirement trajectory.
Wealthsimulate adds a reasoning layer on top of her existing Wealthsimple data. Before she&apos;s typed a word, the AI has noticed her emergency fund covers only 1.6 months of expenses and that her lump-sum savings habit is costing over $20K across 32 years. Then she asks her question in plain English. The system runs 1,000 simulated futures and streams a personalized analysis: cash flow gap, emergency runway against liquid savings, and retirement impact. Her numbers, her situation.
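The 1,000 simulated futures can be sketched as a toy Monte Carlo run. Everything below is an illustrative assumption, not Wealthsimulate&apos;s actual engine: the return model (normal monthly returns, ~6% annual drift, ~15% annual volatility), the function names, and all dollar figures are made up for the sketch.

```python
import random

def simulate_paths(start_balance, monthly_savings, break_months, years=32,
                   n_paths=1000, seed=0):
    """Toy Monte Carlo: many simulated futures for one savings plan."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        balance = start_balance
        months_off = break_months
        for _ in range(years * 12):
            # One month of market returns: ~6% annual drift, ~15% annual vol
            balance *= 1 + rng.gauss(0.06 / 12, 0.15 / 12 ** 0.5)
            if months_off > 0:
                months_off -= 1  # career break: contributions paused
            else:
                balance += monthly_savings
        finals.append(balance)
    return finals

# Fraction of futures that end above a hypothetical $1M retirement target
finals = simulate_paths(40_000, 1_500, break_months=12)
p_success = sum(b >= 1_000_000 for b in finals) / len(finals)
```

The per-path results are what a personalized analysis would summarize: the spread of outcomes, not a single predicted number.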
This is a deliberately scoped MVP. The persona is specific: a Wealthsimple user in her early 30s whose primary lens is retirement. That focus is a design choice, not a limitation. The simulation engine, tool architecture, and AI reasoning layer are domain-agnostic. New scenario types or personas extend the same plumbing; they don&apos;t rebuild it.

What AI is responsible for

Three types of cognitive work, framed as judgment rather than tasks. First, inference from structured data: detecting income from deposit patterns, distinguishing savings from spending, and building a financial profile before the user has answered a single question. Second, ambiguity judgment: interpreting what &quot;retire early&quot; means for this specific person, deciding which variables are inferable versus missing, and surfacing only what matters most. Third, personalized analysis: explaining what 1,000 simulated paths mean for this person&apos;s income, tax bracket, account mix, and goals.
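The first kind of inference (income from deposit patterns) can be sketched with a simple heuristic: bucket deposits by amount and flag buckets that recur on a roughly monthly cadence. This is a hypothetical sketch, not the production inference logic; the bucketing granularity, cadence window, and sample data are all assumptions.

```python
from collections import defaultdict

def infer_recurring_income(deposits, tolerance_days=3):
    """Hypothetical heuristic: flag deposit amounts that recur ~monthly.
    deposits is a list of (day_number, amount) pairs."""
    by_amount = defaultdict(list)
    for day, amount in deposits:
        by_amount[round(amount, -1)].append(day)  # bucket to nearest $10
    recurring = []
    for amount, days in sorted(by_amount.items()):
        days.sort()
        gaps = [b - a for a, b in zip(days, days[1:])]
        # a ~30-day cadence with small jitter looks like salary
        if len(gaps) >= 2 and all(tolerance_days >= abs(g - 30) for g in gaps):
            recurring.append(amount)
    return recurring

deposits = [(1, 3200.0), (31, 3198.5), (62, 3201.0), (45, 80.0)]
print(infer_recurring_income(deposits))  # [3200.0]
```

A real system would layer judgment on top of this kind of signal, for example deciding whether two cadences mean two income sources or a raise.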

Where AI must stop

The hardest boundary isn&apos;t &quot;don&apos;t give advice.&quot; It&apos;s risk-tolerance calibration. The system can show that 100% equities gives 40% better expected retirement income but a 60% worse worst case. Whether that trade-off is acceptable (how someone actually feels when their portfolio drops 40%) is a values question no simulation can answer. The architecture enforces this: the AI interprets the question, the deterministic engine runs the math, and the user makes the trade-offs. The AI cannot hallucinate figures because it never generates them.
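That separation can be sketched as two layers: an AI step that only emits structured parameters, and a deterministic engine that produces every number the user sees. The function names, parameter schema, and all figures below are assumptions for illustration, not the project&apos;s actual interface.

```python
def interpret_question(question):
    """Stand-in for the AI layer: map free text to structured parameters.
    Hypothetical schema; the real system would call a language model here."""
    return {"scenario": "career_break", "start_year": 2026, "months": 6}

def run_engine(params, profile):
    """Deterministic layer: every figure the user sees is computed here,
    so the AI cannot hallucinate numbers it never generates."""
    gap = params["months"] * profile["monthly_expenses"]
    runway = profile["liquid_savings"] / profile["monthly_expenses"]
    return {"cash_flow_gap": gap, "emergency_runway_months": round(runway, 1)}

params = interpret_question("Can I afford a career break in 2026?")
profile = {"monthly_expenses": 4000, "liquid_savings": 6400}  # made-up figures
results = run_engine(params, profile)
# cash_flow_gap is 24000; emergency_runway_months is 1.6
```

Because the AI output is only a parameter dict, any number it invented would simply never reach the user.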
When the stakes warrant professional guidance, the system routes to Wealthsimple&apos;s advisory team with full context: profile, question, simulation results. Not a generic upsell. A qualified lead.</description></oembed>