The model

What’s inside the forecast.

A one-page read on the model behind every number on the site. We publish the equation, the inputs, and the latest accuracy numbers. No black box. If something looks wrong, email us.

Currently shipping: v0.2-mean-revert · 410 of 411 metros scored · see the changelog →

The equation

For each metro we predict the change in house price over the next 12 months as a weighted blend of price-history signals, a national rate adjustment, and three small per-metro feature nudges:

forecast(metro) =
      0.50 × momentum(metro)     // last 12mo realized
    + 0.25 × yoy_3y(metro)       // 3-year trend
    + 0.25 × national_mean       // cross-sectional pull
    + rate_drift                 // national, ±1.5pp cap
    + permits_adj(metro)         // ±0.5pp cap
    + unemp_adj(metro)           // ±0.5pp cap
    + gdp_adj(metro)             // ±0.5pp cap
    + inventory_adj(metro)       // ±0.5pp cap
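As a reading aid, the blend above can be sketched in a few lines of Python. The clamp helper, dict keys, and example numbers are illustrative only, not the production code:

```python
def clamp(x, cap):
    """Cap an adjustment to the ±cap range (percentage points)."""
    return max(-cap, min(cap, x))

def forecast(m):
    """m: dict of per-metro signals, all in percentage points."""
    core = (0.50 * m["momentum"]          # last 12mo realized appreciation
          + 0.25 * m["yoy_3y"]            # 3-year trend
          + 0.25 * m["national_mean"])    # cross-sectional pull
    return (core
          + clamp(m["rate_drift"], 1.5)       # national rate term, ±1.5pp
          + clamp(m["permits_adj"], 0.5)      # supply: building permits
          + clamp(m["unemp_adj"], 0.5)        # demand: unemployment change
          + clamp(m["gdp_adj"], 0.5)          # demand: local GDP growth
          + clamp(m["inventory_adj"], 0.5))   # supply: active listings

# A metro with strong momentum but a rate headwind hitting the ±1.5pp cap
print(forecast({"momentum": 6.0, "yoy_3y": 4.0, "national_mean": 3.0,
                "rate_drift": -2.0, "permits_adj": 0.2, "unemp_adj": -0.1,
                "gdp_adj": 0.3, "inventory_adj": -0.4}))
```

Note how the caps bind independently per term: the -2.0pp rate drift is clipped to -1.5pp before it enters the sum.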

Every output and every input above is logged in market_signals.rationale, so any individual forecast can be decomposed back into the inputs that produced it.

What feeds it

  • FHFA HPI — the target series and the source of momentum / 3-year trend / volatility. Per metro, quarterly.
  • FRED MORTGAGE30US — current 30-year fixed vs trailing 10-year mean drives the uniform rate_drift term. Above-average rates are a headwind (negative drift); below-average are a tailwind.
  • Census BPS permits — 12-month single-unit permit count vs the prior 12 months. Rising permits = supply pressure on prices = small negative adjustment.
  • BLS LAUS unemployment — change in metro unemployment over the last year. Higher unemployment = weaker demand = small negative adjustment.
  • BEA real GDP by MSA — year-over-year change in metro GDP. Faster local growth = small positive adjustment.
  • Zillow For-Sale Inventory — year-over-year change in active listings. More inventory = supply meeting demand = small negative adjustment.
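For illustration, here is how a single term such as rate_drift might be computed from the description above. The sign convention and ±1.5pp cap come from the text; the 1:1 mapping from rate gap to drift is an assumption, since the page does not publish the scaling:

```python
def rate_drift(current_rate, trailing_10y_rates, cap=1.5):
    """Above-average mortgage rates are a headwind (negative drift),
    below-average a tailwind (positive drift), capped at ±cap pp.
    The 1pp-of-rate-gap -> 1pp-of-drift scaling is illustrative."""
    mean_rate = sum(trailing_10y_rates) / len(trailing_10y_rates)
    raw = -(current_rate - mean_rate)
    return max(-cap, min(cap, raw))

# Rates 2.8pp above their trailing mean: drift clips to the -1.5pp floor
print(rate_drift(6.8, [4.0] * 10))
```

The permits, unemployment, GDP, and inventory adjustments follow the same shape: a signed year-over-year comparison clipped to ±0.5pp.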

Confidence intervals

Around every point forecast we publish an 80% interval, scaled from the metro’s own quarterly volatility. We winsorize the recent return history to keep one outlier quarter (2008 Q4, 2020 Q2) from blowing up the band, and we blend each metro’s σ with the national mean σ — heavier shrinkage for metros with shorter histories. The interval scale factor (3.1) is calibrated empirically from the backtest rather than derived from a normality assumption.
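A minimal sketch of that band construction, assuming roughly 5th/95th-percentile winsorization and a linear history-length shrinkage weight (neither detail is specified above; both are illustrative):

```python
import statistics

def interval(point, quarterly_returns, national_sigma, scale=3.1):
    """80% band: winsorize quarterly returns, blend metro sigma with the
    national mean sigma (more shrinkage for short histories), scale by the
    empirically calibrated factor. Percentiles and weight are assumptions."""
    rets = sorted(quarterly_returns)
    k = len(rets) // 20                          # ~5% of quarters each tail
    lo, hi = rets[k], rets[-1 - k]
    wins = [min(max(r, lo), hi) for r in rets]   # clip the outlier quarters
    sigma = statistics.pstdev(wins)
    w = min(len(wins) / 40, 1.0)                 # full weight after 10 years
    blended = w * sigma + (1 - w) * national_sigma
    return (point - scale * blended, point + scale * blended)
```

With a short history, w is small and the band leans on the national σ; a metro with 10+ years of quarters gets its own volatility almost untouched.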

Latest accuracy

The first backtest lands as soon as the model has a year of forecast history. Until then the live numbers above describe the production spec, not realized accuracy.

Full breakdown + wins-and-losses by metro.

When did each input last refresh?

The forecast is only as fresh as its inputs. Below is the last successful pull for every upstream source. Stale sources are flagged on /status with a degraded badge.

  • bea (Bureau of Economic Analysis) · annual real GDP by MSA · no successful pull yet
  • bls (Bureau of Labor Statistics) · national + per-MSA jobs / unemployment · no successful pull yet
  • census (US Census Bureau) · ACS + per-MSA building permits · no successful pull yet
  • fhfa (Federal Housing Finance Agency) · quarterly metro HPI · no successful pull yet
  • fred (Federal Reserve Economic Data) · rates + macro panel · no successful pull yet
  • hud (HUD) · fair market rents · no successful pull yet
  • redfin (Redfin) · no successful pull yet
  • zillow (Zillow Research) · ZHVI / ZORI / for-sale inventory · no successful pull yet
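The degraded badge on /status could be driven by a check like the following. The source keys mirror the list above; the cadence thresholds are assumptions, not the production values:

```python
from datetime import datetime, timedelta, timezone

# Illustrative staleness limits in days, loosely tied to each source's
# publishing cadence (quarterly HPI, weekly rates, monthly panels, annual GDP)
CADENCE_DAYS = {"fhfa": 120, "fred": 14, "census": 45,
                "bls": 45, "bea": 400, "hud": 400,
                "redfin": 45, "zillow": 45}

def status(source, last_pull, now=None):
    """Return the freshness label for one upstream source."""
    if last_pull is None:
        return "no successful pull yet"
    now = now or datetime.now(timezone.utc)
    limit = timedelta(days=CADENCE_DAYS[source])
    return "ok" if now - last_pull <= limit else "degraded"

now = datetime.now(timezone.utc)
print(status("fred", None))                          # never pulled
print(status("fred", now - timedelta(days=3), now))  # within cadence
print(status("fred", now - timedelta(days=30), now)) # past cadence
```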

What this model deliberately doesn’t do

  • No price-target predictions for individual addresses or ZIPs. Every output is metro-level (CBSA).
  • No regime-switching (different weights during cuts vs hikes). Coefficients are constant; rate effect comes through the drift term.
  • No machine-learning model — this is a closed-form formula a human can audit line by line. The XGBoost upgrade is in progress and will replace this baseline once trained.
  • No use of the proprietary feeds we can’t link to (Moody’s, CoreLogic, ATTOM, MLS). Every input traces to a public source.
Read the full whitepaper →

Model spec, training protocol, evaluation methodology, every metric we track.

How accurate have we been? →

Wins, losses, every test we publish. Updated monthly.