How we build defensible forecasts.
Full transparency at every step, from ingestion to publication. We believe forecasts are only as good as the trust you can place in them, and trust is earned by showing our work.
Data ingestion
Seven federal and open sources pulled on their native cadence (daily, weekly, monthly). All raw responses versioned to S3 before any transformation.
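The "version raw responses before any transformation" step could be sketched as an immutable, content-addressed key scheme, so repeated pulls never overwrite history. The bucket layout, source name, and function are illustrative assumptions, not the production code:

```python
import hashlib
from datetime import datetime, timezone

def raw_object_key(source: str, fetched_at: datetime, payload: bytes) -> str:
    """Build an immutable S3-style key for one raw API response.
    The key embeds the fetch timestamp and a content hash, so a
    re-run of the same pull writes a new object instead of mutating
    the old one (hypothetical layout, for illustration)."""
    digest = hashlib.sha256(payload).hexdigest()[:12]
    stamp = fetched_at.strftime("%Y/%m/%d/%H%M%S")
    return f"raw/{source}/{stamp}-{digest}.json"

# Example: one FHFA pull, pinned to its fetch time and content.
key = raw_object_key(
    "fhfa_hpi",
    datetime(2024, 3, 1, 6, 0, tzinfo=timezone.utc),
    b'{"hpi": 412.3}',
)
```

Because the key is derived from the payload itself, two pulls that return identical data map to distinguishable objects only by timestamp, which keeps the audit trail complete.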
Feature engineering
Raw series are cleaned, normalized to MSA boundaries, seasonally adjusted where appropriate, and combined into ~120 features per market per month.
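The seasonal-adjustment step can be illustrated with a deliberately simple stand-in: subtract each calendar month's average deviation from the series mean. Production adjustment (X-13- or STL-style) is more involved; this sketch only shows the shape of the transform:

```python
from statistics import mean

def seasonally_adjust(values: list[float], months: list[int]) -> list[float]:
    """Remove a fixed monthly effect: estimate each calendar month's
    mean deviation from the overall mean, then subtract it.
    A crude stand-in for full seasonal adjustment, for illustration."""
    overall = mean(values)
    by_month: dict[int, list[float]] = {}
    for v, m in zip(values, months):
        by_month.setdefault(m, []).append(v)
    effect = {m: mean(vs) - overall for m, vs in by_month.items()}
    return [v - effect[m] for v, m in zip(values, months)]
```

Applied to a series that alternates between a low January and a high February, the adjusted series is flat, which is exactly what removing a pure seasonal pattern should produce.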
Model fit
An ensemble of gradient-boosted trees and a Bayesian structural time-series model. Fit per-MSA with global priors, cross-validated on rolling windows.
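Cross-validation on rolling windows can be sketched as a generator of train/test index splits: train on a fixed-length window, score on the points immediately after it, then slide the origin forward. The function and its parameters are illustrative, not the actual training harness:

```python
def rolling_windows(n: int, train_len: int, horizon: int, step: int = 1):
    """Yield (train_idx, test_idx) pairs for rolling-origin
    cross-validation over a series of length n: each split trains on
    `train_len` consecutive points and tests on the next `horizon`
    points, advancing the origin by `step` each time."""
    start = 0
    while start + train_len + horizon <= n:
        train = list(range(start, start + train_len))
        test = list(range(start + train_len, start + train_len + horizon))
        yield train, test
        start += step
```

Unlike shuffled k-fold, every test point here lies strictly after its training window, which is what makes the validation honest for time series.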
Forecast + intervals
12-month-ahead point forecasts with 80% and 95% prediction intervals. Forecasts are versioned — every historical forecast is recoverable for audit.
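One standard way to attach 80% and 95% intervals to a point forecast is to widen it by empirical quantiles of historical out-of-sample residuals. This is a minimal sketch of that idea, not necessarily how the ensemble's intervals are derived:

```python
def prediction_interval(point: float, residuals: list[float],
                        coverage: float) -> tuple[float, float]:
    """Empirical prediction interval: shift the point forecast by the
    lower and upper residual quantiles that bracket the requested
    coverage (e.g. 0.80 or 0.95). Illustrative sketch only."""
    rs = sorted(residuals)
    alpha = (1 - coverage) / 2
    lo = rs[int(round(alpha * (len(rs) - 1)))]
    hi = rs[int(round((1 - alpha) * (len(rs) - 1)))]
    return point + lo, point + hi
```

With symmetric residuals the interval is symmetric around the point forecast; skewed residual histories produce asymmetric intervals, which is often the honest answer for housing series.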
Validation
Out-of-sample MAE tracked continuously against BLS / Case-Shiller / FHFA releases. Degradation beyond a threshold triggers human review.
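The degradation trigger reduces to a small rule: compare the latest out-of-sample MAE to an agreed baseline and flag when it drifts too far. The threshold value and function name below are assumptions for illustration:

```python
def needs_review(rolling_mae: list[float], baseline_mae: float,
                 tolerance: float = 1.25) -> bool:
    """Flag a model for human review when its most recent
    out-of-sample MAE exceeds the agreed baseline by more than
    `tolerance` (1.25 = 25% worse). Hypothetical threshold."""
    return bool(rolling_mae) and rolling_mae[-1] > tolerance * baseline_mae
```

Keeping the rule this simple makes the trigger itself auditable: anyone can recompute when and why a review fired.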
Publication
Forecasts published through the product, API, and research notes. Every number cites its underlying feature weights and data lineage.
We publish our forecast errors.
Out-of-sample mean absolute error against the FHFA HPI release, at a 12-month horizon, refreshed monthly.

Read the full technical specification
Features, training protocol, full accuracy metrics (MAE, RMSE, WMAPE, MdAPE, skill score), and reproducibility instructions.
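The accuracy metrics named above (MAE, RMSE, WMAPE, MdAPE, skill score) can be computed from three aligned series: actuals, forecasts, and a naive benchmark. This is a sketch of standard definitions, not the spec's exact formulas:

```python
from statistics import mean, median

def accuracy_metrics(actual: list[float], forecast: list[float],
                     naive: list[float]) -> dict[str, float]:
    """Standard forecast-accuracy suite. Skill is 1 - MAE / naive MAE,
    so positive values mean the model beats the naive benchmark.
    Illustrative definitions; the published spec is authoritative."""
    errs = [f - a for a, f in zip(actual, forecast)]
    abs_errs = [abs(e) for e in errs]
    mae = mean(abs_errs)
    rmse = mean(e * e for e in errs) ** 0.5
    wmape = sum(abs_errs) / sum(abs(a) for a in actual)
    mdape = median(abs(e / a) for e, a in zip(errs, actual))
    naive_mae = mean(abs(n - a) for a, n in zip(actual, naive))
    skill = 1 - mae / naive_mae
    return {"mae": mae, "rmse": rmse, "wmape": wmape,
            "mdape": mdape, "skill": skill}
```

Reporting a skill score alongside the raw error metrics is what makes the numbers comparable across markets: a 2% MAE means little until you know the naive benchmark's error in the same market.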