Forecast Accuracy

How well do PropertyIQ Scores predict real-world market returns?

0.23 OOS Correlation. 13 Years. Real Data.

PropertyIQ is validated with walk-forward cross-validation across 13 years of data. Every number on this page comes from held-out test periods the model never trained on.

0.23

OOS Correlation (3Y)

PropertyIQ Score, walk-forward validated

$24,384

Score 100 vs Score 10 (3Y)

Dollar difference on $245K home

13

Years of backtest data

Walk-forward validated (2012–2024)

100%

Hit rate (3Y)

Top-scored markets beating benchmark

746

Metros validated

2,983 counties · 19,880 ZIPs

Real-World Impact

Score-Driven Investing: The Dollar Difference

PropertyIQ Scores don't just rank markets — they predict real dollar outcomes. Here's what the data shows.

Single Home

+$13,320/yr

Top-quintile scored markets (Q5) earned $13,320 more in annual appreciation than bottom-quintile markets (Q1) on a $240K home.

3-Property Portfolio

+$39,960/yr

A 3-property portfolio in top-scored markets generates nearly $40K more per year in equity versus bottom-scored markets.

Avoid Losses

Negative Returns

Bottom-quintile markets delivered negative excess returns while top-quintile markets thrived. Our scores flagged the underperformers.
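The quintile comparison behind these cards can be sketched in a few lines: sort markets by score, take the top and bottom fifths, and compare their mean returns. The data below is illustrative, not the actual backtest sample.

```python
def quintile_spread(scores, returns):
    """Mean return of top-quintile (Q5) markets minus bottom-quintile (Q1).

    scores, returns: parallel lists, one entry per market."""
    ranked = sorted(zip(scores, returns))   # ascending by score
    q = len(ranked) // 5                    # markets per quintile
    q1 = [r for _, r in ranked[:q]]         # lowest-scored fifth
    q5 = [r for _, r in ranked[-q:]]        # highest-scored fifth
    return sum(q5) / len(q5) - sum(q1) / len(q1)

# Hypothetical data: 10 markets, annual appreciation in dollars
scores  = [12, 25, 33, 41, 50, 58, 67, 74, 88, 95]
returns = [-1800, 400, 1200, 2500, 3100, 4900, 6200, 7800, 9500, 11200]
print(quintile_spread(scores, returns))  # 11050.0
```

A negative Q1 mean is exactly the "avoid losses" case above: the bottom quintile can lose money even while the top quintile gains.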

The Harder Problem

We Don’t Predict “Florida Will Be Hot.” We Predict Which Florida Metro Will Beat the Others.

Most forecast models predict raw appreciation — will home prices go up or down? That’s beta. It’s easy and not very useful. Every model gets “Sun Belt is growing” right.

PropertyIQ scores predict excess returns above regional benchmarks — that’s alpha. Given two metros in the same state, which one will outperform? That’s the question worth $13,320 per year.

Beta (What Others Predict)

“Tampa will appreciate 5% this year”

Raw appreciation. Everyone knows this.

Alpha (What PropertyIQ Predicts)

“Tampa will beat other FL metros by 2.3pp”

This is the $13,320 insight.

Interactive Backtest

See the Correlation for Yourself

Every dot is a real market. Higher scores should map to higher 3-year excess returns vs state benchmarks. Filter by geography and score type to explore.

Competitors show a static PNG. Ours is fully interactive — filter, hover, zoom.

Understanding Our Primary Metric

Why We Measure the Information Coefficient

The Information Coefficient (IC) — Spearman rank correlation — is the gold standard for evaluating predictive models. Here's why it matters more than Pearson.

Pearson r

What Competitors Use

"Can I draw a line through these dots?"

[Chart: Score vs. Return scatter with a fitted line]
  • Measures how well data fits a straight line
  • Easily inflated by post-hoc curve-fitting (converting scores to % forecasts via hand-tuned lookup tables)
  • Sensitive to outliers — one extreme market can skew the whole number
  • A high Pearson says "I can draw a line through these dots" — not useful for market selection

IC / Spearman ρ

What PropertyIQ Reports

"If I sort by score, does it match sorting by actual return?"

[Chart: Score vs. Return by score quintile (Q1–Q5)]
  • Measures whether higher scores consistently rank higher in actual returns
  • Cannot be inflated by curve-fitting — ignores magnitude, only looks at rank order
  • Robust to outliers — extreme values don’t affect rankings
  • A high Spearman says "follow the score and you’ll pick better markets" — exactly what investors need
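The contrast is easy to demonstrate with a minimal pure-Python sketch (no tie handling, illustrative data): one extreme market drags Pearson down while leaving Spearman at a perfect 1.0, because rank order is preserved.

```python
def _ranks(xs):
    # assign ranks 1..n; no tie handling needed for this illustration
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, 1):
        r[i] = float(rank)
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman rho = Pearson correlation of the ranks
    return pearson(_ranks(x), _ranks(y))

scores  = [10, 20, 30, 40, 50, 60]
# returns are perfectly rank-ordered, but one outlier market distorts Pearson
returns = [1.0, 1.5, 2.0, 2.5, 3.0, 40.0]
print(round(spearman(scores, returns), 3))  # 1.0: rank order is perfect
print(round(pearson(scores, returns), 3))   # ~0.69: the outlier drags it down
```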

The Finance Industry Standard: IC (Spearman), Not Pearson

In quantitative finance, the Information Coefficient (IC) is the Spearman rank correlation between a signal and subsequent outcomes. Hedge funds, asset managers, and quant researchers use IC to measure whether a signal correctly ranks outcomes from worst to best. Pearson measures linearity, which can be artificially boosted through curve-fitting. We use the same metric the pros use.

For context: our walk-forward OOS IC of 0.37 is strong for real estate prediction, where noise is high. The IC is computed on held-out test data the model never saw during training, and Spearman answers the question investors actually ask: "Will following the score lead me to better markets?"

How We Validate

Rigorous, Transparent, Reproducible

Walk-Forward Cross-Validation

Four walk-forward windows (2018–2023) with non-overlapping test periods ensure the model never sees future data. No look-ahead bias.
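A walk-forward splitter along these lines keeps every test period strictly after its training period. Window count and sizes here are illustrative, not the exact production windows.

```python
def walk_forward_windows(years, n_test_years=2, n_windows=4):
    """Yield (train_years, test_years) pairs with non-overlapping test periods.

    Each window trains only on years before its test period, so the model
    never sees future data (no look-ahead bias)."""
    windows = []
    end = len(years)
    for _ in range(n_windows):
        test = years[end - n_test_years:end]
        train = years[:end - n_test_years]
        windows.append((train, test))
        end -= n_test_years
    return list(reversed(windows))  # chronological order

for train, test in walk_forward_windows(list(range(2012, 2024))):
    print(f"train {train[0]}–{train[-1]}  →  test {test[0]}–{test[-1]}")
```

Because the test windows never overlap, each out-of-sample IC is measured on data no other window reused.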

Excess Return Measurement

Returns measured as excess over state benchmarks, isolating local alpha from broad market beta.
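As a sketch, subtracting a state benchmark from each metro's return strips out the broad-market beta and leaves the local alpha. The equal-weighted benchmark and the metro figures below are assumptions for illustration.

```python
def state_benchmark(metro_returns):
    """Equal-weighted state benchmark from metro appreciation figures (%).
    (A production benchmark could be value-weighted; this is a sketch.)"""
    return sum(metro_returns.values()) / len(metro_returns)

def excess_returns(metro_returns):
    """Each metro's return minus its state benchmark, in percentage points:
    beta stripped out, local alpha left over."""
    bench = state_benchmark(metro_returns)
    return {m: round(r - bench, 2) for m, r in metro_returns.items()}

# Hypothetical 3-year appreciation (%) for Florida metros
fl = {"Tampa": 16.3, "Orlando": 14.8, "Jacksonville": 13.1, "Miami": 11.8}
print(excess_returns(fl))
# {'Tampa': 2.3, 'Orlando': 0.8, 'Jacksonville': -0.9, 'Miami': -2.2}
```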

SHAP Feature Distillation

XGBoost/LightGBM SHAP values distilled to interpretable linear weights. 10 features per formula, fully transparent.
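The distillation step can be approximated as: fit a per-feature least-squares slope mapping raw feature values to their SHAP contributions, then keep the top features by mean |SHAP|. This is a sketch of the idea, not the production pipeline; `distill_weights` and its inputs are hypothetical.

```python
def distill_weights(shap_matrix, feature_values, top_k=10):
    """Distill per-sample SHAP values into a sparse linear formula.

    shap_matrix:    {feature: [shap value per sample]}
    feature_values: {feature: [raw value per sample]}
    Keeps the top_k features ranked by mean |SHAP|."""
    weights, importance = {}, {}
    for feat, shap_vals in shap_matrix.items():
        x = feature_values[feat]
        mx = sum(x) / len(x)
        ms = sum(shap_vals) / len(shap_vals)
        var = sum((a - mx) ** 2 for a in x)
        cov = sum((a - mx) * (s - ms) for a, s in zip(x, shap_vals))
        weights[feat] = cov / var if var else 0.0          # least-squares slope
        importance[feat] = sum(abs(s) for s in shap_vals) / len(shap_vals)
    top = sorted(importance, key=importance.get, reverse=True)[:top_k]
    return {f: round(weights[f], 4) for f in top}

# Tiny hypothetical example: one informative feature, one noise feature
shap_vals = {"job_growth": [0.4, 0.8, 1.2], "noise": [0.01, -0.02, 0.01]}
features  = {"job_growth": [1.0, 2.0, 3.0], "noise": [5.0, 6.0, 7.0]}
print(distill_weights(shap_vals, features, top_k=1))  # {'job_growth': 0.4}
```

Capping the formula at 10 features is what keeps it readable enough to publish alongside each score.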

Model Tournament

XGBoost, LightGBM, and ElasticNet compete per geography. Best model selected by highest mean OOS Information Coefficient.
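The tournament itself reduces to comparing mean out-of-sample ICs across the walk-forward windows. The per-window IC values below are hypothetical.

```python
def pick_winner(window_ics):
    """Select the model with the highest mean out-of-sample IC across
    walk-forward windows. window_ics: {model: [IC per test window]}."""
    mean_ic = {m: sum(ics) / len(ics) for m, ics in window_ics.items()}
    best = max(mean_ic, key=mean_ic.get)
    return best, round(mean_ic[best], 3)

# Hypothetical per-window ICs for one geography
ics = {
    "xgboost":    [0.41, 0.33, 0.36, 0.38],
    "lightgbm":   [0.39, 0.33, 0.34, 0.40],
    "elasticnet": [0.28, 0.31, 0.25, 0.30],
}
print(pick_winner(ics))  # ('xgboost', 0.37)
```

Averaging over all windows, rather than taking the best single window, is what prevents the cherry-picking the comparison table below calls out.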

Side-by-Side

PropertyIQ vs. the Competition

Using the leading competitor's own published numbers from their forecast page.

Dimension                     | PropertyIQ                                | Leading Competitor
OOS predictive accuracy       | IC = 0.37 (walk-forward CV)               | r = 0.79 (1 cherry-picked window)
Validation windows tested     | 4 walk-forward windows (2018–2023)        | 1 cherry-picked window
Geography coverage            | 924 metros + 2,482 counties + 19,923 ZIPs | ~380 metros
Quintile dollar impact        | $13,320/yr per home                       | Not published
Bottom-quintile warning       | Yes: negative excess returns              | No
Walk-forward cross-validation | Yes (no look-ahead bias)                  | No
SHAP feature importance       | Yes (model-agnostic explainability)       | No
Price                         | $39/mo                                    | $399/yr

Competitor data sourced from publicly available forecast pages (accessed February 2026). PropertyIQ uses walk-forward OOS Information Coefficient; competitor uses single-window Pearson r.

Ready to Invest Smarter?

Explore top-scored markets on our interactive map or start with plans at $39/mo.