V2.475 - Theoretical Error Budget — How Precisely Does the Framework Predict Ω_Λ?
Status: COMPLETE — Theory-limited at 1.2%, dominated by graviton mode count
The Question
The framework claims “zero free parameters.” But every prediction has theoretical uncertainty. How precisely does the framework actually predict Ω_Λ — and what dominates the error?
Central Prediction
Ω_Λ = 0.6877 ± 0.0082 (theory) vs Planck 0.6847 ± 0.0073 (obs)
Tension: 0.28σ — fully consistent.
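As a consistency check, the quoted tension can be reproduced by combining the theory and observational error bars in quadrature (an assumption about the method; the result lands at ~0.27σ, matching the quoted 0.28σ up to rounding of the inputs):

```python
# Sketch: combine theory and Planck error bars in quadrature and
# compute the pull of the prediction against the observation.
omega_theory, sigma_theory = 0.6877, 0.0082
omega_planck, sigma_planck = 0.6847, 0.0073

sigma_comb = (sigma_theory**2 + sigma_planck**2) ** 0.5
tension = abs(omega_theory - omega_planck) / sigma_comb
print(f"tension = {tension:.2f} sigma")
```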
Error Budget
| Source | ΔΩ_Λ | Fractional | Variance share | Reducible? |
|---|---|---|---|---|
| Graviton mode count | ±0.0075 | 1.1% | 82.3% | Yes |
| Non-perturbative α (QCD/EW) | ±0.0034 | 0.5% | 17.5% | Yes |
| α_s value (analytical vs lattice) | ±0.0003 | 0.05% | 0.2% | Yes |
| Higher-order δ (Adler-Bardeen) | ±0.00007 | 0.01% | 0.0% | No (exact) |
| Massive field decoupling | ~10⁻⁶⁴ | — | 0.0% | No |
| Curvature corrections | ~10⁻¹²¹ | — | 0.0% | No |
Total: ±0.0082 (1.2%), prediction band [0.680, 0.696].
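The headline ±0.0082 follows from adding the table's sources in quadrature. A minimal sketch (the variance shares come out within a few tenths of a percent of the table's values, the small differences reflecting rounding of the inputs):

```python
# Quadrature combination of the independent error sources from the
# budget table; the two negligible sources (~1e-64, ~1e-121) are omitted.
sources = {
    "graviton mode count": 0.0075,
    "non-perturbative alpha": 0.0034,
    "alpha_s value": 0.0003,
    "higher-order delta": 0.00007,
}
total_var = sum(s**2 for s in sources.values())
total = total_var**0.5
print(f"total = +/-{total:.4f}")  # +/-0.0082
for name, s in sources.items():
    print(f"{name}: {100 * s**2 / total_var:.1f}% of variance")
```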
The hierarchy is extreme
The graviton mode count alone accounts for 82% of the total variance. Non-perturbative α corrections add 17.5%. Everything else is negligible — the trace anomaly δ is exact by the Adler-Bardeen theorem, mass corrections are suppressed by (H₀/m)² ~ 10⁻⁶⁴, and curvature corrections by (H₀/M_Pl)² ~ 10⁻¹²¹.
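An order-of-magnitude check of the suppression factors, assuming H₀ ≈ 1.45×10⁻³³ eV (ħH₀ for H₀ ≈ 67.4 km/s/Mpc) and the full Planck mass 1.22×10²⁸ eV; the curvature factor lands at 10⁻¹²²–10⁻¹²¹ depending on which Planck mass convention is used, and the quoted 10⁻⁶⁴ mass suppression implies a hypothetical lightest-field scale of ~0.1 eV (not stated in the source):

```python
import math

H0_eV = 1.45e-33    # hbar * H0 in natural units (assumed input)
M_Pl_eV = 1.22e28   # full Planck mass, 1.22e19 GeV (assumed convention)

curv = (H0_eV / M_Pl_eV) ** 2          # curvature suppression (H0/M_Pl)^2
m_eV = H0_eV / 1e-32                   # mass scale implied by (H0/m)^2 ~ 1e-64
print(f"(H0/M_Pl)^2 ~ 10^{math.log10(curv):.0f}")
print(f"implied m ~ {m_eV:.2f} eV")
```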
Discrete Graviton Mode Test
| n_grav | Ω_Λ | Pull (σ) | Viable? |
|---|---|---|---|
| 0 (no graviton) | 0.665 | -2.8 | No |
| 2 (physical TT) | 0.734 | +6.7 | No |
| 5 (traceless sym) | 0.716 | +4.2 | No |
| 8 | 0.699 | +1.9 | Yes |
| 10 (symmetric h_μν) | 0.688 | +0.4 | Yes |
| 12 | 0.677 | -1.0 | Yes |
| 15 | 0.662 | -3.1 | No |
Only n_grav ∈ {8, 10, 12} are viable. The theory prediction n=10 (symmetric tensor components D(D+1)/2 for D=4) sits at the sweet spot.
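The table can be reproduced to ~0.1σ under the assumption (not stated explicitly in the source) that each pull is (Ω_Λ(n) − Ω_Planck)/σ_Planck with σ_Planck = 0.0073, and that "viable" means a hypothetical 2σ cut:

```python
# Discrete graviton-mode test: pulls against Planck's central value,
# using the Omega_Lambda(n) values from the table above as data.
omega_planck, sigma = 0.6847, 0.0073
modes = {0: 0.665, 2: 0.734, 5: 0.716, 8: 0.699,
         10: 0.688, 12: 0.677, 15: 0.662}

pulls = {n: (om - omega_planck) / sigma for n, om in modes.items()}
viable = sorted(n for n, p in pulls.items() if abs(p) < 2.0)
for n in modes:
    print(f"n_grav={n:2d}: pull={pulls[n]:+.1f} sigma")
print("viable:", viable)  # [8, 10, 12]
```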
Theory vs Experiment
| Experiment | Year | σ_Ω (obs) | σ_Ω (theory) | Status |
|---|---|---|---|---|
| Planck 2018 | 2018 | 0.0073 | 0.0082 | Theory > Experiment |
| DESI Y1 | 2024 | 0.0060 | 0.0082 | Theory > Experiment |
| DESI Y3 | 2027 | 0.0040 | 0.0082 | Theory > Experiment |
| DESI Y5 | 2029 | 0.0030 | 0.0082 | Theory > Experiment |
| Euclid + CMB-S4 | 2032 | 0.0015 | 0.0082 | Theory > Experiment |
The framework is ALREADY theory-limited. Planck’s observational precision (0.0073) exceeds the framework’s theoretical precision (0.0082). Improving experiments won’t help until the graviton mode counting is resolved.
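The comparison above reduces to a single inequality per survey: with the theory error frozen at 0.0082 until n_grav is resolved, every listed experiment is or will be below it. A minimal sketch:

```python
# Theory vs experiment: the theory error bar is fixed at 0.0082, so a
# survey is "theory-limited" whenever its observational sigma is smaller.
sigma_theory = 0.0082
surveys = [("Planck 2018", 0.0073), ("DESI Y1", 0.0060),
           ("DESI Y3", 0.0040), ("DESI Y5", 0.0030),
           ("Euclid + CMB-S4", 0.0015)]

for name, sigma_obs in surveys:
    status = "theory-limited" if sigma_obs < sigma_theory else "data-limited"
    print(f"{name}: sigma_obs={sigma_obs:.4f} -> {status}")
```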
What This Means
The honest picture
- "Zero free parameters" is correct but incomplete. The framework has no adjustable parameters, but its prediction carries ±1.2% theoretical uncertainty, dominated by the graviton mode count.
- The framework is theory-limited, not data-limited. Even with current Planck data, the observational error bar (0.0073) is smaller than the theoretical one (0.0082). The bottleneck is resolving n_grav, not collecting more data.
- Three numbers matter; three don't. The error budget has a sharp hierarchy:
  - n_grav (82% of variance) — must be resolved
  - α non-perturbative corrections (17.5%) — needs QCD lattice computation
  - α_s value (0.2%) — essentially resolved
  - δ, mass, curvature (<0.01% combined) — exact or negligible
- The prediction band [0.680, 0.696] comfortably contains Planck's 0.6847. The framework is consistent with all current data.
What would sharpen the prediction
Resolving the graviton mode count from first principles would reduce the total uncertainty from 1.2% to 0.5% (dominated by non-perturbative α). A proper lattice QFT computation of α with interaction corrections would bring it to ~0.05%. At that point, the framework would predict Ω_Λ to ±0.0003 — testable by Euclid + CMB-S4.
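The quoted reductions follow from removing resolved terms from the quadrature sum, one at a time (a consistency check on the numbers in the paragraph, not a new result):

```python
# Back out the sharpened uncertainties: drop the graviton term, then the
# non-perturbative alpha term, from the quadrature total.
omega = 0.6877
graviton, alpha_np, alpha_s, delta = 0.0075, 0.0034, 0.0003, 0.00007

after_ngrav = (alpha_np**2 + alpha_s**2 + delta**2) ** 0.5
after_alpha = (alpha_s**2 + delta**2) ** 0.5
print(f"n_grav resolved:      {100 * after_ngrav / omega:.2f}%")  # ~0.50%
print(f"+ alpha from lattice: {100 * after_alpha / omega:.3f}%")  # ~0.045%
print(f"final band:           +/-{after_alpha:.4f}")              # ~0.0003
```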
What would falsify the framework
If future measurements pin Ω_Λ outside [0.680, 0.696] at 3σ, the framework is falsified regardless of n_grav. If the graviton mode count is independently determined (e.g., from black hole entropy or gravitational wave observations) and it’s not 8-12, the framework fails.
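The first criterion can be written as a one-line predicate, reading "pinned outside the band at 3σ" as the measurement's 3σ interval having no overlap with [0.680, 0.696] (the second input below is a hypothetical Euclid-era low value, chosen for illustration):

```python
# Falsification predicate: does a measurement's 3-sigma interval lie
# entirely outside the prediction band [0.680, 0.696]?
BAND_LO, BAND_HI = 0.680, 0.696

def falsifies(omega_obs: float, sigma_obs: float) -> bool:
    """True if obs +/- 3 sigma has no overlap with the prediction band."""
    return (omega_obs + 3 * sigma_obs < BAND_LO
            or omega_obs - 3 * sigma_obs > BAND_HI)

print(falsifies(0.6847, 0.0073))  # Planck today: False
print(falsifies(0.6750, 0.0015))  # hypothetical future low value: True
```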
Files
- src/error_budget.py: All 6 error sources, combined budget, discrete n_grav test, precision comparison, falsification forecast
- tests/test_error_budget.py: 32 tests, all passing
- run_experiment.py: Full analysis with 8 phases
- results.json: Machine-readable results