V2.414 - Ratio Universality — Is R = |δ|/(6α) Scheme-Independent?


Motivation

V2.410 showed α_s varies by 74% across 4 lattice discretization schemes at accessible sizes (n ≤ 40). This threatens the framework’s Λ prediction, which depends on R = |δ|/(6α·N_eff).

Key insight tested: If both α and δ carry the same scheme-dependent UV normalization, their ratio R might be universal even when neither is individually. This would save the Λ prediction from discretization ambiguity.

Method

For each of the 4 schemes (Srednicki, Standard FD, Numerov, Volume-weighted):

  1. Compute d²S(n) = S(n+1) - 2S(n) + S(n-1) with scaling l_max = C·n
  2. Fit d²S = A + δ·ln(1-1/n²) + β/(n²(n²-1)) to extract both α = A/(8π) and δ
  3. Compute R = |δ|/(6α)
  4. Compare R across schemes
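Steps 1-3 reduce to a linear least-squares problem, since the model d²S = A + δ·ln(1-1/n²) + β/(n²(n²-1)) is linear in (A, δ, β). A minimal sketch of the fitting step — the parameter values and function names here are illustrative, not the framework's actual implementation:

```python
import numpy as np

def fit_alpha_delta(ns, d2S):
    """Fit d2S = A + delta*ln(1 - 1/n^2) + beta/(n^2 (n^2 - 1)),
    then convert the constant term to alpha = A / (8*pi)."""
    ns = np.asarray(ns, dtype=float)
    X = np.column_stack([
        np.ones_like(ns),               # constant term A
        np.log(1.0 - 1.0 / ns**2),      # basis function for delta
        1.0 / (ns**2 * (ns**2 - 1.0)),  # basis function for beta
    ])
    (A, delta, beta), *_ = np.linalg.lstsq(X, np.asarray(d2S, float), rcond=None)
    return A / (8.0 * np.pi), delta, beta

def ratio_R(alpha, delta):
    """Step 3: the ratio R = |delta| / (6*alpha)."""
    return abs(delta) / (6.0 * alpha)

# Sanity check on noiseless synthetic data (illustrative parameter values):
ns = np.arange(12, 41, dtype=float)
alpha_true, delta_true, beta_true = 0.0235, -0.0111, 0.5
d2S = (8 * np.pi * alpha_true
       + delta_true * np.log(1 - 1 / ns**2)
       + beta_true / (ns**2 * (ns**2 - 1)))
alpha, delta, beta = fit_alpha_delta(ns, d2S)
print(ratio_R(alpha, delta))  # recovers |delta_true| / (6 * alpha_true)
```

On noiseless data the fit recovers (α, δ) essentially exactly; the difficulty reported below is entirely about the δ signal's size relative to numerical noise, not about the fit itself.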

Key Results

The δ extraction fails at accessible lattice sizes

The δ correction to d²S is vanishingly small:

| Quantity | Typical value |
|---|---|
| d²S = 8πα | ~0.41 (C=2) |
| δ·ln(1-1/n²) at n=20 | ~7 × 10⁻⁵ |
| Signal-to-background | ~0.017% |

The fitted δ values are unreliable:

| Scheme | δ (fitted) | δ (target) | Error |
|---|---|---|---|
| Srednicki | +0.0057 | -0.0111 | wrong sign |
| Standard FD | -0.0001 | -0.0111 | 99% low |
| Numerov | +0.0003 | -0.0111 | wrong sign |
| Volume-weighted | +0.083 | -0.0111 | 7.5× too large |

The δ signal is buried in the numerical noise of d²S. This is NOT a scheme-dependent problem — it is a fundamental difficulty of extracting an O(1/n²) correction from a near-constant quantity.

R is NOT more universal than α

Because δ extraction fails, R = |δ|/(6α) is meaningless at these sizes:

| C | α spread | R spread | Compression |
|---|---|---|---|
| 2.0 | 70.5% | 386% | 0.2× (R worse) |
| 3.0 | 77.6% | 389% | 0.2× |
| 4.0 | 81.5% | 351% | 0.2× |
| 5.0 | 83.0% | 354% | 0.2× |

R spread is 4-5× LARGER than α spread — the δ extraction noise dominates.
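For reference, "spread" here is presumably the cross-scheme range relative to the mean — the report does not state the definition explicitly, so the helper below is an assumption:

```python
import numpy as np

def spread_pct(values):
    """Assumed cross-scheme spread metric: (max - min) / |mean|, in percent."""
    v = np.asarray(values, dtype=float)
    return (v.max() - v.min()) / abs(v.mean()) * 100.0

# Illustrative: four hypothetical per-scheme estimates of some quantity
print(spread_pct([0.9, 1.0, 1.1, 1.2]))  # (1.2 - 0.9) / 1.05 * 100 ≈ 28.6
```

Under this definition, an α spread of ~70-80% with an R spread of ~350-390% gives the 0.2× "compression" column directly.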

Why V2.184/V2.246 could extract δ but we can’t

The V2.184 double-limit and V2.246 precision-delta experiments succeeded because:

  1. They used the Srednicki scheme exclusively (well-calibrated)
  2. They used much larger lattices (n up to 200, N up to 1000)
  3. They employed Richardson extrapolation and 4-parameter fits with careful numerical controls

At n=12-30 with N=8n, the lattice is too small for δ extraction in ANY scheme.
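The failure mode can be reproduced on synthetic data: add noise at the ~10⁻⁴ level to a d²S series on n = 12-30, and the fitted δ becomes untrustworthy while α stays accurate. A self-contained sketch using the same linear model (β term omitted for simplicity; parameter values and noise level are illustrative):

```python
import numpy as np

def fit(ns, d2S):
    # Linear model: d2S = A + delta * ln(1 - 1/n^2); returns (alpha, delta)
    ns = np.asarray(ns, dtype=float)
    X = np.column_stack([np.ones_like(ns), np.log(1 - 1 / ns**2)])
    (A, delta), *_ = np.linalg.lstsq(X, np.asarray(d2S, float), rcond=None)
    return A / (8 * np.pi), delta

ns = np.arange(12, 31, dtype=float)
alpha_true, delta_true = 0.0235, -0.0111
clean = 8 * np.pi * alpha_true + delta_true * np.log(1 - 1 / ns**2)
noise = np.random.default_rng(0).normal(0.0, 1e-4, ns.size)

alpha_fit, delta_fit = fit(ns, clean + noise)
print(alpha_fit)  # close to alpha_true: the constant term is robust
print(delta_fit)  # unreliable: the ~1e-5 log signal is swamped by 1e-4 noise
```

The constant term (and hence α) survives because it is O(1); the δ term does not, because ln(1-1/n²) varies by only a few × 10⁻³ over this range.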

Conclusions

Negative results (what we learned)

  1. R universality cannot be tested at accessible lattice sizes. The δ signal is too small relative to numerical noise in d²S. This is a fundamental limitation, not a bug.

  2. The ratio R = |δ|/(6α) does NOT rescue scheme-dependence. Even if the true R is scheme-independent, we can’t verify it without first solving the δ extraction problem for each scheme.

  3. The framework’s Λ prediction relies on the Srednicki scheme. All verified values (α_s to 0.011%, δ to 6.7%) come from this single discretization. Cross-scheme validation remains an open problem.

What this means for the framework

The Λ prediction chain is:

Srednicki lattice → α_s = 1/(24√π) → R = |δ|/(6α·N_eff) → Λ/Λ_obs = 1.004

Every link after “Srednicki lattice” is rigorous. The vulnerability is the first link: the lattice-to-continuum correspondence is verified for only one discretization scheme.

Path forward

  1. Analytical proof of α_s = 1/(24√π): The only way to make the framework truly scheme-independent. Would bypass the lattice entirely.

  2. Spectral zeta function approach: The heat kernel coefficient a₁ on a hemisphere gives the area coefficient. If a₁ = 1/(24√π) can be computed analytically, the conjecture becomes a theorem.

  3. Much larger lattices: n ~ 500, C ~ 50 for each scheme would enable reliable δ extraction via Richardson extrapolation. Computationally expensive but feasible.
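The Richardson step in item 3 is standard: for a quantity converging as Q(n) = Q_∞ + c/n^p, evaluations at n and 2n can be combined to cancel the leading error term. A generic sketch, assuming p = 2 convergence (the constants below are illustrative):

```python
def richardson(Q_n, Q_2n, p=2):
    """Cancel the leading O(1/n^p) error using values at n and 2n:
    Q_inf ≈ (2**p * Q(2n) - Q(n)) / (2**p - 1)."""
    return (2**p * Q_2n - Q_n) / (2**p - 1)

# Illustrative: Q(n) = 0.59 + 1.3/n^2 converges to 0.59
Q = lambda n: 0.59 + 1.3 / n**2
est = richardson(Q(20), Q(40))
print(est)  # the 1/n^2 term cancels exactly, leaving 0.59
```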

Files

  • src/ratio_extraction.py — d²S computation and (α, δ) fitting for arbitrary schemes
  • run_experiment.py — 4-phase experiment