=== Abstract === Within the entanglement entropy framework for the cosmological constant--where $\Oml = |\delta|/(6\alpha)$ relates dark energy to the trace anomaly and area-law coefficients of all quantum fields--we explore the consequences of treating $\Oml$ as a precision observable sensitive to the total field content of nature. Because the trace anomaly coefficient $\delta$ is a UV invariant, the sensitivity is mass-independent: a particle at the Planck scale shifts $\Oml$ by the same amount as one at the electroweak scale. This property, if the framework is correct, makes $\Oml$ a "particle detector" complementary to colliders. We test 20 beyond-Standard-Model scenarios across seven categories (dark gauge bosons, supersymmetry, extra fermions, dark matter candidates, the neutrino sector, grand unification, extended Higgs sectors). At current Planck precision ($\delta\Oml = 0.0073$), 8 models are excluded at ${>}5\sigma$ and 16 at ${>}2\sigma$. The framework predicts that dark matter is not a standard quantum field--the only scalar candidate (the axion) survives at $1.5\sigma$, while all fermionic and vector dark matter candidates are disfavoured or excluded. Majorana neutrinos are favoured over Dirac at $3.1\sigma$, scaling to $11\sigma$ at Euclid-era precision. All results are conditional on the framework's unproven assumptions ($\Lam_bare = 0$, Jacobson thermodynamic gravity, $\fg = 61/212$); if any of these fails, all constraints dissolve. The most immediate threat is the DESI DR2 hint of $w \neq -1$ at $3$-$4\sigma$, which would falsify the framework if confirmed.
=== Introduction === In companion papers [ref: Paper0,Paper1], we argued that the cosmological constant arises from the logarithmic correction to the entanglement entropy across the de Sitter horizon, yielding the self-consistency condition R \equiv \frac{|\delta_total|}{6 \alpha_total} = \Oml , where $\delta_total$ is the UV-finite trace anomaly coefficient summed over all quantum fields and $\alpha_total$ is the corresponding area-law coefficient. With Standard Model field content and the graviton entanglement fraction $\fg = 61/212$, the prediction is $\Oml = 0.6846 \pm 0.0035$, matching the Planck observation $0.6847 \pm 0.0073$ at $0.01\sigma$ [ref: Paper1]. This paper asks a different question: not whether the framework predicts $\Oml$ correctly, but what it excludes. If equation (eq: eq:R) holds, then any additional particle species beyond the Standard Model shifts $R$ by a calculable amount, and the observed value of $\Oml$ constrains the total field content of nature. The key property that makes this interesting is mass independence. The trace anomaly coefficient $\delta$ is determined by the UV structure of each field--its spin, gauge representation, and number of components--not by its mass. A particle at $10^{16}$ GeV contributes the same $\Delta\delta$ as one at 100 GeV. Colliders can only see particles lighter than their centre-of-mass energy ($\sim 7$ TeV at the LHC); if the framework is correct, $\Oml$ sees everything up to the Planck scale. \noindent Scope and limitations. Every result in this paper is conditional on the entanglement framework being correct. The framework rests on three unproven assumptions: $\Lam_bare = 0$ (the bare cosmological constant vanishes), Jacobson thermodynamic gravity (gravity emerges from horizon entropy), and $\fg = 61/212$ (only entanglement entropy, not edge modes, contributes at the cosmological horizon). If any of these fails, all BSM constraints presented here dissolve.
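The headline agreement can be checked in a few lines. A minimal sketch, using only the numbers quoted above; the tension is computed against the Planck observational error alone, which is how the $0.01\sigma$ figure arises:

```python
# Check of the quoted framework-vs-Planck agreement for Omega_Lambda.
# All numbers are taken from the text above; nothing here is derived
# from the framework itself.
R_PRED = 0.6846                          # framework prediction (theory error 0.0035)
OMEGA_OBS, OMEGA_ERR = 0.6847, 0.0073    # Planck 2018 measurement

# Tension in units of the observational error.
tension = abs(R_PRED - OMEGA_OBS) / OMEGA_ERR
print(f"tension = {tension:.2f} sigma")   # -> tension = 0.01 sigma
```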
We emphasise this throughout to avoid conflating conditional predictions with observational measurements. The most immediate empirical threat is the DESI DR2 measurement $w_0 = -0.752 \pm 0.055$ [ref: DESI2025], a $3$-$4\sigma$ tension with the framework's prediction of $w = -1$ exactly. If confirmed at ${>}5\sigma$, the framework is falsified and this paper's constraints become void. === Why $\Oml$ is mass-independent === -- The trace anomaly as a UV invariant -- The type-A trace anomaly coefficient $\delta = -4a$ for a given field is determined by the Euler density integral on the entangling surface. For a field of mass $m$ in $3{+}1$ dimensions, $\delta$ receives corrections only when $m$ approaches the UV cutoff (the Planck scale in this framework). On the lattice, we have verified [ref: Paper0] that \frac{\delta(m)}{\delta(0)} = 1.000 \pm 0.004 \quad \text{for } m \leq 0.003 \text{ (lattice units)} . Since all Standard Model particles satisfy $m/M_Pl < 10^{-17}$, and BSM particles at any mass below $M_Pl$ are equally deep in the massless regime, $\delta$ is effectively mass-independent for all field-theoretic particles. This is the property that distinguishes $\Oml$ from collider observables: the LHC requires $m < \sqrt{s}/2 \approx 7$ TeV for pair production, while $\Oml$ is sensitive to any mass up to $M_Pl \approx 10^{19}$ GeV--a reach larger by a factor of $10^{15}$. We note that mass independence has been verified on the lattice only for free fields. Interactions introduce perturbative corrections to $\alpha$, bounded at the $0.013\%$ level for Standard Model couplings [ref: Paper0], but the effect on $\delta$ (which is protected by the type-A anomaly non-renormalisation theorem) is expected to vanish. -- Per-field sensitivities -- Adding one field of each type to the Standard Model shifts $R$ by: [Table: Shift in $R$ per additional field, with the SM as baseline ($R_SM = 0.6846$). $\sigma$ is the tension with $\Oml$ per field added, relative to the theory uncertainty $\sigma_R = 0.0035$.]
Field type | $\Delta\delta$ | $\Delta\alpha/\asc$ | $\Delta R$ | $\sigma$ per field
Real scalar | $-1/90$ | 1 | $-0.0051$ | $1.5\sigma$
Weyl fermion | $-11/180$ | 2 | $-0.0078$ | $2.2\sigma$
Gauge vector | $-31/45$ | 2 | $+0.0292$ | $8.3\sigma$
The asymmetry is stark: vectors increase $R$ while scalars and fermions decrease it. The reason is that the anomaly-to-area ratio $|\delta|/\alpha$ for a vector, $(31/45)/(2\asc) = 14.65$, exceeds the total-field average $|\delta_total|/\alpha_total = 6R = 4.11$, while for scalars and fermions it falls below it. One additional gauge vector shifts $R$ by $+4.3\%$, enough to be excluded at $8.3\sigma$ with current precision. -- The integer constraint -- Particles come in integer units. The maximum number of additional fields compatible with $|R - \Oml| < 2\sigma_R$ (where $\sigma_R = 0.0035$) is: [Table: Maximum additional fields at $2\sigma$ and $3\sigma$, with $\fg = 61/212$ fixed.]
Field type | Max at $2\sigma$ | Max at $3\sigma$
Real scalars | 1 | 2
Weyl fermions | 0 | 1
Gauge vectors | 0 | 0
The vector budget is zero at every significance level. This is the single most constraining feature of the framework: any hidden gauge sector, no matter how weakly coupled or massive, is excluded if it contains even one gauge boson. === Twenty BSM models tested === -- Methodology -- For each BSM model, we compute the additional field content $(\Delta n_s, \Delta n_w, \Delta n_v)$, the resulting shift $\Delta R$, and the tension $\sigma = |R_BSM - \Oml| / \delta\Oml$ against the observational error $\delta\Oml = 0.0073$ (Planck 2018). We classify models as excluded ($> 5\sigma$), disfavoured ($2$-$5\sigma$), or allowed ($< 2\sigma$). Important caveat. These are predictions within the framework, not observational exclusions. A conventional WIMP search at the LHC or a direct detection experiment provides a measurement that stands regardless of theoretical interpretation. Our constraints are conditional: they hold if and only if the entanglement framework is correct.
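The bookkeeping behind these classifications can be sketched in a few lines. This is a linearized sketch using the per-field $\Delta R$ values from the table above and the Planck observational error; summing shifts linearly ignores the mild nonlinearity of $R = |\delta|/(6\alpha)$, so totals for large field contents are approximate:

```python
# Linearized per-field tension calculator (a sketch, not the exact
# model computation: linear addition of Delta-R ignores the change in
# the denominator alpha_total as fields are added).
DELTA_R = {"scalar": -0.0051, "weyl": -0.0078, "vector": +0.0292}
OMEGA_ERR = 0.0073   # Planck 2018 observational error on Omega_Lambda

def tension(n_scalar=0, n_weyl=0, n_vector=0):
    """Approximate tension (in sigma) of a given BSM field content."""
    d_r = (n_scalar * DELTA_R["scalar"]
           + n_weyl * DELTA_R["weyl"]
           + n_vector * DELTA_R["vector"])
    return abs(d_r) / OMEGA_ERR

print(f"axion (1 real scalar):  {tension(n_scalar=1):.1f} sigma")   # -> 0.7
print(f"dark photon (1 vector): {tension(n_vector=1):.1f} sigma")   # -> 4.0
```

These two single-field cases reproduce the corresponding rows of the results table; multi-field models deviate slightly from the linear estimate.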
We present them because, if the framework survives empirical tests (Section (ref: sec:threats)), they have sharp implications for BSM physics. -- Results -- [Table: BSM models tested at Planck 2018 precision ($\delta\Oml = 0.0073$). Models are ordered by $|\sigma|$. All tensions assume $\fg = 61/212$ fixed.]
Model | $\Delta n_v$ | $\Delta n_w$ | $\sigma$ | Status
Allowed ($< 2\sigma$):
Axion (real scalar) | 0 | 0 | $0.7$ | Allowed
Real singlet scalar | 0 | 0 | $0.7$ | Allowed
WIMP (Majorana) | 0 | 1 | $1.1$ | Allowed
Complex singlet scalar | 0 | 0 | $1.4$ | Allowed
Disfavoured ($2$-$5\sigma$):
Vector-like lepton | 0 | 2 | $2.1$ | Disfavoured
2HDM (4 extra scalars) | 0 | 0 | $2.7$ | Disfavoured
Dark photon + fermion | 1 | 1 | $2.9$ | Disfavoured
Dirac $\nu$ (3 $\nu_R$) | 0 | 3 | $3.1$ | Disfavoured
3 heavy sterile $\nu$ | 0 | 3 | $3.1$ | Disfavoured
Dark photon (U(1)$_D$) | 1 | 0 | $4.0$ | Disfavoured
$Z'$ (extra U(1)) | 1 | 0 | $4.0$ | Disfavoured
Vector-like quark | 0 | 6 | $4.1$ | Disfavoured
Excluded ($> 5\sigma$):
SU(5) GUT | 12 | 0 | $6.6$ | Excluded
$W'$ / SU(2)$_R$ | 3 | 0 | $11.6$ | Excluded
4th generation | 0 | 15 | $13.0$ | Excluded
SO(10) GUT | 33 | 0 | $25.9$ | Excluded
MSSM | 0 | 49 | $40.3$ | Excluded
Split SUSY | 0 | 17 | $40.3$ | Excluded
NMSSM | 0 | 51 | $40.8$ | Excluded
Dark SU(3) | 8 | 0 | $45.8$ | Excluded
Of the 20 models: 4 are currently allowed, 8 are disfavoured ($2$-$5\sigma$), and 8 are excluded ($> 5\sigma$). The strongest exclusions are the dark SU(3) sector ($46\sigma$) and the supersymmetric models ($40\sigma$), which add dozens of fields that push $R$ far below $\Oml$. Caveat on indistinguishability. Models with identical field content shifts produce identical $\Delta R$. For example, a dark photon and a $Z'$ both add one vector boson and are indistinguishable at $\sigma = 4.0$. $\Oml$ constrains the total field count, not the identity of individual particles. This is a fundamental limitation: $\Oml$ is a sum rule, not a spectrum. === Dark matter: not a particle field? ===
The framework's BSM budget is so tight that most dark matter candidates are excluded or severely constrained. -- Scalar dark matter -- A single real scalar (e.g., an axion or ALP) shifts $R$ by $-0.005$, producing $1.5\sigma$ tension against the theory uncertainty $\sigma_R$. By that measure it is the only field-theoretic DM candidate that survives at $< 2\sigma$. A complex scalar (inert doublet, singlet) is disfavoured at $2.7\sigma$. -- Fermionic dark matter -- A single Majorana WIMP shifts $R$ by $-0.008$ ($2.2\sigma$ against $\sigma_R$); Dirac WIMPs, with both chiralities, fare worse. Sterile neutrinos as dark matter are disfavoured at $2.2\sigma$ per species. -- Vector and composite dark matter -- Any dark gauge boson is excluded at $> 4\sigma$ per vector. A dark SU(2) sector (3 vectors) gives $21\sigma$; dark SU(3) (8 vectors) gives $46\sigma$. These are catastrophic exclusions--no tuning can save them. -- What survives -- If the framework is correct, dark matter is most likely not a standard quantum field propagating in the entangling vacuum. Candidates compatible with the framework include: - Primordial black holes (gravitational, no new fields) - Topological defects (solitons, monopoles, domain walls) - Gravitational solitons or geons - Planck-scale relics (above the entanglement decoupling threshold) Honest limitation. The framework cannot exclude particles above the entanglement decoupling threshold ($m \sim M_Pl$), nor can it address dark matter candidates that are not described by local quantum field theory. The axion at $1.5\sigma$ is not excluded--this is a genuine limitation, not a strong prediction. === Neutrino mass mechanism === -- Majorana versus Dirac -- Majorana neutrinos require no fields beyond the Standard Model's 45 Weyl fermions (no right-handed neutrinos); Dirac neutrinos add 3 right-handed Weyls (48 in total). The predictions are: R_Majorana = 0.6846 \quad (\sigma = 0.02) , \qquad R_Dirac = 0.6621 \quad (\sigma = 3.1) . Majorana lands two orders of magnitude closer to $\Oml$ (a residual of $10^{-4}$ versus $2.3 \times 10^{-2}$).
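The two quoted tensions follow directly from the $R$ values above and the Planck measurement; a minimal check (the tiny difference from the quoted Majorana $\sigma = 0.02$ is rounding):

```python
# Check of the Majorana-vs-Dirac tensions quoted above, comparing both
# predicted R values to the Planck 2018 measurement of Omega_Lambda.
OMEGA_OBS, OMEGA_ERR = 0.6847, 0.0073   # Planck 2018

R = {"Majorana": 0.6846, "Dirac": 0.6621}
for label, r in R.items():
    sigma = abs(r - OMEGA_OBS) / OMEGA_ERR
    print(f"{label}: {sigma:.2f} sigma")
# Dirac comes out at ~3.1 sigma; Majorana at ~0.01 sigma.
```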
The $3.3\%$ shift looks decisive, but we remind the reader that $\sigma_R$ captures only the $\asc$ lattice uncertainty. The framework's systematic uncertainty--from $\Lam_bare = 0$, the heat-kernel fermion ratio, and the edge-mode identification--could easily be a few percent, which would weaken the discrimination substantially. At current Planck precision, the Majorana/Dirac discrimination is $3.1\sigma$: suggestive, not decisive. -- Projected sensitivity -- As $\Oml$ precision improves, the discrimination sharpens: [Table: Majorana vs.\ Dirac neutrino discrimination as a function of survey precision. All values assume framework systematics are subdominant.]
Survey era | $\delta\Oml$ | Dirac $\sigma$ | Discrimination
Planck 2018 | 0.0073 | $3.1\sigma$ | Suggestive
DESI $\sim$2026 | 0.004 | $5.7\sigma$ | Strong
Euclid $\sim$2030 | 0.002 | $11.3\sigma$ | Decisive
Ultimate $\sim$2035 | 0.001 | $22.6\sigma$ | Definitive
Caveat. These projections assume (a) $\delta\Oml$ reaches the stated precision, (b) systematic errors do not grow, and (c) the framework itself is correct. All three are uncertain. The projected precisions are based on official survey forecasts but are not guaranteed. -- Connection to laboratory experiments -- The framework predicts that neutrinoless double-beta decay ($0\nu\beta\beta$) will be observed. Current bounds ($m_{\beta\beta} < 36$-$156$ meV, KamLAND-Zen [ref: KamLANDZen2023]) do not yet reach the inverted-hierarchy band ($\sim 20$-$50$ meV). LEGEND-1000 and nEXO ($\sim$2030) will probe $m_{\beta\beta} \sim 10$-$20$ meV, testing the prediction directly [ref: LEGEND2021]. If neutrinos are instead shown to be Dirac (no $0\nu\beta\beta$ even below the inverted-hierarchy band), the framework is already in $3.1\sigma$ tension and would require revision. === Discovery timeline and comparison with colliders === -- Survey precision evolution -- [Table: BSM model status at each survey era. Percentages in parentheses are fractions of the 20-model catalog.]
Survey era | $\delta\Oml$ | Excluded ($> 5\sigma$) | Allowed ($< 2\sigma$)
Planck 2018 | 0.0073 | 8 (40%) | 4 (20%)
DESI $\sim$2026 | 0.004 | 14 (70%) | 3 (15%)
Euclid $\sim$2030 | 0.002 | 17 (85%) | 0 (0%)
Ultimate $\sim$2035 | 0.001 | 20 (100%) | 0 (0%)
If $\delta\Oml$ reaches $0.002$ (the Euclid forecast), no BSM model in our catalog survives at $2\sigma$. The Standard Model field content becomes the unique solution. Caveat. The Euclid precision estimate ($\delta\Oml = 0.002$) is a projection based on official forecasts, not a measurement. Systematic errors may prevent this precision from being achieved. Furthermore, the "all 20 excluded" claim applies only to the models in our catalog; BSM scenarios not considered (e.g., Planck-scale hidden sectors, non-field-theoretic dark matter) are not constrained. -- Comparison with the LHC -- The LHC discovers particles by producing them in collisions; $\Oml$ constrains particles by their contribution to the entanglement entropy. These are complementary: - $\Oml$ advantages: mass-independent sensitivity (sees particles at any mass), no production threshold, constrains total field content simultaneously. - LHC advantages: identifies individual particles (mass, spin, couplings), direct production confirms existence, model-independent discovery (no theoretical framework required). - $\Oml$ limitations: constrains field count, not identity; cannot distinguish models with equal $\Delta R$ (e.g., $Z'$ vs.\ dark photon); entirely conditional on an unproven framework. Of the 20 models tested, $\Oml$ provides stronger constraints than the LHC in 16 cases (all SUSY, all dark gauge sectors, most extra fermions). The LHC provides superior information in 1 case (WIMP identification via mass and spin measurement). Three models (GUTs, neutrino sector) are inaccessible to the LHC but constrained by $\Oml$. We stress that this comparison is meaningful only if the framework is correct.
If it is wrong, the LHC remains the unique tool for BSM discovery, and $\Oml$ provides no particle-physics information. === The Hubble tension as a BSM test === -- Framework prediction -- The prediction $\Oml = 0.6846$ combined with $\Omm h^2 = 0.1430$ (Planck 2018) gives $H_0 = 67.3 \pm 0.7$ km/s/Mpc [ref: Paper1,Riess2022]. This agrees with Planck ($67.4 \pm 0.5$, $0.0\sigma$) and conflicts with SH0ES ($73.0 \pm 1.0$, $4.6\sigma$). We note that $H_0$ is not an independent prediction--it is $\Oml$ expressed in different units via $h = \sqrt{(\Omm h^2)/(1 - \Oml)}$. Agreement with Planck's $H_0$ is a restatement of agreement with Planck's $\Oml$. -- Why BSM resolutions fail -- Within the framework, raising $H_0$ from $67.3$ to $73.0$ requires $\Oml \approx 0.73$, hence $R \approx 0.73$--a shift of $\Delta R \approx +0.045$. The only field type that increases $R$ is the vector ($\Delta R = +0.029$ per vector), so $\sim 1.6$ additional vectors would be needed. Since particles come in integers, this is impossible: 1 vector gives $H_0 = 70.7$ (excluded at $4.0\sigma$); 2 vectors give $H_0 = 74.4$ (excluded at $7.8\sigma$). Scalars and fermions both decrease $R$, pushing $H_0$ lower--the wrong direction. We have evaluated 10 proposed Hubble tension resolutions; 9 are excluded within the framework: [Table: Hubble tension resolutions evaluated within the framework.]
Resolution | $\Delta R$ | $H_0$ shift | Verdict
Extra light relics | $-0.005$ | $-0.5$ | Wrong direction
Dark radiation | $+0.029$ | $+3.3$ | Overshoots (1 vec)
Early dark energy | $-0.005$ | $-0.5$ | Wrong direction
New $\nu$ interactions | $-0.005$ | $-0.5$ | Wrong direction
Decaying DM | $-0.008$ | $-0.8$ | Wrong direction
Dynamical DE ($w \neq -1$) | 0 | 0 | Requires $w \neq -1$
Interacting DE | 0 | 0 | Requires $w \neq -1$
Sterile $\nu$ (eV) | $-0.008$ | $-0.8$ | Wrong direction
Dark SU(2) | $+0.085$ | $+11$ | Overshoots (3 vec)
Modified gravity | -- | -- | Ambiguous$^*$
$^*$Modified gravity scenarios (e.g., $f(R)$, massive gravity) potentially invalidate the Clausius relation at the apparent horizon, so the framework cannot self-consistently evaluate them. These remain unconstrained. === What could kill everything === We conclude with an honest assessment of the framework's vulnerabilities, because the BSM constraints in this paper are only as strong as the framework itself. -- The four pillars -- - $\fg = 61/212$ (graviton entanglement fraction). Derived from the ratio $\dEE/\dEA$ of the graviton's entanglement and effective-action trace anomalies. If $\fg$ is wrong by $20\%$, the field budget shifts by $\sim 1$ particle per type. The edge-mode interpretation on which $\fg$ rests is physically motivated but not proven [ref: Paper1]. - $\asc = 0.02351 \pm 0.00012$ (lattice input). The only non-exact numerical input. If $\asc$ shifts by $1\%$, "disfavoured" models become "allowed" and vice versa. The $0.5\%$ uncertainty comes from a 2D grid extrapolation [ref: Paper0] and is a lattice artefact, not a fundamental limit. - $R = \Oml$ (self-consistency condition). Assumes Jacobson thermodynamic gravity holds at the cosmological horizon. If this identification is wrong--if $\Lam$ arises from a different mechanism--all constraints dissolve. - $\Lam_bare = 0$. The single most consequential assumption.
If the bare cosmological constant is nonzero, even at $10^{-122}$ of the na\"ive QFT estimate, the prediction is invalidated. -- The DESI threat -- DESI DR2 reports $w_0 = -0.752 \pm 0.055$ (BAO + CMB + PantheonPlus), a $3$-$4\sigma$ tension with $w = -1$. The framework predicts $w = -1$ exactly, with no room for dynamical dark energy [ref: Paper1]. This is not a secondary prediction. It is a direct, unavoidable consequence of the mechanism: the trace anomaly is a static UV property, so the cosmological "constant" is exactly constant. If $w \neq -1$ is confirmed at $> 5\sigma$ by DESI DR3, Euclid, or Rubin/LSST, the framework is falsified with no escape route, and every constraint in this paper becomes void. The PantheonPlus vs.\ DESY5 supernova calibration discrepancy and the theoretical difficulty of phantom-divide crossing suggest the signal may have a systematic origin, but this is a hope, not an argument. -- What the framework cannot see -- - Particles above the entanglement decoupling threshold ($m \sim M_Pl$) - Non-field-theoretic dark matter (PBHs, topological defects) - Topological contributions to horizon entropy - Physics that modifies the UV structure of entanglement (e.g., pre-Planckian UV fixed points) === Conclusion === We have explored the consequences of treating $\Oml$ as a precision observable within the entanglement entropy framework. The results are conditionally striking: - 20 BSM models tested: 8 excluded at $> 5\sigma$, 16 at $> 2\sigma$, 4 allowed (Planck precision). - Dark matter: if the framework is correct, dark matter is not a standard quantum field. The only surviving scalar candidate (axion) is at $1.5\sigma$ tension--not excluded, but constrained. - Neutrinos: Majorana favoured over Dirac at $3.1\sigma$, scaling to $11\sigma$ with Euclid precision. - Hubble tension: 9 of 10 proposed BSM resolutions are excluded; scalars and fermions push $H_0$ the wrong direction, vectors overshoot by integer mismatch. 
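The second pillar ties together two numbers used throughout this paper. A sketch, under the stated assumption that $\asc$ is the only non-exact input and that $R$ scales as $1/\alpha_total \propto 1/\asc$, so its fractional error propagates directly onto $R$:

```python
# How the theory-side error sigma_R = 0.0035 follows from the lattice
# input alpha_c = 0.02351 +/- 0.00012 (the "0.5%" quoted above),
# assuming R inherits alpha_c's fractional uncertainty unchanged.
R_SM = 0.6846                              # framework prediction
ALPHA_C, ALPHA_C_ERR = 0.02351, 0.00012    # lattice area-law coefficient

frac_err = ALPHA_C_ERR / ALPHA_C           # ~0.5% relative uncertainty
sigma_R = R_SM * frac_err
print(f"relative error = {100 * frac_err:.1f}%")   # -> 0.5%
print(f"sigma_R = {sigma_R:.4f}")                  # -> 0.0035
```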
- No additional gauge bosons allowed: the vector budget is zero at all significance levels--the single most constraining result. Every one of these results is conditional on the framework being correct, and the framework faces a genuine empirical challenge from DESI's $w \neq -1$ hint. If the framework survives--if $w = -1$ holds, if neutrinos are Majorana, if no BSM vectors appear at any mass--then the cosmological constant is not merely a nuisance parameter of $\Lam$CDM but a precision observable encoding the complete field content of nature. Whether this is a deep physical insight or a suggestive coincidence built on unproven assumptions remains to be determined by observation. The falsification tests are sharp and the timeline is short: DESI DR3 ($\sim$2027), LEGEND/nEXO ($\sim$2030), Euclid ($\sim$2030). Within five years, the framework will either be confirmed or killed. \appendix === Complete BSM model catalog === Table (ref: tab:full-catalog) lists all 20 BSM models with their field content shifts and projected tensions at each survey era. [Table: Full BSM model catalog with projected tensions.]
Model | $\Delta n_s$ | $\Delta n_w$ | $\Delta n_v$ | Planck | DESI | Euclid | Ult.
| | | | $\sigma$ | $\sigma$ | $\sigma$ | $\sigma$
Axion | 1 | 0 | 0 | 0.7 | 1.3 | 2.5 | 5.1
Singlet scalar | 1 | 0 | 0 | 0.7 | 1.3 | 2.5 | 5.1
WIMP (Majorana) | 0 | 1 | 0 | 1.1 | 1.9 | 3.9 | 7.8
Complex scalar | 2 | 0 | 0 | 1.4 | 2.5 | 5.0 | 10.1
VL lepton | 0 | 2 | 0 | 2.1 | 3.9 | 7.8 | 15.5
2HDM | 4 | 0 | 0 | 2.7 | 5.0 | 10.1 | 20.1
DP + fermion | 0 | 1 | 1 | 2.9 | 5.3 | 10.5 | 21.1
Dirac $\nu$ | 0 | 3 | 0 | 3.1 | 5.7 | 11.3 | 22.6
3 sterile $\nu$ | 0 | 3 | 0 | 3.1 | 5.7 | 11.3 | 22.6
Dark photon | 0 | 0 | 1 | 4.0 | 7.3 | 14.6 | 29.2
$Z'$ | 0 | 0 | 1 | 4.0 | 7.3 | 14.6 | 29.2
VL quark | 0 | 6 | 0 | 4.1 | 7.5 | 14.9 | 29.8
SU(5) GUT | 0 | 0 | 12 | 6.6 | 12.0 | 24.0 | 47.9
$W'$ / SU(2)$_R$ | 0 | 0 | 3 | 11.6 | 21.1 | 42.2 | 84.5
4th generation | 0 | 15 | 0 | 13.0 | 23.6 | 47.3 | 94.5
SO(10) | 0 | 3 | 33 | 25.9 | 47.2 | 94.3 | 189
MSSM | 49 | 33 | 0 | 40.3 | 73.3 | 147 | 293
Split SUSY | 17 | 0 | 0 | 40.3 | 73.3 | 147 | 293
NMSSM | 51 | 34 | 0 | 40.8 | 74.3 | 149 | 297
Dark SU(3) | 0 | 0 | 8 | 45.8 | 83.3 | 167 | 333
=== References ===
[Paper0] Moon Walk Project, "Cosmological constant from entanglement entropy: a derivation via Jacobson-Cai-Kim horizon thermodynamics," preprint (2026).
[Paper1] Moon Walk Project, "The Standard Model from the cosmological constant: gauge group uniqueness and the graviton edge-mode fraction," preprint (2026).
[DESI2025] DESI Collaboration, "DESI DR2 results: Baryon acoustic oscillations and the expansion history of the Universe," preprint (2025), arXiv:2503.14738 (https://arxiv.org/abs/2503.14738).
[Planck2020] Planck Collaboration, "Planck 2018 results. VI. Cosmological parameters," A&A 641, A6 (2020), arXiv:1807.06209 (https://arxiv.org/abs/1807.06209).
[Riess2022] A. G. Riess et al., "A comprehensive measurement of the local value of the Hubble constant with 1 km/s/Mpc uncertainty from the Hubble Space Telescope and the SH0ES team," ApJ 934, L7 (2022), arXiv:2112.04510 (https://arxiv.org/abs/2112.04510).
[KamLANDZen2023] KamLAND-Zen Collaboration, "Search for Majorana neutrinos with the complete KamLAND-Zen 800 dataset," preprint (2024), arXiv:2406.11438 (https://arxiv.org/abs/2406.11438). [LEGEND2021] LEGEND Collaboration, "The Large Enriched Germanium Experiment for Neutrinoless Double-Beta Decay," AIP Conf.\ Proc.\ 2908, 1 (2021).