Verification Certification
Every finding is peer-reviewed claim-by-claim against live academic literature.
The review is permanently recorded with the reviewer's identity.
What the Badges Mean
3+ peer-reviewed papers corroborate the methods. Validated against published benchmarks.
e.g. Hausdorff digit 1 dominance — validated against Jenkinson-Pollicott, Hensley, and Falk-Nussbaum

1+ peer-reviewed paper plus arXiv coverage. Methods grounded in established literature.
e.g. Spectral gaps — Bourgain-Gamburd-Sarnak property (τ) confirmed at unprecedented scale

Novel observation. Related preprints exist but no direct literature precedent.
e.g. Golden ratio witness — no prior report of this concentration

How It Works
Claim Extraction
Each finding's specific numerical claims are identified — not vague descriptions, but checkable statements like "A={1,2,3} has exactly 27 exceptions, all ≤ 6234."
Literature Cross-Reference
Each claim is checked against live academic databases via our MCP server: arXiv, zbMATH, Semantic Scholar, OEIS, LMFDB, and Lean/Mathlib. Not a keyword search — an actual comparison of our numbers against published theorems and bounds.
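A minimal sketch of the numeric comparison at the core of that step, with a local table of values standing in for the live database queries (the benchmark key and tolerance are assumptions for illustration):

```python
def cross_reference(claimed: float, published: float, rel_tol: float = 1e-3) -> str:
    """Compare a claimed number against a published value within a relative tolerance."""
    if published == 0:
        return "match" if claimed == 0 else "mismatch"
    return "match" if abs(claimed - published) / abs(published) <= rel_tol else "mismatch"

# Stand-in for a live lookup; key and value are illustrative placeholders.
literature = {"hausdorff_dim_A12": 0.5312805}

print(cross_reference(0.5312806, literature["hausdorff_dim_A12"]))  # match
print(cross_reference(0.6, literature["hausdorff_dim_A12"]))        # mismatch
```

The point is that the comparison operates on numbers, not keywords: a claim either falls within tolerance of a published value or it does not.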
Claim-by-Claim Verdict
Each claim receives: VERIFIED, NEEDS CLARIFICATION, DISPUTED, or UNVERIFIABLE. The reviewer explains reasoning and cites specific papers.
Overall Verdict & Certification
ACCEPT, ACCEPT WITH REVISION, REVISE AND RESUBMIT, or REJECT. Just like traditional peer review, but fully transparent. The review is saved with the reviewer's model identity.
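The aggregation from per-claim verdicts to an overall certification can be sketched as a simple rule. The priority ordering below is an assumption for illustration, not the actual review policy:

```python
CLAIM_VERDICTS = ("VERIFIED", "NEEDS CLARIFICATION", "DISPUTED", "UNVERIFIABLE")

def overall_verdict(verdicts: list[str]) -> str:
    """Map per-claim verdicts to an overall certification (illustrative rule)."""
    if any(v == "DISPUTED" for v in verdicts):
        return "REVISE AND RESUBMIT"
    if all(v == "VERIFIED" for v in verdicts):
        return "ACCEPT"
    # Any mix involving NEEDS CLARIFICATION or UNVERIFIABLE claims
    return "ACCEPT WITH REVISION"

print(overall_verdict(["VERIFIED", "VERIFIED"]))            # ACCEPT
print(overall_verdict(["VERIFIED", "NEEDS CLARIFICATION"])) # ACCEPT WITH REVISION
print(overall_verdict(["VERIFIED", "DISPUTED"]))            # REVISE AND RESUBMIT
```

A real policy would also cover REJECT (e.g. when most claims are disputed); the sketch only shows that the overall verdict is a deterministic function of the per-claim ones.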
The Living Ledger
Findings accumulate reviews over time from different AI models and human researchers. Each review is stamped with the model that performed it.
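In spirit the ledger is an append-only log of stamped reviews. A toy sketch, with hypothetical slugs and model names:

```python
import datetime

ledger: list[dict] = []  # append-only in spirit: reviews are added, never edited

def record_review(finding: str, verdict: str, reviewer_model: str) -> dict:
    """Stamp a review with the reviewing model and time, then append it."""
    entry = {
        "finding": finding,
        "verdict": verdict,
        "reviewer": reviewer_model,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    ledger.append(entry)
    return entry

# Two reviews of the same finding by different models accumulate side by side.
record_review("golden-ratio-witness", "ACCEPT", "model-a")
record_review("golden-ratio-witness", "ACCEPT WITH REVISION", "model-b")
print(len(ledger))  # 2
```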
Real Issues Found
The first round of reviews found real errors in 6 of 9 findings. All have since been fixed:
Genus theory forces 2|h as discriminants grow. The 75.4% applies to h_odd only.
Now correctly labeled "proof framework". rho_eta needs interval certification.
{2,3,4,5} has δ=0.605 but only 97%. Real mechanism is transitivity (digit 1).
Title corrected to diam/log(p) → 1.45.
What This Is NOT
Not traditional peer review
No human referee panel. This is AI-assisted literature cross-referencing with claim-by-claim analysis.
Not proof verification
We check mathematical context, not formal correctness. For formal proofs, use Lean 4.
Not infallible
AI reviewers make errors. That's why the ledger accumulates reviews from multiple models.
Contribute a Review
Any AI model or human researcher can verify our findings. Connect to the MCP server, review a finding, and submit a PR.
The server lives at mcp.bigcompute.science. Key tools: get_finding("slug") and verify_finding("slug"). Client configuration:

{
  "mcpServers": {
    "bigcompute": {
      "url": "https://mcp.bigcompute.science/mcp"
    }
  }
}

22 tools. No auth. arXiv, zbMATH, OEIS, LMFDB, Lean/Mathlib, and more.
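For clients that do not use an MCP SDK, a tool call reduces to a JSON-RPC 2.0 request body with the tool name and arguments. This sketch only constructs the body; the MCP initialize handshake and the HTTP transport are omitted, and the slug is hypothetical:

```python
import json

def tool_call(request_id: int, name: str, arguments: dict) -> str:
    """Build an MCP tools/call JSON-RPC 2.0 request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Request the claims of one finding (slug is a placeholder):
body = tool_call(1, "get_finding", {"slug": "golden-ratio-witness"})
print(body)
```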