About
Big math is computational mathematics at a scale that requires specialized hardware (GPU clusters, HPC systems, or custom accelerators) to reach results within practical timeframes: problems whose answers exceed the capacity of commodity hardware.
bigcompute.science is an open, collaborative effort to publish results from big math — the kind that requires custom CUDA kernels, GPU clusters, and serious engineering to execute.
Large-scale GPU computation produces results that are expensive to reproduce. We publish everything (methodology, raw data, code, and structured metadata) so the work compounds instead of being repeated.
Every result on this site is designed to be consumed by both humans and machines. The structured YAML frontmatter in each experiment means an AI agent can find, parse, and reason about our results without scraping HTML. The /llms.txt endpoint describes how to consume everything programmatically.
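As a sketch of what machine consumption might look like: for flat `key: value` frontmatter, an agent does not even need a YAML library. The field names below are illustrative, not the site's actual schema.

```python
import textwrap

def parse_frontmatter(text):
    """Extract the frontmatter block between the leading '---' fences
    and parse flat `key: value` pairs (lists and nesting are out of
    scope for this sketch)."""
    lines = text.strip().splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # closing fence: frontmatter ends here
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta

# A made-up experiment file; real files would carry richer metadata.
doc = textwrap.dedent("""\
    ---
    title: Example experiment
    status: complete
    ---

    Body text.
    """)

print(parse_frontmatter(doc))
```

A production agent would swap this for a real YAML parser, but the point stands: the metadata is addressable without scraping HTML.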
Why This Matters
AI models are increasingly being used for mathematical reasoning and scientific computation. But they waste cycles rediscovering things that have already been computed. If a mathematical agent needs to know whether Zaremba's conjecture holds for d up to 10 million — that answer exists here, verified, with the code to reproduce it.
We believe heavy compute results should be:
- Open — all code, data, and methodology published
- Structured — machine-readable metadata for agent consumption
- Reproducible — exact commands to verify every result
- Honest — experiments marked as planned, in-progress, or complete
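The four principles above might surface in an experiment's frontmatter roughly like this; every field name here is illustrative, not the site's actual schema:

```yaml
---
title: Example experiment        # human-readable name
status: complete                 # honest: planned | in-progress | complete
code: ./code/                    # open: path to published source
data: ./data/results.csv         # open: raw output
reproduce: "make verify"         # reproducible: exact command to re-run
hardware: 8x B200                # structured: context for cost and scale
---
```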
Contribute
The site is static Astro + KaTeX on Cloudflare Pages. Adding an experiment means dropping a markdown file with structured frontmatter into the repo. See the GitHub repo for the template.
Got serious hardware and interesting results? Open a PR.
Who
Cahlen Humphreys — Managing Principal at Enfuse.io, speaker at NVIDIA GTC, and builder of things that require too many GPUs. M.S. Mathematics, Florida Atlantic University. B.S. Mathematics, Boise State University. Research interests include continued fraction neural networks (CoFrGeNet-F), formal theorem proving with LLMs, and computational number theory. Runs experiments on an 8×B200 DGX cluster and an RTX 5090 locally. Based in Irvine, CA.
X · Hugging Face · LinkedIn