by cahlen complete

Hardware

1x RTX 5090 (32 GB VRAM) Intel Core Ultra 9 285K (24 cores) 188 GB DDR5 RAM
continued-fractionsdynamical-systemsergodic-theory rtx-5090 transfer-operatorchebyshev-collocationlyapunov-exponent

Key Results

Lyapunov Exponent Spectrum: All Subsets of {1,…,20}

Abstract

We compute the Lyapunov exponent λ(A)\lambda(A) for every non-empty subset A{1,,20}A \subseteq \{1,\ldots,20\} — a total of 2201=1,048,5752^{20} - 1 = 1{,}048{,}575 subsets — in 305 seconds on a single RTX 5090. This is the twin dataset to the Hausdorff dimension spectrum, together providing to our knowledge, the first complete mapping of both the geometric size and the dynamical divergence rate for all continued fraction Cantor sets up to digit 20.

Background

The Lyapunov exponent measures the average exponential divergence rate of nearby orbits under the restricted Gauss map. For a non-empty set AA of positive integers, the Gauss map restricted to EAE_A is:

T(x)={1x},where all partial quotients lie in AT(x) = \left\{\frac{1}{x}\right\}, \quad \text{where all partial quotients lie in } A

The Lyapunov exponent λ(A)\lambda(A) is the exponential rate at which nearby orbits separate:

λ(A)=limn1nlog(Tn)(x)\lambda(A) = \lim_{n \to \infty} \frac{1}{n} \log |(T^n)'(x)|

for μA\mu_A-almost every xEAx \in E_A, where μA\mu_A is the unique equilibrium measure. Higher λ(A)\lambda(A) means faster mixing and stronger chaos in the restricted dynamics.

Method

The transfer operator for digit set AA at parameter ss is:

(Lsf)(x)=aA(a+x)2sf ⁣(1a+x)(\mathcal{L}_s f)(x) = \sum_{a \in A} (a+x)^{-2s} \, f\!\left(\frac{1}{a+x}\right)

The Lyapunov exponent is recovered from the leading eigenvalue λ0(s)\lambda_0(s) via:

λ(A)=ddslogλ0(s)s=δ\lambda(A) = -\frac{d}{ds} \log \lambda_0(s) \bigg|_{s=\delta}

where δ=dimH(EA)\delta = \dim_H(E_A). In practice, we compute this derivative by finite difference:

  1. Evaluate λ0(s)\lambda_0(s) at s=1s = 1 and s=1+εs = 1 + \varepsilon for small ε\varepsilon
  2. Compute λ(A)logλ0(1+ε)logλ0(1)ε\lambda(A) \approx -\frac{\log \lambda_0(1+\varepsilon) - \log \lambda_0(1)}{\varepsilon}

Each evaluation uses Chebyshev collocation with N=40N = 40 nodes and power iteration for the leading eigenvalue. All 22012^{20} - 1 subsets are processed in parallel on the GPU, with each subset encoded as a bitmask.

Results

The full computation of 1,048,575 Lyapunov exponents completed in 305 seconds — roughly 14x faster than the Hausdorff dimension spectrum (4,322s), since no bisection loop is needed.

Validation

CheckResult
λ({1})\lambda(\{1\})Matches analytic value to 10 digits
Monotonicity (ABλ(A)λ(B)A \subseteq B \Rightarrow \lambda(A) \leq \lambda(B))PASS

Reproduction

git clone https://github.com/cahlen/idontknow.git
cd idontknow
nvcc -O3 -arch=sm_120 -o lyapunov_spectrum \
    scripts/experiments/lyapunov-exponent-spectrum/lyapunov_spectrum.cu -lm
mkdir -p scripts/experiments/lyapunov-exponent-spectrum/results
./lyapunov_spectrum 20 40

Requires: CUDA 13.0+, GPU with compute capability 12.0 (RTX 5090) or adjust -arch flag.

References

  • Jenkinson, O. and Pollicott, M. (2001). “Computing the dimension of dynamically defined sets: E_2 and bounded continued fraction entries.” Ergodic Theory and Dynamical Systems, 21(5), pp. 1429–1445.
  • Hensley, D. (1992). “Continued fraction Cantor sets, Hausdorff dimension, and functional analysis.” Journal of Number Theory, 40(3), pp. 336–358.
  • Oseledets, V.I. (1968). “A multiplicative ergodic theorem. Characteristic Ljapunov exponents of dynamical systems.” Trudy Moskovskogo Matematicheskogo Obshchestva, 19, pp. 179–210.

Why This Matters for AI

  • Twin dataset with Hausdorff dimensions: Together with the dimension spectrum, this provides paired (dimension, Lyapunov exponent) data for over 1M continued fraction Cantor sets — a dataset that does not exist anywhere in the literature.
  • Ergodic theory training data: The Lyapunov exponent encodes mixing rates and entropy production. AI models have no training data on how λ(A)\lambda(A) depends on the digit set AA.
  • Dynamical systems benchmarks: The monotonicity and analytic singleton checks provide ground truth for AI systems reasoning about dynamical invariants.

This work was produced through human–AI collaboration (Cahlen Humphreys + Claude). Not independently peer-reviewed. All code and data open for verification at github.com/cahlen/idontknow.

Recent Updates

updateAuto-generated changelog from git log — updates on every build