How Two Mathematicians Cracked a 50-Year Question About Large Values

Inside Guth & Maynard’s Breakthrough on Dirichlet Polynomials

Mathematics has a habit of hiding deep mysteries inside objects that seem simple. One such object is the Dirichlet polynomial, a short finite sum that behaves like a miniature version of the Riemann zeta function. For over fifty years, mathematicians tried to understand how often such a polynomial can take very large values, and progress stalled. In 2024, Larry Guth and James Maynard finally broke the barrier. Their work gives dramatically improved estimates for the frequency of large values and does so using a completely new combination of tools — from additive combinatorics to matrix geometry to harmonic analysis. This post is meant to explain their ideas in a way that feels intuitive, visually guided, and mathematically honest.

To start, consider the Dirichlet polynomial

\[D(t) = \sum_{n\sim N} b_n\, n^{it}, \qquad |b_n| \le 1,\]

where \(n \sim N\) means that \(n\) runs over a dyadic range such as \(N \le n < 2N\).

A good way to picture this is to imagine many arrows of equal length rotating in the plane. At time \(t\), the arrow corresponding to the term \(n^{it}\) points at an angle \(t \log n\). So the Dirichlet polynomial is the vector sum of these arrows. Most of the time these arrows point all over the place and cancel each other. For the polynomial to become “large,” most of the arrows need to momentarily point in almost the same direction. This alignment is extremely rare, and understanding how rare is exactly the question that nobody could answer sharply for half a century.
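To make the picture concrete, here is a minimal numerical sketch (not from the paper; the range \(N\) and the coefficients are illustrative choices) that evaluates \(D(t)\) as a sum of rotating unit arrows:

```python
import numpy as np

# Evaluate D(t) = sum over n ~ N of b_n * n^{it} as a sum of rotating arrows.
# N and the coefficients b_n are illustrative; any |b_n| <= 1 is allowed.
N = 1000
n = np.arange(N, 2 * N)            # n ~ N read as N <= n < 2N
b = np.ones(N)                     # simplest admissible choice: b_n = 1

def D(t):
    # The term n^{it} = e^{i t log n} is a unit arrow at angle t * log(n).
    return np.sum(b * np.exp(1j * t * np.log(n)))

print(abs(D(0.0)))    # 1000.0: at t = 0 every arrow points the same way
print(abs(D(137.5)))  # much smaller than N: the arrows mostly cancel
```

At a typical \(t\) the sum exhibits square-root cancellation, so a “large value” means \(|D(t)|\) climbs far above \(\sqrt{N}\), toward a power \(N^\sigma\) with \(\sigma > 1/2\).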

Large values matter because Dirichlet polynomials appear as local models for the zeta function and other \(L\)-functions. The frequency of large spikes influences what we believe about prime gaps, moments of zeta, the distribution of zeros, and the randomness of multiplicative functions. A better bound ultimately tells us how chaotic primes can be. Yet for decades the known bounds — built from classical energy and analytic estimates — refused to improve. Many believed the plateau would remain until someone invented an entirely new insight.

A key concept in Guth and Maynard’s work is additive energy. If a set \(A\) contains many pairs whose differences match, we say it has high additive energy. Formally,

\[E(A) = \#\{(a_1,a_2,a_3,a_4)\in A^4 : a_1 - a_2 = a_3 - a_4 \}.\]
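As a sanity check on the definition, here is a small sketch (the two test sets are illustrative, not from the paper) that counts these quadruples by grouping equal differences:

```python
from collections import Counter

def additive_energy(A):
    """Count quadruples (a1, a2, a3, a4) in A^4 with a1 - a2 = a3 - a4."""
    diff_counts = Counter(a1 - a2 for a1 in A for a2 in A)
    # A difference appearing c times yields c^2 quadruples with that difference.
    return sum(c * c for c in diff_counts.values())

progression = list(range(32))           # structured: many repeated differences
squares = [k * k for k in range(32)]    # far less additive structure
print(additive_energy(progression), additive_energy(squares))
```

Structured sets like arithmetic progressions have energy of order \(|A|^3\), the maximum possible, while generic sets sit near the minimum of about \(|A|^2\).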

Why does this matter? Because arrows aligning in the Dirichlet polynomial correspond to the logarithms of the integers having nearly matching differences. If many quadruples of integers satisfy

\[\log n_1 - \log n_2 \;\approx\; \log n_3 - \log n_4,\]

the arrows turn together, and the polynomial becomes large. Larger additive energy makes large values more likely. Guth and Maynard turn this purely combinatorial measure into analytic control over when spikes occur.
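One can probe this numerically. The sketch below is a crude proxy (not the paper’s method): it bins the pairwise differences of \(\log n\) at a fixed tolerance and counts approximate quadruples, an approximate additive energy of the logarithms.

```python
import numpy as np
from collections import Counter

# Approximate additive energy of {log n : N <= n < 2N}: count quadruples with
# log(n1) - log(n2) close to log(n3) - log(n4). The tolerance delta is an
# illustrative choice; in the actual analysis it is tied to the range of t.
N, delta = 200, 1e-4
logs = np.log(np.arange(N, 2 * N))
diffs = (logs[:, None] - logs[None, :]).ravel()
binned = Counter(np.floor(diffs / delta).astype(int))
print(sum(c * c for c in binned.values()))  # grows as delta grows: coarser
                                            # alignment is easier to achieve
```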

Their next move involves matrix geometry. Think of collecting snapshots of the Dirichlet polynomial at moments when it is large. Each snapshot is like a high-dimensional vector capturing how all arrows contribute at that moment. Place these snapshots into a matrix \(M\). The strength of the alignment across all snapshots is encoded in the largest singular value of this matrix:

\[\sigma_1(M) = \max_{\|x\|=1} \|Mx\|.\]

If \(\sigma_1(M)\) is large, then the large-value times of the Dirichlet polynomial all point in roughly the same geometric direction — meaning many of the arrows keep aligning together across different moments. This geometric view makes analytic cancellation (or the lack of it) visible as a property of a matrix instead of a sum.
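In code, the picture looks like this. The sketch uses placeholder times \(t_j\) (the real argument chooses these as genuine large-value points):

```python
import numpy as np

# Snapshot matrix: row j records the arrows n^{i t_j} at a time t_j.
# The times below are placeholders, not genuine large-value points.
N = 500
n = np.arange(N, 2 * N)
t = np.array([3.1, 41.5, 92.6, 653.0])
M = np.exp(1j * np.outer(t, np.log(n)))      # shape (R, N), here R = 4 rows

sigma1 = np.linalg.svd(M, compute_uv=False)[0]
print(sigma1)
# Each row has length sqrt(N). If the R snapshots are nearly parallel,
# sigma_1 approaches sqrt(R * N); if they point in unrelated directions,
# sigma_1 stays near sqrt(N). A large sigma_1 means persistent alignment.
```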

To sharpen this picture further, Guth and Maynard use the Poisson Summation Formula. In this context, it acts like a lens that converts oscillations in the original sum into patterns among the logarithms of integers. It answers the question: Do certain frequencies reinforce each other and cluster to support large values? If not, then the transformed system decays quickly, meaning large values cannot cluster. Additive energy, matrix alignment, and frequency reinforcement are now tied to the same object.
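The formula itself is easy to test numerically. Here is a minimal check using a standard Gaussian identity (independent of the paper): for \(f(x) = e^{-\pi a x^2}\) one has \(\hat f(\xi) = a^{-1/2} e^{-\pi \xi^2 / a}\), and Poisson summation says \(\sum_n f(n) = \sum_k \hat f(k)\).

```python
import numpy as np

# Poisson summation check for f(x) = exp(-pi * a * x^2); a is arbitrary.
a = 0.7
k = np.arange(-50, 51)                               # truncation is harmless:
lhs = np.sum(np.exp(-np.pi * a * k**2))              # the tails are negligible
rhs = np.sum(np.exp(-np.pi * k**2 / a)) / np.sqrt(a)
print(lhs, rhs)  # the two sums agree to machine precision
```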

The final component — and the true innovation — is the use of non-stationary phase. Classical analytic number theory relies heavily on the method of stationary phase, which detects points where oscillations pause momentarily and constructive interference can occur. But in Dirichlet polynomials, the phases move too irregularly, and no true stationary point exists. The remarkable insight was that one can still extract very strong decay of oscillatory integrals even when the derivative of the phase never vanishes. In a schematic form, they show that an integral of the shape

\[I = \int e^{i\phi(x)}\, a(x)\, dx\]

still satisfies, via repeated integration by parts, a decay estimate of the shape

\[|I| \lesssim_k \frac{1}{\lambda^{k}} \qquad \text{whenever } |\phi'(x)| \ge \lambda \text{ on the support of } a,\]

for every \(k\), provided the amplitude \(a\) is smooth and the higher derivatives of \(\phi\) are suitably controlled. In plain terms, the waves cancel so aggressively when oscillation speeds vary that large values cannot persist unless the oscillations are tuned with extraordinary precision — something that almost never occurs.
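One can watch this decay happen. The toy below uses a linear phase \(\phi(x) = \lambda x\) with a smooth bump amplitude (a simple stand-in, not the paper’s integrals), so \(|\phi'| = \lambda\) never vanishes and there is no stationary point:

```python
import numpy as np

# I(lam) = integral of e^{i * lam * x} * a(x) dx with a smooth bump a,
# so |phi'(x)| = lam is bounded below and there is no stationary point.
x = np.linspace(-1 + 1e-9, 1 - 1e-9, 400_001)
a = np.exp(-1.0 / (1.0 - x**2))      # smooth, compactly supported amplitude
dx = x[1] - x[0]

for lam in [10.0, 20.0, 40.0, 80.0]:
    I = np.sum(np.exp(1j * lam * x) * a) * dx   # Riemann sum for the integral
    print(lam, abs(I))  # each doubling of lam crushes |I| by a large factor
```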

When all of these components are combined, Guth and Maynard arrive at a dramatically stronger bound on the number of large values. If \(R\) is the number of well-separated times \(t_j\) such that

\[|D(t_j)| \ge N^\sigma,\]

they prove that such large spikes cannot happen often unless the structure of the coefficients forces an extremely rare geometric alignment. The final conclusion is that the large-value phenomenon is much rarer than previously known, breaking a barrier that had resisted improvement since the 1970s; as a headline consequence, it yields the first sharpening of Ingham’s 1940 zero-density estimate for the Riemann zeta function at \(\sigma = 3/4\).
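To get a feel for this rarity, here is a crude experiment (illustrative parameters, random signs for \(b_n\), and a grid of \(t\) values rather than the theorem’s well-separated points):

```python
import numpy as np

rng = np.random.default_rng(0)

# Count grid points t in [0, T] where |D(t)| >= N^sigma for random signs b_n.
N, T, sigma = 400, 5000.0, 0.75
n = np.arange(N, 2 * N)
log_n = np.log(n)
b = rng.choice([-1.0, 1.0], size=N)
threshold = N ** sigma                    # ~ 89.4, far above sqrt(N) = 20

R = 0
for block in np.array_split(np.linspace(0.0, T, 200_000), 100):
    vals = np.abs(np.exp(1j * np.outer(block, log_n)) @ b)
    R += int(np.sum(vals >= threshold))
print(R)  # almost always 0: exceeding N^{3/4} is genuinely rare
```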

Why is the result so significant? Beyond the theorem itself, it introduces methods that feel destined to reshape analytic number theory: additive energy as a quantitative model of accidental alignment, singular values as a global measurement of correlation across spikes, Poisson summation as a frequency-space microscope, and non-stationary phase as a new replacement for stationary phase in contexts full of wild oscillation. Each technique is powerful individually; together, they form something genuinely new.

What makes the breakthrough especially beautiful is how natural it feels in hindsight. Of course rotating arrows align only when their angle differences match. Of course that leaves a footprint in matrix singular values. Of course oscillatory decay eliminates clustering of large values. It now feels inevitable — yet it took the right pair of mathematicians to recognize that these ideas all belong to the same story.

If you want to explore the original research in full detail, you can read the paper:

L. Guth and J. Maynard, “New large value estimates for Dirichlet polynomials” (2024).