Asymptotics of [math]H_t[/math]

Asymptotics for [math]t=0[/math]

The approximate functional equation (see e.g. [T1986, (4.12.4)]) asserts that

[math]\displaystyle \zeta(s) = \sum_{n \leq N} \frac{1}{n^s} + \pi^{s-1/2} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \sum_{n \leq N} \frac{1}{n^{1-s}} + O( t^{-\sigma/2} )[/math]

for [math]s = \sigma +it[/math] with [math]t[/math] large, [math]0 \lt \sigma \lt 1[/math], and [math]N := \sqrt{t/2\pi}[/math]. This implies that
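
For concreteness, here is a minimal numerical illustration of this statement in Python using the mpmath library (the specific values of [math]\sigma[/math] and [math]t[/math] are arbitrary choices, and mpmath's built-in zeta is used as the reference value):

 from mpmath import mp, mpf, mpc, zeta, gamma, power, pi, sqrt, floor, fabs
 mp.dps = 30
 
 sigma, t = mpf("0.6"), mpf(10)**5
 s = mpc(sigma, t)
 N = int(floor(sqrt(t/(2*pi))))                     # N = sqrt(t/(2 pi)), about 126 here
 main = sum(power(n, -s) for n in range(1, N + 1))  # sum_{n <= N} n^{-s}
 chi  = power(pi, s - mpf(1)/2)*gamma((1 - s)/2)/gamma(s/2)
 dual = chi*sum(power(n, s - 1) for n in range(1, N + 1))
 # the discrepancy should be small compared with |zeta(s)| = O(1), consistent with
 # the O(t^{-sigma/2}) error term (here t^{-sigma/2} is about 0.03)
 print(fabs(zeta(s) - (main + dual)))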

[math]\displaystyle \xi(s) = F(s) + F(1-s) + O( \Gamma(\frac{s+4}{2}) t^{-\sigma/2} )[/math]

where

[math]\displaystyle F(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{1}{n^s}.[/math]

Writing

[math]\displaystyle \frac{s(s-1)}{2} \Gamma(s/2) = 2 \Gamma(\frac{s+4}{2}) - 3 \Gamma(\frac{s+2}{2})[/math]

we have [math]F(s) = 2 F_0(s) - 3 F_{-1}(s)[/math], where

[math]\displaystyle F_j(s) := \pi^{-s/2} \Gamma(\frac{s+4}{2} + j) \sum_{n=1}^N \frac{1}{n^s}.[/math]
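
The identity [math]\frac{s(s-1)}{2} \Gamma(s/2) = 2 \Gamma(\frac{s+4}{2}) - 3 \Gamma(\frac{s+2}{2})[/math] used above follows from two applications of the recursion [math]\Gamma(z+1) = z \Gamma(z)[/math]; a two-line numerical confirmation (Python/mpmath, arbitrary test point):

 from mpmath import mp, mpc, gamma, fabs
 mp.dps = 30
 
 s = mpc("0.3", "37.5")                              # arbitrary test point
 lhs = s*(s - 1)/2*gamma(s/2)
 rhs = 2*gamma((s + 4)/2) - 3*gamma((s + 2)/2)
 print(fabs(lhs/rhs - 1))                            # exact identity: agrees to working precision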

The [math]F_{-1}[/math] terms can be absorbed into the error term [math]O( \Gamma(\frac{s+4}{2}) t^{-\sigma/2} )[/math], hence

[math]\displaystyle \xi(s) = 2F_0(s) + 2F_0(1-s) + O( \Gamma(\frac{s+4}{2}) t^{-\sigma/2} )[/math]

and thus

[math]\displaystyle H(x+iy) = \frac{1}{4} F_0( \frac{1+ix-y}{2} ) + \frac{1}{4} \overline{F_0( \frac{1+ix+y}{2} )} + O( \Gamma(\frac{9+ix+y}{4}) x^{-(1+y)/4} ).[/math]

One would expect the [math]\sum_{n=1}^N \frac{1}{n^s}[/math] term to remain more or less bounded (this is basically the Lindelöf hypothesis), leading to the heuristics

[math]\displaystyle |F_0(\frac{1+ix \pm y}{2})| \asymp |\Gamma(\frac{9+ix \pm y}{4})|.[/math]

Since [math]|\Gamma(\frac{9+ix - y}{4})| \approx |\Gamma(\frac{9+ix+y}{4})| (x/4)^{-y/2}[/math], we expect the [math]F_0( \frac{1+ix+y}{2} )[/math] term to dominate once [math]y \gg \frac{1}{\log x}[/math].
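
As a sanity check of the [math]t=0[/math] approximation, the following sketch (Python/mpmath; the helper names are ours) compares the main terms above with [math]H(x+iy)[/math] computed directly from [math]\xi[/math], assuming the normalisation [math]H(z) = \frac{1}{8}\xi(\frac{1+iz}{2})[/math] that the formula for [math]H(x+iy)[/math] above presupposes. The sample point is arbitrary; the relative error should be small (a few per cent at this height), decaying slowly as [math]x[/math] grows.

 from mpmath import mp, mpf, mpc, zeta, gamma, power, pi, sqrt, floor, conj, fabs
 mp.dps = 30
 
 def xi(s):
     # completed zeta function: xi(s) = s(s-1)/2 * pi^{-s/2} * Gamma(s/2) * zeta(s)
     return s*(s - 1)/2*power(pi, -s/2)*gamma(s/2)*zeta(s)
 
 def F0(s):
     # F_0(s) = pi^{-s/2} Gamma((s+4)/2) sum_{n <= N} n^{-s}, with N = sqrt(Im(s)/(2 pi))
     N = int(floor(sqrt(s.imag/(2*pi))))
     return power(pi, -s/2)*gamma((s + 4)/2)*sum(power(n, -s) for n in range(1, N + 1))
 
 x, y = mpf(10)**4, mpf("0.4")
 exact  = xi((1 + 1j*mpc(x, y))/2)/8                 # H(x+iy), assuming H(z) = xi((1+iz)/2)/8
 approx = F0((1 + 1j*x - y)/2)/4 + conj(F0((1 + 1j*x + y)/2))/4
 print(fabs(approx/exact - 1))                       # relative error: should be a few per cent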

Asymptotics for [math]t \gt 0[/math]

Let [math]z=x+iy[/math] for large [math]x[/math] and positive bounded [math]y[/math]. We have

[math]\displaystyle H_t(z) = \frac{1}{2} \int_{-\infty}^\infty e^{tu^2} \Phi(u) \exp(izu)\ du[/math]

where

[math]\displaystyle \Phi(u) = \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
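
For later comparison it is useful to be able to evaluate [math]H_t[/math] directly from this integral. Here is a minimal quadrature sketch in Python with mpmath (the helper names Phi and H are ours; it uses the standard facts that [math]\Phi[/math] is even and decays double-exponentially, so that the [math]u[/math]-integral can be folded onto [math][0,\infty)[/math] and truncated at [math]u=3[/math]; the precision, truncations and sample point are illustrative choices):

 from mpmath import mp, mpf, mpc, exp, cos, pi, quad
 mp.dps = 50       # generous precision: the oscillatory integral loses many digits to cancellation
 
 def Phi(u, nmax=8):
     # Phi(u) = sum_n (2 pi^2 n^4 e^{9u} - 3 pi n^2 e^{5u}) exp(-pi n^2 e^{4u});
     # for u >= 0 the terms decay faster than exp(-pi n^2), so a handful of terms suffice
     e4, e5, e9 = exp(4*u), exp(5*u), exp(9*u)
     return sum((2*pi**2*n**4*e9 - 3*pi*n**2*e5)*exp(-pi*n**2*e4) for n in range(1, nmax + 1))
 
 def H(t, z):
     # since Phi is even, H_t(z) = (1/2) int_R e^{tu^2} Phi(u) e^{izu} du
     #                           = int_0^oo e^{tu^2} Phi(u) cos(zu) du;
     # Phi decays double-exponentially, so truncating at u = 3 costs essentially nothing
     pts = [mpf(k)/10 for k in range(31)]            # short subintervals to tame the oscillation of cos(zu)
     return quad(lambda u: exp(t*u**2)*Phi(u)*cos(z*u), pts)
 
 print(H(mpf("0.1"), mpc(100, mpf("0.4"))))          # H_t at t = 0.1, z = 100 + 0.4i (an extremely small value)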

We can shift contours to

[math]\displaystyle H_t(z) = \frac{1}{2} \int_{i\theta-\infty}^{i\theta+\infty} e^{tu^2} \Phi(u) \exp(izu)\ du[/math]

for any [math]-\pi/8 \lt \theta \lt \pi/8[/math] that we please; it seems that a good choice will be [math]\theta = \frac{1}{4} \mathrm{arg} (ix+y+9) \approx \frac{\pi}{8} - \frac{y+9}{4x}[/math]. By symmetry, we thus have

[math]\displaystyle H_t(z) = G_t(x+iy) + \overline{G_t(x-iy)}[/math]

where

[math]\displaystyle G_t(z) := \frac{1}{2} \int_{i\theta}^{i\theta+\infty} e^{tu^2} \Phi(u) \exp(izu)\ du.[/math]

By Fubini's theorem we have

[math]\displaystyle G_{t}(x \pm i y) = \sum_{n=1}^\infty \pi^2 n^4 \int_{i\theta}^{i\theta+\infty} \exp( tu^2 - \pi n^2 e^{4u} + (ix \mp y + 9) u)\ du[/math]
[math] \displaystyle - \sum_{n=1}^\infty \frac{3}{2} \pi n^2 \int_{i\theta}^{i\theta+\infty} \exp( tu^2 - \pi n^2 e^{4u} + (ix \mp y + 5) u)\ du.[/math]

The second terms end up being smaller than the first terms by a factor of about [math]O(1/x)[/math], so we will ignore them for now. Making the change of variables [math]u = \frac{1}{4} \log \frac{ix \pm y + 9}{4\pi n^2} + v[/math] (which recentres the integral approximately at the stationary point of the exponent), we basically have

[math] \displaystyle G_t(x \pm iy) \approx \sum_{n=1}^\infty \pi^2 n^4 (\frac{ix \pm y+9}{4\pi n^2})^{\frac{ix \mp y+9}{4}} \int_{-\frac{1}{4} \log \frac{|ix\pm y+9|}{4\pi n^2}}^\infty \exp( \frac{t}{16} (\log \frac{ix \pm y+9}{4\pi n^2} + 4v)^2 + (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) )\ dv.[/math]

The function [math]\exp( (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) )[/math] decays rapidly away from [math]v=0[/math]. This suggests firstly that this integral is going to be very small when [math]n \gg N := \sqrt{x/4\pi}[/math] (since the left limit of integration will then be to the right of the origin), so we will assume heuristically that [math]n[/math] is now restricted to the range [math]n \leq N[/math]. Next, we approximate [math]\exp( \frac{t}{16} (\log \frac{ix \pm y+9}{4\pi n^2} + 4v)^2)[/math] by [math]\exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} )[/math], and then send the left limit off to infinity to obtain (heuristically)

[math] \displaystyle G_t(x \pm iy) \approx \sum_{n \leq N} \pi^2 n^4 (\frac{ix \pm y+9}{4\pi n^2})^{\frac{ix \mp y+9}{4}} \exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} ) \int_{-\infty}^\infty \exp( (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) )\ dv.[/math]

Making the change of variables [math]w := \frac{ix \mp y + 9}{4} e^{4v}[/math] we see that

[math]\int_{-\infty}^\infty \exp( (ix \mp y + 9) (v - \frac{1}{4} e^{4v}) )\ dv = \frac{1}{4} \Gamma(\frac{ix \mp y + 9}{4}) (\frac{4}{ix \mp y + 9})^{\frac{ix \mp y+9}{4}}[/math]
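
This Gamma-function identity is exact for any exponent with positive real part, and is easy to confirm numerically (Python/mpmath; the value [math]c = 9+5i[/math] below is an arbitrary stand-in for [math]ix \mp y + 9[/math], and the truncation of the [math]v[/math]-range is harmless because the integrand dies off super-exponentially):

 from mpmath import mp, mpf, mpc, exp, gamma, quad, fabs
 mp.dps = 30
 
 c = mpc(9, 5)                                       # sample value of ix -+ y + 9 (Re c > 0)
 lhs = quad(lambda v: exp(c*(v - exp(4*v)/4)), [-10, -5, 0, mpf("0.5"), mpf("1.2")])
 rhs = gamma(c/4)*(4/c)**(c/4)/4
 print(fabs(lhs/rhs - 1))                            # agrees essentially to working precision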

and thus

[math] \displaystyle G_t(x \pm iy) \approx \Gamma(\frac{ix \mp y + 9}{4}) \sum_{n \leq N} \frac{\pi^2}{4} n^4 (\frac{1}{\pi n^2})^{\frac{ix \mp y+9}{4}} \exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} ) [/math]

which simplifies a bit to

[math] \displaystyle G_t(x \pm iy) \approx \frac{1}{4} \pi^{-\frac{ix \mp y + 1}{4}} \Gamma(\frac{ix \mp y + 9}{4}) \sum_{n \leq N} \frac{\exp( \frac{t}{16} \log^2 \frac{ix \pm y+9}{4\pi n^2} )}{n^{\frac{1 \mp y + ix}{2}}} [/math]

and thus we heuristically have

[math] H_t(x+iy) \approx \frac{1}{4} F_t( \frac{1+ix-y}{2} ) + \frac{1}{4} \overline{F_t( \frac{1+ix+y}{2} )} [/math]

where

[math]F_t( s ) := \pi^{-s/2} \Gamma(\frac{s+4}{2}) \sum_{n \leq N} \frac{\exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} )}{n^{s}}.[/math]

Here we can view [math]N[/math] as a function of [math]s[/math] by the formula [math]N = \sqrt{\mathrm{Im}(s)/2\pi}[/math].
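
Putting the pieces together, the following self-contained sketch (Python/mpmath; helper names ours) compares this heuristic approximation against a direct quadrature of the defining integral for [math]H_t[/math] at a modest height. The height, value of [math]t[/math], precision and truncations are illustrative; since this is only a heuristic whose errors decay slowly in [math]x[/math], one should expect agreement in order of magnitude and phase rather than in many digits.

 from mpmath import mp, mpf, mpc, exp, cos, log, gamma, power, pi, sqrt, floor, conj, quad, fabs
 mp.dps = 50
 
 def Phi(u, nmax=8):
     e4, e5, e9 = exp(4*u), exp(5*u), exp(9*u)
     return sum((2*pi**2*n**4*e9 - 3*pi*n**2*e5)*exp(-pi*n**2*e4) for n in range(1, nmax + 1))
 
 def H(t, z):
     # direct quadrature of H_t(z) = int_0^oo e^{tu^2} Phi(u) cos(zu) du (Phi is even)
     return quad(lambda u: exp(t*u**2)*Phi(u)*cos(z*u), [mpf(k)/10 for k in range(31)])
 
 def Ft(t, s):
     # F_t(s) = pi^{-s/2} Gamma((s+4)/2) sum_{n <= N} exp((t/16) log^2((s+4)/(2 pi n^2))) / n^s
     N = int(floor(sqrt(s.imag/(2*pi))))
     return power(pi, -s/2)*gamma((s + 4)/2)*sum(
         exp(t/16*log((s + 4)/(2*pi*n**2))**2)*power(n, -s) for n in range(1, N + 1))
 
 t, x, y = mpf("0.1"), mpf(100), mpf("0.4")
 direct = H(t, mpc(x, y))
 approx = Ft(t, (1 + 1j*x - y)/2)/4 + conj(Ft(t, (1 + 1j*x + y)/2))/4
 print(direct)
 print(approx)
 print(fabs(approx/direct - 1))   # heuristic only: expect rough agreement, improving as x grows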

To understand these asymptotics better, let us inspect [math]H_t(x+iy)[/math] for [math]t\gt0[/math] in the region

[math]x+iy = T + \frac{a+ib}{\log T}; \quad t = \frac{\tau}{\log T}[/math]

with [math]T[/math] large, [math]a,b = O(1)[/math], and [math]\tau \gt \frac{1}{2}[/math]. If [math]s = \frac{1+ix-y}{2}[/math], then we can approximate

[math] \pi^{-s/2} \approx \pi^{-\frac{1+iT}{4}}[/math]
[math] \Gamma(\frac{s+4}{2}) \approx \Gamma(\frac{9+iT}{4}) T^{\frac{ia-b}{4 \log T}} = \exp( \frac{ia-b}{4} ) \Gamma(\frac{9+iT}{4}) [/math]
[math] \frac{1}{n^s} \approx \frac{1}{n^{\frac{1+iT}{2}}}[/math]
[math] \exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} ) \approx \exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi} - \frac{t}{4} \log T \log n )[/math]
[math] \approx \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \frac{1}{n^{\frac{\tau}{4}}} [/math]

leading to

[math] F_t(\frac{1+ix-y}{2}) \approx \pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \exp( \frac{ia-b}{4} ) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \sum_n \frac{1}{n^{\frac{1+iT}{2} + \frac{\tau}{4}}}[/math]
[math] \approx \pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \zeta(\frac{1+iT}{2} + \frac{\tau}{4}) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) \exp( \frac{ia-b}{4} ).[/math]

Similarly for [math]F_t(\frac{1+ix+y}{2}) [/math] (replacing [math]b[/math] by [math]-b[/math]). If we make a polar coordinate representation

[math] \frac{1}{2} \pi^{-\frac{1+iT}{4}} \Gamma(\frac{9+iT}{4}) \zeta(\frac{1+iT}{2} + \frac{\tau}{4}) \exp( \frac{\tau}{16} \log T + \frac{i \pi \tau}{16} ) = r_{T,\tau} e^{i \theta_{T,\tau}}[/math]

one thus has

[math] H_t(x+iy) \approx \frac{1}{2} ( r_{T,\tau} e^{i \theta_{T,\tau}} \exp( \frac{ia-b}{4} ) + r_{T,\tau} e^{-i \theta_{T,\tau}} \exp(\frac{-ia+b}{4}) ) [/math]
[math] = r_{T,\tau} \cos( \frac{a+ib}{4} + \theta_{T,\tau} ).[/math]

Thus locally [math]H_t(x+iy)[/math] behaves like a trigonometric function, with zeroes real and equally spaced with spacing [math]4\pi[/math] (in [math]a[/math]-coordinates) or [math]\frac{4\pi}{\log T}[/math] (in [math]x[/math] coordinates). Once [math]\tau[/math] becomes large, further increase of [math]\tau[/math] basically only increases [math]r_{T,\tau}[/math] (through the [math]T^{\tau/16}[/math] factor) and shifts [math]\theta_{T,\tau}[/math] at rate [math]\pi/16[/math] (through the [math]e^{i\pi\tau/16}[/math] factor), causing the number of zeroes to the left of [math]T[/math] to increase at rate [math]1/4[/math] as claimed in [KKL2009].
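
The last algebraic step above is just the identity [math]\frac{1}{2}(e^{iw} + e^{-iw}) = \cos w[/math] applied with the complex angle [math]w = \theta_{T,\tau} + \frac{a+ib}{4}[/math]; a quick numerical confirmation (Python/mpmath, with arbitrary values of [math]r, \theta, a, b[/math]):

 from mpmath import mp, mpf, exp, cos, fabs
 mp.dps = 30
 
 r, theta = mpf("1.7"), mpf("0.4")                   # arbitrary polar data
 a, b = mpf("2.3"), mpf("-0.6")                      # arbitrary local coordinates
 lhs = (r*exp(1j*theta)*exp((1j*a - b)/4) + r*exp(-1j*theta)*exp((-1j*a + b)/4))/2
 rhs = r*cos((a + 1j*b)/4 + theta)
 print(fabs(lhs - rhs))                              # exact identity: difference is at rounding level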

Riemann-Siegel formula

Proposition 1 (Riemann-Siegel formula) For any integers [math]N,M \geq 0[/math] and complex number [math]s[/math] that is not an integer, we have

[math]\zeta(s) = \sum_{n=1}^N \frac{1}{n^s} + \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \sum_{m=1}^M \frac{1}{m^{1-s}} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw[/math]

where [math]w^{s-1} := \exp((s-1) \log w)[/math], using the branch of the logarithm with imaginary part in [math][0,2\pi)[/math], and [math]C_M[/math] is any contour from [math]+\infty[/math] to [math]+\infty[/math] that goes once anticlockwise around the zeroes [math]2\pi i m[/math] of [math]e^w-1[/math] with [math]|m| \leq M[/math], but does not go around any other zeroes.

Proof This equation is in [T1986, p. 82], but we give a proof here. The right-hand side is meromorphic in [math]s[/math], so it will suffice to establish that

  1. The right-hand side is independent of [math]N[/math];
  2. The right-hand side is independent of [math]M[/math];
  3. Whenever [math]\mathrm{Re}(s)\gt1[/math] and [math]s[/math] is not an integer, the right-hand side converges to [math]\zeta(s)[/math] if [math]M=0[/math] and [math]N \to \infty[/math].

We begin with the first claim. It suffices to show that the right-hand sides for [math]N[/math] and [math]N-1[/math] agree for every [math]N \geq 1[/math]. Subtracting, it suffices to show that

[math]0 = \frac{1}{N^s} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} (e^{-Nw} - e^{-(N-1)w})}{e^w-1}\ dw.[/math]

The integrand here simplifies to [math]- w^{s-1} e^{-Nw}[/math]; on shrinking [math]C_M[/math] to wrap around the positive real axis, the integral becomes [math]N^{-s} \Gamma(s) (1 - e^{2\pi i(s-1)})[/math]. The claim then follows from the Euler reflection formula [math]\Gamma(s) \Gamma(1-s) = \frac{\pi}{\sin(\pi s)}[/math], since [math]e^{-i\pi s} (1 - e^{2\pi i (s-1)}) = e^{-i\pi s} - e^{i\pi s} = -2i \sin(\pi s)[/math].

Now we verify the second claim. It suffices to show that the right-hand sides for [math]M[/math] and [math]M-1[/math] agree for every [math]M \geq 1[/math]. Subtracting, it suffices to show that

[math]0 = \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} \frac{1}{M^{1-s}} + \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M - C_{M-1}} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw.[/math]

The contour [math]C_M - C_{M-1}[/math] encloses the simple poles at [math]+2\pi i M[/math] and [math]-2\pi i M[/math], which have residues of [math](2\pi i M)^{s-1} = - i (2\pi M)^{s-1} e^{\pi i s/2}[/math] and [math](-2\pi i M)^{s-1} = i (2\pi M)^{s-1} e^{3\pi i s/2}[/math] respectively. So, on canceling the factor of [math]M^{s-1}[/math], it suffices to show that

[math]0 = \pi^{s-\frac{1}{2}} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)} + e^{-i\pi s} \Gamma(1-s) (2\pi)^{s-1} i (e^{3\pi i s/2} - e^{\pi i s/2}).[/math]

But this follows from the duplication formula [math]\Gamma(1-s) = \frac{\Gamma(\frac{1-s}{2}) \Gamma(1-\frac{s}{2})}{\pi^{1/2} 2^s}[/math] and the Euler reflection formula [math]\Gamma(\frac{s}{2}) \Gamma(1-\frac{s}{2}) = \frac{\pi}{\sin(\pi s/2)}[/math], after noting that [math]e^{-i\pi s} (e^{3\pi i s/2} - e^{\pi i s /2}) = 2 i \sin(\pi s/2)[/math].
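
A quick numerical confirmation of the displayed identity above, at an arbitrary non-integer [math]s[/math] (Python/mpmath):

 from mpmath import mp, mpf, mpc, exp, gamma, power, pi, fabs
 mp.dps = 30
 
 s = mpc("0.7", "-2.3")                              # arbitrary non-integer test point
 val = power(pi, s - mpf(1)/2)*gamma((1 - s)/2)/gamma(s/2) \
       + exp(-1j*pi*s)*gamma(1 - s)*power(2*pi, s - 1)*1j*(exp(3j*pi*s/2) - exp(1j*pi*s/2))
 print(fabs(val))                                    # vanishes to working precision (each term is of order 1)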

Finally we verify the third claim. Since [math]\zeta(s) = \lim_{N \to \infty} \sum_{n=1}^N \frac{1}{n^s}[/math] for [math]\mathrm{Re}(s) \gt 1[/math], it suffices to show that

[math]\lim_{N \to \infty} \int_{C_0} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw = 0.[/math]

We take [math]C_0[/math] to be a contour that traverses a [math]1/N[/math]-neighbourhood of the positive real axis. Writing [math]C_0 = \frac{1}{N} C'_0[/math], with [math]C'_0[/math] independent of [math]N[/math], we can thus write the left-hand side as

[math]\lim_{N \to \infty} N^{-s} \int_{C'_0} \frac{w^{s-1} e^{-w}}{e^{w/N}-1}\ dw,[/math]

and the claim follows from the dominated convergence theorem: on [math]C'_0[/math] one has [math]|e^{w/N}-1| \gg |w|/N[/math], so the integral is [math]O(N)[/math], and the whole expression is then [math]O(N^{1-\mathrm{Re}(s)})[/math], which goes to zero since [math]\mathrm{Re}(s) \gt 1[/math]. [math]\Box[/math]
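
Finally, here is a numerical sanity check of Proposition 1 (Python/mpmath). The contour [math]C_M[/math] is realised concretely as two radial edges along the positive real axis (with [math]\mathrm{arg}(w) = 0[/math] on the way in and [math]\mathrm{arg}(w) = 2\pi[/math] on the way out) joined by the anticlockwise circle [math]|w| = 7[/math], which encloses the zeroes [math]0, \pm 2\pi i[/math] and no others; the values of [math]s, N, M[/math] and the radius are arbitrary choices satisfying the hypotheses.

 from mpmath import mp, mpf, mpc, exp, log, gamma, power, zeta, pi, quad
 mp.dps = 30
 
 s, N, M = mpc("0.25", "1.5"), 2, 1                  # any non-integer s and modest N, M
 R = mpf(7)                                          # circle radius, between 2*pi*M and 2*pi*(M+1)
 
 def f(w, argw):
     # integrand w^{s-1} e^{-Nw} / (e^w - 1), with w^{s-1} on the branch arg(w) in [0, 2*pi)
     return exp((s - 1)*(log(abs(w)) + 1j*argw))*exp(-N*w)/(exp(w) - 1)
 
 # radial edges: in from +infinity with arg(w) = 0, out to +infinity with arg(w) = 2*pi
 radial = (exp(2j*pi*(s - 1)) - 1)*quad(lambda x: f(x, 0), [R, mp.inf])
 # anticlockwise circle |w| = R, going around the poles of the integrand at 0 and +-2*pi*i
 circle = quad(lambda p: f(R*exp(1j*p), p)*1j*R*exp(1j*p), [0, pi, 2*pi])
 
 rhs = sum(power(n, -s) for n in range(1, N + 1)) \
     + power(pi, s - mpf(1)/2)*gamma((1 - s)/2)/gamma(s/2)*sum(power(m, s - 1) for m in range(1, M + 1)) \
     + exp(-1j*pi*s)*gamma(1 - s)/(2j*pi)*(radial + circle)
 print(rhs)
 print(zeta(s))                                      # the two should agree to nearly working precision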