Roth's Theorem concerning discrepancy on Arithmetic Progression

From Polymath Wiki

Letting [math]\scriptstyle \mathcal{AP}(N) [/math] be the collection of all (not necessarily homogeneous) arithmetic progressions in [math]\{1, 2, \ldots, N\}[/math], Roth (1964) proved that [math] \scriptstyle \delta(\mathcal{AP}(N),{\mathbf x}) \geq c N^{1/4}[/math], with [math]c[/math] an absolute constant independent of the sequence [math]\scriptstyle {\mathbf x}[/math].

  • Roth, K. F., Remark concerning integer sequences. Acta Arith. 9 1964 257--260. MR0168545 (29 #5806)

In 2000, László Lovász gave another proof using semidefinite optimization.

  • Lovász, László, Integer sequences and semidefinite programming. Dedicated to Professor Kálmán Győry on the occasion of his 60th birthday. Publ. Math. Debrecen 56 (2000), no. 3-4, 475--479. MR1765994 (2001k:11020)

Matoušek and Spencer (1996) proved that the exponent [math] 1/4 [/math] is best possible, although their proof of the matching upper bound is non-constructive.

Representation-of-the-identity proof of [math]\scriptstyle \gg n^{1/8}[/math]

This is an elaboration of the outline in this post. A further elaboration that achieves Roth's bound of [math]n^{1/4}[/math] can be found here.

We break the proof down into 9 easy pieces.

  • First, we consider APs modulo [math]n[/math] for a large [math]n[/math]. At the end, we will have proven the existence of an AP with diameter smaller than [math]n[/math], so that if it wraps around it does so only once, and hence at least half of its discrepancy falls on a non-wrap-around AP. Set [math]\omega=e^{2\pi i /n}[/math]. Let [math] \scriptstyle {\mathbf x}[/math] be a [math] \scriptstyle \pm1[/math] column vector (our coloring!), and let [math]\delta[/math] be the maximum discrepancy of [math] \scriptstyle {\mathbf x}[/math] along any AP. Summations are over all of [math]{\mathbb Z}_n[/math] unless otherwise noted or obvious from context.
  • Let [math]P_i[/math] be the indicator vector of the [math]i[/math]-th AP. Suppose that we have some [math]\lambda_{i,j}[/math] with
[math]I = \sum_{i,j} \lambda_{i,j} P_i P_j^T,[/math]

where [math]I[/math] is the [math] \scriptstyle n\times n[/math] identity matrix. Then

[math] n={\mathbf x}^T {\mathbf x} = {\mathbf x}^T I {\mathbf x} ={\mathbf x}^T\left(\sum \lambda_{i,j} P_i P_j^T\right) {\mathbf x} =\sum_{i,j} \lambda_{i,j} ({\mathbf x}^T P_i) (P_j^T {\mathbf x}) \leq \delta^2 \sum |\lambda_{i,j}|. [/math]

So, if [math]\sum |\lambda_{i,j}| = o(n)[/math], necessarily [math]\delta\to\infty[/math]. This simplifies the problem considerably: can the identity matrix be written in the form [math]\sum \lambda_{i,j} P_i P_j^T[/math] with small [math]\lambda_{i,j}[/math]? We no longer need concern ourselves with the vector [math] \scriptstyle {\mathbf x}[/math].
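As a sanity check on this reduction (not part of the original argument), one can try the trivial decomposition into singleton progressions [math]P_i = e_i[/math]: it represents [math]I[/math] exactly, but with [math]\sum|\lambda_{i,j}| = n[/math], so it yields only [math]\delta\geq 1[/math]. A minimal numpy sketch, with [math]n[/math] chosen small for illustration:

```python
import numpy as np

n = 50
rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=n)          # an arbitrary +-1 colouring

# Trivial decomposition: P_i = e_i (length-1 APs), lambda_{i,i} = 1.
P = np.eye(n)
I_rebuilt = sum(np.outer(P[:, i], P[:, i]) for i in range(n))
assert np.allclose(I_rebuilt, np.eye(n))

# Step-2 inequality: n <= delta^2 * sum|lambda|.  Here sum|lambda| = n,
# so it only forces delta >= 1; a representation with sum|lambda| = o(n)
# is what makes delta -> infinity.
delta = max(abs(x @ P[:, i]) for i in range(n))
assert n <= delta ** 2 * n
```

The quality of the lower bound is thus governed entirely by how small [math]\sum|\lambda_{i,j}|[/math] can be made.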

  • Set [math]P_{d,a}[/math] to be the indicator vector of the AP with length [math]2m+1[/math] (with [math]m[/math] to be determined later), difference [math]d\geq1[/math], and center [math]a[/math]. Set [math]\omega_r[/math] to be the column vector whose [math]t[/math]-th component is [math]\omega_r(t)=\omega^{rt}[/math]. Note that
[math]I = \frac1n \sum_r \omega_r \overline{\omega_r}^T.[/math]

We now proceed to write [math]\omega_r[/math] as a sum of [math]P_{d,a}[/math].
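The identity [math]I = \frac1n \sum_r \omega_r \overline{\omega_r}^T[/math] is just the orthogonality of the characters of [math]{\mathbb Z}_n[/math]; it can be checked numerically (numpy, with a small illustrative [math]n[/math]):

```python
import numpy as np

n = 12                                   # small illustrative modulus
omega = np.exp(2j * np.pi / n)
t = np.arange(n)

# omega_r is the column vector whose t-th component is omega^{r t};
# summing the rank-one matrices omega_r * conj(omega_r)^T over r recovers n*I.
M = sum(np.outer(omega ** (r * t), np.conj(omega ** (r * t))) for r in range(n)) / n
assert np.allclose(M, np.eye(n))
```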

  • We express a sum in two ways:
[math]\sum_s P_{d,0}(s) \omega^{r(t-s)} = \omega^{rt} \sum_s P_{d,0}(s)\omega^{-rs}=\omega^{rt} \hat{P}_{d,0}(-r)[/math]


[math]\sum_s P_{d,0}(s) \omega^{r(t-s)} = \sum_s P_{d,0}(t-s) \omega^{rs}=\sum_s \omega^{rs}P_{d,s}(t)[/math]


Equating the two expressions and dividing by [math]\hat{P}_{d,0}(-r)[/math] (which we show below is nonzero) yields

[math]\omega_r = \frac{1}{\hat{P}_{d,0}(-r)} \sum_s \omega^{rs} P_{d,s},[/math]

the promised sum-of-APs representation of [math]\omega_r[/math], and consequently the promised representation of [math]I[/math] as a sum of [math]P_{d,s_1} P_{d,s_2}^T[/math]. All that remains is to analyze the coefficients of the representation of [math]I[/math].
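This sum-of-APs representation of [math]\omega_r[/math] can likewise be verified directly. In the sketch below the parameters are illustrative, chosen so that [math]\hat{P}_{d,0}(-r)\neq 0[/math]:

```python
import numpy as np

n, m, d, r = 101, 2, 3, 5                # illustrative small parameters
omega = np.exp(2j * np.pi / n)
t = np.arange(n)

def P(d, a):
    """Indicator vector of the AP of length 2m+1, difference d, centre a (mod n)."""
    v = np.zeros(n)
    v[(a + np.arange(-m, m + 1) * d) % n] = 1
    return v

omega_r = omega ** (r * t)
Phat = np.sum(P(d, 0) * omega ** (-r * t))        # \hat{P}_{d,0}(-r)
recon = sum(omega ** (r * s) * P(d, s) for s in range(n)) / Phat
assert np.allclose(recon, omega_r)
```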

  • As [math]\hat{P}_{d,0}(-r)=\sum_s P_{d,0}(s)\omega^{-rs}[/math], and [math]P_{d,0}[/math] is symmetric about 0, we see that [math]\hat{P}_{d,0}(-r)=\hat{P}_{d,0}(r)[/math] is real. Moreover, as it is a geometric series, it is routine to find
[math]\alpha_r:=\hat{P}_{d,0}(-r)=\frac{\sin\left((2m+1)\pi rd/n\right)}{\sin\left(\pi rd/n\right)}.[/math]

We will want [math]\alpha_r[/math] to be bounded away from 0. To that end, for each [math]r[/math], fix now and forever a [math]d[/math] in [math][1,\sqrt{n}][/math] with [math]\|rd/n\|\leq n^{-1/2}[/math], where [math]\|\cdot\|[/math] denotes the distance to the nearest integer. Such a [math]d[/math] exists by a classic result of Diophantine approximation (Dirichlet's theorem), whose proof is just the pigeonhole principle. Setting [math] \scriptstyle m=\frac15 \sqrt n[/math], we find

[math]\alpha_r = \frac{\sin\left((2m+1)\pi rd/n\right)}{\sin\left(\pi rd/n\right)} \geq \frac{2\cdot\left((2m+1)rd/n\right)}{\pi rd/n} = \frac{2(2m+1)}{\pi} \gt m,[/math]

where [math]rd/n[/math] may be read modulo 1 as [math]\|rd/n\|\leq n^{-1/2}[/math], so that [math](2m+1)\pi\|rd/n\|\leq \pi/2[/math] and the numerator is bounded below via [math]\sin x\geq 2x/\pi[/math] and the denominator above via [math]\sin x\leq x[/math],

and more obviously [math] \scriptstyle \alpha_r\leq 2m+1[/math]. In short [math] \scriptstyle \alpha_r[/math] has order [math] \scriptstyle \sqrt{n}[/math], to within a constant factor.
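Both the choice of [math]d[/math] and the two-sided bound on [math]\alpha_r[/math] can be tested by brute force. The sketch below (numpy; [math]n=400[/math] is an illustrative perfect square) picks for each [math]r[/math] the smallest qualifying [math]d[/math], computes [math]\alpha_r[/math] directly as the Dirichlet-kernel sum, and checks it against the closed form and the bounds [math]m \lt \alpha_r \leq 2m+1[/math]:

```python
import numpy as np

n = 400                                  # illustrative perfect square
root = 20                                # sqrt(n)
m = root // 5                            # m = sqrt(n)/5 = 4

def dist(x):                             # distance to the nearest integer
    return abs(x - round(x))

for r in range(1, n):
    # Dirichlet's theorem guarantees some d in [1, sqrt(n)] with ||rd/n|| <= 1/sqrt(n)
    d = next(d for d in range(1, root + 1) if dist(r * d / n) <= 1.0 / root)
    # alpha_r computed directly as the Dirichlet kernel sum_{j=-m}^{m} omega^{-rjd}
    alpha = sum(np.cos(2 * np.pi * r * d * j / n) for j in range(-m, m + 1))
    if dist(r * d / n) > 1e-9:           # closed form valid when sin(pi r d / n) != 0
        closed = np.sin((2 * m + 1) * np.pi * r * d / n) / np.sin(np.pi * r * d / n)
        assert abs(alpha - closed) < 1e-8
    assert m < alpha <= 2 * m + 1 + 1e-9
```

Computing [math]\alpha_r[/math] as the kernel sum rather than the sine ratio sidesteps the [math]0/0[/math] case where [math]rd\equiv 0 \pmod n[/math], for which [math]\alpha_r = 2m+1[/math].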

  • [math] I = n^{-1} \sum_r \omega_r \overline{\omega_r}^T = n^{-1} \sum_r \alpha_r^{-2}\left(\sum_x \omega^{rx}P_{d,x}\right)\left(\sum_y \omega^{-ry}P_{d,y}^T\right) = \sum_{r,x,y} n^{-1} \alpha_r^{-2} \omega^{r(x-y)} P_{d,x}P_{d,y}^T [/math]

It is tempting to try to finish the easy way, but [math] \sum_{r,x,y} \left|n^{-1} \alpha_r^{-2} \omega^{r(x-y)} \right|[/math] is easily recognized as a sum with [math]n^3[/math] summands, each of size [math]\approx n^{-2}[/math], so the total is not [math]o(n)[/math]. In short, we need to find some cancellation.

  • Set [math]R(\Delta)=\{r \colon d=\Delta\}[/math], the set of [math]r[/math] whose chosen difference [math]d[/math] equals [math]\Delta[/math]. By a pigeonhole argument, [math] \scriptstyle |R(\Delta)|\leq 2\sqrt{n}[/math]. The coefficient of [math]P_{\Delta,x}P_{\Delta,y}^T[/math] is [math]\sum_{r\in R(\Delta)} n^{-1}\alpha_r^{-2} \omega^{r(x-y)}.[/math]

We need to bound

[math] \sum_{\Delta=1}^{\sqrt{n}} \sum_{x,y} \left| \sum_{r\in R(\Delta)} n^{-1}\alpha_r^{-2} \omega^{r(x-y)} \right|.[/math]
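The grouping into the sets [math]R(\Delta)[/math] can also be sanity-checked numerically. The sketch below (same illustrative [math]n=400[/math] and smallest-qualifying-[math]d[/math] convention as above) verifies that the [math]R(\Delta)[/math] partition [math]{\mathbb Z}_n[/math] into at most [math]\sqrt{n}[/math] classes, each of size [math]O(\sqrt{n})[/math]; since the exact constant depends on tie-breaking and boundary conventions, the size assertion uses a safe [math]3\sqrt n[/math]:

```python
from collections import defaultdict

n = 400
root = 20                                # sqrt(n)

def dist(x):                             # distance to the nearest integer
    return abs(x - round(x))

R = defaultdict(list)                    # R[Delta] = residues r whose chosen d is Delta
for r in range(n):
    d = next(d for d in range(1, root + 1) if dist(r * d / n) <= 1.0 / root)
    R[d].append(r)

assert sum(len(v) for v in R.values()) == n          # the R(Delta) partition Z_n
assert all(1 <= D <= root for D in R)                # differences lie in [1, sqrt(n)]
assert max(len(v) for v in R.values()) <= 3 * root   # each class has size O(sqrt(n))
```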
  • [math] \sum_{x,y} \left| \sum_{r\in R(\Delta)} n^{-1}\alpha_r^{-2} \omega^{r(x-y)} \right| \leq \left(\sum_{x,y} 1^2 \right)^{1/2} \left( \sum_{x,y} \left| \sum_{r\in R(\Delta)} n^{-1}\alpha_r^{-2} \omega^{r(x-y)} \right|^2\right)^{1/2}[/math]
[math]= n \,\cdot\,\left(\sum_{x,y} \sum_{r_1,r_2} n^{-2} \alpha_{r_1}^{-2} \alpha_{r_2}^{-2} \omega^{(r_1-r_2)(x-y)}\right)^{1/2} = n\,\cdot\, \left( \sum_{r} n^{-2}\alpha_r^{-4} n^2\right)^{1/2}[/math]

where the last equality is by noting that [math]\sum_{x,y}\omega^{(r_1-r_2)(x-y)}[/math] is either [math]n^2[/math] (if [math] \scriptstyle r_1-r_2=0[/math]) or 0 (if [math]r_1-r_2\not=0[/math]). Since [math]\alpha_r[/math] is between [math]m[/math] and [math]2m+1[/math] (recall [math] \scriptstyle m=\frac15 \sqrt n[/math]), we have [math]\alpha_r^{-4} \leq m^{-4} = \frac{5^4}{n^2}[/math]. Thus,

[math] n\,\cdot\, \left( \sum_{r} n^{-2}\alpha_r^{-4} n^2\right)^{1/2} \leq 25 |R(\Delta)|^{1/2}\leq 25\sqrt{2} n^{1/4},[/math]

and so

[math] \sum_{\Delta=1}^{\sqrt{n}} \sum_{x,y} \left| \sum_{r\in R(\Delta)} n^{-1}\alpha_r^{-2} \omega^{r(x-y)} \right|\leq 25\sqrt{2} n^{3/4}.[/math]
  • Returning to step 2, we have [math]\delta^2 \geq \frac{1}{25\sqrt{2}} n^{1/4}[/math], whence [math]\delta \gt \frac 17 n^{1/8}[/math].
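Putting the pieces together, the whole construction can be executed for a small [math]n[/math]. The sketch below (numpy; [math]n=100[/math] illustrative) assembles the coefficients [math]\lambda[/math], confirms that they reproduce the identity matrix exactly, and checks the final bound [math]\sum|\lambda| \leq 25\sqrt2\, n^{3/4}[/math]:

```python
import numpy as np
from collections import defaultdict

n = 100
root = 10                                # sqrt(n)
m = root // 5                            # m = 2, so the APs have length 2m+1 = 5
omega = np.exp(2j * np.pi / n)
t = np.arange(n)

def dist(x):                             # distance to the nearest integer
    return abs(x - round(x))

def P(d, a):                             # indicator of {a + jd : |j| <= m} mod n
    v = np.zeros(n)
    v[(a + np.arange(-m, m + 1) * d) % n] = 1
    return v

R = defaultdict(list)                    # group each r by its Dirichlet difference d(r)
for r in range(n):
    d = next(d for d in range(1, root + 1) if dist(r * d / n) <= 1.0 / root)
    R[d].append(r)

M = np.zeros((n, n), dtype=complex)      # will become the identity
total = 0.0                              # sum of |lambda| over all Delta, x, y
for Delta, rs in R.items():
    # alpha_r = \hat{P}_{Delta,0}(-r), real by symmetry
    alpha = np.array([np.real(np.sum(P(Delta, 0) * omega ** (-r * t))) for r in rs])
    V = omega ** np.outer(t, np.array(rs))        # V[x, k] = omega^{r_k x}
    lam = (V / (n * alpha ** 2)) @ V.conj().T     # lam[x,y] = sum_k omega^{r_k(x-y)}/(n alpha_k^2)
    Pmat = np.stack([P(Delta, a) for a in range(n)], axis=1)
    M += Pmat @ lam @ Pmat.T                      # sum_{x,y} lam[x,y] P_{Delta,x} P_{Delta,y}^T
    total += np.abs(lam).sum()

assert np.allclose(M, np.eye(n))                  # the representation of the identity
assert total <= 25 * np.sqrt(2) * n ** 0.75       # the coefficient bound of the last step
```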