# Find a different parameter, show that it tends to infinity, and show that that implies that the discrepancy tends to infinity

## Introduction

This is a strategy that could potentially be applied either to the general problem or to the multiplicative case. The latter would probably be easier, so I shall mainly discuss that case.

The thought behind it is that it feels somehow as though the best multiplicative examples "ought" to be character-like, even though in fact they are not. But perhaps one could devise a parameter with the following properties.

1. The parameter is minimized, or nearly minimized, over all $\pm 1$ sequences by character-like sequences.

2. The parameter is unbounded for character-like sequences.

3. The parameter can be bounded above in terms of the discrepancy.

If we have Properties 1-3, then we have a proof of EDP for multiplicative sequences. Indeed, if there were an infinite multiplicative sequence with bounded discrepancy, then by Property 3 the new parameter would be bounded for that sequence; by Property 1 it would then be bounded for character-like sequences as well, contradicting Property 2.

## A proposal for a parameter

Let $x=(x_n)$ be a $\pm 1$ sequence of length $N$, and for each $n$ let $y_n=x_1+\dots+x_n$ be the partial sum up to $n$ (with $y_0=0$). For any subinterval $I\subset\{1,2,\dots,N\}$, let $\mathrm{osc}(I)$ denote the difference between the maximum and the minimum of the $y_n$ with $n\in I$. For each $r$, define the average $r$-drift $\delta(r)$ of the sequence to be the average of $\mathrm{osc}(I)$ over all subintervals of $\{1,2,\dots,N\}$ of length $r$. Finally, let the mean oscillation up to $m$, denoted $\Delta(m)$, be the weighted sum $(\log m)^{-1}\sum_{r=1}^m r^{-1}\delta(r)$.
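As a sanity check, the definitions above translate directly into Python. This is a naive, unoptimized sketch, and the function name is my own:

```python
from math import log

def mean_oscillation(x, m):
    """Compute Delta(m) for a ±1 sequence x, as defined above.

    y_n are the partial sums (with y_0 = 0), osc(I) is max - min of the
    y_n with n in I, delta(r) averages osc(I) over all length-r
    subintervals of {1,...,N}, and
    Delta(m) = (log m)^{-1} * sum_{r=1}^{m} delta(r) / r.
    Requires 2 <= m <= len(x) so that log(m) > 0.
    """
    N = len(x)
    y = [0]
    for v in x:
        y.append(y[-1] + v)  # y[n] = x_1 + ... + x_n for n = 0..N

    def delta(r):
        # Average of osc(I) over all subintervals I = {s,...,s+r-1}.
        oscs = []
        for s in range(1, N - r + 2):
            window = y[s:s + r]  # the values y_n for n in I
            oscs.append(max(window) - min(window))
        return sum(oscs) / len(oscs)

    return sum(delta(r) / r for r in range(1, m + 1)) / log(m)
```

For instance, for the alternating sequence $+1,-1,+1,-1$ every length-2 window of partial sums oscillates by exactly 1, so $\Delta(2)=\tfrac{1}{2}/\log 2$.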

## Motivation for this particular parameter

The motivation for a parameter that involves oscillation can be seen in this diagram, which shows the graphs of the partial sums of two multiplicative sequences. The first is one of the 500 longest possible multiplicative sequences of discrepancy at most 2. (The sequence itself can be found on the page devoted to [multiplicative sequences].) The second is the sequence obtained by sending 3 to -1 and all other primes p to 1 if they are 1 mod 3 and -1 if they are -1 mod 3.
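The second sequence is easy to generate: a completely multiplicative function is determined by its values on primes, and here those values depend only on the residue mod 3. So one can factor out the powers of 3 and apply the nontrivial character mod 3 to what remains. A short Python sketch (the function name is mine):

```python
def lambda3(n):
    """Character-like multiplicative function: sends 3 to -1, primes
    p ≡ 1 (mod 3) to +1, and primes p ≡ 2 (mod 3) to -1.

    Write n = 3^a * m with m coprime to 3; then the value is
    (-1)^a times chi(m), where chi is the nontrivial character mod 3
    (which is completely multiplicative, so no factorization of m
    is needed).
    """
    a = 0
    while n % 3 == 0:
        n //= 3
        a += 1
    chi = 1 if n % 3 == 1 else -1
    return (-1) ** a * chi
```

The partial sums $\sum_{n\le N}\lambda_3(n)$ of this sequence are what the second graph shows.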

The discrepancy of the second sequence is 3 rather than 2, so in that respect it is a worse sequence. However, its graph oscillates less on all distance scales except the very largest. This suggests that we might try to find a parameter with properties 1-3 by searching for a parameter that is particularly low for the second sequence and other character-like sequences.

Suppose we agree to take some weighted average of the r-drifts. Why should we take this particular one? The idea is that the graphs of the partial sums of the character-like functions have a self-similarity that means that it makes sense to look at them at several different distance scales, each larger than the previous one by a constant factor. We would like to attach equal weight to these distance scales, which suggests an average such as $k^{-1}(\delta(2)+\delta(4)+\dots+\delta(2^k))$. The particular average chosen is simply a smoother version that does not give any particular significance to one ratio such as 2.

## How might one prove that this parameter has Property 1?

It is not hard to show that this parameter is unbounded for character-like sequences, and it can trivially be bounded by the discrepancy, so it will be useful only if we can prove that it has Property 1. At the moment it is not even clear what Property 1 means, since there are many different character-like functions, and the parameter takes different values at them. It also appears to be at least locally minimized at character-like sequences, in the sense that it seems difficult to change a character-like sequence at a small number of primes without increasing the parameter. But how could one use it in proofs? At the time of writing this is far from clear, but the parameter has been introduced only very recently, so perhaps it will become clearer.

## Mean square of partial sums

Let $f$ be a completely multiplicative function (either to $\{-1,1\}$ or to $\mathbb{T}$). We are particularly interested in the quantity $\max_N|f(1)+\dots+f(N)|$. However, estimating a maximum can be hard, and it is often easier to estimate an $L_2$ average instead. This suggests that we might look at the quantity $N^{-1}\sum_{n=1}^N|f(1)+\dots+f(n)|^2$.

Of course, showing that this quantity is unbounded (as $N$ tends to infinity) is a stronger statement than showing that the largest partial sum is unbounded. But $L_2$ averages are often more tractable, so if the stronger statement is true, it may well be easier to prove. And if we test it against the known examples of completely multiplicative functions, it does appear to be unbounded.
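Such a test is straightforward to run. Here is a small Python sketch (the names are my own) that computes the mean square partial sum, tried out on the character-like function mod 3 from the earlier example (restated here so the snippet is self-contained):

```python
def lambda3(n):
    # Character-like function mod 3, as in the earlier example:
    # sends 3 to -1, p ≡ 1 (mod 3) to +1, p ≡ 2 (mod 3) to -1.
    a = 0
    while n % 3 == 0:
        n //= 3
        a += 1
    return (-1) ** a * (1 if n % 3 == 1 else -1)

def mean_square_partial_sum(f, N):
    """Compute N^{-1} * sum_{n=1}^{N} |f(1)+...+f(n)|^2."""
    total = 0.0
    partial = 0.0
    for n in range(1, N + 1):
        partial += f(n)
        total += abs(partial) ** 2
    return total / N
```

For example, the partial sums of $\lambda_3$ up to $n=10$ are $1,0,-1,0,-1,0,1,0,1,2$, giving a mean square of $9/10$; running this for larger $N$ is one way to probe the unboundedness question experimentally.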

A much stronger reason for being interested in this quantity, and indeed the reason it came up, is that Fourier reduction has shown that the Erdős discrepancy problem for general $\pm 1$ functions has a positive answer if the mean square partial sum of every completely multiplicative function to $\mathbb{T}$ is bounded below by a function that tends to infinity.