Sharp bounds for the exponential sum of a polynomial

1. A theorem on the growth of exponential sums evaluated at polynomials

Let {d\geq 2} be an integer and fix {\underline{\alpha}=(\alpha_1,\dots,\alpha_d)\in [0,1]^d}. We are interested in bounds for

\displaystyle S_d(\underline{\alpha}, N):=\sum_{n\leq N}e(\alpha_1n+\cdots +\alpha_d n^d)

where {N} is a large number. The main theorem in this regard is the following.
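For concreteness, the sum can be evaluated numerically. The following sketch (function name and parameters are our own choices, not from the text) computes {S_d(\underline{\alpha}, N)} directly from the definition:

```python
# Direct evaluation of S_d(alpha, N) = sum_{n<=N} e(alpha_1 n + ... + alpha_d n^d),
# where e(x) = exp(2*pi*i*x).  The function name S_d is our own choice.
import numpy as np

def S_d(alpha, N):
    alpha = np.asarray(alpha, dtype=float)
    n = np.arange(1, N + 1, dtype=float)
    # phase at n: alpha_1 * n + alpha_2 * n^2 + ... + alpha_d * n^d
    phase = sum(a * n ** (j + 1) for j, a in enumerate(alpha))
    return np.exp(2j * np.pi * phase).sum()

print(abs(S_d([0.0, 0.0], 10)))  # alpha = 0 gives the trivial bound |S| = N = 10
```

The trivial bound {|S_d(\underline{\alpha},N)|\leq N} is attained at {\underline{\alpha}=0}; the theorem says that for typical {\underline{\alpha}} the sum exhibits square-root cancellation.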

Theorem 1 

Let {\epsilon>0}. Then

\displaystyle |S_d(\underline{\alpha}, N)|\leq N^{1/2+\epsilon}

for all {\underline{\alpha}\in [0,1]^d} outside a set of measure tending to {0} as {N\longrightarrow +\infty}.

2. A probabilistic model

We will prove this theorem by showing that, for any {\epsilon>0}, the above exponential sum has size larger than {N^{1/2+\epsilon}} only on a set of measure tending to {0} with {N}.

Indeed, suppose now that {\underline{\alpha}} is chosen at random in {[0,1]^d}, where the space is equipped with the uniform (Lebesgue) probability measure. Let

\displaystyle X_{d,N}=\frac{|S_d(\underline{\alpha}, N)|^2}{N}.

By expanding the square and using the formula

{\int_{0}^{1}e(\alpha n)d\alpha=1} if {n=0} and {0} otherwise,

we may easily deduce that

\displaystyle \mathbb{E}[X_{d,N}^2]=\frac{\#\mathcal{S}(d,N)}{N^2},

where {\mathcal{S}(d, N)} is the set of quadruples of positive integers {(x_1,x_2,x_3,x_4)} with {x_i\leq N} solving the system

\displaystyle \left\{ \begin{array}{l} x_1+x_2=x_3+x_4;\\ \quad\quad\vdots\\ x_1^d+x_2^d=x_3^d+x_4^d.\end{array} \right.

This quantity is usually called Vinogradov's mean value and denoted by {J_{2,d}(N)}. It admits the integral representation

\displaystyle J_{2,d}(N)=\int_{[0,1]^d}\Big|\sum_{n\leq N}e(\alpha_1 n+\cdots +\alpha_d n^d)\Big|^{4} d\alpha_1\cdots d\alpha_d.
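The orthogonality relation used above can be checked numerically: an {M}-point Riemann sum of {\int_0^1 e(\alpha n)\,d\alpha} equals {1} for {n=0} and vanishes (up to rounding) for {0<|n|<M}, since it is then a full sum of {M}-th roots of unity. A minimal sketch:

```python
# Numerical check of the orthogonality relation int_0^1 e(alpha*n) d(alpha):
# an M-point Riemann sum gives 1 for n = 0 and (up to rounding) 0 for 0 < |n| < M.
import numpy as np

def unit_integral(n, M=1000):
    alpha = np.arange(M) / M
    return np.exp(2j * np.pi * alpha * n).mean()

print(round(abs(unit_integral(0)), 8))  # 1.0
print(round(abs(unit_integral(7)), 8))  # 0.0
```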

By a well-known theorem of Bourgain, Demeter and Guth, for any {\epsilon>0} and {N\geq 2} we have

\displaystyle \#\mathcal{S}(d, N)\ll_{d,\epsilon}N^{2+\epsilon}+N^{4-\frac{d(d+1)}{2}+\epsilon}.

However, here the situation is much easier and we do not need to appeal to such a deep theorem. Indeed, since {d\geq 2}, the system has only trivial solutions: every solution satisfies {\{x_1,x_2\}=\{x_3,x_4\}}, i.e. {(x_3,x_4)} is a rearrangement of {(x_1,x_2)}. One can see this by noticing that the solutions to the system

\displaystyle \left\{ \begin{array}{l} x_1+x_2=n_1;\\ \quad\quad\vdots\\ x_1^d+x_2^d=n_d.\end{array} \right.

with {n_1, \dots, n_d} positive integers determine, via Newton's identities, all the elementary symmetric functions in {x_1, x_2} (since {d\geq 2}, the first two power sums already suffice). Thus any other solution is just a permutation of the roots of the polynomial {(X-x_1)(X-x_2)}. We conclude that in this case {\#\mathcal{S}(d,N) =(2+o(1))N^2}.
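For {d=2} this count can be verified by brute force: the trivial solutions {\{x_3,x_4\}=\{x_1,x_2\}} number exactly {2N^2-N}. A small sketch (parameters illustrative, not from the text):

```python
# Brute-force count of S(2, N): quadruples 1 <= x_i <= N with
# x1 + x2 = x3 + x4 and x1^2 + x2^2 = x3^2 + x4^2.
from itertools import product

def count_solutions(N):
    return sum(
        1
        for x1, x2, x3, x4 in product(range(1, N + 1), repeat=4)
        if x1 + x2 == x3 + x4 and x1**2 + x2**2 == x3**2 + x4**2
    )

# Only trivial solutions {x3, x4} = {x1, x2} exist, giving exactly 2*N^2 - N.
for N in (4, 8, 12):
    print(N, count_solutions(N), 2 * N**2 - N)
```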

Thus, since {d\geq 2}, we get {\mathbb{E}[X_{d,N}^2]\ll 1}. Moreover, expanding the square and using orthogonality again, only the diagonal terms survive in {\int_{[0,1]^d}|S_d(\underline{\alpha},N)|^2 d\underline{\alpha}=N}, so that {\mathbb{E}[X_{d,N}]=1} and {V(X_{d,N})=\mathbb{E}[X_{d,N}^2]-1\ll 1}. By Chebyshev's inequality we get

\displaystyle P(|X_{d,N}-1|>\eta)\ll\frac{1}{\eta^2},\ \textrm{for\ any}\ \eta>0,

which leads to

\displaystyle P(|S_d(\underline{\alpha},N)|>\sqrt{N(1+\eta)})\ll \frac{1}{\eta^2}.

Taking now {\eta=N^{2\epsilon}} and invoking the arbitrariness of {\epsilon}, we deduce that {|S_d(\underline{\alpha},N)|\leq N^{1/2+\epsilon}} for all {d\geq 2} and all {\underline{\alpha}\in[0,1]^d}, apart from those in a set of measure at most {N^{-4\epsilon}}, which tends to {0} with {N}.
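The Chebyshev bound can be illustrated by simulation: sampling {\underline{\alpha}} uniformly, {X_{d,N}} concentrates near its mean, and large values are rare. A rough Monte Carlo sketch (all parameters are our own choices):

```python
# Monte Carlo: for random alpha in [0,1]^d, X_{d,N} = |S_d(alpha,N)|^2 / N
# has sample mean close to 1, and large values (here eta = 10) are rare.
import numpy as np

rng = np.random.default_rng(0)

def X(alpha, N):
    n = np.arange(1, N + 1, dtype=float)
    phase = sum(a * n ** (j + 1) for j, a in enumerate(alpha))
    return abs(np.exp(2j * np.pi * phase).sum()) ** 2 / N

d, N, trials = 2, 500, 2000
samples = np.array([X(rng.random(d), N) for _ in range(trials)])
print(samples.mean())          # close to E[X_{d,N}] = 1
print((samples > 10).mean())   # small tail fraction, as Chebyshev predicts
```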

3. Some further considerations

Remark 1 By considering higher moments of {X_{d,N}}, it is possible to prove for instance that

\displaystyle P(|S_d(\underline{\alpha},N)|>1.01\sqrt{N})>0,

in other words, {|S_d(\underline{\alpha},N)|} exceeds its root mean square value {\sqrt{N}} by a definite factor with positive probability.

Moreover, as {d,N\longrightarrow \infty} we also have {X_{d,N}\xrightarrow{\mathscr{L}}Exp(1)}, i.e. convergence in law to a standard exponential random variable.
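This convergence can be probed empirically by comparing the empirical CDF of {X_{d,N}} with the {Exp(1)} CDF {1-e^{-t}} at a few points. A rough simulation sketch (sample sizes and parameters are illustrative only):

```python
# Compare the empirical CDF of X_{d,N} with the Exp(1) CDF 1 - exp(-t).
import numpy as np

rng = np.random.default_rng(1)

def X(alpha, N):
    n = np.arange(1, N + 1, dtype=float)
    phase = sum(a * n ** (j + 1) for j, a in enumerate(alpha))
    return abs(np.exp(2j * np.pi * phase).sum()) ** 2 / N

d, N, trials = 3, 400, 3000
samples = np.array([X(rng.random(d), N) for _ in range(trials)])
for t in (0.5, 1.0, 2.0):
    print(t, round((samples <= t).mean(), 2), round(1 - np.exp(-t), 2))
```

The two printed columns should be close for moderate {d} and {N}, consistent with the stated limit.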

Some natural questions now arise:

Can the above procedures be made effective?

Can one do better by considering different scales?

Regarding the latter, we just point out that the random variables {X_{d,N}} behave like a random walk. Therefore, under some further assumptions, one could use the law of the iterated logarithm to deduce that

\displaystyle \limsup_{N\longrightarrow\infty}\frac{X_{d,N}}{\sqrt{N\log\log N}}=1\ \textrm{almost\ surely}.

Acknowledgement: The above discussion is a slightly enlarged version of what was presented by Sam Chow during one of the usual Number Theory meetings at Warwick.

Published by

Daniele Mastrostefano, Peter Mühlbacher

We are two PhD students at the University of Warwick.
