Cauchy-Schwarz is both a very simple and a very powerful bound. We won’t even try to give an extensive list of possible applications, but concentrate instead on a classical one in probability theory.
First notice that Cauchy-Schwarz can simply be interpreted as a fancy way of saying that $|\cos\theta| \le 1$ for all real $\theta$, by the following identity:

$\langle x, y \rangle = \|x\| \, \|y\| \cos\theta,$

where $\theta$ is the angle between $x$ and $y$.
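As a minimal numerical sanity check of this identity (a sketch in Python with NumPy; the concrete vectors are an arbitrary illustrative choice):

```python
import numpy as np

# Two arbitrary example vectors (any nonzero choice works).
x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

# The identity <x, y> = ||x|| ||y|| cos(theta) defines the angle theta.
cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Cauchy-Schwarz is exactly the statement |cos_theta| <= 1.
```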
In several scenarios the dimension of the underlying vector space is very large or infinite; think of random matrix theory, where one typically looks at sequences of vector spaces with dimension going to infinity, or of function spaces. We will treat these two cases as prototypical examples of (I) analysis in terms of some diverging parameter and (II) infinite-dimensional spaces, where the correlation between values plays an important role.
(I) On $\mathbb{R}^n$ with $n \gg 1$, how much off is Cauchy-Schwarz "typically"?
The intuition is that in higher dimensions it becomes increasingly unlikely for two vectors to be approximately parallel (i.e. $\theta \approx 0$ or $\theta \approx \pi$, in which case Cauchy-Schwarz is sharp), since there are "too many directions". To answer question (I) it is instructive to consider a pair of independent vectors $v, w$, chosen uniformly at random from the set of unit vectors.
By rotational symmetry we may fix $v$ to be the first standard basis vector $e_1 = (1, 0, \dots, 0)$ and choose $w$ uniformly at random from the set of unit vectors. Clearly its distribution is invariant under permuting indices, hence we expect (because of the constraint of having norm $1$) that each entry will be of order $1/\sqrt{n}$ and thus $|\langle v, w \rangle| = |w_1| \approx 1/\sqrt{n}$.
While this tells us that Cauchy-Schwarz is off by a factor of $\sqrt{n}$ (note that $\|v\| \, \|w\| = 1$ holds deterministically by assumption), it is not very illuminating. It is a good exercise to write out a proof not using the above symmetries: using the central limit theorem and the fact that the entries are asymptotically independent as $n \to \infty$, one then notices that the missing factor of $\sqrt{n}$ comes from the fact that Cauchy-Schwarz does not account for any cancellations in the inner product.
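This experiment is easy to run (a sketch in Python with NumPy; the dimension and trial count are illustrative choices, and uniform unit vectors are sampled by normalising standard Gaussians):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10_000, 500  # illustrative choices

# Normalised standard Gaussian vectors are uniform on the unit sphere.
w = rng.standard_normal((trials, n))
w /= np.linalg.norm(w, axis=1, keepdims=True)

# <e_1, w> is just the first entry of w.
inner = w[:, 0]
typical = np.sqrt(np.mean(inner**2))  # root-mean-square size

# Cauchy-Schwarz only guarantees |<e_1, w>| <= 1, but the typical size
# of the inner product is of order 1/sqrt(n) = 0.01 here.
```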
(II) How much off is $|\mathbb{E}[f]| \le \sqrt{\mathbb{E}[|f|^2]}$?
First we establish how this relates to Cauchy-Schwarz. We consider (more general) inner products of the form $\langle f, g \rangle := \int f \overline{g} \, \mathrm{d}\mu$, where $\mu$ is some probability measure and $f, g$ are square-integrable, complex-valued functions. Clearly we recover the usual Euclidean inner product (up to a factor of $n$) by choosing $\mu$ to be the normalised counting measure on $\{1, \dots, n\}$. Now note that Cauchy-Schwarz gives us that

$|\langle f, \mathbf{1} \rangle| \le \|f\| \, \|\mathbf{1}\|,$

where $\mathbf{1}$ is the constant $1$ function. But note that the left hand side is just $|\mathbb{E}[f]|$, whereas the right hand side equals $\sqrt{\mathbb{E}[|f|^2]}$ (since $\|\mathbf{1}\| = 1$ for a probability measure), hence giving the claim in the heading.
It is hard to imagine angles between functions, so we give a different way to "derive" Cauchy-Schwarz here. Recall the formula giving the variance of a function in terms of expectations:

$\mathrm{Var}(f) = \mathbb{E}[|f|^2] - |\mathbb{E}[f]|^2.$
Comparing with the non-negativity of the variance we thus get

$|\mathbb{E}[f]|^2 = \mathbb{E}[|f|^2] - \mathrm{Var}(f) \le \mathbb{E}[|f|^2],$

i.e. the gap in Cauchy-Schwarz is here exactly the variance of $f$.
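For a discrete probability measure the gap-equals-variance statement can be checked in a few lines (Python with NumPy; the example values are an arbitrary choice):

```python
import numpy as np

# A hypothetical function on {1,...,5} under the normalised counting
# measure, so that E[f] is just the mean of its values.
f = np.array([0.2, 1.5, -0.7, 3.0, 0.4])

Ef2 = np.mean(f**2)    # E[f^2], the squared norm of f
Ef_sq = np.mean(f)**2  # |E[f]|^2 = |<f, 1>|^2
gap = Ef2 - Ef_sq      # the Cauchy-Schwarz gap

# The gap is exactly the (population) variance of f.
```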
Side remark (a curious identity): Turning this "additive" bound into a "multiplicative" one (i.e. $|\mathbb{E}[f]|^2 = \mathbb{E}[|f|^2]\,(1 - \mathrm{Var}(f)/\mathbb{E}[|f|^2])$) and comparing with the cosine formulation from before, we notice (using $\cos^2\theta + \sin^2\theta = 1$) that $\sin^2\theta = \mathrm{Var}(f)/\mathbb{E}[|f|^2]$, where $\theta$ is the angle between $f$ and the constant function $\mathbf{1}$. (/end of side remark)
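The side-remark identity can likewise be verified numerically (Python with NumPy; again with arbitrary example values, and the inner product taken with respect to the normalised counting measure):

```python
import numpy as np

f = np.array([0.2, 1.5, -0.7, 3.0, 0.4])
one = np.ones_like(f)

# Inner product w.r.t. the normalised counting measure: <f, g> = mean(f g).
cos_theta = np.mean(f * one) / np.sqrt(np.mean(f**2) * np.mean(one**2))
sin2_theta = 1.0 - cos_theta**2

# The curious identity: sin^2(theta) = Var(f) / E[f^2].
```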
We conclude with the heuristic for a fixed(!) function $f$: "The smaller its variance, the better the bound $|\mathbb{E}[f]| \le \sqrt{\mathbb{E}[|f|^2]}$." But it still remains to see how big the variance "typically" is. Now this "typical" will depend on the application one has in mind; the analogy to the previous section would be to choose $f$ randomly as white noise, but this is (most of the time) not what one has in mind when thinking about a "uniformly random" function. Depending on which field one comes from, the "canonical" choice might be chosen from a smoother family of functions, like Brownian motion on the interval $[0,1]$; it is not hard to convince oneself that "more smoothness" will typically decrease the (now itself random!) $\mathrm{Var}(f)$. This can be made rigorous using Poincaré-type inequalities.
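The smoothness heuristic can be illustrated with a small simulation (Python with NumPy; grid size, trial count and the Brownian normalisation are illustrative choices, so only the comparison between the two cases is meaningful):

```python
import numpy as np

rng = np.random.default_rng(1)
m, trials = 1_000, 200  # grid points on [0, 1], number of samples

xi = rng.standard_normal((trials, m))

# "White noise": i.i.d. values at the m grid points.
var_noise = np.var(xi, axis=1)

# Brownian motion: rescaled cumulative sums of the same increments,
# giving a much smoother (continuous) random path.
bm = np.cumsum(xi, axis=1) / np.sqrt(m)
var_bm = np.var(bm, axis=1)

# Under this normalisation the smoother Brownian paths have a much
# smaller typical spatial variance (mean ~ 1/6) than white noise
# (mean ~ 1), so the bound |E[f]| <= sqrt(E[f^2]) is much closer
# to sharp for them.
```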