In this lecture we consider the same setting as in the previous post, that is we want to minimize a smooth convex function over $\mathbb{R}^n$. Previously we saw that the plain Gradient Descent algorithm has a rate of convergence of order $1/t$ after $t$ steps, while the lower bound that we proved is of order $1/t^2$.

We now present a beautiful algorithm due to Nesterov, called Nesterov's Accelerated Gradient Descent, which attains a rate of order $1/t^2$. First we define the following sequences:

" class="ql-img-displayed-equation quicklatex-auto-format" height="55" src="http://blogs.princeton.edu/imabandit/wp-content/ql-cache/quicklatex.com-320ac3e87f5287e14956f64c719a3337_l3.svg" style="margin: 0px !important; padding: 0px !important; border: none !important; vertical-align: middle !important; background-image: none !important; display: inline-block !important; background-position: initial initial !important; background-repeat: initial initial !important;" title="Rendered by QuickLaTeX.com" width="377" />

(Note that $\gamma_s \leq 0$.) Now the algorithm is simply defined by the following equations, with an arbitrary initial point $x_1 = y_1$,

$$y_{s+1} = x_s - \frac{1}{\beta} \nabla f(x_s),$$

$$x_{s+1} = (1 - \gamma_s) \, y_{s+1} + \gamma_s \, y_s.$$

In other words, Nesterov's Accelerated Gradient Descent performs a simple step of gradient descent to go from $x_s$ to $y_{s+1}$, and then it 'slides' a little bit further than $y_{s+1}$ in the direction given by the previous point $y_s$ (since $\gamma_s \leq 0$, the point $x_{s+1}$ lies beyond $y_{s+1}$, away from $y_s$).
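To make the updates concrete, here is a minimal Python sketch (mine, not the post's); `grad_f` is a placeholder for a gradient oracle of $f$, `beta` for the smoothness parameter $\beta$, and `x1` for the initial point:

```python
import numpy as np

def nesterov_agd(grad_f, beta, x1, num_steps):
    """Sketch of Nesterov's Accelerated Gradient Descent (not the post's code).

    grad_f    -- gradient oracle for the beta-smooth convex objective f
    beta      -- smoothness parameter of f (assumed known)
    x1        -- arbitrary initial point, with x_1 = y_1
    num_steps -- number of updates; returns y_{num_steps + 1}
    """
    x = np.array(x1, dtype=float)
    y = x.copy()
    lam_prev = 0.0                                      # lambda_0 = 0
    for _ in range(num_steps):
        lam = (1 + np.sqrt(1 + 4 * lam_prev ** 2)) / 2  # lambda_s
        lam_next = (1 + np.sqrt(1 + 4 * lam ** 2)) / 2  # lambda_{s+1}
        gamma = (1 - lam) / lam_next                    # gamma_s (<= 0)
        y_next = x - grad_f(x) / beta                   # gradient step: x_s -> y_{s+1}
        x = (1 - gamma) * y_next + gamma * y            # slide past y_{s+1}, away from y_s
        y = y_next
        lam_prev = lam
    return y
```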

The intuition behind the algorithm is quite difficult to grasp, and unfortunately the analysis will not be very enlightening either. Nonetheless, Nesterov's Accelerated Gradient Descent is an optimal method (in terms of oracle complexity) for smooth convex optimization, as shown by the following theorem.

Theorem (Nesterov 1983) Let $f$ be a convex and $\beta$-smooth function, then Nesterov's Accelerated Gradient Descent satisfies

" class="ql-img-displayed-equation quicklatex-auto-format" height="39" src="http://blogs.princeton.edu/imabandit/wp-content/ql-cache/quicklatex.com-33fc63d703b1e275e1474783b159fdf1_l3.svg" style="margin: 0px !important; padding: 0px !important; border: none !important; vertical-align: middle !important; background-image: none !important; display: inline-block !important; background-position: initial initial !important; background-repeat: initial initial !important;" title="Rendered by QuickLaTeX.com" width="237" />

We follow here the proof by Beck and Teboulle from the paper 'A fast iterative shrinkage-thresholding algorithm for linear inverse problems'.

Proof: We start with the following observation, which makes use of Lemma 1 and Lemma 2 from the previous lecture: let $x, y \in \mathbb{R}^n$, then

$$f\left(x - \frac{1}{\beta} \nabla f(x)\right) - f(y) \leq \nabla f(x)^{\top}(x - y) - \frac{1}{2 \beta} \|\nabla f(x)\|^2.$$

(Indeed, smoothness gives $f\left(x - \frac{1}{\beta} \nabla f(x)\right) - f(x) \leq - \frac{1}{2 \beta} \|\nabla f(x)\|^2$, convexity gives $f(x) - f(y) \leq \nabla f(x)^{\top}(x - y)$, and adding the two inequalities yields the claim.)

Now let us apply this inequality to $x = x_s$ and $y = y_s$, which gives

$$f(y_{s+1}) - f(y_s) \leq \nabla f(x_s)^{\top}(x_s - y_s) - \frac{1}{2 \beta} \|\nabla f(x_s)\|^2. \qquad (1)$$

Similarly we apply it to $x = x_s$ and $y = x^*$, which gives

$$f(y_{s+1}) - f(x^*) \leq \nabla f(x_s)^{\top}(x_s - x^*) - \frac{1}{2 \beta} \|\nabla f(x_s)\|^2. \qquad (2)$$

Now multiplying (1) by $(\lambda_s - 1)$ and adding the result to (2), one obtains with $\delta_s = f(y_s) - f(x^*)$,

" class="ql-img-displayed-equation quicklatex-auto-format" height="37" src="http://blogs.princeton.edu/imabandit/wp-content/ql-cache/quicklatex.com-64cca66d4f44282628543359d2f176b0_l3.svg" style="margin: 0px !important; padding: 0px !important; border: none !important; vertical-align: middle !important; background-image: none !important; display: inline-block !important; background-position: initial initial !important; background-repeat: initial initial !important;" title="Rendered by QuickLaTeX.com" width="582" />

Multiplying this inequality by $\lambda_s$ and using that by definition $\lambda_{s-1}^2 = \lambda_s^2 - \lambda_s$, one obtains

$$\lambda_s^2 \delta_{s+1} - \lambda_{s-1}^2 \delta_s \leq \frac{\beta}{2} \left( \frac{2 \lambda_s}{\beta} \nabla f(x_s)^{\top} \big( \lambda_s x_s - (\lambda_s - 1) y_s - x^* \big) - \left\| \frac{\lambda_s}{\beta} \nabla f(x_s) \right\|^2 \right). \qquad (3)$$

Now, using the elementary identity $2 a^{\top} b - \|b\|^2 = \|a\|^2 - \|a - b\|^2$ with $a = \lambda_s x_s - (\lambda_s - 1) y_s - x^*$ and $b = \frac{\lambda_s}{\beta} \nabla f(x_s)$, one can verify that

$$\begin{aligned} &\frac{2 \lambda_s}{\beta} \nabla f(x_s)^{\top} \big( \lambda_s x_s - (\lambda_s - 1) y_s - x^* \big) - \left\| \frac{\lambda_s}{\beta} \nabla f(x_s) \right\|^2 \\ &\qquad = \|\lambda_s x_s - (\lambda_s - 1) y_s - x^*\|^2 - \|\lambda_s y_{s+1} - (\lambda_s - 1) y_s - x^*\|^2, \end{aligned} \qquad (4)$$

where the second term was rewritten using $y_{s+1} = x_s - \frac{1}{\beta} \nabla f(x_s)$.

Next remark that, by definition, one has

$$\begin{aligned} x_{s+1} = y_{s+1} + \gamma_s (y_s - y_{s+1}) &\Leftrightarrow \lambda_{s+1} x_{s+1} = \lambda_{s+1} y_{s+1} + (1 - \lambda_s)(y_s - y_{s+1}) \\ &\Leftrightarrow \lambda_{s+1} x_{s+1} - (\lambda_{s+1} - 1) y_{s+1} = \lambda_s y_{s+1} - (\lambda_s - 1) y_s, \end{aligned} \qquad (5)$$

where the first equivalence uses $\gamma_s \lambda_{s+1} = 1 - \lambda_s$.

Putting together (3), (4) and (5) one gets with $u_s = \lambda_s x_s - (\lambda_s - 1) y_s - x^*$,

$$\lambda_s^2 \delta_{s+1} - \lambda_{s-1}^2 \delta_s \leq \frac{\beta}{2} \left( \|u_s\|^2 - \|u_{s+1}\|^2 \right).$$

Summing these inequalities from $s = 1$ to $s = t - 1$ one obtains:

" class="ql-img-displayed-equation quicklatex-auto-format" height="42" src="http://blogs.princeton.edu/imabandit/wp-content/ql-cache/quicklatex.com-f0d6e7938ec09c86add6bbac940b9658_l3.svg" style="margin: 0px !important; padding: 0px !important; border: none !important; vertical-align: middle !important; background-image: none !important; display: inline-block !important; background-position: initial initial !important; background-repeat: initial initial !important;" title="Rendered by QuickLaTeX.com" width="131" />

By induction it is easy to see that $\lambda_{t-1} \geq \frac{t}{2}$ (indeed $\lambda_1 = 1$ and $\lambda_s \geq \lambda_{s-1} + \frac{1}{2}$), and since $\lambda_1 = 1$ one has $u_1 = x_1 - x^*$, which concludes the proof.
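Finally, the two facts about the sequence $(\lambda_s)$ used in the proof, the identity $\lambda_{s-1}^2 = \lambda_s^2 - \lambda_s$ and the growth bound $\lambda_s \geq \frac{s+1}{2}$, are immediate from the recursion (square $2 \lambda_s - 1 = \sqrt{1 + 4 \lambda_{s-1}^2}$ for the first; use $\sqrt{1 + 4 \lambda_{s-1}^2} \geq 2 \lambda_{s-1}$ for the second, so that $\lambda_s \geq \lambda_{s-1} + \frac{1}{2}$). A quick numerical confirmation, again not part of the original post:

```python
import numpy as np

lam = 0.0  # lambda_0 = 0
for s in range(1, 50):
    lam_next = (1 + np.sqrt(1 + 4 * lam ** 2)) / 2  # lambda_s
    # identity used to obtain (3): lambda_{s-1}^2 = lambda_s^2 - lambda_s
    assert abs(lam ** 2 - (lam_next ** 2 - lam_next)) < 1e-8
    # growth bound used at the end of the proof: lambda_s >= (s + 1) / 2
    assert lam_next >= (s + 1) / 2 - 1e-8
    lam = lam_next
```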