Reminder: On Adapting Nesterov's Scheme to Accelerate Iterative Methods for Linear Problems
Hello everyone,
Next Monday (2023/10/09) at noon, Tao Hong will be presenting in EECS room 4122. Due to scheduling conflicts, we are meeting in a different room on this day only. Please fill out the food form before attending, so we can buy enough pizza for everyone.
If you have research to share, please volunteer to present using this link. Currently, no one is scheduled for 2023/11/06 (the next seminar). As a token of gratitude, presenters may choose a customized meal from a selection of local restaurants, as listed here.
All seminar info is available on the SPEECS website, and a Google calendar with dates/times/presenters can be found here. If you have any questions, you can contact Zongyu Li or me directly, or email speecs.seminar-requests@umich.edu. Suggestions are always welcome :)
Speaker: Tao Hong
Topic: On Adapting Nesterov’s Scheme to Accelerate Iterative Methods for Linear Problems
Abstract: Nesterov’s well-known scheme for accelerating gradient descent in convex optimization problems is adapted to accelerating stationary iterative solvers for linear systems. Compared with classical Krylov subspace acceleration methods, the proposed scheme requires more iterations, but it is trivial to implement and retains essentially the same computational cost as the unaccelerated method. An explicit formula for a fixed optimal parameter is derived in the case where the stationary iteration matrix has only real eigenvalues, based only on the smallest and largest eigenvalues. The fixed parameter, and the corresponding convergence factor, are shown to maintain their optimality when the iteration matrix also has complex eigenvalues that are contained within an explicitly defined disk in the complex plane. A comparison to Chebyshev acceleration based on the same information, namely the smallest and largest real eigenvalues (dubbed Restricted Information Chebyshev acceleration), demonstrates that Nesterov’s scheme is more robust in the sense that it remains optimal over a larger domain when the iteration matrix does have some complex eigenvalues. Numerical tests validate the efficiency of the proposed scheme. This work generalizes and extends the results of [1, Lemmas 3.1 and 3.2 and Theorem 3.3].
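To give a feel for the structure the abstract describes, below is a minimal sketch of a Nesterov-style momentum step wrapped around a stationary (Jacobi) iteration x_{k+1} = M y_k + c, y_{k+1} = x_{k+1} + beta (x_{k+1} - x_k). The fixed momentum parameter `beta` here is a hand-picked illustration, not the optimal formula derived in the talk; the function names are likewise hypothetical.

```python
import numpy as np

def jacobi_splitting(A, b):
    """Return the Jacobi iteration matrix M and vector c so that
    the stationary iteration is x_{k+1} = M x_k + c for A x = b."""
    D = np.diag(np.diag(A))
    M = np.eye(A.shape[0]) - np.linalg.solve(D, A)
    c = np.linalg.solve(D, b)
    return M, c

def accelerated_stationary(M, c, beta, iters=200):
    """Momentum-accelerated fixed-point iteration.
    beta = 0 recovers the plain (unaccelerated) stationary method;
    the per-iteration cost is one matrix-vector product either way."""
    x_prev = np.zeros_like(c)
    y = x_prev.copy()
    for _ in range(iters):
        x = M @ y + c               # one stationary step
        y = x + beta * (x - x_prev) # Nesterov-style extrapolation
        x_prev = x
    return x_prev

# Small diagonally dominant test system (Jacobi converges here)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
M, c = jacobi_splitting(A, b)

x_plain = accelerated_stationary(M, c, beta=0.0)  # plain Jacobi
x_acc = accelerated_stationary(M, c, beta=0.3)    # with momentum
```

Note the acceleration adds only one extra vector update per iteration, which is the "essentially the same computational cost" point from the abstract; the talk's contribution is choosing the fixed beta optimally from the extreme eigenvalues of M.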
Supplementary link: None
Mirror: http://websites.umich.edu/~speecsseminar/presentations/20231023/
Thanks,
Matt Raymond