Finite Horizon Optimization

Tom Luo (CUHK Shenzhen)

Jun 05, 2024, 09:00 — 09:30

In practical scenarios, there is often a strict upper bound on the number of algorithm iterations that can be performed within a given time limit. This raises the question of optimal step-size and hyperparameter design for a fixed iteration budget. We present recent advances in effectively addressing this highly non-convex problem for gradient descent and other algorithms. Additionally, we extend DeepMind's work on AlphaTensor and introduce new reductions in the number of operations required to compute more general matrix expressions, further accelerating calculations in linear algebra.
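For context, the fixed-budget step-size question has a classical closed-form answer in one special case: minimizing a convex quadratic whose Hessian spectrum lies in [mu, L]. There, the worst-case-optimal schedule of N gradient steps uses the reciprocals of the roots of a scaled Chebyshev polynomial. The sketch below illustrates that classical result only; it is not the method presented in the talk, and all names in it are illustrative.

```python
import numpy as np

def chebyshev_steps(mu, L, N):
    """Budget-optimal step sizes for N gradient steps on a quadratic with
    eigenvalues in [mu, L]: reciprocals of the roots of the Chebyshev
    polynomial of degree N rescaled to [mu, L] (classical result)."""
    k = np.arange(N)
    roots = (L + mu) / 2 + (L - mu) / 2 * np.cos((2 * k + 1) * np.pi / (2 * N))
    return 1.0 / roots

def run_gd(A, b, x0, steps):
    """Gradient descent on f(x) = 0.5 x'Ax - b'x with a given step schedule."""
    x = x0.copy()
    for gamma in steps:
        x = x - gamma * (A @ x - b)  # gradient of f is Ax - b
    return x

# Illustrative random SPD instance with spectrum inside [mu, L].
rng = np.random.default_rng(0)
n, N = 20, 15
mu, L = 1.0, 100.0
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(mu, L, n)) @ Q.T
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)
x0 = np.zeros(n)

x_cheb = run_gd(A, b, x0, chebyshev_steps(mu, L, N))
x_const = run_gd(A, b, x0, np.full(N, 2.0 / (L + mu)))  # best constant step

err_cheb = np.linalg.norm(x_cheb - x_star)
err_const = np.linalg.norm(x_const - x_star)
print("Chebyshev schedule error:", err_cheb)
print("Constant step error:     ", err_const)
```

On a fixed budget of N steps, the Chebyshev schedule's worst-case error contracts like a degree-N Chebyshev polynomial evaluated off [mu, L], which for this instance is roughly a tenth of the initial error, whereas the best constant step barely dents the extreme-eigenvalue components. The talk concerns how to design such schedules beyond this quadratic special case.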

Further Information
Venue:
ESI Boltzmann Lecture Hall
Associated Event:
One World Optimization Seminar in Vienna (Workshop)
Organizer(s):
Radu Ioan Bot (U of Vienna)
Yurii Malitskyi (U of Vienna)