390012 SE PhD-AW: Zero order stochastic optimization revisited (2022S)
Continuous assessment of course work
Labels
MIXED
Registration/Deregistration
Note: The time of your registration within the registration period has no effect on the allocation of places (no first come, first served).
- Registration is open from Mo 07.02.2022 09:00 to Mo 21.02.2022 23:59
- Registration is open from Th 24.02.2022 09:00 to We 30.03.2022 23:59
- Deregistration possible until We 30.03.2022 23:59
Details
Language: English
Lecturers
Classes (iCal) - next class is marked with N
- Wednesday 30.03. 15:00 - 16:30 Hörsaal 8, Oskar-Morgenstern-Platz 1, 1. Stock
- Thursday 31.03. 15:00 - 16:30 Hörsaal 6, Oskar-Morgenstern-Platz 1, 1. Stock
- Friday 01.04. 15:00 - 16:30 Hörsaal 5, Oskar-Morgenstern-Platz 1, Erdgeschoß
- Monday 04.04. 15:00 - 16:30 Hörsaal 8, Oskar-Morgenstern-Platz 1, 1. Stock
- Tuesday 05.04. 15:00 - 16:30 Hörsaal 8, Oskar-Morgenstern-Platz 1, 1. Stock
- Monday 25.04. 11:30 - 13:00 Digital
- Tuesday 26.04. 11:30 - 13:00 Digital
- Wednesday 27.04. 11:30 - 13:00 Digital
- Thursday 28.04. 11:30 - 13:00 Digital
- Friday 29.04. 11:30 - 13:00 Digital
Information
Aims, contents and method of the course
This course provides an introduction to zero-order stochastic optimization from a modern perspective and highlights its connection to nonparametric estimation. The main objects of study are approximations of the gradient descent algorithm in which the gradient is estimated by randomized procedures involving kernel smoothing. Assuming that the objective function is (strongly) convex and β-Hölder and that the noise is adversarial, we obtain non-asymptotic upper bounds both on the optimization error and on the cumulative regret of the algorithms. We also establish minimax lower bounds for any sequential search method, implying that the suggested algorithms are nearly minimax optimal in terms of sample complexity and the problem parameters. The approach extends to several other settings, such as non-convex stochastic optimization, estimation of the minimum value of the function, and zero-order distributed stochastic optimization.
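As a rough illustration of the algorithms described above, here is a minimal Python sketch of a projected gradient method whose gradient is replaced by a randomized two-point estimate with kernel smoothing, in the spirit of Polyak and Tsybakov (1990) and Akhavan, Pontil and Tsybakov (2020) from the reading list. The kernel K(r) = 3r, the bandwidth and step-size schedules, and the toy objective are illustrative assumptions chosen so that the estimator is first-order unbiased under the stated sampling; they are not the exact constants or kernels used in the papers.

```python
import numpy as np


def two_point_gradient_estimate(f, x, h, rng):
    """Randomized two-point zero-order gradient estimate with kernel smoothing.

    Queries the noisy oracle f at x + h*r*zeta and x - h*r*zeta, where zeta is
    uniform on the unit sphere and r is uniform on [-1, 1], and reweights the
    finite difference by the (illustrative) kernel K(r) = 3r, which makes the
    estimate exactly unbiased on linear functions under this sampling.
    """
    d = x.shape[0]
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)       # uniform direction on the unit sphere
    r = rng.uniform(-1.0, 1.0)         # smoothing variable fed to the kernel
    y_plus = f(x + h * r * zeta)       # noisy function evaluations
    y_minus = f(x - h * r * zeta)
    kernel = 3.0 * r                   # E[K(r)] = 0, E[r * K(r)] = 1 for r ~ Unif[-1, 1]
    return (d / (2.0 * h)) * (y_plus - y_minus) * kernel * zeta


def zero_order_sgd(f, x0, n_iter, radius, rng, beta=2.0, alpha=1.0):
    """Projected gradient descent driven by the zero-order estimates above.

    beta is the assumed Hölder smoothness, alpha the strong convexity constant;
    the schedules below are of the theoretical order but with illustrative constants.
    """
    x = x0.copy()
    for t in range(1, n_iter + 1):
        h = t ** (-1.0 / (2.0 * beta))                     # shrinking smoothing bandwidth
        eta = 1.0 / (alpha * t)                            # step size of order 1/(alpha * t)
        x = x - eta * two_point_gradient_estimate(f, x, h, rng)
        norm = np.linalg.norm(x)
        if norm > radius:                                  # projection onto the Euclidean ball
            x *= radius / norm
    return x


# Toy usage: minimize a noisy quadratic over the ball of radius 10.
rng = np.random.default_rng(0)
noisy_quadratic = lambda x: float(np.sum((x - 1.0) ** 2) + rng.normal(scale=0.1))
x_hat = zero_order_sgd(noisy_quadratic, x0=np.zeros(5), n_iter=5000,
                       radius=10.0, rng=rng, beta=2.0, alpha=2.0)
print(np.round(x_hat, 2))  # should be close to the all-ones minimizer
```

For smoothness β > 2, the references above replace such a simple linear kernel with higher-order kernels built from Legendre polynomials, which cancel additional Taylor terms in the bias of the gradient estimate.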
Assessment and permitted materials
More information to be announced in the first session of the seminar
Minimum requirements and assessment criteria
More information to be announced in the first session of the seminar
Examination topics
More information to be announced in the first session of the seminar
Reading list
References:
- Akhavan, A., Pontil, M., Tsybakov, A.B. (2020) Exploiting higher order smoothness in derivative-free optimization and continuous bandits. Proceedings of NeurIPS-2020. arXiv:2006.07862
- Akhavan, A., Pontil, M., Tsybakov, A.B. (2021) Distributed zero-order optimization under adversarial noise. Proceedings of NeurIPS-2021. arXiv:2102.01121
- Bach, F., Perchet, V. (2016) Highly-smooth zero-th order online optimization. Proc. 29th Annual Conference on Learning Theory, 1-27.
- Polyak, B.T., Tsybakov, A.B. (1990) Optimal order of accuracy of search algorithms in stochastic optimization. Problems of Information Transmission, 26, 45-53.
- Shamir, O. (2013) On the complexity of bandit and derivative-free stochastic convex optimization. J. Machine Learning Research, 30, 1-22.
Association in the course directory
Last modified: Th 11.05.2023 11:28