390053 DK PhD-BALOR: Advanced Methods in Optimization (2022W)
Continuous assessment course
Labels
ON-SITE
Registration/Deregistration
Note: The time of your registration within the registration period has no effect on the allocation of places (no "first come, first served").
- Registration from Mon 12.09.2022 09:00 to Fri 23.09.2022 12:00
- Deregistration possible until Fri 14.10.2022 23:59
Details
max. 15 participants
Language: English
Lecturers
Dates (iCal) - the next date is marked with N
- Thursday 27.10. 08:00 - 18:15 Seminar Room 4, Oskar-Morgenstern-Platz 1, 1st floor
- Friday 28.10. 08:00 - 18:15 Seminar Room 4, Oskar-Morgenstern-Platz 1, 1st floor
- Wednesday 11.01. 11:30 - 13:00 Lecture Hall 9, Oskar-Morgenstern-Platz 1, 1st floor
- Wednesday 11.01. 13:15 - 14:45 Lecture Hall 9, Oskar-Morgenstern-Platz 1, 1st floor
- Wednesday 11.01. 15:00 - 18:15 Lecture Hall 16, Oskar-Morgenstern-Platz 1, 2nd floor
- Thursday 12.01. 08:00 - 09:30 Lecture Hall 11, Oskar-Morgenstern-Platz 1, 2nd floor
- Thursday 12.01. 09:45 - 14:45 Lecture Hall 11, Oskar-Morgenstern-Platz 1, 2nd floor
- Thursday 12.01. 15:00 - 18:15 Lecture Hall 3, Oskar-Morgenstern-Platz 1, ground floor
Information
Aims, contents and method of the course
The course examination is based on a project consisting of a coding, a presentation, and a documentation part. Students are given the task of solving a practical problem with deep reinforcement learning. To do so, they implement a deep reinforcement learning algorithm in the Python programming language. Students present their implementation and results to the lecturer and answer questions in a subsequent Q&A. They also write a short documentation in which they embed their work in current research streams, explain implementation and methodological aspects, and give an outlook on further application fields of their implemented algorithm. In doing so, students show that they can transfer the knowledge gained in the lectures to practical problems and can apply deep reinforcement learning algorithms to solve analytics and business problems in practice. The overall grade consists of the grade for the presentation (50%) and the grade for the documentation (50%).
Type of assessment and permitted materials
To successfully attend this course, students should be comfortable with math-centric content, algorithms, and proofs. Students should have a general understanding of:
• basic linear algebra, including for example matrix multiplication and matrix-vector multiplication
• multivariate calculus, including for example partial derivatives, the chain rule, and gradients
• basic stochastics, including for example discrete and continuous random variables and probability distributions, as well as the notions of expectation and variance
• basics of mathematical optimization, including for example constrained optimization problems and the notion of convergence
For the programming exercises and the examination we use the Python programming language and the NumPy library. Thus, students should ideally be familiar with Python. Alternatively, knowledge of a general-purpose programming language (e.g., C++, Java) or MATLAB is also sufficient, as students will be able to adapt to Python very quickly.
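As an illustration of the kind of NumPy operations the prerequisites above refer to, the following sketch touches on each point; the matrices and distributions are made up for the example and are not part of the course materials:

```python
import numpy as np

# Basic linear algebra: matrix-vector multiplication
A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([1.0, -1.0])
y = A @ x  # [1*1 + 2*(-1), 3*1 + 4*(-1)] = [-1, -1]

# Multivariate calculus: gradient of f(v) = v^T A v,
# approximated numerically by central differences
def f(v):
    return v @ A @ v

eps = 1e-6
grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                 for e in np.eye(2)])
# Analytic gradient for comparison: (A + A.T) @ x

# Basic stochastics: expectation and variance of a
# discrete random variable with values 0, 1, 2
values = np.array([0.0, 1.0, 2.0])
probs = np.array([0.2, 0.5, 0.3])
mean = np.sum(values * probs)             # E[X] = 1.1
var = np.sum(probs * (values - mean)**2)  # Var[X] = 0.49
```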
Minimum requirements and assessment criteria
After attending this course, students will have acquired
• the competence/capability to analyze a practical problem by modelling it as a Markov Decision Process (MDP)
• profound knowledge in the domain of reinforcement learning and understanding of fundamental reinforcement learning theory, e.g., Q-learning, TD learning
• basic knowledge in deep learning and understanding of fundamental machine learning and deep learning theory, e.g., stochastic gradient descent, logistic regression, artificial neural networks
• profound knowledge in the domain of deep reinforcement learning (DRL) that combines the previous two competence areas and understanding of fundamental DRL theory, e.g., deep Q-networks (DQN), advanced policy gradient methods such as proximal policy optimization (PPO)
• the competence/capability to apply a DRL framework to a practical problem
• the competence/capability to evaluate DRL methods w.r.t. advantages and disadvantages
• the competence/capability to identify typical pitfalls in practical DRL applications (e.g., convergence issues with non-independent samples) and to circumvent them
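To give a concrete flavor of the tabular methods named above, here is a minimal sketch of Q-learning on a hypothetical five-state chain MDP; the environment and all hyperparameters are invented for this example and are not taken from the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chain MDP: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 yields reward 1 and ends the episode.
n_states, n_actions = 5, 2

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    done = s_next == n_states - 1
    return s_next, reward, done

# Tabular Q-learning update:
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
# Behavior policy is uniformly random; Q-learning is off-policy,
# so it still learns the optimal action values.
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

for _ in range(2000):
    s, done, steps = 0, False, 0
    while not done and steps < 100:
        a = int(rng.integers(n_actions))
        s_next, r, done = step(s, a)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) * (not done) - Q[s, a])
        s = s_next
        steps += 1

greedy_policy = np.argmax(Q, axis=1)  # prefers "right" in every non-terminal state
```

Deep Q-networks (DQN) replace the table `Q` with a neural network and add techniques such as experience replay to deal with the non-independent samples mentioned above.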
Examination topics
Students learn the theory behind (deep) reinforcement learning in lectures. In additional exercises and coding labs, students learn how to apply this knowledge to practical problems.
Literature
Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd ed.). MIT Press.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
Russell, S., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall.
Association in the course directory
Last modified: Thu 12.01.2023 10:11