Introduction to reinforcement learning and control theory#

This page contains material and information related to the spring 2025 version of the course Introduction to reinforcement learning and control, offered at DTU.

If you are curious about the course, you can read more about it here. If you are enrolled and just starting out, you should begin with the Installation. You can find the exercises and project descriptions in the menu to the left.

Practicalities#

Note

This page is automatically updated with corrections (typos, etc.). I therefore recommend bookmarking it and always using the newest version of the exercises.

Time and place: Building B341, auditorium 21 (TBA), 08:00–12:00
DTU Learn: 02465
Exercise code: https://lab.compute.dtu.dk/02465material/02465students.git (see the cloning sketch below)
Course description: kurser.dtu.dk
Lecture recordings: panopto.dtu.dk
Discord: Discord channel (invitation link)
Campus-wide Python support: pythonsupport.dtu.dk
Contact: Tue Herlau, tuhe@dtu.dk
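
If you want a local copy of the exercise code, the snippet below is a minimal sketch of cloning the repository listed above from Python. It assumes git is installed and available on your PATH; running the same git clone command directly in a terminal works just as well, and the Installation page remains the authoritative setup guide.

# Minimal sketch: clone the exercise repository listed under "Exercise code" above.
# Assumes git is installed and on your PATH; this is equivalent to running
# git clone <url> in a terminal. See the Installation page for the full setup.
import subprocess

repo_url = "https://lab.compute.dtu.dk/02465material/02465students.git"
subprocess.run(["git", "clone", repo_url], check=True)  # raises CalledProcessError if cloning fails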

Course schedule#

The schedule and reading can be found below. Click on the titles to read the exercise and project descriptions.

| #  | Date            | Title                              | Reading                                             | Homework | Exercise | Slides    |
|----|-----------------|------------------------------------|-----------------------------------------------------|----------|----------|-----------|
|    | Jan 24th, 2025  | Installation and self-test         | Chapter 1-3, [Her24]                                |          | [PDF]    |           |
| 1  | Jan 31st, 2025  |                                    | Chapter 4, [Her24]                                  | 1, 2     | [PDF]    | [1x] [6x] |
| 2  | Feb 7th, 2025   |                                    | Chapter 5-6.2, [Her24]                              | 1, 2     | [PDF]    | [1x] [6x] |
| 3  | Feb 14th, 2025  |                                    | Section 6.3; Chapter 10-11, [Her24]                 | 1, 2     | [PDF]    | [1x] [6x] |
| 4  | Feb 21st, 2025  |                                    | Chapter 12-14, [Her24]                              | 1, 2     | [PDF]    | [1x] [6x] |
|    | Feb 27th, 2025  | Project 1: Dynamic Programming     |                                                     |          |          |           |
| 5  | Feb 28th, 2025  |                                    | Chapter 15, [Her24]                                 | 1        | [PDF]    | [1x] [6x] |
| 6  | Mar 7th, 2025   |                                    | Chapter 16, [Her24]                                 | 1        | [PDF]    | [1x] [6x] |
| 7  | Mar 14th, 2025  |                                    | Chapter 17, [Her24]                                 | 1        | [PDF]    | [1x] [6x] |
| 8  | Mar 21st, 2025  |                                    | Chapter 1; Chapter 2-2.7; 2.9-2.10, [SB18]          | 1        | [PDF]    | [1x] [6x] |
|    | Apr 3rd, 2025   | Project 2: Control theory          |                                                     |          |          |           |
| 9  | Apr 4th, 2025   |                                    | Chapter 3; 4, [SB18]                                | 1, 2     | [PDF]    | [1x] [6x] |
| 10 | Apr 11th, 2025  |                                    | Chapter 5-5.4+5.10; 6-6.3, [SB18]                   | 1        | [PDF]    | [1x] [6x] |
| 11 | Apr 18th, 2025  |                                    | Chapter 6.4-6.5; 7-7.2; 9-9.3; 10.1, [SB18]         | 1        | [PDF]    | [1x] [6x] |
| 12 | Apr 25th, 2025  |                                    | Chapter 10.2; 12-12.7, [SB18]                       | 1        | [PDF]    | [1x] [6x] |
|    | May 1st, 2025   | Project 3: Reinforcement Learning  |                                                     |          |          |           |
| 13 | May 2nd, 2025   |                                    | Chapter 6.7-6.9; 8-8.4; 16-16.2; 16.5; 16.6, [SB18] | 1        | [PDF]    | [1x] [6x] |

The reading material is available here:

[Her24]: 02465_Notes.pdf
[SB18]: Reinforcement Learning: An Introduction, 2nd edition (authors' homepage)

You can find the exam Q&A slides here. Details about the exam Q&A session will be announced on DTU Learn.

Note

  • Chapters 1–3 are background information about Python and are therefore not part of the main course content (pensum). Knowledge of Python is, however, required for the exam.

  • The Homework column lists the problems that will be discussed during class. They are also indicated by a symbol in the margin of the PDF file. I encourage you to prepare them at home and present your solutions during the exercise session.

Exercise sessions#

Hint

I will upload solutions to some of the Python problems on GitLab.

The teaching assistants will be available Fridays 10:00–12:00 after the lecture.

Location: Building B341, auditorium 21 (not confirmed)
Instructor: Tue Herlau
Email: tuhe@dtu.dk

For the exercises, you are encouraged to prepare the homework problems at home (see the course schedule above) and present your solutions during the exercise session.

Additional reading material#

The material below is referenced in the course but it is not part of the course syllabus.

Tassa et al. [TET12]: tassa2012.pdf
Kelly [Kel17]: kelly2017.pdf

Bibliography#

[Her24]

Tue Herlau. Sequential decision making. (Freely available online), 2024.

[Kel17]

Matthew Kelly. An introduction to trajectory optimization: how to do your own direct collocation. SIAM Review, 59(4):849–904, 2017. (See kelly2017.pdf). URL: https://epubs.siam.org/doi/pdf/10.1137/16M1062569.

[SB18]

Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018. (Freely available online). URL: http://incompleteideas.net/book/the-book-2nd.html.

[TET12]

Yuval Tassa, Tom Erez, and Emanuel Todorov. Synthesis and stabilization of complex behaviors through online trajectory optimization. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 4906–4913. IEEE, 2012. (See tassa2012.pdf). URL: https://ieeexplore.ieee.org/abstract/document/6386025.

This page was last updated at:

>>> from datetime import datetime
>>> print("Document updated at:", datetime.now().strftime("%d/%m/%Y %H:%M:%S"))
Document updated at: 03/12/2024 15:27:25