Plans for week 40
Hi all. We hope you had a great weekend. Here's a short update with a brief summary of last week.
Last week, unfortunately, the wireless network was down (or very flaky) in the auditorium we use. As a result, the Zoom session dropped several times and we could not produce a video of the lecture. We will make a separate, new recording today and post it tomorrow. We apologize for these technical problems.
Last week we covered optimization methods, and these will be part of project 2. There the aim is to write your own gradient descent method with adaptive learning rate schedulers, as discussed during last week's lectures.
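As a warm-up for the project, a minimal sketch of gradient descent with a learning rate scheduler might look like the following. This is only an illustration, not the project's required solution; the function being minimized, the time-decay schedule, and names like `scheduler` are our own placeholder choices.

```python
# Minimal sketch: plain gradient descent on f(x) = (x - 3)^2 with a
# simple time-decay learning rate scheduler eta_t = eta0 / (1 + decay * t).
# The objective and schedule are illustrative placeholders only.

def gradient(x):
    # analytic gradient of f(x) = (x - 3)^2
    return 2.0 * (x - 3.0)

def scheduler(t, eta0=0.1, decay=0.01):
    # time-decay schedule: the step size shrinks as iterations grow
    return eta0 / (1.0 + decay * t)

x = 0.0  # starting guess
for t in range(500):
    x -= scheduler(t) * gradient(x)

print(f"minimum found near x = {x:.4f}")
```

In the project you would replace the toy gradient with the gradient of your cost function and experiment with other schedules (e.g. the adaptive methods discussed in the lectures).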
This week thus starts with a recap of last week's optimization schemes and pointers to code examples relevant for project 2. Project 2 will be available from Friday this week.
This week we also start with neural networks, and the week looks more or less like this:
Wednesday: Lab at FØ434, and Thursday at FØ398
Thursday: Recap and summary of stochastic gradient descent, with examples and automatic differentiation; we then begin neural networks.
Friday: Neural networks: setting up the basic steps, from the simple perceptron model to the multi-layer perceptron model. Presentation of project 2.
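To give a feel for Friday's step from the perceptron to the multi-layer perceptron, here is a small forward-pass sketch in NumPy. The weight values, layer sizes, and function names are arbitrary placeholders for illustration; training (backpropagation) comes later in the lectures.

```python
import numpy as np

# Illustrative sketch: a single perceptron versus a one-hidden-layer
# multi-layer perceptron, shown as forward passes. Sizes and weights
# are arbitrary placeholders.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    # one weight vector, one bias, one output per sample
    return sigmoid(x @ w + b)

def mlp(x, W1, b1, W2, b2):
    # the same idea stacked: a hidden layer feeding an output layer
    hidden = sigmoid(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

x = rng.normal(size=(5, 3))                  # 5 samples, 3 features
w, b = rng.normal(size=3), 0.0               # perceptron parameters
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # hidden layer (4 units)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer

print(perceptron(x, w, b).shape)   # one output per sample: (5,)
print(mlp(x, W1, b1, W2, b2).shape)  # (5, 1)
```

The point of the example is only the structural change: the MLP is the perceptron's weighted sum and activation, applied layer by layer.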
Reading suggestions for both days: Slides for week 40
Aurélien Géron's chapter 10 and Hastie et al., chapter 11. For stochastic gradient descent, we recommend chapter 4 of Géron's text. For neural networks we recommend Goodfellow et al., chapters 6 and 7, and Bishop, sections 5.1-5.4.
Best wishes to you all,
Morten et al