Before the lectures

On this page you will find a short description of what is going to happen in the next lecture. Being prepared is not such a bad idea!

Wednesday, January 17th: We shall begin by reviewing some of the material from the appendix at the end of the book. Most of this should be known from previous courses, but it may be a good idea to review it from our perspective. The lecture will be a little shorter than usual as I have an exam going on from 3 PM.

Friday, January 19th: The plan is to cover section 1.1 with an emphasis on the examples (but we probably won't do quite as many as the book does). Hopefully we shall also get started on section 1.2. It's a good idea to look at the material beforehand!

Wednesday, January 24th: The physical lecture is cancelled as I'm out of town, but I have made a recorded lecture in three parts (see links below). The material is from section 1.3, but I do things in a slightly different order than the book. The lecture of Friday, January 26th will have the same format and be on the remaining parts of section 1.3 and section 1.4.

1. Stopping times and the strong Markov property: Notes. Video.

2. Recurrent and transient states: Notes. Video. (Look out for slips of the tongue: In one of the examples I first call some states "transient", before I correct myself and change to the correct "recurrent".)

3. A criterion for recurrence: Notes. Video.

Friday, January 26th: The physical lecture is cancelled as I'm out of town, but I have instead made a recorded lecture in three parts (see links below). The first two are from section 1.3, the last one from section 1.4.

1. Decomposition of the state space. Notes. Video.

2. Communication classes. Notes. Video. (This year's textbook doesn't seem to use the term "communication classes" – or just "classes" – but they appear in so many old exam problems that they are good to know about).

3. Stationary distributions. Notes. Video. (Excuse the quality - I have to correct myself several times, but I didn't have the energy to make a new version).

Wednesday, January 31st: The physical lecture is cancelled due to a minor accident, and I have put out three videos instead. They are all from sections 1.4 and 1.5.

1. More about stationary distributions and especially the doubly stochastic situation. Notes. Video.

2. The detailed balance equation. Notes. Video.

3. Time reversed processes. Notes. Video.

Friday, February 2nd: I'm not able to give this lecture in person and have prepared two videos instead. The first video has a format I don't really like (the text is written out beforehand, and I only talk my way through it), but sitting for long periods is uncomfortable at the moment. The second video combines material from sections 1.6 and 1.8 (I'm aiming at proving the Convergence Theorem 1.19 before I do section 1.7 and the rest of section 1.6).

1. Kolmogorov's criterion for the detailed balance equation. Notes. Video. (In the book, this argument is hard to read due to many \(q\)'s appearing in random places where \(p\) should have been.)

2. Periodicity. Notes. Video.

Wednesday, February 7th: I'm on partial sick leave for the next two weeks, and will continue with the digital lectures through this period.

The main theme for February 7th will be convergence to stationary distributions, and the main event will be the proof of the Convergence Theorem 1.23 (also known as Theorem 1.19). The proof is in section 1.8 and uses the so-called coupling method, which is a widely used technique in more advanced Markov chain theory. The two other videos are on important theorems in section 1.7. As the three videos below cover more material than an ordinary lecture, I will make the videos for Friday shorter – and also less theoretical (the course is not at all as theoretical as this week's videos may lead you to assume!).

1. The Convergence Theorem. Notes. Video.

2. Return to states (Theorems 1.20 and 1.21). Notes. Video.

3. Existence of stationary measures (Theorem 1.24). Notes. Video.

Friday, February 9th: The first video is on the last remaining result from sections 1.6/1.7, the so-called Ergodic Theorem 1.22. The proof in the book is very sketchy, and I have tried to fill in a few missing steps. The other two videos illustrate how you can use the tools we now have at our disposal to analyse simple Markov chains. 

1. The Ergodic Theorem. Notes. Video.

2. Example: An irreducible Markov chain. Notes. Video.

3. Example: A Markov chain with transient states. Notes. Video.

Wednesday, February 14th: As there has been a lot of theory lately, I thought it was time for an example that sums up much of what we have been doing so far, and at the same time introduces the themes we are going to look at next. So in the first video, you will find what is basically a solution of Problem 1 from the exam in 2005. If you want to try this problem on your own, be aware that some of the older exam problems use a different convention for transition matrices: What they call the transition matrix is the transpose of what we call the transition matrix, and hence you have to turn columns into rows before you start to solve the problem our way.
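The column-to-row flip mentioned above can be sketched in a few lines of Python. The matrix here is a made-up two-state example, not the one from the 2005 exam:

```python
import numpy as np

# Hypothetical two-state transition matrix in the older exam convention:
# column j holds the probabilities of moving *from* state j,
# so each COLUMN sums to 1.
P_old = np.array([[0.9, 0.5],
                  [0.1, 0.5]])

# Our convention: row i holds the probabilities of moving *from* state i.
# Transposing turns columns into rows, after which each ROW sums to 1.
P = P_old.T

print(P.sum(axis=1))  # each row of P now sums to 1
```

After the transpose you can solve the problem "our way", e.g. finding a stationary distribution from \(\pi P = \pi\).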

The second video is an alternative treatment of the last part of the first video, and it paves the way for the third video, which is again more theoretical.

1. Long example based on Problem 1 from Exam 2005. Notes. Video.

2. An alternative (more matrix oriented) approach to the last part of the previous video. Notes. Video.

3. The fundamental theorem of exit probabilities (Theorem 1.28). Notes. Video.

Friday, February 16th: To compensate for the recent long and dense lectures, here are just a few short examples of applications of exit probabilities (and how to compute them).

1. Endgame in tennis. Notes. Video.

2. An unfair game. Notes. Video.

Wednesday, February 21st: Today's topic is exit times from section 1.10. In addition to the material in the book, I have written a supplementary note on the expected time a Markov chain spends in transient states. As this has been a popular exam topic, it is useful to know something about it when one tries to solve old exam problems. As usual for Wednesdays, there is more material here than for a normal lecture, and I shall try to restrain myself on Friday (seems I didn't manage).

1. Introductory examples. Notes. Video.

2. Proof of Theorem 1.29. Notes. Video.

3. Example: Three heads in a row. Notes. Video.

4. Time spent in transient states (see the supplementary note). By accident, the notes for the video got split in two: Notes (part 1), Notes (part 2). Video.
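The standard computation behind video 4 can be sketched as follows. This is the usual fundamental-matrix approach; the chain and its numbers are made up for illustration and need not match the notation of the supplementary note:

```python
import numpy as np

# Hypothetical chain with transient states {0, 1} and an absorbing state.
# Q is the transition matrix restricted to the transient states.
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])

# Fundamental matrix N = (I - Q)^{-1}: entry N[i, j] is the expected
# number of visits to transient state j when the chain starts in state i.
N = np.linalg.inv(np.eye(2) - Q)

# Row sums give the expected total time spent among the transient
# states before absorption, for each starting state.
print(N.sum(axis=1))
```

The inverse exists precisely because the states in Q are transient, so \(Q^n \to 0\) and the series \(I + Q + Q^2 + \dots\) converges to \((I - Q)^{-1}\).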

Friday, February 23rd: The main topic of the day is from section 1.11 and deals with the distinction between two kinds of recurrent behavior, positive recurrence and null recurrence. Null recurrence is a borderline kind of recurrence, and it can only occur when the state space is infinite. The first video introduces the topic and proves some fundamental theorems; the second video illustrates the concepts by looking at what happens with random walks on the half-line. The third video introduces the notion of branching processes and proves a fundamental theorem about when such a process will die out in finite time. These videos are the last ones from Chapter 1, and we shall continue with Chapter 2 next time.

1. Positive recurrence and null recurrence. Notes. Video.

2. Random walks on the half-line. Notes. Video.

3. Branching processes. Notes. Video.

Wednesday, February 28th: It's time to start Chapter 2. The first (long) video is on exponential random variables, the second, much shorter one, is on Poisson distributions.

1. Exponential distributions (section 2.1): Notes. Video. (Note that I make a mistake with the limits of integration at about 41:00, but correct myself soon after).

2. Poisson distributions (from the beginning of section 2.2 - most of you have probably seen this b