This is an overview of the mandatory readings for the exam. The syllabus consists of the lecture slides, the weekly exercises, and the mandatory assignments, together with the additional readings described below.
Jurafsky & Martin 3rd ed. (January 2023 version):
- Chapter 2 (text normalization): only sections 2.2 and 2.4
- Chapter 3 (n-gram LMs): until and including 3.5
- Chapter 4 (Naïve Bayes): except 4.9
- Chapter 5 (Logistic regression): except 5.10
- Chapter 6 (vectors and embeddings): except 6.6
- Chapter 7 (neural networks): until and including 7.5
- Chapter 8 (sequence labeling): except 8.7
- Chapter 9 (RNNs and LSTMs): only 9.8 (in connection with MT)
- Chapter 10 (Transformers): 10.1, 10.2 and 10.4
- Chapter 11 (fine-tuning and MLM): until and including 11.2
- Chapter 13 (machine translation)
- Chapter 15, "Dialogue systems and chatbots: the full chapter
- Chapter 16, "Speech recognition", only 16.1 and 16.5 (and excluding the part on statistical significance).
NLTK book:
- Chapter 1: section 3
- Chapter 2: sections 2, 4, 5
- Chapter 5: sections 1, 2, 5, 7
- Chapter 6: sections 1, 3, 5
Other obligatory readings:
- On ranking (covered in the first lecture on dialogue systems):
- Ransaka Ravihara, "What Is Learning to Rank: A Beginner’s Guide to Learning to Rank Methods", Towards Data Science.
- On decoding (covered in the second lecture on dialogue systems):
- Fabio Chiusano, Most used Decoding Methods for Language Models, Medium.
- On MDPs:
- Section 24.6 from the Dialogue chapter of 2nd edition of Jurafsky & Martin.
- On fairness:
- Ziyuan Zhong, "A Tutorial on Fairness in Machine Learning", Towards Data Science. NB: you can skip Section 5 of the text.
- On explainability:
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). NB: you can skip Section 4 of the paper, as well as the details of the experimental design and evaluation results.
- On privacy:
- Chapter 2 of Domingo-Ferrer, J., Sánchez, D., & Soria-Comas, J. (2016). Database Anonymization: Privacy Models, Data Utility, and Microaggregation-Based Inter-Model Connections. Synthesis Lectures on Information Security, Privacy, & Trust, 8(1), 1-136. NB: you can skip the technical details on measuring information loss.
Additional readings mentioned in the course but not required for the exam:
- Jurafsky & Martin, Appendix A (Viterbi algorithm)
- IR book, chapter 13 (Bernoulli Naïve Bayes)
- Forcada (2017) on machine translation
Formulas:
We expect you to know the formulas listed below. Most important, however, is to understand the logic behind them and to be able to explain how they should be applied and what they are used for. A few of the central formulas are written out after the list for quick reference.
- Zipf’s laws, type-token ratio, (conditional) frequency distributions
- Accuracy, precision, recall, F-measure, micro- and macro-averaging
- Bayes’ theorem, Naïve Bayes training and prediction formulas
- Additive smoothing
- Perceptron prediction formula and update rule
- Softmax, logistic regression update rule
- HMM training formula, greedy and Viterbi inference formulas
- Language model interpolation, perplexity
- Cosine similarity, TF-IDF weighting, analogical parallelograms
- Sigmoid function, ReLU, cross-entropy loss
- Self-attention
- Response selection in IR-based dialogue systems
- Word error rate
- Bellman equation (and the definition of MDPs)
- BLEU score
- Formulas for group fairness
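For quick reference, compact statements of some of the formulas above are given below in LaTeX notation. This is only a summary sketch using standard textbook notation (following Jurafsky & Martin); the lecture slides remain the authoritative versions, and their notation may differ slightly.

% Bayes' theorem and the Naive Bayes prediction rule
P(c \mid d) = \frac{P(d \mid c)\,P(c)}{P(d)},
\qquad
\hat{c} = \arg\max_{c \in C} P(c) \prod_{i} P(w_i \mid c)

% Additive (Laplace) smoothing of word counts within a class
P(w \mid c) = \frac{\mathrm{count}(w, c) + \alpha}{\sum_{w'} \mathrm{count}(w', c) + \alpha\,|V|}

% Evaluation metrics
\mathrm{Precision} = \frac{TP}{TP + FP}, \quad
\mathrm{Recall} = \frac{TP}{TP + FN}, \quad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}

% Interpolated trigram language model and perplexity on a test sequence of N words
\hat{P}(w_n \mid w_{n-2} w_{n-1}) = \lambda_1 P(w_n) + \lambda_2 P(w_n \mid w_{n-1}) + \lambda_3 P(w_n \mid w_{n-2} w_{n-1}),
\qquad
\mathrm{PP}(W) = P(w_1 \dots w_N)^{-1/N}

% Cosine similarity and tf-idf weighting
\cos(\mathbf{u}, \mathbf{v}) = \frac{\mathbf{u} \cdot \mathbf{v}}{\lVert \mathbf{u} \rVert \, \lVert \mathbf{v} \rVert},
\qquad
w_{t,d} = \mathrm{tf}_{t,d} \times \log \frac{N}{\mathrm{df}_t}

% Sigmoid, softmax, and cross-entropy loss
\sigma(z) = \frac{1}{1 + e^{-z}}, \quad
\mathrm{softmax}(\mathbf{z})_i = \frac{e^{z_i}}{\sum_j e^{z_j}}, \quad
L_{CE} = -\sum_i y_i \log \hat{y}_i

% Scaled dot-product self-attention
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V

% Word error rate (S substitutions, D deletions, I insertions, N reference words)
\mathrm{WER} = \frac{S + D + I}{N}

% Bellman equation for the value of a state in an MDP
V(s) = \max_{a} \sum_{s'} P(s' \mid s, a)\,\bigl[R(s, a, s') + \gamma\,V(s')\bigr]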
Other useful links for exam preparation: