Lecture November 10:
Part 1: Ethics for NLP (part 2)
Slides: PPTX | PDF
Recordings
Mandatory reading
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). NB: you can skip Section 4 of the paper, as well as the details of the experimental design and evaluation results.
- Chapter 2 of Domingo-Ferrer, J., Sánchez, D., & Soria-Comas, J. (2016). Database anonymization: privacy models, data utility, and microaggregation-based inter-model connections. Synthesis Lectures on Information Security, Privacy, & Trust, 8(1), 1-136. NB: You can skip the technical details on measuring information loss.
Optional reading
- Zhou, X., & Zafarani, R. (2020). A survey of fake news: Fundamental theories, detection methods, and opportunities. ACM Computing Surveys (CSUR), 53(5), 1-40.
- Pandya, J. (2019). The Dual-Use Dilemma of Artificial Intelligence. Forbes.
- Wagner, A. R., Borenstein, J., & Howard, A. (2018). Overtrust in the robotic age. Communications of the ACM, 61(9), 22-24.
- Williams, M. L., Burnap, P., & Sloan, L. (2017). Towards an ethical framework for publishing Twitter data in social research: Taking into account users’ views, online context and algorithmic estimation. Sociology, 51(6), 1149-1168.
Part 2: Neural Models for NLP
Presentation (roughly the first 30 slides this week)
Recordings
Mandatory reading
Jurafsky and Martin, Speech and Language Processing, 3rd ed. (Jan. 2022 edition)
- Ch. 7, "Neural Networks and Neural Language Models"
Lab session, Tuesday, November 15 at Sed
Exercise set 7