Student peer-to-peer code evaluation of mandatory assignments

The aim of peer-to-peer assessment is to enable students to demonstrate their understanding by discussing segments of their code with a peer reviewer (another student). While students are permitted to use smart assistants (e.g., generative language models) for help with the mandatory assignments, they must still understand their code thoroughly.

The peer-to-peer evaluation is conducted for each of the course's three mandatory assignments.

Guidelines for the students

  • During a 15-minute discussion, the reviewer asks questions about sections of the assignment (specific questions are sent out to the reviewer by the course teachers), while the reviewee elaborates on their code.
  • Evaluation is binary: satisfactory (the reviewee can explain their code) or unsatisfactory (the reviewee cannot properly explain what their code does or why).
  • Peer pairs are assigned by the course teachers via email, along with corresponding guiding questions.
  • The reviewer should complete and sign the PDF form, then upload it to Devilry by the specified deadline:
    • March 21st for the 1st obligatory assignment (UPDATE: extended until March 26th)
    • April 29th for the 2nd obligatory assignment
    • May 20th for the 3rd obligatory assignment

Background

IN3050/IN4050 has three mandatory assignments where the students submit their code. We cannot prohibit the use of generative models and smart assistants like [UiO-]GPT or GitHub Copilot, even though their use can in theory lead to over-reliance and a lack of understanding of the subject. The reason is that we can't (and don't want to) control exactly how a student works on their code. Prohibiting such assistance would also make the teaching less relevant, since most real-world software developers nowadays use generative models in their work.

Formally, the students are required to acknowledge which parts of their code are auto-generated, but this still leaves the problem of students who do not actually learn anything because they submit auto-generated code without spending time on understanding it.

That’s why we are introducing student peer review for the IN3050/IN4050 mandatory assignments. It works like this.

For every mandatory assignment, students are randomly matched into asymmetric “reviewer-author” peer pairs. Every reviewer has to submit both their own solution to the obligatory assignment and their evaluation of the submission of their “reviewee” match. This evaluation is binary (satisfactory / not satisfactory) and is obtained after the reviewer interviews the reviewee about their code for this specific assignment. It is the responsibility of the reviewer and the reviewee to find each other and hold this interview session (in person or online). Submitting the evaluation is an obligatory part of the assignment for both the reviewer and the reviewee.
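For illustration only, here is a minimal Python sketch of one way such a random matching could be produced: a single random cycle over the enrolled students, so that every student reviews exactly one peer and is reviewed by exactly one other peer, and nobody reviews themselves. The function name and the single-cycle scheme are assumptions made for this sketch; the actual matching procedure used by the course teachers may differ.

    import random

    def assign_review_pairs(students):
        # One way to build asymmetric reviewer -> reviewee pairs (an assumption
        # for this sketch, not necessarily the course's actual procedure):
        # shuffle the students into a single random cycle, so everyone reviews
        # exactly one peer and is reviewed by exactly one (different) peer.
        if len(students) < 2:
            raise ValueError("need at least two students to form review pairs")
        order = list(students)
        random.shuffle(order)
        return {order[i]: order[(i + 1) % len(order)] for i in range(len(order))}

    # Example usage with hypothetical student names:
    pairs = assign_review_pairs(["Alice", "Bob", "Chandra", "Dina"])
    for reviewer, reviewee in pairs.items():
        print(f"{reviewer} reviews {reviewee}")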

The interview should be about 10-15 minutes long; in it, the reviewee is supposed to explain parts of their code to the reviewer. It does not matter whether a generative assistant was used during the preparation of the assignment: the reviewer evaluates whether or not the author understands what their code does.

Note that this is not official grading. The results of the peer review will be used to better inform the course teachers about the obligatory assignment submissions and possible problems.

The purpose of the peer-to-peer evaluation is to incentivize the students (you) to actually understand their code, even if a large part of it is auto-generated. The whole procedure is based on trust: we do not plan to monitor the interviews themselves. There will be designated time slots for the interviews during the group sessions, but the students are also free to hold interviews at any time and place that is convenient for them.

If there are substantial reasons for you to opt out of the peer-to-peer evaluation scheme, please get in touch with one of the course teachers.
