Mandatory student peer-to-peer code evaluation of mandatory assignments

The aim of this mandatory peer-to-peer assessment is to enable the students to demonstrate their understanding by discussing segments of their code with a peer reviewer (another student). While students are permitted to utilize smart assistants (e.g., generative language models) to assist with mandatory assignments, it is imperative that the students still comprehend their code thoroughly.

The peer-to-peer evaluation is conducted for the last two mandatory assignments of the course, i.e., twice. Note that this is different from last year, when there was peer-to-peer review for all three assignments.

Guidelines for the students

  • During a 15-minute discussion, the reviewer asks questions about specific sections of the assignment, while the reviewee elaborates on their code. The course administration informs reviewers by e-mail which sections they should ask about.
  • Evaluation is binary: satisfactory (the reviewee can explain their code) or unsatisfactory (the reviewee cannot properly explain what their code does or why).
  • Peer pairs are assigned by the course teachers via e-mail, along with instructions on which sections of the assignment the reviewer should ask the reviewee about. The sections are chosen randomly.
  • The reviewer should complete and sign the PDF form, then upload it to Devilry within the following deadlines:
    • April 22nd for the second obligatory assignment
    • May 13th for the third obligatory assignment
  • Both reviewer and reviewee are required to contact each other as soon as possible: while the reviewer report follows the deadlines outlined above, contact between the peers should be established within 3 days. If your peer does not reply, contact the course administration (whoever sent you the e-mail) and let them know. This helps prevent delays in reassigning reviewers when needed.

Background

IN3050/IN4050 has three mandatory assignments where the students submit their code. We cannot prohibit using generative models and smart assistants like [UiO-]GPT or Github Copilot, even though their use can in theory lead to over-reliance and a lack of subject understanding. The reason is that we cannot (and do not want to) control how exactly a student works on their code. Prohibiting such assistance would also make teaching less relevant, since most real-world software developers nowadays use generative models to help their work.

Formally, the students are required to acknowledge which parts of their code are auto-generated, but this still leaves the problem of students not actually learning anything because they submit auto-generated code without spending time on understanding it.

That’s why we introduce student peer-reviewing for the IN3050/IN4050 mandatory assignments. It works like this:

For each assignment covered by peer review, students are randomly matched into asymmetric “reviewer-author” peer pairs. In other words, the reviewer is not reviewed by their own reviewee. Every reviewer has to submit both their own solution of the obligatory assignment and their evaluation of their reviewee’s submission. This evaluation is binary (satisfactory / not satisfactory) and is obtained after the reviewer interviews the reviewee about their code for this specific assignment. It is the responsibility of the reviewer and the reviewee to find each other and hold this interview session (in-person or online). The submission of the evaluation is an obligatory part of the assignment for both the reviewer and the reviewee.

The interview should be about 10-15 minutes long; in it, the reviewee is supposed to explain parts of their code to the reviewer. It does not matter whether any generative assistant was used during the preparation of the assignment: the reviewer evaluates whether the author understands what their code does.

Note that what is weighted in this part of the assignment is completing the evaluation process, not the verdict the reviewer gives the reviewee. The results of the peer review will be used to better inform the course teachers about the obligatory assignment submissions and possible problems.

The purpose of the peer-to-peer evaluation is to incentivize the students (you) to actually understand their code even if a large part of it is auto-generated. The whole procedure is based on trust: we do not plan to control the interviews themselves. There will be designated time slots for the interviews during the group sessions, but the students are also free to have interviews at any time and place which is convenient for them.

In case there are substantial reasons for you to opt out of the peer-to-peer evaluation scheme, please get in touch with one of the course teachers.

Published Jan. 8, 2025 5:13 PM - Last modified Jan. 13, 2025 4:38 PM