Understanding Peer Assessment Grade Components: Sync Mode Assignments
There are three components that make up the Overall Grade awarded on a Peerceptiv synchronous assignment: the submission grade, the review grade, and the task grade. (If the assignment is a group assignment with peer evaluation, there is a fourth component: the peer evaluation grade.)
These components are combined to produce the Overall Grade for any given assignment. The instructor can adjust the weight of each component in the Assignment Set-Up Wizard or in the assignment's Settings. By default, the Submission Grade counts for 40% of the Overall Grade, the Review Grade for 40%, and the Task Grade for 20%. With the default weights, the submission grade and the review grade affect the Overall Grade equally, while the task grade has less influence than the other two components. The instructor can alter these percentages based on their learning objectives for the assignment.
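The weighted combination described above can be sketched in a few lines of Python. Peerceptiv's internal implementation is not public, so the function name, the component keys, and the assumption that each component grade is a 0–100 percentage are all illustrative only:

```python
# Illustrative sketch only: Peerceptiv's internal calculation is not public.
# Assumes each component grade is a percentage (0-100) and that the
# Overall Grade is a weighted average using the default weights above.

DEFAULT_WEIGHTS = {"submission": 0.40, "review": 0.40, "task": 0.20}

def overall_grade(components, weights=DEFAULT_WEIGHTS):
    """Combine component grades (0-100) into an Overall Grade."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(components[name] * w for name, w in weights.items())

# A student with a 90 submission grade, 80 review grade, 100 task grade:
print(overall_grade({"submission": 90, "review": 80, "task": 100}))  # → 88.0
```

Changing the weights dictionary models an instructor adjusting the percentages in the assignment Settings.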
The Submission Grade is a measure of how well the student’s submission scored according to the rubric. This is determined using a weighted average of the responses of the reviewers to the rating prompts in the assignment rubric.
To arrive at a Submission Grade, the system first calculates the reviewing accuracy of each reviewer. A more accurate student reviewer (as determined by comparing their ratings with the average ratings on all documents they rated) will have their ratings count more heavily than those of a student with low accuracy.
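An accuracy-weighted average of this kind can be illustrated as follows. The exact weighting scheme is Peerceptiv's own and not documented here; this sketch simply assumes each reviewer carries an accuracy weight between 0 and 1:

```python
def weighted_submission_score(ratings, accuracies):
    """Average the reviewers' rubric ratings on one prompt, weighting
    each reviewer by an assumed 0-1 accuracy so that more reliable
    reviewers count more heavily."""
    total_weight = sum(accuracies)
    return sum(r * a for r, a in zip(ratings, accuracies)) / total_weight

# Three reviewers rate a submission 8, 9, and 4 on a 10-point prompt.
# The third reviewer has low accuracy, so the outlying 4 barely moves
# the score: the weighted result stays at 8.0 rather than the plain
# mean of 7.0.
print(weighted_submission_score([8, 9, 4], [0.9, 0.8, 0.2]))  # → 8.0
```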
In Peerceptiv, instructors can have students review one another and let grades be generated entirely by the peer review process, or they can be more involved in grading by reviewing student submissions themselves. If an instructor or TA reviews a submission, their ratings are treated as having the highest accuracy, and student reviewer accuracy is then judged against the standard set by the instructor's reviews.
The submission grade is relative, meaning that it will be calculated relative to the performance of other students in the assignment. You can choose to set up the assignment grading as Curved or Benchmarked. Please see the Relative Grading section below for a more detailed explanation of these options.
The Review Grade in a Peerceptiv assignment measures the quality of each student's reviewing behavior. In other words, it takes into account how accurately they rated their peers' work and how helpful their comments were, as determined by their peers.
The rating accuracy is determined by the rank order of the submissions reviewed for each rating prompt item in relation to the mean rank order of those same submissions. A student whose rank order is similar to that of the other reviewers will have a higher accuracy score. A student whose rank order is significantly different from the other reviewers will have a lower accuracy score.
Two other factors affect rating accuracy: the distance of each rating from the mean, and the presence of teacher reviews. First, if a student rates a submission 10/10 but the mean score is closer to 6/10, that student will have a lower accuracy score. Second, if a teacher or TA reviews submissions in the assignment, their ratings are considered 100% accurate: a student whose ratings are similar to the instructor's will have a higher accuracy score, while a student whose ratings differ sharply from the instructor's will have a lower one. In this way, instructors and TAs who choose to take part in the reviewing process can influence the submission grades and reviewing grades of multiple students.
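One standard way to quantify rank-order agreement of the kind described above is Spearman's rank correlation. Peerceptiv's actual formula is not published, so the sketch below is only a stand-in, and it assumes no tied ratings:

```python
def ranks(scores):
    """Rank scores from lowest (1) to highest; assumes no tied values."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    result = [0] * len(scores)
    for position, index in enumerate(order, start=1):
        result[index] = position
    return result

def rank_agreement(reviewer_scores, mean_scores):
    """Spearman rank correlation between one reviewer's ratings and the
    mean ratings on the same submissions: 1.0 means an identical
    ordering, -1.0 a completely reversed one."""
    n = len(reviewer_scores)
    d_squared = sum((a - b) ** 2
                    for a, b in zip(ranks(reviewer_scores), ranks(mean_scores)))
    return 1 - 6 * d_squared / (n * (n * n - 1))

# A reviewer who orders three submissions the same way as the class mean:
print(rank_agreement([10, 7, 3], [9, 6, 4]))   # → 1.0
# A reviewer whose ordering is completely reversed:
print(rank_agreement([3, 7, 10], [9, 6, 4]))   # → -1.0
```

Note that this captures only the rank-order component; the distance-from-mean penalty described above would be a separate term in whatever combination Peerceptiv actually uses.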
The Peerceptiv algorithm (based on over 15 years of academic research) has been designed to identify when students are careless reviewers or giving overly generous ratings. Students who do not assign careful ratings while reviewing or are trying to give all submissions the top rating will have a low accuracy score. This score means that their reviewing grade is low and that the ratings they gave will have little to no effect on the submission grades of the content creators. In other words, students who do not take reviewing seriously are not helping the submission grade of their fellow students but are lowering their own reviewing grade.
A student’s comment helpfulness is determined by the feedback ratings they receive from the peers whose work they reviewed. In the feedback phase, submission creators rate how helpful each reviewer’s comments were on a scale from 1 to 5. A reviewer who consistently receives scores of 5 will have a high comment helpfulness grade; lower scores produce a lower helpfulness grade.
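A minimal sketch of this calculation, assuming (purely for illustration) that the 1–5 feedback ratings are averaged and scaled to a percentage:

```python
def helpfulness_grade(feedback_ratings):
    """Average the 1-5 feedback ratings a reviewer received on their
    comments and scale the mean to a percentage. The averaging and
    scaling here are assumptions, not Peerceptiv's published formula."""
    mean = sum(feedback_ratings) / len(feedback_ratings)
    return mean / 5 * 100

# A reviewer whose comments were rated 5, 5, and 4 by their peers:
print(round(helpfulness_grade([5, 5, 4]), 1))  # → 93.3
```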
Both rating accuracy and comment helpfulness measures are combined and distributed in relation to the other student scores in the class. This means that the most accurate and helpful reviewers in the class will likely have an ‘A’ for their reviewing grade and the least accurate and helpful reviewers will have much lower reviewing grades.
The task grade is a measure of how many assignment tasks a student completed. If they completed all required reviews and feedback tasks, they will receive a 100% for their task grade. If they completed only some of the tasks, their grade will be lowered accordingly. For example, if a student completes 2 out of 3 required reviews and 2 out of 3 required feedback tasks, their task grade will be 67% (4/6). If the assignment has a self-assessment or peer evaluation component, then the student will have to complete those tasks as well to get full credit for their task grade.
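The 4-out-of-6 example above reduces to a simple completion ratio. As a sketch (the function name and percentage scaling are assumptions, though the arithmetic matches the worked example):

```python
def task_grade(completed_tasks, required_tasks):
    """Task grade as the percentage of required tasks completed."""
    return completed_tasks / required_tasks * 100

# 2 of 3 required reviews plus 2 of 3 required feedback tasks
# completed → 4 of 6 tasks ≈ 67%.
print(round(task_grade(2 + 2, 3 + 3)))  # → 67
```

Any required self-assessment or peer evaluation tasks would simply be added to both the completed and required counts.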