Peerceptiv Grading Summary
Three components make up the Overall Grade awarded on a Peerceptiv assignment: the Submission Grade, the Review Grade, and the Task Grade.
These three components are combined to produce the Overall Grade for any given assignment. The instructor can adjust the weight of each component in the Assignment Set-Up Wizard or in the Settings for the assignment. By default, the Submission Grade counts for 40 percent of the Overall Grade, the Review Grade for 40 percent, and the Task Grade for 20 percent. With these default weights, the Submission Grade and Review Grade affect the Overall Grade equally, while the Task Grade carries less weight than the other two components. The instructor can alter these percentages based on their learning objectives for the assignment.
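As a worked example of the default weighting (an illustrative sketch, not Peerceptiv's actual code; the function name is hypothetical):

```python
def overall_grade(submission, review, task, weights=(0.40, 0.40, 0.20)):
    """Combine the three component grades (each 0-100) using the
    assignment weights. Defaults match the 40/40/20 split."""
    w_sub, w_rev, w_task = weights
    return w_sub * submission + w_rev * review + w_task * task

# A 90 submission, 80 review, and 100 task grade:
# 0.4 * 90 + 0.4 * 80 + 0.2 * 100 = 88.0
```

Changing the weights tuple corresponds to changing the percentages in the Assignment Set-Up Wizard.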
Read on for a more detailed explanation of each component, as well as an explanation of Curved and Benchmarked Grading.
The Submission Grade is a weighted average of how reviewers rated the work product, according to the rating prompts in the assignment rubric.
To arrive at a Submission Grade, the system first calculates the reviewing accuracy of each reviewer. A more accurate student reviewer (as determined by comparing their ratings with the average ratings on all documents they rated) will have their ratings count more heavily than those of a less accurate reviewer.
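The accuracy weighting described above can be sketched as a weighted average, where each rating counts in proportion to the reviewer's accuracy score. This is a simplified illustration under assumed inputs, not Peerceptiv's internal formula:

```python
def submission_grade(ratings, accuracies):
    """Weighted average of reviewer ratings, where each rating counts
    in proportion to that reviewer's accuracy score."""
    total_weight = sum(accuracies)
    if total_weight == 0:
        # Degenerate case: fall back to a plain mean.
        return sum(ratings) / len(ratings)
    return sum(r * a for r, a in zip(ratings, accuracies)) / total_weight

# Two accurate reviewers (accuracy 0.9) rate a document 6; one
# inaccurate reviewer (accuracy 0.3) rates it 2. The weighted
# result stays much closer to 6 than a plain average would.
```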
In Peerceptiv, instructors can choose to have students review one another and let the grades be entirely generated by the peer review process. Or, instructors can be more involved in grading by reviewing student submissions (as many or as few as they would like for a Curved assignment, or a total of ten for a Benchmarked assignment). If an instructor or TA reviews a submission, their ratings are considered to have the highest accuracy. In that case, the student reviewer accuracy would be judged against the standard set by the instructor reviews.
To enter a review, you can click on Student Progress on the sidebar of the assignment overview page. Then, click Details next to the student’s name for whom you would like to enter a review. Click on the assignment and then the blue button on the left that says Enter Review Now. See below.
Instructor ratings are included in the calculation of the Submission Grade to the extent determined by the “Teacher’s Rating is:” setting chosen during assignment setup. The instructor’s rating can override the student ratings (“overrides student grades”), count as a weighted percentage (“sliding scale”), or count the same as a student review (“just one in the set”). The default is “just one in the set,” meaning that instructor reviews are given the same weight as student reviews when the Submission Grade is calculated.
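The three “Teacher’s Rating is:” options might be sketched as follows (the mode names, the default teacher weight, and the function itself are hypothetical; Peerceptiv’s actual blending may differ):

```python
def combine_teacher_rating(peer_ratings, teacher_rating,
                           mode="one_in_set", teacher_weight=0.5):
    """Blend an instructor's rating with peer ratings under the three
    "Teacher's Rating is" options (names are illustrative)."""
    peer_avg = sum(peer_ratings) / len(peer_ratings)
    if mode == "override":   # overrides student grades
        return teacher_rating
    if mode == "sliding":    # sliding scale (weighted percentage)
        return teacher_weight * teacher_rating + (1 - teacher_weight) * peer_avg
    # Default: just one rating in the set, same weight as a student review.
    return (sum(peer_ratings) + teacher_rating) / (len(peer_ratings) + 1)
```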
The submission grade is relative, meaning that it will be calculated relative to the performance of other students in the assignment. You can choose to set up the assignment grading as Curved or Benchmarked. Please see the Relative Grading section below for a more detailed explanation of these options.
The Review Grade measures the quality of a student’s ratings and comments on peer documents. It is made up of two components: the Accuracy Grade and the Helpfulness Grade. The Review Grade is also relative and is set according to the curve chosen during assignment setup.
Accuracy Grades measure how closely the ranking order of the ratings a reviewer provides on peer documents corresponds to the ranking order of peer rating averages for each rating prompt on those same documents.
Students with the highest accuracy rank the documents in the same order as the class averages and assign ratings close to those averages. For example, if a student’s ratings give Document B the lowest score, Document C a middle score, and Document A the highest score, and the average ratings rank those documents the same way (B lowest, C in the middle, A highest), that student will have good accuracy. If the student’s rank order differs from the average order for that rating, their accuracy decreases.
In addition to rank order, the deviation from the mean for each rating is also calculated as part of accuracy. The less a rating deviates from the average rating, the better the student’s accuracy will be. For example, if the average score for the first rating on a document is 6 and the student rates it a 6, the student’s accuracy will be higher than if they gave it a rating of 4. Deviation from the mean is considered after the rank-order component of accuracy is calculated.
Note: students will receive an accuracy score of 0 if they assign the same rating to all rating prompts across all the submissions that they review. This helps ensure that students provide thoughtful reviews that fairly assess their peers, rather than just assigning the same rating to everything. If you are setting up an assignment that includes many pass/fail or yes/no rating prompts, you may want to vary the point value for the rating prompts so that students are prevented from getting a 0 for giving the same “yes” or “pass” rating to all of the prompts on submissions that do meet those requirements.
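A rough sketch of an accuracy score that combines rank order with deviation from the mean might look like this (the 70/30 split between the two components, the 7-point rating scale, and the tie-breaking rule are all assumptions, not Peerceptiv’s published formula):

```python
def rank(values):
    """Rank positions (0 = lowest); ties broken by input order (simplified)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for position, index in enumerate(order):
        ranks[index] = position
    return ranks

def accuracy(student_ratings, average_ratings, max_rating=7):
    """Combine rank-order agreement with closeness to the class mean.
    The 70/30 weighting and the 7-point scale are assumptions."""
    if len(set(student_ratings)) == 1:
        return 0.0  # identical ratings across all reviews score 0
    matches = sum(s == a for s, a in
                  zip(rank(student_ratings), rank(average_ratings)))
    rank_score = matches / len(student_ratings)
    avg_deviation = (sum(abs(s - a) for s, a in
                         zip(student_ratings, average_ratings))
                     / len(student_ratings))
    deviation_score = 1 - avg_deviation / max_rating
    return round(0.7 * rank_score + 0.3 * deviation_score, 3)

# A student who ranks documents A > C > B, matching the class
# averages, and stays close to the mean scores high accuracy.
```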
If you expand the Reviewing Grade section on a student’s Grades page and click either question mark icon, you will see graphs that depict the student’s ratings relative to the average peer scores. See the example below:
The Helpfulness Grade measures the extent to which authors found a reviewer’s comments helpful and specific, as awarded through the feedback ratings (1 low to 5 high). Students are advised to give higher ratings to comments that are detailed, specific, and grounded in evidence from the student work, and lower ratings to reviews that are brief, vague, or lack thoughtful suggestions. Helpfulness scores are distributed on the same Reviewing Grade curve chosen at assignment setup; this calculation uses the Curve Mean and Standard Deviation settings.
Relative Grading Options
The submission grade and review grade are both relative, and they are set according to the same curve. The instructor can choose for the assignment to be graded on a curve (curved grading) or according to benchmarks (benchmarked grading).
When Curved Grading is selected for the Grading Style setting, all grades are distributed on the curve chosen by the instructor (using the Curve Mean and Standard Deviation settings). The default is a mean of 85 and a standard deviation of 10, but the instructor can change these settings and thus change the distribution.
It is NOT a true bell curve, meaning that there does not need to be an equal distribution of grades at the higher and lower ends of the curve. Instead, using the default curve means that most students’ submission and reviewing grades will be within the range of 75-95 percent. It is important to remember that a student’s score on the curve is determined by the ratings that they receive in relation to the rest of the class.
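One way to picture curved grading is as a linear rescaling of each student’s standing relative to the class onto the chosen Curve Mean and Standard Deviation. This sketch preserves the shape of the raw score distribution, which is why the result need not be a true bell curve (illustrative only, not Peerceptiv’s actual code):

```python
from statistics import mean, stdev

def curved_grades(raw_scores, curve_mean=85.0, curve_sd=10.0):
    """Shift each raw score's distance from the class average, measured
    in standard deviations, onto the instructor's Curve Mean and SD."""
    m, sd = mean(raw_scores), stdev(raw_scores)
    return [curve_mean + curve_sd * (score - m) / sd for score in raw_scores]

# Raw peer scores of 4, 5, and 6 land at 75, 85, and 95 on the
# default curve (mean 85, standard deviation 10).
```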
If you look at the Curved Grade Distribution graph below, you will see that most students in this class received scores between 75 and 95, with the greatest number receiving a score between 82 and 88. In a class of 200, 25 students received a document score below 75 and 7 received a score above 95. These scores do not include any late penalties or missing documents.
Looking at the same data represented in View 2, you can see that the grades are distributed along a straight line with most students scoring in the 75-95% range for their submission score.
Benchmark Grading also distributes grades based on relative performance, but it sets the range of grades based on the lowest- and highest-peer-ranked papers. When an instructor chooses Benchmark Grading as the Grading Style, Peerceptiv delivers the top 5 and the bottom 5 work products to the instructor after the Review phase. The instructor then grades those work products on a 0-100 scale. Once the instructor has graded those 10 work products, all other Submission Grades are distributed linearly between the instructor-set points, in accordance with the weighted peer ratings.
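A simplified sketch of that linear distribution, collapsing the instructor’s five top and five bottom benchmark grades into a single high and low anchor (an assumption made to keep the example short):

```python
def benchmark_grades(peer_scores, low_grade, high_grade):
    """Place every submission on a straight line between the
    instructor-set grades for the lowest- and highest-ranked work,
    positioned by its weighted peer rating average."""
    lo, hi = min(peer_scores), max(peer_scores)
    return [low_grade + (high_grade - low_grade) * (s - lo) / (hi - lo)
            for s in peer_scores]

# Peer scores of 2, 4, and 6 with benchmark grades of 70 (lowest)
# and 90 (highest) yield grades of 70, 80, and 90.
```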
You can see in the Benchmark Grade Distribution graph below that the instructor set the score (the stars) for the top 5 and bottom 5 documents. The rest of the documents are distributed according to the curve set by those documents. As in the curved grading example, most students’ grades are between 75 and 95.
Benchmark Grade Distribution View 2 makes it even clearer how the curve is set according to the benchmark grades set by the instructor.
As in curved grading, these grades are distributed based on the average ratings given by the students during the reviewing period. A few of the scores fall at the higher or lower ends alongside the benchmarked submissions, but most scores fall between the two benchmark sets.
Task Grades are a simple measure of whether the student did all the required tasks in the assignment. If a student completed all the tasks, they will receive 100% of the task grade. The Reviewing Task Grade and Feedback Task Grade are weighted equally. The task grade is not set on a curve.
If the option for bonus reviews is enabled, the bonus points will be added to the Overall Grade and will appear in the task grade column of the student grade breakdown. Bonus review points are not affected by the weight of the Task Grade.