This article explains how to understand and improve your reviewing grade.
Reviewing Grades in a Sync Mode Assignment
When you view your results in a Peerceptiv assignment, you will see a Grades card. In a Sync Mode assignment, your reviewing score is listed to the right of the overall grade.
Your reviewing grade is based on two measures: the accuracy of your ratings and the helpfulness of your comments. Read on to learn more about these two measures.
Part of your reviewing grade is based on how accurate your ratings are. The Peerceptiv algorithm, which has been tested and refined over the past decade, evaluates your skill in rating others’ work. Your accuracy grade is then generated from the raw data by comparing your reviewing accuracy to that of all your classmates and placing you on the curve set by your instructor.
If you have a very low reviewing grade, it is likely due to poor accuracy, meaning that your ratings differed significantly from your peers’. A very low reviewing grade can also occur if you gave everything you reviewed the same score, meaning that you selected the same numerical score for all rating prompts on all of the submissions you reviewed. The Peerceptiv algorithm penalizes this kind of behavior because it usually indicates that the reviewer is not carefully assessing the true quality of the work.
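Peerceptiv does not publish its grading formula, so the sketch below is only a hypothetical illustration of the general idea: an accuracy measure based on how far your ratings fall from the peer consensus, with an extra (invented) penalty for giving every prompt the identical score.

```python
def accuracy_score(my_ratings, peer_average_ratings, scale_max=5):
    """Illustrative only -- not Peerceptiv's actual algorithm.

    Returns a 0-1 accuracy score: 1.0 means perfect agreement with
    the peer consensus, lower means larger disagreement."""
    deviations = [abs(mine - avg)
                  for mine, avg in zip(my_ratings, peer_average_ratings)]
    mean_deviation = sum(deviations) / len(deviations)
    # Hypothetical penalty: giving the identical score to every
    # prompt is treated as a red flag, as described in the article.
    if len(set(my_ratings)) == 1:
        mean_deviation += 1.0
    return max(0.0, 1.0 - mean_deviation / (scale_max - 1))
```

For example, a reviewer whose ratings match the peer consensus exactly would score 1.0, while one who rated every prompt the same would lose points even if those ratings happened to sit near the average.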
The other part of your reviewing grade is based on how your peers perceive the helpfulness of your comments. During the feedback phase of the assignment, students read the comments their submission received and rate them for their helpfulness on a scale of 1-5. If you give thoughtful, detailed comments that help your peers understand and improve their work, it is more likely that you will get high feedback ratings. If you give brief, vague comments that don’t provide any useful information, then it is more likely that you will get lower feedback ratings. Lower feedback ratings on your comments lead to a lower helpfulness score.
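The helpfulness measure described above is easier to picture: your peers rate each of your comments on a 1-5 scale, and those ratings feed into your score. A minimal sketch, assuming the score is simply the normalized average of the feedback ratings (the real weighting is not published):

```python
def helpfulness_score(feedback_ratings):
    """Illustrative only: normalize the average of the 1-5
    helpfulness ratings a reviewer's comments received to 0-1."""
    if not feedback_ratings:
        return 0.0
    average = sum(feedback_ratings) / len(feedback_ratings)
    return (average - 1) / 4  # map the 1-5 scale onto 0-1
```

Under this sketch, consistently detailed comments rated 5 would yield a perfect helpfulness score, while brief, vague comments rated 1 or 2 would pull it toward zero.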
Again, the accuracy data and helpfulness data are combined to produce your reviewing grade. Jump ahead to read more about how to improve your reviewing grade on future assignments.
Reviewing Grades in an Async Mode Assignment
In an Async Mode assignment, you will see an Overall grade along with either green check marks or red X’s in the submission, reviewing, and task categories. A red X in the reviewing category means that you did not pass our reviewing accuracy threshold: according to our algorithm, your reviews were inaccurate or much less accurate than those of your peers. Usually this occurs because you gave the same rating to all prompts for all the submissions you reviewed, and your peers did not. This red X does not affect your overall grade, but your instructor will see that you did not pass the reviewing threshold and may choose to require additional reviewing or take other measures.
How to Improve Your Reviewing Grade
Improve your helpfulness scores by providing more specific and constructive comments.
- Give yourself plenty of time to complete the reviewing task and write thoughtful, detailed comments.
- Read the comment prompt carefully and respond to what is being asked.
- Be as specific and constructive as possible in your comments.
- Think about what you can tell your peers that will help them submit an improved product next time.
Improve your accuracy scores by carefully rating the submissions.
- Give yourself sufficient time to review the submissions. The necessary amount of time will vary depending on the length of the assignment, the number of rating prompts, and the number of required reviews. Students who procrastinate on the reviewing task often end up choosing ratings at random and leaving basic comments that are not helpful, affecting their accuracy and helpfulness grades.
- Read the rubric carefully and think about each rating prompt. Do you understand what each prompt is assessing? Do you recognize the difference between the rating levels? If you have questions about the rubric or rating prompts, please ask your instructor before submitting your first review.
- Try to rate like your instructor. When instructors grade, they objectively assign the rating that most closely describes that aspect of the document; they generally do not try to give all students A’s or F’s. Approach reviewing the same way: rate the submission as objectively as possible, based on the rating descriptors at each level.
- Notice if you tend to rate high or low. Do you consistently rate higher or lower than the class average? If so, go back to the rubric and its descriptions and use them to more carefully inform your ratings.
Note: If you receive a very low reviewing grade and you think it is in error, please reach out to the Peerceptiv support team (firstname.lastname@example.org) and we will be able to closely analyze the data that informed your grade and determine whether it is valid.