Peerceptiv Grading Summary
There are three components that make up the overall grade awarded on a Peerceptiv assignment: the submission grade, the review grade, and the task grade. The instructor can vary the weight of each component for any assignment. You can check the weights by clicking the Settings button on the assignment page.
The algorithm used to calculate student grades is complex. It is based on the ratings you receive, the ratings you assign, the accuracy of those who rated your document, the accuracy of your own ratings, and the number of required tasks you complete.
In Peerceptiv, submission grades and reviewing grades are determined relative to the rest of the class. Your instructor sets the benchmarks, or the curve mean and standard deviation, and each student's grade is then placed on that curve, informed by the ratings the document received (submission grade) or the accuracy of the student's ratings (reviewing grade). This is why assigning the highest rating does not guarantee that a document will receive an A. Each assignment grade is determined by multiple inputs, including ratings, accuracy, helpfulness, the assignment settings, and the number of tasks completed.
Your submission grade is determined by the ratings that you receive, based on your reviewers’ accuracy. In other words, a more accurate student reviewer (as determined by their ratings compared with the average ratings on all documents that they rated) will have their ratings count more heavily than a student who has low accuracy.
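Peerceptiv does not publish its exact formula, but the idea of weighting each reviewer's rating by that reviewer's accuracy can be sketched roughly like this (the function name and the 0-to-1 accuracy scale are assumptions for illustration):

```python
def weighted_submission_rating(ratings, accuracies):
    """Hypothetical sketch: average the ratings a document received,
    weighting each reviewer's rating by that reviewer's accuracy.
    `ratings` and `accuracies` are parallel lists; accuracies in [0, 1]."""
    total_weight = sum(accuracies)
    if total_weight == 0:
        # Fall back to a plain average if no reviewer has any accuracy weight.
        return sum(ratings) / len(ratings)
    return sum(r * a for r, a in zip(ratings, accuracies)) / total_weight

# A highly accurate reviewer's 6 counts more than a low-accuracy reviewer's 3:
weighted_submission_rating([6, 3], [0.9, 0.3])  # 5.25, vs. a plain mean of 4.5
```

The point of the sketch is only that the high-accuracy reviewer pulls the result toward their rating; the real algorithm's weighting scheme may differ.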
An instructor can review documents as well, and their reviews are considered to have the highest level of accuracy. The assignment can be set up so that the instructor’s rating overrides the student ratings (overrides student grades), has a weighted percentage (sliding scale), or counts the same as a student’s review (just one in the set). You can see this information if you click on the Settings button on the assignment screen. If the assignment is benchmarked, it means the instructor grades at least the five top-ranked documents and the five bottom-ranked documents, and then those grades determine the curve for the other documents.
The Review Grade measures the quality of your ratings and comments on peer documents. This grade is made up of two components: the Accuracy grade and the Helpfulness grade. Remember that your review grade is also set according to the curve your instructor sets. If the mean is set at 85 with a standard deviation of 10, most students' review grades will fall between 75 and 95.
Accuracy Grades measure how closely your ratings track with peer ratings on the same documents. Students with the highest accuracy rank the documents in the same order as the average rankings and assign ratings close to the average ratings for each document. For example, if you give Document B the lowest score, Document C a middle score, and Document A the highest score, and the average ratings also rank B lowest, C in the middle, and A highest, you will have good accuracy. If your rank order differs from the average for a rating, your accuracy decreases.
In addition to rank order, your deviation from the mean for each rating is also calculated as part of your accuracy. The less your rating deviates from the average rating, the better accuracy you have. For example, if the average score for the first rating on a document is 6 and you rate it a 6, your accuracy will be higher than if you give it a rating of 4. The deviation from the mean is considered after calculating your accuracy based on rank order.
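The two ingredients described above, rank-order agreement and deviation from the class average, can be sketched as follows. These helper functions and the dictionary format are illustrative assumptions, not Peerceptiv's actual computation:

```python
def rank_agreement(my_ratings, avg_ratings):
    """Hypothetical sketch: fraction of document pairs that my ratings
    put in the same order as the class-average ratings."""
    docs = list(my_ratings)
    pairs = [(a, b) for i, a in enumerate(docs) for b in docs[i + 1:]]
    agree = sum(
        1 for a, b in pairs
        # Same sign of difference means I ordered this pair the same way;
        # note that identical scores (difference 0) never count as agreement.
        if (my_ratings[a] - my_ratings[b]) * (avg_ratings[a] - avg_ratings[b]) > 0
    )
    return agree / len(pairs)

def mean_deviation(my_ratings, avg_ratings):
    """How far, on average, my ratings sit from the class-average ratings."""
    return sum(abs(my_ratings[d] - avg_ratings[d]) for d in my_ratings) / len(my_ratings)

my_r  = {"A": 7, "B": 2, "C": 5}
avg_r = {"A": 6.5, "B": 3.0, "C": 5.0}
rank_agreement(my_r, avg_r)   # 1.0 — same rank order as the class average
mean_deviation(my_r, avg_r)   # 0.5 — close to the average ratings as well
```

Note that a reviewer who gives every document the same score never produces a positive pairwise difference, which is consistent with the zero-accuracy rule described below.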
Note: If you assign the same score to all rating prompts for all documents that you rate for an assignment, you will receive an accuracy score of 0. This is designed to make you think about the performance of each document and the rating it deserves.
You can look at your accuracy more closely by going into your Peerceptiv grades for an assignment, expanding the grades to see each component of your assignment grade, and clicking on the question mark by accuracy or helpfulness. This displays a set of graphs that show your ratings compared with the average ratings, which can help you to see your accuracy.
Helpfulness Grades are calculated from the feedback you receive from your peers: the average of the back-evaluation scores, on a scale of 1-5, assigned to those of your reviews that received back-evaluation feedback. You can always click the View Reviews by You and Others button to see how your peers reviewed the same document. If your helpfulness scores are low and the feedback comments themselves do not contain helpful suggestions, read some of the other reviews to get ideas for what kinds of comments to make and how to make them constructively.
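A minimal sketch of that averaging, assuming reviews that received no back-evaluation are simply skipped (the handling of unscored reviews is an assumption):

```python
def helpfulness_score(back_evaluations):
    """Hypothetical sketch: average of the 1-5 back-evaluation scores
    peers assigned to your reviews; None means no feedback was given."""
    scored = [s for s in back_evaluations if s is not None]
    return sum(scored) / len(scored) if scored else None

helpfulness_score([5, 4, None, 3])  # 4.0 — the unscored review is ignored
```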
Task Grades are a simple measure of whether you did all the required tasks in the assignment. If you completed all the tasks, you receive 100% of the task grade. The Reviewing Task Grade and Feedback Task Grade are weighted equally. The task grade is not set on a curve.
If bonus reviews are allowed, you must complete any bonus review you begin during the reviewing period in order to receive full task credit for reviewing.
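Since the task grade is just a completion measure with the reviewing and feedback components weighted equally, it can be sketched as a simple fraction (the function and its signature are illustrative):

```python
def task_grade(reviews_done, reviews_required, feedback_done, feedback_required):
    """Hypothetical sketch: reviewing and feedback task components are
    weighted equally; each is the fraction of required tasks completed."""
    review_part = min(reviews_done / reviews_required, 1.0)
    feedback_part = min(feedback_done / feedback_required, 1.0)
    return 100 * (review_part + feedback_part) / 2

task_grade(4, 4, 4, 4)  # 100.0 — everything completed
task_grade(2, 4, 4, 4)  # 75.0 — half the reviews missing costs a quarter
```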
Relative Grading Options
The document grade and review grade are relative, and they are set according to the same curve. Your instructor chooses whether the assignment is graded on a curve (curved grading) or according to benchmarks (benchmarked grading).
In curved grading, the curve is determined by the mean and standard deviation your instructor sets before the assignment opens. The default is a mean of 85 and a standard deviation of 10, but the instructor can change these settings. It is NOT a true bell curve, meaning that grades do not need to be distributed equally at the higher and lower ends. Instead, using a curve means that for any Peerceptiv assignment with the default settings, most students' writing and reviewing grades will fall within the range of 75-95. It is important to remember that a student's score on the curve is determined by the ratings they receive in relation to the rest of the class.
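One standard way to place raw scores on an instructor-set mean and standard deviation is to rescale each score's z-score, which can be sketched like this. Peerceptiv's exact method is not documented here, so treat this as an assumption about how such a curve could work:

```python
from statistics import mean, stdev

def curved_grades(raw_scores, target_mean=85, target_sd=10):
    """Hypothetical sketch of curved grading: place each raw average rating
    on the instructor's curve by rescaling its z-score to the target
    mean and standard deviation."""
    m, s = mean(raw_scores), stdev(raw_scores)
    return [target_mean + target_sd * (x - m) / s for x in raw_scores]

# Raw average ratings (e.g. on a 1-7 scale) mapped onto the default 85/10 curve:
curved_grades([6.0, 5.0, 4.0])  # [95.0, 85.0, 75.0]
```

Notice that the output depends only on each score's position relative to the class, matching the point above that a high raw rating alone does not guarantee a high grade.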
If you look at the Curved Grade Distribution graph below, you will see that most students in this class received scores between 75 and 95, with the greatest number receiving a score between 82 and 88. In a class of 200, 25 students received a document score below 75 and 7 received a score above 95. These scores do not include any late penalties or missing documents.
Looking at the same data represented in View 2, you can see that the grades are distributed along a straight line with most students scoring in the 75-95% range for their document score.
In Benchmark Grading, the curve is determined by the grades the instructor assigns to the top 5 and bottom 5 documents. After the review period, the instructor grades the 10 documents that received the highest and lowest student ratings, and these determine the grade distribution.
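One simple way the benchmark grades could determine the rest of the distribution is to fit a line through the instructor-graded documents (average rating versus assigned grade) and apply it to everyone else. This least-squares sketch, including the toy data, is an assumption about the mechanism, not Peerceptiv's published algorithm:

```python
def benchmark_curve(bench_ratings, bench_grades):
    """Hypothetical sketch: fit a straight line through the instructor-graded
    benchmark documents (average rating -> grade) by least squares, then
    return a function that grades any other document from its rating."""
    n = len(bench_ratings)
    mx = sum(bench_ratings) / n
    my = sum(bench_grades) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(bench_ratings, bench_grades))
             / sum((x - mx) ** 2 for x in bench_ratings))
    intercept = my - slope * mx
    return lambda rating: slope * rating + intercept

# Toy benchmark data: instructor grades for two bottom- and two top-ranked docs.
grade = benchmark_curve([2.0, 2.5, 6.0, 6.5], [60, 65, 95, 100])
grade(4.5)  # a mid-ranked document lands between the benchmark grades
```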
You can see in the Benchmark Grade Distribution graph below that the instructor set the score (the stars) for the top 5 and bottom 5 documents. The rest of the documents are distributed according to the curve set by those documents. As in the curved grading example, most students’ grades are between 75 and 95.
Benchmark Grade Distribution View 2 makes it even clearer how the curve is set according to the benchmark grades set by the instructor.
As in curved grading, these grades are distributed based on the average ratings given by the students during the reviewing period. A few of the document scores are distributed at the higher or lower ends along with the benchmarked documents, but most scores fall between the set of benchmarked documents.