After posting and grading a Question Bank assessment, you can use the Summary Report to review the performance for each exam-taker and each category and to evaluate the effectiveness of each question.
Also see: Legacy Portal: View the Summary Report for a Performance Assessment
Before You Begin
To ensure that the report is complete, wait until all assessments have been submitted and graded.
Note: You can generate the report at any time, but if assessments are still being submitted or graded, the data will be incomplete.
- Select the Assessments menu.
- Select the assessment that you want to review.
- Select the Reporting/Scoring tab, and then select Summary Report.
- Select the categories to include.
- Select Categories: Use this button if you want to see a list of categories and select them one by one. The list will include all categories that you have permissions for and that were used on this assessment.
- Use Top 25: Use this button if you want to include the top 25 most-used categories in the assessment.
Report Details
The Summary Report will appear in a PDF Viewer window.
- Assessment Performance: This section shows you the overall outcomes of this assessment.
- Average, Low, and High scores
- Total Student Performance Histogram: This bar chart shows the number of exam-takers who scored in each percentage range.
- Assessment Score Reliability (KR-20): This quality metric indicates the reliability and consistency of the assessment. A high KR-20 indicates that if the same exam-takers took the same assessment again, the results would likely be similar; a low KR-20 means that the results would be more likely to differ.
- Questions Needing Categories: If this section appears, it indicates that one or more questions had no categories assigned to them at the time when the report was run.
- Learning Outcomes (Category Performance): In this section, each selected category is represented by a number line with indicators for the Low, Average, and High score. You'll also see the total number of questions that were tagged with each category. (Some questions might have been tagged with multiple categories.)
- At Risk Students: This section identifies the exam-takers with the lowest scores so that you can follow up with them.
- Question Performance: In this section, you'll see the performance of each question.
The metrics are described briefly below. For more details and tips about interpreting and applying them, see: A Guide to the Statistics
- Sequence #, Item ID, and Item Stem: These details help you to identify the question.
- Correct: The percentage of exam-takers who answered this question correctly.
- Upper 27%: The percentage of high-scorers who answered this question correctly.
- Lower 27%: The percentage of low-scorers who answered this question correctly.
- Point Bi Serial: The correlation between an exam-taker's response on this question and the exam-taker's overall performance on the assessment. A score close to 1 indicates a very strong correlation; success or failure on this question is a strong predictor of success or failure on the exam as a whole.
- Disc Index: The Discrimination Index indicates the difference in performance between the upper 27% and the lower 27% of exam-takers. If this question was a mastery-level item, a score from 0 to 0.2 is acceptable. If it was intended to be highly discriminating, a score of 0.25 to 0.5 should be expected.
- Response Frequencies: For True/False and Multiple Choice questions, you'll see the number of times that each answer choice was selected. For Hotspot questions, column A represents correct responses, and column B represents incorrect responses.
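For readers who want to sanity-check the Assessment Score Reliability figure described above, the standard Kuder-Richardson Formula 20 can be computed from a matrix of 0/1 question scores. The sample data below is illustrative only, and the report's exact computation may differ in minor details (for example, sample vs. population variance); this is a minimal sketch of the textbook formula.

```python
# Illustrative 0/1 score matrix: rows are exam-takers, columns are questions.
# This sample data is made up for demonstration purposes.
scores = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0],
]

def kr20(matrix):
    """Kuder-Richardson Formula 20: (k/(k-1)) * (1 - sum(p*q) / variance)."""
    n = len(matrix)              # number of exam-takers
    k = len(matrix[0])           # number of questions
    totals = [sum(row) for row in matrix]
    mean = sum(totals) / n
    # Population variance of the total scores.
    variance = sum((t - mean) ** 2 for t in totals) / n
    # Sum of p*q over items, where p is the proportion answering correctly.
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in matrix) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / variance)

print(round(kr20(scores), 3))  # 0.654 for this sample data
```

Values near 1 indicate high internal consistency; values near 0 (or negative) suggest the questions are not measuring a common construct.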
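The question-level metrics above (Correct, Upper 27%, Lower 27%, Disc Index, and Point Bi Serial) can be sketched the same way. The sample data, the group-size rounding, and the uncorrected point-biserial formula used here are assumptions for illustration; the report may compute these slightly differently.

```python
import math

# Illustrative 0/1 score matrix: rows are exam-takers, columns are questions.
scores = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0],
]

def question_metrics(matrix, j):
    """Return (correct, upper27, lower27, disc_index, point_biserial) for question j."""
    n = len(matrix)
    totals = [sum(row) for row in matrix]
    item = [row[j] for row in matrix]
    correct = sum(item) / n                  # proportion answering correctly

    # Rank exam-takers by total score, then compare the top and bottom groups.
    ranked = sorted(range(n), key=lambda i: totals[i], reverse=True)
    g = max(1, round(n * 0.27))              # assumed group-size rounding
    upper = sum(item[i] for i in ranked[:g]) / g
    lower = sum(item[i] for i in ranked[-g:]) / g
    disc = upper - lower                     # Discrimination Index

    # Uncorrected point-biserial: correlation of item score with total score.
    mean_t = sum(totals) / n
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    m1 = sum(t for t, x in zip(totals, item) if x) / max(1, sum(item))
    if 0 < correct < 1:
        pbs = (m1 - mean_t) / sd_t * math.sqrt(correct / (1 - correct))
    else:
        pbs = 0.0                            # undefined when everyone (or no one) is correct
    return correct, upper, lower, disc, pbs

print(question_metrics(scores, 0))
```

A large gap between the upper and lower groups (a high Disc Index) and a point-biserial well above 0 both indicate that the question separates strong exam-takers from weak ones.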