The computer-based administration of the scenario-based assessments, combined with the broad range of selected and constructed responses possible with this approach, will provide many opportunities to measure students' capabilities as defined in the assessment targets. The range of measures will be broader than that generated in a typical NAEP assessment of other subjects, so it is necessary to describe how all these measures might be handled. It is helpful to think of the measures as falling into two categories: student direct responses and pattern-tracking measures, which are based on student interactions with the tools and systems portrayed in the scenarios.
Conventional items always involve the student in a direct response. For example, after being presented with information in a diagram, the student is asked a text-based question and given a limited set of choices from which to select the best answer. Student direct responses can also be used in scenarios. For example, an assessment task in a scenario may ask the student to set two different values for a component of a system and observe what happens. The student direct response comes when, after observing the interaction with the system, the student is asked, for example, to compare and contrast the two outcomes and explain in a short written response why they happened as they did. This is a student direct response because, although the student interacted with the system, none of that interaction is captured to score the appropriateness of the student's response to the item; only the written observation and explanation are scored.
One type of student direct response is selection from a set of choices—e.g., multiple choice, checking all the boxes that apply, or, in a scenario-based assessment, selecting an object or choosing a tool for the task. Other types of direct response include providing a written analysis of a set of results and writing a short explanation of why a selection was made in a scenario.
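As a concrete illustration, these two families of direct response could be modeled as simple data records. The sketch below uses Python; the class and field names are hypothetical, chosen for illustration rather than drawn from any NAEP specification.

```python
from dataclasses import dataclass

# Hypothetical records for the two families of direct response.
# All names here are illustrative, not part of any NAEP specification.

@dataclass
class SelectedResponse:
    """Selection from a fixed set of choices, e.g., multiple choice,
    check-all-that-apply, or picking an object or tool in a scenario."""
    options: list[str]
    selected: list[int]  # indices of the options the student chose

@dataclass
class ConstructedResponse:
    """A written analysis or explanation; the text itself, not any
    preceding interaction with the system, is what gets scored."""
    prompt: str
    response_text: str
```

The key design point either way is that only the response record is passed to scoring; any interaction that preceded it is discarded rather than logged.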
By contrast, pattern-tracking measures draw on the interactions the student engages in, which may provide relevant evidence about whether the student possesses a skill that is an assessment target and which should, therefore, be captured, measured, and interpreted. For example, a student may be asked to pinpoint a malfunction in a technological system, such as a leak in a lawn sprinkler system. In responding to that task, the student's manipulation of the components of the system shows whether the student is testing the sprinkler components in a random or a systematic way. Thus what the student chose to manipulate, how the student manipulated it, and how long it took might all be measured and interpreted in combination to indicate whether the student possesses a particular skill related to troubleshooting.
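To make the idea concrete, the sketch below shows one way such interaction data might be logged and summarized for the sprinkler example. It is a minimal sketch under stated assumptions: the event fields and the revisit heuristic are illustrative, not the framework's actual scoring method.

```python
from dataclasses import dataclass

@dataclass
class ManipulationEvent:
    component: str    # which part of the system the student touched
    action: str       # e.g., "open_valve", "test_flow" (hypothetical names)
    timestamp: float  # seconds since the task began

def revisit_ratio(events: list[ManipulationEvent]) -> float:
    """Fraction of manipulations that revisit an already-tested component.

    A low ratio is one crude signal of a systematic rather than random
    search: the student tests each component once before circling back.
    """
    seen: set[str] = set()
    revisits = 0
    for e in events:
        if e.component in seen:
            revisits += 1
        seen.add(e.component)
    return revisits / len(events) if events else 0.0

# Example: the third event revisits valve_1, so the ratio is 1/3.
log = [
    ManipulationEvent("valve_1", "test_flow", 3.2),
    ManipulationEvent("valve_2", "test_flow", 8.9),
    ManipulationEvent("valve_1", "open_valve", 14.1),
]
assert abs(revisit_ratio(log) - 1 / 3) < 1e-9
```

In practice such a ratio would presumably be combined with timing and with which components were chosen, as the paragraph above suggests, rather than interpreted on its own.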
One type of pattern-tracking measure is the observation of patterns of action, for example, capturing the sequence of actions a student takes to determine whether the correct set of actions was performed and whether they were executed in the optimal order. Another pattern-tracking measure is tracking the manipulations the student performs in a scenario. How, for instance, did the student change the features of a Web search query (e.g., narrowing the topic), vary the parameters that control a component of a system (e.g., changing the gauge of wire mesh for bird protection in a model of a rooftop wind turbine), or transform an object from one form into another (e.g., transforming database entries into a graph or table for a presentation)? Pattern-tracking measures might also be used to assess certain aspects of communication or collaboration skills. For example, measuring the number of times a student communicates with virtual team members with particular expertise can provide a measure of the efficiency of the student's collaboration strategies.
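Two of the measures just described, checking whether the expected actions occurred in the optimal order and counting communications with virtual team members, might be computed along the following lines. The function names, expected sequence, and log formats are hypothetical assumptions for illustration.

```python
def actions_in_optimal_order(observed: list[str], expected: list[str]) -> bool:
    """Check that the expected actions all occur, in order, within the
    observed action log (unrelated actions may be interleaved)."""
    it = iter(observed)
    return all(step in it for step in expected)

def messages_per_expert(log: list[tuple[str, str]]) -> dict[str, int]:
    """Count messages sent to each virtual team member, one crude index
    of how efficiently the student routed questions to the right
    expertise. `log` holds (recipient, message_text) pairs."""
    counts: dict[str, int] = {}
    for recipient, _ in log:
        counts[recipient] = counts.get(recipient, 0) + 1
    return counts

# Example: the required steps appear in order despite an extra action.
assert actions_in_optimal_order(
    ["inspect_pipe", "close_valve", "test_flow"],
    ["inspect_pipe", "test_flow"],
)
```

As with the revisit heuristic above, these are single indicators; the text's point is that such measures would be interpreted in combination to support a claim about a targeted skill.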