Chapter Two: Design of the Reading Framework

A Goal for Reading Literacy Education

If reading literacy includes knowing when to read, how to read, and how to reflect on what has been read, then an obvious goal of reading literacy education is to develop good readers. Substantial research and classroom experience have provided a great deal of information about good readers. In general, good readers have positive attitudes about reading and positive perceptions of themselves as readers. They choose to read a variety of materials, recognizing that reading serves many purposes in their lives. They read often and have developed their own criteria for what makes a text enjoyable or useful. They function successfully in schools, homes, and workplaces. They attain the personal satisfaction that can come only from reading. Some characteristics of good readers that distinguish them from less proficient readers follow. Good readers:
Reading for meaning involves a dynamic, complex interaction among three elements: the reader, the text, and the context. The context of a reading situation includes the purposes for reading that the reader might use in building a meaning of the text. The graphic below illustrates the reader-text-context interaction.
[Graphic: the reader-text-context interaction]

Good readers bring to this interaction their prior knowledge about the topic of the text and their purposes for reading it, as well as their skill in reading, which includes their knowledge about the reading process and about the structure of texts. Different types of texts have different organizations and features that have an effect on how a reader reads them. Readers are oriented to a given text very differently, depending on the text itself and on their purposes for reading. Some readers are comfortable and successful when reading stories but are uncomfortable and unsuccessful when reading directions for assembling a bicycle. Some readers may have learned how to read and learn from textbooks but are less able to approach and appreciate a poem. Because students can be more or less proficient in reading different types of texts and in adopting different purposes for reading, it seems evident that the assessment of their performance must involve different types of text and different purposes for reading. The NAEP reflects these considerations by assessing three general types of text and reading situations:
Reading for literary experience usually involves the reading of novels, short stories, poems, plays, and essays. In these reading situations, readers explore the human condition and consider interplays among events, emotions, and possibilities. In reading for literary experience, readers are guided by what and how an author might write in a specific genre and by their expectations of how the text will be organized. The readers' orientation when reading for literary experience usually involves looking for how the author explores or uncovers experiences and engaging in vicarious experiences through the text.

Reading to be informed usually involves the reading of articles in magazines and newspapers, chapters in textbooks, entries in encyclopedias and catalogues, and books on particular topics. The type of prose found in such texts has its own features, and readers need to be aware of those features to understand it. For example, depending on what they are reading, readers need to know the rules of literary criticism, historical sequences of cause and effect, or scientific taxonomies. Readers read to be informed for different purposes; for example, to find specific pieces of information when preparing a research project or to get some general information when glancing through a magazine article. These purposes call for different orientations to text from those used in reading for literary experience because readers are specifically focused on acquiring information.

Reading to perform a task usually involves the reading of documents such as bus or train schedules; directions for games, repairs, or classroom or laboratory procedures; tax or insurance forms; recipes; voter registration materials; maps; referenda; consumer warranties; or office memos. When they read to perform tasks, readers must use their expectations of the purpose and structure of documents to guide how they select, understand, and apply information. The readers' orientation in these tasks involves looking for specific information in order to do something. Readers need to be able to apply the information, not simply understand it, as is usually the case in reading to be informed. Readers engaging in this type of reading are not likely to savor the style or thought in these texts, as they might in reading for literary experience.

The reading situations described above form the basis for the development of the scales by which the NAEP in Reading is reported. Performance on the literary, informational, and task-performing components is reported on separate scales. The proportion of items related to each reading situation changes from grade to grade to reflect the changing demands made of students as they mature. The proportions of items at each grade level are shown below.
At the fourth-grade level, reading to perform a task will not be reported as a scale but rather as descriptive results. This decision was the result of three considerations:
Though literature is still very important for these students, they read extensively for information in content areas such as social studies and science, for information related to hobbies and interests, and for job-related purposes. These scales support the need for teachers of science, civics, health, business, and technology -- as well as literature -- to understand the importance of skilled reading in their content areas and to promote the types of reading necessary to perform well in their classrooms.

Constructing, Extending, and Examining Meaning

Readers respond to a given text in a variety of ways as they use background knowledge and information from the text to construct an initial understanding, develop an interpretation to extend the text's meaning, and examine the meaning so as to respond personally and critically to the text. These various interactions between readers and text do not form a sequential hierarchy or a set of subskills. Rather, they should be part of the repertoire of readers at every developmental level. An understanding of these interactions was crucial in the development of the assessment.

Forming an Initial Understanding

Forming an initial understanding requires readers to provide an initial impression or global understanding of what they have read. It involves considering the text as a whole or from a broad perspective. In the assessment, the first question following the passage taps this aspect of reading. Questions on initial understanding might include:
Developing an interpretation requires readers to go beyond their initial impressions and develop a more complete understanding of what they have read. It involves linking information across parts of a text as well as focusing on specific information. Questions that ask readers to develop their interpretation might include the following:
Personal reflection and response require readers to connect knowledge from the text with their own general background knowledge. Questions that ask readers to reflect and respond might include:
Demonstrating a critical stance requires readers to stand apart from the text and consider it objectively. It involves a range of tasks including critical evaluation; comparing and contrasting; and understanding the impact of such features as irony, humor, and organization. Questions asking readers to demonstrate a critical stance might include the following:
The following table illustrates the interactions among the aspects of reading assessed in the 1992-2000 NAEP.
Some questions require making connections across parts of a text or between texts using personal reflection, critical stance, or both.
Fluency -- Special study of how well students read orally.
Strategic Behaviors and Knowledge about Reading -- When you have difficulty understanding what you are reading, what do you do?
Reading Habits and Practices -- Have you read a book for enjoyment in the last week? Do you have a library card for your public library?

To effectively and efficiently explore students' abilities to construct, examine, and extend meaning in a text, a combination of open-ended and multiple-choice items is used in the assessment. The type of items -- multiple-choice or open-ended -- is determined by the nature of the task.

The Board supported the inclusion of many open-ended items for a number of reasons. The first has to do with the nature of reading. As they read, readers are involved in a number of processes, including integrating information from the text with their own background knowledge, reorganizing ideas, and analyzing and critically considering the text. In an assessment of reading, it is important to have items that can directly and accurately reflect how readers use these processes. Open-ended items that require extended responses provide a means of examining whether students can generate their own organized and carefully thought-out responses to what they have read. Multiple-choice items do not permit this kind of assessment. Furthermore, open-ended items more closely resemble the real-world reading tasks that students must be able to perform to succeed both in and out of school. Open-ended items are also the trend in state and international assessment programs, and it is important that NAEP participate in such developments.

Multiple-choice items are used where the nature of the task calls for a single, clear answer to a question. Multiple-choice questions emphasize critical thinking and reasoning rather than factual recall.

Open-ended items are scored using primary-trait scoring, with scoring rubrics created for each question. Primary-trait scoring rates how well a reader accomplishes a task according to a few major criteria. The rubrics guide the scoring by giving specific criteria for assigning a number score to levels of success in answering the question. (See appendix C for passages and rubrics.)

To ensure that only important questions (those that will reveal how readers construct an understanding or extend and examine the meaning of the text) are used, the assessment passages were analyzed, using an approach such as mapping essential text elements, before the questions were developed. This analysis helped the test developers to determine if a given passage has a coherent, orderly structure and is rich enough in meaning to provide several items useful for examining student performance. The first question following a selection taps the student's initial understanding of the passage. Questions requiring a more developed understanding or further examination of the meaning of the passage follow. Some questions require readers to integrate or compare information across more than one passage.

Passages selected for the assessment were drawn from authentic texts actually found and used by students in everyday reading. Whole stories, articles, or sections of textbooks were used, rather than excerpts or abridgements. Passages written solely for a test or to provide drill in a specific skill were not considered appropriate for this assessment.

Does the use of authentic passages give some students an unfair advantage because of their familiarity with the texts? Two safeguards protect against this. First, passages are not drawn from classroom basal readers, but are taken from books and magazines students are unlikely to have read.
For example, some passages might be taken from magazines published in 1990 or earlier. The second safeguard is the nature of the items. They require students to engage in careful inspection and consideration of the passages so that, even in the unlikely event that a student has read a given passage, she or he would need to reread and reconsider it to respond to the items.

The difficulty of items is a function of the difficulty of the passages and of the amount of background knowledge a reader must use from outside the text itself to answer them. Because of their limitations, conventional readability estimates were not the only or even the main criteria for determining the difficulty level of a passage. The difficulty of a text was judged by its length, the complexity of its arguments, the abstractness of its concepts, and the inclusion of unusual points of view and shifting time frames -- factors that are not addressed by traditional readability measures. As the difficulty of the passages increases, so does the difficulty of the questions, because the questions focus on important points in the text.

Texts range in difficulty from those that teachers at a specific grade level agree could be read by the least proficient students in a class (for example, about grade two in a fourth-grade class) to those that can be read by only the most proficient readers in the class (possibly grade eight level in a fourth-grade class). In general, the assessment consists of items that most students at the given grade levels can do. This means not only that students possess the requisite abilities, but also that they are likely to have encountered the particular type of text or task.

NAEP achievement levels describe what students should know and be able to do at grades 4, 8, and 12. The new reading assessments were constructed with these levels in mind to ensure congruence between the levels and the test content. See appendix A for the Reading Achievement Level Descriptions.