Assessing Comprehension: What are we looking for?

For some time now I have been looking at our Informal Prose Inventory and wondering how I can improve it. It has faithfully withstood the test of time (23 years) but may be in need of a revamp in light of what we now know about the comprehension process.

The purposes of Comprehension Assessment have not changed.

  1. BENCHMARKING: We are seeking data to determine a level of competence which can become a benchmark for future measurement (has progress been made?).

  2. DIAGNOSTICS: We want data that will identify weaknesses in the reader's skill set which will guide our future instruction.

We do this by using tools which will allow us to identify a reader's Instructional Reading age or level of text difficulty.

There are many differing points of view about this, so let me explain further what I mean and why I think it is important.

If we can agree that there is an essential comprehension skill set that the reader uses to comprehend language, we will also acknowledge that any particular reader will have a different mix of these skills.

The skill set I am referring to is the one proposed by Hollis Scarborough in her well-known and widely accepted Reading Rope.

To comprehend the language that the reader encounters in a passage of text, the reader accesses their knowledge of the world, their known vocabulary, their familiarity with sentence structure, and their knowledge of written genre, all of which are processed and synthesised by their verbal reasoning: their ability to make inferences.

This skill set (the 5 strands of Scarborough's 'Language Comprehension' rope) has been developing intuitively since birth as children experience language and the world around them. Depending on that exposure, they will arrive in your classroom with a unique mix of these skills.

Our job as teachers is to identify that mix and provide them with practice opportunities to grow that skill set.

These are not quick fix instructional goals that we can set and slot into our weekly lesson plans.

Background knowledge comes from a diverse and varied curriculum (science, social studies etc) and will grow with time.
Likewise, the average 10-year-old will have acquired a mental vocabulary of around 10,000 words and will add to that through exposure to text and oral conversation. Of course we can support this through learning word lists, but this is not the foremost means of growing a vocabulary.
We can teach the rules of grammar, but the research shows that this has little impact on the in-the-moment processing of sentence structure that the brain performs in its quest to construct meaning.
Similarly, you will make some headway by teaching idiom and other figurative language features, but you will not necessarily be rewarded by huge leaps in your learners' reading comprehension ability.

The reality is that they need structured practice at developing this skill set until they achieve automaticity.

The key to success here is that they get to practice on text which has JUST THE RIGHT AMOUNT OF CHALLENGE for them.

The Components of the 'testing' procedure

With our Informal Prose Inventory we included three parts. Let's look at these and some suggested changes we are contemplating.

Checking Accuracy

This has always been an important part of comprehension assessment because it underlines my belief (supported by many of the experts) that a certain degree of decoding fluency is an important precursor to the act of comprehension (constructing meaning and thinking critically).

In my experience, if a reader is to be able to successfully address the comprehension challenges of a passage, then they need to be able to read that text at 97% PLUS accuracy.

That means that there should be no more than 3 uncorrected miscues or errors in a hundred words. Three errors in a hundred words will not unduly distract the reader from constructing meaning, and the context should allow them to generate an accurate understanding of the passage.

Anything more, and the cognitive energy switches from the message to the reading of the words.

Checking Verbal Reasoning

What am I doing when I give a student the sentence "The door was open." and ask, "What does that mean?"

Much communication relies on the reader filling in the gaps that the writer has left. The reality is that it would be incredibly tedious for the reader if the writer spelled out ALL the information.

Typically, the fluent decoder flits over the words in the sentence, grabbing snippets of meaning. By forcing the reader to verbalise some of the verbal reasoning that is taking place in their head, we make them aware of the process of creating meaning and develop the mental flexibility and dexterity to do this.

So, for example (from Willingham):

"The door was open."

This is probably a door to a house, usually closed to keep people in or out. In this case someone had left it open.


Oral reading of the text

Is the reader able to accurately lift the words off the page? This is a prerequisite of comprehension. Accurate comprehension of written text is the end goal of all reading. While the brain is consumed with cracking the code, there is little cognitive space available for constructing meaning. It will still be happening at a very intuitive level, but the brain cannot focus on it beyond a superficial level.


The Accuracy score that you get from the first oral reading of the IPI test gives you an overall picture of how the reader is coping with this level of text difficulty.  An accuracy score below 94% indicates that the level of difficulty is too high - the decoding task will be absorbing too much of the reader's working memory for them to be able to process the meaning. A score of 97%+ indicates a recreational level - the reader is not experiencing any significant word recognition or decoding difficulties. A score between 94% and 97% indicates that this is the correct level of difficulty for your reading instruction.
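
To make the banding above concrete, here is a minimal sketch of the scoring arithmetic. The thresholds (94% and 97%) are the ones stated above; the function name and structure are just for illustration, not part of the IPI materials.

```python
def accuracy_band(words_read: int, uncorrected_miscues: int) -> str:
    """Classify a passage by oral reading accuracy, using the
    IPI thresholds described above (94% and 97%)."""
    accuracy = 100 * (words_read - uncorrected_miscues) / words_read
    if accuracy < 94:
        return "too difficult"   # decoding absorbs too much working memory
    elif accuracy < 97:
        return "instructional"   # the right level for reading instruction
    else:
        return "recreational"    # no significant decoding difficulty

# e.g. 5 uncorrected miscues in a 100-word passage:
print(accuracy_band(100, 5))  # → instructional
```
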
Miscue Analysis procedure (Ken Goodman)

The Miscue Analysis - analysing the decoding behaviour - gives additional information about the cues that the reader is using to decode the text. The evidence of a 'fluent decoder' is that they cross-check, self-correct, and self-monitor using all three cuing sources of information, visual, meaning, and syntax, to solve decoding problems.


Silent Reading of the test passage followed by Retelling and Questioning

Measuring the use of comprehension strategies is not so straightforward as these occur 'in the head' of the reader and are not evident to the observer. As a result the IPI test can only measure the level of comprehension or understanding of the passage, which will by default indicate whether comprehension strategies have been employed or not.

Retelling as a measure of comprehension

Retelling gives you useful information about the reader's ability to reconstruct the text: are they remembering random facts with little attention to sequence, or have they identified the structure of the text and used it to hang the information on? A 50% retell of all the significant detail represents a pass at each level of difficulty. While giving insight into the use of text structure (an important comprehension strategy), this is primarily a measure of comprehension, not of the use of the text structure strategy itself.
Answering questions as a measure of comprehension

Answering explicit literal questions straight from the text quickly establishes whether the face value content of the passage has been understood. Each passage also includes 2 inferential questions to give you an indication of the reader's ability to "read between the lines". Once again, questioning does not measure the use of comprehension strategies, only the level of comprehension. 

The need for Comprehension Strategy Instruction
A common result is to find that while a student reads a piece of text with a high degree of accuracy (97-100%) in the initial phase of the test, their comprehension scores are below the established criteria (Retell 50%, Response to questions 75%).
This suggests that the reader has well developed word recognition and decoding skills but is a passive reader - not interacting with, or processing the text - and highlights the need for Comprehension Strategy Instruction as outlined in this website.
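
The 'passive reader' pattern just described can be sketched as a simple check. This assumes the criteria stated above (accuracy 97-100%, retell pass at 50%, questions at 75%); the helper name is hypothetical, not part of the IPI materials.

```python
def is_passive_reader(accuracy_pct: float, retell_pct: float,
                      questions_pct: float) -> bool:
    """Flag the common pattern described above: fluent decoding
    (97%+ accuracy) combined with comprehension scores below the
    established criteria (retell 50%, questions 75%)."""
    fluent_decoder = accuracy_pct >= 97
    weak_comprehension = retell_pct < 50 or questions_pct < 75
    return fluent_decoder and weak_comprehension

# A reader at 98% accuracy, 30% retell, 60% on questions:
print(is_passive_reader(98, 30, 60))  # → True
```
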

What Comprehension Strategy Instruction is not
A traditional view of developing comprehension skills is to require the reader to answer lots of questions, either in oral discussion or in a written format after reading. It is important to realise that questioning only tests comprehension. It does nothing to develop strategies. In some cases it will stimulate thinking which results in deeper understanding. However, the reader is still dependent on the questions to unpack the text. Comprehension Strategy Instruction advocates the teaching of metacognitive strategies so that the reader knows how to unpack the text themselves.

More on Comprehension Strategy Instruction ...

IDENTIFYING NEEDS - the key to a successful reading programme
If our reading programmes are going to make a difference we must have a clear picture of what our students can do and what they are struggling with.

Our Informal Prose Inventory procedure not only provides a comprehensive check on the all-important decoding strategies that our readers are using, but MOST IMPORTANTLY exposes those readers who are reading fluently but not actively processing the text and who need to be taught comprehension strategies.

