whole language and ideology

It took my son two years to learn to read. Despite his love of books and a lot of hard work, he just couldn’t manage it. Eventually he cracked it. Overnight. All by himself. Using whole word recognition. He’s the only member of our family who didn’t learn to read effortlessly – and he’s the only one who was taught to read using synthetic phonics (SP). SP was the bee’s knees at the time – his reading breakthrough happened a few months before the interim Rose Report was published. Baffled, I turned to the TES forum for insights and met the synthetic phonics teachers. They explained systematic synthetic phonics. They questioned whether my son had been taught SP systematically or intensively enough. (He hadn’t.) And they told me that SP was based on scientific evidence, whereas the whole language approach, which they opposed, was ideologically driven.

SP supporters are among the most vocal advocates for evidence-based education policies, so I checked out the evidence. What I could find, that is. Much of it predated the internet or was behind a paywall. What I did find convinced me that SP was the most effective way of teaching children to decode text. I’m still convinced. But the more I read, the more sceptical I became about some of the other claims made by SP proponents. In the next few posts, I want to look at three claims: about the whole language approach to learning to read, about the impact of SP, and about reading and the brain.

whole language: evidence and ideology

The once popular whole language approach to learning to read was challenged by research findings that emerged during the 1980s and 90s. The heated debate that ensued is often referred to as the Reading Wars. The villains of the piece for SP proponents seemed to be a couple of guys called Goodman and Smith. I was surprised to find that they are both academics. Frank Smith has a background in psycholinguistics, a PhD from Harvard and a co-authored book with his supervisor, George “the magical number seven” Miller. Ken Goodman had accumulated an array of educational awards. Given their credentials, ideology clearly wasn’t the whole story.

In 1971 Frank Smith published Understanding Reading: A Psycholinguistic Analysis of Reading and Learning to Read, which explains the whole language approach. It’s a solid but still readable and still relevant summary of how research from cognitive science and linguistics relates to reading. So how did Smith end up fostering the much maligned – and many would say discredited – whole language approach?

bottom-up vs top-down

By 1971 it was well established that brains process sensory information in a ‘bottom-up’ fashion. Cognitive research showed that complex visual and auditory input from the environment is broken down into simple fragments by the sense organs. The fragments are then reconstituted in the brain, step-by-step, into the whole visual images or patterns of sound that we perceive. This process is automatic and pre-conscious and gets faster and more efficient the more familiar we are with a particular item.

But this step-by-step sequential model of cognitive processing didn’t explain what readers did. Research showed that people read words faster than non-words, that they can identify words from only a few key features, and that the meaning of the beginning of a sentence influences the way they pronounce words at the end of it (as in ‘her eyes were full of tears’).

According to the sequential model of cognition, this is impossible; you can’t determine the meaning of a word before you’ve decoded it. The only explanation that made sense was that a ‘top-down’ processing system was also in operation. What wasn’t clear at the time was how the two systems interacted. A common view was that the top-down process controlled the bottom-up one.

For Smith, the top-down model had some important implications such as:

• Young children wouldn’t be able to detect the components of language (syllables, phonemes, nouns, verbs, etc.), so teaching reading using components wouldn’t be effective.
• If children had enough experience of language, spoken and written, they would learn to read as easily as they learned to speak.
• Skilled readers would use contextual cues to identify words; poorer readers would rely more heavily on visual features.

Inspired by Smith’s model of reading, Keith Stanovich and Richard West, then graduate students at the University of Michigan, decided to test the third hypothesis. To their surprise, they found exactly the opposite of Smith’s prediction: the better the readers were, the more they relied on visual recognition, and the poorer readers relied more on context. It wasn’t that skilled readers weren’t using contextual cues; their visual recognition process was simply faster, so they defaulted to context only if visual recognition failed.
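To make that ordering concrete, here’s a deliberately crude sketch in Python. It’s my own toy illustration of the ‘visual recognition first, context as fallback’ idea, not Stanovich and West’s experiment or any real model of reading; the word list and the context rule are invented for the example.

```python
# Toy illustration only (my own, hypothetical): a reader first tries fast
# visual/lexical recognition of a written form, and consults sentence context
# only when that fails. Nothing here comes from Stanovich and West's studies.

KNOWN_WORDS = {"her", "eyes", "were", "full", "of", "tears"}  # invented mini-lexicon


def predict_from_context(preceding_words):
    """Crude stand-in for contextual prediction (hypothetical rule)."""
    if preceding_words[-2:] == ["full", "of"]:
        return "tears"
    return None


def identify_word(written_form, preceding_words):
    # Fast route: direct visual/lexical recognition.
    if written_form in KNOWN_WORDS:
        return written_form, "visual recognition"
    # Slower fallback: use context only when recognition fails.
    guess = predict_from_context(preceding_words)
    if guess is not None:
        return guess, "contextual guess"
    return None, "unresolved"


if __name__ == "__main__":
    context = ["her", "eyes", "were", "full", "of"]
    print(identify_word("tears", context))   # ('tears', 'visual recognition')
    print(identify_word("t3ars", context))   # ('tears', 'contextual guess')
```

The point of the toy is simply the ordering: a skilled reader’s fast visual route does most of the work, and context is consulted only when that route fails.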

As Stanovich explains (Stanovich, 2000, pp.21-23), the flaw in most top-down models of reading was that they assumed that top-down processing controlled bottom-up processing. What Stanovich and West’s finding implied (and later research supported) was that the two systems interacted at several levels. Although some aspects of Smith’s model were wrong, it was based on robust evidence. So why did SP proponents think it was ideologically driven? One clue is in Ken Goodman’s work.

a psycholinguistic guessing game

Smith completed his PhD in 1967, the year that Goodman, then an Associate Professor at Wayne State University, Detroit, published his (in)famous article in the Journal of the Reading Specialist, “Reading: A psycholinguistic guessing game”. The title is derived from a key concept in the reading models of the time – that skilled readers use rapid, pre-conscious hypothesis testing to identify words. It’s an eye-catching title, but open to misunderstanding; the skilled ‘guessing game’ that Goodman was referring to is very different from getting a beginner reader to have a wild stab at an unfamiliar word. That’s why Goodman and Smith recommended extensive experience of language.

Goodman’s background was in education rather than psycholinguistics. According to Diane McGuinness (McGuinness, 1998, p.129), Goodman does have some peer-reviewed publications, but the most academic text I could find was his introduction to The Psycholinguistic Nature of the Reading Process, published in 1968. In contrast to the technical content of the rest of the book, Goodman’s chapter provides only a brief overview of reading from a psycholinguistic perspective, and in the four-sentence chapter summary he refers to his ‘beliefs’ twice – a tendency McGuinness uses as evidence against him. (Interestingly, although she also ridicules some quotes from Smith, his name is tucked away in her Notes section.)

Although Goodman doesn’t come across as a heavyweight academic, the whole language model he enthusiastically supports is nonetheless derived from the same body of evidence used by Smith and Stanovich. And the miscue analysis technique Goodman developed is now widely used to identify the strategies adopted by individual readers. So where does ideology come in?

Keith Stanovich sheds some light on this question in Progress in Understanding Reading. Published in 2000, it’s a collection of Stanovich’s key papers spanning a 25-year career. In the final section he reflects on his work and the part it played in the whole language debate. Interestingly, Stanovich emphasises what the two sides had in common. Here’s his take on best practice in the classroom:

“Fortunately the best teachers have often been wise enough to incorporate the most effective practices from the two different approaches into their instructional programs.” (p.361)

and on the way research findings have been used in the debate:

“Whole language proponents link [a model of the reading process at variance with the scientific data] with the aspects of whole language philosophy that are legitimately good and upon which virtually no researchers disagree.” (p.362)

correspondence and coherence

For Stanovich, the heat in the debate didn’t come from disagreements between reading researchers, but from the clash between two conflicting theories about the nature of truth: correspondence vs coherence. Correspondence theory assumes that there is a real world out there, independent of our perceptions of it. In contrast, the coherence theory assumes that our “knowledge is internally constructed – that our evolving knowledge is not tracking an independently existing world, but that internally constructed knowledge literally is the world” (p.371, emphasis Stanovich’s). The whole language model fits nicely into the coherence theory of truth, so research findings that challenged whole language also challenged what Stanovich describes as the “extreme constructivism” of some whole language proponents.

Stanovich also complains that whole language proponents often fail to provide evidence for their claims, cherry-pick supporting evidence, ignore contradictory evidence, and are prone to straw men and ad hominem attacks. He doesn’t mention that synthetic phonics proponents are capable of doing exactly the same. I don’t think this is due to bias on his part; what’s more likely is that when his book was published in 2000, the whole language model had had plenty of time to filter through to classrooms, policy makers’ offices and university education and philosophy departments. The consensus on synthetic phonics was relatively new and hadn’t gathered so much popular support. Fifteen years on, that situation has changed. In my experience, some SP proponents are equally capable of making sweeping claims, citing any supporting evidence regardless of its quality, and being dismissive towards anyone who disagrees with anything they believe. Which brings me to the subject of my next post: claims about what SP can achieve.

References

Goodman, K. (Ed.) (1998). The Psycholinguistic Nature of the Reading Process. Wayne State University Press.
McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.
Smith, F. (1971). Understanding Reading: A Psycholinguistic Analysis of Reading and Learning to Read. Lawrence Erlbaum. (My copy is 4th edition, published 1988).
Stanovich, K. (2000). Progress in Understanding Reading. Guilford Press.


6 thoughts on “whole language and ideology”

  1. Very nice post, thank you. Well researched and coherent. I am looking forward to the rest of the series.

    It is likely that later today you will be accused of starting a “twitchhunt” and a certain amount of criticism and vitriol will be aimed in your direction by the usual suspects, but for me the sorts of discussions that take place on here are one of the reasons a series of posts such as yours will be useful.

    More power to your keyboard.

  2. Thank you for this interesting and informative blog post. It is above all … calm. I think it needs a cool head to negotiate the trenches of the Reading War. Coincidentally, I have just written a blog post in which I reflect on stuff I have come across while tracking the thinking behind the ‘ascendancy’ of SSP. Your post looks at it from another angle – very interesting. I look forward to reading more.
    Find my (less knowledgeable) blog post here:
    https://community.tes.co.uk/reading_theory_and_practice/b/weblog/archive/2015/02/21/why-are-teachers-still-using-mixed-methods.aspx

  3. Like any war story, the history of the Reading Wars depends on who is telling the story. The story can be traced back at least to the days of Rousseau. I wasn’t there at the time, but I did happen to be there before “Whole Language” was birthed by Ken Goodman.

    Goodman’s article presenting reading as a “guessing game” was considered by educational psychologists at the time as “another silly idea from a ‘Curriculum Guy’ at a low-tiered University.” Jeanne Chall of Harvard had just completed a major investigation of the question: do children learn better with a beginning method that stresses meaning, or with one that stresses learning the code [how to use the Alphabetic Code that links written and spoken English]? Chall came down in favor of “Systematic Phonics” and researchers believed the Reading War was over.
    http://qoshe.com/the-atlantic/sophie-gilbert/mind-the-gap/253654

    I met Frank Smith in 1968-69 when I was directing an Educational R&D Lab in Los Angeles (SWRL). We had about a dozen PhD Linguists on the staff. Frank was fresh off the boat from Australia and was looking for a job. He had been working for a newspaper and pitched himself as a Linguist (there were no “Psycholinguists” at the time). Frank talked like an Aussie and was presentable, so I had him talk to our Linguists. Their verdict: he’s not much of a Linguist, but he talks a good game. He would be pleasant to have around if you want to hire him. So with his newspaper background, we hired him to do the Lab newsletter – Director of Laboratory Communications.

    The position didn’t work out, but the problem wasn’t Frank. The staff all knew, or thought they knew, everything that was going on, so there was never any “news” that was new. Frank left on cordial terms by mutual agreement. I was surprised by the job he landed at the Ontario Institute for Studies in Education in Toronto, by his book, Understanding Reading, and by his move to Teachers College, Columbia University in New York. Frank doesn’t mention his time at SWRL in any of his bios. But the rest is history.

    Your account of Whole Language and Ideology is off to a good start, and I’ll look forward to the other two chapters of the story.

