mixed methods for teaching reading (1)

Many issues in education are treated as either/or options and the Reading Wars have polarised opinion into synthetic phonics proponents on the one hand and those supporting the use of whole language (or ‘mixed methods’) on the other. I’ve been asked on Twitter what I think of ‘mixed methods’ for teaching reading. Apologies for the length of this reply, but I wanted to explain why I wouldn’t dismiss mixed methods outright and why I have some reservations about synthetic phonics. I wholeheartedly support the idea of using synthetic phonics (SP) to teach children to read. However, I have reservations about some of the assumptions made by SP proponents about the effectiveness of SP and about the quality of the evidence used to justify its use.

the history of mixed methods

As far as I’m aware, when education became compulsory in England in the late 19th century, reading was taught predominantly via letter-sound correspondence and analytic phonics – ‘the cat sat on the mat’ etc. A common assumption was that if people couldn’t read it was usually because they’d never been taught. In practice, a proportion of children didn’t learn to read despite being taught in the same way as others in the class. The Warnock committee reported that teachers in England at the time were surprised by the numbers of children turning up for school with disabilities or learning difficulties. That resulted in special schools being set up for those with the most significant difficulties with learning. In France, Alfred Binet was commissioned to devise a screening test to identify learning difficulties, which evolved into the ‘intelligence test’. In Italy, Maria Montessori adapted methods originally used to teach hearing-impaired children for use in mainstream education.

Research into acquired reading difficulties in adults generated an interest in developmental problems with learning to read, pioneered by James Hinshelwood and Samuel Orton in the early 20th century. The term developmental dyslexia began as a descriptive label for a range of problems with reading and gradually became reified into a ‘disorder’. Because using the alphabetic principle and analytic phonics clearly wasn’t an effective approach for teaching all children to read, and because of an increased interest in child development, researchers began to look at what adults and children actually did when reading and learning to read, rather than what it had been thought they should do.

What they found was that people use a range of cues (‘mixed methods’) to decode unfamiliar words: letter-sound correspondence, analytic phonics, recognising words by their shape, using key letters, grammar, context and pictures, for example. Educators reasoned that if some children hadn’t learned to read using alphabetic principles and/or analytic phonics, applying the strategies that people actually used when reading new words might be a more effective approach.

This idea, coinciding with an increased interest in child-led pedagogy and a belief that a species-specific genetic blueprint meant that children would follow the same developmental trajectory but at different rates, resulted in the concept of ‘reading-readiness’. The upshot was that no one panicked if children couldn’t read by 7, 9 or 11; they often did learn to read when they were ‘ready’. It’s impossible to compare the long-term outcomes of analytic phonics and mixed methods because the relevant data aren’t available. We don’t know, for instance, whether children’s educational attainment suffered more if they got left behind by whole-class analytic phonics, or if they got left alone in schools that waited for them to become ‘reading-ready’.

Eventually, as is often the case, the descriptive observations about how people tackle unfamiliar words became prescriptive. Whole word recognition began to supersede analytic phonics after WW2, and in the 1960s Ken Goodman formalised mixed methods in a ‘whole language’ approach. Goodman was strongly influenced by Noam Chomsky, who believes that the structure underpinning language is essentially ‘hard-wired’ in humans. Goodman’s ideas chimed with the growing social constructivist approach to education that emphasises the importance of meaning mediated by language.

At the same time as whole language approaches were gaining ground, the national curriculum and standardised testing were introduced in England. Children whose reading didn’t keep up with their peers became far more visible than they had been previously, and the complaints that had followed the introduction of whole language in the USA began to be heard here. In addition, the national curriculum appears to have focussed on the mechanics of understanding ‘texts’ rather than on reading books for enjoyment. On top of that, with the advent of multi-channel TV and electronic gadgets, reading has nowhere near the popularity it once had as a leisure activity amongst children, so children tend to get a lot less reading practice than they did in the past. These developments suggest that any decline in reading standards might have multiple causes, rather than ‘mixed methods’ being the only culprit.

what do I think about mixed methods?

I think Chomsky has drawn the wrong conclusions about his linguistic theory, so I don’t subscribe to Goodman’s reading theory either. Although meaning is undoubtedly a social construction, it’s more than that. Social constructivists tend to emphasise the mind at the expense of the brain. The mind is such a vague concept that you can say more or less what you like about it, but we’re very constrained by how our brains function. I think marginalising the brain is an oversight on the part of social constructivists, and I can’t see how a child can extract meaning from a text if they can’t read the words.

Patricia Kuhl’s work suggests that babies acquire language computationally, from the frequency of sound patterns within speech. This is an implicit process; the baby’s brain detects the sounds and learns the patterns, but the baby isn’t aware of the learning process, nor of phonemes. What synthetic phonics does is to make the speech sounds explicit, develop phonemic awareness and allow children to learn phoneme-grapheme correspondence and how words are constructed.
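This statistical-learning idea is easiest to see with a toy example. The sketch below is not Kuhl’s model – it’s a minimal illustration, in the spirit of the classic syllable-stream segmentation experiments, of how simply tracking how often one syllable follows another can pull word-like units out of continuous speech, with no explicit awareness of phonemes. The syllables, the made-up ‘words’ and the threshold are all invented for the example.

```python
# Toy illustration of statistical segmentation (not Kuhl's actual model):
# word boundaries tend to fall where the probability of one syllable
# following another drops. All 'words' here are invented examples.
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) from bigram counts."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, tps, threshold=0.8):
    """Insert a word boundary wherever the transitional probability is low."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:       # low probability -> likely boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A continuous stream built from three made-up words:
# bi-da-ku, pa-do-ti, go-la-bu, in the order A B C B A C A C B.
stream = ("bi da ku pa do ti go la bu pa do ti bi da ku "
          "go la bu bi da ku go la bu pa do ti").split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))
# → ['bidaku', 'padoti', 'golabu', 'padoti', 'bidaku',
#    'golabu', 'bidaku', 'golabu', 'padoti']
```

Within a word the transitional probability is 1.0 (‘da’ always follows ‘bi’), while across word boundaries it drops below the threshold, so frequency alone recovers the words – which is roughly the implicit computation the research attributes to infants.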

My reservations about SP are not about the approach per se, but rather about how it’s applied and the reasons assumed to be responsible for its effectiveness. In cognitive terms, SP has three main components:

• phonemic and graphemic discrimination
• grapheme-phoneme correspondence
• building up phonemes/graphemes into words – blending

How efficient children become at these tasks is a function of the frequency of their exposure to the tasks and how easy they find them. Most children pick up the skills with little effort, but anyone who has problems with any or all of the tasks may need considerably more rehearsal. Problems with the cognitive components of SP aren’t necessarily a consequence of ineffective teaching or the child not trying hard enough. Specialist SP teachers will usually be aware of this, but policy-makers, parents, or schools that simply adopt a proprietary SP course might not.

My son’s school taught reading using Jolly Phonics. Most of the children in his class learned to read reasonably quickly. He took 18 months over it. He had problems with each of the three elements of SP. He couldn’t tell the difference between similar-sounding phonemes – i/e or b/d, for example. He couldn’t tell the difference between similar-looking graphemes either – such as b/d, h/n or i/j. As a consequence, he struggled with some grapheme-phoneme correspondences. Even in words where his grapheme-phoneme correspondences were secure, he couldn’t blend more than three letters.

After 18 months of struggling and failing, he suddenly began to read using whole word recognition. I could tell he was doing this because of the errors he was making; he was using initial and final letters and word shape and length as cues. Recognising patterns is what the human brain does for a living, and once it’s recognised a pattern it’s extremely difficult to get it to unrecognise it. Brains are so good at recognising patterns that they often detect patterns that aren’t really there – as in pareidolia or the behaviourists’ ‘superstition’. Once my son could recognise word-patterns, he was reading, and there was no way he was going to be persuaded to carry on with all that tedious sounding-out business. He just wanted to get on with reading, and that’s what he did.

[Edited to add: I should point out that the reason the apparent failure of an SP programme to teach my son to read led to me supporting SP rather than dismissing it, was because after conversations with specialist SP teachers, I realised that he hadn’t had enough training in phonemic and graphemic discrimination. His school essentially put the children through the course, without identifying any specific problems or providing additional training that might have made a significant difference for him.]

When I trained as a teacher ‘mixed methods’ included a substantial phonics component – albeit as analytic phonics. I get the impression that the phonics component has diminished over time so ‘mixed methods’ aren’t what they once were. Even if they included phonics, I wouldn’t recommend ‘mixed methods’ prescriptively as an approach to teaching reading. Having said that, I think mixed methods have some validity descriptively, because they reflect the way adults/children actually read. I would recommend the use of SP for teaching reading, but I think some proponents of SP underestimate the way the human brain tends to cobble together its responses to challenges, rather than to follow a neat, straight pathway.

Advocacy of mixed methods and opposition to SP is often based on accurate observations of the strategies children use to read, not on evidence of what teaching methods are most effective. Our own personal observations tend to be far more salient to us than schools we’ve never visited reporting stunning SATs results. That’s why I think SP proponents need to ensure that the evidence they refer to as supporting SP is of a high enough quality to be convincing to sceptics.