According to Diane McGuinness in Why Children Can’t Read, first published in 1997, California’s low 4th-grade reading scores prompted the state in 1996 to revert to using phonics rather than ‘real books’ for teaching reading. McGuinness, like the legislators in California, clearly expected phonics to make a difference to reading levels. It appears to have had little impact (NCES, 2013). McGuinness would doubtless point out that ‘phonics’ isn’t systematic synthetic phonics, and that might have made a big difference. Indeed it might. We don’t know.
Synthetic phonics and functional literacy
Synthetic phonics is important because it can break a link in a causal chain that leads to functional illiteracy:
• poor phonological awareness ->
• poor decoding ->
• poor reading comprehension ->
• functional illiteracy and low educational attainment
The association between poor phonological awareness and reading difficulties is well established. And obviously, if you can’t decode text you won’t understand it, and if you can’t understand text your educational attainment won’t be very high.
SP involves training children to detect, recognise and discriminate between phonemes, so we’d expect it to improve phonological awareness and decoding skills, and that’s exactly what studies have shown. But as far as I can tell, we don’t know what impact SP has on the rest of the causal chain: on functional literacy rates in school leavers, or on overall educational attainment.
This is puzzling. The whole point of teaching children to read is so they can be functionally literate. The SP programmes McGuinness advocates have been available for at least a couple of decades, so there’s been plenty of time to assess their impact on functional literacy. One of them, Phono-graphix (developed by a former student of McGuinness’s, now her daughter-in-law), has been the focus of several peer-reviewed studies, all of which report improvements, but none of which appears to have assessed the impact on functional literacy by school leaving age. SP proponents have pointed out that this might be because they’ve had enough difficulty getting policy-makers to take SP seriously, let alone fund long-term pilot studies.
The Clackmannanshire study
One study that did involve SP and followed the development of literacy skills over time was carried out in Clackmannanshire in Scotland by Rhona Johnston and Joyce Watson, then based at the University of Hull and the University of St Andrews respectively.
They compared three reading instruction approaches implemented in Primary 1 and tracked children’s performance in word reading, spelling and reading comprehension up to Primary 7. The study found very large gains in word reading (3y 6m; fig 1) and spelling (1y 9m; fig 2) for the group of children who’d had the SP intervention. The report describes reading comprehension as “significantly above chronological age throughout”. What it’s referring to is a 7-month advantage in P1 that had reduced to a 3.5-month advantage by P7.
A noticeable feature of the Clackmannanshire study is that scores were presented as group means, although boys’ and girls’ scores and those of advantaged and disadvantaged children were differentiated. One drawback of aggregating scores this way is that it can mask effects within the groups. So an intervention might be followed by a statistically significant average improvement that’s caused by some children performing much better than others.
This is exactly what we see in the data on ‘underachievers’ (fig 9). Despite large improvements at the group level, by P7 5% of children were more than two years behind their chronological age norm for word reading, 10% for spelling and 15% for reading comprehension. The improvements in group scores on word reading and spelling increased with age – but so did the proportion of children who were more than two years behind. This is an example of the ‘Matthew effect’ that Keith Stanovich refers to: children who can decode read more, so their reading improves, whereas children who can’t decode don’t read, so don’t improve. For the children in the Clackmannanshire study as a group, SP significantly improved word reading and spelling and slightly improved their comprehension, but it didn’t eliminate the Matthew effect.
The phonics check
There’s a similar within-group variation in the English KS1 phonics check, introduced in 2012. Ignoring the strange shape of the graph in 2012 and 2013 (though Dorothy Bishop’s observations are worth reading), the percentage of Year 2 children who scored below the expected standard was 15% in 2013 and 12% in 2014. The sharp increase at the cut-off point suggests that there are two populations of children – those who grasp phonics and those who don’t. Or that most children have been taught phonics properly but some haven’t. There’s also a spike at the end of the long tail of children who don’t ‘get’ phonics at all for whatever reason, representing the 5,783 children who scored 0.
It’s clear that SP significantly improves children’s ability to decode and spell – at the group level. But we don’t appear to know whether that improvement is due to children who can already decode a bit getting much better at it, or to children who previously couldn’t decode learning to do it, or both, or if there are some children for whom SP has no impact.
And I have yet to find evidence showing that SP reduces the rates of functional illiteracy that McGuinness, politicians and the press complain about. The proportion of school leavers who have difficulty with reading comprehension has hovered around 17% for decades in the US (NCES, 2013) and in the UK (Rashid & Brooks, 2010). A similar proportion of children in the US and the UK populations have some kind of learning difficulty. And according to the Warnock report that figure appears to have been stable in the UK since mass education was introduced.
The magical number 17 plus or minus 2
There’s a likely explanation for that 17% (or thereabouts). In a large population, some features (such as height, weight, IQ or reading ability) are the outcome of what are essentially random variables. If you measure one of those features across the population and plot a graph of your measurements, they will form what’s commonly referred to as a normal distribution – with the familiar bell curve shape. The curve will be symmetrical around the mean (average) score. Not only does that tell you that 50% of your population will score above the mean and 50% below it, it also enables you to predict what proportion of the population will be significantly taller/shorter, lighter/heavier, more/less intelligent or better/worse at reading than average. Statistically, around 16% of the population will score more than one standard deviation below the mean. Those people will be significantly shorter/lighter/less intelligent or have more difficulties with reading than the rest of the population.
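The one-standard-deviation figure quoted above can be checked directly. As a minimal sketch (assuming a standard normal distribution of scores, which is the idealisation the argument relies on), Python’s standard library gives the proportion falling more than one standard deviation below the mean:

```python
from statistics import NormalDist

# Idealised standard normal distribution of scores: mean 0, SD 1.
scores = NormalDist(mu=0, sigma=1)

# Proportion scoring more than one standard deviation below the mean
# is the cumulative probability at z = -1.
below_one_sd = scores.cdf(-1)

print(f"{below_one_sd:.1%}")  # just under 16%
```

The exact value is about 15.9%, which is where the “around 16%” in the text comes from; by symmetry, the same proportion falls more than one standard deviation above the mean.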
Bell curves tend to ring alarm bells, so I need to make it clear what I am not saying. I’m not saying that problems with reading are due to a ‘reading gene’ or biology or IQ, and so we can’t do anything about them. What I am saying is that if reading ability in a large population is the outcome not of just one factor, but of many factors that are to all intents and purposes random, then it’s a pretty safe bet that around 16% of children will have a significant problem with it. What’s important for that 16% is figuring out what factors are causing reading problems for individual children within that group. There are likely to be several different causes, as the NCES (1993) study found. So a child might have reading difficulties due to persistent glue ear as an infant, an undiagnosed developmental disorder, having a mother with mental health problems who hardly speaks to them, having no books at home, or because their family dismisses reading as pointless. Or all of the above. SP might help, but is unlikely to address all of the obstacles to word reading, spelling and comprehension that child faces.
The data show that SP enables 11 year-olds as a group to make huge gains in their word reading and spelling skills. That’s brilliant. Let’s use synthetic phonics.
The data also show that SP doesn’t eliminate reading comprehension problems for at least 15% of 11 year-olds – or the word reading problems of around 15% of 6-7 year-olds. That could be due to some SP programmes not being taught systematically enough, intensively enough or for long enough. But it could be due to other causes. If so, those causes need to be identified and addressed or the child’s functional literacy will remain at risk.
I can see why the Clackmannanshire study convinced the UK government to recommend then mandate the use of SP for reading instruction in English schools (things are different in Scotland), but I haven’t yet found a follow-up study that measured literacy levels at 16, or the later impact on educational attainment; and the children involved in the study would now be in their early 20s.
What concerns me is that if more is being implicitly claimed for SP than it can actually deliver, or if it fails to deliver a substantial improvement in the functional literacy of school leavers in a decade’s time, then it’s likely to be seen as yet another educational ‘fad’ and abandoned, regardless of the gains it brings in decoding and spelling. Meanwhile, the many other factors involved in reading comprehension are at risk of being marginalised if policy-makers pin their hopes on SP alone. Which just goes to show why nationally mandated educational policies should be thoroughly piloted and evaluated before they are foisted on schools.
McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.
NCES (1993). Adult Literacy in America. National Center for Educational Statistics.
NCES (2013). Trends in Academic Progress. National Center for Educational Statistics.
Rashid, S. & Brooks, G. (2010). The Levels of Attainment in Literacy and Numeracy of 13- to 19-year-olds in England, 1948–2009. National Research and Development Centre for Adult Literacy and Numeracy.