Improving reading from Clackmannanshire to West Dunbartonshire

In the 1990s, two different studies began tracking the outcomes of reading interventions in Scottish schools. One, run by Joyce Watson and Rhona Johnston, then at the University of St Andrews, started in 1992/3 in schools in Clackmannanshire, which hugs the River Forth just to the east of Stirling. The other began in 1998 in West Dunbartonshire, with the Clyde on one side and Loch Lomond on the other, west of Glasgow. It was led by Tommy MacKay, an educational psychologist with West Dunbartonshire Council, who also lectured in psychology at the University of Strathclyde.

I’ve blogged about the Clackmannanshire study in more detail here. It was an experiment involving 13 schools and 300 children divided into three groups, taught to read using synthetic phonics, analytic phonics or analytic phonics plus phonemic awareness. The researchers measured and compared the outcomes.

The West Dunbartonshire study had a more complex design, involving five different studies and ten strands of intervention over ten years in all pre-schools and primary schools in the local authority area (48 schools and 60 000 children). As in Clackmannanshire, analytic phonics was used as a control for the synthetic phonics experimental group. The study also had an aim: to eradicate functional illiteracy in school leavers in West Dunbartonshire. It very nearly succeeded; Achieving the Vision, the final report, shows that by the time the study finished in 2007 only three children were deemed functionally illiterate. (Thanks to @SaraJPeden on Twitter for the link.)

Five studies, ten strands of intervention

The main study was a multiple-component intervention using a cross-lagged design. The four supporting studies were:

  • Synthetic phonics study (18 schools)
  • Attitudes study (24 children from earlier RCT)
  • Declaration study (12 nurseries & primaries in another education authority area)
  • Individual support study (24 secondary pupils).

The West Dunbartonshire study was unusual in that it addressed multiple factors already known to impact on reading attainment, but that are often sidelined in interventions focusing on the mechanics of reading. The ten strands were (p.14):

Strand 1: Phonological awareness and the alphabet

Strand 2: A strong and structured phonics emphasis

Strand 3: Extra classroom help in the early years

Strand 4: Fostering a ‘literacy environment’ in school and community

Strand 5: Raising teacher awareness through focused assessment

Strand 6: Increased time spent on key aspects of reading

Strand 7: Identification of and support for children who are failing

Strand 8: Lessons from research in interactive learning

Strand 9: Home support for encouraging literacy

Strand 10: Changing attitudes, values and expectations

Another unusual feature was that the researchers were looking not only for statistically significant improvements in reading, but also for improvements that were significant in a wider sense:

“statistical significance must be viewed in terms of wider questions that were primarily social, cultural and political rather than scientific – questions about whether lives were being changed as a result of the intervention; questions about whether children would leave school with the skills needed for a successful career in a knowledge society; questions about whether ‘significant’ results actually meant significant to the participants in the research or only to the researcher.” (p.16)

The researchers also recognized the importance of ownership of the project throughout the local community, everyone “from the leader of the Council to the parents and the children themselves identifying with it and owning it as their own project”. (p.7)

In addition, they were aware that a project following students through their entire school career would need to survive inevitable organisational challenges. Despite the fact that West Dunbartonshire was the second poorest council in Scotland, the local authority committed to continue funding the project:

The intervention had to continue and to succeed through virtually every major change or turmoil taking place in its midst – including a total restructuring of the educational directorate, together with significant changes in the Council. (p.46)

Results

The results won’t surprise anyone familiar with the impact of synthetic phonics: there were significant improvements in the reading ability of children in the experimental group. What was remarkable was the impact of the programme on children who didn’t participate. Raw scores for pre-school assessments improved noticeably between 1997 and 2006, and there were many reports from parents that the intervention had stimulated interest in reading in older siblings.

One of the most striking results was that at the end of the study, there were only three pupils in secondary schools in the local authority area with reading ages below the level of functional literacy (p.31). That’s impressive when compared to the 17% of school leavers in England considered functionally illiterate. So why hasn’t the West Dunbartonshire programme been rolled out nationwide? Three factors need to be considered in order to answer that question.

1. What is functional literacy?

The 17% figure for functional illiteracy amongst school leavers is often presented as ‘shocking’ or a ‘failure’ on the part of the education system. These claims are valid only if those making them have evidence that higher levels of school-leaver literacy are attainable. The evidence cited often includes literacy levels in other countries or studies showing very high percentages of children being able to decode after following a systematic synthetic phonics (SSP) programme. Such evidence is akin to comparing apples and oranges because:

– Many languages are orthographically more transparent than English (there’s a more direct correspondence between graphemes and phonemes). The functional illiteracy figure of 17% (or thereabouts) holds for the English-speaking world, not just England, and has done so since at least the end of WW2 – and probably earlier, given literacy levels in older adults. (See Rashid & Brooks (2010) and McGuinness (1998).)

– Both the Clackmannanshire and West Dunbartonshire studies resulted in high levels of decoding ability. Results were less stellar when it came to comprehension.

– It depends what you mean by functional literacy. This was a challenge faced by Rashid & Brooks in their review; measures of functional literacy have varied, making it difficult to identify trends across time.

In the West Dunbartonshire study, children identified as having significant reading difficulties followed an intensive 3-month individual support programme in early 2003. This involved 91 children in P7, 12 in P6 and 1 in P5. By 2007, 12 pupils at secondary level were identified as still not having reached functional literacy levels; their reading ages ranged between 6y 9m and 8y 10m (p.31). By June 2007, only three children had scores below the level of functional literacy. (Two others missed the final assessment.)

The level of functional literacy used in the West Dunbartonshire study was a reading age of at least 9y 6m on the Neale Analysis of Reading Ability (NARA-II). I couldn’t find an example online, but there’s a summary here. The tasks are rather different to the level 1 tasks in the National Adult Literacy Survey carried out in the USA in 1992 (NCES, p.86).

A reading/comprehension age of 9y 6m is sufficient for getting by in adult life: reading a tabloid newspaper or filling in simple forms. Whether it’s sufficient for doing well in GCSEs (reading age 15y 7m), getting a decent job in later life, or having a good understanding of how the world works is another matter.

2. What were the costs and benefits?

Overall, the study cost £13 per student per year, or 0.5% of the local authority’s education budget (p.46), which doesn’t sound very much. But for 60 000 students over a ten-year period it adds up to almost £8m, a significant sum. I couldn’t find details of the overall reading abilities of secondary school students when the study finished in 2007, and haven’t yet tracked down any follow-up studies showing the impact of the interventions on the local community.
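As a back-of-the-envelope check on that figure (a sketch only, assuming the £13 per student per year applies to all 60 000 pupils in each of the ten years):

```python
# Rough check of the "almost £8m" figure quoted above.
# Assumption (mine, not from the report): £13 per student per year
# applies to all 60,000 pupils in each of the ten years.
cost_per_student_per_year = 13      # £
students = 60_000
years = 10

total_cost = cost_per_student_per_year * students * years
print(f"Total: £{total_cost:,}")    # Total: £7,800,000 – i.e. just under £8m
```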

Also, we don’t know what difference the study would have made to adult literacy levels in the area. Adult literacy levels are usually presented as averages and, in the case of the US National Adult Literacy Survey, included those with disabilities. Many children with disabilities in West Dunbartonshire would have been attending special schools, and the study appears to have involved only mainstream schools. Whether the impact of the study is sufficient to persuade cash-strapped local authorities to invest in it is unclear.

3. Could the interventions be implemented nationwide?

One of the strengths of Achieving the Vision is that it explores the limitations of the study in some detail (p.38ff). The researchers were clearly well aware of the challenges that would have to be met in order for the intervention to achieve its aims. These included issues with funding: the local Council, although supportive, was working within a different funding framework to the Scottish Executive Education Department. The funding issues had a knock-on impact on staff seconded to the project, who had no guarantee of employment once the initial funding ran out. The study was further affected by industrial action and by local authority restructuring. How many projects would have access to the foresight, tenacity and collaborative abilities of those leading the West Dunbartonshire initiative?

Conclusion

The aim of the West Dunbartonshire initiative was to eradicate functional illiteracy in an entire local authority area. The study effectively succeeded in doing so – in mainstream schools, and if functional illiteracy is taken to mean a reading/comprehension age below 9y 6m. Synthetic phonics played a key role. Synthetic phonics is frequently advocated as a remedy for functional illiteracy in school leavers and in the adult population. The West Dunbartonshire study shows, pretty conclusively, that synthetic phonics plus individual support plus a comprehensive local authority-backed focus on reading can result in significant improvements in reading ability in secondary school students. Does it eradicate functional illiteracy in school leavers or in the adult population? We don’t know.

References

MacKay, T (2007).  Achieving the Vision: The Final Research Report of the West Dunbartonshire Literacy Initiative.

McGuinness, D (1998). Why Children Can’t Read and What We Can Do About It. Penguin.

NCES (1993). Adult Literacy in America. National Center for Educational Statistics.

Rashid, S & Brooks, G (2010). The levels of attainment in literacy and numeracy of 13- to 19-year-olds in England, 1948–2009. National Research and Development Centre for adult literacy and numeracy.

Johnston, R & Watson, J (2005). The Effects of Synthetic Phonics teaching on reading and spelling attainment: A seven year longitudinal study. The Scottish Executive website. http://www.gov.scot/Resource/Doc/36496/0023582.pdf


Clackmannanshire revisited

The Clackmannanshire study is often cited as demonstrating the positive impact of synthetic phonics (SP) on children’s reading ability. The study tracked the reading, spelling and comprehension progress, over seven years, of three groups of children initially taught to read using one of three different methods:

  • analytic phonics programme
  • analytic phonics programme supplemented by a phonemic awareness programme
  • synthetic phonics programme.

The programmes were followed for 16 weeks in Primary 1 (P1, 5-6 yrs). Reading ability was assessed before and after the programme and for each year thereafter, spelling ability each year from P1, and comprehension each year from P2. After the first post-test, the two analytic phonics groups followed the SP programme, completing it by the end of P1.

I’ve blogged briefly about this study previously, based on a summary of the research. It’s quite clear that the children in the SP group made significantly more progress in reading and spelling than those in the other two groups. One of my concerns about the results is that in the summary they are presented at group level, i.e. as the mean scores of the children in each condition. There’s no indication of the range of scores within each group.

The range is important because we need to know whether the programme improved reading and spelling for all the children in the group, or for just some of them. Say, for example, that the mean reading age of children in the SP group was 12 months ahead of the children in the other groups at the end of P1. We wouldn’t know, without more detail, whether all the children’s scores clustered around the 12-month mark, or whether the group mean had been raised by a few children having very high scores, or had been lowered by a few having very low scores.
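To illustrate why the range matters, here’s a toy sketch in Python (the numbers are invented for illustration, not taken from the study): three sets of reading-age gains with very different spreads all produce exactly the same group mean.

```python
# Toy illustration (invented numbers, not study data): three groups of ten
# children whose reading-age gains, in months, all average 12 – but the
# spread of individual gains differs a great deal.
from statistics import mean, stdev

clustered = [11, 12, 12, 12, 12, 12, 12, 12, 12, 13]   # all close to 12
few_high  = [6, 6, 7, 7, 8, 8, 9, 9, 30, 30]           # mean raised by two high scorers
few_low   = [1, 1, 15, 15, 15, 15, 15, 15, 14, 14]     # mean lowered by two low scorers

for name, gains in [("clustered", clustered),
                    ("raised by a few", few_high),
                    ("lowered by a few", few_low)]:
    print(f"{name:18s} mean = {mean(gains):4.1f}  sd = {stdev(gains):4.1f}")
```

A group mean of 12 months tells us nothing about which of these three situations we’re looking at; only the distribution of individual scores does.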

At the end of the summary is a graph showing the progress made by ‘underachievers’, i.e. any children who were more than 2 years behind in their test scores. There were some children in that category at the end of P2; by the end of P7 the proportion had risen to 14%. So clearly there were children who were still struggling despite following an SP programme.

During a recent Twitter conversation, Kathy Rastle, Professor of Psychology at Royal Holloway, University of London (@Kathy_Rastle), sent me a link to a more detailed report by the Clackmannanshire researchers, Rhona Johnston and Joyce Watson.

more detail

I hoped that the more detailed report would provide more… well, detail. It did, but the ranges of scores within the groups were presented as standard deviations, so the impact of the programmes on individual children still wasn’t clear. That’s important. Obviously, if a reading programme enables a group of children to make significant gains in their reading ability, it’s worth implementing. But we also need to know the impact it has on individual children, because the point of teaching children to read is that each child learns to read.

The detail I was looking for is in Chapter 8, “Underachieving Children”, i.e. those with scores more than 2 years below the mean for their age. Obviously, in P1 no children could be allocated to that category because they hadn’t been at school long enough. But from P2 onwards, the authors tabulated the numbers of ‘underachievers’. (They note that some children were absent for some of the tests.) I’ve summarised the proportions (for boys and girls together) below:

more than 1 year behind (%)

                P2     P3     P4     P5     P6     P7
reading         2.2    2.0    6.0    8.6   15.1   11.9
spelling        1.1    4.0    8.8   12.6   15.7   24.0
comprehension   5.0   18.0   15.5   19.2   29.4   27.6

more than 2 years behind (%)

                P2     P3     P4     P5     P6     P7
reading         0      0.8    0      1.6    8.4    5.6
spelling        0.4    0.4    0.4    1.7    3.0   10.1
comprehension   0      1.2    1.6    5.0   16.2   14.0

The researchers point out that the proportion of children with serious problems with reading and spelling is quite low, but that it would be “necessary to collect control data to establish what would be typical levels of underachievement in a non-synthetic phonics programme.” Well, yes.

The SP programme clearly had a significantly positive impact on reading and spelling for most children. However, that wasn’t true for all of them. The authors provide a detailed case study for one child (AF), who had a hearing difficulty and poor receptive and expressive language. They compare his progress with that of the other 15 children in P4 who were one year or more behind their chronological age in reading.

Case study – AF

AF started school a year later than his peers, and his class was in the analytic phonics and phonemic awareness group. They then followed the SP programme at the end of P1. Early in P2, AF started motor movement and language therapy programmes.

By the middle of P4, AF’s reading and spelling scores were close to the average for the group whose reading was a year or more behind, but his knowledge of letter sounds, phoneme segmentation and nonword reading was better than theirs. A detailed analysis suggests his reading errors were the result of his lack of familiarity with some words, and that he was spelling words as they sounded to him. Like the other 15 children experiencing difficulties, he needed to revisit more complex phonics rules, so a supplementary phonics programme was provided in P5. When the group was tested afterwards, mean spelling and reading scores were above chronological age, and AF’s reading and spelling had improved considerably as a result.

During P6 and P7 a peripatetic Support for Learning (SfL) teacher worked with AF on phonics for three 45-minute sessions each week and taught him strategies to improve his comprehension. An occupational therapist and a physiotherapist worked with him on his handwriting, and he was taught to touch type. By the end of P7, AF’s reading age was 9 months above his chronological age and his spelling was more than 2 years ahead of the mean for the underachieving group.

conclusion

The ‘Clacks’ study is often cited as conclusive proof of the efficacy of SP programmes. It’s often implied that SP will make a significant difference for the troublesome 17% of school leavers who lack functional literacy.   What intrigued me about the study was the proportion of children in P7 who still had difficulty with functional literacy despite having had SP training. It’s 14%, suspiciously close to the proportion of ‘functionally illiterate’ school leavers.

Some teachers have argued that if all the children had had systematic synthetic phonics teaching from the outset, the ‘Clacks’ figures might be different, but AF’s experience suggests otherwise.  He obviously had substantial initial difficulties with reading, but by the end of primary school had effectively caught up with his peers. But his success wasn’t due only to the initial SP programme. Or even to the supplementary SP programme provided in P5. It was achieved only after intensive, tailored 1-1 interventions on the part of a team of professionals from outside school.

My children’s school in England, at the time when AF was in P7, was not offering these services to children with AF’s level of difficulty. Most of the children had followed an initial SP programme, but there was no supplementary SP course on offer. The equivalent of the SfL teacher carried out annual assessments and made recommendations. Speech and language therapists and occupational therapists didn’t routinely offer treatment to individual children except via schools, and weren’t invited into the one my children attended. And I’ve yet to hear of a physiotherapist working in a mainstream primary in our area.

As a rule of thumb, local authorities will not carry out a statutory assessment of a child until their school can demonstrate that they don’t have the resources to meet the child’s needs.  As a rule of thumb, schools are reluctant to spend money on specialist professionals if there’s a chance that the LA will bear the cost of that in a statutory assessment.  As a consequence, children are often several years ‘behind’ before they even get assessed, and the support they get is often in the form of a number of hours working with a teaching assistant who’s unlikely to be a qualified teacher, let alone a speech and language therapist, occupational therapist or physio.

If governments want to tackle the challenge of functional illiteracy, they need to invest in services that can address the root causes.

reference

Johnston, R & Watson, J (2005). The Effects of Synthetic Phonics teaching on reading and spelling attainment: A seven year longitudinal study. The Scottish Executive website http://www.gov.scot/Resource/Doc/36496/0023582.pdf

the view from the signpost: learning styles

Discovering that some popular teaching approaches (Learning Styles, Brain Gym, Thinking Hats) have less-than-robust support from research has prompted teachers to pay more attention to the evidence for their classroom practice. Teachers don’t have much time to plough through complex research findings. What they want are summaries, signposts to point them in the right direction. But research is a work in progress. Findings are often not clear-cut but contradictory, inconclusive or ambiguous. So it’s not surprising that some signposts – ‘do use synthetic phonics’, ‘don’t use Learning Styles’ – often spark heated discussion. The discussions often cover the same ground. In this post, I want to look at some recurring issues in debates about synthetic phonics (SP) and Learning Styles (LS).

Take-home messages

Synthetic phonics is an approach to teaching reading that begins by developing children’s awareness of the phonemes within words, links the phonemes with corresponding graphemes, and uses the grapheme-phoneme correspondence to decode the written word. Overall, the reading acquisition research suggests that SP is the most efficient method we’ve found to date of teaching reading. So the take-home message is ‘do use synthetic phonics’.

What most teachers mean by Learning Styles is a specific model developed by Fleming and Mills (1992) derived from the theory behind Neuro-Linguistic Programming. It proposes that students learn better in their preferred sensory modality – visual, aural, read/write or kinaesthetic (VARK). (The modalities are often reduced in practice to VAK – visual, auditory and kinaesthetic.) But ‘learning styles’ is also a generic term for a multitude of instructional models used in education and training. Coffield et al (2004) identified no fewer than 71 of them. Coffield et al’s evaluation didn’t include the VARK or VAK models, but a close relative – Dunn and Dunn’s Learning Styles Questionnaire – didn’t fare too well when tested against Coffield’s reliability and validity criteria (p.139). Other models did better, including Allinson and Hayes’ Cognitive Styles Index, which met all the criteria.

The take-home message for teachers from Coffield and other reviews is that, given the variation in validity and reliability between learning styles models, it isn’t worth investing time and effort in any learning styles approach to teaching. So far so good. If the take-home messages are clear, why the heated debate?

Lumping and splitting

‘Lumping’ and ‘splitting’ refer to different ways in which people categorise specific examples; they’re terms used mainly by taxonomists. ‘Lumpers’ tend to use broad categories and ‘splitters’ narrow ones. Synthetic phonics proponents rightly emphasise precision in the way systematic synthetic phonics (SSP) is used to teach children to read. SSP is a systematic, not a scattergun, approach; it involves building up words from phonemes rather than breaking words down into phonemes, and developing phonemic awareness rather than looking at pictures or word shapes. SSP advocates are ‘splitters’ extraordinaire – in respect of SSP practice at least. Learning styles critics, by contrast, tend to lump all learning styles together, often failing to make a distinction between LS models.

SP proponents also become ‘lumpers’ where other approaches to reading acquisition are concerned. Whether it’s whole language, whole words or mixed methods, it makes no difference… it’s not SSP. And both SSP proponents and LS critics are often ‘lumpers’ in respect of the research behind the particular take-home message they’ve embraced so enthusiastically. So what? Why does lumping or splitting matter?

Lumping all non-SSP reading methods together or all learning styles models together matters because the take-home messages from the research are merely signposts pointing busy practitioners in the right direction, not detailed maps of the territory. The signposts tell us very little about the research itself. Peering at the research through the spectacles of the take-home message is likely to produce a distorted view.

The distorted view from the signpost

The research process consists of several stages, including those illustrated in the diagram below.
[diagram: theory to application]
Each stage might include several elements. Some of the elements might eventually emerge as robust (green), others might turn out to be flawed (red). The point of the research is to find out which is which. At any given time it will probably be unclear whether some components at each stage of the research process are flawed or not. Uncertainty is an integral part of scientific research. The history of science is littered with findings initially dismissed as rubbish that later ushered in a sea-change in thinking, and others greeted as the Next Big Thing that have since been consigned to the trash.

Some of the SP and LS research findings have been contradictory, inconclusive or ambiguous. That’s par for the course. Despite the contradictions, unclear results and ambiguities, there might be general agreement about which way the signposts for practitioners are pointing. That doesn’t mean it’s OK to work backwards from the signpost and make assumptions about the research. In the diagram, there’s enough uncertainty in the research findings to put a question mark over all potential applications. But all the question mark itself tells us is that there’s uncertainty involved. A minor tweak to the theory could explain the contradictory, inconclusive or ambiguous results, and then it would be green lights all the way down.

But why does that matter to teachers? It’s the signposts that are important to them, not the finer points of research methodology or statistical analysis. It matters because some of the teachers who are the most committed supporters of SP or critics of LS are also the most vociferous advocates of evidence-based practice.

Evidence: contradictory, inconclusive or ambiguous?

Decades of research into reading acquisition broadly support the use of synthetic phonics for teaching reading, although many of the research findings aren’t unambiguous. One example is the study carried out in Clackmannanshire by Rhona Johnston and Joyce Watson. The overall conclusion is that SP leads to big improvements in reading and spelling, but closer inspection of the results shows they are not entirely clear-cut, and the study’s methodology has been criticised. But you’re unlikely to know that if you rely on SP advocates for an evaluation of the evidence. Personally, I can’t see a problem with saying ‘the research evidence broadly supports the use of synthetic phonics for teaching reading’ and leaving it at that.

The evidence relating to learning styles models is also not watertight, although in this case, it suggests they are mostly not effective. But again, you’re unlikely to find out about the ambiguities from learning styles critics. Tom Bennett, for example, doesn’t like learning styles – as he makes abundantly clear in a TES blog post entitled “Zombie bølløcks: World War VAK isn’t over yet.”

The post is about the VAK Learning Styles model. But in the ‘Voodoo teaching’ chapter of his book Teacher Proof, Bennett concludes about learning styles in general: “it is of course, complete rubbish as far as I can see” (p.147). Then he hedges his bets in a footnote: “IN MY OPINION”.

Tom’s an influential figure – government behaviour adviser, driving force behind the ResearchEd conferences and a frequent commentator on educational issues in the press. He’s entitled to lump together all learning styles models if he wants to, and to write colourful opinion pieces about them if he gets the chance, but presenting the evidence in terms of his opinion, and missing out evidence that doesn’t support his opinion, is misleading. It’s also at odds with an evidence-based approach to practice. Saying there’s mixed evidence for the effectiveness of learning styles models doesn’t take more words than implying there’s none.

So why don’t supporters in the case of SP, or critics in the case of LS, say what the evidence says, rather than what the signposts say? I’d hazard a guess it’s because they’re worried that teachers will see contradictory, inconclusive or ambiguous evidence as providing a loophole that gives them licence to carry on with their pet pedagogies regardless. But the risk of looking at the signpost rather than the evidence is that one set of dominant opinions will be replaced by another.

In the next few posts, I’ll be looking more closely at the learning styles evidence and what some prominent critics have to say about it.

Note:

David Didau responded to my thoughts about signposts and learning styles on his blog. Our discussion in the comments section revealed that he and I use the term ‘evidence’ to mean different things. Using words in different ways. Could explain everything.

References
Coffield F., Moseley D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Fleming, N. & Mills, C. (1992). Not Another Inventory, Rather a Catalyst for Reflection. To Improve the Academy. Professional and Organizational Development Network in Higher Education. Paper 246.

is systematic synthetic phonics generating neuromyths?

A recent Twitter discussion about systematic synthetic phonics (SSP) was sparked by a note to parents of children in a reception class, advising them what to do if their children got stuck on a word when reading. The first suggestion was to “encourage them to sound out unfamiliar words in units of sound (e.g. ch/sh/ai/ea) and to try to blend them”. If that failed: “can they use the pictures for any clues?” Two other strategies followed. The ensuing discussion began by questioning the wisdom of using pictures for clues and then went off at many tangents – not uncommon in conversations about SSP.
[image: richard adams reading clues]

SSP proponents are, rightly, keen on evidence. The body of evidence supporting SSP is convincing, but it’s not the easiest to locate; much of the research predates the internet by decades or is behind a paywall. References are often to books, magazine articles or anecdote; not to be discounted, but not what usually passes for research. As a consequence, it’s quite a challenge to build up an overview of the evidence for SSP that’s free of speculation, misunderstandings and superseded theory. The tangents that came up in this particular discussion are, I suggest, the result of assuming that if something is true for SSP in particular, it must also be true for reading, perception, development or biology in general. Here are some of the inferences that came up in the discussion.

You can’t guess a word from a picture
Children’s books are renowned for their illustrations. Good illustrations can support or extend the information in the text, showing readers what a chalet, a mountain stream or a pine tree looks like, for example. Author and artist usually have detailed discussions about illustrations to ensure that the book forms an integrated whole and is not just a text with embellishments.

If the child is learning to read, pictures can serve to focus attention (which could be wandering anywhere) on the content of the text and can have a weak priming effect, increasing the likelihood of the child accessing relevant words. If the picture shows someone climbing a mountain path in the snow, the text is unlikely to contain words about sun, sand and ice-creams.

I understand why SSP proponents object to the child being instructed to guess a particular word by looking at a picture; the guess is likely to be wrong and the child distracted from decoding the word. But some teachers don’t seem to be keen on illustrations per se. As one teacher put it, they are “often superficial time consuming detract from learning”.

Cues are clues are guesswork
The note to parents referred to ‘clues’ in the pictures. One contributor cited a blogpost that claimed “with ‘mixed methods’ eyes jump around looking for cues to guess from”. Clues and cues are often used interchangeably in discussions about phonics on social media. That’s understandable; the words have similar meanings and a slip on the keyboard can transform one into the other. But in a discussion about reading methods, the distinction between guessing, clues and cues is an important one.

Guessing involves drawing conclusions in the absence of enough information to give you a good chance of being right; it’s haphazard, speculative. A clue is a piece of information that points you in a particular direction. A cue has a more specific meaning depending on context; e.g. theatrical cues, social cues, sensory cues. In reading research, a cue is a piece of information about something the observer is interested in or a property of a thing to be attended to. It could be the beginning sound or end letter of a word, or an image representing the word. Cues are directly related to the matter in hand, clues are more indirectly related, guessing is a stab in the dark.

The distinction is important because if teachers are using the terms cue and clue interchangeably and assuming they both involve guessing there’s a risk they’ll mistakenly dismiss references to ‘cues’ in reading research as guessing or clues, which they are not.

Reading isn’t natural
Another distinction that came up in the discussion was the idea of natural vs. non-natural behaviours. One argument for children needing to be actively taught to read rather than picking it up as they go along is that reading, unlike walking and talking, isn’t a ‘natural’ skill. The argument goes that reading is a relatively recent technological development so we couldn’t possibly have evolved mechanisms for reading in the same way as we have evolved mechanisms for walking and talking. One proponent of this idea is Diane McGuinness, an influential figure in the world of synthetic phonics.

The argument rests on three assumptions. The first is that we have evolved specific mechanisms for walking and talking but not for reading. The ideas that evolution has an aim or purpose, and that if everybody does something we must have evolved a dedicated mechanism to do it, are strongly contested by those who argue instead that we can do what our anatomy and physiology enable us to do (see the arguments over Chomsky’s linguistic theory). But you wouldn’t know about that long-standing controversy from reading McGuinness’s books or comments from SSP proponents.

The second assumption is that children learn to walk and talk without much effort or input from others. One teacher called the natural/non-natural distinction “pretty damn obvious”. But sometimes the pretty damn obvious isn’t quite so obvious when you look at what’s actually going on. By the time they start school, the average child will have rehearsed walking and talking for thousands of hours. And most toddlers experience a considerable input from others when developing their walking and talking skills even if they don’t have what one contributor referred to as a “WEIRDo Western mother”. Children who’ve experienced extreme neglect (such as those raised in the notorious Romanian orphanages) tend to show significant developmental delays.

The third assumption is that learning to use technological developments requires direct instruction. Whether it does or not depends on the complexity of the task. Pointy sticks and heavy stones are technologies used in foraging and hunting, but most small children can figure out for themselves how to use them – as do chimps and crows. Is the use of sticks and stones by crows, chimps or hunter-gatherers natural or non-natural? A bicycle is a man-made technology more complex than sticks and stones, but most people are able to figure out how to ride a bike simply by watching others do it, even if a bit of practice is needed before they can do it themselves. Is learning to ride a bike with a bit of support from your mum or dad natural or non-natural?

Reading English is a more complex task than riding a bike because of the number of letter-sound correspondences. You’d need a fair amount of watching and listening to written language being read aloud to be able to read for yourself. And you’d need considerable instruction and practice before being able to fly a fighter jet because the technology is massively more complex than that involved in bicycles and alphabetic scripts.

One teacher asked “are you really going to go for the continuum fallacy here?” No idea why he considers a continuum a fallacy. In the natural/non-natural distinction used by SSP proponents there are three continua involved:

• the complexity of the task
• the length of rehearsal time required to master the task, and
• the extent of input from others that’s required.

Some children learn to read simply by being read to, reading for themselves and asking for help with words they don’t recognise. But because reading is a complex task, for most children learning to read by immersion like that would take thousands of hours of rehearsal. It makes far more sense to cut to the chase and use explicit instruction. In principle, learning to fly a fighter jet would be possible through trial-and-error, but it would be a stupidly costly approach to training pilots.

Technology is non-biological
I was told by several teachers that reading, riding a bike and flying an aircraft weren’t biological functions. I fail to see how they can’t be, since all involve human beings using their brain and body. It then occurred to me that the teachers were equating ‘biological’ with ‘natural’, or with the human body alone. In other words, if you acquire a skill that involves only body parts (e.g. walking or talking) it’s biological. If it involves anything other than a body part, it’s not biological. Not sure where that leaves hunting with wooden spears, making baskets or weaving woollen fabric using a wooden loom and shuttle.

Teaching and learning are interchangeable
Another tangent was whether or not learning is involved in sleeping, eating and drinking. I contended that it is; newborns do not sleep, eat or drink in the same way as most of them will be sleeping, eating or drinking nine months later. One teacher kept telling me they don’t need to be taught to do those things. I can see why teachers often conflate teaching and learning, but they are not two sides of the same coin. You can teach children things but they might fail to learn them. And children can learn things that nobody has taught them. It’s debatable whether or not parents shaping a baby’s sleeping routine, spoon feeding them or giving them a sippy cup instead of a bottle count as teaching, but it’s pretty clear there’s a lot of learning going on.

What’s true for most is true for all
I was also told by one teacher that all babies crawl (an assertion he later modified) and by a school governor that they can all suckle (an assertion that wasn’t modified). Sweeping generalisations like this coming from people working in education are worrying. Children vary. They vary a lot. Even if only 0.1% of children do or don’t do something, that would involve around 8 000 children in English schools (out of roughly 8 million pupils). Some and most are not all or none, and teachers of all people should be aware of that.

A core factor in children learning to read is the complexity of the task. If the task is a complex one, like reading, most children are likely to learn more quickly and effectively if you teach them explicitly. You can’t infer from that that all children are the same, that they all learn in the same way, or that teaching and learning are two sides of the same coin. Nor can you infer from a tenuous argument used to justify the use of SSP that distinctions between natural and non-natural or biological and technological are clear, obvious, valid or helpful. The evidence that supports SSP is the evidence that supports SSP. It doesn’t provide a general theory for language, education or human development.

jumping the literacy hurdle

Someone once said that getting a baby dressed was like trying to put an octopus into a string bag. I was reminded of that during another recent discussion with synthetic phonics (SP) advocates. The debate was triggered by this comment; “Surely, the most fundamental aim of schools is to teach children to read.”

This sentence looks like an essay question for trainee teachers – if they’re still expected to write essays, that is. It encapsulates what has frustrated me so much about the SP ‘position’; all those implicit assumptions.

First, there is no ‘surely’ about any aspect of education. You name it, there’s been heated debate about it. Second, it’s not safe to assume schools should have a ‘most fundamental’ aim. Education is a complex business and generally involves quite a few fundamental aims; focussing on one rather than the others is a risky strategy. Third, the sentence assumes a role for literacy that requires some justification.

reading in the real world

Reading is our primary means of recording spoken language. It provides a way of communicating with others across space and time. It extends working memory. It’s important. But in a largely literate society it’s easy to assume that all members of that society are, should be, or need to be equally literate. They’re not. They never have been. And I’ve yet to find any evidence showing that uniform literacy across the population is either achievable or necessary.

I’m not claiming that it doesn’t matter if someone isn’t a competent reader or if 15% of school leavers are functionally illiterate. What I am claiming is that less than 100% functional literacy doesn’t herald the end of civilisation as we know it.

For thousands of years, functionally illiterate people have grown food, baked, brewed, made clothes, pots, pans, furniture, tools, weapons and machines, built houses, palaces, cities, chariots, sailing ships, dams and bridges, navigated halfway around the world, formed exquisite glassware and stunning jewellery, composed songs, poems and plays, devised judicial systems and developed sophisticated religious beliefs.

All those things require knowledge and skill – but not literacy. The quality of human life has undoubtedly been transformed by literacy, and transformed for the better. But literacy is a vehicle for knowledge, a means to an end not an end in itself. It’s important, not for its own sake but because of what it has enabled us – collectively – to achieve. I’m not disparaging reading for enjoyment; but reading for enjoyment didn’t change the world.

What the real world needs is not for everyone to be functionally literate, but for a critical mass of people to be functionally literate. And for some people to be so literate that they can acquire complex skills and knowledge that can benefit the rest of us. What proportion of people need to be functionally or highly literate will depend on what a particular society wants to achieve.

Human beings are a highly social species. Our ecological success (our ability to occupy varied habitats – what we do to those habitats is something else entirely) is due to our ability to solve problems, to communicate those solutions to each other and to work collectively. What an individual can or can’t do is important, but what we can do together is more important because that’s a more efficient way of using resources for mutual benefit.

This survey found that 20% of professionals and 30% of managers don’t have adequate literacy skills. It’s still possible to hold down a skilled job, draw a good salary, drive a car, get a mortgage, raise a family and retire on an adequate pension even if your literacy skills are flaky. Poor literacy might be embarrassing and require some ingenious workarounds to cover it up, but that’s more of a problem with social acceptability than utility. And plenty of jobs don’t require you to be a great reader.

It looks as though inadequate literacy, although an issue in the world of work, isn’t an insurmountable obstacle. So why would anyone claim that teaching children to read is ‘the most fundamental aim of schools’?

reading in schools

There are several reasons. Mass education systems were set up partly to provide manufacturing industry with a literate, numerate workforce. Schools in those fledgling education systems were often run on shoestring budgets. If a school had very limited resources, making reading a priority at least provided children with the opportunity to educate themselves in later life. Literacy takes time to develop, so if you have the luxury of being able to teach additional subjects, it makes sense to access them via reading and writing – thus killing two birds with one stone. Lastly, because for a variety of reasons public examinations are written ones, literacy is a key measure of pupil and school achievement.

In the real world, if you find reading especially difficult you can still learn a lot – by watching and listening or trial and error. But the emphasis schools place on literacy means that if in school you happen to be a child who finds reading especially difficult, you’re stumped. You can’t even compensate by becoming knowledgeable if you’re required to jump the literacy hurdle first. And poor knowledge, however literate you are, is a big problem in the real world.

SP advocates would say that the reason some children find reading difficult is because they haven’t been taught properly. And that if they were taught properly they would be able to read. That’s a possible explanation, but one possible explanation doesn’t rule out all the other possible explanations. And if Jeanne Chall’s descriptions of teachers’ approaches to formal reading instruction programmes are anything to go by, it’s unlikely that all children are going to get taught to read ‘properly’ any time soon. If some children have problems learning to read for whatever reason, we need to make sure that they’re not denied access to knowledge as well. Because in the real world, it’s knowledge that makes things work.

Now for some of the arms of the reading octopus that got tangled up in the string bag that is Twitter.

• I’m not saying reading isn’t important; it is – but that doesn’t make it the ‘fundamental aim of schools’, nor ‘a fundamental skill needed for life’.
• I’m not saying children shouldn’t be taught to read; they should be, but variation in reading ability doesn’t automatically mean a ‘deficit’ in instruction, home life or in the child.
• I’m not saying some children struggle to read because they are ‘less able’ than others; some kids find reading especially challenging but that has nothing to do with their intelligence.
• Nor am I saying we shouldn’t have high aspirations for students; we should, but there’s no reason to have the same aspirations for all of them. Our strength as a species is in our diversity.

Frankly, if forced to choose, I’d rather live in a community populated by competent, practical people with reading skills that left something to be desired, than one populated by people with, say, PPE degrees from Oxford who’ve forgotten which way is up.

synthetic phonics, dyslexia and natural learning

Too intense a focus on the virtues of synthetic phonics (SP) can, it seems, result in related issues getting a bit blurred. I discovered that some whole language supporters do appear to have been ideologically motivated, but that the whole language approach didn’t originate in ideology. And as far as I can tell, we don’t know whether SP can reduce adult functional illiteracy rates. But I wouldn’t have known either of those things from the way SP is framed by its supporters. SP proponents also make claims about how the brain is involved in reading. In this post I’ll look at two of them: dyslexia and natural learning.

Dyslexia

Dyslexia started life as a descriptive label for the reading difficulties adults can develop due to brain damage caused by a stroke or head injury. Some children were observed to have similar reading difficulties despite otherwise normal development. The adults’ dyslexia was acquired (they’d previously been able to read) but the children’s dyslexia was developmental (they’d never learned to read). The most obvious conclusion was that the children also had brain damage – but in the early 20th century when the research started in earnest there was no easy way to determine that.

Medically, developmental dyslexia is still only a descriptive label meaning ‘reading difficulties’ (causes unknown, might/might not be biological, might vary from child to child). However, dyslexia is now also used to denote a supposed medical condition that causes reading difficulties. This new usage is something that Diane McGuinness complains about in Why Children Can’t Read.

I completely agree with McGuinness that this use isn’t justified and has led to confusion and unintended and unwanted outcomes. But I think she muddies the water further by peppering her discussion of dyslexia (pp. 132-140) with debatable assertions such as:

“We call complex human traits ‘talents’”.

“Normal variation is on a continuum but people working from a medical or clinical model tend to think in dichotomies…”.

“Reading is definitely not a property of the human brain”.

“If reading is a biological property of the brain, transmitted genetically, then this must have occurred by Lamarckian evolution.”

Why debatable? Because complex human traits are not necessarily ‘talents’; clinicians tend to be more aware of normal variation than most people; reading must be a ‘property of the brain’ if we need a brain to read; and the research McGuinness refers to didn’t claim that ‘reading’ was transmitted genetically.

I can understand why McGuinness might be trying to move away from the idea that reading difficulties are caused by a biological impairment that we can’t fix. After all, the research suggests SP can improve the poor phonological awareness that’s strongly associated with reading difficulties. I get the distinct impression, however, that she’s uneasy with the whole idea of reading difficulties having biological causes. She concedes that phonological processing might be inherited (p.140) but then denies that a weakness in discriminating phonemes could be due to organic brain damage. She’s right that brain scans had revealed no structural brain differences between dyslexics and good readers. And in scans that show functional variations, the ability to read might be a cause, rather than an effect.

But as McGuinness herself points out, reading is a complex skill involving many brain areas, and biological mechanisms tend to vary between individuals. In a complex biological process there’s a lot of scope for variation. Poor phonological awareness might be a significant factor, but it might not be the only factor. A child with poor phonological awareness plus visual processing impairments plus limited working memory capacity plus slow processing speed – all factors known to be associated with reading difficulties – would be unlikely to find those difficulties eliminated by SP alone. The risk in conceding that reading difficulties might have biological origins is that using teaching methods to remediate them might then be called into question – just what McGuinness doesn’t want to happen, and for good reason.

Natural and unnatural abilities

McGuinness’s view of the role of biology in reading seems to be derived from her ideas about the origin of skills. She says:

“It is the natural abilities of people that are transmitted genetically, not unnatural abilities that depend upon instruction and involve the integration of many subskills”. (p.140, emphasis McGuinness)

This is a distinction often made by SP proponents. I’ve been told that children don’t need to be taught to walk or talk because these abilities are natural and so develop instinctively and effortlessly. Written language, in contrast, is a recent man-made invention; there hasn’t been time to evolve a natural mechanism for reading, so we need to be taught how to do it and have to work hard to master it. Steven Pinker, who wrote the foreword to Why Children Can’t Read, seems to agree. He says “More than a century ago, Charles Darwin got it right: language is a human instinct, but written language is not” (p.ix).

Although that’s a plausible model, what Pinker and McGuinness fail to mention is that it’s also a controversial one. The part played by nature and nurture in the development of language (and other abilities) has been the subject of heated debate for decades. The reason for the debate is that the relevant research findings can be interpreted in different ways. McGuinness is entitled to her interpretation but it’s disingenuous in a book aimed at a general readership not to tell readers that other researchers would disagree.

Research evidence suggests that the natural/unnatural skills model has got it wrong. The same natural/unnatural distinction was made recently in the case of part of the brain called the fusiform gyrus. In the fusiform gyrus, visual information about objects is categorised. Different types of objects, such as faces, places and small items like tools, have their own dedicated locations. Because those types of objects are naturally occurring, researchers initially thought their dedicated locations might be hard-wired.

But there’s also a word recognition area. And in experts, the faces area is also used for cars, chess positions and specially invented items called greebles. To become an expert in any of those things you require some instruction – you’d need to learn the rules of chess or the names of cars or greebles. But your visual system can still learn to accurately recognise, discriminate between and categorise many thousands of items like faces, places, tools, cars, chess positions and greebles simply through hours and hours of visual exposure.

Practice makes perfect

What claimants for ‘natural’ skills also tend to overlook is how much rehearsal goes into them. Most parents don’t actively teach children to talk, but babies hear and rehearse speech for many months before they can say recognisable words. Most parents don’t teach toddlers to walk, but it takes young children years to become fully stable on their feet despite hours of daily practice.

There’s no evidence that, as far as the brain is concerned, there’s any difference between ‘natural’ and ‘unnatural’ knowledge and skills. How much instruction and practice knowledge or skills require will depend on their transparency and complexity. Walking and bike-riding are pretty transparent; you can see what’s involved by watching other people. But they take a while to learn because of the complexity of the motor co-ordination and balance involved. Speech and reading are less transparent and more complex than walking and bike-riding, so take much longer to master. But some children require intensive instruction in order to learn to speak, and many children learn to read with minimal input from adults. The natural/unnatural distinction is a false one, and it’s as unhelpful as assuming that reading difficulties are caused by ‘dyslexia’.

Multiple causes

What underpins SP proponents’ reluctance to admit biological factors as causes for reading difficulties is, I suspect, an error often made when assessing cause and effect. It’s an easy one to make, but one that people advocating changes to public policy need to be aware of.

Let’s say for the sake of argument that we know, for sure, that reading difficulties have three major causes, A, B and C. The one that occurs most often is A. We can confidently predict that children showing A will have reading difficulties. What we can’t say, without further investigation, is whether a particular child’s reading difficulties are due to A. Or if A is involved, that it’s the only cause.

We know that poor phonological awareness is frequently associated with reading difficulties. Because SP trains children to be aware of phonological features in speech, and because that training improves word reading and spelling, it’s a safe bet that poor phonological awareness is also a cause of reading difficulties. But because reading is a complex skill, there are many possible causes for reading difficulties. We can’t assume that poor phonological awareness is the only cause, or that it’s a cause in all cases.

The evidence that SP improves children’s decoding ability is persuasive. However, the evidence also suggests that 12-15% of children will still struggle to learn to decode using SP, and that around 15% of children will struggle with reading comprehension. Having a method of reading instruction that works for most children is great, but education should benefit all children, and since the minority of children who struggle are the ones people keep complaining about, we need to pay attention to what causes reading difficulties for those children – as individuals. In education, one size might fit most, but it doesn’t fit all.

Reference

McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.

Synthetic phonics and functional literacy: the missing link

According to Diane McGuinness in Why Children Can’t Read, first published in 1997, California’s low 4th grade reading scores prompted the state in 1996 to revert to using phonics rather than ‘real books’ for teaching reading. McGuinness, like the legislators in California, clearly expected phonics to make a difference to reading levels. It appears to have had little impact (NCES, 2013). McGuinness would doubtless point out that ‘phonics’ isn’t systematic synthetic phonics, and that might have made a big difference. Indeed it might. We don’t know.

Synthetic phonics and functional literacy

Synthetic phonics is important because it can break a link in a causal chain that leads to functional illiteracy:

• poor phonological awareness ->
• poor decoding ->
• poor reading comprehension ->
• functional illiteracy and low educational attainment

The association between phonological awareness and reading difficulties is well established. And obviously, if you can’t decode text you won’t understand it, and if you can’t understand text your educational attainment won’t be very high.

SP involves training children to detect, recognise and discriminate between phonemes, so we’d expect it to improve phonological awareness and decoding skills, and that’s exactly what studies have shown. But as far as I can tell, we don’t know what impact SP has on the rest of the causal chain: on functional literacy rates in school leavers or on overall educational attainment.

This is puzzling. The whole point of teaching children to read is so they can be functionally literate. The SP programmes McGuinness advocates have been available for at least a couple of decades, so there’s been plenty of time to assess their impact on functional literacy. One of them, Phono-graphix (developed by a former student of McGuinness’s, now her daughter-in-law), has been the focus of several peer-reviewed studies, all of which report improvements, but none of which appears to have assessed the impact on functional literacy by school leaving age. SP proponents have pointed out that this might be because they’ve had enough difficulty getting policy-makers to take SP seriously, let alone fund long-term pilot studies.

The Clackmannanshire study

One study that did involve SP and followed the development of literacy skills over time was carried out in Clackmannanshire in Scotland by Rhona Johnston and Joyce Watson, then based at the University of Hull and the University of St Andrews respectively.

They compared three reading instruction approaches implemented in Primary 1 and tracked children’s performance in word reading, spelling and reading comprehension up to Primary 7. The study found very large gains in word reading (3y 6m; fig 1) and spelling (1y 9m; fig 2) for the group of children who’d had the SP intervention. The report describes reading comprehension as “significantly above chronological age throughout”. What it’s referring to is a 7-month advantage in P1 that had reduced to a 3.5-month advantage by P7.

A noticeable feature of the Clackmannanshire study is that scores were presented as group means, although boys’ and girls’ scores and those of advantaged and disadvantaged children were differentiated. One drawback of aggregating scores this way is that it can mask effects within the groups: an intervention might be followed by a statistically significant average improvement that’s driven by some children improving a great deal while others improve very little, or not at all.

This is exactly what we see in the data on ‘underachievers’ (fig 9). Despite large improvements at the group level, by P7, 5% of children were more than two years behind their chronological age norm for word reading, 10% for spelling and 15% for reading comprehension. The improvements in group scores on word reading and spelling increased with age – but so did the proportion of children who were more than two years behind. This is an example of the ‘Matthew effect’ that Keith Stanovich refers to: children who can decode read more, so their reading improves, whereas children who can’t decode don’t read, so don’t improve. For the children in the Clackmannanshire study as a group, SP significantly improved word reading and spelling and slightly improved their comprehension, but it didn’t eliminate the Matthew effect.
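
The way group means can hide this is easy to reproduce. Below is a minimal Python simulation – the numbers are invented for illustration and are not the Clackmannanshire data – in which most children pull well ahead of their age norm while the weakest decoders fall further behind it, a Matthew-effect-like pattern:

```python
import random

random.seed(2)

N = 300
# invented reading ages relative to chronological age, in months, at P1
p1 = [random.gauss(0, 6) for _ in range(N)]

# most children gain on the age norm; the weakest decoders lose ground to it
p7 = [s + (36 if s > -6 else -20) + random.gauss(0, 6) for s in p1]

def summary(label, scores):
    mean = sum(scores) / len(scores)
    behind = sum(x < -24 for x in scores) / len(scores)  # > 2 years behind
    print(f"{label}: mean advantage {mean:+5.1f} months, "
          f"{behind:.0%} more than two years behind")

summary("P1", p1)
summary("P7", p7)
```

With these invented numbers the group’s mean advantage grows to roughly two years while more than a tenth of the cohort ends up over two years behind – a large, statistically respectable average gain and a growing tail of strugglers are entirely compatible.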

The phonics check

There’s a similar within-group variation in the English KS1 phonics check, introduced in 2012. Ignoring the strange shape of the graph in 2012 and 2013 (though Dorothy Bishop’s observations are worth reading), the percentage of Year 2 children who scored below the expected standard was 15% in 2013 and 12% in 2014. The sharp increase at the cut-off point suggests that there are two populations of children – those who grasp phonics and those who don’t. Or that most children have been taught phonics properly but some haven’t. There’s also a spike at the end of the long tail of children who don’t quite ‘get’ phonics for whatever reason, representing the 5783 children who scored 0.

It’s clear that SP significantly improves children’s ability to decode and spell – at the group level. But we don’t appear to know whether that improvement is due to children who can already decode a bit getting much better at it, or to children who previously couldn’t decode learning to do it, or both, or if there are some children for whom SP has no impact.

And I have yet to find evidence showing that SP reduces the rates of functional illiteracy that McGuinness, politicians and the press complain about. The proportion of school leavers who have difficulty with reading comprehension has hovered around 17% for decades in the US (NCES, 2013) and in the UK (Rashid & Brooks, 2010). A similar proportion of children in the US and the UK populations have some kind of learning difficulty. And according to the Warnock report that figure appears to have been stable in the UK since mass education was introduced.

The magical number 17 plus or minus 2

There’s a likely explanation for that 17% (or thereabouts). In a large population, some features (such as height, weight, IQ or reading ability) are the outcome of what are essentially random variables. If you measure one of those features across the population and plot a graph of your measurements, they will form what’s commonly referred to as a normal distribution – with the familiar bell curve shape. The curve will be symmetrical around the mean (average) score. Not only does that tell you that 50% of your population will score above the mean and 50% below it, it also enables you to predict what proportion of the population will be significantly taller/shorter, lighter/heavier, more/less intelligent or better/worse at reading than average. Statistically, around 16% of the population will score more than one standard deviation below the mean. Those people will be significantly shorter/lighter/less intelligent or have more difficulties with reading than the rest of the population.
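
That 16% isn’t an empirical finding about reading; it’s a property of the normal curve, which takes two lines of Python to confirm:

```python
from statistics import NormalDist

# proportion of a normal distribution lying more than one SD below the mean
print(f"{NormalDist().cdf(-1):.1%}")  # prints 15.9%
```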

Bell curves tend to ring alarm bells, so I need to make it clear what I am not saying. I’m not saying that problems with reading are due to a ‘reading gene’ or to biology or IQ and that we can’t do anything about them. What I am saying is that if reading ability in a large population is the outcome of not just one factor, but many factors that are to all intents and purposes random, then it’s a pretty safe bet that around 16% of children will have a significant problem with it. What’s important for that 16% is figuring out what factors are causing reading problems for individual children within that group. There are likely to be several different causes, as the NCES (1993) study found. So a child might have reading difficulties due to persistent glue ear as an infant, an undiagnosed developmental disorder, having a mother with mental health problems who hardly speaks to them, having no books at home, or because their family dismisses reading as pointless. Or all of the above. SP might help, but is unlikely to address all of the obstacles to word reading, spelling and comprehension that the child faces.
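
The ‘many random factors’ point is also easy to check. In the sketch below (all factors invented), each child’s reading ability is just the sum of 20 small, independent influences; none of them is normally distributed, yet the sums pile up into a bell curve and roughly 16% of children land more than one standard deviation below the mean:

```python
import random
from statistics import mean, stdev

random.seed(3)

def ability():
    # sum of 20 small, independent, non-normal (uniform) factors
    return sum(random.random() for _ in range(20))

scores = [ability() for _ in range(100_000)]
m, sd = mean(scores), stdev(scores)
tail = sum(s < m - sd for s in scores) / len(scores)
print(f"{tail:.0%} of children more than one SD below the mean")  # ~16%
```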

The data show that SP enables 11-year-olds as a group to make huge gains in their word reading and spelling skills. That’s brilliant. Let’s use synthetic phonics.

The data also show that SP doesn’t eliminate reading comprehension problems for at least 15% of 11-year-olds – or the word reading problems of around 15% of 6-7-year-olds. That could be due to some SP programmes not being taught systematically enough, intensively enough or for long enough. But it could be due to other causes. If so, those causes need to be identified and addressed, or the child’s functional literacy will remain at risk.

I can see why the Clackmannanshire study convinced the UK government to recommend then mandate the use of SP for reading instruction in English schools (things are different in Scotland), but I haven’t yet found a follow-up study that measured literacy levels at 16, or the later impact on educational attainment; and the children involved in the study would now be in their early 20s.

What concerns me is that if more is being implicitly claimed for SP than it can actually deliver, or if it fails to deliver a substantial improvement in the functional literacy of school leavers in a decade’s time, then it’s likely to be seen as yet another educational ‘fad’ and abandoned, regardless of the gains it brings in decoding and spelling. Meanwhile, the many other factors involved in reading comprehension are at risk of being marginalised if policy-makers pin their hopes on SP alone. Which just goes to show why nationally mandated educational policies should be thoroughly piloted and evaluated before they are foisted on schools.


References

McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.
NCES (1993). Adult Literacy in America. National Center for Education Statistics.
NCES (2013). Trends in Academic Progress. National Center for Education Statistics.
Rashid, S. & Brooks, G. (2010). The levels of attainment in literacy and numeracy of 13- to 19-year-olds in England, 1948–2009. National Research and Development Centre for Adult Literacy and Numeracy.