Improving reading from Clackmannanshire to West Dunbartonshire

In the 1990s, two different studies began tracking the outcomes of reading interventions in Scottish schools. One, run by Joyce Watson and Rhona Johnston, then at the University of St Andrews, started in 1992/3 in schools in Clackmannanshire, which hugs the River Forth just to the east of Stirling. The other began in 1998 in West Dunbartonshire, with the Clyde on one side and Loch Lomond on the other, west of Glasgow. It was led by Tommy MacKay, an educational psychologist with West Dunbartonshire Council, who also lectured in psychology at the University of Strathclyde.

I’ve blogged about the Clackmannanshire study in more detail here. It was an experiment involving 13 schools and 300 children divided into three groups, taught to read using synthetic phonics, analytic phonics or analytic phonics plus phonemic awareness. The researchers measured and compared the outcomes.

The West Dunbartonshire study had a more complex design, involving five different studies and ten strands of intervention over ten years in all pre-schools and primary schools in the local authority area (48 schools and 60 000 children). As in Clackmannanshire, analytic phonics was used as a control for the synthetic phonics experimental group. The study also had an aim: to eradicate functional illiteracy in school leavers in West Dunbartonshire. It very nearly succeeded; Achieving the Vision, the final report, shows that by the time the study finished in 2007 only three children were deemed functionally illiterate. (Thanks to @SaraJPeden on Twitter for the link.)

Five studies, ten strands of intervention

The main study was a multiple-component intervention using a cross-lagged design. The four supporting studies were:

  • Synthetic phonics study (18 schools)
  • Attitudes study (24 children from earlier RCT)
  • Declaration study (12 nurseries & primaries in another education authority area)
  • Individual support study (24 secondary pupils).

The West Dunbartonshire study was unusual in that it addressed multiple factors already known to impact on reading attainment, but that are often sidelined in interventions focusing on the mechanics of reading. The ten strands were (p.14):

Strand 1: Phonological awareness and the alphabet

Strand 2: A strong and structured phonics emphasis

Strand 3: Extra classroom help in the early years

Strand 4: Fostering a ‘literacy environment’ in school and community

Strand 5: Raising teacher awareness through focused assessment

Strand 6: Increased time spent on key aspects of reading

Strand 7: Identification of and support for children who are failing

Strand 8: Lessons from research in interactive learning

Strand 9: Home support for encouraging literacy

Strand 10: Changing attitudes, values and expectations

Another unusual feature was that the researchers were looking not only for statistically significant improvements in reading, but also for improvements that were significant in a wider sense:

“statistical significance must be viewed in terms of wider questions that were primarily social, cultural and political rather than scientific – questions about whether lives were being changed as a result of the intervention; questions about whether children would leave school with the skills needed for a successful career in a knowledge society; questions about whether ‘significant’ results actually meant significant to the participants in the research or only to the researcher.” (p.16)

The researchers also recognized the importance of ownership of the project throughout the local community, everyone “from the leader of the Council to the parents and the children themselves identifying with it and owning it as their own project”. (p.7)

In addition they were aware that a project following students through their entire school career would need to survive inevitable organisational challenges. Despite the fact that West Dunbartonshire was the second poorest council in Scotland, the local authority committed to continue funding the project:

The intervention had to continue and to succeed through virtually every major change or turmoil taking place in its midst – including a total restructuring of the educational directorate, together with significant changes in the Council. (p.46)

Results

The results won’t surprise anyone familiar with the impact of synthetic phonics: there were significant improvements in reading ability in children in the experimental group. What was remarkable was the impact of the programme on children who didn’t participate. Raw scores for pre-school assessments improved noticeably between 1997 and 2006, and there were many reports from parents that the intervention had stimulated interest in reading in older siblings.

One of the most striking results was that at the end of the study, there were only three pupils in secondary schools in the local authority area with reading ages below the level of functional literacy (p.31). That’s impressive when compared to the 17% of school leavers in England considered functionally illiterate. So why hasn’t the West Dunbartonshire programme been rolled out nationwide? Three factors need to be considered in order to answer that question.

1. What is functional literacy?

The 17% figure for functional illiteracy amongst school leavers is often presented as ‘shocking’ or a ‘failure’ on the part of the education system. These claims are valid only if those making them have evidence that higher levels of school-leaver literacy are attainable. The evidence cited often includes literacy levels in other countries or studies showing very high percentages of children being able to decode after following a systematic synthetic phonics (SSP) programme. Such evidence is akin to comparing apples and oranges because:

– Many languages are orthographically more transparent than English (there’s a more direct correspondence between graphemes and phonemes). The functional illiteracy figure of 17% (or thereabouts) holds for the English-speaking world, not just England, and has done so since at least the end of WW2 – and probably earlier, given literacy levels in older adults. (See Rashid & Brooks (2010) and McGuinness (1998).)

– Both the Clackmannanshire and West Dunbartonshire studies resulted in high levels of decoding ability. Results were less stellar when it came to comprehension.

– It depends what you mean by functional literacy. This was a challenge faced by Rashid & Brooks in their review; measures of functional literacy have varied, making it difficult to identify trends across time.

In the West Dunbartonshire study, children identified as having significant reading difficulties followed an intensive 3-month individual support programme in early 2003. This involved 91 children in P7, 12 in P6 and 1 in P5. By 2007, 12 pupils at secondary level were identified as still not having reached functional literacy levels; their reading ages ranged between 6y 9m and 8y 10m (p.31). By June 2007, only three children had scores below the level of functional literacy. (Two others missed the final assessment.)

The level of functional literacy used in the West Dunbartonshire study was a reading age of at least 9y 6m on the Neale Analysis of Reading Ability (NARA-II). I couldn’t find an example online, but there’s a summary here. The tasks are rather different to the level 1 tasks in the National Adult Literacy Survey carried out in the USA in 1992 (NCES p.86).

A reading/comprehension age of 9y 6m is sufficient for getting by in adult life: reading a tabloid newspaper, say, or filling in simple forms. Whether it’s sufficient for doing well in GCSEs (reading age 15y 7m), getting a decent job in later life, or having a good understanding of how the world works is another matter.

2. What were the costs and benefits?

Overall, the study cost £13 per student per year, or 0.5% of the local authority’s education budget (p.46), which doesn’t sound very much. But for 60 000 students over a ten-year period it adds up to almost £8m, a significant sum. I couldn’t find details of the overall reading abilities of secondary school students when the study finished in 2007, and haven’t yet tracked down any follow-up studies showing the impact of the interventions on the local community.
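As a quick sanity check on those figures, here’s a back-of-the-envelope sketch; it assumes a flat £13 per student per year and a constant 60 000 students, which the report doesn’t spell out.

```python
# Rough check of the headline cost figures (Achieving the Vision, p.46).
cost_per_student_per_year = 13   # £13 per student per year
students = 60_000                # pupils across the local authority area
years = 10                       # duration of the initiative

total_cost = cost_per_student_per_year * students * years
print(f"Estimated total: £{total_cost:,}")  # Estimated total: £7,800,000
```

Hence ‘almost £8m’.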

Also, we don’t know what difference the study would have made to adult literacy levels in the area. Adult literacy levels are usually presented as averages, and in the case of the US National Adult Literacy Survey they included people with disabilities. Many children with disabilities in West Dunbartonshire would have been attending special schools, and the study appears to have involved only mainstream schools. Whether the impact of the study is sufficient to persuade cash-strapped local authorities to invest in it is unclear.

3. Could the interventions be implemented nationwide?

One of the strengths of Achieving the Vision is that it explores the limitations of the study in some detail (p.38ff). The researchers were well aware of the challenges that would have to be met in order for the intervention to achieve its aims. These included issues with funding: the local Council, although supportive, was working within a different funding framework to the Scottish Executive Education Department. The funding issues had a knock-on impact on staff seconded to the project, who had no guarantee of employment once the initial funding ran out. The study was further affected by industrial action and by local authority restructuring. How many projects would have access to the foresight, tenacity and collaborative abilities of those leading the West Dunbartonshire initiative?

Conclusion

The aim of the West Dunbartonshire initiative was to eradicate functional illiteracy in an entire local authority area. The study effectively succeeded in doing so – in mainstream schools, and if functional illiteracy is defined as a reading/comprehension age below 9y 6m. Synthetic phonics played a key role. Synthetic phonics is frequently advocated as a remedy for functional illiteracy in school leavers and in the adult population. The West Dunbartonshire study shows, pretty conclusively, that synthetic phonics plus individual support plus a comprehensive local authority-backed focus on reading can result in significant improvements in reading ability in secondary school students. Does it eradicate functional illiteracy in school leavers or in the adult population? We don’t know.

References

MacKay, T (2007).  Achieving the Vision: The Final Research Report of the West Dunbartonshire Literacy Initiative.

McGuinness, D (1998). Why Children Can’t Read and What We Can Do About It. Penguin.

NCES (1993). Adult Literacy in America. National Center for Education Statistics.

Rashid, S & Brooks, G (2010). The levels of attainment in literacy and numeracy of 13- to 19-year-olds in England, 1948–2009. National Research and Development Centre for adult literacy and numeracy.

Johnston, R & Watson, J (2005). The Effects of Synthetic Phonics teaching on reading and spelling attainment: A seven year longitudinal study, The Scottish Executive website. http://www.gov.scot/Resource/Doc/36496/0023582.pdf


Clackmannanshire revisited

The Clackmannanshire study is often cited as demonstrating the positive impact of synthetic phonics (SP) on children’s reading ability. The study tracked the reading, spelling and comprehension progress, over seven years, of three groups of children initially taught to read using one of three different methods:

  • analytic phonics programme
  • analytic phonics programme supplemented by a phonemic awareness programme
  • synthetic phonics programme.

The programmes were followed for 16 weeks in Primary 1 (P1, 5-6 yrs). Reading ability was assessed before and after the programme and for each year thereafter, spelling ability each year from P1, and comprehension each year from P2. After the first post-test, the two analytic phonics groups followed the SP programme, completing it by the end of P1.

I’ve blogged briefly about this study previously, based on a summary of the research. It’s quite clear that the children in the SP group made significantly more progress in reading and spelling than those in the other two groups. One of my concerns about the results is that in the summary they are presented at group level, ie as the mean scores of the children in each condition. There’s no indication of the range of scores within each group.

The range is important because we need to know whether the programme improved reading and spelling for all the children in the group, or for just some of them. Say, for example, that at the end of P1 the mean reading age of children in the SP group was 12 months ahead of that of the children in the other groups. We wouldn’t know, without more detail, whether all the children’s scores clustered around the 12-month mark, or whether the group mean had been raised by a few children with very high scores, or lowered by a few with very low scores.
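To make that concrete, here’s a minimal sketch with invented numbers (months of reading-age gain, purely for illustration): two groups with identical means but very different individual outcomes.

```python
# Two invented groups of reading-age gains (in months).
# Group A clusters tightly around the mean; Group B has the same mean,
# but some children have fallen far behind while others are far ahead.
group_a = [11, 12, 12, 12, 13]
group_b = [2, 4, 12, 20, 22]

for name, gains in (("A", group_a), ("B", group_b)):
    mean = sum(gains) / len(gains)
    print(f"Group {name}: mean = {mean:.1f}, range = {min(gains)}-{max(gains)}")

# Group A: mean = 12.0, range = 11-13
# Group B: mean = 12.0, range = 2-22
```

A group-level report of ‘12 months ahead’ describes both groups equally well, which is exactly the problem.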

At the end of the summary is a graph showing the progress made by ‘underachievers’, ie any children who were more than 2 years behind in their test scores. There were some children in that category at the end of P2; by the end of P7 the proportion had risen to 14%. So clearly there were children who were still struggling despite following an SP programme.

During a recent Twitter conversation, Kathy Rastle, Professor of Psychology at Royal Holloway, University of London (@Kathy_Rastle), sent me a link to a more detailed report by the Clackmannanshire researchers, Rhona Johnston and Joyce Watson.

more detail

I hoped that the more detailed report would provide more… well, detail. It did, but the ranges of scores within the groups were presented as standard deviations, so the impact of the programmes on individual children still wasn’t clear. That’s important. Obviously, if a reading programme enables a group of children to make significant gains in their reading ability, it’s worth implementing. But we also need to know the impact it has on individual children, because the point of teaching children to read is that each child learns to read.

The detail I was looking for is in Chapter 8 “Underachieving Children”, ie those with scores more than 2 years below the mean for their age. Obviously, in P1 no children could be allocated to that category because they hadn’t been at school long enough. But from P2 onwards, the authors tabulated the numbers of ‘underachievers’. (They note that some children were absent for some of the tests.) I’ve summarised the proportions (for boys and girls together) below:

more than 1 year behind (%)

                 P2     P3     P4     P5     P6     P7
reading         2.2    2.0    6.0    8.6   15.1   11.9
spelling        1.1    4.0    8.8   12.6   15.7   24.0
comprehension   5.0   18.0   15.5   19.2   29.4   27.6

more than 2 years behind (%)

                 P2     P3     P4     P5     P6     P7
reading           0    0.8      0    1.6    8.4    5.6
spelling        0.4    0.4    0.4    1.7    3.0   10.1
comprehension     0    1.2    1.6    5.0   16.2   14.0

The researchers point out that the proportion of children with serious problems with reading and spelling is quite low, but that it would be “necessary to collect control data to establish what would be typical levels of underachievement in a non-synthetic phonics programme.” Well, yes.

The SP programme clearly had a significantly positive impact on reading and spelling for most children. However, that wasn’t true for all of them. The authors provide a detailed case study of one child (AF), who had a hearing difficulty and poor receptive and expressive language. They compare his progress with that of the other 15 children in P4 who were a year or more behind their chronological age in reading.

Case study – AF

AF started school a year later than his peers, and his class was in the analytic phonics plus phonemic awareness group; they then followed the SP programme at the end of P1. Early in P2, AF started motor movement and language therapy programmes.

By the middle of P4, AF’s reading and spelling scores were almost at the average for the group whose reading was a year or more behind, but his knowledge of letter sounds, phoneme segmentation and nonword reading was better than theirs. A detailed analysis suggested his reading errors were the result of his lack of familiarity with some words, and that he was spelling words as they sounded to him. Like the other 15 children experiencing difficulties, he needed to revisit more complex phonics rules, so a supplementary phonics programme was provided in P5. When the group was tested afterwards, its mean spelling and reading scores were above chronological age, and AF’s reading and spelling had improved considerably.

During P6 and P7 a peripatetic Support for Learning (SfL) teacher worked with AF on phonics for three 45-minute sessions each week and taught him strategies to improve his comprehension. An occupational therapist and a physiotherapist worked with him on his handwriting, and he was taught to touch type. By the end of P7, AF’s reading age was 9 months above his chronological age and his spelling was more than 2 years ahead of the mean for the underachieving group.

conclusion

The ‘Clacks’ study is often cited as conclusive proof of the efficacy of SP programmes. It’s often implied that SP will make a significant difference for the troublesome 17% of school leavers who lack functional literacy. What intrigued me about the study was the proportion of children in P7 who still had difficulty with functional literacy despite having had SP training: 14%, suspiciously close to the proportion of ‘functionally illiterate’ school leavers.

Some teachers have argued that if all the children had had systematic synthetic phonics teaching from the outset, the ‘Clacks’ figures might be different, but AF’s experience suggests otherwise. He obviously had substantial initial difficulties with reading, yet by the end of primary school he had effectively caught up with his peers. His success, though, wasn’t due only to the initial SP programme, or even to the supplementary SP programme provided in P5. It was achieved only after intensive, tailored 1-1 interventions by a team of professionals from outside the school.

My children’s school, in England, was not offering these services to children with AF’s level of difficulty at the time when AF was in P7. Most of the children had followed an initial SP programme, but there was no supplementary SP course on offer. The equivalent of the SfL teacher carried out annual assessments and made recommendations. Speech and language therapists and occupational therapists didn’t routinely offer treatment to individual children except via schools, and weren’t invited into the one my children attended. And I’ve yet to hear of a physiotherapist working in a mainstream primary in our area.

As a rule of thumb, local authorities will not carry out a statutory assessment of a child until their school can demonstrate that they don’t have the resources to meet the child’s needs.  As a rule of thumb, schools are reluctant to spend money on specialist professionals if there’s a chance that the LA will bear the cost of that in a statutory assessment.  As a consequence, children are often several years ‘behind’ before they even get assessed, and the support they get is often in the form of a number of hours working with a teaching assistant who’s unlikely to be a qualified teacher, let alone a speech and language therapist, occupational therapist or physio.

If governments want to tackle the challenge of functional illiteracy, they need to invest in services that can address the root causes.

reference

Johnston, R & Watson, J (2005). The Effects of Synthetic Phonics teaching on reading and spelling attainment: A seven year longitudinal study. The Scottish Executive website http://www.gov.scot/Resource/Doc/36496/0023582.pdf

the view from the signpost: learning styles

Discovering that some popular teaching approaches (Learning Styles, Brain Gym, Thinking Hats) have less-than-robust support from research has prompted teachers to pay more attention to the evidence for their classroom practice. Teachers don’t have much time to plough through complex research findings. What they want are summaries, signposts to point them in the right direction. But research is a work in progress. Findings are often not clear-cut but contradictory, inconclusive or ambiguous. So it’s not surprising that some signposts – ‘do use synthetic phonics’, ‘don’t use Learning Styles’ – often spark heated discussion. The discussions often cover the same ground. In this post, I want to look at some recurring issues in debates about synthetic phonics (SP) and Learning Styles (LS).

Take-home messages

Synthetic phonics is an approach to teaching reading that begins by developing children’s awareness of the phonemes within words, links the phonemes with corresponding graphemes, and uses the grapheme-phoneme correspondence to decode the written word. Overall, the reading acquisition research suggests that SP is the most efficient method we’ve found to date of teaching reading. So the take-home message is ‘do use synthetic phonics’.
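For readers unfamiliar with the mechanics, here’s a toy sketch of the decoding step. The handful of grapheme-phoneme correspondences and the greedy matching strategy are my own simplifications for illustration, not any published SSP programme.

```python
# Toy model of synthetic phonics decoding: map each grapheme to a
# phoneme, then blend the sequence into a spoken word. Real programmes
# teach a far larger, carefully sequenced set of correspondences.
GPC = {"sh": "ʃ", "ee": "iː", "p": "p", "i": "ɪ", "n": "n"}

def decode(word: str) -> list[str]:
    """Match the longest known grapheme at each position, left to right."""
    phonemes, i = [], 0
    while i < len(word):
        for size in (2, 1):            # try digraphs before single letters
            grapheme = word[i:i + size]
            if grapheme in GPC:
                phonemes.append(GPC[grapheme])
                i += size
                break
        else:
            raise ValueError(f"no correspondence for {word[i:]!r}")
    return phonemes

print(decode("sheep"))  # ['ʃ', 'iː', 'p']
print(decode("pin"))    # ['p', 'ɪ', 'n']
```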

What most teachers mean by Learning Styles is a specific model developed by Fleming and Mills (1992) derived from the theory behind Neuro-Linguistic Programming. It proposes that students learn better in their preferred sensory modality – visual, aural, read/write or kinaesthetic (VARK). (The modalities are often reduced in practice to VAK – visual, auditory and kinaesthetic.) But ‘learning styles’ is also a generic term for a multitude of instructional models used in education and training. Coffield et al (2004) identified no fewer than 71 of them. Coffield et al’s evaluation didn’t include the VARK or VAK models, but a close relative – Dunn and Dunn’s Learning Styles Questionnaire – didn’t fare too well when tested against Coffield’s reliability and validity criteria (p.139). Other models did better, including Allinson and Hayes’ Cognitive Styles Index, which met all the criteria.

The take-home message for teachers from Coffield and other reviews is that, given the variation in validity and reliability between learning styles models, it isn’t worth teachers investing time and effort in any learning styles approach to teaching. So far so good. If the take-home messages are clear, why the heated debate?

Lumping and splitting

‘Lumping’ and ‘splitting’ refer to different ways in which people categorise specific examples; the terms are used mainly by taxonomists. ‘Lumpers’ tend to use broad categories and ‘splitters’ narrow ones. Synthetic phonics proponents rightly emphasise precision in the way systematic synthetic phonics (SSP) is used to teach children to read. SSP is a systematic, not a scattergun, approach: it involves building up words from phonemes rather than breaking words down into phonemes, and developing phonemic awareness rather than looking at pictures or word shapes. SSP advocates are ‘splitters’ extraordinaire – in respect of SSP practice at least. Learning styles critics, by contrast, tend to lump all learning styles together, often failing to make a distinction between LS models.

SP proponents also become ‘lumpers’ where other approaches to reading acquisition are concerned. Whether it’s whole language, whole words or mixed methods, it makes no difference… it’s not SSP. And both SSP proponents and LS critics are often ‘lumpers’ in respect of the research behind the particular take-home message they’ve embraced so enthusiastically. So what? Why does lumping or splitting matter?

Lumping all non-SSP reading methods together or all learning styles models together matters because the take-home messages from the research are merely signposts pointing busy practitioners in the right direction, not detailed maps of the territory. The signposts tell us very little about the research itself. Peering at the research through the spectacles of the take-home message is likely to produce a distorted view.

The distorted view from the signpost

The research process consists of several stages, including those illustrated in the diagram below.
[Diagram: theory to application]
Each stage might include several elements. Some of the elements might eventually emerge as robust (green); others might turn out to be flawed (red). The point of the research is to find out which is which. At any given time it will probably be unclear whether some components at each stage of the research process are flawed or not. Uncertainty is an integral part of scientific research. The history of science is littered with findings initially dismissed as rubbish that later ushered in a sea-change in thinking, and others greeted as the Next Big Thing that have since been consigned to the trash.

Some of the SP and LS research findings have been contradictory, inconclusive or ambiguous. That’s par for the course. Despite the contradictions, unclear results and ambiguities, there might be general agreement about which way the signposts for practitioners are pointing. That doesn’t mean it’s OK to work backwards from the signpost and make assumptions about the research. In the diagram, there’s enough uncertainty in the research findings to put a question mark over all potential applications. But the question mark itself tells us only that there’s uncertainty involved. A minor tweak to the theory could explain the contradictory, inconclusive or ambiguous results, and then it would be green lights all the way down.

But why does that matter to teachers? It’s the signposts that are important to them, not the finer points of research methodology or statistical analysis. It matters because some of the teachers who are the most committed supporters of SP or critics of LS are also the most vociferous advocates of evidence-based practice.

Evidence: contradictory, inconclusive or ambiguous?

Decades of research into reading acquisition broadly support the use of synthetic phonics for teaching reading, although not all of the findings are unambiguous. One example is the study carried out in Clackmannanshire by Rhona Johnston and Joyce Watson. The overall conclusion is that SP leads to big improvements in reading and spelling, but closer inspection shows the results are not entirely clear-cut, and the study’s methodology has been criticised. You’re unlikely to learn that, though, if you rely on SP advocates for an evaluation of the evidence. Personally, I can’t see a problem with saying ‘the research evidence broadly supports the use of synthetic phonics for teaching reading’ and leaving it at that.

The evidence relating to learning styles models is also not watertight, although in this case, it suggests they are mostly not effective. But again, you’re unlikely to find out about the ambiguities from learning styles critics. Tom Bennett, for example, doesn’t like learning styles – as he makes abundantly clear in a TES blog post entitled “Zombie bølløcks: World War VAK isn’t over yet.”

The post is about the VAK Learning Styles model. But in the ‘Voodoo teaching’ chapter of his book Teacher Proof, Bennett concludes about learning styles in general: “it is of course, complete rubbish as far as I can see” (p.147). Then he hedges his bets in a footnote: “IN MY OPINION”.

Tom’s an influential figure – government behaviour adviser, driving force behind the researchED conferences and a frequent commentator on educational issues in the press. He’s entitled to lump all learning styles models together if he wants to, and to write colourful opinion pieces about them if he gets the chance. But presenting the evidence in terms of his opinion, and leaving out evidence that doesn’t support that opinion, is misleading. It’s also at odds with an evidence-based approach to practice. Saying there’s mixed evidence for the effectiveness of learning styles models doesn’t take more words than implying there’s none.

So why don’t supporters in the case of SP, or critics in the case of LS, say what the evidence says, rather than what the signposts say? I’d hazard a guess it’s because they’re worried that teachers will see contradictory, inconclusive or ambiguous evidence as providing a loophole that gives them licence to carry on with their pet pedagogies regardless. But the risk of looking at the signpost rather than the evidence is that one set of dominant opinions will be replaced by another.

In the next few posts, I’ll be looking more closely at the learning styles evidence and what some prominent critics have to say about it.

Note:

David Didau responded to my thoughts about signposts and learning styles on his blog. Our discussion in the comments section revealed that he and I use the term ‘evidence’ to mean different things. Using words in different ways. Could explain everything.

References

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Fleming, N. & Mills, C. (1992). Not another inventory, rather a catalyst for reflection. To Improve the Academy. Professional and Organizational Development Network in Higher Education. Paper 246.