Clackmannanshire revisited

The Clackmannanshire study is often cited as demonstrating the positive impact of synthetic phonics (SP) on children’s reading ability. The study tracked the reading, spelling and comprehension progress, over seven years, of three groups of children initially taught to read using one of three different methods;

  • analytic phonics programme
  • analytic phonics programme supplemented by a phonemic awareness programme
  • synthetic phonics programme.

The programmes were followed for 16 weeks in Primary 1 (P1, 5-6 yrs). Reading ability was assessed before and after the programme and for each year thereafter, spelling ability each year from P1, and comprehension each year from P2. After the first post-test, the two analytic phonics groups followed the SP programme, completing it by the end of P1.

I’ve blogged briefly about this study previously, based on a summary of the research. It’s quite clear that the children in the SP group made significantly more progress in reading and spelling than those in the other two groups.  One of my concerns about the results is that in the summary they are presented at group level, ie as the mean scores of the children in each different condition. There’s no indication of the range of scores within each group.

The range is important because we need to know whether the programme improved reading and spelling for all the children in the group, or for just some of them. Say for example, that the mean reading age of children in the SP group was 12 months ahead of the children in the other groups at the end of P1. We wouldn’t know, without more detail, whether all the children’s scores clustered around the 12 month mark, or whether the group mean had been raised by a few children having very high scores, or had been lowered by a few having very low scores.
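To make that concrete, here’s a minimal sketch using invented numbers (not figures from the study): two hypothetical groups with identical mean gains but very different individual outcomes.

    from statistics import mean

    # Hypothetical end-of-P1 reading-age gains in months; invented for
    # illustration, NOT figures from the Clackmannanshire report.
    group_a = [12, 12, 11, 13, 12, 12, 12, 12, 12, 12]  # everyone clusters around 12 months
    group_b = [2, 3, 4, 5, 6, 7, 18, 21, 26, 28]        # a few high scorers pull the mean up

    for name, gains in [("A", group_a), ("B", group_b)]:
        print(f"group {name}: mean = {mean(gains):.1f} months, "
              f"range = {min(gains)}-{max(gains)} months, "
              f"children gaining under 6 months = {sum(g < 6 for g in gains)}")

Both groups report a mean gain of 12 months, but in the second group four children barely progressed at all; a group-level mean gives no hint of that.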

At the end of the summary is a graph showing the progress made by ‘underachievers’ ie any children who were more than 2 years behind in their test scores. There were some children in that category at the end of P2; by the end of P7 the proportion had risen to 14%. So clearly there were children who were still struggling despite following an SP programme.

During a recent Twitter conversation, Kathy Rastle, Professor of Psychology at Royal Holloway, University of London (@Kathy_Rastle), sent me a link to a more detailed report by the Clackmannanshire researchers, Rhona Johnston and Joyce Watson.

more detail

I hoped that the more detailed report would provide more… well, detail. It did, but the ranges of scores within the groups were presented as standard deviations, so the impact of the programmes on individual children still wasn’t clear. That’s important. Obviously, if a reading programme enables a group of children to make significant gains in their reading ability, it’s worth implementing. But we also need to know the impact it has on individual children, because the point of teaching children to read is that each child learns to read.
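The same limitation applies to standard deviations. A second sketch, again with invented numbers rather than figures from the report, shows two groups with identical means and standard deviations that nevertheless contain different numbers of children more than a year behind, because those two statistics say nothing about the shape of the distribution.

    from statistics import mean, pstdev

    # Hypothetical reading ages relative to chronological age, in months
    # (negative = behind); invented for illustration, NOT figures from the report.
    group_x = [-24, -12, 0, 0, 0, 0, 0, 0, 12, 24]
    group_y = [-18, -18, -18, 6, 6, 6, 6, 6, 12, 12]

    for name, scores in [("X", group_x), ("Y", group_y)]:
        behind = sum(s < -12 for s in scores)  # more than a year behind
        print(f"group {name}: mean = {mean(scores):.1f}, SD = {pstdev(scores):.1f}, "
              f"more than a year behind: {behind} of {len(scores)}")

Both groups have a mean of 0 and a standard deviation of 12 months, yet one contains a single child more than a year behind and the other contains three.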

The detail I was looking for is in Chapter 8 “Underachieving Children”, ie those with scores more than 2 years below the mean for their age. Obviously, in P1 no children could be allocated to that category because they hadn’t been at school long enough. But from P2 onwards, the authors tabulated the numbers of ‘underachievers’. (They note that some children were absent for some of the tests.) I’ve summarised the proportions (for boys and girls together) below:

more than 1 year behind (%)

                 P2     P3     P4     P5     P6     P7
reading         2.2    2.0    6.0    8.6   15.1   11.9
spelling        1.1    4.0    8.8   12.6   15.7   24.0
comprehension   5.0   18.0   15.5   19.2   29.4   27.6

more than 2 years behind (%)

                 P2     P3     P4     P5     P6     P7
reading         0      0.8    0      1.6    8.4    5.6
spelling        0.4    0.4    0.4    1.7    3.0   10.1
comprehension   0      1.2    1.6    5.0   16.2   14.0

The researchers point out that the proportion of children with serious problems with reading and spelling is quite low, but that it would be “necessary to collect control data to establish what would be typical levels of underachievement in a non-synthetic phonics programme.” Well, yes.

The SP programme clearly had a significantly positive impact on reading and spelling for most children. However, that clearly wasn’t true for all of them. The authors provide a detailed case study of one child (AF) who had a hearing difficulty and poor receptive and expressive language. They compare his progress with that of the other 15 children in P4 who were a year or more behind their chronological age in reading.

Case study – AF

AF started school a year later than his peers, and his class was in the analytic phonics and phonemic awareness group, which then followed the SP programme at the end of P1. Early in P2, AF started motor movement and language therapy programmes.

By the middle of P4, AF’s reading and spelling scores were almost the average for the group whose reading was a year or more behind, but his knowledge of letter sounds, phoneme segmentation and nonword reading was better than theirs. A detailed analysis suggested that his reading errors resulted from a lack of familiarity with some words, and that he was spelling words as they sounded to him. Like the other 15 children experiencing difficulties, he needed to revisit more complex phonics rules, so a supplementary phonics programme was provided in P5. When the children were tested afterwards, the group’s mean spelling and reading scores were above chronological age, and AF’s reading and spelling had improved considerably as a result.

During P6 and P7 a peripatetic Support for Learning (SfL) teacher worked with AF on phonics for three 45-minute sessions each week and taught him strategies to improve his comprehension. An occupational therapist and physiotherapist worked with him on his handwriting, and he was taught to touch type. By the end of P7, AF’s reading age was 9 months above his chronological age and his spelling was more than 2 years ahead of the mean for the underachieving group.

conclusion

The ‘Clacks’ study is often cited as conclusive proof of the efficacy of SP programmes. It’s often implied that SP will make a significant difference for the troublesome 17% of school leavers who lack functional literacy.   What intrigued me about the study was the proportion of children in P7 who still had difficulty with functional literacy despite having had SP training. It’s 14%, suspiciously close to the proportion of ‘functionally illiterate’ school leavers.

Some teachers have argued that if all the children had had systematic synthetic phonics teaching from the outset, the ‘Clacks’ figures might be different, but AF’s experience suggests otherwise.  He obviously had substantial initial difficulties with reading, but by the end of primary school had effectively caught up with his peers. But his success wasn’t due only to the initial SP programme. Or even to the supplementary SP programme provided in P5. It was achieved only after intensive, tailored 1-1 interventions on the part of a team of professionals from outside school.

At the time AF was in P7, my children’s school in England was not offering these services to children with AF’s level of difficulty. Most of the children had followed an initial SP programme, but there was no supplementary SP course on offer. The equivalent of the SfL teacher carried out annual assessments and made recommendations. Speech and language therapists and occupational therapists didn’t routinely offer treatment to individual children except via schools, and weren’t invited into the one my children attended. And I’ve yet to hear of a physiotherapist working in a mainstream primary in our area.

As a rule of thumb, local authorities will not carry out a statutory assessment of a child until the school can demonstrate that it doesn’t have the resources to meet the child’s needs. As a rule of thumb, schools are reluctant to spend money on specialist professionals if there’s a chance that the LA will bear the cost via a statutory assessment. As a consequence, children are often several years ‘behind’ before they even get assessed, and the support they get is often a number of hours working with a teaching assistant who is unlikely to be a qualified teacher, let alone a speech and language therapist, occupational therapist or physio.

If governments want to tackle the challenge of functional illiteracy, they need to invest in services that can address the root causes.

reference

Johnston, R. & Watson, J. (2005). The Effects of Synthetic Phonics Teaching on Reading and Spelling Attainment: A Seven Year Longitudinal Study. The Scottish Executive. http://www.gov.scot/Resource/Doc/36496/0023582.pdf

the history of reading methods revisited (5)

My response to some of Maggie’s most recent points:

Frank Smith

Maggie: Indeed, he [Smith] was echoing much earlier theorists, such as Huey, in this belief and, of course, by the time he was writing many readers may have been using such strategies because of being taught by Word methods (I’m sticking to my hypothesis!). I can’t find that he has any evidence for his assertion and, as I pointed out, Stanovich and West disproved his theory.

Me: The first five chapters of Snowling & Hulme’s book The Science of Reading are devoted to reviews of work on word recognition processes in reading. Most of the research looks at the ways in which adult, expert readers read. What emerges from these five chapters is that:

• expert readers do not use one single method for reading words; they tend to use rapid whole-word recognition for familiar words and slower, stepwise decoding for unfamiliar words;
• the speed with which they respond to target words increases in response to different types of priming;
• the jury is still out on how reading mechanisms actually work.

It was the fact that expert readers use two strategies that resulted in a plethora of ‘dual route’ models of reading; the first was proposed in the 1920s, though studies of brain-damaged patients had noted the distinction in the 19th century. This is exactly what West and Stanovich found. What they ‘disproved’ was the claim that children’s use of contextual information increases with age and reading ability.

There was a great deal of work on priming effects in reading during the 1970s, so although Smith might have been wrong, he wasn’t just ‘echoing earlier theorists’. He had a PhD in psycholinguistics/cognitive psychology from Harvard, so would have been very familiar with the direction of travel in contemporary reading research.

Your hypothesis, that expert readers were using mixed methods because that’s how they’d been taught to read, might be right. But a more likely explanation is that recognition of complex sensory stimuli (e.g. words) becomes automated and fast if they are encountered frequently, but requires step-by-step analysis if they’re not. That’s how human brains deal with complex sensory stimuli.

There is no question that expert readers use more than one strategy when reading. The question is whether explicitly learning those strategies is the best way for children to learn to read.

the rejection of the alphabetic principle


Me: Maggie says my statement that the alphabetic principle and analytic phonics had been abandoned because they hadn’t been effective for all children ‘makes no sense at all’. If I’m wrong, why were these methods abandoned?



Maggie: I still don’t think it makes any sense. For a start, you give no time scale. When did this abandonment take place? And you are conflating Alphabetic with Analytic which I don’t think is correct (see my earlier comment).

Me: They were abandoned gradually. My PGCE reading tutor, who trained in the 1930s, was keen on analytic phonics but not on ‘flashcards’. I remember spending hours preparing phonics reading activities. Several teachers of her generation that I’ve spoken to took a similar view. They didn’t advocate using analytic phonics ‘systematically, first and only’, but as a support strategy if children were struggling to decode a word. Clearly, the teachers I’ve encountered don’t form a representative sample, but some of them were using analytic phonics until they retired, and at least one teacher training college in the UK was teaching students to use it until at least the late 1970s. And this definitely wasn’t ‘alphabetic’; it was phonetic. According to my reading tutor, the alphabetic method was widely perceived as flawed by the 1930s. The consensus amongst these teachers was:

• children use a range of strategies when learning to read
• whatever method of teaching reading is used, some children will learn with little effort and others will struggle
• no one method of teaching reading will be effective for all children, but some methods are more effective than others (which is why they still used analytic phonics).

I’m not saying they are right, but that’s what they thought.

Maggie: Another point is that you are crediting educationists and teachers with a degree of rationality which I don’t think is justified. The widespread acceptance of the Word method, which had no evidence to back it but strong appeals to ‘emotion’ with the language of its denigration of Phonic methods, is a case in point. Boring, laborious, ‘drill & kill’, barren, mechanical, uncomprehending, the list is long (and very familiar). It is a technique promoted today as ‘framing’ (though I might acquit its original users of deliberate use of it). Very easy to be persuaded by the language without really considering the validity of the method it purports to describe.

Me: I think you are not crediting them with enough rationality. The ‘drill and kill’ they were referring to was an approach many teachers resorted to in the early days of state education. Those teachers were often untrained, had to teach large numbers of children of different ages, had few books, were on performance related pay, used corporal punishment and had been taught themselves through rote learning entire lessons. Complaints about children being able to recite but having no understanding were commonplace in those early days. What has happened over time is that denigrating rote learning everything (justified in my view) has morphed into denigrating rote learning anything (not justified).

Prior to the 1980s, teachers in the UK were left to their own devices about how they did things, and some at least, took a keen interest in developing their own methods; they didn’t all slavishly follow fashion by any means. I agree that the ‘Word’ method might have been framed emotively, but it’s not true to say there was no evidence to back it.

The evidence was in the form of adult reading strategies. If you’re a teacher who has seen ‘drill and kill’ fail for some children, then seen alphabetic and analytic phonics fail for some children, and someone comes along and tells you that scientific research has shown that adults use a range of strategies when reading (and you check out the research and find that indeed it has shown just that), and it therefore seems to make sense to teach children to use a range of strategies to learn to read, what would you, as a rational person, do?

I think you are seeing claims that adults use a range of reading strategies through the spectacles of the ‘teaching reading’ literature, not through the spectacles of the ‘reading mechanisms’ literature. The body of evidence that supports the idea that adults use a range of strategies in reading is vast. And every teacher will have witnessed children attacking words using a range of strategies. Putting the two ideas together is not unreasonable. It just happens to be wrong, but it wasn’t clear that it was wrong for a very long time.

Maggie: I would also suggest that the discourse of ‘science’, ‘research’, ‘progressive’ would be enough to convince many without them delving too deeply into the evidence. Brain Gym, anybody?

Me: You’re quite right. The point I’m making is that there was robust evidence to support the Word method. But it was robust in respect of people who had learned to read, not those who hadn’t. The way the brain functions after learning something (in adults) doesn’t reflect the way it learns it (in children). But that was by no means clear in the 1970s. There is still a dispute going on about this amongst cognitive scientists.

using a range of cues


Me: The cues I listed are those identified in skilled adult readers in studies carried out predominantly in the post-war period. Maggie’s hypothesis is that the range of cues is an outcome of the way the participants in experiments (often college students) had been taught to read. It’s an interesting hypothesis; it would be great to test it.

Maggie: I stand by it! I have worked with too many children who read exactly as taught by the Searchlights!
I thought I would revisit these ‘cues’ which are supposed to have offered sufficient exposure to auditory and visual patterns to develop automated, fast recognition. They are ‘recognising words by their shape, using key letters, grammar, context and pictures’.

recognising words by their shape, Confounded at once by the fact that many words have the same shape: sack, sick, sock, suck, lack, lick, luck, lock, pock, pick, puck, pack,

using key letters, Would those be the ones that differentiate each word in the above word list?

grammar, Well, I can see how you might ‘predict’ a particular grammatical word form, noun, verb, adjective etc. but the specific word? By what repeated pattern would you develop automatic recognition of it?

context I think the same might apply as for grammar. You need a mechanism for recognising the actual word.

pictures, Hm. Very useful for words like oxygen, air, the, gritty, bang, etc.

Me: Again, you are confusing the strategies adults use when reading with the most effective way of teaching children to read. They are two different things. Your examples illustrate very clearly why using multiple cues isn’t a good way of teaching reading. But those inconsistencies don’t stop adults using these cues in their reading. If you don’t have a copy of Snowling and Hulme’s book, get one and read it.

Maggie: In view of Stanovich & West’s findings I would be interested to see any studies which show that skilled adult readers did use the ‘cues’ you listed. (as above)

Me: There’s a vast literature on this, summarised very well in Snowling and Hulme, which is why I’ve recommended it. Incidentally, ‘cue’ isn’t a term invented by proponents of the Word method; it’s a perfectly respectable word denoting a signal detected in incoming information that can affect the processing of subsequent information.

Me: In chapter 2 of Stanovich’s book, West and Stanovich report fluent readers’ performance being facilitated by two automated processes; sentence context (essentially semantic priming) and word recognition.

Maggie: I appreciate that but this is described as a feature of fluent, skilled reading. To assume that beginning readers do this spontaneously might be to fall into the same trap as ‘assuming that children could learn by mimicking the behaviour of experts’

Me: In your original post, you said “Stanovich and West showed, in the 70s that these were strategies used by unskilled readers and that skilled readers used decoding strategies for word recognition (this is an extreme simplification of the research Stanovich outlines in ‘Progress in Understanding Reading’) and this has been the conclusion of cognitive scientists over the subsequent decades the validity of these strategies is seriously challenged.”

I think you’ve misunderstood what Stanovich and West (and other cognitive scientists) have shown. The literature shows, pretty conclusively, that fluent readers use word recognition first and decoding if word recognition fails. Sentence context isn’t used as a conscious strategy; it’s subconscious, because the content of the sentence increases access to words that are semantically related. It’s not safe to assume that because experts do something, novices learn by copying them. Nor is it safe to assume that experts use the same strategies they used when they were learning as novices.

Me: According to chapter 3, fluent readers use phonological recoding if automated word recognition fails.



Maggie: Isn’t that the whole point. Fluent readers didn’t use context, or other ‘cues’, to identify unfamiliar words, they used phonological recoding.

Me: No. The point is that they used it if automated word recognition failed.

Maggie: It is also moot that they use context to predict upcoming words (although I do understand about priming effects). There is also the possibility that rapid, automatic and unconscious decoding is the mechanism of automatic word recognition (Dehaene). Possibly with context confirming that the word is correct? A reading sequence of ‘predicting’, then, presumably, checking for correctness of form and meaning (how? by decoding and blending?) seems like a strange use of processing when decoding gets the form of the word correctly straight away and immediately activates meaning.

Me: It’s possible that rapid, automatic and unconscious decoding is the mechanism of automatic word recognition but work on masking and priming suggests that readers are picking up the visual features of letters and words as well as their auditory features and semantic features. In other words, there are things going on in addition to decoding.

Whether readers use context to predict upcoming words depends on what you mean by ‘predict’. Priming results in some words being more likely than others to occur in a sentence; this isn’t a conscious process of ‘prediction’ but it is a subconscious process of narrowing down the possibilities for what comes next. But in some sentences you could consciously predict what comes next with a high degree of accuracy.

the history of reading methods revisited (4)

And here’s Maggie’s response to my comments, which are in italics.

On reflection, I think I could have signposted the key points I wanted to make more clearly in my post. My reasoning went like this;
1. Until the post-war period reading methods in the UK were dominated by alphabetic/phonics approaches.
2. Despite this, a significant proportion of children didn’t learn to read properly.
3. Current concerns about literacy levels don’t have a clear benchmark – what literacy levels do we expect and why?
4. Although literacy levels have fallen in recent years, the contribution of ‘mixed methods’ to this fall is unclear; other factors are involved.
A few comments on Maggie’s post:
Huey and reading methods
My observation about the use of alphabetic and analytic phonics approaches in the early days of state education in England is based on a fair number of accounts I’ve either heard or read from people who were taught to read in the late 19th/early 20th century. Without exception, they have reported;
• learning the alphabet
• learning letter-sound correspondences
• sounding out unfamiliar words letter-sound by letter-sound

This accords with the account I proposed, that phonics methods persisted in the UK for the early decades of 20th C. I’d also note, as I have on the RRF board, that my account was something of a gallop through the topic. It was bound to be broad brushed rather than detailed. Of course a variety of practices will have obtained at any period (as they do now) but I was trying to indicate what appeared to be the ‘dominant’ practice at any one time.

I’m well aware that the first-hand accounts I’ve come across don’t form a representative sample, but from what Maggie has distilled from Huey, the accounts don’t appear to be far off the mark for what was happening generally. I concede that sounding out unfamiliar words doesn’t qualify as ‘analytic phonics’, but it’s analytic something – analytic letter-sound correspondence, perhaps?

Modern definitions of ‘analytic’ phonics make it clear that children are taught whole words initially and the words are then ‘analysed’ for their phonic structure. This may not necessarily be at the level of the phoneme; analytic phonics may also include analysis at the syllable level and at ‘onset/rime’ level (the familiar ‘word families’). This practice would seem to be more allied to the Word method (recall that Huey said that phonics could be taught once children had learned to read) than to the ‘Alphabetic’ method. Though, to be honest, it is very difficult to work out from contemporary primers and accounts of instructing/learning reading just how the Alphabetic method was taught. When accounts speak of ‘learning letters’, are letter names being taught or sound values? When they talk of ‘spelling’ words, are they referring to actually writing words or to saying letter names followed by the whole word (see ai tee . cat) or to orally sounding out and blending? Certainly reading primers such as ‘Reading Without Tears’ first published 183?* are arranged in much the same way as a modern ‘decodable’ book.

However, if the Phonic method which Huey describes is anything like the method Rebecca Pollard outlines (‘Manual of Synthetic Reading and Spelling’(1897)) it is closely akin to the supposedly ‘new’ SP method in that it taught letter/sound correspondences, decoding and blending, from simple to complex, as did the method outlined by Nellie Dale (‘On the Teaching of English Reading’. 1898).

Montessori
I cited Montessori as an example of the Europe-wide challenge posed by children who struggled at school; I wasn’t referring to her approach to teaching reading specifically. In her book she frequently mentions Itard and Séguin who worked with hearing-impaired children. She applies a number of their techniques, but doesn’t appear to agree with them about everything – she questions Séguin’s approach to writing, for example.

In which case I misunderstood your reason for citing her. I thought it was specifically in relation to teaching reading. Her sections on teaching reading and writing are very interesting. What is striking is that she believed in the ‘developmental’ model, agreeing with Huey’s contention that children should not be taught to read before they were at least 6. She describes how she tried very hard to resist younger children’s appeals to be taught to read and write but found that after motor skills training with letter shapes some of them were self teaching anyway and delighted with their achievements!

Frank Smith
I haven’t read Smith, but the fact that skilled readers use context and prediction to read the words on the page wasn’t his ‘proposal’. By the 1970s it was a well-documented feature of contextual priming in skilled readers, i.e. skilled adult readers with large spoken vocabularies. From what Maggie has said, the error Smith appears to have made is to assume that children could learn by mimicking the behaviour of experts – a mistake that litters the history of pedagogy.

Indeed, he was echoing much earlier theorists, such as Huey, in this belief and, of course, by the time he was writing many readers may have been using such strategies because of being taught by Word methods (I’m sticking to my hypothesis!). I can’t find that he has any evidence for his assertion and, as I pointed out, Stanovich and West disproved his theory.

Hinshelwood and Orton
Hinshelwood was a British ophthalmologist interested in reading difficulties caused by brain damage. Orton was American, but was a doctor also interested in brain damage and its effect on reading. I can’t see how the work of either of them would have been affected by the use of Whole Word reading methods in US schools, although their work has frequently been referred to as an explanation for reading difficulties.

Orton’s interest famously extended beyond brain-damaged subjects to the study of non-brain-damaged subjects with ‘dyslexia’. At the time he was working, Word methods were predominant in US schools, and he implicated these methods as contributing to his subjects’ problems. The Orton-Gillingham structured, systematic phonics programme was developed for helping these dyslexics. It appears to have been innovatory for its period and, believe it or not, from online contacts with US practitioners I understand that because it is SSP it is still fairly contentious in the US today! They express the same frustrations as do SP proponents. If only children were taught the OG way there wouldn’t be so much reading failure in the US!

I am not familiar with Hinshelwood but it’s clear that I shall have to look him up!

the rejection of the alphabetic principle
Maggie says my statement that the alphabetic principle and analytic phonics had been abandoned because they hadn’t been effective for all children ‘makes no sense at all’. If I’m wrong, why were these methods abandoned?

I still don’t think it makes any sense. For a start, you give no time scale. When did this abandonment take place? And you are conflating Alphabetic with Analytic which I don’t think is correct (see my earlier comment).

Another point is that you are crediting educationists and teachers with a degree of rationality which I don’t think is justified. The widespread acceptance of the Word method, which had no evidence to back it but strong appeals to ‘emotion’ with the language of its denigration of Phonic methods, is a case in point. Boring, laborious, ‘drill & kill’, barren, mechanical, uncomprehending, the list is long (and very familiar). It is a technique promoted today as ‘framing’ (though I might acquit its original users of deliberate use of it). Very easy to be persuaded by the language without really considering the validity of the method it purports to describe.

And, of course, there was the lure of modernity. Word methods were advocated by modern educationists as part of progressive educational methods (but let’s not get into an argument about ‘progressive’). I don’t know how much teachers believed that there was some sort of research base for progressive methods but as Huey sets some store by research (pages and pages on eye movements, for example) and does have an evidence base for some of what he says I would suggest that it would be taken on trust that it was all evidence based. I would also suggest that the discourse of ‘science’, ‘research’, ‘progressive’ would be enough to convince many without them delving too deeply into the evidence. Brain Gym, anybody?

In addition, though my suggestion that ‘official’ advice was followed has been questioned, it might be noted that in respect of the post WW2 UK both the government committee of 1947 and the Bullock Report (1975) firmly endorsed a mixed methods approach which started from Whole Word and taught phonics if necessary.

It is also interesting that Bullock notes that increasing numbers of children, particularly ‘working class’ children, were entering Junior school (Y2) unable to read. Might one ascribe this to developmentalist theory?

using a range of cues
The cues I listed are those identified in skilled adult readers in studies carried out predominantly in the post-war period. Maggie’s hypothesis is that the range of cues is an outcome of the way the participants in experiments (often college students) had been taught to read. It’s an interesting hypothesis; it would be great to test it.

I stand by it! I have worked with too many children who read exactly as taught by the Searchlights!

I thought I would revisit these ‘cues’ which are supposed to have offered sufficient exposure to auditory and visual patterns to develop automated, fast recognition. They are ‘recognising words by their shape, using key letters, grammar, context and pictures,’

recognising words by their shape, Confounded at once by the fact that many words have the same shape: sack, sick, sock, suck, lack, lick, luck, lock, pock, pick, puck, pack,

using key letters, Would those be the ones that differentiate each word in the above word list?

grammar, Well, I can see how you might ‘predict’ a particular grammatical word form, noun, verb, adjective etc. but the specific word? By what repeated pattern would you develop automatic recognition of it?

context I think the same might apply as for grammar. You need a mechanism for recognising the actual word.

pictures, Hm. Very useful for words like oxygen, air, the, gritty, bang, etc.

An alternative hypothesis is that the strategies used by skilled adult readers are an outcome of how brains work. Prior information primes neural networks and thus reduces response time, and frequent exposure to auditory and visual patterns such as spoken and written words results in automated, fast recognition.

In view of Stanovich & West’s findings I would be interested to see any studies which show that skilled adult readers did use the ‘cues’ you listed. (as above)

I know we have had discussions about the term ‘natural’ but ultimately reading is a taught skill. If readers use strategies which can be directly related to the strategies they were taught, I cannot see why they should be ascribed to untaught and unconscious exploitation of the brain’s capabilities. I could only accept this hypothesis in the case of self-taught readers. I would be surprised to find the generality of beginning readers developing such strategies spontaneously (i.e. undirected/untaught) when presented with text, though some outliers might. What would you do if presented with a page of unfamiliar script (Hebrew, Arabic, Thai, Chinese) and told to read it without any help whatsoever? And you are 5 years old.

For example, in chapter 2 of Stanovich’s book, West and Stanovich report fluent readers’ performance being facilitated by two automated processes; sentence context (essentially semantic priming) and word recognition.

I appreciate that but this is described as a feature of fluent, skilled reading. To assume that beginning readers do this spontaneously might be to fall into the same trap as ‘assuming that children could learn by mimicking the behaviour of experts’

According to chapter 3, fluent readers use phonological recoding if automated word recognition fails.

Isn’t that the whole point. Fluent readers didn’t use context, or other ‘cues’, to identify unfamiliar words, they used phonological recoding.

It is also moot that they use context to predict upcoming words (although I do understand about priming effects). There is also the possibility that rapid, automatic and unconscious decoding is the mechanism of automatic word recognition (Dehaene). Possibly with context confirming that the word is correct? A reading sequence of ‘predicting’, then, presumably, checking for correctness of form and meaning (how? by decoding and blending?) seems like a strange use of processing when decoding gets the form of the word correctly straight away and immediately activates meaning.

educators’ reasoning
I wasn’t saying that the educators’ assessment of alphabetic/phonics methods was right, just that it was what they claimed. Again, if they didn’t think that, why would alphabetic/phonics methods have been abandoned?

See above!

falling literacy standards
The data that I suggested weren’t available would enable us to make a valid comparison between the literacy levels of school-leavers (aged 13, say) at the beginning of the 20th century when alphabetic/phonics methods were widely used in the UK, and current levels for young people of the same age. The findings Maggie has cited are interesting, but don’t give us a benchmark for the literacy levels we should expect.

There is some post WW2 data in the Bullock report though it is held to be not totally reliable. However, it finds that ‘reading standards’ rose from 1948 to 1961 but then fell back slightly from 1961 to 1971. Make of that what you will!

national curriculum and standardised testing
The point I was trying to make was not about the impact of the NC and SATs on reading, but that the NC and SATs made poor readers more obvious. In the reading-ready era, some children not reading at 7 would have learned to read by the time they were 11, but that delay wouldn’t have appeared in national statistics.

As, indeed, it appeared to be doing in Bullock (see above).

reading for enjoyment
Children leaving school without functional literacy is certainly a cause for concern, and I agree that methods of teaching reading must be implicated. But technological changes since 1990 haven’t helped. The world of young people is not as text-based as it used to be, and not as text-based as the adult world. That issue needs to be addressed.

Which, as you might guess, I would partially ascribe to adoption of Whole Word, Whole Language & Mixed Methods. I have watched the ‘simplification’ of text over my lifetime in the cause of ‘including’ the semi-literate.

I think there’s a political element too, in the rejection of ‘elite’ language (aka ‘big words’). I shall have to dig out my copy of ‘The Uses of Literacy’ I think, to see what literacy expectations there were of the 50’s generation. Could be instructive.

What I do find interesting, and perhaps pertinent to the question of ‘dumbing down’ being discussed in other twitter conversations, is that, although we don’t really know what percentage of the population were literate in the latter half of the 19th C and the early 20th C, popular texts and the media appear to have expected a far more complex vocabulary knowledge, and an ability to comprehend far more complex syntax, of those who could read, even of children. Compare, for example, Beatrix Potter with ORT.

Note:
Huey, Dewey & Louie are the names of Donald Duck’s three nephews.
There’s no Louie in this story yet.

Perhaps Walt was taught the rhetorical ‘rule of three’!

It’s sad that we don’t have a Louie (or a Lewie) to complete the triumvirate. They would trip so nicely off the tongue.

Huey, Dewey and…? a response to the history of reading methods (3)

Before responding to Maggie’s post, I want first to thank her and other members of the Reading Reform Forum for the vast amount of information about reading that they have put into the public domain. The site is a great resource for anyone interested in teaching reading.

I also feel I should point out that my previous post on ‘mixed methods’ was intended to be a prompt response to a question asked on Twitter, not a fully-referenced essay on the history of methods for teaching reading. It accurately accounts for why I think what I think, but I’m grateful to Maggie for explaining where my understanding of the history of reading methods might be wrong.

On reflection, I think I could have signposted the key points I wanted to make more clearly in my post. My reasoning went like this;

1. Until the post-war period reading methods in the UK were dominated by alphabetic/phonics approaches.
2. Despite this, a significant proportion of children didn’t learn to read properly.
3. Current concerns about literacy levels don’t have a clear benchmark – what literacy levels do we expect and why?
4. Although literacy levels have fallen in recent years, the contribution of ‘mixed methods’ to this fall is unclear; other factors are involved.

A few comments on Maggie’s post:

Huey and reading methods
My observation about the use of alphabetic and analytic phonics approaches in the early days of state education in England is based on a fair number of accounts I’ve either heard or read from people who were taught to read in the late 19th/early 20th century. Without exception, they have reported;

• learning the alphabet
• learning letter-sound correspondences
• sounding out unfamiliar words letter-sound by letter-sound

I’m well aware that the first-hand accounts I’ve come across don’t form a representative sample, but from what Maggie has distilled from Huey, the accounts don’t appear to be far off the mark for what was happening generally. I concede that sounding out unfamiliar words doesn’t qualify as ‘analytic phonics’, but it’s analytic something – analytic letter-sound correspondence, perhaps?

Montessori
I cited Montessori as an example of the Europe-wide challenge posed by children who struggled at school; I wasn’t referring to her approach to teaching reading specifically. In her book she frequently mentions Itard and Séguin who worked with hearing-impaired children. She applies a number of their techniques, but doesn’t appear to agree with them about everything – she questions Séguin’s approach to writing, for example.

Frank Smith
I haven’t read Smith, but the fact that skilled readers use context and prediction to read the words on the page wasn’t his ‘proposal’. By the 1970s it was a well-documented feature of contextual priming in skilled readers, i.e. skilled adult readers with large spoken vocabularies. From what Maggie has said, the error Smith appears to have made is to assume that children could learn by mimicking the behaviour of experts – a mistake that litters the history of pedagogy.

Hinshelwood and Orton
Hinshelwood was a British ophthalmologist interested in reading difficulties caused by brain damage. Orton was American, but was a doctor also interested in brain damage and its effect on reading. I can’t see how the work of either of them would have been affected by the use of Whole Word reading methods in US schools, although their work has frequently been referred to as an explanation for reading difficulties.

the rejection of the alphabetic principle
Maggie says my statement that the alphabetic principle and analytic phonics had been abandoned because they hadn’t been effective for all children ‘makes no sense at all’. If I’m wrong, why were these methods abandoned?

using a range of cues
The cues I listed are those identified in skilled adult readers in studies carried out predominantly in the post-war period. Maggie’s hypothesis is that the range of cues is an outcome of the way the participants in experiments (often college students) had been taught to read. It’s an interesting hypothesis; it would be great to test it. An alternative hypothesis is that the strategies used by skilled adult readers are an outcome of how brains work. Prior information primes neural networks and thus reduces response time, and frequent exposure to auditory and visual patterns such as spoken and written words results in automated, fast recognition. For example, in chapter 2 of Stanovich’s book, West and Stanovich report fluent readers’ performance being facilitated by two automated processes; sentence context (essentially semantic priming) and word recognition. According to chapter 3, fluent readers use phonological recoding if automated word recognition fails.

educators’ reasoning
I wasn’t saying that the educators’ assessment of alphabetic/phonics methods was right, just that it was what they claimed. Again, if they didn’t think that, why would alphabetic/phonics methods have been abandoned?

falling literacy standards
The data that I suggested weren’t available would enable us to make a valid comparison between the literacy levels of school-leavers (aged 13, say) at the beginning of the 20th century when alphabetic/phonics methods were widely used in the UK, and current levels for young people of the same age. The findings Maggie has cited are interesting, but don’t give us a benchmark for the literacy levels we should expect.

national curriculum and standardised testing
The point I was trying to make was not about the impact of the NC and SATs on reading, but that the NC and SATs made poor readers more obvious. In the reading-ready era, some children not reading at 7 would have learned to read by the time they were 11, but that delay wouldn’t have appeared in national statistics.

reading for enjoyment
Children leaving school without functional literacy is certainly a cause for concern, and I agree that methods of teaching reading must be implicated. But technological changes since 1990 haven’t helped. The world of young people is not as text-based as it used to be, and not as text-based as the adult world. That issue needs to be addressed.

Note:
Huey, Dewey & Louie are the names of Donald Duck’s three nephews.
There’s no Louie in this story yet.

Maggie Downie on the history of methods of teaching reading: guest post (2)

My previous post was a reply to a question posed in a Twitter discussion about a blogpost by @HeatherBellaF on the evidence for synthetic phonics. I’m grateful to @MaggieDownie for summarising the history of reading methods used in the English speaking world on the Reading Reform Forum site here. Maggie wasn’t able to post this as a comment on my blog, so I’ve reproduced it in full below. I’ll respond later. I’ve edited Maggie’s post only to reduce spacing and restore italics. Here’s what she says:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Heather recently posted a blog about SSP which provoked a bit of a twitter storm and a series of exchanges over the quality of the evidence with another RRF message board contributor who posted her own blog in response.

Heather’s blog: http://heatherfblog.wordpress.com/2014/ … -evidence/
Response: http://logicalincrementalism.wordpress. … g-reading/

I felt that the ‘history of reading instruction’ run through in the second blog was, to say the least, vague and inaccurate, and tried to write a response. This has turned out to be extremely long so I am posting it here instead. It is not a polished piece of work, nor does it address everything, but I have tried to show how instructional methods have taken hold over the past 100 years or so. I realise that I could have gone further, looking at reports such as Bullock and Warnock, but this isn’t an undergraduate essay.

I might also say that reading Huey is a real eyeopener. Diack comments that much of what had been written about reading prior to his own book (1965) could be found in Huey, though as the 20th C progressed it was increasingly unattributed. The same could be said now in that much of what Huey said is still being said today. The power of Ruling Theory at work!

Sections in italics are from the blog post

Here goes!

As far as I’m aware, when education became compulsory in England in the late 19th century, reading was taught predominantly via letter-sound correspondence and analytic phonics – ‘the cat sat on the mat’ etc. A common assumption was that if people couldn’t read it was usually because they’d never been taught. What was found was that a proportion of children didn’t learn to read despite being taught in the same way as others in the class. The Warnock committee reported that teachers in England at the time were surprised by the numbers of children turning up for school with disabilities or learning difficulties. That resulted in special schools being set up for those with the most significant difficulties with learning. In France Alfred Binet was commissioned to devise a screening test to identify learning difficulties that evolved into the ‘intelligence test’. In Italy, Maria Montessori adapted methods to mainstream education that had been used to teach hearing-impaired children.

The history of teaching reading is far more complex than your overview suggests. It is not a straight run of ‘letter/sound correspondence and analytic phonics’ teaching from the inception of universal schooling in the 1880s through to Ken Goodman’s ‘Whole Language’ of the 1960s. It is a period of differing theories and methodologies; of the beginning of the scientific study of the reading process (mainly of eye-movements) and of gathering momentum in the disagreements about the theory of reading instruction which has led to the ‘Reading Wars.’

It might be noted that this is a peculiarly Anglo-Centric history; countries which have more transparent orthographies (i.e mainly, or completely having only one way to represent each of the phonemes of the language) have, for the most part, carried serenely on as they have done for years, teaching letter/sound correspondences, decoding and blending for reading and segmenting for spelling, with no apparent detriment to the children so taught and with far higher levels of literacy than many English Speaking countries. And with no thought of changing their effective teaching methods.

A great deal of information on the history of reading instruction comes from the highly influential work of Edmund Burke Huey, an educational psychologist, ‘The Psychology and Pedagogy of Reading’ (1908). Also from Hunter Diack’s ‘In Spite of the Alphabet’ (1965) and Jeanne Chall’s ‘Learning to Read: the great debate’ (1967). A paper by Dr Joyce Morris, ‘Phonicsphobia’, gives insight into English practice post WW2. The period from the 80s on may be fairly common knowledge to older readers.

From my reading of Huey it seems that by the 19th century there were four main methods of teaching reading (with variations within each category). The method which seems to have obtained until at least the mid-19th C was the Alphabetic, by which is meant the ‘traditional’ centuries-old method of learning the alphabet letters and how to spell words out. It is not altogether clear whether children were taught letter-sound correspondences or letter names (or both) by this method though Diack suggests that as the method involved learning consonant-vowel combinations (ba, be, bo, bu etc.) it must have involved ‘sounds’ at some stage. Whole Word (Look & Say) had been proposed from time to time during the 18th C but may have derived some impetus from Thomas Gallaudet, an early 19th C educator of deaf children who used Whole Word to teach his pupils to read. By the time Huey was writing it was being seriously proposed as an effective method. Huey also identified ‘Phonetic’ methods; not ‘phonics’ as we know it but methods using simplified alphabets or diacritical marks to simplify early reading instruction. The fourth category was Phonics, phonics of a kind quite familiar to SP proponents and even called ‘Synthetic’ by some late 19th C practitioners. (Analytic Phonics does not seem to have featured.)

Huey himself favoured a version of Whole Word known as the Sentence Method, based on the theory that children would learn best something that was meaningful and interesting to them. Children were taught to recognise and ‘read’ a whole sentence (with no regard to the individual words which comprised it or the letters the words contained). Diack suggests that this method was validated by Gestalt theories (that the ‘whole’ is the unit of immediate perception) in the 1920s and I think it perhaps influenced the bizarre statement of Whole Language guru, Ken Goodman, to the effect that a paragraph is easier to read than a sentence and a sentence is easier to read than a word.

Huey did believe that phonics should be taught but after children had learned to read and not connected with the reading process, presumably the phonics was for spelling. He did acknowledge that Rebecca Pollard’s ‘Synthetic’ (phonics) Method was successful but dismissed it as old fashioned and tedious.

It is important to note that a key element of Whole Word instruction is the focus on reading for meaning alone. There is no attempt to teach any word recognition strategies beyond, perhaps, linking the word to a picture. The success of the method relied on children’s own ability to memorise the appearance of the word (and to be able to recognise it in different forms e.g differing fonts, cases or handwriting). The educationists who promoted the method did so because of their perception that children who did not read with ‘expression’ were not understanding what they were reading and that phonics instruction led to expressionless mechanical reading with no understanding. There seems to have been no attempt to verify this belief. Horace Mann gave expression to it when describing reading he heard in schools in the 1830s as being ‘too often a barren action of the organs of speech upon the atmosphere’ and it can be seen today, over 150 years later, expressed, in less picturesque terms, by denigrators of SSP methods of teaching reading.

Diack says that in reality phonic methods predominated in the UK & the US for at least the latter half of the 19th Century. Under the influence of figures such as Huey & Dewey Whole Word methods became widely accepted in the US from the early 20th Century whereas Phonics lingered on in the UK for far longer.

At this point it might be appropriate to mention Montessori. I am not sure why her method of teaching reading is thought to have been developed from her work with deaf children. As far as I can make out from her own book (The Montessori Method, 1912), her method for developing the motor skills needed for writing and her use of letter shapes for learning the forms of letters were developed when she worked with what we would now call children with learning difficulties, but her method of teaching reading owes nothing whatsoever to work with hearing-impaired children. She taught letter/sound correspondence right from the start and her account of how her children learned to read and write would have any SP proponent nodding in approval. It is very beautiful and well worth reading.
(http://digital.library.upenn.edu/women/ … ethod.html) p246

It seems that Whole Word methods began to really take hold in the UK during the 1930s and proliferated post WW2 as part of the postwar desire for ‘modernisation’. It was then that Joyce Morris encountered the resistance to old-fashioned ‘Phonics’ detailed in her article ‘Phonicsphobia’ (1994), as did Hunter Diack when he published papers in the 1950s in favour of phonics instruction. His approach to phonics was to teach letter/sound correspondences but in the context of whole words. I don’t know enough about his method to tell if it tends to Analytic or Synthetic but the reading tests he produced with J.C. Daniels do not look to be ‘word family’ based.

It is possible that Whole Word may have slipped quietly away at some time had it not been for the rise to prominence of the highly charismatic and persuasive Frank Smith in the early 1970s. Having never taught a child to read he wrote a book called ‘Understanding Reading’. (1971) which seems never to have been out of print since. A great deal of it is regurgitation of Huey and some of it is stunningly inaccurate assertions of what happens in the reading process. The final chapter where he proposes that a really skilled reader can read a page of text and get the meaning of it without being aware of the words on the page is awe-inspiringly loopy. Yet he has a huge following and is revered. It was Frank Smith’s excitingly ‘modern’ take on reading that inspired two young cognitive psychologists, in the 1970s to base a study on Smith’s proposal that skilled readers use context and prediction to ‘read’ the words on the page and that poor readers laboured away with phonics. Stanovich and West were amazed to find that precisely the opposite was true.

Research into acquired reading difficulties in adults generated an interest in developmental problems with learning to read, pioneered by James Hinshelwood and Samuel Orton in the early 20th century.

From my foregoing account you should be aware that Orton and Hinshelwood were investigating reading disorders in the USA at a time when Whole Word had become the predominant method of teaching reading; any phonics instruction was incidental. ‘Alphabetic principle and analytic phonics’ really cannot be implicated here.

The term developmental dyslexia began as a descriptive label for a range of problems with reading and gradually became reified into a ‘disorder’. Because using the alphabetic principle and analytic phonics clearly wasn’t an effective approach for teaching all children to read, and because of an increased interest in child development, researchers began to look at what adults and children actually did when reading and learning to read, rather than what it had been thought they should do.

This is just extraordinary. Bearing in mind that no date is given for this rejection of the alphabetic principle and analytic phonics and that Dr Orton famously pioneered structured, systematic phonics instruction for remediation of dyslexics in the 1920s/30s (the implication being that this was not the instruction they received in schools) this statement makes no sense at all.

What they found was that people use a range of cues (‘mixed methods’) to decode unfamiliar words; letter-sound correspondence, analytic phonics, recognising words by their shape, using key letters, grammar, context and pictures, for example.

This is an odd one to unpick. It is probable that researchers did find that people used these strategies but they were used in the context of a belief that children could learn to read whole words, whole sentences etc. with no instruction in phonics until they *could* read. In the absence of initial phonics instruction, and, presumably because children struggled to learn to read when the Word method assumed that they would learn unaided, these ‘strategies’ were developed and taught in an attempt to help children learn more easily. Naturally these strategies would be observed in people taught to use them or people who had developed them by themselves in the absence of any other guidance. Chall shows clearly how basal readers developed the use of pictures and predictable text to facilitate the teaching of these strategies. But since Stanovich and West showed, in the 70s that these were strategies used by unskilled readers and that skilled readers used decoding strategies for word recognition (this is an extreme simplification of the research Stanovich outlines in ‘Progress in Understanding Reading’) and this has been the conclusion of cognitive scientists over the subsequent decades the validity of these strategies is seriously challenged.

Educators reasoned that if some children hadn’t learned to read using alphabetic principles and/or analytic phonics, applying the strategies that people actually used when reading new words might be a more effective approach.

As alphabetic principles weren’t being used to any great extent, this statement is invalid. The tossing in of ‘analytic phonics’ seems more of a sop to phonics detractors than an indictment of ‘phonics’. McGuinness’s (1998) examination of US ‘analytic’ phonics instruction shows it to have been chaotic, illogical, unstructured and only marginally effective. There is no reason to believe that the situation was any different in the UK. Indeed, examination of pre-SP phonics programmes (of which I have several) tends to confirm her conclusions.

This idea, coinciding with an increased interest in child-led pedagogy and a belief that a species-specific genetic blueprint meant that children would follow the same developmental trajectory but at different rates, resulted in the concept of ‘reading-readiness’. The upshot was that no one panicked if children couldn’t read by 7, 9 or 11; they often did learn to read when they were ‘ready’. It’s impossible to compare the long-term outcomes of analytic phonics and mixed methods because the relevant data aren’t available. We don’t know for instance, whether children’s educational attainment suffered more if they got left behind by whole-class analytic phonics, or if they got left alone in schools that waited for them to become ‘reading-ready’.

Some comparisons do exist. Diack notes that the committee set up by the UK government in 1947 ‘to consider the nature and extent of the illiteracy alleged to exist among school leavers and young people’ found that 11-year-olds in 1948 were a year behind those of 1938, and 15-year-olds in 1948 were two years behind those of 1938. Martin Turner, in his pamphlet ‘Sponsored Reading Failure’ (1990), found that standards in reading were falling (that was in the days when reading was monitored by Local Authorities) and suggested that this was caused by the prevalence of Whole Word and Real Books methodology.

Eventually, as is often the case, the descriptive observations about how people tackle unfamiliar words became prescriptive. Whole word recognition began to supersede analytic phonics after WW2, and in the 1960s Ken Goodman formalised mixed methods in a ‘whole language’ approach. Goodman was strongly influenced by Noam Chomsky, who believes that the structure underpinning language is essentially ‘hard-wired’ in humans. Goodman’s ideas chimed with the growing social constructivist approach to education that emphasises the importance of meaning mediated by language.
At the same time as whole language approaches were gaining ground, in England the national curriculum and standardised testing were introduced, which meant that children whose reading didn’t keep up with their peers were far more visible than they had been previously, and the complaints that had followed the introduction of whole language in the USA began to be heard here.

It seems that Whole Word/Whole Language approaches had been prevalent long before the introduction of the national curriculum, and it is debatable whether the National Curriculum Tests were truly standardised. An account of government attempts to reintroduce more phonics into the teaching of reading since 1988 can be found here: http://www.rrf.org.uk/

In addition, the national curriculum appears to have focussed on the mechanics of understanding ‘texts’ rather than on reading books for enjoyment.

I would agree with that, but would also note that the initial teaching of reading was such that, even with increased official emphasis on the teaching of phonics, a consistent ‘tail’ of some 20% of children has left primary school with barely functional literacy (Level 3 or below; some 120,000 children annually), and that inability to read with ease militates strongly against getting any enjoyment from reading, or choosing to read as a leisure activity.

What has also happened is that with the advent of multi-channel TV and electronic gadgets, reading has nowhere near the popularity it once had as a leisure activity amongst children, so children tend to get a lot less reading practice than they did in the past. These developments suggest that any decline in reading standards might have multiple causes, rather than ‘mixed methods’ being the only culprit.

But concern must not be focussed only on failure to read for enjoyment. Very significant numbers of children and young people are unable to read to a level that enables them to access the functional reading needed to participate in a highly text-based society.

Sources:
Huey, E.B. (1908) The Psychology and Pedagogy of Reading
https://archive.org/stream/psychologyan … 1/mode/1up
Montessori, M. (1912) The Montessori Method
http://digital.library.upenn.edu/women/ … ethod.html
Diack, H. (1965) In Spite of the Alphabet
Chall, J. (1967) Learning to Read: The Great Debate
Smith, F. (1971) Understanding Reading
Morris, J. (1994) Phonicsphobia
http://www.spellingsociety.org/journals … sfobia.php
Turner, M. (1990) Sponsored Reading Failure
McGuinness, D. (1998) Why Children Can’t Read
Stanovich, K. (2000) Progress in Understanding Reading

mixed methods for teaching reading (1)

Many issues in education are treated as either/or options, and the Reading Wars have polarised opinion into synthetic phonics proponents on the one hand and those supporting the use of whole language (or ‘mixed methods’) on the other. I’ve been asked on Twitter what I think of ‘mixed methods’ for teaching reading. Apologies for the length of this reply, but I wanted to explain why I wouldn’t dismiss mixed methods outright. I wholeheartedly support the use of synthetic phonics (SP) to teach children to read; my reservations are about some of the assumptions SP proponents make about its effectiveness, and about the quality of the evidence used to justify its use.

the history of mixed methods

As far as I’m aware, when education became compulsory in England in the late 19th century, reading was taught predominantly via letter-sound correspondence and analytic phonics – ‘the cat sat on the mat’ etc. A common assumption was that if people couldn’t read, it was usually because they’d never been taught. What was found was that a proportion of children didn’t learn to read despite being taught in the same way as others in the class. The Warnock committee reported that teachers in England at the time were surprised by the numbers of children turning up for school with disabilities or learning difficulties. That resulted in special schools being set up for those with the most significant difficulties with learning. In France, Alfred Binet was commissioned to devise a screening test to identify learning difficulties; it later evolved into the ‘intelligence test’. In Italy, Maria Montessori adapted to mainstream education methods that had been used to teach hearing-impaired children.

Research into acquired reading difficulties in adults generated an interest in developmental problems with learning to read, pioneered by James Hinshelwood and Samuel Orton in the early 20th century. The term developmental dyslexia began as a descriptive label for a range of problems with reading and gradually became reified into a ‘disorder’. Because using the alphabetic principle and analytic phonics clearly wasn’t an effective approach for teaching all children to read, and because of an increased interest in child development, researchers began to look at what adults and children actually did when reading and learning to read, rather than what it had been thought they should do.

What they found was that people use a range of cues (‘mixed methods’) to decode unfamiliar words; letter-sound correspondence, analytic phonics, recognising words by their shape, using key letters, grammar, context and pictures, for example. Educators reasoned that if some children hadn’t learned to read using alphabetic principles and/or analytic phonics, applying the strategies that people actually used when reading new words might be a more effective approach.

This idea, coinciding with an increased interest in child-led pedagogy and a belief that a species-specific genetic blueprint meant that children would follow the same developmental trajectory but at different rates, resulted in the concept of ‘reading-readiness’. The upshot was that no one panicked if children couldn’t read by 7, 9 or 11; they often did learn to read when they were ‘ready’. It’s impossible to compare the long-term outcomes of analytic phonics and mixed methods because the relevant data aren’t available. We don’t know for instance, whether children’s educational attainment suffered more if they got left behind by whole-class analytic phonics, or if they got left alone in schools that waited for them to become ‘reading-ready’.

Eventually, as is often the case, the descriptive observations about how people tackle unfamiliar words became prescriptive. Whole word recognition began to supersede analytic phonics after WW2, and in the 1960s Ken Goodman formalised mixed methods in a ‘whole language’ approach. Goodman was strongly influenced by Noam Chomsky, who believes that the structure underpinning language is essentially ‘hard-wired’ in humans. Goodman’s ideas chimed with the growing social constructivist approach to education that emphasises the importance of meaning mediated by language.

At the same time as whole language approaches were gaining ground, in England the national curriculum and standardised testing were introduced, which meant that children whose reading didn’t keep up with their peers were far more visible than they had been previously, and the complaints that had followed the introduction of whole language in the USA began to be heard here. In addition, the national curriculum appears to have focussed on the mechanics of understanding ‘texts’ rather than on reading books for enjoyment. What has also happened is that with the advent of multi-channel TV and electronic gadgets, reading has nowhere near the popularity it once had as a leisure activity amongst children, so children tend to get a lot less reading practice than they did in the past. These developments suggest that any decline in reading standards might have multiple causes, rather than ‘mixed methods’ being the only culprit.

what do I think about mixed methods?

I think Chomsky has drawn the wrong conclusions about his linguistic theory, so I don’t subscribe to Goodman’s reading theory either. Although meaning is undoubtedly a social construction, it’s more than that. Social constructivists tend to emphasise the mind at the expense of the brain. The mind is such a vague concept that you can say more or less what you like about it, but we’re very constrained by how our brains function. I think marginalising the brain is an oversight on the part of social constructivists, and I can’t see how a child can extract meaning from a text if they can’t read the words.

Patricia Kuhl’s work suggests that babies acquire language computationally, from the frequency of sound patterns within speech. This is an implicit process; the baby’s brain detects the sounds and learns the patterns, but the baby isn’t aware of the learning process, nor of phonemes. What synthetic phonics does is to make the speech sounds explicit, develop phonemic awareness and allow children to learn phoneme-grapheme correspondence and how words are constructed.
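Kuhl’s point about computational learning is easier to picture with a toy example. The sketch below is my own illustration, not anything from her work: it treats an invented stream of syllables as data and estimates how predictably one syllable follows another. Syllable pairs inside a ‘word’ hang together far more reliably than pairs spanning a word boundary, and that difference in transitional probability is the kind of statistical regularity an infant’s brain is thought to pick up implicitly.

```python
from collections import Counter

# Toy illustration of distributional learning over a syllable stream.
# The syllables and 'words' (ba-bi-du, go-la-tu, pi-ke-mo) are invented.
stream = "ba bi du go la tu pi ke mo ba bi du pi ke mo go la tu ba bi du".split()

pair_counts = Counter(zip(stream, stream[1:]))   # counts of adjacent syllable pairs
start_counts = Counter(stream[:-1])              # how often each syllable starts a pair

def transitional_probability(a, b):
    """Estimate P(next syllable is b | current syllable is a) from the stream."""
    return pair_counts[(a, b)] / start_counts[a]

print(transitional_probability("ba", "bi"))  # 1.0 -> 'bi' always follows 'ba' (within a word)
print(transitional_probability("du", "go"))  # 0.5 -> less predictable (across a word boundary)
```

None of this is available to the learner as explicit knowledge, which is precisely the contrast with synthetic phonics: SP takes sound patterns that were learned implicitly and makes them explicit.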

My reservations about SP are not about the approach per se, but rather about how it’s applied and the reasons assumed to be responsible for its effectiveness. In cognitive terms, SP has three main components;

• phonemic and graphemic discrimination
• grapheme-phoneme correspondence
• building up phonemes/graphemes into words – blending
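To make those three components concrete, here is a deliberately crude sketch of what a decoder has to do – my own illustration, not part of any SP programme, and the grapheme-phoneme table is a tiny, made-up fragment (real English correspondences are far messier and context-dependent):

```python
# A toy decoder: discriminate graphemes, map each to a phoneme, blend the result.
# The table below is a small invented fragment, not a real SP correspondence chart.
GPC = {"sh": "/ʃ/", "ch": "/tʃ/", "a": "/æ/", "t": "/t/",
       "i": "/ɪ/", "p": "/p/", "c": "/k/", "s": "/s/"}

def decode(word):
    phonemes, i = [], 0
    while i < len(word):
        # graphemic discrimination: prefer a two-letter grapheme where one matches
        if word[i:i+2] in GPC:
            phonemes.append(GPC[word[i:i+2]])   # grapheme-phoneme correspondence
            i += 2
        else:
            phonemes.append(GPC[word[i]])
            i += 1
    return "".join(phonemes)                    # 'blending' the phonemes into a word

print(decode("chat"))  # /tʃ//æ//t/
print(decode("ship"))  # /ʃ//ɪ//p/
```

Even this toy version shows how the components can fail independently: confuse similar-looking graphemes and the lookup goes wrong; get the correspondences right but fail to blend and you still don’t get a word.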

How efficient children become at these tasks is a function of the frequency of their exposure to the tasks and how easy they find them. Most children pick up the skills with little effort, but anyone who has problems with any or all of the tasks could need considerably more rehearsals. Problems with the cognitive components of SP aren’t necessarily a consequence of ineffective teaching or the child not trying hard enough. Specialist SP teachers will usually be aware of this, but policy-makers, parents, or schools that simply adopt a proprietary SP course might not.

My son’s school taught reading using Jolly Phonics. Most of the children in his class learned to read reasonably quickly. He took 18 months over it. He had problems with each of the three elements of SP. He couldn’t tell the difference between similar-sounding phonemes – i/e or b/d, for example. He couldn’t tell the difference between similar-looking graphemes either – such as b/d, h/n or i/j. As a consequence, he struggled with some grapheme-phoneme correspondences. Even in words where his grapheme-phoneme correspondences were secure, he couldn’t blend more than three letters.

After 18 months of struggling and failing, he suddenly began to read using whole word recognition. I could tell he was doing this because of the errors he was making; he was using initial and final letters, and word shape and length, as cues. Recognising patterns is what the human brain does for a living, and once it’s recognised a pattern it’s extremely difficult to get it to unrecognise it. Brains are so good at recognising patterns that they often see patterns that aren’t what they think they are – as in pareidolia or the behaviourists’ ‘superstition’. Once my son could recognise word patterns, he was reading, and there was no way he was going to be persuaded to carry on with all that tedious sounding-out business. He just wanted to get on with reading, and that’s what he did.

[Edited to add: I should point out that the reason the apparent failure of an SP programme to teach my son to read led me to support SP rather than dismiss it was that, after conversations with specialist SP teachers, I realised he hadn’t had enough training in phonemic and graphemic discrimination. His school essentially put the children through the course without identifying any specific problems or providing the additional training that might have made a significant difference for him.]

When I trained as a teacher, ‘mixed methods’ included a substantial phonics component – albeit as analytic phonics. I get the impression that the phonics component has diminished over time, so ‘mixed methods’ aren’t what they once were. Even if they included phonics, I wouldn’t recommend ‘mixed methods’ prescriptively as an approach to teaching reading. Having said that, I think mixed methods have some validity descriptively, because they reflect the way adults and children actually read. I would recommend the use of SP for teaching reading, but I think some proponents of SP underestimate the way the human brain tends to cobble together its responses to challenges rather than follow a neat, straight pathway.

Advocacy of mixed methods and opposition to SP are often based on accurate observations of the strategies children use to read, not on evidence of which teaching methods are most effective. Our own personal observations tend to be far more salient to us than stunning SATs results reported by schools we’ve never visited. That’s why I think SP proponents need to ensure that the evidence they cite in support of SP is of a high enough quality to convince sceptics.