Huey, Dewey and…? a response to the history of reading methods (3)

Before responding to Maggie’s post, I want first to thank her and other members of the Reading Reform Forum for the vast amount of information about reading that they have put into the public domain. The site is a great resource for anyone interested in teaching reading.

I also feel I should point out that my previous post on ‘mixed methods’ was intended to be a prompt response to a question asked on Twitter, not a fully-referenced essay on the history of methods for teaching reading. It accurately accounts for why I think what I think, but I’m grateful to Maggie for explaining where my understanding of the history of reading methods might be wrong.

On reflection, I think I could have signposted the key points I wanted to make more clearly in my post. My reasoning went like this:

1. Until the post-war period, reading methods in the UK were dominated by alphabetic/phonics approaches.
2. Despite this, a significant proportion of children didn’t learn to read properly.
3. Current concerns about literacy levels don’t have a clear benchmark – what literacy levels do we expect and why?
4. Although literacy levels have fallen in recent years, the contribution of ‘mixed methods’ to this fall is unclear; other factors are involved.

A few comments on Maggie’s post:

Huey and reading methods
My observation about the use of alphabetic and analytic phonics approaches in the early days of state education in England is based on a fair number of accounts I’ve either heard or read from people who were taught to read in the late 19th/early 20th century. Without exception, they have reported:

• learning the alphabet
• learning letter-sound correspondences
• sounding out unfamiliar words letter-sound by letter-sound

I’m well aware that the first-hand accounts I’ve come across don’t form a representative sample, but from what Maggie has distilled from Huey, the accounts don’t appear to be far off the mark for what was happening generally. I concede that sounding out unfamiliar words doesn’t qualify as ‘analytic phonics’, but it’s analytic something – analytic letter-sound correspondence, perhaps?

Montessori
I cited Montessori as an example of the Europe-wide challenge posed by children who struggled at school; I wasn’t referring to her approach to teaching reading specifically. In her book she frequently mentions Itard and Séguin, who worked with hearing-impaired children. She applies a number of their techniques, but doesn’t appear to agree with them about everything – she questions Séguin’s approach to writing, for example.

Frank Smith
I haven’t read Smith, but the fact that skilled readers use context and prediction to read the words on the page wasn’t his ‘proposal’. By the 1970s it was a well-documented feature of contextual priming in skilled readers, i.e. skilled adult readers with large spoken vocabularies. From what Maggie has said, the error Smith appears to have made is to assume that children could learn by mimicking the behaviour of experts – a mistake that litters the history of pedagogy.

Hinshelwood and Orton
Hinshelwood was a British ophthalmologist interested in reading difficulties caused by brain damage. Orton, an American doctor, was also interested in brain damage and its effect on reading. I can’t see how the work of either of them would have been affected by the use of Whole Word reading methods in US schools, although their work has frequently been referred to as an explanation for reading difficulties.

the rejection of the alphabetic principle
Maggie says my statement that the alphabetic principle and analytic phonics had been abandoned because they hadn’t been effective for all children ‘makes no sense at all’. If I’m wrong, why were these methods abandoned?

using a range of cues
The cues I listed are those identified in skilled adult readers in studies carried out predominantly in the post-war period. Maggie’s hypothesis is that the range of cues is an outcome of the way the participants in experiments (often college students) had been taught to read. It’s an interesting hypothesis; it would be great to test it. An alternative hypothesis is that the strategies used by skilled adult readers are an outcome of how brains work. Prior information primes neural networks and thus reduces response time, and frequent exposure to auditory and visual patterns such as spoken and written words results in automated, fast recognition. For example, in chapter 2 of Stanovich’s book, West and Stanovich report fluent readers’ performance being facilitated by two automated processes: sentence context (essentially semantic priming) and word recognition. According to chapter 3, fluent readers use phonological recoding if automated word recognition fails.
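To make that two-process picture concrete, here’s a toy sketch in Python (my own illustration, not West and Stanovich’s actual model; the miniature sight vocabulary, letter-sound rules and response times are all invented): automated recognition of a familiar word is fast, context priming makes it faster still, and phonological recoding is the slower fallback when recognition fails.

```python
# Toy model only: the sight vocabulary, letter-sound rules and timings below
# are invented for illustration, not taken from West and Stanovich.
SIGHT_VOCABULARY = {"cat": "kat", "sat": "sat", "mat": "mat"}  # automatically recognised words
GPC_RULES = {"c": "k", "a": "a", "t": "t", "s": "s", "m": "m", "h": "h"}  # toy letter-sound rules

def read_word(word, context_primed=False):
    """Return a (pronunciation, notional response time in ms) pair."""
    if word in SIGHT_VOCABULARY:
        # Automated word recognition; semantic priming shaves off response time.
        return SIGHT_VOCABULARY[word], 250 if context_primed else 300
    # Automated recognition failed: fall back on slower phonological recoding,
    # sounding the word out letter by letter.
    return "".join(GPC_RULES.get(letter, "?") for letter in word), 600

print(read_word("cat", context_primed=True))  # ('kat', 250): fast, primed recognition
print(read_word("hat"))                       # ('hat', 600): decoded via recoding
```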

educators’ reasoning
I wasn’t saying that the educators’ assessment of alphabetic/phonics methods was right, just that it was what they claimed. Again, if they didn’t think that, why would alphabetic/phonics methods have been abandoned?

falling literacy standards
The data I suggested weren’t available are those that would enable us to make a valid comparison between the literacy levels of school-leavers (aged 13, say) at the beginning of the 20th century, when alphabetic/phonics methods were widely used in the UK, and current levels for young people of the same age. The findings Maggie has cited are interesting, but don’t give us a benchmark for the literacy levels we should expect.

national curriculum and standardised testing
The point I was trying to make was not about the impact of the NC and SATs on reading, but that the NC and SATs made poor readers more obvious. In the reading-ready era, some children not reading at 7 would have learned to read by the time they were 11, but that delay wouldn’t have appeared in national statistics.

reading for enjoyment
Children leaving school without functional literacy is certainly a cause for concern, and I agree that methods of teaching reading must be implicated. But technological changes since 1990 haven’t helped. The world of young people is not as text-based as it used to be, and not as text-based as the adult world. That issue needs to be addressed.

Note:
Huey, Dewey & Louie are the names of Donald Duck’s three nephews.
There’s no Louie in this story yet.


Maggie Downie on the history of methods of teaching reading: guest post (2)

My previous post was a reply to a question posed in a Twitter discussion about a blogpost by @HeatherBellaF on the evidence for synthetic phonics. I’m grateful to @MaggieDownie for summarising the history of reading methods used in the English-speaking world on the Reading Reform Forum site here. Maggie wasn’t able to post this as a comment on my blog, so I’ve reproduced it in full below. I’ll respond later. I’ve edited Maggie’s post only to reduce spacing and restore italics. Here’s what she says:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Heather recently posted a blog about SSP which provoked a bit of a Twitter storm and a series of exchanges over the quality of the evidence with another RRF message board contributor, who posted her own blog in response.

Heather’s blog: http://heatherfblog.wordpress.com/2014/ … -evidence/
Response: http://logicalincrementalism.wordpress. … g-reading/

I felt that the ‘history of reading instruction’ run through in the second blog was, to say the least, vague and inaccurate, and tried to write a response. This has turned out to be extremely long so I am posting it here instead. It is not a polished piece of work, nor does it address everything, but I have tried to show how instructional methods have taken hold over the past 100 years or so. I realise that I could have gone further, looking at reports such as Bullock and Warnock, but this isn’t an undergraduate essay.

I might also say that reading Huey is a real eye-opener. Diack comments that much of what had been written about reading prior to his own book (1965) could be found in Huey, though as the 20th C progressed it was increasingly unattributed. The same could be said now, in that much of what Huey said is still being said today. The power of Ruling Theory at work!

Sections in italics are from the blog post

Here goes!

As far as I’m aware, when education became compulsory in England in the late 19th century, reading was taught predominantly via letter-sound correspondence and analytic phonics – ‘the cat sat on the mat’ etc. A common assumption was that if people couldn’t read it was usually because they’d never been taught. What was found was that a proportion of children didn’t learn to read despite being taught in the same way as others in the class. The Warnock committee reported that teachers in England at the time were surprised by the numbers of children turning up for school with disabilities or learning difficulties. That resulted in special schools being set up for those with the most significant difficulties with learning. In France, Alfred Binet was commissioned to devise a screening test to identify learning difficulties, which evolved into the ‘intelligence test’. In Italy, Maria Montessori adapted for mainstream education methods that had been used to teach hearing-impaired children.

The history of teaching reading is far more complex than your overview suggests. It is not a straight run of ‘letter/sound correspondence and analytic phonics teaching’ from the inception of universal schooling in the 1880s through to Ken Goodman’s ‘Whole Language’ of the 1960s. It is a period of differing theories and methodologies; of the beginning of the scientific study of the reading process (mainly of eye-movements) and of gathering momentum in the disagreements about the theory of reading instruction which has led to the ‘Reading Wars’.

It might be noted that this is a peculiarly Anglocentric history; countries which have more transparent orthographies (i.e. mainly, or completely, having only one way to represent each of the phonemes of the language) have, for the most part, carried serenely on as they have done for years, teaching letter/sound correspondences, decoding and blending for reading and segmenting for spelling, with no apparent detriment to the children so taught and with far higher levels of literacy than many English-speaking countries. And with no thought of changing their effective teaching methods.

A great deal of information on the history of reading instruction comes from the highly influential work of the educational psychologist Edmund Burke Huey, ‘The Psychology and Pedagogy of Reading’ (1908), and also from Hunter Diack’s ‘In Spite of the Alphabet’ (1965) and Jeanne Chall’s ‘Learning to Read: The Great Debate’ (1967). A paper by Dr Joyce Morris, ‘Phonicsphobia’, gives insight into English practice post WW2. Events from the 80s on may be fairly common knowledge to older readers.

From my reading of Huey it seems that by the 19th century there were four main methods of teaching reading (with variations within each category). The method which seems to have obtained until at least the mid-19th C was the Alphabetic, by which is meant the ‘traditional’ centuries-old method of learning the alphabet letters and how to spell words out. It is not altogether clear whether children were taught letter-sound correspondences or letter names (or both) by this method, though Diack suggests that as the method involved learning consonant-vowel combinations (ba, be, bo, bu etc.) it must have involved ‘sounds’ at some stage. Whole Word (Look & Say) had been proposed from time to time during the 18th C but may have derived some impetus from Thomas Gallaudet, an early 19th C educator of deaf children, who used Whole Word to teach his pupils to read. By the time Huey was writing it was being seriously proposed as an effective method. Huey also identified ‘Phonetic’ methods; not ‘phonics’ as we know it but methods using simplified alphabets or diacritical marks to simplify early reading instruction. The fourth category was Phonics, phonics of a kind quite familiar to SP proponents and even called ‘Synthetic’ by some late 19th C practitioners. (Analytic Phonics does not seem to have featured.)

Huey himself favoured a version of Whole Word known as the Sentence Method, based on the theory that children would learn best something that was meaningful and interesting to them. Children were taught to recognise and ‘read’ a whole sentence (with no regard to the individual words which comprised it or the letters the words contained). Diack suggests that this method was validated by Gestalt theories (that the ‘whole’ is the unit of immediate perception) in the 1920s and I think it perhaps influenced the bizarre statement of Whole Language guru, Ken Goodman, to the effect that a paragraph is easier to read than a sentence and a sentence is easier to read than a word.

Huey did believe that phonics should be taught, but only after children had learned to read, and not as part of the reading process; presumably the phonics was for spelling. He did acknowledge that Rebecca Pollard’s ‘Synthetic’ (phonics) Method was successful, but dismissed it as old-fashioned and tedious.

It is important to note that a key element of Whole Word instruction is the focus on reading for meaning alone. There is no attempt to teach any word recognition strategies beyond, perhaps, linking the word to a picture. The success of the method relied on children’s own ability to memorise the appearance of the word (and to be able to recognise it in different forms, e.g. differing fonts, cases or handwriting). The educationists who promoted the method did so because of their perception that children who did not read with ‘expression’ were not understanding what they were reading and that phonics instruction led to expressionless mechanical reading with no understanding. There seems to have been no attempt to verify this belief. Horace Mann gave expression to it when describing reading he heard in schools in the 1830s as being ‘too often a barren action of the organs of speech upon the atmosphere’, and it can be seen today, over 150 years later, expressed, in less picturesque terms, by denigrators of SSP methods of teaching reading.

Diack says that in reality phonic methods predominated in the UK & the US for at least the latter half of the 19th century. Under the influence of figures such as Huey & Dewey, Whole Word methods became widely accepted in the US from the early 20th century, whereas Phonics lingered on in the UK for far longer.

At this point it might be appropriate to mention Montessori. I am not sure why her method of teaching reading is thought to have been developed from her work with deaf children. As far as I can make out from her own book (The Montessori Method, 1912), her method for developing the motor skills needed for writing and her use of letter shapes for learning the forms of letters were developed when she worked with what we would now call children with learning difficulties, but her method of teaching reading owes nothing whatsoever to work with hearing-impaired children. She taught letter/sound correspondence right from the start and her account of how her children learned to read and write would have any SP proponent nodding in approval. It is very beautiful and well worth reading.
(http://digital.library.upenn.edu/women/ … ethod.html) p246

It seems that Whole Word methods began to really take hold in the UK during the 1930s and proliferated post WW2 as part of the postwar desire for ‘modernisation’. It was then that Joyce Morris encountered resistance to old-fashioned ‘Phonics’, detailed in her article ‘Phonicsphobia’ (1994), as did Hunter Diack when he published papers in the 1950s in favour of phonics instruction. His approach to phonics was to teach letter/sound correspondences but in the context of whole words. I don’t know enough about his method to tell if it tends to Analytic or Synthetic, but the reading tests he produced with J.C. Daniels do not look to be ‘word family’ based.

It is possible that Whole Word may have slipped quietly away at some time had it not been for the rise to prominence of the highly charismatic and persuasive Frank Smith in the early 1970s. Having never taught a child to read, he wrote a book called ‘Understanding Reading’ (1971), which seems never to have been out of print since. A great deal of it is regurgitation of Huey, and some of it consists of stunningly inaccurate assertions about what happens in the reading process. The final chapter, where he proposes that a really skilled reader can read a page of text and get the meaning of it without being aware of the words on the page, is awe-inspiringly loopy. Yet he has a huge following and is revered. It was Frank Smith’s excitingly ‘modern’ take on reading that inspired two young cognitive psychologists in the 1970s to base a study on Smith’s proposal that skilled readers use context and prediction to ‘read’ the words on the page and that poor readers laboured away with phonics. Stanovich and West were amazed to find that precisely the opposite was true.

Research into acquired reading difficulties in adults generated an interest in developmental problems with learning to read, pioneered by James Hinshelwood and Samuel Orton in the early 20th century.

From my foregoing account you should be aware that Orton and Hinshelwood were investigating reading disorders in the USA at a time when Whole Word had become the predominant method of teaching reading; any phonics instruction was incidental. ‘Alphabetic principle and analytic phonics’ really cannot be implicated here.

The term developmental dyslexia began as a descriptive label for a range of problems with reading and gradually became reified into a ‘disorder’. Because using the alphabetic principle and analytic phonics clearly wasn’t an effective approach for teaching all children to read, and because of an increased interest in child development, researchers began to look at what adults and children actually did when reading and learning to read, rather than what it had been thought they should do.

This is just extraordinary. Bearing in mind that no date is given for this rejection of the alphabetic principle and analytic phonics, and that Dr Orton famously pioneered structured, systematic phonics instruction for remediation of dyslexics in the 1920s/30s (the implication being that this was not the instruction they received in schools), this statement makes no sense at all.

What they found was that people use a range of cues (‘mixed methods’) to decode unfamiliar words: letter-sound correspondence, analytic phonics, recognising words by their shape, using key letters, grammar, context and pictures, for example.

This is an odd one to unpick. It is probable that researchers did find that people used these strategies, but they were used in the context of a belief that children could learn to read whole words, whole sentences etc. with no instruction in phonics until they *could* read. In the absence of initial phonics instruction, and, presumably, because children struggled to learn to read when the Word method assumed that they would learn unaided, these ‘strategies’ were developed and taught in an attempt to help children learn more easily. Naturally these strategies would be observed in people taught to use them, or in people who had developed them by themselves in the absence of any other guidance. Chall shows clearly how basal readers developed the use of pictures and predictable text to facilitate the teaching of these strategies. But Stanovich and West showed in the 70s that these were strategies used by unskilled readers and that skilled readers used decoding strategies for word recognition (this is an extreme simplification of the research Stanovich outlines in ‘Progress in Understanding Reading’); this has been the conclusion of cognitive scientists over the subsequent decades, so the validity of these strategies is seriously challenged.

Educators reasoned that if some children hadn’t learned to read using alphabetic principles and/or analytic phonics, applying the strategies that people actually used when reading new words might be a more effective approach.

As alphabetic principles weren’t being used to any great extent, this statement is invalid. The tossing in of ‘analytic phonics’ seems more of a sop to phonics detractors than an indictment of ‘phonics’. McGuinness’s (1998) examination of US ‘analytic’ phonics instruction shows it to have been chaotic, illogical and unstructured, and only marginally effective. There is no reason to believe that the situation was any different in the UK. Indeed, examination of pre-SP phonics programmes (of which I have several) tends to confirm her conclusions.

This idea, coinciding with an increased interest in child-led pedagogy and a belief that a species-specific genetic blueprint meant that children would follow the same developmental trajectory but at different rates, resulted in the concept of ‘reading-readiness’. The upshot was that no one panicked if children couldn’t read by 7, 9 or 11; they often did learn to read when they were ‘ready’. It’s impossible to compare the long-term outcomes of analytic phonics and mixed methods because the relevant data aren’t available. We don’t know, for instance, whether children’s educational attainment suffered more if they got left behind by whole-class analytic phonics, or if they got left alone in schools that waited for them to become ‘reading-ready’.

Some comparisons do exist. Diack notes that the committee set up by the UK government in 1947 ‘to consider the nature and extent of the illiteracy alleged to exist among school leavers and young people’ found that 11-year-olds in 1948 were a year behind those of 1938, and 15-year-olds in 1948 were 2 years behind those of 1938. Martin Turner, in his pamphlet ‘Sponsored Reading Failure’ (1990), found that standards in reading were falling (that was in the days when reading was monitored by Local Authorities) and suggested that this was caused by the prevalence of Whole Word and Real Books methodology.

Eventually, as is often the case, the descriptive observations about how people tackle unfamiliar words became prescriptive. Whole word recognition began to supersede analytic phonics after WW2, and in the 1960s Ken Goodman formalised mixed methods in a ‘whole language’ approach. Goodman was strongly influenced by Noam Chomsky, who believes that the structure underpinning language is essentially ‘hard-wired’ in humans. Goodman’s ideas chimed with the growing social constructivist approach to education that emphasises the importance of meaning mediated by language.
At the same time as whole language approaches were gaining ground, in England the national curriculum and standardised testing were introduced, which meant that children whose reading didn’t keep up with their peers were far more visible than they had been previously, and the complaints that had followed the introduction of whole language in the USA began to be heard here.

It seems that Whole Word/Whole Language approaches had been prevalent long before the introduction of the national curriculum, and it is debatable that the National Curriculum Tests were truly standardised. But an account of government attempts to reintroduce more phonics into the teaching of reading since 1988 can be found here: http://www.rrf.org.uk/

In addition, the national curriculum appears to have focussed on the mechanics of understanding ‘texts’ rather than on reading books for enjoyment.

I would agree with that but would also note that the initial teaching of reading was such that, even with increased official emphasis on the teaching of phonics, a consistent ‘tail’ of some 20% of children has left primary school with barely functional literacy (L3 or below; some 120,000 children annually), and that inability to read with ease militates strongly against getting any enjoyment from reading, or choosing to read as a leisure activity.

What has also happened is that with the advent of multi-channel TV and electronic gadgets, reading has nowhere near the popularity it once had as a leisure activity amongst children, so children tend to get a lot less reading practice than they did in the past. These developments suggest that any decline in reading standards might have multiple causes, rather than ‘mixed methods’ being the only culprit.

But concern must not be focussed only on failure to read for enjoyment. There are very significant numbers of children and young people who are unable to read to a level which enables them to access the functional reading needed to participate in a highly text-based society.

Sources:
Huey, E.B., The Psychology and Pedagogy of Reading (1908)
https://archive.org/stream/psychologyan … 1/mode/1up
Montessori, M., The Montessori Method (1912)
http://digital.library.upenn.edu/women/ … ethod.html
Diack, H., In Spite of the Alphabet (1965)
Chall, J., Learning to Read: The Great Debate (1967)
Smith, F., Understanding Reading (1971)
Turner, M., Sponsored Reading Failure (1990)
Morris, J., Phonicsphobia (1994)
http://www.spellingsociety.org/journals … sfobia.php
McGuinness, D., Why Children Can’t Read (1998)
Stanovich, K., Progress in Understanding Reading (2000)

mixed methods for teaching reading (1)

Many issues in education are treated as either/or options and the Reading Wars have polarised opinion into synthetic phonics proponents on the one hand and those supporting the use of whole language (or ‘mixed methods’) on the other. I’ve been asked on Twitter what I think of ‘mixed methods’ for teaching reading. Apologies for the length of this reply, but I wanted to explain why I wouldn’t dismiss mixed methods outright and why I have some reservations about synthetic phonics. I wholeheartedly support the idea of using synthetic phonics (SP) to teach children to read. However, I have reservations about some of the assumptions made by SP proponents about the effectiveness of SP and about the quality of the evidence used to justify its use.

the history of mixed methods

As far as I’m aware, when education became compulsory in England in the late 19th century, reading was taught predominantly via letter-sound correspondence and analytic phonics – ‘the cat sat on the mat’ etc. A common assumption was that if people couldn’t read it was usually because they’d never been taught. What was found was that a proportion of children didn’t learn to read despite being taught in the same way as others in the class. The Warnock committee reported that teachers in England at the time were surprised by the numbers of children turning up for school with disabilities or learning difficulties. That resulted in special schools being set up for those with the most significant difficulties with learning. In France, Alfred Binet was commissioned to devise a screening test to identify learning difficulties, which evolved into the ‘intelligence test’. In Italy, Maria Montessori adapted for mainstream education methods that had been used to teach hearing-impaired children.

Research into acquired reading difficulties in adults generated an interest in developmental problems with learning to read, pioneered by James Hinshelwood and Samuel Orton in the early 20th century. The term developmental dyslexia began as a descriptive label for a range of problems with reading and gradually became reified into a ‘disorder’. Because using the alphabetic principle and analytic phonics clearly wasn’t an effective approach for teaching all children to read, and because of an increased interest in child development, researchers began to look at what adults and children actually did when reading and learning to read, rather than what it had been thought they should do.

What they found was that people use a range of cues (‘mixed methods’) to decode unfamiliar words: letter-sound correspondence, analytic phonics, recognising words by their shape, using key letters, grammar, context and pictures, for example. Educators reasoned that if some children hadn’t learned to read using alphabetic principles and/or analytic phonics, applying the strategies that people actually used when reading new words might be a more effective approach.

This idea, coinciding with an increased interest in child-led pedagogy and a belief that a species-specific genetic blueprint meant that children would follow the same developmental trajectory but at different rates, resulted in the concept of ‘reading-readiness’. The upshot was that no one panicked if children couldn’t read by 7, 9 or 11; they often did learn to read when they were ‘ready’. It’s impossible to compare the long-term outcomes of analytic phonics and mixed methods because the relevant data aren’t available. We don’t know, for instance, whether children’s educational attainment suffered more if they got left behind by whole-class analytic phonics, or if they got left alone in schools that waited for them to become ‘reading-ready’.

Eventually, as is often the case, the descriptive observations about how people tackle unfamiliar words became prescriptive. Whole word recognition began to supersede analytic phonics after WW2, and in the 1960s Ken Goodman formalised mixed methods in a ‘whole language’ approach. Goodman was strongly influenced by Noam Chomsky, who believes that the structure underpinning language is essentially ‘hard-wired’ in humans. Goodman’s ideas chimed with the growing social constructivist approach to education that emphasises the importance of meaning mediated by language.

At the same time as whole language approaches were gaining ground, in England the national curriculum and standardised testing were introduced, which meant that children whose reading didn’t keep up with their peers were far more visible than they had been previously, and the complaints that had followed the introduction of whole language in the USA began to be heard here. In addition, the national curriculum appears to have focussed on the mechanics of understanding ‘texts’ rather than on reading books for enjoyment. What has also happened is that with the advent of multi-channel TV and electronic gadgets, reading has nowhere near the popularity it once had as a leisure activity amongst children, so children tend to get a lot less reading practice than they did in the past. These developments suggest that any decline in reading standards might have multiple causes, rather than ‘mixed methods’ being the only culprit.

what do I think about mixed methods?

I think Chomsky has drawn the wrong conclusions about his linguistic theory, so I don’t subscribe to Goodman’s reading theory either. Although meaning is undoubtedly a social construction, it’s more than that. Social constructivists tend to emphasise the mind at the expense of the brain. The mind is such a vague concept that you can say more or less what you like about it, but we’re very constrained by how our brains function. I think marginalising the brain is an oversight on the part of social constructivists, and I can’t see how a child can extract meaning from a text if they can’t read the words.

Patricia Kuhl’s work suggests that babies acquire language computationally, from the frequency of sound patterns within speech. This is an implicit process; the baby’s brain detects the sounds and learns the patterns, but the baby isn’t aware of the learning process, nor of phonemes. What synthetic phonics does is to make the speech sounds explicit, develop phonemic awareness and allow children to learn phoneme-grapheme correspondence and how words are constructed.
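As a rough picture of what ‘computational’ acquisition from frequency might look like, here’s a toy sketch (mine, not Kuhl’s model; the syllables and ‘words’ are invented). In a continuous stream of speech, the transitional probability between syllables is high inside words and low across word boundaries, so frequency statistics alone can mark out where the words are:

```python
# Toy illustration, not Kuhl's model: made-up two-syllable "words" in a
# continuous stream; word boundaries emerge from syllable co-occurrence
# frequencies alone, with no explicit teaching.
from collections import Counter
import random

random.seed(0)
WORDS = ["badi", "kupa", "tigo"]  # hypothetical words, two syllables each
syllables_of = lambda w: [w[i:i + 2] for i in range(0, len(w), 2)]

# Simulate continuous speech: a long, unsegmented stream of random words.
stream = []
for _ in range(500):
    stream.extend(syllables_of(random.choice(WORDS)))

pair_counts = Counter(zip(stream, stream[1:]))  # adjacent-syllable frequencies
first_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """How often syllable a is followed by syllable b."""
    return pair_counts[(a, b)] / first_counts[a]

print(transitional_probability("ba", "di"))  # ~1.0: within-word, always together
print(transitional_probability("di", "ku"))  # ~0.33: across a word boundary
```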

My reservations about SP are not about the approach per se, but rather about how it’s applied and the reasons assumed to be responsible for its effectiveness. In cognitive terms, SP has three main components:

• phonemic and graphemic discrimination
• grapheme-phoneme correspondence
• building up phonemes/graphemes into words – blending

How efficient children become at these tasks is a function of the frequency of their exposure to the tasks and how easy they find them. Most children pick up the skills with little effort, but anyone who has problems with any or all of the tasks could need considerably more rehearsal. Problems with the cognitive components of SP aren’t necessarily a consequence of ineffective teaching or the child not trying hard enough. Specialist SP teachers will usually be aware of this, but policy-makers, parents, or schools that simply adopt a proprietary SP course might not.

My son’s school taught reading using Jolly Phonics. Most of the children in his class learned to read reasonably quickly. He took 18 months over it. He had problems with each of the three elements of SP. He couldn’t tell the difference between similar-sounding phonemes – i/e or b/d, for example. He couldn’t tell the difference between similar-looking graphemes either – such as b/d, h/n or i/j. As a consequence, he struggled with some grapheme-phoneme correspondences. Even in words where his grapheme-phoneme correspondences were secure, he couldn’t blend more than three letters.

After 18 months of struggling and failing, he suddenly began to read using whole word recognition. I could tell he was doing this because of the errors he was making; he was using initial and final letters and word shape and length as cues. Recognising patterns is what the human brain does for a living and once it’s recognised a pattern it’s extremely difficult to get it to unrecognise it. Brains are so good at recognising patterns they often see patterns that aren’t what they think they are – as in pareidolia or the behaviourists’ ‘superstition’. Once my son could recognise word-patterns, he was reading and there was no way he was going to be persuaded to carry on with all that tedious sounding-out business. He just wanted to get on with reading, and that’s what he did.

[Edited to add: I should point out that the reason the apparent failure of an SP programme to teach my son to read led to me supporting SP rather than dismissing it, was because after conversations with specialist SP teachers, I realised that he hadn’t had enough training in phonemic and graphemic discrimination. His school essentially put the children through the course, without identifying any specific problems or providing additional training that might have made a significant difference for him.]

When I trained as a teacher ‘mixed methods’ included a substantial phonics component – albeit as analytic phonics. I get the impression that the phonics component has diminished over time so ‘mixed methods’ aren’t what they once were. Even if they included phonics, I wouldn’t recommend ‘mixed methods’ prescriptively as an approach to teaching reading. Having said that, I think mixed methods have some validity descriptively, because they reflect the way adults/children actually read. I would recommend the use of SP for teaching reading, but I think some proponents of SP underestimate the way the human brain tends to cobble together its responses to challenges, rather than to follow a neat, straight pathway.

Advocacy of mixed methods and opposition to SP is often based on accurate observations of the strategies children use to read, not on evidence of what teaching methods are most effective. Our own personal observations tend to be far more salient to us than schools we’ve never visited reporting stunning SATs results. That’s why I think SP proponents need to ensure that the evidence they refer to as supporting SP is of a high enough quality to be convincing to sceptics.

getting it wrong from the beginning: natural learning

In my previous post, I said that I felt that in Getting It Wrong From The Beginning: Our Progressive Inheritance from Herbert Spencer, John Dewey and Jean Piaget Kieran Egan was too hard on Herbert Spencer and didn’t take sufficient account of the context in which Spencer formulated his ideas. In this post, I look in more detail at the ideas in question and Egan’s critique of them.

natural learning

Egan says that the “holy grail of progressiveness … has been to discover methods of school instruction derived from and modelled on children’s effortless learning … in households, streets and fields” (pp.38-39). In essence, progressives like Spencer see all learning as occurring in the same way, implying that children find school learning difficult only because it doesn’t take into account how they learn naturally. Their critics see school learning as qualitatively different to natural learning; it requires thinking, and thinking doesn’t come naturally and is effortful so students don’t like it.

It’s inaccurate to describe the learning children do in ‘households, streets and fields’ as ‘effortless’. ‘Apparently effortless’ would be more accurate. That’s because a key factor in learning is rehearsal. Babies and toddlers spend many, many hours rehearsing their motor, language, and sensory processing skills and in acquiring information about the world around them. Adolescents do the same in respect of interacting with peers, using video games or playing in a band. Adults can become highly competent in the workplace or at cooking, motor mechanics or writing novels in their spare time. What makes this learning appear effortless is that the individuals are highly motivated to put in the effort, so the learning doesn’t feel like work. I think there are three main motivational factors in so-called ‘natural learning’: sensory satisfaction (in which I’d include novelty-seeking and mastery), social esteem and sheer necessity – if it’s a case of acquiring knowledge and skills or starving, the acquisition of knowledge and skills usually wins.

School learning tends to differ from ‘natural’ learning in two main respects. One is motivational. School learning is essentially enforced – someone else decides what you’re going to learn about, regardless of whether you want to learn about it or see an immediate need to learn about it. The other is that the breadth of the school curriculum means that there isn’t enough time for learning to occur ‘naturally’. If I were to spend a year living with a Spanish family or working for a chemist, I would learn more Spanish or chemistry naturally than I would if I had two Spanish or chemistry lessons a week at school, simply because the amount of rehearsal time would be greater in the Spanish family or the chemistry lab than in school. Schools generally teach the rules of languages or of science explicitly, and students have to spend more time actively memorising vocabulary and formulae because there simply isn’t the time available to pick them up ‘naturally’.

progressive ‘myths’

Egan’s criticism of Spencer’s ideas centres around three core principles of progressive education; simple to complex, concrete to abstract and known to unknown – Egan calls the principles ‘myths’. Egan presents what at first appears to be a convincing demolition job on all three principles, but the way he uses the constructs involved is different to the way in which they are used by Spencer and/or by developmental psychology. Before unpacking Egan’s criticism of the core principles, I think it would be worth looking at the way he views cognition.

the concept of mind

Egan frequently refers to the concept of ‘mind’. ‘Mind’ is a useful shorthand term when referring to activities like feeling, thinking and learning, but it’s too vague a concept to be helpful when trying to figure out the fine detail of learning. Gilbert Ryle points out that even in making a distinction between mind and body, as Descartes did, we make a category error – a ‘mind’ isn’t the same sort of thing as a body, so we can’t make valid comparisons between them. If I’ve understood Ryle correctly, what he’s saying is that ‘mind’ isn’t just a different type of thing to a body; ‘mind’ doesn’t exist in the way a body exists, but is rather an emergent property of what a person does – of their ‘dispositions’, as he calls them.

Emergent properties that appear complex and sophisticated can result from some very simple interactions. An example is flocking behaviour. At first glance, the V-formation in flight adopted by geese and ducks or the extraordinary patterns made by flocks of starlings before roosting or by fish evading a predator look pretty complex and clever. But in fact these apparently complex behaviours can emerge from some very simple rules of thumb (heuristics) such as each bird or fish maintaining a certain distance from the birds or fish on either side of them, and moving in the general direction of its neighbours. Similarly, some human thinking can appear complex and sophisticated when in fact it’s the outcome of some simple biological processes. ‘Minds’ might not exist in the same way as bodies do, but brains are the same kind of thing as bodies and do exist in the same way as bodies do, and brains have a significant impact on how people feel, think, and learn.
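For anyone who wants to see this for themselves, here’s a minimal sketch along the lines of the standard ‘boids’ heuristics (the rules and parameter values are my own illustrative choices, not taken from any study): each bird follows just two local rules, yet flock-like order emerges with no global plan anywhere.

```python
# Toy "boids"-style sketch; the rules and numbers are illustrative only.
# Each bird follows two local heuristics: drift towards the average heading
# of its neighbours, and turn away from any neighbour that gets too close.
import math
import random

NEIGHBOUR_RADIUS = 5.0  # how far a bird can "see"
MIN_DISTANCE = 1.0      # preferred spacing from the nearest neighbour
STEP = 0.5              # distance moved per tick

random.seed(0)
birds = [{"x": random.uniform(0, 20), "y": random.uniform(0, 20),
          "heading": random.uniform(0, 2 * math.pi)} for _ in range(30)]

def distance(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def step(birds):
    moved = []
    for b in birds:
        heading = b["heading"]
        neighbours = [o for o in birds if o is not b and distance(o, b) < NEIGHBOUR_RADIUS]
        if neighbours:
            # Rule 1: turn towards the general direction of your neighbours.
            avg = math.atan2(sum(math.sin(o["heading"]) for o in neighbours),
                             sum(math.cos(o["heading"]) for o in neighbours))
            heading += 0.3 * math.atan2(math.sin(avg - heading), math.cos(avg - heading))
            # Rule 2: if the nearest neighbour is too close, turn away from it.
            nearest = min(neighbours, key=lambda o: distance(o, b))
            if distance(nearest, b) < MIN_DISTANCE:
                heading = math.atan2(b["y"] - nearest["y"], b["x"] - nearest["x"])
        moved.append({"x": b["x"] + STEP * math.cos(heading),
                      "y": b["y"] + STEP * math.sin(heading),
                      "heading": heading})
    return moved

for _ in range(200):  # no bird knows the flock's shape, yet order emerges
    birds = step(birds)
```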

the brain and learning

Egan appeals to Fodor’s model of the brain in which “we have fast input systems and a slower, more deliberative central processor” (p.39). Fodor’s fast and ‘stupid’ input systems are dedicated to processing particular types of information and work automatically, meaning that we can’t not learn things like motor skills or language. Fodor is broadly correct in his distinction, but I think Egan has drawn the wrong conclusions from this idea. A core challenge in research is that often more than one hypothesis offers a plausible explanation for a particular phenomenon. The genius of research is in eliminating the hypotheses that actually don’t explain the phenomenon. But if you’re not familiar with a field and you’re not aware that there are competing hypotheses, it’s easy to assume that there’s only one explanation for the data. This is what Egan appears to do in relation to cognitive processes; he sees the cognitive data through the spectacles of a model that construes natural learning as qualitatively different to the type of learning that happens in school.

Egan assumes that the apparent ease with which children learn to recognise faces or pick up languages, and the fact that there are dedicated brain areas for face recognition and for language, implies that those functions are inbuilt automatic systems that result in effortless learning. But that’s not the only hypothesis in town. What’s equally possible is that face recognition and language need to be learned. There’s general agreement that the human brain is hard-wired to extract signals from noise – to recognise patterns – but the extent to which patterns are identified and learned depends on the frequency of exposure to the patterns. For most babies, human facial features are the first visual pattern they see, and it’s one they see a great many times during their first day of life, so it’s not surprising that, even at a few hours old, they ‘prefer’ facial features the right way up rather than upside down. It’s a relatively simple pattern, so would be learned quickly. Patricia Kuhl’s work on infants’ language acquisition suggests that a similar principle is in operation in relation to auditory information – babies’ brains extract patterns from the speech they hear, and the rate at which the patterns are extracted is a function of the frequency of exposure to speech. The patterns in speech are much more complex than facial features, so language takes much longer to learn.

Egan’s understanding of mind and brain colours the way he views Spencer’s principles. He also uses the constructs embedded in the principles in a different way to Spencer. As a consequence, I feel his case against the principles is considerably weakened.

the three principles of progressive education

simple to complex

Spencer’s moment of epiphany with regard to education was when he realised that the gradual transition from simple to complex observed in the evolution of living organisms, the way human societies have developed and the pre-natal development of the foetus, also applied to the way human beings learn. Egan points out that this idea was challenged by the discovery of the second law of thermodynamics which states that isolated systems evolve towards maximum entropy – in other words complexity tends to head towards simplicity, the opposite of what Spencer and the evolutionists were claiming. What critics overlook is that although the second law of thermodynamics applies to the isolated system of the universe as a whole and any isolated system within it, most systems in the universe aren’t isolated. Within the vast, isolated universe system, subatomic particles, chemicals and living organisms are interacting with each other all the time. If that wasn’t the case, complex chemical reactions wouldn’t happen, organisms wouldn’t change their structure and babies wouldn’t be born. I think Egan makes a valid point about early human societies not consisting of simple savages, but human societies, like the evolution of living organisms, chemical reactions, the development of babies and the way people learn if left to their own devices, do tend to start simple and move towards complex.

Egan challenges the application of this principle to education by suggesting that the thinking of young children can be very complex, as exemplified by their vivid imaginations and “mastering language and complex social rules when most adults can’t program a VCR” (p.62). He also claims this principle has “hidden and falsified those features of children’s thinking that are superior to adults’” (p.90), namely children’s use of metaphor, which he says declines once they become literate (p.93). I think Egan is right that Spencer’s idea of cognition unfolding along a predetermined straight developmental line from simple to complex is too simplistic and doesn’t pay enough attention to the role of the environment. But I think he’s mistaken in suggesting that language, social behaviour and metaphor are examples of complex thinking in children. Egan himself attributes young children’s mastery of language and complex social rules to Fodor’s ‘stupid’ systems, which is why they are often seen as a product of ‘natural’ learning. Children might use metaphor more frequently than adults, but that could equally well be because adults have wider vocabularies, more precise terminology and simply don’t need to use metaphor so often. Frequency isn’t the same as complexity. Research into children’s motor, visuo-spatial, auditory and cognitive skills paints the same picture: they start simple and get more complex over time.

concrete to abstract

By ‘abstract’ Spencer appears to have meant the abstraction of rules from concrete examples: the rules of grammar from speech, algebraic rules from mathematical relationships, the laws of physics and chemistry from empirical observations and so on. Egan’s idea of ‘abstract’ is different – he appears to construe it as meaning ‘intangible’. He claims that children are capable of abstract thought because they have no problem imagining things that don’t exist, giving the example of Beatrix Potter’s Peter Rabbit (p.61). Peter Rabbit certainly isn’t concrete in the sense of actually existing in the real world, but all the concepts children need to comprehend his story are very concrete indeed; they include rabbits, items of clothing, tools, vegetables and gardens. And the ‘abstract’ emotions involved – anger, fear, security – are all ones with which children would be very familiar. Egan isn’t using ‘abstract’ in the same way as Spencer. Egan also claims that children’s ability to understand symbolic relationships means that Spencer was wrong. However, as Egan points out, symbols are ‘arbitrarily connected with what they symbolize’ and the ‘ready grasp of symbols’ is found in ‘children who are exposed to symbols’, which suggests that actually the children’s thinking does start with the concrete (what the symbols represent) and moves towards the abstract (the symbols and their arbitrary connection with what they symbolize). Spencer might have over-egged the pudding with respect to the concrete-to-abstract principle, but I don’t think Egan manages to demonstrate that he was wrong.

known to unknown

Spencer was also insistent that education should start with what children knew – the things that were familiar to them in their own homes and communities. Egan raises several objections to this idea (pp.63-64):

1. “if this is a fundamental principle of human learning, there is no way the process can begin”
2. “if novelty – that is things unconnected with what is already known – is the problem … reducing the amount of novelty doesn’t solve the problem”
3. this principle has dumbed down the curriculum and comes close to “contempt for children’s intelligence”
4. “this is the four-legged fly item … no one’s understanding of the world … expands according to this principle of gradual content association”

With regard to point 1, Spencer clearly wasn’t saying we have to know something in order to know anything else. What he was saying is that trying to get children to learn things that are completely unconnected with what they already know is likely to end in failure.

I can’t see how, in point 2, reducing the amount of novelty doesn’t solve the problem. If I were to attend a lecture delivered in Portuguese about the Higgs boson, the amount of novelty involved would be so high (I know only one Portuguese word and little about sub-atomic physics) that I would be likely to learn nothing. If, however, it was a Royal Institution Christmas Lecture in English for a general audience, the amount of novelty would be considerably reduced and I would probably learn a good deal. Exactly how much would depend on my prior knowledge about sub-atomic physics.

I do agree with Egan’s point 3, in the sense that taking this principle to extremes would result in an impoverished curriculum, but that’s a problem with implementation rather than the principle itself.

It’s ironic that Egan describes point 4 as the ‘four-legged fly’ item, since work on brain plasticity suggests that gradual content association, via the formation of new synapses, is precisely the way in which human beings do expand their understanding of the world. If we come across information with massive novel content, we tend to simply ignore it because of the time required to gather the additional information we need in order to make sense of it.
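One crude way to picture that (an assumption of mine about how ‘gradual content association’ might be modelled, not anything Egan or the plasticity literature specifies): a new item sticks only if enough of what it connects to is already known, so information with massive novel content simply fails to attach.

```python
# A deliberately crude sketch, on my own assumptions: retention depends on how
# much of the new item's context is already known. The items and threshold are
# hypothetical, chosen only to illustrate gradual content association.
known = {"rabbit", "garden", "jacket"}  # existing knowledge (hypothetical)

def encounter(new_item, related_items, threshold=0.5):
    """Keep new_item only if enough of its related items are already known."""
    overlap = len(known & related_items) / len(related_items)
    if overlap >= threshold:
        known.add(new_item)  # a new association formed onto old content
        return f"learned {new_item!r}"
    return f"ignored {new_item!r} (too novel: {overlap:.0%} familiar)"

print(encounter("hutch", {"rabbit", "garden"}))         # links to prior knowledge
print(encounter("boson", {"lepton", "quark", "spin"}))  # massively novel: ignored
```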

a traditional-liberal education

Egan’s critique of Spencer’s ideas is a pretty comprehensive one. For him, Spencer’s ideas are like the original version of the curate’s egg – not that parts of them are excellent, but that they are totally inedible. Egan says “I have already indicated that I consider the traditional-liberal principles equally as problematic as the progressive beliefs I am criticising” (p.54), but I couldn’t see where he’d actually done so.

A number of times Egan refers with apparent approval to some of the features commonly associated with a traditional-liberal education. He’s clearly uneasy about framing education in utilitarian terms, as Spencer did, but then Spencer was criticising a curriculum that was based on tradition and “the ornamental culture of the leisured class”. In the section entitled “What is wrong with Spencer’s curriculum?” (p.125ff) Egan highlights Spencer’s dismissal of grammar, history, Latin and the ‘useless arts’. In doing so, I think he has again overlooked the situation that Spencer was addressing.

As I understand it, the reason that Greek and Latin were originally considered essential to education was that for centuries in Europe, ancient Greek and Latin texts were the principal source of knowledge, as well as Latin being the lingua franca. From the Greek and Latin texts, you could get a broad understanding of what was known about literature, history, geography, theology, science, mathematics, politics, economics and law. If they understood what worked and what went wrong in Greek and Roman civilisations, boys from well-to-do families – the future movers and shakers – would be less likely to repeat the errors of previous generations. Over time, as contemporary knowledge increased and books were more frequently written in the vernacular, the need to learn Greek and Latin became less important; it persisted often because it was traditional, rather than because it was useful.

I’ve noticed that the loudest cries for reform of the education system in the English-speaking world have come from those with a background in subjects that involve high levels of abstraction: English, history, mathematics, philosophy. Egan’s special interest is in imaginative education. I’ve heard hardly a peep from scientists, geographers or PE teachers. It could be that highly abstracted subjects have been victims of the worst excesses of progressivism – or that in highly abstracted subjects there’s simply more scope for differences of opinion about subject content. I can understand why Egan is wary of utility being the guiding principle for education; it’s too open to exploitation by business and politicians, and education needs to do more than train an efficient workforce. But I’m not entirely clear what Egan wants to see in its place. He appears to see education as primarily for cultural purposes: so that we can all participate in what Oakeshott called ‘the conversation of mankind’, a concept mentioned by other new traditionalists, such as Robert Peal and Toby Young. Egan sees a good education as needing to include grammar, Latin and history because they are pieces of the complex image that makes up ‘what we expect in an educated person’ (p.160). I can see what he’s getting at, but this guiding principle for education is demonstrably unhelpful. We’ve been arguing about it at least since Spencer’s day, and have yet to reach a consensus.

In my view, education isn’t about a cultural conversation or about utility, although it involves both. But it should be useful. The more people who get a good knowledge and understanding of all aspects of how the world works, the more likely our communities are to achieve a good, sustainable standard of living and a decent quality of life. We need our education system to produce people who make the world a better place, not just people who can talk about it.

the curate’s egg, the emperor’s new clothes and Aristotle’s flies: getting it wrong from the beginning

Alongside a recommendation to read Robert Peal’s Progressively Worse came another: to read Kieran Egan’s Getting It Wrong From The Beginning: Our Progressive Inheritance from Herbert Spencer, John Dewey and Jean Piaget. Egan’s book is in a different league to Peal’s; it’s scholarly, properly referenced and published by a mainstream publisher, not a think-tank. Although it appears to be about Spencer, Dewey and Piaget, Egan’s critique is aimed almost solely at Spencer; Piaget’s ideas are addressed, but Dewey hardly gets a look in. During the first chapter – a historical sketch of Spencer and his ideas – Egan and I got along swimmingly. Before I read this book my knowledge of Spencer would have just about filled a postage stamp (I knew he was a Victorian polymath who coined the term ‘survival of the fittest’), so I found Egan’s account of Spencer’s influence illuminating. But once his analysis of Spencer’s ideas got going, we began to part company.

My first problem with Egan’s analysis was that I felt he was unduly hard on Spencer. There is a sense in which he has to be, because he lays at Spencer’s feet the blame for most of the ills of the education systems in the English-speaking world. Spencer is portrayed as someone who dazzled the 19th century public in the UK and America with his apparently brilliant ideas, which were then rapidly discredited towards the end of his life; soon after his death he was forgotten. Yet Spencer, according to Egan, laid the foundation for the progressive ideas that form the basis for the education system in the US and the UK. That poses a problem for Egan, because he then has to explain why, if Spencer’s ideas were so bad that academia and the public dismissed them, in education they have not only persisted but flourished in the century since his death.

misleading metaphors

Egan tackles this conundrum by appealing to three metaphors; the curate’s egg, the emperor’s new clothes and Aristotle’s flies. The curate’s egg – ‘good in parts’ – is often used to describe something of variable quality, but Egan refers to the original Punch cartoon in which the curate, faced with a rotten egg for breakfast, tries to be polite to his host the bishop. The emperor’s new clothes require no explanation. In other words, Egan explains the proliferation of Spencer’s educational theories as partly down to deference to someone who was once considered a great thinker, and partly to people continuing to believe something despite the evidence of their own eyes.

Bishop: “I’m afraid you’ve got a bad egg, Mr Jones”; Curate: “Oh, no, my Lord, I assure you that parts of it are excellent!”

Aristotle’s flies

The Aristotle’s flies metaphor does require more explanation. Egan claims: “Aristotle’s spells are hard to break. In a careless moment he wrote that flies have four legs. Despite the easy evidence of anyone’s eyes, his magisterial authority ensured that this ‘fact’ was repeated in natural history texts for more than a thousand years” (p.42). In other words, Spencer’s ideas, derived ultimately from Aristotle’s, have been perpetuated, like Aristotle’s, because of ‘magisterial authority’ – authority which Egan claims Spencer lost.

It’s certainly true that untruths can be perpetuated for many years through lazy copying from one text to another. But these are usually untruths that are hard to disprove – the causes of fever, the existence of the Loch Ness monster or, in Aristotle’s case, the idea that the brain cooled the blood – not untruths that could be dispelled by a few seconds’ observation by a child capable of counting to six. Aristotle’s alleged ‘careless moment’ caught my attention because ‘legs’ pose a particular challenge for comparative anatomists. Aristotle was interested in comparative anatomy and was a keen and careful observer of nature. It’s unlikely that he would have had such a ‘careless moment’; much more likely that the error was due to a mistranslation.

The challenge of ‘legs’ is that in nature they have a tendency over time to morph into other things – arms in humans and wings in birds, for example. Anyone who has observed a housefly for a few seconds will know that houseflies frequently use their first pair of legs for grooming – in other words, as arms. I thought it quite possible that Aristotle had categorised the first pair of fly legs as ‘arms’, so I looked for the reference. Egan doesn’t give it, but the story about the four-legged fly idea being perpetuated for a millennium is a popular one. In 2005 it appeared in an article in the journal European Molecular Biology Organisation Reports and was subsequently challenged in 2008 in a zoology blog.

[image: male mayfly]

Aristotle’s observation is in a passage on animal locomotion, and the word for ‘fly’ – ephemeron – is translated by D’Arcy Thompson as ‘dayfly’, also commonly known as the mayfly (order Ephemeroptera, named for their short adult lives). In mayflies the first pair of legs is enlarged and often held forward off the ground, as the males use them for grasping the female during mating. So the fly walks on four legs – the point Aristotle was making. Egan’s book was published in 2002, before this critique was written, but even before the advent of the internet it wouldn’t have been difficult to check Aristotle’s text – in Greek or in translation.

Spencer in context

I felt also that much of Egan’s criticism of Spencer was made from the vantage point of hindsight. Spencer was formulating his ideas whilst arguments about germ theory were ongoing, before the publication of On the Origin of Species, before the American Civil War, before all men (never mind women) were permitted to vote in the UK or the US, before state education was implemented in England, and a century before the discovery of the structure of DNA. His ideas were widely criticised by his contemporaries, but that doesn’t mean he was wrong about everything.

It’s also important to set Spencer’s educational ideas in context. He was writing in an era when mass education systems were in their infancy and schools were often significantly under-resourced. Textbooks and exercise books were unaffordable not just for most families, but for many schools. Consequently, schools frequently resorted to the age-old practice of getting children to memorise not just the alphabet and multiplication tables, but everything they were taught. Text committed to memory could be the only access to books that many people might get during their lifetime. If the children didn’t have books, they couldn’t take material home to learn, so had to do it in school. Memorisation takes time, so teachers were faced with a time constraint and a dilemma – whether to prioritise remembering or explaining. Not surprisingly, memorisation tended to win, on the grounds that understanding could always come later. Consequently, many children could recite a lot of text, but hadn’t got a clue what it meant. For many, having at least learned to read and write at school, their education actually began after they left school, once they had earned enough money to buy books themselves or could borrow them from libraries. This is the rote learning referred to as ‘vicious’ by early progressive educators.

The sudden demand for teachers when mass education systems were first rolled out meant that schools had to take whatever teachers they could get. Many had experience but no training, and would simply expect children from very different backgrounds to those they had previously taught to learn the same material – reciting the grammatical rules of standard English, for example, when the children knew only their local dialect with its different pronunciation, vocabulary and grammatical structure. For children in other parts of the UK it was literally a different language. The history of England, with its list of Kings and Queens, was essentially meaningless to children whose only prior access to their nation’s history was a few stories passed down orally.

This was why Spencer placed so much emphasis on the principles of simple to complex, concrete to abstract and known to unknown. Without those starting points, many children’s experience of education was one of bobbing about in a sea of incomprehension and getting more lost as time went by – and Spencer was thinking of middle-class children, not working-class ones, for whom the challenge would have been greater. The problem with Spencer’s ideas was that they were extended beyond what George Kelly calls their range of convenience; they were taken to unnecessary extremes that were indeed at risk of insulting children’s intelligence.

In the next post, I take a more detailed look at Egan’s critique of Spencer’s ideas.

progressively worse

‘Let the data speak for themselves’ is a principle applied by researchers in a wide range of knowledge domains, from particle physics through molecular biology to sociology and economics. The converse would be ‘make the data say what you want them to say’, a human tendency that different knowledge domains have developed various ways of counteracting, such as experimental design, statistical analysis, peer review and being explicit about one’s own epistemological framework.

Cognitive science has explored several of the ways in which our evaluation of data can be flawed; Kahneman, Slovic & Tversky (1982), for example, examine in detail some of the errors and biases inherent in human reasoning. Findings from cognitive science have been embraced with enthusiasm by the new traditionalists, but they appear to have applied the findings only to teaching and learning, not to the thinking of the people who design education systems or pedagogical methods – or of those who write books about those things. In Progressively Worse, Robert Peal succumbs to some of those errors and biases – notably the oversimplification of complex phenomena, confirmation bias and attribution errors – and as a consequence he draws conclusions that are open to question.

The ‘furious debate’

Peal opens Progressively Worse with a question he says has been the subject of half a century of ‘furious debate’: ‘how should children learn?’ He exemplifies the debate as a series of dichotomies – an authoritative teacher vs independent learning, knowledge vs skills, etc. – representing differences between traditional and progressive educational approaches. He then provides a historical overview of changes to the British (or, more accurately, English – they do things differently in Scotland) education system between 1960 and 2010, notes their impact on pedagogy, and concludes that only freedom to innovate will rescue the country from the ‘damaging doctrine’ of progressive education to which the educational establishment is firmly wedded (p.1).

Progressive or traditional

For Peal, progressive education has four core themes:

• education should be child-centred
• knowledge is not central to education
• strict discipline and moral education are oppressive
• socio-economic background dictates success (pp.5-7).

He’s not explicit about the core themes of traditional education, but the features he mentions include:

• learning from the wisdom of an authoritative teacher
• an academic curriculum
• a structure of rewards and examinations
• sanctions for misbehaving and not working (p.1).

He also gives favourable mention to:

• subject divisions
• the house system
• smart blazers, badges and ties
• lots of sport
• academic streaming
• prize-giving
• prefects
• pupil duties
• short hair
• silent study
• homework
• testing
• times tables
• grammar, spelling and punctuation
• school song, colours and motto
• whole-class teaching, explanation and questioning
• the difference between right and wrong, good and evil
• class rankings

I claimed that Peal’s analysis of the English education system is subject to three principal cognitive errors or biases. Here are some examples:

Oversimplification

For the new traditionalists, cognitive load theory – derived from the fact that working memory has limited capacity – has important implications for pedagogy. But we don’t seek to minimise cognitive load only when learning new concepts in school; we also do it when handling complex ideas. On a day-to-day level, oversimplification can be advantageous because it enables rapid, flexible thinking; when devising public policy it can be catastrophic, because the detail of policy is often as important as the overarching principle.

Education is a relatively simple idea in principle, but in practice it’s fiendishly complex, involving political and philosophical frameworks, socio-economic factors, systems pressures, teacher recruitment, training and practice, and children’s health and development. Categorising education as ‘progressive’ or ‘traditional’ doesn’t make it any simpler. Each of Peal’s four core themes of progressive education is complex and could be decomposed into many elements. In classrooms, the elements that make up progressive education are frequently interspersed with elements of traditional education, so although I agree with him that some elements of progressive education taken to extremes have had a damaging influence, it’s by no means clear that they have been the only causes of damage, nor that other elements of progressive education have not been beneficial.

Peal backs up his claim that the British education system is experiencing ‘enduring educational failure’ (p.4) with numbers. He says the ‘bare figures are hard to ignore’. Indeed they are; what he doesn’t seem to realise is that ‘bare figures’ are also sometimes ambiguous. For example, the UK coming a third of the way down the PISA rankings is not an indication of educational ‘failure’ – unless your definition of success is a pretty narrow one. And the fact that in all countries except the UK the literacy and numeracy levels of 16-24 year-olds are better than those of 55-65 year-olds might be telling us more about the resilience of the UK education system in the post-war period than about current literacy standards in other countries. ‘Bare figures’ rarely tell the whole story.
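
To see how a ‘bare figure’ can mislead, here’s a toy sketch – the countries and scores below are invented for illustration, not real PISA data – showing how a mid-table ranking can coexist with a trivial gap behind the leaders:

```python
# Invented scores for twelve hypothetical countries – not real PISA data.
scores = {"A": 524, "B": 523, "C": 521, "D": 520, "E": 519, "F": 518,
          "G": 517, "H": 516, "I": 515, "J": 514, "K": 513, "L": 512}

# Rank countries by score, highest first.
ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
top_score = ranking[0][1]

for rank, (country, score) in enumerate(ranking, start=1):
    print(f"{rank:2d}. {country}  score={score}  gap to leader={top_score - score}")

# Country D sits 4th of 12 – a third of the way down the table – yet is
# only 4 points behind the leader, a gap that could easily fall within a
# survey's margin of error. The rank alone (the 'bare figure') hides this.
```

The rank is accurate, but on its own it says nothing about whether the differences behind it are meaningful.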

Confirmation bias

Another concept from cognitive science important to the new traditionalists is the schema – the way related information is organised in long-term memory. Schemata are seen as useful because they aid recall. But our own schemata aren’t always an accurate representation of the real world. Peal overlooks the role schemata play in confirmation bias; we tend to construe evidence that confirms the structure of one of our own existing schemata as having higher validity than evidence that contradicts it, even if the evidence overall shows that our schema is inaccurate.

Research usually begins with a carefully worded research question; the question has to be one that can have an answer, and the way the question is framed will determine what data are gathered and how they are analysed to provide that answer. The data don’t always confirm researchers’ expectations; what the data say is sometimes surprising and occasionally counterintuitive. Peal opens with the question ‘how should children learn?’, but as framed it’s not a question that could be answered using data, because it’s couched as an imperative. That’s not an issue for Peal, because he doesn’t use his data to answer the question; he starts with his answer and marshals the data to support it. He’s entitled to do this, of course. Whether it’s an appropriate way to tackle an important area of public policy is another matter. The big pitfall in this approach is that it’s all too easy to overlook data that don’t confirm one’s thesis, and Peal overlooks data relating to the effectiveness of traditional educational methods.

Peal’s focus on the history of progressive education during the last 50 years means he doesn’t cover the history of traditional education in the preceding centuries. If Peal’s account of British education is the only one you’ve read, you could be forgiven for thinking that traditional education was getting along just fine until the pesky progressives arrived with their political ideology that happened to gain traction because of the counter-cultural zeitgeist in the 1960s and 1970s. But other accounts paint a different picture.

Traditional education has had plenty of opportunities to demonstrate its effectiveness; Prussia had introduced a centralised, compulsory education system by the late 18th century – one that was widely emulated. But traditional methods weren’t without their critics. It wasn’t uncommon for a school to consist of one class with one teacher in charge. Children (sometimes hundreds) were seated in order of age on benches (‘forms’) and learned by rote not just multiplication tables and the alphabet, but entire lessons, which they then recited to older children or ‘monitors’ (Cubberley, 1920). This approach derived from the catechetical method used for centuries by religious groups, and was understandable if funding was tight and pupils didn’t have access to books. But a common complaint about rote learning was that children might memorise the lessons but often didn’t understand them.

Another problem was posed by the children with learning difficulties and disabilities who enrolled in schools when education became compulsory. The Warnock committee reported teachers being surprised by the numbers. In England, such children were often hived off into special schools where those deemed ‘educable’ were trained for work. In France, by contrast, Braille, Itard and Séguin developed ways of supporting the learning of children with sensory impairments, and Binet was commissioned to develop an assessment for learning difficulties that eventually evolved into the Stanford-Binet Intelligence Scale.

Corporal punishment for misdemeanours or failure to learn ‘lessons’ wasn’t uncommon either, especially after payment by results was introduced through ‘Lowe’s code’ in 1862. In The Lost Elementary Schools of Victorian England, Philip Gardner draws attention to the reasons why ‘dame schools’ – small schools in private houses – persisted up until WW2; these included meeting the needs of children terrified of corporal punishment, and of parents sceptical of the quality of teaching in state schools – scepticism often the result of their own experiences.

Not all schools were like this, of course, and I don’t imagine for a moment that that’s what the new traditionalists would advocate. But it’s important to bear in mind that just as progressive methods taken to extremes can damage children’s educational prospects, traditional methods taken to extremes can do the same. It’s difficult to make an objective comparison of the outcomes of traditional and progressive education in the early days of the English state education system because comparable data aren’t available for the period prior to WW2, but it’s clear that the drawbacks of rote learning, whole-class teaching and teacher authority made a significant contribution to progressive educational ideas being well received by a generation of adults whose personal experience of school was often negative.

Attribution errors

Not only is the structure of some things complex, but their causes can be too. Confirmation bias can lead to some causes being considered but others being prematurely dismissed – in other words, to wrong causal attributions being made. One common attribution error is to assume that a positive correlation between two factors indicates that one causes the other.
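
The classic counterexample is a hidden common cause. As a minimal sketch – the variables and numbers are hypothetical, purely for illustration – the following simulation shows two outcomes that correlate strongly even though neither causes the other:

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A hidden common cause (hypothetical – think of something like household
# income in an education dataset).
z = [random.gauss(0, 1) for _ in range(10_000)]

# Two outcomes that each depend on z but have no effect on each other.
x = [zi + random.gauss(0, 1) for zi in z]  # e.g. exam results
y = [zi + random.gauss(0, 1) for zi in z]  # e.g. attendance

print(f"corr(x, y) = {pearson(x, y):.2f}")  # about 0.5, yet x doesn't cause y
```

Concluding from the printed correlation that x causes y (or vice versa) would be exactly the attribution error described above; the correlation is real, but it is produced entirely by z.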

Peal attributes the origins of progressive education to Rousseau and the Romantic movement, presumably following ED Hirsch, a former professor of English literature whose specialism was the Romantic poets and who re-frames the nature/nurture debate as Romantic/Classical. Peal also claims that “progressive education seeks to apply political principles such as individual freedom and an aversion to authority to the realm of education” (p.4), supporting the new traditionalists’ view of progressive education as ideologically motivated. Although the pedagogical methods advocated by Pestalozzi, Froebel, Montessori and Dewey resemble Rousseau’s philosophy, a closer look at their ideas suggests his influence was limited. Pestalozzi became involved in developing Rousseau’s ideas when Rousseau’s books were banned in Switzerland. Pestalozzi’s ideas were in turn developed by Herbart, a philosopher intrigued by perception and consciousness – topics that preoccupied early psychologists such as William James, a significant influence on John Dewey. Froebel was a pupil of Pestalozzi interested in early learning who set up the original Kindergärten. Maria Montessori trained as a doctor; she applied the findings of Itard and Séguin, who worked with deaf-mute children, to education in general. The founders of progressive education were influenced as much by psychology and medicine as by the Romantics.

Peal doesn’t appear to have considered the possibility of convergence – that people with very different worldviews (Romantics, Marxists, social reformers, educators, those working with children with disabilities) might espouse similar educational approaches for very different reasons – or of divergence – that they might adopt some aspects of progressive education but not others.

Peal and traditional education

Peal’s model of the education system certainly fits his data, but that’s not surprising, since he explicitly begins with a model and selects data to fit it. Although he implies that he would like to see a return to traditional approaches, he doesn’t say exactly what they would look like. Several of the characteristics of traditional education Peal refers to are the superficial trappings of long-established independent schools – bells, blazers and haircuts, for example. And although some of the other features he mentions might have educational impacts, he doesn’t cite any evidence to show what those impacts might be.

I suspect that Peal has fallen into the trap of assuming that because long-established independent schools have a good track record of providing a high-quality academic education, it follows that if all schools emulated them in all respects, all students would get a good education. What this view overlooks is that independent schools are, and have always been, selective – even those set up specifically to provide an education for children from poor families. Providing a good academic education to an intellectually able, academically inclined child from a family motivated enough to take on additional work to afford the school uniform is a relatively straightforward task. Providing the same for a child with learning difficulties, interested only in football and motor mechanics, whose dysfunctional family lives in poverty in a neighbourhood with a high crime rate, is significantly more challenging – and might not even be appropriate.

The way forward

The new traditionalists argue that the problems with the education system are the result of a ‘hands off’ approach by government and the educational establishment being allowed to get on with it. Peal depicts government, from Jim Callaghan’s administration onward, as struggling (and failing) to mitigate the worst excesses of progressive education propagated by the educational establishment. That’s a popular view, but not necessarily an accurate one, and Peal’s data don’t support that conclusion. The data could equally well indicate that the more government intervenes in education, the worse things get. The post-war period has witnessed a long series of expensive disasters since government got more ‘hands on’ with education: the social divisiveness of the 11+; pressure on schools to adopt particular pedagogical approaches; enforced comprehensivisation; a change to a three-tier system followed by a change back to a two-tier one; a constantly changing compulsory national curriculum; standardised testing focused on short-term rather than long-term outcomes; a local inspectorate replaced by a centralised one; accountability to local people replaced by accountability to central government; a constant stream of ‘initiatives’; constantly changing legislation and regulation; and increasing micro-management.

A state education system has to be able to provide a suitable education for all children – a challenging task for teachers. The most effective approach found to date for occupations required to apply expertise to highly variable situations is the professional one. Although ‘professional’ is often used simply to denote good practice, it has a more specific meaning for occupations: professionals are practitioners who have acquired high-level expertise to the point where they are authorised to practise without supervision. Regulation and accountability come via professional bodies and independent adjudicators. This model, used by practitioners ranging from doctors, lawyers and architects to builders and landscape gardeners, isn’t foolproof, but it has worked well for centuries.

Teaching is an obvious candidate for professional status, but teachers in England have never been treated as true professionals. Initial teacher training has often been shortened, or set aside entirely, in times of economic downturn or shortages of teachers in specific subject areas, and it’s debatable whether a PGCE provides a sufficient grounding for subject-specialist secondary teachers, never mind for the range of skills required in primary education. Increasing micromanagement, by local authorities and more recently by central government, has undermined the professional status of teachers further.

I see no evidence to suggest that the university lecturers and researchers, civil servants, local authorities, school inspectors, teaching unions, educational psychologists and teachers who make up the so-called ‘educational establishment’ are any less able than government to design a workable and effective education system – indeed, by Peal’s own reckoning, during the period when they actually did that, the education system functioned much better.

Despite providing some useful information about recent educational policy, Peal’s strategy of starting with a belief and using evidence to support it is unhelpful and possibly counterproductive because it overlooks alternative explanations for why there might be problems with the English education system. This isn’t the kind of evidence-based approach to policy that government needs to use. Let the data speak for themselves.

References

Cubberley, E.P. (1920). The History of Education. Cambridge, MA: Riverside Press.
Gardner, P. (1984). The Lost Elementary Schools of Victorian England: The People’s Education. London: Routledge.
Kahneman, D., Slovic, P. & Tversky, A. (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Peal, R. (2014). Progressively Worse: The Burden of Bad Ideas in British Schools. London: Civitas.

the new traditionalists: there’s more to d.i. than meets the eye, too

A few years ago, mystified by the way my son’s school was tackling his reading difficulties, I joined the TES forum and discovered I’d missed The Reading Wars. Well, not quite. They began before I started school and show no sign of ending any time soon. But I’d been blissfully unaware that they’d been raging around me.

On one side in the Reading Wars are advocates of a ‘whole language’ approach to learning to read – focusing on reading strategies and meaning – and on the other are advocates of teaching reading using phonics. Phonics advocates see their approach as evidence-based, and frequently refer to the whole language approach (using ‘mixed methods’) as based on ideology.

mixed methods

Most members of my family learned to read successfully using mixed methods. I was trained to teach reading using mixed methods, and all the children I taught learned to read. My son, taught using synthetic phonics (SP), struggled with reading and eventually figured it out for himself using whole-word recognition. Hence my initial scepticism about SP. I’ve since changed my mind, having discovered that my son’s SP programme wasn’t properly implemented, and after learning more about how the process of reading works. If I’d relied only on the scientific evidence cited as supporting SP, though, I wouldn’t have been convinced: although it clearly supports SP as an approach to decoding, the impact on literacy in general isn’t so clear-cut.

ideology

I’ve also found it difficult to pin down the ideology purported to be at the root of whole language approaches. An ideology is a set of abstract ideas or values based on beliefs rather than on evidence, but the reasons given for the use of mixed methods when I was learning to read and when I was being trained to teach reading were pragmatic ones. In both instances, mixed methods were advocated explicitly because (analytic) phonics alone hadn’t been effective for some children, and children had been observed to use several different strategies during reading acquisition.

The nearest I’ve got to identifying an ideology is a pair of ideas: that language frames and informs people’s worldviews, and that social and economic power plays a significant part in determining who teaches what to whom. The implication is that teachers, schools, school boards, local authorities or government don’t have a right to impose on children the way they construct their knowledge. To me, the whole language position looks more like a theoretical framework than an ideology, even if the theory is debatable.

the Teaching Wars

The Reading Wars appear to be but a series of battles in a much bigger war over what’s often referred to as traditional vs progressive teaching methods. The new traditionalists frequently characterise the Teaching Wars along the same lines as SP proponents characterise the Reading Wars; claiming that traditional methods are supported by scientific evidence, but ideology is the driving force behind progressive methods. Even a cursory examination of this claim suggests it’s a caricature of the situation rather than an accurate summary.

The progressives’ ideology

Rousseau is often cited as the originator of progressive education, and indeed progressive methods sometimes resemble the approach he advocated. However, many key figures in progressive education, such as Herbert Spencer, John Dewey and Jean Piaget, derived their methods from what was then state-of-the-art scientific theory and empirical observation, not from 18th century Romanticism.

The traditionalists’ scientific evidence

The evidence cited by the new traditionalists appears to consist of a handful of findings from cognitive psychology and information science. They’re important findings, they should form part of teacher training and they might have transformed the practice of some teachers, but teaching and learning involve more than cognition. Children’s developing brains and bodies, their emotional and social background, the social, economic and political factors shaping the expectations on teachers and students in schools, and the philosophical frameworks of everybody involved suggest that evidence from many other scientific fields should also be informing educational theory, and that it might be risky to apply a few findings out of context.

I can understand the new traditionalists’ frustration. One has to ask why education theory hasn’t kept up to date with research in many fields that are directly relevant to teaching, learning, child development and the structure of the education system itself. However, dissatisfaction with progressive methods appears to originate, not so much with the methods themselves, as with the content of the curriculum and with progressive methods being taken to extremes.

keeping it simple

The limited capacity of working memory is the feature of human cognitive architecture that underpins Kirschner, Sweller and Clark’s argument in favour of direct instruction. One outcome of that limitation is a human tendency to oversimplify information by focusing on the prototypical features of phenomena – a tendency that often leads to inaccurate stereotyping. Kirschner, Sweller and Clark present their hypothesis in terms of a dispute between two ‘sides’: one advocating minimal guidance, the other a full explanation of concepts, procedures and strategies (p.75).

Although it’s appropriate in experimental work to use extreme examples of these approaches in order to test a hypothesis, the authors themselves point out that in a classroom setting most teachers using progressive methods provide students with considerable guidance anyway (p.79). Their conclusion that the most effective way to teach novices is through “direct, strong, instructional guidance” might be valid, but in respect of the oversimplified way they frame the dispute, they appear to have fallen victim to the very limitations of human cognitive architecture to which they draw our attention.

The presentation of the Teaching Wars in this polarised manner goes some way to explaining why direct instruction seems like such a big deal for the new traditionalists. Direct instruction shouldn’t be confused with Direct Instruction (capitalised) – the scripted teaching used in Engelmann & Becker’s DISTAR programme – although a recent BBC Radio 4 programme suggests that might be exactly what’s happening in some quarters.

direct instruction

The Radio 4 programme How do children learn history? is presented by Adam Smith, a senior lecturer in history at University College London, who has blogged about the programme here. He’s carefully non-committal about the methods he describes – it is the BBC after all.

A frequent complaint about the way the current national curriculum approaches history concerns what’s included, what’s excluded, what’s emphasised and what’s not. At home, we’ve had to do some work on timelines because, although both my children have been required to put themselves into the shoes of various characters throughout history (an exercise my son has grown to loathe), neither of them knew how the Ancient Egyptians, Greeks, Romans, Vikings or Victorians related to each other – a pretty basic historical concept. But those are curriculum issues rather than methods issues. As well as providing a background to the history curriculum debate, the broadcast featured two lessons that used different pedagogical approaches.

During an ‘inquiry’ lesson on Vikings, presented as a good example of current practice, groups of children were asked to gather information about different aspects of Viking life. A ‘direct instruction’ lesson on Greek religious beliefs, by contrast, involved the teacher reading from a textbook whilst the children followed the text in their own books with their finger, then discussed the text and answered comprehension questions on it. The highlight of the lesson appeared to be the inclusion of an exclamation mark in the text.

It’s possible that the way the programme was edited oversimplified the lesson on Greek religious beliefs, or that the children in the Viking lesson were older than those in the Greek lesson and better able to cope with ‘inquiry’, but there are clearly some pitfalls awaiting those who learn by relying on the content of a single textbook. The first is that whoever publishes the textbook controls the knowledge – a powerful position to be in. The second is that you don’t need much training to be able to read from a textbook or lead a discussion about what’s in it – and that has implications for who is going to be teaching our children. The third is how children will learn to question what they’re told; I’m not trying to undermine discipline in the classroom, just pointing out that textbooks can be, and sometimes are, wrong. The sooner children learn that authority lies in evidence rather than in authority figures, the better. Lastly, as a primary school pupil I would have found following a teacher reading from a textbook tedious in the extreme. As a secondary school pupil, it was a teacher reading from a textbook for twenty minutes that clinched my decision to drop history as soon as possible. I don’t think I’d be alone in that.

who are the new traditionalists?

The Greek religions lesson was part of a project funded by the Education Endowment Foundation (EEF), a charity developed by the Sutton Trust and the Impetus Trust in 2011 with a grant from the DfE. The EEF’s remit is to fund research into interventions aimed at improving the attainment of pupils receiving free school meals. The intervention featured in How do children learn history? is being implemented in Future Academies in central London. I think the project might be the one outlined here, although this one is evaluating the use of Hirsch’s Core Knowledge framework in literacy, rather than in history, which might explain the focus on extracting meaning from the text.

My first impression of the new traditionalists was that they were a group of teachers disillusioned by the ineffectiveness of the pedagogical methods they had been trained to use, who’d stumbled across some principles of cognitive science they’d found invaluable and were understandably keen to publicise them. Several of the teachers are Teach First graduates and work in academies or free schools – not surprising if they want freedom to innovate. They also want to see pedagogical methods rigorously evaluated, and the most effective ones implemented in schools. But those teachers aren’t the only parties involved.

Religious groups have welcomed the opportunities to open faith schools and develop their own curricula – a venture supported by previous and current governments, despite past complications resulting from significant numbers of schools in England being run by churches, and despite the current investigation into the alleged ‘Trojan Horse’ operation in Birmingham.

Future, the sponsors of Future Academies and the Curriculum Centre, was founded by John and Caroline Nash, a former private equity specialist and stockbroker respectively. Both are reported to have made significant donations to the Conservative party. John Nash was appointed Parliamentary Under Secretary of State for Schools in January 2013. The Nashes are co-chairs of the board of governors of Pimlico Academy and Caroline Nash is chair of The Curriculum Centre. All four trustees of the Future group are from the finance industry.

Many well-established independent schools, notably residential schools for children with special educational needs and disabilities, are now controlled by finance companies. This isn’t modern philanthropy in action; the profits made from selling on the school chains, the magnitude of the fees charged to local authorities, and the fact that the schools are described as an ‘investment’ suggest that another motivation is at work.

A number of textbook publishers got some free product placement in a recent speech by Elizabeth Truss, currently Parliamentary Under Secretary of State for Education and Childcare.

Educational reform might have teachers in the vanguard, but there appear to be some powerful bodies with religious, political and financial interests who might want to ensure they benefit from the outcomes, and have a say in what those outcomes are. The new traditionalist teachers might indeed be on to something with their focus on direct instruction, but if direct instruction boils down in practice to teachers using scripted texts or reading from textbooks, they will find plenty of other players willing to jump on the bandwagon and cash in on this simplistic and risky approach to educating the country’s most vulnerable children. Oversimplification can lead to unwanted complications.