improving reading from Clackmannanshire to West Dunbartonshire

In the 1990s, two different studies began tracking the outcomes of reading interventions in Scottish schools. One, run by Joyce Watson and Rhona Johnston, then at the University of St Andrews, started in 1992/3 in schools in Clackmannanshire, which hugs the River Forth just to the east of Stirling. The other began in 1998 in West Dunbartonshire, with the Clyde on one side and Loch Lomond on the other, west of Glasgow. It was led by Tommy MacKay, an educational psychologist with West Dunbartonshire Council, who also lectured in psychology at the University of Strathclyde.

I’ve blogged about the Clackmannanshire study in more detail here. It was an experiment involving 13 schools and 300 children divided into three groups, taught to read using synthetic phonics, analytic phonics or analytic phonics plus phonemic awareness. The researchers measured and compared the outcomes.

The West Dunbartonshire study had a more complex design involving five different studies and ten strands of intervention over ten years in all pre-schools and primary schools in the local authority area (48 schools and 60 000 children). As in Clackmannanshire, analytic phonics was used as a control for the synthetic phonics experimental group. The study also had an explicit aim: to eradicate functional illiteracy in school leavers in West Dunbartonshire. It very nearly succeeded: Achieving the Vision, the final report, shows that by the time the study finished in 2007 only three children were deemed functionally illiterate. (Thanks to @SaraJPeden on Twitter for the link.)

Five studies, ten strands of intervention

The main study was a multiple-component intervention using a cross-lagged design. The four supporting studies were:

  • Synthetic phonics study (18 schools)
  • Attitudes study (24 children from earlier RCT)
  • Declaration study (12 nurseries & primaries in another education authority area)
  • Individual support study (24 secondary pupils).

The West Dunbartonshire study was unusual in that it addressed multiple factors already known to impact on reading attainment, but that are often sidelined in interventions focusing on the mechanics of reading. The ten strands were (p.14):

Strand 1: Phonological awareness and the alphabet

Strand 2: A strong and structured phonics emphasis

Strand 3: Extra classroom help in the early years

Strand 4: Fostering a ‘literacy environment’ in school and community

Strand 5: Raising teacher awareness through focused assessment

Strand 6: Increased time spent on key aspects of reading

Strand 7: Identification of and support for children who are failing

Strand 8: Lessons from research in interactive learning

Strand 9: Home support for encouraging literacy

Strand 10: Changing attitudes, values and expectations

Another unusual feature was that the researchers were looking not only for statistically significant improvements in reading, but for improvements that were significant in a wider sense:

“statistical significance must be viewed in terms of wider questions that were primarily social, cultural and political rather than scientific – questions about whether lives were being changed as a result of the intervention; questions about whether children would leave school with the skills needed for a successful career in a knowledge society; questions about whether ‘significant’ results actually meant significant to the participants in the research or only to the researcher.” (p.16)

The researchers also recognized the importance of ownership of the project throughout the local community, everyone “from the leader of the Council to the parents and the children themselves identifying with it and owning it as their own project”. (p.7)

In addition, they were aware that a project following students through their entire school career would need to survive inevitable organisational challenges. Despite the fact that West Dunbartonshire was the second poorest council in Scotland, the local authority committed to continue funding the project:

The intervention had to continue and to succeed through virtually every major change or turmoil taking place in its midst – including a total restructuring of the educational directorate, together with significant changes in the Council. (p.46)

Results

The results won’t surprise anyone familiar with the impact of synthetic phonics; there were significant improvements in reading ability in children in the experimental group. What was remarkable was the impact of the programme on children who didn’t participate. Raw scores for pre-school assessments improved noticeably between 1997 and 2006, and there were many reports from parents that the intervention had stimulated interest in reading in older siblings.

One of the most striking results was that at the end of the study, there were only three pupils in secondary schools in the local authority area with reading ages below the level of functional literacy (p.31). That’s impressive when compared to the 17% of school leavers in England considered functionally illiterate. So why hasn’t the West Dunbartonshire programme been rolled out nationwide? Three factors need to be considered in order to answer that question.

1. What is functional literacy?

The 17% figure for functional illiteracy amongst school leavers is often presented as ‘shocking’ or a ‘failure’ on the part of the education system. These claims are valid only if those making them have evidence that higher levels of school-leaver literacy are attainable. The evidence cited often includes literacy levels in other countries or studies showing very high percentages of children being able to decode after following a systematic synthetic phonics (SSP) programme. Such evidence is akin to comparing apples and oranges because:

– Many languages are orthographically more transparent than English (there’s a more direct correspondence between graphemes and phonemes). The functional illiteracy figure of 17% (or thereabouts) holds for the English-speaking world, not just England, and has done so since at least the end of WW2 – and probably earlier, given literacy levels in older adults. (See Rashid & Brooks (2010) and McGuinness (1998).)

– Both the Clackmannanshire and West Dunbartonshire studies resulted in high levels of decoding ability. Results were less stellar when it came to comprehension.

– It depends what you mean by functional literacy. This was a challenge faced by Rashid & Brooks in their review; measures of functional literacy have varied, making it difficult to identify trends across time.

In the West Dunbartonshire study, children identified as having significant reading difficulties followed an intensive 3-month individual support programme in early 2003. This involved 91 children in P7, 12 in P6 and 1 in P5. By 2007, 12 pupils at secondary level were identified as still having not reached functional literacy levels; reading ages ranged between 6y 9m and 8y 10m (p.31). By June 2007, only three children had scores below the level of functional literacy. (Two others missed the final assessment.)

The level of functional literacy used in the West Dunbartonshire study was a reading age of at least 9y 6m using the Neale Assessment of Reading Ability (NARA-II). I couldn’t find an example online, but there’s a summary here. The tasks are rather different to the level 1 tasks in the National Adult Literacy Survey carried out in the USA in 1992 (NCES p.86).

A reading/comprehension age of 9y 6m is sufficient for getting by in adult life: reading a tabloid newspaper or filling in simple forms. Whether it’s sufficient for doing well in GCSEs (reading age 15y 7m), getting a decent job in later life, or having a good understanding of how the world works is another matter.

2. What were the costs and benefits?

Overall, the study cost £13 per student per year, or 0.5% of the local authority’s education budget (p.46), which doesn’t sound very much. But for 60 000 students over a ten-year period it adds up to almost £8m, a significant sum. I couldn’t find details of the overall reading abilities of secondary school students when the study finished in 2007, and haven’t yet tracked down any follow-up studies showing the impact of the interventions on the local community.
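
As a rough sanity check, here’s a minimal sketch using only the figures quoted above (the per-pupil cost, pupil numbers and duration are the report’s figures; the arithmetic is mine):

```python
# Back-of-envelope check using only the figures quoted in the report (p.46).
cost_per_pupil_per_year = 13   # £13 per student per year
pupils = 60_000                # pre-school and primary pupils in the authority
years = 10                     # duration of the initiative

total = cost_per_pupil_per_year * pupils * years
print(f"£{total:,}")           # £7,800,000 - i.e. 'almost £8m'
```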

Also, we don’t know what difference the study would have made to adult literacy levels in the area. Adult literacy levels are usually presented as averages, and in the case of the US National Adult Literacy Survey included those with disabilities. Many children with disabilities in West Dunbartonshire would have been attending special schools, and the study appears to have involved only mainstream schools. Whether the impact of the study is sufficient to persuade cash-strapped local authorities to invest in it is unclear.

3. Could the interventions be implemented nationwide?

One of the strengths of Achieving the Vision is that it explores the limitations of the study in some detail (p.38ff). The researchers were well aware of the challenges that would have to be met in order for the intervention to achieve its aims. These included issues with funding: the local Council, although supportive, was working within a different funding framework from the Scottish Executive Education Department. The funding issues had a knock-on impact on staff seconded to the project – who had no guarantee of employment once the initial funding ran out. The study was further affected by industrial action and by local authority re-structuring. How many projects would have access to the foresight, tenacity and collaborative abilities of those leading the West Dunbartonshire initiative?

Conclusion

The aim of the West Dunbartonshire initiative was to eradicate functional illiteracy in an entire local authority area. The study effectively succeeded in doing so – in mainstream schools, and if functional illiteracy is taken to mean a reading/comprehension age below 9y 6m. Synthetic phonics played a key role. Synthetic phonics is frequently advocated as a remedy for functional illiteracy in school leavers and in the adult population. The West Dunbartonshire study shows, pretty conclusively, that synthetic phonics plus individual support plus a comprehensive local authority-backed focus on reading can result in significant improvements in reading ability in secondary school students. Does it eradicate functional illiteracy in school leavers or in the adult population? We don’t know.

References

Johnston, R & Watson, J (2005). The Effects of Synthetic Phonics teaching on reading and spelling attainment: A seven year longitudinal study, The Scottish Executive website. https://www.webarchive.org.uk/wayback/archive/20170701074158/http://www.gov.scot/Publications/2005/02/20682/52383

MacKay, T (2007). Achieving the Vision: The Final Research Report of the West Dunbartonshire Literacy Initiative.

McGuinness, D (1998). Why Children Can’t Read and What We Can Do About It. Penguin.

NCES (1993). Adult Literacy in America. National Center for Education Statistics.

Rashid, S & Brooks, G (2010). The levels of attainment in literacy and numeracy of 13- to 19-year-olds in England, 1948–2009. National Research and Development Centre for adult literacy and numeracy.


we’re all different

We’re all different. Tiny variations in our DNA before and at conception. Our parents smoked/didn’t smoke, drank/didn’t drink, followed ideal/inadequate diets, enjoyed robust health or had food intolerances, allergies, viral infections. We were brought up in middle class suburbia/a tower block and attended inadequate/outstanding schools. All those factors contribute to who we are and what we are capable of achieving.

That variation, inherent in all biological organisms, is vital to our survival as a species. Without it, we couldn’t adapt to a changing environment or form communities that successfully protect us. The downside of that inherent variation is that some of us draw a short straw. Some variations mean we don’t make it through childhood, have lifelong health problems or die young. Or that we become what Katie Ashford, SENCO at Michaela Community School, Wembley, calls the ‘weakest pupils‘.

Although the factors that contribute to our development aren’t, strictly speaking, random, they are so many and so varied, they might as well be random. That means that in a large population, the measurement of any characteristic affected by many factors – height, blood pressure, intelligence, reading ability – will form what’s known as a normal distribution; the familiar bell curve.

The bell curve

If a particular characteristic forms a bell-shaped distribution, that allows us to make certain predictions about a large population. For that characteristic, 50% of the population will score above average and 50% below average; there will be relatively few people who are actually average. We’ll know that around 70% of the population will score fairly close to average, around 25% noticeably above or below it, and around 5% considerably higher or lower. That’s why medical reference ranges for various characteristics are based on the upper and lower measurements for 95% of the population; if your blood glucose levels or thyroid function is in the lowest or highest 2.5%, you’re likely to have a real problem, rather than a normal variation.
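
For anyone wondering where those rough percentages come from, here’s a minimal sketch. It assumes the characteristic is normally distributed, and treats ‘fairly close to average’ as within one standard deviation and ‘considerably higher or lower’ as beyond two – cut-offs chosen purely for illustration:

```python
from math import erf, sqrt

def proportion_below(z):
    """Proportion of a normal distribution falling below z standard deviations."""
    return 0.5 * (1 + erf(z / sqrt(2)))

within_1sd  = proportion_below(1) - proportion_below(-1)       # 'fairly close to average'
between_1_2 = 2 * (proportion_below(2) - proportion_below(1))  # 'noticeably above or below'
beyond_2sd  = 2 * (1 - proportion_below(2))                    # 'considerably higher or lower'

print(f"{within_1sd:.0%} {between_1_2:.0%} {beyond_2sd:.0%}")  # roughly 68%, 27% and 5%
```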

So in terms of general ability that means around 2.5% of the population will be in a position to decide whether they’d rather be an Olympic athlete, a brain surgeon or Prime Minister (or all three), whereas another 2.5% will find everyday life challenging.

What does a normal distribution mean for education? Educational attainment is affected by many causal factors, so by bizarre coincidence the attainment of 50% of school pupils is above average, and 50% below it. Around 20% of pupils have ‘special educational needs’ and around 2.5% will have educational needs that are significant enough to warrant a Statement of Special Educational Needs (recently replaced by Education Health and Care Plans).

Special educational needs

In 1978, the Warnock report pointed out that, based on historical data, up to 20% of school pupils would probably have special educational needs at some point in their school career. ‘Special educational needs’ has a precise but relative meaning in law. It’s defined in terms of pupils requiring educational provision additional to or different from “educational facilities of a kind generally provided for children of the same age in schools within the area of the local education authority”.

Statements of SEN

The proportion of pupils with statements of SEN remained consistently at around 2.8% between 2005 and 2013 (after which the SEN system changed). http://www.publications.parliament.uk/pa/cm200506/cmselect/cmeduski/478/478i.pdf https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/225699/SFR30-2013_Text.pdf

It could be, of course, that these figures are an artifact of the system; anecdotal evidence suggests that some local authorities considered statutory assessments only for children who scored below the 2nd percentile on the WISC scale. Or it could be that measures of educational attainment do reflect the effectively random nature of the causes of educational attainment. In other words, a single measure of educational attainment can tell us whether a child’s attainment is unusually high or low; it can’t tell us why it’s unusually high or low. That often requires a bit of detective work.

If they can do it, anyone can

Some people feel uncomfortable with the idea of human populations having inherent variation; it smacks of determinism, excuses and complacency. So from time to time we read inspiring accounts of children in a school in a deprived inner city borough all reading fluently by the age of 6, or of the GCSE A*-C grades in a once failing school leaping from 30% to 60% in a year. The implication is that if they can do it, anyone can. That’s a false assumption. Those things can happen in some schools. But they can’t happen in all schools simultaneously because of the variation inherent in human populations and because of the nature of life events (see previous post).

Children of differing abilities don’t distribute themselves neatly across schools. Some schools might have no children with statements and others might have many. Even if all circumstances were equal (which they’re not), clustering occurs within random distributions. This is a well-known phenomenon in epidemiology; towns with high numbers of cancer patients or hospitals with high numbers of unexpected deaths where no causal factors are identified tend to attract the attention of conspiracy theorists. This clustering illusion isn’t so well known in educational circles. It’s all too easy to assume that a school has few children with special educational needs because of the high quality of teaching, or that a school has many children with SEN because teaching is poor. Obviously, it’s more complicated than that.
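
To see how easily chance alone produces that kind of clustering, here’s a toy simulation. The 20 schools, 100 pupils per school and 3% statement rate are invented numbers chosen purely for illustration:

```python
import random

# Toy model, not real data: 20 identical schools, 100 pupils each, and every
# pupil has the same 3% chance of having a statement. Any differences between
# schools in the output are due to chance alone.
schools = 20
pupils_per_school = 100
p_statement = 0.03

counts = [sum(random.random() < p_statement for _ in range(pupils_per_school))
          for _ in range(schools)]

print(sorted(counts))
# A typical run gives something like [0, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 6, 6, 7]:
# some 'schools' have no statemented pupils at all, others have more than twice
# the average, and teaching quality has nothing to do with it.
```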

What helps the weakest pupils?

According to Katie, what ‘the weakest pupils’ need is “more focus, more rigour and more practice if they are to stand any chance of catching up with their peers”. Katie goes on to unpack what she means. More focus means classrooms that aren’t chaotic. More rigour means expecting children to read challenging texts. More practice means practicing the things they can’t do, not the things they can.

Katie’s post is based on the assumption that the weakest pupils can and should ‘catch up with their peers.’ But it’s not clear what she means by that. Does she mean the school not needing a bottom set? All pupils attaining at least the national average for their age group? All pupils clustered at the high end of the attainment range?  She doesn’t say.

In a Twitter discussion, Katie agreed that there is variation inherent in a population, but

[Image: Katie Ashford’s tweet about the bell curve]

I agree with Katie that there is often room for improvement, and that her focus, getting all children reading, can make a big difference, but improvement is likely to entail more than more focus, more rigour and more practice. In an earlier post Katie complains that “Too many people overcomplicate the role of SENCO”. She sees her role as very simple: “I avoid pointless meetings, unnecessary paperwork and attending timewasting conferences as much as possible. Instead, I teach, organise interventions, spend lots of time with the pupils, and make sure teachers and support staff have everything they need to teach their kids really, really well.”

Her approach sounds very sensible. But she doesn’t say what the interventions are. Or what the teachers and support staff need to teach their kids really, really well. Or what meetings, paperwork and conferences she thinks are pointless, unnecessary and timewasting. Katie doesn’t say how many children at Michaela have statements of special needs or EHCPs – presumably some children have arrived there with these in place. Or what she does about the meetings and paperwork involved. Or how she tracks individual children’s progress. (I’m not suggesting that statements and EHCPs are the way to go – just that currently they’re part of the system and SENCOs have to deal with them.)

What puzzled me most about Katie’s interventions was that they bore little resemblance to those I’ve seen other SENCOs implement in mainstream schools. It’s possible that they’ve overcomplicated their role. It could be that the SENCOs I’ve watched at work are in primary schools and that at secondary level it’s different. Another explanation is that they’ve identified the root causes of children’s learning difficulties and have addressed them.

They’ve introduced visual timetables, taught all pupils Makaton, brought in speech and language therapists to train staff, installed the same flooring throughout the building to improve the mobility of children with cerebral palsy or epilepsy, and integrated music, movement and drama into the curriculum. They’ve developed assistive technology for children with sensory impairments and built up an extensive, accessible school library that includes easy-to-read books with content suitable for older kids (for poor readers) and more challenging texts with content suitable for younger kids (for good readers). They’ve planted gardens and attended forest schools regularly to develop motor and sensory skills.

As someone who read avidly as a child – including Dickens – I can see how many hours of reading and working through chapters of Dickens could improve the reading ability of many children. But I’m still struggling to see how that would work for a kid whose epilepsy results in frequent ‘absences’ of attention, or who has weak control of eye movements, an auditory processing impairment or very limited working memory capacity.

I’m aware that ‘special educational needs’ is a contentious label and that it’s often only applied because children aren’t being taught well, or taught appropriately. I’m utterly committed to the idea of every child being given the best possible education. I just don’t see any evidence to support the idea that catching up with one’s peers is a measure of educational excellence, or that practicing what you’re not good at is a better use of time than doing what you are good at.

Section 7 of the Education Act 1996 (based on the 1944 Education Act) frames a suitable education in terms of an individual child’s age, ability and aptitude, and any special educational needs they may have. The education system appears to have recently lost sight of the aptitude element. I fear that an emphasis on ‘catching up’ with one’s peers and on addressing weaknesses rather than developing strengths will inevitably result in many children seeing themselves as failing to jump an arbitrary hurdle, rather than as individuals with unique sets of talents and aptitudes who can play a useful and fulfilling role in society.

I’d be interested in Katie’s comments.

joining the dots and seeing the big picture

I’m a tad cynical about charitable bodies these days, especially if they’re associated with academies. Whilst reading their ostensibly ‘independent’ reports I’m on the lookout for phrasing calculated to improve their chances of doing well in the next funding round, or for ‘product placement’ for their services. So a report from the Driver Youth Trust – Joining the Dots: Have recent reforms worked for those with SEND? – was a welcome surprise.

The Driver Youth Trust (DYT) is a charity focused on the needs of dyslexic students. Its programme Drive for Literacy is used in ARK schools. I’m well aware of the issues around ‘dyslexia’ and haven’t investigated the Drive for Literacy; in this post I want to focus on Joining the Dots, commissioned by DYT and written by LKMco.

Joining the Dots is one of the clearest, most perceptive overviews of the new SEND system that I’ve read. Some of the findings and explanations for the findings are counterintuitive, often a sign of a report driven by the evidence rather than by what the report writers think they are expected to say. The take-home message is that the new SEND system has had mixed outcomes to date, but the additional autonomy schools now have should allow them to improve outcomes for children regardless, and it presents some inspiring case studies to prove the point.

Here are some of the findings that stood out for me.

SEND reforms interact with the rest of the education system

“Reforms to the school system since 2010 have had an even greater impact on young people with SEND than the 2014 Act itself…we find that changes have often enabled those previously succeeding to achieve even better outcomes, while things have only got tougher for those already struggling. As a result unacceptable levels of inequity have merely been reinforced. It is also clear that changes have been inadequately communicated and that many stakeholders (including parents in particular) are struggling to navigate the new landscape.” (p.7)

Fragmentation

“I think that what we did is picked up all the fragments, dropped them on the floor and made them even more fragmented… and now it’s a question of putting them back together in the right order…” – LA service delivery manager (p.15)

“SEND pupils and their families have therefore found themselves lost in a system that has yet to reform or regroup.” (p.17)

Funding

Three levels of funding are available for schools: Element 1 is basic funding for all pupils, Element 2 is a notional SEND budget based on a range of factors, and Element 3 is high needs block funding mainly for pupils with EHC plans. The lack of ring-fencing around the notional SEND budget means that schools can spend this money however they want. (p.20)

Admissions

Pupils with SEND require additional resources and their often lower attainment can impact on the school’s standing in league tables. Parents and teachers reported concerns about admissions policies being stacked against students with SEND.

The local offer

The DfE Final Impact Report for the Pathfinder LAs trialling the new SEND framework found that only 12% of Pathfinder families had looked at their Local Offer, and only half of those had found it useful. That picture doesn’t seem to have changed. An FOI request revealed that the number of LA staff with responsibility for SEND varies between 0 and 382.8 full-time equivalents.

Schools

Schools often don’t know what information to give to the LA about their SEND pupils, and the information LAs give schools is sometimes inaccurate. The Plumcroft Primary case study illustrates the point. Plumcroft’s new headteacher tried to improve LA support for pupils with SEND but realised that services available commercially and privately were not only often better, but were actually affordable. As he put it; “If a local authority says ‘no you can’t’ most people just go ‘alright then’ and carry on with the service and whinge about it. Whereas the reality is, you can… there’s no constraint at all.” (p.35)

Categories

The new SEND system does away with the School Action and School Action Plus categories, partly because of concerns that children identified as having SEN were stuck with the label even when it was no longer applicable. The number of children identified with SEN has dropped substantially since, but concerns have been voiced about how children with additional needs are being identified and supported.

Brian Lamb highlights another concern that emerged in the early stages of the legislation, that pupils who would previously have had a Statement, would, under the new system, find it ‘difficult to impossible’ to qualify for an EHCP unless they also have health difficulties or are in care (p.39). This fear doesn’t seem to have materialised, since LAs are now transferring pupils from statements to EHC plans en masse, and it’s in the interest of service providers to ask for an EHC plan to be in place in order to resource any substantial support a child needs.

All teachers are teachers of children with special educational needs

Even though the DfE itself said in 2001 that ‘all teachers are teachers of children with special educational needs’, teacher training funding has consistently failed to recognise this. The new system hasn’t introduced significant improvements.

Exam reform

A shift to making public examinations more demanding in terms of literacy automatically puts students with literacy difficulties at a disadvantage. A student might have an excellent knowledge and understanding of the subject matter, but be unable to get it down on paper. The distribution of assistive technology varies widely between schools.

Reinventing the wheel

LA bureaucracy has been seen as a significant factor in the move over recent years to give schools increased autonomy. This has resulted, predictably, in increased concerns over transparency, accountability, expertise and resources. Many schools are now forming federations in order to pool resources and share expertise. There is clearly a need for an additional tier of organisation at the local level, suggesting that it might have been more sensible to improve local authority practice rather than marginalise it.

The content of the report might not be especially cheering, but it makes a change to find a report that’s so readable, informative and insightful.

is systematic synthetic phonics generating neuromyths?

A recent Twitter discussion about systematic synthetic phonics (SSP) was sparked by a note to parents of children in a reception class, advising them what to do if their children got stuck on a word when reading. The first suggestion was “encourage them to sound out unfamiliar words in units of sound (e.g. ch/sh/ai/ea) and to try to blend them”. If that failed “can they use the pictures for any clues?” Two other strategies followed. The ensuing discussion began by questioning the wisdom of using pictures for clues and then went off at many tangents – not uncommon in conversations about SSP.
[Image: richard adams reading clues]

SSP proponents are, rightly, keen on evidence. The body of evidence supporting SSP is convincing but it’s not the easiest to locate; much of the research predates the internet by decades or is behind a paywall. References are often to books, magazine articles or anecdote; not to be discounted, but not what usually passes for research. As a consequence it’s quite a challenge to build up an overview of the evidence for SSP that’s free of speculation, misunderstandings and theory that’s been superseded. The tangents that came up in this particular discussion are, I suggest, the result of assuming that if something is true for SSP in particular it must also be true for reading, perception, development or biology in general. Here are some of the inferences that came up in the discussion.

You can’t guess a word from a picture
Children’s books are renowned for their illustrations. Good illustrations can support or extend the information in the text, showing readers what a chalet, a mountain stream or a pine tree looks like, for example. Author and artist usually have detailed discussions about illustrations to ensure that the book forms an integrated whole and is not just a text with embellishments.

If the child is learning to read, pictures can serve to focus attention (which could be wandering anywhere) on the content of the text and can have a weak priming effect, increasing the likelihood of the child accessing relevant words. If the picture shows someone climbing a mountain path in the snow, the text is unlikely to contain words about sun, sand and ice-creams.

I understand why SSP proponents object to the child being instructed to guess a particular word by looking at a picture; the guess is likely to be wrong and the child distracted from decoding the word. But some teachers don’t seem to be keen on illustrations per se. As one teacher put it, “often superficial time consuming detract from learning”.

Cues are clues are guesswork
The note to parents referred to ‘clues’ in the pictures. One contributor cited a blogpost that claimed “with ‘mixed methods’ eyes jump around looking for cues to guess from”. Clues and cues are often used interchangeably in discussions about phonics on social media. That’s understandable; the words have similar meanings and a slip on the keyboard can transform one into the other. But in a discussion about reading methods, the distinction between guessing, clues and cues is an important one.

Guessing involves drawing conclusions in the absence of enough information to give you a good chance of being right; it’s haphazard, speculative. A clue is a piece of information that points you in a particular direction. A cue has a more specific meaning depending on context; e.g. theatrical cues, social cues, sensory cues. In reading research, a cue is a piece of information about something the observer is attending to, or a property of a thing to be attended to. It could be the beginning sound or end letter of a word, or an image representing the word. Cues are directly related to the matter in hand, clues are more indirectly related, guessing is a stab in the dark.

The distinction is important because if teachers are using the terms cue and clue interchangeably and assuming they both involve guessing there’s a risk they’ll mistakenly dismiss references to ‘cues’ in reading research as guessing or clues, which they are not.

Reading isn’t natural
Another distinction that came up in the discussion was the idea of natural vs. non-natural behaviours. One argument for children needing to be actively taught to read rather than picking it up as they go along is that reading, unlike walking and talking, isn’t a ‘natural’ skill. The argument goes that reading is a relatively recent technological development so we couldn’t possibly have evolved mechanisms for reading in the same way as we have evolved mechanisms for walking and talking. One proponent of this idea is Diane McGuinness, an influential figure in the world of synthetic phonics.

The argument rests on three assumptions. The first is that we have evolved specific mechanisms for walking and talking but not for reading. The ideas that evolution has an aim or purpose, and that if everybody does something we must have evolved a dedicated mechanism to do it, are strongly contested by those who argue instead that we can do what our anatomy and physiology enable us to do (see arguments over Chomsky’s linguistic theory). But you wouldn’t know about that long-standing controversy from reading McGuinness’s books or comments from SSP proponents.

The second assumption is that children learn to walk and talk without much effort or input from others. One teacher called the natural/non-natural distinction “pretty damn obvious”. But sometimes the pretty damn obvious isn’t quite so obvious when you look at what’s actually going on. By the time they start school, the average child will have rehearsed walking and talking for thousands of hours. And most toddlers experience a considerable input from others when developing their walking and talking skills even if they don’t have what one contributor referred to as a “WEIRDo Western mother”. Children who’ve experienced extreme neglect (such as those raised in the notorious Romanian orphanages) tend to show significant developmental delays.

The third assumption is that learning to use technological developments requires direct instruction. Whether it does or not depends on the complexity of the task. Pointy sticks and heavy stones are technologies used in foraging and hunting, but most small children can figure out for themselves how to use them – as do chimps and crows. Is the use of sticks and stones by crows, chimps or hunter-gatherers natural or non-natural? A bicycle is a man-made technology more complex than sticks and stones, but most people are able to figure out how to ride a bike simply by watching others do it, even if a bit of practice is needed before they can do it themselves. Is learning to ride a bike with a bit of support from your mum or dad natural or non-natural?

Reading English is a more complex task than riding a bike because of the number of letter-sound correspondences. You’d need a fair amount of watching and listening to written language being read aloud to be able to read for yourself. And you’d need considerable instruction and practice before being able to fly a fighter jet because the technology is massively more complex than that involved in bicycles and alphabetic scripts.

One teacher asked “are you really going to go for the continuum fallacy here?” No idea why he considers a continuum a fallacy. In the natural/non-natural distinction used by SSP proponents there are three continua involved:

• the complexity of the task
• the length of rehearsal time required to master the task, and
• the extent of input from others that’s required.

Some children learn to read simply by being read to, reading for themselves and asking for help with words they don’t recognise. But because reading is a complex task, for most children learning to read by immersion like that would take thousands of hours of rehearsal. It makes far more sense to cut to the chase and use explicit instruction. In principle, learning to fly a fighter jet would be possible through trial-and-error, but it would be a stupidly costly approach to training pilots.

Technology is non-biological
I was told by several teachers that reading, riding a bike and flying an aircraft weren’t biological functions. I fail to see how they can be anything else, since all involve human beings using their brain and body. It then occurred to me that the teachers are equating ‘biological’ with ‘natural’ or with the human body alone. In other words, if you acquire a skill that involves only body parts (e.g. walking or talking) it’s biological. If it involves anything other than a body part it’s not biological. Not sure where that leaves hunting with wooden spears, making baskets or weaving woollen fabric using a wooden loom and shuttle.

Teaching and learning are interchangeable
Another tangent was whether or not learning is involved in sleeping, eating and drinking. I contended that it is; newborns do not sleep, eat or drink in the same way as most of them will be sleeping, eating or drinking nine months later. One teacher kept telling me they don’t need to be taught to do those things. I can see why teachers often conflate teaching and learning, but they are not two sides of the same coin. You can teach children things but they might fail to learn them. And children can learn things that nobody has taught them. It’s debatable whether or not parents shaping a baby’s sleeping routine, spoon feeding them or giving them a sippy cup instead of a bottle count as teaching, but it’s pretty clear there’s a lot of learning going on.

What’s true for most is true for all
I was also told by one teacher that all babies crawl (an assertion he later modified) and by a school governor that they can all suckle (an assertion that wasn’t modified). Sweeping generalisations like this coming from people working in education are worrying. Children vary. They vary a lot. Even if only 0.1% of children do or don’t do something, that would involve 8 000 children in English schools. Some and most are not all or none, and teachers, of all people, should be aware of that.

A core factor in children learning to read is the complexity of the task. If the task is a complex one, like reading, most children are likely to learn more quickly and effectively if you teach them explicitly. You can’t infer from that that all children are the same, that they all learn in the same way, or that teaching and learning are two sides of the same coin. Nor can you infer from a tenuous argument used to justify the use of SSP that distinctions between natural and non-natural or biological and technological are clear, obvious, valid or helpful. The evidence that supports SSP is the evidence that supports SSP. It doesn’t provide a general theory for language, education or human development.

jumping the literacy hurdle

Someone once said that getting a baby dressed was like trying to put an octopus into a string bag. I was reminded of that during another recent discussion with synthetic phonics (SP) advocates. The debate was triggered by this comment: “Surely, the most fundamental aim of schools is to teach children to read.”

This sentence looks like an essay question for trainee teachers – if they’re still expected to write essays, that is. It encapsulates what has frustrated me so much about the SP ‘position’; all those implicit assumptions.

First, there is no ‘surely’ about any aspect of education. You name it, there’s been heated debate about it. Second, it’s not safe to assume schools should have a ‘most fundamental’ aim. Education is a complex business and generally involves quite a few fundamental aims; focussing on one rather than the others is a risky strategy. Third, the sentence assumes a role for literacy that requires some justification.

reading in the real world

Written language is our primary means of recording speech. It provides a way of communicating with others across space and time. It extends working memory. It’s important. But in a largely literate society it’s easy to assume that all members of that society are, should be, or need to be equally literate. They’re not. They never have been. And I’ve yet to find any evidence showing that uniform literacy across the population is either achievable or necessary.

I’m not claiming that it doesn’t matter if someone isn’t a competent reader or if 15% of school leavers are functionally illiterate. What I am claiming is that less than 100% functional literacy doesn’t herald the end of civilisation as we know it.

For thousands of years, functionally illiterate people have grown food, baked, brewed, made clothes, pots, pans, furniture, tools, weapons and machines, built houses, palaces, cities, chariots, sailing ships, dams and bridges, navigated halfway around the world, formed exquisite glassware and stunning jewellery, composed songs, poems and plays, devised judicial systems and developed sophisticated religious beliefs.

All those things require knowledge and skill – but not literacy. The quality of human life has undoubtedly been transformed by literacy, and transformed for the better. But literacy is a vehicle for knowledge, a means to an end not an end in itself. It’s important, not for its own sake but because of what it has enabled us – collectively – to achieve. I’m not disparaging reading for enjoyment; but reading for enjoyment didn’t change the world.

What the real world needs is not for everyone to be functionally literate, but for a critical mass of people to be functionally literate. And for some people to be so literate that they can acquire complex skills and knowledge that can benefit the rest of us. What proportion of people need to be functionally or highly literate will depend on what a particular society wants to achieve.

Human beings are a highly social species. Our ecological success (our ability to occupy varied habitats – what we do to those habitats is something else entirely) is due to our ability to solve problems, to communicate those solutions to each other and to work collectively. What an individual can or can’t do is important, but what we can do together is more important because that’s a more efficient way of using resources for mutual benefit.

This survey found that 20% of professionals and 30% of managers don’t have adequate literacy skills. It’s still possible to hold down a skilled job, draw a good salary, drive a car, get a mortgage, raise a family and retire on an adequate pension even if your literacy skills are flaky. Poor literacy might be embarrassing and require some ingenious workarounds to cover it up, but that’s more of a problem with social acceptability than utility. And plenty of jobs don’t require you to be a great reader.

It looks as though inadequate literacy, although an issue in the world of work, isn’t an insurmountable obstacle. So why would anyone claim that teaching children to read is ‘the most fundamental aim of schools’?

reading in schools

There are several reasons. Mass education systems were set up partly to provide manufacturing industry with a literate, numerate workforce. Schools in those fledgling education systems were often run on shoestring budgets. If a school had very limited resources, making reading a priority at least provided children with the opportunity to educate themselves in later life. Literacy takes time to develop, so if you have the luxury of being able to teach additional subjects, it makes sense to access them via reading and writing – thus killing two birds with one stone. Lastly, because for a variety of reasons public examinations are written ones, literacy is a key measure of pupil and school achievement.

In the real world, if you find reading especially difficult you can still learn a lot – by watching and listening or trial and error. But the emphasis schools place on literacy means that if in school you happen to be a child who finds reading especially difficult, you’re stumped. You can’t even compensate by becoming knowledgeable if you’re required to jump the literacy hurdle first. And poor knowledge, however literate you are, is a big problem in the real world.

SP advocates would say that the reason some children find reading difficult is because they haven’t been taught properly. And that if they were taught properly they would be able to read. That’s a possible explanation, but one possible explanation doesn’t rule out all the other possible explanations. And if Jeanne Chall’s descriptions of teachers’ approaches to formal reading instruction programmes are anything to go by, it’s unlikely that all children are going to get taught to read ‘properly’ any time soon. If some children have problems learning to read for whatever reason, we need to make sure that they’re not denied access to knowledge as well. Because in the real world, it’s knowledge that makes things work.

Now for some of the arms of the reading octopus that got tangled up in the string bag that is Twitter.

• I’m not saying reading isn’t important; it is – but that doesn’t make it the ‘fundamental aim of schools’, nor ‘a fundamental skill needed for life’.
• I’m not saying children shouldn’t be taught to read; they should be, but variation in reading ability doesn’t automatically mean a ‘deficit’ in instruction, home life or in the child.
• I’m not saying some children struggle to read because they are ‘less able’ than others; some kids find reading especially challenging but that has nothing to do with their intelligence.
• Nor am I saying we shouldn’t have high aspirations for students; we should, but there’s no reason to have the same aspirations for all of them. Our strength as a species is in our diversity.

Frankly, if forced to choose, I’d rather live in a community populated by competent, practical people with reading skills that left something to be desired, than one populated by people with, say, PPE degrees from Oxford who’ve forgotten which way is up.

synthetic phonics, dyslexia and natural learning

Too intense a focus on the virtues of synthetic phonics (SP) can, it seems, result in related issues getting a bit blurred. I discovered that some whole language supporters do appear to have been ideologically motivated but that the whole language approach didn’t originate in ideology. And as far as I can tell we don’t know if SP can reduce adult functional illiteracy rates. But I wouldn’t have known either of those things from the way SP is framed by its supporters. SP proponents also make claims about how the brain is involved in reading. In this post I’ll look at two of them; dyslexia and natural learning.

Dyslexia

Dyslexia started life as a descriptive label for the reading difficulties adults can develop due to brain damage caused by a stroke or head injury. Some children were observed to have similar reading difficulties despite otherwise normal development. The adults’ dyslexia was acquired (they’d previously been able to read) but the children’s dyslexia was developmental (they’d never learned to read). The most obvious conclusion was that the children also had brain damage – but in the early 20th century when the research started in earnest there was no easy way to determine that.

Medically, developmental dyslexia is still only a descriptive label meaning ‘reading difficulties’ (causes unknown, might/might not be biological, might vary from child to child). However, dyslexia is now also used to denote a supposed medical condition that causes reading difficulties. This new usage is something that Diane McGuinness complains about in Why Children Can’t Read.

I completely agree with McGuinness that this use isn’t justified and has led to confusion and unintended and unwanted outcomes. But I think she muddies the water further by peppering her discussion of dyslexia (pp. 132-140) with debatable assertions such as:

“We call complex human traits ‘talents’”.

“Normal variation is on a continuum but people working from a medical or clinical model tend to think in dichotomies…”.

“Reading is definitely not a property of the human brain”.

“If reading is a biological property of the brain, transmitted genetically, then this must have occurred by Lamarckian evolution.”

Why debatable? Because complex human traits are not necessarily ‘talents’; clinicians tend to be more aware of normal variation than most people; reading must be a ‘property of the brain’ if we need a brain to read; and the research McGuinness refers to didn’t claim that ‘reading’ was transmitted genetically.

I can understand why McGuinness might be trying to move away from the idea that reading difficulties are caused by a biological impairment that we can’t fix. After all, the research suggests SP can improve the poor phonological awareness that’s strongly associated with reading difficulties. I get the distinct impression, however, that she’s uneasy with the whole idea of reading difficulties having biological causes. She concedes that phonological processing might be inherited (p.140) but then denies that a weakness in discriminating phonemes could be due to organic brain damage. She’s right that brain scans had revealed no structural brain differences between dyslexics and good readers. And in scans that show functional variations, the ability to read might be a cause, rather than an effect.

But as McGuinness herself points out, reading is a complex skill involving many brain areas, and biological mechanisms tend to vary between individuals. In a complex biological process there’s a lot of scope for variation. Poor phonological awareness might be a significant factor, but it might not be the only factor. A child with poor phonological awareness plus visual processing impairments plus limited working memory capacity plus slow processing speed – all factors known to be associated with reading difficulties – would be unlikely to find those difficulties eliminated by SP alone. The risk in conceding that reading difficulties might have biological origins is that using teaching methods to remediate them might then be called into question – just what McGuinness doesn’t want to happen, and for good reason.

Natural and unnatural abilities

McGuinness’s view of the role of biology in reading seems to be derived from her ideas about the origin of skills. She says:

“It is the natural abilities of people that are transmitted genetically, not unnatural abilities that depend upon instruction and involve the integration of many subskills”. (p.140, emphasis McGuinness)

This is a distinction often made by SP proponents. I’ve been told that children don’t need to be taught to walk or talk because these abilities are natural and so develop instinctively and effortlessly. Written language, in contrast, is a recent man-made invention; there hasn’t been time to evolve a natural mechanism for reading, so we need to be taught how to do it and have to work hard to master it. Steven Pinker, who wrote the foreword to Why Children Can’t Read, seems to agree. He says “More than a century ago, Charles Darwin got it right: language is a human instinct, but written language is not” (p.ix).

Although that’s a plausible model, what Pinker and McGuinness fail to mention is that it’s also a controversial one. The part played by nature and nurture in the development of language (and other abilities) has been the subject of heated debate for decades. The reason for the debate is that the relevant research findings can be interpreted in different ways. McGuinness is entitled to her interpretation but it’s disingenuous in a book aimed at a general readership not to tell readers that other researchers would disagree.

Research evidence suggests that the natural/unnatural skills model has got it wrong. The same natural/unnatural distinction was made recently in the case of part of the brain called the fusiform gyrus. In the fusiform gyrus, visual information about objects is categorised. Different types of objects, such as faces, places and small items like tools, have their own dedicated locations. Because those types of objects are naturally occurring, researchers initially thought their dedicated locations might be hard-wired.

But there’s also a word recognition area. And in experts, the faces area is also used for cars, chess positions, and specially invented items called greebles. To become an expert in any of those things you require some instruction – you’d need to learn the rules of chess or the names of cars or greebles. But your visual system can still learn to accurately recognise, discriminate between and categorise many thousands of items like faces, places, tools, cars, chess positions and greebles simply through hours and hours of visual exposure.

Practice makes perfect

What claimants for ‘natural’ skills also tend to overlook is how much rehearsal goes into them. Most parents don’t actively teach children to talk, but babies hear and rehearse speech for many months before they can say recognisable words. Most parents don’t teach toddlers to walk, but it takes young children years to become fully stable on their feet despite hours of daily practice.

There’s no evidence that as far as the brain is concerned there’s any difference between ‘natural’ and ‘unnatural’ knowledge and skills. How much instruction and practice knowledge or skills require will depend on their transparency and complexity. Walking and bike-riding are pretty transparent; you can see what’s involved by watching other people. But they take a while to learn because of the complexity of the motor co-ordination and balance involved. Speech and reading are less transparent and more complex than walking and bike-riding, so take much longer to master. But some children require intensive instruction in order to learn to speak, and many children learn to read with minimal input from adults. The natural/unnatural distinction is a false one, and it’s as unhelpful as assuming that reading difficulties are caused by ‘dyslexia’.

Multiple causes

What underpins SP proponents’ reluctance to admit biological factors as causes for reading difficulties is, I suspect, an error often made when assessing cause and effect. It’s an easy one to make, but one that people advocating changes to public policy need to be aware of.

Let’s say for the sake of argument that we know, for sure, that reading difficulties have three major causes, A, B and C. The one that occurs most often is A. We can confidently predict that children showing A will have reading difficulties. What we can’t say, without further investigation, is whether a particular child’s reading difficulties are due to A. Or if A is involved, that it’s the only cause.
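
A toy example, with entirely invented proportions, may make the inference problem clearer:

```python
# Invented proportions, purely for illustration: suppose that among children
# with reading difficulties the causes break down like this.
cause_profiles = {
    "A only":        0.55,  # the commonest single cause
    "A plus B or C": 0.20,  # A is involved, but not on its own
    "B or C only":   0.25,  # A plays no part at all
}

p_A_involved = cause_profiles["A only"] + cause_profiles["A plus B or C"]

print(f"{p_A_involved:.0%}")               # 75% - A is involved in most cases...
print(f"{cause_profiles['A only']:.0%}")   # 55% - ...but it's the whole story in just over half
# So an intervention aimed only at A helps a lot of children, but for any
# particular child we still can't tell, without further investigation, whether
# A applies at all, or whether it's the only cause.
```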

We know that poor phonological awareness is frequently associated with reading difficulties. Because SP trains children to be aware of phonological features in speech, and because that training improves word reading and spelling, it’s a safe bet that poor phonological awareness is also a cause of reading difficulties. But because reading is a complex skill, there are many possible causes for reading difficulties. We can’t assume that poor phonological awareness is the only cause, or that it’s a cause in all cases.

The evidence that SP improves children’s decoding ability is persuasive. However, the evidence also suggests that 12–15% of children will still struggle to learn to decode using SP, and that around 15% of children will struggle with reading comprehension. Having a method of reading instruction that works for most children is great, but education should benefit all children, and since the minority of children who struggle are the ones people keep complaining about, we need to pay attention to what causes reading difficulties for those children – as individuals. In education, one size might fit most, but it doesn’t fit all.

Reference

McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.

synthetic phonics and functional literacy: the missing link

According to Diane McGuinness in Why Children Can’t Read, first published in 1997, California’s low 4th grade reading scores prompted the state in 1996 to revert to using phonics rather than ‘real books’ for teaching reading. McGuinness, like the legislators in California, clearly expected phonics to make a difference to reading levels. It appears to have had little impact (NCES, 2013). McGuinness would doubtless point out that ‘phonics’ isn’t systematic synthetic phonics, and that might have made a big difference. Indeed it might. We don’t know.

Synthetic phonics and functional literacy

Synthetic phonics is important because it can break a link in a causal chain that leads to functional illiteracy:

• poor phonological awareness ->
• poor decoding ->
• poor reading comprehension ->
• functional illiteracy and low educational attainment

The association between poor phonological awareness and reading difficulties is well established. And obviously, if you can’t decode text you won’t understand it, and if you can’t understand text your educational attainment won’t be very high.

SP involves training children to detect, recognise and discriminate between phonemes, so we’d expect it to improve phonological awareness and decoding skills, and that’s exactly what studies have shown. But as far as I can tell, we don’t know what impact SP has on the rest of the causal chain; on functional literacy rates in school leavers or on overall educational attainment.

This is puzzling. The whole point of teaching children to read is so they can be functionally literate. The SP programmes McGuinness advocates have been available for at least a couple of decades, so there’s been plenty of time to assess their impact on functional literacy. One of them, Phono-graphix (developed by a former student of McGuinness’s, now her daughter-in-law), has been the focus of several peer-reviewed studies, all of which report improvements, but none of which appears to have assessed the impact on functional literacy by school leaving age. SP proponents have pointed out that this might be because they’ve had enough difficulty getting policy-makers to take SP seriously, let alone fund long-term pilot studies.

The Clackmannanshire study

One study that did involve SP and followed the development of literacy skills over time was carried out in Clackmannanshire in Scotland by Rhona Johnston and Joyce Watson, then based at the University of Hull and the University of St Andrews respectively.

They compared three reading instruction approaches implemented in Primary 1 and tracked children’s performance in word reading, spelling and reading comprehension up to Primary 7. The study found very large gains in word reading (3y 6m; fig 1) and spelling (1y 9m; fig 2) for the group of children who’d had the SP intervention. The report describes reading comprehension as “significantly above chronological age throughout”. What it’s referring to is a 7-month advantage in P1 that had reduced to a 3.5-month advantage by P7.

A noticeable feature of the Clackmannanshire study is that scores were presented as group means, although boys’ and girls’ scores and those of advantaged and disadvantaged children were differentiated. One drawback of aggregating scores this way is that it can mask effects within the groups. So an intervention might be followed by a statistically significant average improvement that’s caused by some children performing much better than others.
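
To see how aggregation can hide a struggling minority, here’s a minimal sketch (in Python, with invented numbers rather than the Clackmannanshire data): a group in which most children make large gains while a sizeable minority makes none still shows a healthy average improvement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented gains in reading age (in years) following an intervention:
# 85% of children respond well, 15% barely respond at all.
responders     = rng.normal(loc=2.0, scale=0.5, size=850)
non_responders = rng.normal(loc=0.0, scale=0.3, size=150)
gains = np.concatenate([responders, non_responders])

print(f"mean gain: {gains.mean():.2f} years")                           # ~1.7 years, looks impressive
print(f"children gaining under 6 months: {(gains < 0.5).mean():.0%}")   # ~15% left behind
```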

This is exactly what we see in the data on ‘underachievers’ (fig 9). Despite large improvements at the group level, by P7 5% of children were more than two years behind their chronological age norm for word reading, 10% for spelling and 15% for reading comprehension. The improvements in group scores on word reading and spelling increased with age – but so did the proportion of children who were more than two years behind. This is an example of the ‘Matthew effect’ that Keith Stanovich refers to; children who can decode read more so their reading improves, whereas children who can’t decode don’t read so don’t improve. For the children in the Clackmannanshire study as a group, SP significantly improved word reading and spelling and slightly improved their comprehension, but it didn’t eliminate the Matthew effect.

The phonics check

There’s a similar within-group variation in the English KS1 phonics check, introduced in 2012. If we ignore the strange shape of the graph in 2012 and 2013 (though Dorothy Bishop’s observations are worth reading), the percentage of Year 2 children who scored below the expected standard was 15% in 2013 and 12% in 2014. The sharp increase at the cut-off point suggests that there are two populations of children – those who grasp phonics and those who don’t. Or that most children have been taught phonics properly but some haven’t. There’s also a spike at the end of the long tail of children who don’t ‘get’ phonics at all for whatever reason, representing the 5783 children who scored 0.

It’s clear that SP significantly improves children’s ability to decode and spell – at the group level. But we don’t appear to know whether that improvement is due to children who can already decode a bit getting much better at it, or to children who previously couldn’t decode learning to do it, or both, or if there are some children for whom SP has no impact.

And I have yet to find evidence showing that SP reduces the rates of functional illiteracy that McGuinness, politicians and the press complain about. The proportion of school leavers who have difficulty with reading comprehension has hovered around 17% for decades in the US (NCES, 2013) and in the UK (Rashid & Brooks, 2010). A similar proportion of children in the US and the UK populations have some kind of learning difficulty. And according to the Warnock report that figure appears to have been stable in the UK since mass education was introduced.

The magical number 17 plus or minus 2

There’s a likely explanation for that 17% (or thereabouts). In a large population, some features (such as height, weight, IQ or reading ability) are the outcome of what are essentially random variables. If you measure one of those features across the population and plot a graph of your measurements, they will form what’s commonly referred to as a normal distribution – with the familiar bell curve shape. The curve will be symmetrical around the mean (average) score. Not only does that tell you that 50% of your population will score above the mean and 50% below it, it also enables you to predict what proportion of the population will be significantly taller/shorter, lighter/heavier, more/less intelligent or better/worse at reading than average. Statistically, around 16% of the population will score more than one standard deviation below the mean. Those people will be significantly shorter/lighter/less intelligent or have more difficulties with reading than the rest of the population.
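
That figure of around 16% isn’t plucked from the air; it follows directly from the shape of the normal distribution. A one-line check (a sketch in Python, using scipy):

```python
from scipy.stats import norm

# Proportion of a normal distribution lying more than one
# standard deviation below the mean.
print(norm.cdf(-1))  # 0.1587 -> roughly 16%
```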

Bell curves tend to ring alarm bells so I need to make it clear what I am not saying. I’m not saying that problems with reading are due to a ‘reading gene’ or biology or IQ, and so we can’t do anything about them. What I am saying is that if reading ability in a large population is the outcome of not just one factor, but many factors that are to all intents and purposes random, then it’s a pretty safe bet that around 16% of children will have a significant problem with it. What’s important for that 16% is figuring out what factors are causing reading problems for individual children within that group. There are likely to be several different causes, as the NCES (1993) study found. So a child might have reading difficulties due to persistent glue ear as an infant, an undiagnosed developmental disorder, having a mother with mental health problems who hardly speaks to them, having no books at home or because their family dismisses reading as pointless. Or all of the above. SP might help, but is unlikely to address all of the obstacles to word reading, spelling and comprehension that child faces.
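
The ‘many effectively random factors’ argument can be illustrated the same way. In the sketch below, the factors, their number and their ranges are all invented; the only point is that whatever the individual factors look like, their sum is roughly normally distributed, so around 16% of the simulated children end up more than one standard deviation below the mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented model: each child's 'reading ability' is the sum of 20 small,
# independent, essentially random influences (hearing, home language
# environment, quality of instruction, attendance, and so on).
n_children, n_factors = 100_000, 20
ability = rng.uniform(-1, 1, size=(n_children, n_factors)).sum(axis=1)

# The sum of many independent factors is approximately normal, so...
cutoff = ability.mean() - ability.std()
print(f"{(ability < cutoff).mean():.1%} fall more than 1 SD below the mean")  # ~16%
```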

The data show that SP enables 11 year-olds as a group to make huge gains in their word reading and spelling skills. That’s brilliant. Let’s use synthetic phonics.

The data also show that SP doesn’t eliminate reading comprehension problems for at least 15% of 11 year-olds – or the word reading problems of around 15% of 6-7 year-olds. That could be due to some SP programmes not being taught systematically enough, intensively enough or for long enough. But it could be due to other causes. If so, those causes need to be identified and addressed or the child’s functional literacy will remain at risk.

I can see why the Clackmannanshire study convinced the UK government to recommend, and then mandate, the use of SP for reading instruction in English schools (things are different in Scotland), but I haven’t yet found a follow-up study that measured literacy levels at 16, or the later impact on educational attainment; and the children involved in the study would now be in their early 20s.

What concerns me is that if more is being implicitly claimed for SP than it can actually deliver, or if it fails to produce a substantial improvement in the functional literacy of school leavers in a decade’s time, then it’s likely to be dismissed as yet another educational ‘fad’ and abandoned, regardless of the gains it brings in decoding and spelling. Meanwhile, the many other factors involved in reading comprehension are at risk of being marginalised if policy-makers pin their hopes on SP alone. Which just goes to show why nationally mandated educational policies should be thoroughly piloted and evaluated before they are foisted on schools.


References

McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.
NCES (1993). Adult Literacy in America. National Center for Educational Statistics.
NCES (2013). Trends in Academic Progress. National Center for Educational Statistics.
Rashid, S & Brooks, G (2010). The levels of attainment in literacy and numeracy of 13- to 19-year-olds in England, 1948–2009. National Research and Development Centre for adult literacy and numeracy.

the nation’s report card: functional literacy

Synthetic phonics (SP) proponents make some bold claims about the impact SP has on children’s ability to decode text. Sceptics often point out that decoding isn’t reading – comprehension is essential as well. SP proponents retort that of course decoding isn’t all there is to reading, but if a child can’t decode, comprehension will be impossible. You can’t argue with that, and there’s good evidence for the efficacy of SP in facilitating decoding. But what impact has it had on reading? I feel as if I’ve missed something obvious here (maybe I have) but as far as I’ve been able to ascertain, the answer is that we don’t know.

Although complaints about literacy from politicians, employers and the public focus on the reading ability of school leavers, the English education system has concentrated on early literacy and on decoding. I can understand why; not being able to decode can have major repercussions for individual children and for schools. But decoding and adult functional literacy seem to be linked only by an assumption that the primary cause of functional illiteracy is the inability to decode. This assumption doesn’t appear to be supported by the data. I should emphasise that I’ve never come across anyone who has claimed explicitly that SP will make a significant dent in functional illiteracy. But SP proponents often tut-tut about functional literacy levels, and when Diane McGuinness discusses it in Why Children Can’t Read and What We Can Do About It, she makes the implication quite clear.

McGuinness, who has a first degree from Birkbeck College and a PhD from University College London and is now Emeritus Professor of Psychology at the University of South Florida, has focussed her work on reading instruction. She’s a tireless advocate for SP and is widely cited by SP supporters. Her books are informative and readable, if rather idiosyncratic, and Why Children Can’t Read is no exception. In it, she explains how writing systems developed, takes us on a tour of reading research, points us to effective remedial programmes and tops it all off with detailed instructions for teachers and parents who want to use her approach to teaching decoding. But before moving on to what she says about functional literacy, it’s worth considering what she has to say about science.

This is doing science.

Her chapter ‘Science to the rescue’ consists largely of a summary of research into reading difficulties. However, McGuinness opens with a section called ‘What science is and isn’t’ in which she has a go at Ken Goodman. It’s not her criticism of Goodman’s work that bothers me, but the criteria she uses to do so. After listing various kinds of research carried out by journalists, academics doing literature reviews or observing children in classrooms, she says; “None of these activities qualify as scientific research. Science can only work when things can be measured and recorded in numbers” (p.127). This is an extraordinary claim. In one sentence, McGuinness dismisses operationalizing constructs, developing hypotheses, and qualitative research methods (that don’t measure things or put numbers on them) as not being scientific.

She uses this sweeping claim to discredit Goodman, who, as she points out elsewhere, wasn’t a ‘psycholinguist’ (p.55). (As I mentioned previously, McGuinness also ridicules quotes from Frank Smith – who was a ‘psycholinguist’ – but doesn’t mention him by name in the text; that’s tucked away in her Notes section.) She rightly points out that using the words ‘research’ and ‘scientific’ doesn’t make what Goodman is saying, science. And she rightly wonders about his references to his beliefs. But she then goes on to question the phonetics and linguistics on which Goodman bases his model;

There is no ‘science’ of how sounds and letters work together in an alphabet. This is strictly an issue of categorisation and mapping relationships… Goodman proceeds to discuss rudimentary phonetics and linguistics, leading the reader to believe that they are sciences. They are not. They are descriptive disciplines and depend upon other phoneticians and linguists agreeing with you. …Classifying things is not science. It is the first step to begin to do science.” (p.128)

McGuinness has a very narrow view of science. She reduces it to quantitative research methods and misunderstands the role of classification in scientific inquiry. Biology took an enormous leap forward when Linnaeus developed a classification system that worked for all living organisms. Similarly, Mendeleev’s periodic table enabled chemists to predict the properties of as yet undiscovered elements. Linguists’ categorisation of speech sounds is, ironically, what McGuinness used to develop her approach to reading instruction. What all these classification systems have in common is not just their reliability (level of agreement between the people doing the classification) but their validity (based on the physical structure of organisms, atoms and speech sounds).

McGuinness’s view of science explains why she seems most at home with data that are amenable to measurement, so it was instructive to see how she extracts information from data in her opening chapter ‘Reading report card’. She discusses the results of four large-scale surveys in the 1990s of ‘functional literacy’ (p.10). Two, published by the National Center for Education Statistics (NCES), compared adult and child literacy in the US, and two, by the Organisation for Economic Co-operation and Development (OECD), included the US, Canada and five non-English-speaking countries.

Functional literacy data

Functional literacy was assessed using a 5-level scale. Level 1 ranged from not being able to read at all to a reading task that “required only the minimum level of competence” – for example extracting information from a short newspaper article. Level 5 involved a fact sheet for potential jurors (NCES, 1993, pp.73-84).

In the NCES study, 21% of the US adult population performed at level 1 “indicating that they were functionally illiterate” (McGuinness, p.10) and 47% scored at levels 1 or 2. Despite the fact that level 2 was above the minimum level of competence, McGuinness describes the level 1+2 group as “barely literate”. Something she omits to tell us is what the NCES report has to say about the considerable heterogeneity of the level 1 group. 25% were born abroad. 35% had had fewer than 8 years of schooling. 33% were 65 or older. 26% reported a ‘physical, mental or health condition’ that affected their day-to-day functioning, and 19% a visual impairment that made it difficult for them to read print (NCES, 1993, pp.16-18).

The OECD study showed that functional illiteracy (level 1) varied slightly across English-speaking countries – between 17% and 22%. McGuinness doesn’t tell us what the figures were for the five non-English speaking countries, apart from Sweden with a score of 7.5% at level 1 – half that of the English-speaking countries. The most likely explanation is the relative transparency of the orthographies – Swedish spelling was standardised as recently as 1906. But McGuinness doesn’t mention orthography as a factor in literacy results; instead “Sweden has set the benchmark for what school systems can achieve” (p.11). McGuinness then goes on to compare reading proficiency in different US States.

The Nation’s Report Card

McGuinness describes functional illiteracy levels in English-speaking countries as ‘dismal’, ‘sobering’, ‘shocking’ and ‘a literacy crisis’. She draws attention to the fact that after California mandated the use of the ‘real books’ (whole language) approach to reading instruction in 1987, it came low down the US national league tables for 4th grade reading in 1992, and then tied ‘for a dead last’ with Louisiana in 1994 (p.11). California’s score had decreased by only 5 points (from 202 to 197, the entire range being 182-228) (NCES, 1996, p.47), but there was perhaps a stigma attached to being tied ‘dead last with Louisiana’: in 1996, the year before Why Children Can’t Read was first published, phonics was reintroduced into Californian classrooms together with more than a billion dollars for teacher training.

What difference did it make? Not much, it seems. Although California’s 4th grade reading scores had recovered by 1998 (NCES, 1999, p.113), and improved further by 2011 (NCES, 2013b), the increase wasn’t statistically significant.

Indeed, whatever method of reading instruction has been used in the US, it doesn’t appear to have had much overall impact on reading standards. At age 17, the proportion of ‘functionally illiterate’ US readers has fluctuated between 14% and 21% – an average of 17% – since 1971 (NCES, 2013b). And in the UK the figure has remained ‘stubbornly’ around 17% since WW2 (Rashid & Brooks, 2010).

Functional illiteracy levels in the English-speaking world are higher than in many non-English-speaking countries, and have remained stable for decades. Functional illiteracy is a long-standing problem and McGuinness, at least, implies that SP can crack it. In the next post I want to look at the evidence for that claim.

References

McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.
NCES (1993). Adult Literacy in America. National Center for Educational Statistics.
NCES (1996). NAEP 1994 Reading Report Card for the Nation and the States. National Center for Educational Statistics.
NCES (1999). NAEP 1998 Reading Report Card for the Nation and the States. National Center for Educational Statistics.
NCES (2013a). Mega-States: An Analysis of Student Performance in the Five Most Heavily Populated States in the Nation. National Center for Educational Statistics.
NCES (2013b). Trends in Academic Progress. National Center for Educational Statistics.
Rashid, S & Brooks, G (2010). The levels of attainment in literacy and numeracy of 13- to 19-year-olds in England, 1948–2009. National Research and Development Centre for adult literacy and numeracy.

whole language and ideology

It took my son two years to learn to read. Despite his love of books and a lot of hard work, he just couldn’t manage it. Eventually he cracked it. Overnight. All by himself. Using whole word recognition. He’s the only member of our family who didn’t learn to read effortlessly – and he’s the only one who was taught to read using synthetic phonics (SP). SP was the bee’s knees at the time – his reading breakthrough happened a few months before the interim Rose Report was published. Baffled, I turned to the TES forum for insights and met the synthetic phonics teachers. They explained systematic synthetic phonics. They questioned whether my son had been taught SP systematically or intensively enough. (He hadn’t.) And they told me that SP was based on scientific evidence, whereas the whole language approach, which they opposed, was ideologically driven.

SP supporters are among the most vocal advocates for evidence-based education policies, so I checked out the evidence. What I could find, that is. Much of it predated the internet or was behind a paywall. What I did find convinced me that SP was the most effective way of teaching children to decode text. I’m still convinced. But the more I read, the more sceptical I became about some of the other claims made by SP proponents. In the next few posts, I want to look at three claims; about the whole language approach to learning to read, the impact of SP, and reading and the brain.

whole language: evidence and ideology

The once popular whole language approach to learning to read was challenged by research findings that emerged during the 1980s and 90s. The heated debate that ensued is often referred to as the Reading Wars. The villains of the piece for SP proponents seemed to be a couple of guys called Goodman and Smith. I was surprised to find that they are both academics. Frank Smith has a background in psycholinguistics, a PhD from Harvard and a co-authored book with his supervisor, George “the magical number seven” Miller. Ken Goodman had accumulated an array of educational awards. Given their credentials, ideology clearly wasn’t the whole story.

In 1971 Frank Smith published Understanding Reading: A Psycholinguistic Analysis of Reading and Learning to Read, which explains the whole language approach. It’s a solid but still readable and still relevant summary of how research from cognitive science and linguistics relates to reading. So how did Smith end up fostering the much maligned – and many would say discredited – whole language approach?

bottom-up vs top-down

By 1971 it was well established that brains process sensory information in a ‘bottom-up’ fashion. Cognitive research showed that complex visual and auditory input from the environment is broken down into simple fragments by the sense organs. The fragments are then reconstituted in the brain, step-by-step, into the whole visual images or patterns of sound that we perceive. This process is automatic and pre-conscious and gets faster and more efficient the more familiar we are with a particular item.

But this step-by-step sequential model of cognitive processing didn’t explain what readers did. Research showed that people read words faster than non-words, that they can identify words from only a few key features, and that the meaning of the beginning of a sentence influences the way they pronounce words at the end of it (as in ‘her eyes were full of tears’).

According to the sequential model of cognition, this is impossible; you can’t determine the meaning of a word before you’ve decoded it. The only explanation that made sense was that a ‘top-down’ processing system was also in operation. What wasn’t clear at the time was how the two systems interacted. A common view was that the top-down process controlled the bottom-up one.

For Smith, the top-down model had some important implications such as:

• Young children wouldn’t be able to detect the components of language (syllables, phonemes, nouns, verbs etc) so teaching reading using components wouldn’t be effective.
• If children had enough experience of language, spoken and written, they would learn to read as easily as they learned to speak.
• Skilled readers would use contextual cues to identify words; poorer readers would rely more heavily on visual features.

Inspired by Smith’s model of reading, Keith Stanovich and Richard West, then graduate students at the University of Michigan, decided to test the third hypothesis. To their surprise, they found exactly the opposite of Smith’s prediction. The better readers were, the more they relied on visual recognition. The poorer readers relied more on context. It wasn’t that the skilled readers weren’t using contextual cues, but their visual recognition process was simply faster – they defaulted to using context if visual recognition failed.

As Stanovich explains (Stanovich, 2000, pp.21-23), the flaw in most top-down models of reading was that they assumed top-down controlled bottom-up processing. What Stanovich and West’s finding implied (and later research supported) was that the two systems interacted at several levels. Although some aspects of Smith’s model were wrong, it was based on robust evidence. So why did SP proponents think it was ideologically driven? One clue is in Ken Goodman’s work.

a psycholinguistic guessing game

Smith completed his PhD in 1967, the year that Goodman, then an Associate Professor at Wayne State University, Detroit, published his (in)famous article in the Journal of the Reading Specialist “Reading: A psycholinguistic guessing game”. The title is derived from a key concept in contemporary reading models – that skilled readers used rapid, pre-conscious hypothesis testing to identify words. It’s an eye-catching title, but open to misunderstanding; the skilled ‘guessing game’ that Goodman was referring to is very different from getting a beginner reader to have a wild stab at an unfamiliar word. Which was why Goodman and Smith recommended extensive experience of language.

Goodman’s background was in education rather than psycholinguistics. According to Diane McGuinness (McGuinness, 1998, p.129), Goodman does have some peer-reviewed publications, but the most academic text I could find was his introduction to The Psycholinguistic Nature of the Reading Process, published in 1968. In contrast to the technical content of the rest of the book, Goodman’s chapter provides only a brief overview of reading from a psycholinguistic perspective, and in the four-sentence chapter summary he refers to his ‘beliefs’ twice – a tendency McGuinness uses as evidence against him. (Interestingly, although she also ridicules some quotes from Smith, his name is tucked away in her Notes section.)

Although Goodman doesn’t come across as a heavyweight academic, the whole language model he enthusiastically supports is nonetheless derived from the same body of evidence used by Smith and Stanovich. And the miscue analysis technique Goodman developed is now widely used to identify the strategies adopted by individual readers. So where does ideology come in?

Keith Stanovich sheds some light on this question in Progress in Understanding Reading. Published in 2000, it’s a collection of Stanovich’s key papers spanning a 25-year career. In the final section he reflects on his work and the part it played in the whole language debate. Interestingly, Stanovich emphasises what the two sides had in common. Here’s his take on best practice in the classroom;

Fortunately the best teachers have often been wise enough to incorporate the most effective practices from the two different approaches into their instructional programs.” (p.361)

and on the way research findings have been used in the debate;

Whole language proponents link [a model of the reading process at variance with the scientific data] with the aspects of whole language philosophy that are legitimately good and upon which virtually no researchers disagree.” (p.362)

correspondence and coherence

For Stanovich the heat in the debate didn’t come from disagreements between reading researchers, but from the clash between two conflicting theories about the nature of truth; correspondence vs coherence. Correspondence theory assumes that there is a real world out there, independent of our perceptions of it. In contrast the coherence theory assumes that our “knowledge is internally constructed – that our evolving knowledge is not tracking an independently existing world, but that internally constructed knowledge literally is the world” (p.371, emphasis Stanovich’s). The whole language model fits nicely into the coherence theory of truth, so research findings that challenged whole language also challenged what Stanovich describes as the “extreme constructivism” of some whole language proponents.

Stanovich also complains that whole language proponents often fail to provide evidence for their claims, cherry-pick supporting evidence only, ignore contradictory evidence and are prone to the use of straw men and ad hominem attacks. He doesn’t mention that synthetic phonics proponents are capable of doing exactly the same. I don’t think this is due to bias on his part; what’s more likely is that when his book was published in 2000 the whole language model had had plenty of time to filter through to classrooms, policy makers’ offices and university education and philosophy departments. The consensus on synthetic phonics was relatively new and hadn’t gathered so much popular support. Fifteen years on, that situation has changed. In my experience, some SP proponents are equally capable of making sweeping claims, citing any supporting evidence regardless of its quality, and being dismissive towards anyone who disagrees with anything they believe. Which brings me to the subject of my next post; claims about what SP can achieve.

references

Goodman, K. (Ed.) (1968). The Psycholinguistic Nature of the Reading Process. Wayne State University Press.
McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.
Smith, F. (1971). Understanding Reading: A Psycholinguistic Analysis of Reading and Learning to Read. Lawrence Erlbaum. (My copy is 4th edition, published 1988).
Stanovich, K (2000). Progress in Understanding Reading. Guilford Press.

Kieran Egan’s “The educated mind” 2

The second post in a two-part review of Kieran Egan’s book The Educated Mind: How Cognitive Tools Shape our Understanding.

For Egan, a key point in the historical development of understanding was the introduction by the Greeks of a fully alphabetic representation of language – it included symbols for vowels as well as consonants. He points out that being able to represent speech accurately in writing gives people a better understanding of how they use language and therefore of the concepts that language represents. Egan attributes the flowering of Greek reasoning and knowledge to their alphabet “from which all alphabetic systems are derived” (p.75).

This claim would be persuasive if it were accurate. But it isn’t. As far as we know, the Phoenicians – renowned traders – invented the first alphabetic representation of language. It was a consonantal alphabet that reflected the structure of Semitic languages and it spread through the Middle East. The Greeks adapted it, introducing symbols for vowels. This wasn’t a stroke of genius on their part – Semitic writing systems also used symbols for vowels where required for disambiguation – but a necessary addition because Greek is an Indo-European language with a syllabic structure. The script used by the Mycenaean civilisation that preceded classical Greece was a syllabic one.

“a distinctive kind of literate thinking”

Egan argues that this alphabet enabled the Greeks to develop “extended discursive writing” that “is not an external copy of a kind of thinking that goes on in the head; it represents a distinctive kind of literate thinking” (p.76). I agree that extended discursive writing changes thinking, but I’m not convinced that it’s distinctive nor that it results from literacy.

There’s been some discussion amongst teachers recently about the claim that committing facts to long-term memory mitigates the limitations of working memory. Thorough memorisation of information certainly helps – we can recall it quickly and easily when we need it – but we can still only juggle half-a-dozen items at a time in working memory. The pre-literate and semi-literate civilisations that preceded the Greeks relied on long-term memory for the storage and transmission of information because they didn’t have an alternative. But long-term memory has its own limitations in the form of errors, biases and decay. Even people who had memorisation down to a fine art were obliged to develop writing in order to have an accurate record of things that long-term memory isn’t good at handling, such as what’s in sealed sacks and jars and how old it is. Being able to represent spoken language in writing takes things a step further. Written language not only circumvents the weaknesses of long-term memory, it helps with the limitations of working memory too. Extended discursive writing can encompass thousands of facts, ideas and arguments that a speaker and a listener would find it impossible to keep track of in conversation. So extended discursive writing doesn’t represent “a distinctive kind of literate thinking” so much as significantly extending pre-literate thinking.

the Greek miracle

It’s true that the sudden arrival in Greece of “democracy, logic, philosophy, history, drama [and] reflective introspection… were explainable in large part as an implication of the development and spread of alphabetic literacy” (p.76). But although alphabetic literacy might be a necessary condition for the “Greek miracle”, it isn’t a sufficient one.

Like those of all the civilisations that had preceded them, the economies of the Greek city states were predominantly agricultural, although they also supported thriving industries in mining, metalwork, leatherwork and pottery. Over time agricultural communities had figured out more efficient ways of producing, storing and trading food. Communities learn from each other, so sooner or later, one of them would produce enough surplus food to free up some of its members to focus on thinking and problem-solving, and would have the means to make a permanent record of the thoughts and solutions that emerged. The Greeks used agricultural methods employed across the Middle East, adapted the Phoenician alphabet, and fuelled their economy with slavery as previous civilisations had done. The literate Greeks were standing on the shoulders of pre-literate Middle Eastern giants.

The ability to make a permanent record of thoughts and solutions gave the next generation of thinkers and problem-solvers a head start and created the virtuous cycle of understanding that’s continued almost unabated to the present day. I say almost unabated, because there have been periods during which it’s been impossible for communities to support thinkers and problem-solvers; earthquakes, volcanic eruptions, drought, flood, disease, war and invasion have all had a devastating and long-term impact on food production and on the infrastructure that relies on it.

language, knowledge and understanding

Egan’s types of understanding – Somatic, Mythic, Romantic, Philosophic and Ironic – have descriptive validity; they do reflect the way understanding has developed historically, and the way it develops in children. But from a causal perspective, although those phases correlate with literacy they also correlate with the complexity of knowledge. As complexity of knowledge increases, so understanding shifts from binary to scalar to systematic to the exceptions to systems; binary classifications, for example, are characteristic of the way people, however literate they are, tend to categorise knowledge in a domain that’s new to them (e.g. Lewandowski et al, 2005).

Egan doesn’t just see literacy as an important factor in the development of understanding, he frames understanding in terms of literacy. What this means is that in Egan’s framework, knowledge (notably pre-verbal and non-verbal knowledge) has to get in line behind literacy when it comes to the development of understanding. It also means that Egan overlooks the key role of agriculture and trade in the development of writing systems and of the cultures that invented them. And that apprenticeship, for millennia widely used as a means of passing on knowledge, is considered only in relation to ‘aboriginal’ cultures (p.49). And that Somatic understanding is relegated to a few pages at the end of the chapter on the Ironic.

non-verbal knowledge

These are significant oversights. Non-verbal knowledge is a sine qua non for designers, artisans, architects, builders, farmers, engineers, mariners, surgeons, physiotherapists, artists, chefs, parfumiers, musicians – the list goes on and on. It’s true that much of the knowledge associated with these occupations is transmitted verbally, but much of it can’t be transmitted through language at all and can be acquired only by looking, listening or doing. Jenny Uglow, in The Lunar Men, attributes the speed at which the industrial revolution took place not to literacy but to the development of a way to reproduce technical drawings accurately.

Egan appears sceptical about practical people and practical things because when

those who see themselves as practical people engaging in practical things [who] tend not to place any value on acquiring the abstract languages framed to deal with an order that underlies surface diversity” are “powerful in government, education departments and legislatures, pressures mount for an increasingly down-to-earth, real-world curriculum. Abstractions and theories are seen as idle, ivory-tower indulgences removed from the gritty reality of sensible life.” (p.228)

We’re all familiar with the type of people Egan refers to, and I’d agree that the purpose of education isn’t simply to produce a workforce for industry. But there are other practical people engaging in practical things who are conspicuous by their absence from this book; farmers, craftspeople, traders and engineers who are very interested in abstractions, theories and the order that underlies surface diversity. The importance of knowledge that’s difficult to verbalise has significant implications for the curriculum and for the traditional academic/vocational divide. Although there is clearly a difference between ‘abstractions and theories’ and their application, theory and application are interdependent; neither is more important than the other, something that policy-makers often find difficult to grasp.

Egan acknowledges that there’s a problem with emphasising the importance of non-verbal knowledge in circles that assume that language underpins understanding. As he points out “Much modernist and postmodernist theory is built on the assumption that human understanding is essentially languaged understanding” (p.166). Egan’s framework elbows aside language to make room for non-verbal knowledge, but it’s a vague, incoherent “ineffable” sort of non-verbal knowledge that’s best expressed linguistically through irony (p.170). It doesn’t appear to include the very coherent, concrete kind of non-verbal knowledge that enables us to grow food, build bridges or carry out heart-transplants.

the internal coherence of what’s out there

Clearly, bodies of knowledge transmitted from person to person via language will be shaped by language and by the thought-processes that produce it, so the knowledge transmitted won’t be 100% complete, objective or error-free. But a vast amount of knowledge refers to what’s out there, and what’s out there has an existence independent of our thought-processes and language. What’s out there also has an internally coherent structure that becomes clearer the more we learn about it, so over time our collective bodies of knowledge more accurately reflect what’s out there and become more internally coherent despite their incompleteness, subjectivity and errors.

The implication is that in education, the internal coherence of knowledge itself should play at least some part in shaping the curriculum. But because the driving force behind Egan’s framework is literacy rather than knowledge, the internal coherence of knowledge can’t get a word in edgeways. During the Romantic phase of children’s thinking, for example, Egan recommends introducing topics randomly to induce ‘wonder and awe’ (p.218), rather than introducing them systematically to help children make sense of the world. To me this doesn’t look very different from the “gradual extension from what is already familiar” (p.86) approach of which Egan is pretty critical. I thought the chapter on Philosophic understanding might have something to say about this but it’s about how people think about knowledge rather than the internal coherence of knowledge itself – not quite the same thing.

the cherries on the straw hat of society

The sociologist Jacques Ellul once described hippies as the cherries on the straw hat of society*, meaning that they were in a position to be critical of society only because of the nature of the society of which they were critical. I think this also serves as an analogy for Egan’s educational framework. He’s free to construct an educational theory framed solely in terms of literacy only because of the non-literate knowledge of practical people like farmers, craftspeople, traders and engineers. That brings me back to my original agricultural analogy; wonder and awe, like apple blossom and the aroma of hops, might make our experience of education and of agriculture transcendent, but if it wasn’t for coherent bodies of non-verbal knowledge and potatoes, swedes and Brussels sprouts, we wouldn’t be in a position to appreciate transcendence at all.

References

Lewandowski G, Gutschow A, McCartney R, Sanders K, Shinners-Kennedy D (2005). What novice programmers don’t know. Proceedings of the first international workshop on computing education research, 1-12. ACM New York, NY.

Uglow, J (2003). The Lunar Men: The Friends who made the Future. Faber & Faber.

Note
*I can’t remember which of Ellul’s books this reference is from and can’t find it quoted anywhere. If anyone knows, I’d be grateful for the source.