educating the evolved mind: education

The previous two posts have been about David Geary’s concepts of primary and secondary knowledge and abilities, and about evolved minds and intelligence. This post is about how Geary applies his model to education in Educating the Evolved Mind.

There’s something of a mismatch between the cognitive and educational components of Geary’s model.  The cognitive component is a range of biologically determined functions that have evolved over several millennia.  The educational component is a culturally determined education system cobbled together in a somewhat piecemeal and haphazard fashion over the past century or so.

The education system Geary refers to is typical of the schooling systems in developed industrialised nations and, according to his model, focuses on providing students with biologically secondary knowledge and abilities. Geary points out that many students prefer to focus on biologically primary knowledge and abilities such as sports and hanging out with their mates (p.52). He recognises they might not see the point of what they are expected to learn and might need its importance explained to them in terms of social value (p.56). He suggests ‘low-achieving’ students especially might need explicit, teacher-driven instruction (p.43).

You’d think, if cognitive functions have been biologically determined through thousands of years of evolution, that it would make sense to adapt the education system to the cognitive functions, rather than the other way round. But Geary doesn’t appear to question the structure of the current US education system at all; he accepts it as a given. I suggest that, given how human cognition works, it might be worth taking a step back and re-thinking the education system itself in the light of the following principles:

1. communities need access to expertise

Human beings have been ‘successful’, in evolutionary terms, mainly due to our use of language. Language means it isn’t necessary for each of us to learn everything for ourselves from scratch; we can pass on information to each other verbally. Reading and writing allow knowledge to be transmitted across time and space. The more knowledge we have as individuals and communities, the better our chances of survival and a decent quality of life.

But, although it’s desirable for everyone to be a proficient reader and writer and to have an excellent grasp of collective human knowledge, that’s not necessary in order for each of us to have a decent quality of life. What each community needs is a critical mass of people with good knowledge and skills.

Also, human knowledge is now so vast that no one can be an expert on everything; what’s important is that everyone has access to the expertise they need, when and where they need it.  For centuries, communities have facilitated access to expertise by educating and training experts (from carpenters and builders to doctors and lawyers) who can then share their expertise with their communities.

2. education and training is not just for school

Prior to the development of mass education systems, most children’s and young people’s education and training would have been integrated into the communities in which they lived. They would understand where their new knowledge and skills fitted into the grand scheme of things and how it would benefit them, their families and others. But schools in mass education systems aren’t integrated into communities. The education system has become its own specialism. Children and young people are withdrawn from their community for many hours to be taught whatever knowledge and skills the education system thinks fit. The idea that good exam results will lead to good jobs is expected to provide sufficient motivation for students to work hard at mastering the school curriculum.  Geary recognises that it doesn’t.

For most of the millennia during which cognitive functions have been developing, children and young people were actively involved in producing food or making goods, and their education and training was directly related to those tasks. Now it isn’t. I’m not advocating a return to child labour; what I am advocating is ensuring that what children and young people learn in school is directly and explicitly related to life outside school.

Here’s an example: A highlight of the Chemistry O level course I took many years ago was a visit to the nearby Avon (make-up) factory. Not only did we each get a bag of free samples, but in the course of an afternoon the relevance of all that rote learning of industrial applications, all that dry information about emulsions, fat-soluble dyes, anti-fungal additives etc. suddenly came into sharp focus. In addition, the factory was a major local employer and the Avon distribution network was very familiar to us, so the whole end-to-end process made sense.

What’s commonly referred to as ‘academic’ education – fundamental knowledge about how the world works – is vital for our survival and wellbeing as a species. But knowledge about how the world works is also immensely practical. We need to get children and young people out, into the community, to see how their communities apply knowledge about how the world works, and why it’s important. The increasing emphasis in education in the developed world on paper-and-pencil tests, examination results and college attendance is moving the education system in the opposite direction, away from the practical importance of extensive, robust knowledge to our everyday lives.  And Geary appears to go along with that.

3. (not) evaluating the evidence

Broadly speaking, Geary’s model has obvious uses for teachers. There’s considerable supporting evidence for a two-phase model of cognition, ranging from Fodor’s distinction between specialised, stable systems and general, unstable ones, to the System 1/System 2 model Daniel Kahneman describes in Thinking, Fast and Slow. Whether the difference between Geary’s biologically primary and secondary knowledge and abilities is as clear-cut as he claims is a different matter.

It’s also well established that in order to successfully acquire the knowledge usually taught in schools, children need the specific abilities that are measured by intelligence tests; that’s why the tests were invented in the first place. And there’s considerable supporting evidence for the reliability and predictive validity of intelligence tests. They clearly have useful applications in schools. But it doesn’t follow that what we call intelligence or g (never mind gF or gC) is anything other than a construct created by the intelligence test.

In addition, the fact that there is evidence that supports Geary’s claims doesn’t mean all his claims are true. There might also be considerable contradictory evidence; in the case of Geary’s two-phase model the evidence suggests the divide isn’t as clear-cut as he claims, and the reification of intelligence has been widely critiqued. Geary mentions the existence of ‘vigorous debate’ but doesn’t go into details and doesn’t evaluate the evidence by actually weighing up the pros and cons.

Geary’s unquestioning acceptance of the concepts of modularity, intelligence and education systems in the developed world increases the likelihood that teachers will follow suit and simply accept Geary’s model as a given. I’ve seen the concepts of biologically primary and secondary knowledge and abilities, crystallised intelligence (gC) and fluid intelligence (gF), and the idea that students with low gF who struggle with biologically secondary knowledge just need explicit direct instruction, all asserted as if they must be true – presumably because an academic has claimed they are and cited evidence in support.

This absence of evaluation of the evidence is especially disconcerting in anyone who emphasises the importance of teachers becoming research-savvy and developing evidence-based practice, or who posits models like Geary’s in opposition to the status quo. The absence of evaluation is also at odds with the oft-cited requirement for students to acquire robust, extensive knowledge about a subject before they can understand, apply, analyse, evaluate or use it creatively. That requirement applies only to school children, it seems.

references

Fodor, J (1983). The modularity of mind. MIT Press.

Geary, D (2007). Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. In JS Carlson & JR Levin (Eds), Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. Information Age Publishing.

Kahneman, D (2012). Thinking, fast and slow. Penguin.


is systematic synthetic phonics generating neuromyths?

A recent Twitter discussion about systematic synthetic phonics (SSP) was sparked by a note to parents of children in a reception class, advising them what to do if their children got stuck on a word when reading. The first suggestion was “encourage them to sound out unfamiliar words in units of sound (e.g. ch/sh/ai/ea) and to try to blend them”. If that failed, “can they use the pictures for any clues?” Two other strategies followed. The ensuing discussion began by questioning the wisdom of using pictures for clues and then went off at many tangents – not uncommon in conversations about SSP.
[image: richard adams reading clues]

SSP proponents are, rightly, keen on evidence. The body of evidence supporting SSP is convincing but it’s not the easiest to locate; much of the research predates the internet by decades or is behind a paywall. References are often to books, magazine articles or anecdote; not to be discounted, but not what usually passes for research. As a consequence it’s quite a challenge to build up an overview of the evidence for SSP that’s free of speculation, misunderstandings and theory that’s been superseded. The tangents that came up in this particular discussion are, I suggest, the result of assuming that if something is true for SSP in particular it must also be true for reading, perception, development or biology in general. Here are some of the inferences that came up in the discussion.

You can’t guess a word from a picture
Children’s books are renowned for their illustrations. Good illustrations can support or extend the information in the text, showing readers what a chalet, a mountain stream or a pine tree looks like, for example. Author and artist usually have detailed discussions about illustrations to ensure that the book forms an integrated whole and is not just a text with embellishments.

If the child is learning to read, pictures can serve to focus attention (which could be wandering anywhere) on the content of the text and can have a weak priming effect, increasing the likelihood of the child accessing relevant words. If the picture shows someone climbing a mountain path in the snow, the text is unlikely to contain words about sun, sand and ice-creams.

I understand why SSP proponents object to the child being instructed to guess a particular word by looking at a picture; the guess is likely to be wrong and the child distracted from decoding the word. But some teachers don’t seem to be keen on illustrations per se. As one teacher put it, “often superficial time consuming detract from learning”.

Cues are clues are guesswork
The note to parents referred to ‘clues’ in the pictures. One contributor cited a blogpost that claimed “with ‘mixed methods’ eyes jump around looking for cues to guess from”. Clues and cues are often used interchangeably in discussions about phonics on social media. That’s understandable; the words have similar meanings and a slip on the keyboard can transform one into the other. But in a discussion about reading methods, the distinction between guessing, clues and cues is an important one.

Guessing involves drawing conclusions in the absence of enough information to give you a good chance of being right; it’s haphazard, speculative. A clue is a piece of information that points you in a particular direction. A cue has a more specific meaning depending on context; e.g. theatrical cues, social cues, sensory cues. In reading research, a cue is a piece of information about something the observer is interested in or a property of a thing to be attended to. It could be the beginning sound or end letter of a word, or an image representing the word. Cues are directly related to the matter in hand, clues are more indirectly related, guessing is a stab in the dark.

The distinction is important because if teachers are using the terms cue and clue interchangeably and assuming they both involve guessing, there’s a risk they’ll mistakenly dismiss references to ‘cues’ in reading research as guessing or clues, which they are not.

Reading isn’t natural
Another distinction that came up in the discussion was the idea of natural vs. non-natural behaviours. One argument for children needing to be actively taught to read rather than picking it up as they go along is that reading, unlike walking and talking, isn’t a ‘natural’ skill. The argument goes that reading is a relatively recent technological development so we couldn’t possibly have evolved mechanisms for reading in the same way as we have evolved mechanisms for walking and talking. One proponent of this idea is Diane McGuinness, an influential figure in the world of synthetic phonics.

The argument rests on three assumptions. The first is that we have evolved specific mechanisms for walking and talking but not for reading. The ideas that evolution has an aim or purpose, and that if everybody does something we must have evolved a dedicated mechanism to do it, are strongly contested by those who argue instead that we can do what our anatomy and physiology enable us to do (see the arguments over Chomsky’s linguistic theory). But you wouldn’t know about that long-standing controversy from reading McGuinness’s books or comments from SSP proponents.

The second assumption is that children learn to walk and talk without much effort or input from others. One teacher called the natural/non-natural distinction “pretty damn obvious”. But sometimes the pretty damn obvious isn’t quite so obvious when you look at what’s actually going on. By the time they start school, the average child will have rehearsed walking and talking for thousands of hours. And most toddlers experience a considerable input from others when developing their walking and talking skills even if they don’t have what one contributor referred to as a “WEIRDo Western mother”. Children who’ve experienced extreme neglect (such as those raised in the notorious Romanian orphanages) tend to show significant developmental delays.

The third assumption is that learning to use technological developments requires direct instruction. Whether it does or not depends on the complexity of the task. Pointy sticks and heavy stones are technologies used in foraging and hunting, but most small children can figure out for themselves how to use them – as do chimps and crows. Is the use of sticks and stones by crows, chimps or hunter-gatherers natural or non-natural? A bicycle is a man-made technology more complex than sticks and stones, but most people are able to figure out how to ride a bike simply by watching others do it, even if a bit of practice is needed before they can do it themselves. Is learning to ride a bike with a bit of support from your mum or dad natural or non-natural?

Reading English is a more complex task than riding a bike because of the number of letter-sound correspondences. You’d need a fair amount of watching and listening to written language being read aloud to be able to read for yourself. And you’d need considerable instruction and practice before being able to fly a fighter jet because the technology is massively more complex than that involved in bicycles and alphabetic scripts.

One teacher asked “are you really going to go for the continuum fallacy here?” No idea why he considers a continuum a fallacy. In the natural/non-natural distinction used by SSP proponents there are three continua involved:

• the complexity of the task
• the length of rehearsal time required to master the task, and
• the extent of input from others that’s required.

Some children learn to read simply by being read to, reading for themselves and asking for help with words they don’t recognise. But because reading is a complex task, for most children learning to read by immersion like that would take thousands of hours of rehearsal. It makes far more sense to cut to the chase and use explicit instruction. In principle, learning to fly a fighter jet would be possible through trial-and-error, but it would be a stupidly costly approach to training pilots.

Technology is non-biological
I was told by several teachers that reading, riding a bike and flying an aircraft weren’t biological functions. I fail to see how they could be anything else, since all involve human beings using their brain and body. It then occurred to me that the teachers are equating ‘biological’ with ‘natural’ or with the human body alone. In other words, if you acquire a skill that involves only body parts (e.g. walking or talking) it’s biological. If it involves anything other than a body part it’s not biological. Not sure where that leaves hunting with wooden spears, making baskets or weaving woollen fabric using a wooden loom and shuttle.

Teaching and learning are interchangeable
Another tangent was whether or not learning is involved in sleeping, eating and drinking. I contended that it is; newborns do not sleep, eat or drink in the same way as most of them will be sleeping, eating or drinking nine months later. One teacher kept telling me they don’t need to be taught to do those things. I can see why teachers often conflate teaching and learning, but they are not two sides of the same coin. You can teach children things but they might fail to learn them. And children can learn things that nobody has taught them. It’s debatable whether or not parents shaping a baby’s sleeping routine, spoon feeding them or giving them a sippy cup instead of a bottle count as teaching, but it’s pretty clear there’s a lot of learning going on.

What’s true for most is true for all
I was also told by one teacher that all babies crawl (an assertion he later modified) and by a school governor that they can all suckle (an assertion that wasn’t modified). Sweeping generalisations like this coming from people working in education are worrying. Children vary. They vary a lot. Even if only 0.1% of children do or don’t do something, that would involve 8,000 children in English schools. Some and most are not all or none, and teachers of all people should be aware of that.
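
(A quick check of the arithmetic, assuming – as the figure implies – a school population in England of roughly eight million: 8,000,000 × 0.001 = 8,000 children.)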

A core factor in children learning to read is the complexity of the task. If the task is a complex one, like reading, most children are likely to learn more quickly and effectively if you teach them explicitly. You can’t infer from that that all children are the same, that they all learn in the same way, or that teaching and learning are two sides of the same coin. Nor can you infer from a tenuous argument used to justify the use of SSP that distinctions between natural and non-natural or biological and technological are clear, obvious, valid or helpful. The evidence that supports SSP is the evidence that supports SSP. It doesn’t provide a general theory for language, education or human development.

the nation’s report card: functional literacy

Synthetic phonics (SP) proponents make some bold claims about the impact SP has on children’s ability to decode text. Sceptics often point out that decoding isn’t reading – comprehension is essential as well. SP proponents retort that of course decoding isn’t all there is to reading, but if a child can’t decode, comprehension will be impossible. You can’t argue with that, and there’s good evidence for the efficacy of SP in facilitating decoding. But what impact has it had on reading? I feel as if I’ve missed something obvious here (maybe I have) but as far as I’ve been able to ascertain, the answer is that we don’t know.

Despite complaints about literacy from politicians, employers and the public focussing on the reading ability of school leavers, the English education system has concentrated on early literacy and on decoding. I can understand why; not being able to decode can have major repercussions for individual children and for schools. But decoding and adult functional literacy seem to be linked only by an assumption that the primary cause of functional illiteracy is the inability to decode. This assumption doesn’t appear to be supported by the data. I should emphasise that I’ve never come across anyone who has claimed explicitly that SP will make a significant dent in functional illiteracy. But SP proponents often tut-tut about functional literacy levels, and when Diane McGuinness discusses the subject in Why Children Can’t Read and What We Can Do About It, the implication is quite clear.

Armed with a first degree from Birkbeck College and a PhD from University College London, and now Emeritus Professor of Psychology at the University of South Florida, McGuinness has focussed her work on reading instruction. She’s a tireless advocate for SP and is widely cited by SP supporters. Her books are informative and readable, if rather idiosyncratic, and Why Children Can’t Read is no exception. In it, she explains how writing systems developed, takes us on a tour of reading research, points us to effective remedial programmes and tops it all off with detailed instructions for teachers and parents who want to use her approach to teaching decoding. But before moving on to what she says about functional literacy, it’s worth considering what she has to say about science.


Her chapter ‘Science to the rescue’ consists largely of a summary of research into reading difficulties. However, McGuinness opens with a section called ‘What science is and isn’t’ in which she has a go at Ken Goodman. It’s not her criticism of Goodman’s work that bothers me, but the criteria she uses to do so. After listing various kinds of research carried out by journalists, academics doing literature reviews or observing children in classrooms, she says: “None of these activities qualify as scientific research. Science can only work when things can be measured and recorded in numbers” (p.127). This is an extraordinary claim. In one sentence, McGuinness dismisses operationalising constructs, developing hypotheses, and qualitative research methods (which don’t measure things or put numbers on them) as not being scientific.

She uses this sweeping claim to discredit Goodman, who, as she points out elsewhere, wasn’t a ‘psycholinguist’ (p.55). (As I mentioned previously, McGuinness also ridicules quotes from Frank Smith – who was a ‘psycholinguist’ – but doesn’t mention him by name in the text; that’s tucked away in her Notes section.) She rightly points out that using the words ‘research’ and ‘scientific’ doesn’t make what Goodman is saying science. And she rightly wonders about his references to his beliefs. But she then goes on to question the phonetics and linguistics on which Goodman bases his model:

“There is no ‘science’ of how sounds and letters work together in an alphabet. This is strictly an issue of categorisation and mapping relationships… Goodman proceeds to discuss rudimentary phonetics and linguistics, leading the reader to believe that they are sciences. They are not. They are descriptive disciplines and depend upon other phoneticians and linguists agreeing with you. …Classifying things is not science. It is the first step to begin to do science.” (p.128)

McGuinness has a very narrow view of science. She reduces it to quantitative research methods and misunderstands the role of classification in scientific inquiry. Biology took an enormous leap forward when Linnaeus developed a classification system that worked for all living organisms. Similarly, Mendeleev’s periodic table enabled chemists to predict the properties of as yet undiscovered elements. Linguists’ categorisation of speech sounds is, ironically, what McGuinness used to develop her approach to reading instruction. What all these classification systems have in common is not just their reliability (level of agreement between the people doing the classification) but their validity (based on the physical structure of organisms, atoms and speech sounds).

McGuinness’s view of science explains why she seems most at home with data that are amenable to measurement, so it was instructive to see how she extracts information from data in her opening chapter ‘Reading report card’. She discusses the results of four large-scale surveys in the 1990s of ‘functional literacy’ (p.10). Two, published by the National Center for Education Statistics (NCES), compared adult and child literacy in the US, and two by the Organisation for Economic Co-operation and Development (OECD) included the US, Canada and five non-English-speaking countries.

Functional literacy data

Functional literacy was assessed using a 5-level scale. Level 1 ranged from not being able to read at all to a reading task that “required only the minimum level of competence” – for example, extracting information from a short newspaper article. Level 5 involved a fact sheet for potential jurors (NCES, 1993, pp.73-84).

In the NCES study, 21% of the US adult population performed at level 1 “indicating that they were functionally illiterate” (McGuinness, p.10) and 47% scored at levels 1 or 2. Despite the fact that level 2 was above the minimum level of competence, McGuinness describes the level 1+2 group as “barely literate”. Something she omits to tell us is what the NCES report has to say about the considerable heterogeneity of the level 1 group. 25% were born abroad. 35% had had fewer than 8 years of schooling. 33% were 65 or older. 26% reported a ‘physical, mental or health condition’ that affected their day-to-day functioning, and 19% a visual impairment that made it difficult for them to read print (NCES, 1993, pp.16-18).

The OECD study showed that functional illiteracy (level 1) varied slightly across English-speaking countries – between 17% and 22%. McGuinness doesn’t tell us what the figures were for the five non-English-speaking countries, apart from Sweden, with a score of 7.5% at level 1 – less than half that of the English-speaking countries. The most likely explanation is the relative transparency of the orthographies – Swedish spelling was standardised as recently as 1906. But McGuinness doesn’t mention orthography as a factor in literacy results; instead, “Sweden has set the benchmark for what school systems can achieve” (p.11). McGuinness then goes on to compare reading proficiency in different US states.

The Nation’s Report Card

McGuinness describes functional illiteracy levels in English-speaking countries as ‘dismal’, ‘sobering’, ‘shocking’ and ‘a literacy crisis’. She draws attention to the fact that after California mandated the use of the ‘real books’ (whole language) approach to reading instruction in 1987, it came low down the US national league tables for 4th grade reading in 1992, and then tied for ‘dead last’ with Louisiana in 1994 (p.11). Although California’s score had decreased by only 5 points (from 202 to 197 – the entire range being 182-228) (NCES, 1996, p.47), there was perhaps a stigma attached to being tied ‘dead last with Louisiana’, as phonics was reintroduced into Californian classrooms, together with more than a billion dollars for teacher training, in 1996, the year before Why Children Can’t Read was first published.

What difference did it make? Not much, it seems. Although California’s 4th grade reading scores had recovered by 1998 (NCES, 1999, p.113), and improved further by 2011 (NCES, 2013a), the increase wasn’t statistically significant.

Indeed, whatever method of reading instruction has been used in the US, it doesn’t appear to have had much overall impact on reading standards. At age 17, the proportion of ‘functionally illiterate’ US readers has fluctuated between 14% and 21% – an average of 17% – since 1971 (NCES, 2013b). And in England the figure has remained ‘stubbornly’ around 17% since WW2 (Rashid & Brooks, 2010).

Functional illiteracy levels in the English-speaking world are higher than in many non-English-speaking countries, and have remained stable for decades. Functional illiteracy is a long-standing problem and McGuinness, at least, implies that SP can crack it. In the next post I want to look at the evidence for that claim.

References

McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.
NCES (1993). Adult Literacy in America. National Center for Education Statistics.
NCES (1996). NAEP 1994 Reading Report Card for the Nation and the States. National Center for Education Statistics.
NCES (1999). NAEP 1998 Reading Report Card for the Nation and the States. National Center for Education Statistics.
NCES (2013a). Mega-States: An Analysis of Student Performance in the Five Most Heavily Populated States in the Nation. National Center for Education Statistics.
NCES (2013b). Trends in Academic Progress. National Center for Education Statistics.
Rashid, S & Brooks, G (2010). The levels of attainment in literacy and numeracy of 13- to 19-year-olds in England, 1948–2009. National Research and Development Centre for adult literacy and numeracy.

the evidence: NSPCC briefing on home education

When I first read the NSPCC briefing Home education: learning from case reviews I thought the NSPCC had merely got hold of the wrong end of the stick about the legislation relevant to home education. That’s not unusual – many people do just that. But a closer examination showed there was much more to it than a simple misunderstanding.

The briefing claims to consist of ‘learning about child protection pulled from the published versions’ of seven serious case reviews (SCRs) involving children educated at home. [The full briefing has since been replaced with a summary, but the original is still accessible here. Also note that the Serious Case Review for Child S listed in the NSPCC summary is for the wrong Child S.] But the claims and recommendations made by the briefing aren’t an accurate reflection of what the SCRs tell us – about home education or child protection. The briefing also calls into question the current legislation relevant to home education, but makes no attempt to explain the legislation or the principles on which it’s based. So what ‘learning’ can we ‘pull’ from the NSPCC briefing?

legislation

The legislation and guidance relevant to home education isn’t explained or even cited, so anyone relying on the briefing for information would be aware only of the NSPCC’s view of the law, not what the law actually says or why it says it. Since the NSPCC doesn’t appear to understand the legislation, its view of the law creates a problem for unwitting readers.

claims

I noted 13 claims made by the briefing about the risks to children educated at home. Only one – that children could become isolated – was supported by the evidence in the SCRs, and that indicated only that some of the children involved could have been considered isolated at times. In other words, the risks to home-educated children that the NSPCC is concerned about are hypothetical risks rather than real ones. Laws aren’t, and shouldn’t be, based on hypothetical risks alone, but this important distinction isn’t mentioned.

recommendations

The briefing cites only the 15 recommendations from the SCRs relating directly to home education – and overlooks the other 64. Over 30 of the others involved procedural issues and more than 20 involved healthcare. Two of the healthcare recommendations that the briefing does highlight relate to organisations that were defunct before the briefing was published.

opinion

Although it cites evidence from the SCRs, the briefing isn’t what I’d call evidence-based, that is, derived from a careful evaluation of all relevant, available evidence. It looks more like an opinion backed up by the selection of supporting evidence only.

NSPCC publications

The home education briefing isn’t typical of NSPCC publications. The research report on disabled children, for example, is exactly what you’d expect from a research report. It’s well written, well evidenced and well referenced. Most of the briefings that summarise straightforward legislation, guidance and procedures are what you’d expect to see too. It’s when a topic needs to be thought through from first principles that the charity seems to flounder. A couple of examples:

An earlier version of Checkpoints for Schools discussed at length bullying by children, but failed to mention how teacher behaviour or the way the education system is designed might contribute to the problem. But I guess those omissions are understandable; after all, most people think of bullying in schools as involving only other children.

The oversights in the briefing about Fabricated or Induced Illness (FII) (which I can no longer find on the NSPCC website but is available here) are more serious. A framework drawn up by the Royal College of Paediatrics and Child Health has been amended so that simple parental anxiety and ‘genuine and unrecognised medical problems’ both come under the umbrella of FII. This not only renders the concept of FII meaningless, it treats the children of anxious parents and children with undiagnosed medical conditions as being at risk. Also, despite referring to ‘genuine and unrecognised medical problems’, the briefing fails to alert healthcare professionals to medical conditions known to be under-diagnosed that have a significantly higher prevalence than FII.

I contacted the NSPCC about both documents, but rather than discuss the points I’d raised, the charity simply re-stated its position on bullying and FII. Communication with one of the authors of the FII briefing was more fruitful. Slides from a presentation by the authors are online and paint a rather different picture to the one presented in the briefing.

NSPCC and evidence

The NSPCC is entitled to express its opinion about these issues of course, but the steps that need to be taken to reduce bullying, improve doctors’ diagnostic skills or prevent children coming to serious harm are much more likely to be effective if they’re based on a thorough evaluation of the evidence about what actually happens.

In the UK, legislation isn’t based on opinion either, but, again, on evidence. It has to be. Changing the law is a time-consuming and expensive process that can have serious unintended and unwanted consequences if you don’t get it right. And you’re quite likely not to get it right if you base it on people’s opinions about what they think happens instead of evidence about what actually happens.

If the NSPCC were a member of the public passing comment on children’s behaviour, medical diagnosis or an esoteric aspect of education legislation, their failure to evaluate the evidence properly wouldn’t matter so much. But the NSPCC is a major national charity funded by many millions of pounds from the public – and direct from government. It’s also the only organisation other than local authorities and the police that has statutory child protection powers.

The briefing on home education is out of date, sloppily written, poorly presented and pays only lip-service to the evaluation of evidence. It’s pretty clear that the NSPCC doesn’t like the idea of home education, an opinion it’s entitled to hold. But I also got the impression it doesn’t actually value home-educating families very highly. Neither the few home-educated children who have come to harm, nor the vast majority who never will, appear to be worth the effort of producing a well written, well presented booklet that contains sound information and a proper evaluation of the evidence.

The NSPCC has no business cherry-picking evidence. Nor does it have any business using its high-profile status to publish advice or recommendations based only on evidence that supports its opinion. It doesn’t always do that so why do it at all?

play: schools are for children, not children for schools

Some years ago, the TES carried an article about a primary school that taught its pupils how to knit. I learned to knit at school. My mum dutifully used my first attempt – a cotton dishcloth – for months, despite its resemblance to a fishing net and its annoying tendency to ensnare kitchen utensils. The reason I was taught knitting was primarily in order to be able to knit. But the thrust of the TES article wasn’t about the usefulness of knitting; it was that knitting improved the children’s maths. It seemed that at some point since the introduction of mass education in England the relationship between schools and the real world had changed. The point of schools was no longer to provide children with knowledge (like maths) that would help them tackle real-world problems (like knitting), but vice versa – the point of useful real-world skills was now to support performance in school.

school readiness

I was reminded of the knitting article earlier this year, when Sir Michael Wilshaw, chief inspector of Ofsted, suggested to inspectors that not all early years settings are preparing children adequately for school. In a comment to the BBC he added;

“More than two-thirds of our poorest children – and in some of our poorest communities that goes up to eight children out of 10 – go to school unprepared,” he said. “That means they can’t hold a pen, they have poor language and communication skills, they don’t recognise simple numbers, they can’t use the toilet independently and so on.”

His comments prompted an open letter to the Telegraph complaining that Sir Michael’s instruction to inspectors to assess nurseries mainly in terms of preparation for school “betrays an abject (and even wilful) misunderstanding of the nature of early childhood experience.” One of the signatories was Sue Cowley, who recently blogged about the importance of play. Her post, like Sir Michael’s original comments, generated a good deal of discussion.

Old Andrew responded promptly. He comments: “This leads me to my one opinion on early years teaching methods: OFSTED are right to judge them by outcomes rather than acting as the ‘play police’ and seeking to enforce play-based learning”.

The two bloggers have homed in on different issues. Sue Cowley is concerned about the shift in focus from childhood experience to ‘school-readiness’; Old Andrew is relieved that Ofsted inspectors are no longer expected to ‘enforce play-based learning’. The online debate has also shifted, from the original question implicit in Sir Michael’s comments and in the response in the letter to the Telegraph – what is the purpose of nurseries and pre-schools? – to a question posed by Old Andrew: “Is there any actual empirical evidence on the importance of play? All the ‘evidence’ seems to be theoretical.”

empirical evidence

Responses from early years teachers to questions about evidence for the benefits of play are often along the lines of “I have the evidence of my own eyes”, which hasn’t satisfied the sceptics. Whether you think it’s a satisfactory answer or not depends on the importance you attach to direct observation.

The problem with direct observation is that it’s dependent on perception, which is notoriously unreliable. David Didau has blogged about some perceptual flaws here. He also mentions some of the cognitive errors that occur when people draw conclusions from observations. The scientific method has been developed largely to counteract the flaws in our perception and reasoning. But it doesn’t follow that direct observation is completely unreliable. Indeed, direct observation is the cornerstone of empirical evidence.

Here’s an example. Let’s say I’ve noticed that every time I use a particular brand of soap, my hands sting and turn bright red. It wouldn’t be unreasonable to conclude that I have an allergic response to an ingredient in the soap – but I wouldn’t know that for sure. There could be many causes for my red, stinging hands; the soap might be purely coincidental. The conclusions about causes I could draw solely from my direct observations would be pretty speculative.

But the direct observations themselves – identifying the brand of soap and what happened to my hands – would be a lot more reliable. It’s possible that I could have got the brand of soap wrong and could have imagined what happened to my hands, but those errors are much less likely than the errors involved in drawing conclusions about causality. I could easily increase the reliability of my direct observations by involving an independent observer. If a hundred independent observers all agreed that a particular brand of soap was associated with my and/or other people’s hands turning bright red, those observations wouldn’t be 100% watertight but they would be considered to be fairly reliable and might prompt the soap manufacturer to investigate further. Increasing the reliability of my conclusion about the causal relationship – that the soap caused an allergic reaction – would be more challenging.
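
To put an illustrative number on it (an assumption for the sake of argument, not data): if each observer independently got the identification wrong, say, one time in twenty, the probability of all hundred making the same error would be 0.05^100 – about 1 in 10^130, vanishingly small. Errors the observers share – a mislabelled batch of soap, for instance – aren’t reduced this way; it’s their independence that does the work.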

is play another Brain Gym?

What intrigued me about the early years teachers’ responses was their reliance on direct observation as empirical evidence for the importance of play. Most professionals, if called upon to do so, can come up with some peer-reviewed research that supports the methods they use, even if it means delving into dusty textbooks they haven’t used for years. I could see Old Andrew’s point; if play is so important, why isn’t there a vast research literature on it? There are three characteristics of play that would explain both the apparent paucity of research and the teachers’ emphasis on direct observation.

First, play is a characteristic typical of most young mammals, and young humans play a lot. At one level, asking what empirical evidence there is for its importance is a pointless question – a bit like asking for evidence for the importance of learning or growth. Play, like learning and growth, is simply a facet of development.

Second, play, like most other mammalian characteristics, is readily observable – although you might need to do a bit of dissection to spot some of the anatomical ones. Traditionally, play has been seen as involving three types of skill: locomotor, object control and social interaction. But you don’t need a formal peer-reviewed study to tell you that. A few hours’ observation of a group of young children would be sufficient. A few hours’ observation would also reveal all the features of play Sue Cowley lists in her blog post.

Third, what children learn during play is also readily apparent through direct observation: the child who chooses to play with the shape-sorter every day until they can get all the shapes in the right holes first time, the one who can’t speak a word of English but is fluent after a few months despite little direct tuition, the one who initially won’t speak to anyone but blossoms into a mini-socialite through play. Early years teachers watch children learning through play every day, so it’s not surprising they don’t see the need to rely on research to tell them about its importance.

The features of play and what children can learn from it are not contentious; the observations of thousands of parents, teachers, psychologists, psychiatrists and anthropologists are largely in agreement over what play looks like and what children learn from it. This would explain why there appears to be little research on the importance of play; it’s self-evidently important to children themselves, as an integral part of human development and as a way of learning. In addition, much of the early research into play was carried out in the inter-war years. Try finding that online. Or even via your local library. Old Andrew’s reluctance to accept early years teachers’ direct observations as evidence might stem from his admission that he doesn’t “really have much insight into what small children are like.”

play-based education

The context of Old Andrew’s original question was Michael Wilshaw’s comments on school readiness and the response in the Telegraph letter. A recent guest post on his blog is critical of play-based learning, suggesting it causes problems for teachers higher up the food chain. Although Old Andrew says he’d like to see evidence for the importance of play in any context, what we’re actually talking about here is the importance of play in the education system.

Direct observation can tell us what play looks like and what children learn from it. What it can’t tell us about is the impact of play on development, GCSE results or adult life. For that, we’d need a more complex research design than just watching and/or recording before-and-after abilities. Some research has been carried out on the impact of play. Although there doesn’t appear to be a correlation between how much young mammals play and their abilities as adults, not playing does appear to impair responsiveness and effective social interaction. And we do know some things about the outcomes of the more complex play seen in children (e.g. Smith & Pellegrini, 2013).

Smith & Pellegrini agree that a prevailing “play ethos” has tended to exaggerate the evidence for the essential role of play (p.4) and that appears to be Old Andrew’s chief objection to the play advocates’ claims. Sue Cowley’s list describes play as ‘vital’, ‘crucial’ and ‘essential’. I can see how her choice of wording might give the impression to anyone looking for empirical evidence in the research literature that research findings relating to the importance of play in development, learning or education were more robust than they are. I can also see why someone observing the direct outcomes of play on a daily basis would see play as ‘vital’, ‘crucial’ and ‘essential’.

I agree with Old Andrew that Ofsted shouldn’t be enforcing play-based learning, or for that matter, telling teachers how to teach. There’s no point in training professionals and then telling them how to do their job. I also agree that if grand claims are being made for play-based learning or if it’s causing problems later on, we need some robust research or some expectation management, or both.

Having said that, it’s worth noting that for the best part of a century nursery and infant teachers have sung the praises of play-based learning. What’s easily overlooked by those who teach older children is the challenge facing early years teachers. They are expected to make children ‘school-ready’ – children who, in some cases and for whatever reason, have started nurseries, pre-schools and reception classes with little speech, who don’t understand a word of English, who can’t remember instructions, who have problems with dexterity, mobility and bowel and bladder control, or who find the school environment bewildering and frightening. Sometimes, the only way early years teachers can get children to engage or learn anything at all is through play. Early years teachers, as Sue Cowley points out, are usually advocates of highly structured, teacher-directed play. What’s more, they can see children learning from play in real time in front of them. The key question is not “what’s the empirical evidence for the importance of play?” but rather “if children play by default, are highly motivated to play and learn quickly from it, where’s the evidence for a better alternative?”

I’m all in favour of evidence-based practice, but I’m concerned that direct observation might be being prematurely ruled out. I’m also concerned that the debate appears to have shifted from the original one about preparation for school vs the erosion of childhood. This brings us back to the priorities of the school that taught knitting in order to improve children’s maths. Children obviously need to learn for their own benefit and for that of the community as a whole, but we need to remember that in a democracy school is for children, not children for school.

bibliography

Pellegrini, A & Smith, PK (2005). The Nature of Play: Great Apes and Humans. Guilford Press.
Smith, PK & Pellegrini, A (2013). Learning through play. In Tremblay RE, Boivin M, Peters (eds), Encyclopedia of Early Childhood Development [online]. Montreal, Quebec: Centre of Excellence for Early Childhood Development and Strategic Knowledge Cluster on Early Child Development, 1-6. Available at http://www.child-encyclopedia.com/documents/Smith-PellegriniANGxp2.pdf Accessed 11.8.2014.

the venomous data bore

Robert Peal has posted a series of responses to critics of his book Progressively Worse here. The second is on ‘data and dichotomies’. In this post I want to comment on some of the things he says about data and evidence.

when ‘evidence doesn’t work’

Robert* refers back to a previous post entitled ‘When evidence doesn’t work’ summarising several sessions at the ResearchED conference held at Dulwich College last year. He rightly draws attention to the problem of hard-to-measure outcomes, and to which outcomes we decide to measure in the first place. But he appears to conclude that there are some things – ideology, morality, values – that are self-evidently good or bad and that are outside the remit of evidence.

In his response to critics, Robert claims that one reason ‘evidence doesn’t work’ is because “some of the key debates in education are based on value judgements, not efficacy.” This is certainly true – and those key debates have resulted in a massive waste of resources in education over the past 140 years. There’s been little consensus on what long-term outcomes people want from the education system, what short-term outcomes they want, what pedagogies are effective and how effectiveness can be assessed. If a decision as to whether Shakespeare ‘should’ be studied at GCSE is based on value judgements, it’s hardly surprising it’s been the subject of heated debate for decades. Robert’s conclusion appears to be that heated debate about value judgements is inevitable because values aren’t things that lend themselves to being treated as evidence. I disagree.

data

I think he draws this conclusion because his view of data is rather limited. Data don’t just consist of ‘things we can easily measure’ like exam results (Robert’s second reason why ‘evidence doesn’t work’). They don’t have to involve measuring things at all; qualitative data can be very informative. Let’s take the benefits of studying Shakespeare in school. Robert asks “Can an RCT tell us, for example, whether secondary school pupils benefit from studying Shakespeare?” If it was carefully controlled it could, though we would have to tackle the question of what outcomes to measure. But randomised controlled trials are only one of many methods for gathering data. Collecting qualitative data from a representative sample of the population about the impact studying Shakespeare had had on their lives could give some insights, not only into whether Shakespeare should be studied in school, but how his work should be studied. And whether people should have the opportunity to undertake some formal study of Shakespeare in later life if they wanted to. People might appreciate actually being asked.

[image: venomous data bore, Buprestis octoguttata §]

opinion

I don’t know whether Robert sees me as what he refers to as a ‘data bore’, but if he does I accept the epithet as a badge of honour. For the record, however, not only have I never let a skinny latte pass my lips, but the word ‘nuanced’ has never done so either (not in public, at least). Nor do I have a “lofty disdain for anything so naïve as ‘having an opinion’”.

I’m more than happy for people to have opinions, to express them, and to have them taken into account when education policy is being devised. But not all opinions are equal. They range from professional, expert opinion derived from thorough theoretical knowledge and familiarity with a particular research literature, through well-informed personal opinion, to someone simply liking or not liking something without having a clue why. I would not want to receive medical treatment based on a vox pop carried out in my doctor’s waiting room, nor do I want a public sector service to be designed on a similar basis. If it is, then the people who voice their opinions most loudly are likely to get what they want, leaving the rest of us, ‘data bores’ included, to work on the damage limitation.

rationality and values

Robert appears to have a deep suspicion of rationality. He says “rational man believes that they can make their way in the world without recourse to the murky business of ideology and morality, or to use a more contemporary term, ‘values’.” He also says it was ‘terrific’ to hear Sam Freedman expound the findings of Jonathan Haidt and Daniel Kahneman “about the dominance of the subconscious, emotional part of our minds, over the logical, conscious part.” He could add Antonio Damasio to that list. There’s little doubt that our judgement and decision-making is dominated by the subconscious, emotional part of our minds. That doesn’t mean it’s a good thing.

Ideology, morality and values can inspire people to do great things, and rationality can inflict appalling damage, but it’s not always like that. Every significant step that’s ever been taken towards reducing infant mortality, maternal mortality, disease, famine, poverty and conflict, and every technological advance ever made, has involved people using the ‘logical conscious part’ of their minds as well as, or instead of, the ‘subconscious emotional part’. Those steps have sometimes involved a lifetime’s painstaking work in the teeth of bitter opposition. In contrast, many of the victims of ideology, morality and values lie buried where they fell on the world’s battlefields.

Robert’s last point about data is that they are “simply not able to ‘speak for themselves’. Its voice is always mediated by human judgement.” That’s not quite the impression given on page 4 of his book, where he refers to a list of statistics he felt showed there was a fundamental problem in British education; in the case of these statistics, ‘the bare figures are hard to ignore’.

Robert is quite right that the voice of the data is always mediated by human judgement, but we have devised ways of interpreting the data that make them less susceptible to bias. The data are perfectly capable of speaking for themselves, if we know how to listen to them. Clearly the researcher, like the historian, suffers from selection bias, but some fields of discourse, unlike history it seems, have developed robust methodologies to address that. The biggest problem faced by the data is that they can’t get a word in edgeways because of all the opinion being voiced.

endnote

According to this tweet from Civitas…

[image: Civitas tweet referring to ‘venom’]

Robert says he has responded to criticism in blogs by Tim Taylor, Guy Woolnough and myself. I’m doubtless biased, but the comment most closely resembling ‘venom’ that I could find was actually in a scurrilous tweet from Debra Kidd, shown in Robert’s third response to his critics. Debra, shockingly for a teacher, uses a four-letter word to describe Robert’s description of state schools as ‘a persistent source of national embarrassment’. She calls it ‘tosh’. If Civitas thinks that’s venom, it clearly has little experience of academia, politics or the playground. Rather worrying on all counts, if it’s a think tank playing a significant role in education reform.

* I felt we should be on first name terms now we’ve had a one-to-one conversation about statistics.

§ Image courtesy Christian Fischer from Britannica Kids.

It’s not really a venomous data bore; it’s a metallic wood-boring beetle. It’s not really metallic either, it just looks like it. Nor does the adult beetle bore wood; its larvae do. Words can be so misleading.

A tale of two Blobs

The think-tank Civitas has just published a 53-page pamphlet written by Toby Young and entitled ‘Prisoners of The Blob’. ‘The Blob’, for the uninitiated, is the name applied by the UK’s Secretary of State for Education, Michael Gove, to ‘leaders of the teaching unions, local authority officials, academic experts and university education departments’ described by Young as ‘opponents of educational reform’. The name’s not original. Young says it was coined by William J Bennett, a former US Education Secretary; it was also used by Chris Woodhead, first Chief Inspector of Ofsted, in his book Class War.

It’s difficult to tell whether ‘The Blob’ is actually an amorphous fog-like mass whose members embrace an identical approach to education as Young claims, or whether such a diverse range of people espouse such a diverse range of views that it’s difficult for people who would like life to be nice and straightforward to understand all the differences.

Young says:

“They all believe that skills like ‘problem-solving’ and ‘critical thinking’ are more important than subject knowledge; that education should be ‘child-centred’ rather than ‘didactic’ or ‘teacher-led’; that ‘group work’ and ‘independent learning’ are superior to ‘direct instruction’; that the way to interest children in a subject is to make it ‘relevant’; that ‘rote-learning’ and ‘regurgitating facts’ is bad, along with discipline, hierarchy, routine and anything else that involves treating the teacher as an authority figure. The list goes on.” (p.3)

It’s obvious that this is a literary device rather than a scientific analysis, but that’s what bothers me about it.

Initially, I had some sympathy with the advocates of ‘educational reform’. The national curriculum had a distinctly woolly appearance in places; enforced group-work, and being required to imagine how historical figures must have felt, drove my children to distraction; and the approach to behaviour management at their school seemed incoherent. So when I started to come across references to educational reform based on evidence, and to the importance of knowledge and skills being domain-specific, I was relieved. When I found that applying findings from cognitive science to education was being advocated, I got quite excited.

My excitement was short-lived. I had imagined that a community of researchers had been busily applying cognitive science findings to education, that the literatures on learning and expertise were being thoroughly mined and that an evidence-based route-map was beginning to emerge. Instead, I kept finding references to the same small group of people.

Most fields of discourse are dominated by a few individuals. Usually they are researchers responsible for significant findings or major theories. A new or specialist field might be dominated by only two or three people. The difference here is that education straddles many different fields of discourse (biology, psychology, sociology, philosophy and politics, plus a range of subject areas), so I found it a bit odd that the same handful of names kept cropping up. I would have expected a major reform of the education system to have had a wider evidence base.

Evaluating the evidence

And then there was the evidence itself. I might be looking in the wrong place, but so far, although I’ve found a few references, I’ve uncovered no attempts by proponents of educational reform to evaluate the evidence they cite.

A major flaw in human thinking is confirmation bias. To represent a particular set of ideas, we develop a mental schema. Every time we encounter the same set of ideas, the neural network that carries the schema is activated. The more it’s activated, the more readily it’s activated in future. This means that any configuration of ideas that contradicts a pre-existing schema has, almost literally, to swim against the electrochemical tide. It’s going to take a good few reiterations of the new idea set before a strongly embedded pre-existing schema is likely to be overridden by a new one. Consequently, we tend to favour evidence that confirms our existing views, and find it difficult to see things in a different way.
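To make that concrete, here’s a deliberately crude toy model in Python. It’s my own illustration rather than anything from the cognitive science literature, and the update rule and starting strengths are arbitrary; it simply shows how many reiterations a weakly established idea set might need before it overtakes an entrenched one.

```python
# Toy model of confirmation bias as schema reinforcement.
# Purely illustrative: the update rule and numbers are arbitrary.

def reinforce(strength, rate=0.5):
    """Each activation makes future activation easier, with diminishing returns."""
    return strength + rate * (1.0 - strength)

old_schema = 0.9   # a strongly embedded pre-existing schema
new_schema = 0.1   # a contradictory idea set, encountered for the first time

reiterations = 0
while new_schema <= old_schema:
    new_schema = reinforce(new_schema)
    reiterations += 1

print(f"The new idea set needed {reiterations} reiterations to overtake the old one.")
```

With these (arbitrary) starting values the new schema needs four reiterations to overtake the old one; in a real mind, the tide is presumably much stronger.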

The best way we’ve found to counteract confirmation bias in the way we evaluate evidence is through hypothesis testing. Essentially you come up with a hypothesis and then try to disprove it. If you can’t, it doesn’t mean your hypothesis is right, it just means you can’t yet rule it out. Hypothesis testing as such is mainly used in the sciences, but the same principle underlies formal debating, the adversarial approach in courts of law, and having an opposition to government in parliament. The last two examples are often viewed as needlessly combative, when actually their job is to spot flaws in what other people are saying. How well they do that job is another matter.
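As a concrete sketch of that ‘try to disprove it’ logic, here’s how a basic null-hypothesis significance test looks in Python, using SciPy’s independent-samples t-test. The scores are invented for illustration.

```python
# Sketch of null-hypothesis testing; the scores below are invented.
from scipy import stats

method_a = [62, 71, 68, 74, 66, 70, 73, 65]  # hypothetical exam scores
method_b = [60, 64, 63, 67, 61, 66, 62, 65]

# Null hypothesis: both teaching methods produce the same mean score.
t_stat, p_value = stats.ttest_ind(method_a, method_b)

if p_value < 0.05:
    print(f"p = {p_value:.3f}: the null hypothesis can be rejected.")
else:
    print(f"p = {p_value:.3f}: we can't yet rule the null hypothesis out.")

# Note the asymmetry: failing to reject the null doesn't prove the methods
# are equivalent, just as failing to disprove a hypothesis doesn't prove it.
```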

It’s impossible to tell at first glance whether a small number of researchers have made a breakthrough in education theory, or whether their work is simply being cited to affirm a set of beliefs. My suspicion that it might be the latter was strengthened when I checked out the evidence.

The evidence

John Hattie conducted a meta-analysis of over 800 studies of student achievement. My immediate thought when I came across his work was of the well-documented problems associated with meta-analyses. Hattie does discuss these, but I’m not convinced he disposed of one key issue: the garbage-in-garbage-out problem. A major difficulty with meta-analyses is ensuring that all the studies involved use the same definitions for the constructs they are measuring, and I couldn’t find a discussion of what Hattie (or the other researchers) mean by ‘achievement’. I assume that Hattie uses test scores as a proxy measure of achievement.

That’s fine if you think the job of schools is to ensure that children learn what somebody has decided they should learn. But the assumption poses problems. One is who determines what students should learn. Another is what happens to students who, for whatever reason, can’t learn at the same rate as the majority. And a third is how the achievement measured in Hattie’s study maps on to achievement in later life. What’s noticeable about the biographies of many ‘great thinkers’ – Darwin and Einstein are prominent examples – is how many of them didn’t do very well in school. It doesn’t follow that Hattie is wrong – Darwin and Einstein might have been even greater thinkers if their schools had adopted his recommendations – but it’s an outcome Hattie doesn’t appear to address.
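To return to the garbage-in-garbage-out point with a made-up numerical example (not Hattie’s data): the standard fixed-effect pooling arithmetic will happily average effect sizes from studies that defined ‘achievement’ quite differently, and the resulting figure describes none of them.

```python
# Made-up illustration of the garbage-in-garbage-out problem.
# Fixed-effect meta-analysis: weight each study's effect size by 1/variance.

studies = [
    # (effect size d, variance, what 'achievement' was actually measured by)
    (0.8, 0.04, "standardised maths test"),
    (0.1, 0.02, "teacher-rated engagement"),
    (0.5, 0.05, "end-of-year exam pass rate"),
]

weights = [1 / var for _, var, _ in studies]
pooled = sum(w * d for (d, _, _), w in zip(studies, weights)) / sum(weights)

print(f"Pooled effect size: {pooled:.2f}")
# The arithmetic is impeccable, but the three studies measured three
# different constructs, so the pooled figure describes none of them.
```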

Siegfried Engelmann and Wesley C Becker developed a system called the Direct Instruction System for Teaching Arithmetic and Reading (DISTAR) that was shown to be effective in Project Follow Through – an evaluation of a number of educational approaches in the US education system over a 30-year period starting in the 1960s. There’s little doubt that Direct Instruction is more effective than many other systems at raising academic achievement and self-esteem. The problem is, again, who decides what students learn, what happens to students who don’t benefit as much as others, and what’s meant by ‘achievement’.

ED Hirsch developed the Core Knowledge sequence – essentially an off-the-shelf curriculum that’s been adapted for the UK and is available from Civitas. The US Core Knowledge sequence has a pretty obvious underlying rationale, even if some might question its stance on some points. The same can’t be said of the UK version. Compare, for example, the content of US Grade 1 History and Geography with that of the UK version for Year 1. The US version includes Early People and Civilisations and the History of World Religion – all important for understanding how human geography and cultures have developed over time. The UK version focuses on British Pre-history and History (with an emphasis on the importance of literacy), followed by Kings and Queens, Prime Ministers, then Symbols and Figures – namely the Union Jack, Buckingham Palace, 10 Downing Street and the Houses of Parliament – despite the fact that few children in Y1 are likely to understand how or why these people or symbols came to be important. And although the strands of world history and British history are broadly chronological, Y4s study Ancient Rome alongside the Stuarts, and Y6s potentially study the American Civil War before the Industrial Revolution.

Daniel Willingham is a cognitive psychologist and the author of Why don’t students like school? A cognitive scientist answers questions about how the mind works and what it means for the classroom and When can you trust the experts? How to tell good science from bad in education. He also writes a column for American Educator magazine. I found Willingham informative on cognitive psychology. However, I felt his view of education was a rather narrow one. There’s nothing wrong with applying cognitive psychology to how teachers teach the curriculum in schools – it’s just that learning and education involve considerably more than that.

Kirschner, Sweller and Clark have written several papers about the limitations of working memory and its implications for education. In my view, their analysis has three key weaknesses: they arbitrarily lump together a range of education methods as if they were essentially the same; they base their theory on an outdated and incomplete model of memory; and they conclude that only one teaching approach is effective – explicit, direct instruction – ignoring the fact that knowledge comes in different forms.

Conclusions

I agree with some of the points made by the reformers:
• I agree with the idea of evidence-based education – the more evidence the better, in my view.
• I have no problem with children being taught knowledge. I don’t subscribe to a constructivist view of education in the sense that everybody’s worldview is as valid as everybody else’s, although cognitive science has shown that each of us does construct a unique understanding of the world. We know that some knowledge is more valid and/or more reliable than other knowledge, and we’ve developed some quite sophisticated ways of figuring out what’s more certain and what’s less certain.
• The application of findings from cognitive science to education is long overdue.
• I have no problem with direct instruction (as distinct from Direct Instruction) per se.

However, some of what I read gave me cause for concern:
• The evidence-base presented by the reformers is limited, and parts of it are weak and flawed. It’s vital to evaluate evidence, not just to cite evidence that at face value appears to support what you already think. And a body of evidence isn’t a unitary thing; some parts of it can be sound whilst other parts are distinctly dodgy. It’s important to be able to sift through it and weigh up the pros and cons. Ignoring contradictory evidence can be catastrophic.
• Knowledge, likewise, isn’t a unitary thing; it can vary in terms of validity and reliability.
• The evidence from cognitive science also needs to be evaluated. It isn’t OK to assume that just because cognitive scientists say something it must be right; cognitive scientists certainly don’t do that. Being able to evaluate cognitive science might entail learning a fair bit about cognitive science first.
• Direct instruction, like any other educational method, is appropriate for acquiring some types of knowledge. It isn’t appropriate for acquiring all types of knowledge. The problem with approaches such as discovery learning and child-led learning is not that there’s anything inherently wrong with the approaches themselves, but that they’re not suitable for acquiring all types of knowledge.

What has struck me most forcibly about my exploration of the evidence cited by the education reformers is that, although I agree with some of the reformers’ reservations about what’s been termed ‘minimal instruction’ approaches to education, the reformers appear to be ignoring their own advice. They don’t have extensive knowledge of the relevant subject areas, they don’t evaluate the relevant evidence, and the direct instruction framework they are advocating – certainly the one Civitas is advocating – doesn’t appear to have a structure derived from the relevant knowledge domains.

Rather than a rational, evidence-based approach to education, the ‘educational reform’ movement has all the hallmarks of a belief system that’s using evidence selectively to support its cause; and that’s what worries me. This new Blob is beginning to look suspiciously like the old one.