the debating society

One of my concerns about the model of knowledge promoted by the Tiger Teachers is that it hasn’t been subjected to sufficient scrutiny. A couple of days ago on Twitter I said as much. Jonathan Porter, a teacher at the Michaela Community School, thought my criticism unfair because the school has invited critique by publishing a book and hosting two debating days. Another teacher recommended watching the debate between Guy Claxton and Daisy Christodoulou, ‘Sir Ken is right: traditional education kills creativity’. She said it may not address my concerns about theory. She was right, it didn’t. But it did suggest a constructive way to extend the Tiger Teachers’ model of knowledge.

the debate

Guy, speaking for the motion and defending Sir Ken Robinson’s views, highlights the importance of schools developing students’ creativity, and answers the question ‘what is creativity?’ by referring to the findings of an OECD study: that creativity emerges from six factors – curiosity, determination, imagination, discipline, craftsmanship and collaboration. Daisy, opposing the motion, says that although she and Guy agree on the importance of creativity and its definition, they differ over the methods used in schools to develop it.

Daisy says Guy’s model involves students learning to be creative by practising being creative, which doesn’t make sense. It’s a valid point. Guy says knowledge is a necessary but not sufficient condition for developing creativity; other factors are involved. Another valid point. Both Daisy and Guy debate the motion but they approach it from very different perspectives, so they don’t actually rigorously test each other’s arguments.

Daisy’s model of creativity is a bottom-up one. Her starting point is how people form their knowledge and how that develops into creativity. Guy’s model, in contrast, is a top-down one; he points out that creativity isn’t a single thing, but emerges from several factors. In this post, I propose that Daisy and Guy are using the same model of creativity, but because Daisy’s focus is on one part and Guy’s on another, their arguments shoot straight past each other, and that in isolation, both perspectives are problematic.

Creativity is a complex construct, as Guy points out. A problem with his perspective is that the factors he found to be associated with creativity are themselves complex constructs. How does ‘curiosity’ manifest itself? Is it the same in everyone or does it vary from person to person? Are there multiple component factors associated with curiosity too? Can we ask the same questions about ‘imagination’? Daisy, in contrast, claims a central role for knowledge and deliberate practice. A problem with Daisy’s perspective is, as I’ve pointed out elsewhere, that her model of knowledge peters out when it comes to the complex cognition Guy refers to. With a bit more information, Daisy and Guy could have done some joined-up thinking. To me, the two models look like the representation below, the grey words and arrows indicating concepts and connections referred to but not explained in detail.

[slide 1: the two models of creativity]

cognition and expertise

If I’ve understood it correctly, Daisy’s model of creativity is essentially this: If knowledge is firmly embedded in long-term memory (LTM) via lots of deliberate practice and organised into schemas, it results in expertise. Experts can retrieve their knowledge from LTM instantly and can apply it flexibly. In short, creativity is a feature of expertise.

Daisy makes frequent references to research; what scientists think, half a century of research, what all the research has shown. She names names; Herb Simon, Anders Ericsson, Robert Bjork. She reports research showing that expert chess players, football players or musicians don’t practise whole games or entire musical works – they practise short sequences repeatedly until they’ve overlearned them. That’s what enables experts to be creative.

Daisy’s model of expertise is firmly rooted in an understanding of cognition that emerged from artificial intelligence (AI) research in the 1950s and 1960s. At the time, researchers were aware that human cognition was highly complex and often seemed illogical.  Computer science offered an opportunity to find out more; by manipulating the data and rules fed into a computer, researchers could test different models of cognition that might explain how experts thought.

It was no good researchers starting with the most complex illogical thinking – because it was complex and illogical. It made more sense to begin with some simpler examples, which is why the AI researchers chose chess, sport and music as domains to explore. Expertise in these domains looks pretty complex, but the complexity has obvious limits because chess, sport and music have clear, explicit rules. There are thousands of ways you can configure chess pieces or football players and a ball during a game, but you can’t configure them any-old-how because chess and football have rules. Similarly, a musician can play a piece of music in many different ways, but they can’t play it any-old-how because then it wouldn’t be the same piece of music.

In chess, sport and music, experts have almost complete knowledge, clear explicit rules, and comparatively low levels of uncertainty.   Expert geneticists, doctors, sociologists, politicians and historians, in contrast, often work with incomplete knowledge, many of the domain ‘rules’ are unknown, and uncertainty can be very high. In those circumstances, expertise  involves more than simply overlearning a great many facts and applying them flexibly.

Daisy is right that expertise and creativity emerge from deliberate practice of short sequences – for those who play chess, sport or music. Chess, soccer and Beethoven’s piano concerto No. 5 haven’t changed much since the current rules were agreed and are unlikely to change much in future. But domains like medicine, economics and history still periodically undergo seismic shifts in the way whole areas of the domains are structured, as new knowledge comes to light.

This is the point at which Daisy’s and Guy’s models of creativity could be joined up.  I’m not suggesting some woolly compromise between the two. What I am suggesting is that research that followed the early AI work offers the missing link.

I think the missing link is the schema.   Daisy mentions schemata (or schemas if you prefer) but only in terms of arranging historical events chronologically. Joe Kirby in Battle Hymn of the Tiger Teachers also recognises that there can be an underlying schema in the way students are taught.  But the Tiger Teachers don’t explore the idea of the schema in any detail.

schemas, schemata

A schema is the way people mentally organise their knowledge. Some schemata are standardised and widely used – such as the periodic table or multiplication tables. Others are shared by many people, but are a bit variable – such as the Linnaean taxonomy of living organisms or the right/left political divide. But because schemata are constructed from the knowledge and experience of the individual, some are quite idiosyncratic. Many teachers will be familiar with students all taught the same material in the same way, but developing rather different understandings of it.
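To make this concrete, here’s a toy sketch of my own (not anything from the debate or the book): the same three facts organised under two different schemata. The facts and groupings are invented purely for illustration.

```python
# The same facts, organised under two different schemata.
# Everything here is an invented example, not a cognitive model.

facts = {
    "whale": {"class": "mammal", "habitat": "ocean"},
    "shark": {"class": "fish",   "habitat": "ocean"},
    "bat":   {"class": "mammal", "habitat": "air"},
}

def organise(facts, key):
    """Group the same facts under a chosen schema (classification key)."""
    schema = {}
    for name, attrs in facts.items():
        schema.setdefault(attrs[key], []).append(name)
    return schema

linnaean = organise(facts, "class")    # {'mammal': ['whale', 'bat'], 'fish': ['shark']}
habitat  = organise(facts, "habitat")  # {'ocean': ['whale', 'shark'], 'air': ['bat']}

# Identical knowledge, different structure: under one schema the whale's
# nearest neighbour is the bat; under the other, it's the shark.
```

The point of the sketch is that nothing in the facts themselves dictates the structure; the schema is a further piece of knowledge, and two people holding identical facts can still disagree about what they mean.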

There’s been a fair amount of research into schemata. The schema was first proposed as a psychological concept by Jean Piaget*. Frederic Bartlett carried out a series of experiments in the 1930s demonstrating that people use schemata, and in the heyday of AI the concept was explored further by, for example, David Rumelhart, Marvin Minsky and Robert Axelrod. It later extended into script theory (Roger Schank and Robert Abelson), and how people form prototypes and categories (e.g. Eleanor Rosch, George Lakoff). The schema might be the missing link between Daisy’s and Guy’s models of creativity, but both models stop before they get there. Here’s how the cognitive science research allows them to be joined up.

Last week I finally got round to reading Jerry Fodor’s book The Modularity of Mind, published in 1983. By that time, cognitive scientists had built up a substantial body of evidence related to cognitive architecture. Although the evidence itself was generally robust, what it was saying about the architecture was ambiguous. It appeared to indicate that cognitive processes were modular, with specific modules processing specific types of information e.g. visual or linguistic. It also indicated that some cognitive processes operated across the board, e.g. problem-solving or intelligence. The debate had tended to be rather polarised.  What Fodor proposed was that cognition isn’t a case of either-or, but of both-and; that perceptual and linguistic processing is modular, but higher-level, more complex cognition that draws on modular information, is global.   His prediction turned out to be pretty accurate, which is why Daisy’s and Guy’s models can be joined up.

Fodor was familiar enough with the evidence to know that he was very likely to be on the right track, but his model of cognition is a complex one, and he knew he could have been wrong about some bits of it. So he deliberately exposed his model to the criticism of cognitive scientists, philosophers and anyone else who cared to comment, because that’s how the scientific method works. A hypothesis is tested. People try to falsify it. If they can’t, then the hypothesis signposts a route worth exploring further. If they can, then researchers don’t need to waste any more time exploring a dead end.

joined-up thinking

Daisy’s model of creativity has emerged from a small sub-field of cognitive science – what AI researchers discovered about expertise in domains with clear, explicit rules. She doesn’t appear to see the need to explore schemata in detail because the schemata used in chess, sport and music are by definition highly codified and widely shared.  That’s why the AI researchers chose them.  The situation is different in the sciences, humanities and arts where schemata are of utmost importance, and differences between them can be the cause of significant conflict.  Guy’s model originates in a very different sub-field of cognitive science – the application of high-level cognitive processes to education. Schemata are a crucial component; although Guy doesn’t explore them in this debate, his previous work indicates he’s very familiar with the concept.

Since the 1950s, cognitive science has exploded into a vast research field, encompassing everything from the dyes used to stain brain tissue, through the statistical analysis of brain scans, to the errors and biases that affect judgement and decision-making by experts. Obviously it isn’t necessary to know everything about cognitive science before you can apply it to teaching, but if you’re proposing a particular model of cognition, having an overview of the field and inviting critique of the model would help avoid unnecessary errors and disagreements.  In this debate, I suggest schemata are noticeable by their absence.

*First use of schema as a psychological concept is widely attributed to Piaget, but I haven’t yet been able to find a reference.

The Tiger Teachers and cognitive science

Cognitive science is a key plank in the Tiger Teachers’ model of knowledge. If I’ve understood it properly the model looks something like this:

Cognitive science has discovered that working memory has limited capacity and duration, so pupils can’t process large amounts of novel information. If this information is secured in long-term memory via spaced, interleaved practice, students can recall it instantly whenever they need it, freeing up working memory for thinking.
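The freeing-up mechanism in that summary – overlearned material in long-term memory letting a stream of items be recoded into a few familiar chunks – can be sketched in a handful of lines. This is a toy of my own, not anything from the book; the capacity figure and the chunk list are invented for illustration.

```python
# A toy illustration of chunking: an "expert" recodes a letter stream
# using patterns already held in long-term memory; a novice cannot.

WM_CAPACITY = 4  # a commonly cited modern estimate of chunk capacity

ltm_chunks = ["BBC", "FBI", "NATO"]  # overlearned patterns in LTM

def encode(stream, known_chunks):
    """Greedily recode the stream using any chunk found in LTM."""
    items, i = [], 0
    while i < len(stream):
        for chunk in known_chunks:
            if stream.startswith(chunk, i):
                items.append(chunk)
                i += len(chunk)
                break
        else:
            items.append(stream[i])  # no match: store a single letter
            i += 1
    return items

stream = "BBCFBINATO"
novice = encode(stream, [])          # 10 single letters
expert = encode(stream, ltm_chunks)  # ['BBC', 'FBI', 'NATO'] -> 3 chunks

print(len(novice) <= WM_CAPACITY)  # False: the novice's WM is swamped
print(len(expert) <= WM_CAPACITY)  # True: chunking leaves spare capacity
```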

What’s wrong with that? Nothing, as it stands. It’s what’s missing that’s the problem.

Subject knowledge

One of the Tiger Teachers’ beefs about the current education system is its emphasis on transferable skills. They point out that skills are not universally transferable, many are subject-specific, and in order to develop expertise in higher-level skills novices need a substantial amount of subject knowledge. Tiger Teachers’ pupils are expected to pay attention to experts (their teachers) and memorise a lot of facts before they can comprehend, apply, analyse, synthesise or evaluate. The model is broadly supported by cognitive science and the Tiger Teachers apply it rigorously to children. But not to themselves, it seems.

For most Tiger Teachers cognitive science will be an unfamiliar subject area. That makes them (like most of us) cognitive science novices. Obviously they don’t need to become experts in cognitive science to apply it to their educational practice, but they do need the key facts and concepts and a basic overview of the field. The overview is important because they need to know how the facts fit together and the limitations of how they can be applied. But with a few honourable exceptions (Daisy Christodoulou, David Didau and Greg Ashman spring to mind – apologies if I’ve missed anyone out), many Tiger Teachers don’t appear to have even thought about acquiring expertise, key facts and concepts or an overview. As a consequence, facts are misunderstood or overlooked, principles from other knowledge domains are applied inappropriately, and erroneous assumptions are made about how science works. Here are some examples:

It’s a fact…

“Teachers’ brains work exactly the same way as pupils’” (p.177). No they don’t. Cognitive science (ironically) thinks that children’s brains begin by forming trillions of connections (synapses). Then through to early adulthood, synapses that aren’t used get pruned, which makes information processing more efficient. (There’s a good summary here.)  Pupils’ brains are as different to teachers’ brains as children’s bodies are different to adults’ bodies. Similarities don’t mean they’re identical.

Then there’s working memory. “As the cognitive scientist Daniel Willingham explains, we learn by transferring knowledge from the short-term memory to the long term memory” (p.177). Well, kind of – if you assume that what Willingham explicitly describes as “just about the simplest model of the mind possible” is an exhaustive model of memory. If you think that, you might conclude, wrongly, “the more knowledge we have in long-term memory, the more space we have in our working memory to process new information” (p.177). Or that “information cannot accumulate into long-term memory while working memory is being used” (p.36).

Long-term memory takes centre stage in the Tiger Teachers’ model of cognition. The only downside attributed to it is our tendency to forget things if we don’t revisit them (p.22). Other well-established characteristics of long-term memory – its unreliability, errors and biases – are simply overlooked, despite Daisy Christodoulou’s frequent citation of Daniel Kahneman whose work focused on those flaws.

With regard to transferable skills we’re told “cognitive scientist Herb Simon and his colleagues have cast doubt on the idea that there are any general or transferable cognitive skills” (p.17), when what they actually cast doubt on is the ideas that all skills are transferable or that none are.

The Michaela cognitive model is distinctly reductionist; “all there is to intelligence is the simple accrual and tuning of many small units of knowledge that in total produce complex cognition” (p.19). Then there’s “skills are simply just a composite of sequential knowledge – all skills can be broken down to irreducible pieces of knowledge” (p.161).

The statement about intelligence is a direct quote from John Anderson’s paper ‘A Simple Theory of Complex Cognition’ but Anderson isn’t credited, so you might not know he was talking about simple encodings of objects and transformations, and that by ‘intelligence’ he means how ants behave rather than IQ. I’ve looked at Daisy Christodoulou’s interpretation of Anderson’s model here.

The idea that intelligence and skills consist ‘simply just’ of units of knowledge ignores Anderson’s procedural rules and marginalises the role of the schema – the way people configure their knowledge. Joe Kirby mentions “procedural and substantive schemata” (p. 17), but seems to see them only in terms of how units of knowledge are configured for teaching purposes; “subject content knowledge is best organised into the most memorable schemata … chronological, cumulative schemata help pupils remember subject knowledge in the long term” (p.21). The concept of schemata as the way individuals, groups or entire academic disciplines configure their knowledge, that the same knowledge can be configured in different ways resulting in different meanings, or that configurations sometimes turn out to be profoundly wrong, doesn’t appear to feature in the Tiger Teachers’ model.

Skills: to transfer or not to transfer?

Tiger Teachers see higher-level skills as subject-specific. That hasn’t stopped them applying higher-level skills from one domain inappropriately to another. In her critique of Bloom’s taxonomy, Daisy Christodoulou describes it as a ‘metaphor’ for the relationship between knowledge and skills. She refers to two other metaphors; ED Hirsch’s scrambled egg and Joe Kirby’s double helix (Seven Myths p.21).  Daisy, Joe and ED teach English, and metaphors are an important feature in English literature. Scientists do use metaphors, but they use analogies more often, because in the natural world patterns often repeat themselves at different levels of abstraction. Daisy, Joe and ED are right to complain about Bloom’s taxonomy being used to justify divorcing skills from knowledge. And the taxonomy itself might be wrong or misleading.   But it is a taxonomy and it is based on an important scientific concept – levels of abstraction – so should be critiqued as such, not as if it were a device used by a novelist.

Not all evidence is equal

A major challenge for novices is what criteria they can use to decide whether or not factual information is valid. They can’t use their overview of a subject area if they don’t have one. They can’t weigh up one set of facts against another if they don’t know enough facts. So Tiger Teachers who are cognitive science novices have to fall back on the criteria ED Hirsch uses to evaluate psychology – the reputation of researchers and consensus. Those might be key criteria in evaluating English literature, but they’re secondary issues for scientific research, and for good reason.

Novices then have to figure out how to evaluate the reputation of researchers and consensus. The Tiger Teachers struggle with reputation. Daniel Willingham and Paul Kirschner are cited more frequently than Herb Simon, but with all due respect to Willingham and Kirschner, they’re not quite in the same league. Other key figures don’t get a mention.  When asked what was missing from the Tiger Teachers’ presentations at ResearchEd, I suggested, for starters, Baddeley and Hitch’s model of working memory. It’s been a dominant model for 40 years and has the rare distinction of being supported by later biological research. But it’s mentioned only in an endnote in Willingham’s Why Don’t Students Like School and in Daisy’s Seven Myths about Education. I recommended inviting Alan Baddeley to speak at ResearchEd – he’s a leading authority on memory after all.   One of the teachers said he’d never even heard of him. So why was that teacher doing a presentation on memory at a national education conference?

The Tiger Teachers also struggle with consensus. Joe Kirby emphasises the length of time an idea has been around and the number of studies that support it (pp.22-3), overlooking the fact that some ideas can dominate a field for decades, be supported by hundreds of studies and then turn out to be profoundly wrong; theories about how brains work are a case in point.   Scientific theory doesn’t rely on the quantity of supporting evidence; it relies on an evaluation of all relevant evidence – supporting and contradictory – and takes into account the quality of that evidence as well.  That’s why you need a substantial body of knowledge before you can evaluate it.

The big picture

For me, Battle Hymn painted a clearer picture of the Michaela Community School than I’d been able to put together from blog posts and visitors’ descriptions. It persuaded me that Michaela’s approach to behaviour management is about being explicit and consistent, rather than simply being ‘strict’. I think having a week’s induction for new students and staff (‘bootcamp’) is a great idea. A systematic, rigorous approach to knowledge is vital and learning by rote can be jolly useful. But for me, those positives were all undermined by the Tiger Teachers’ approach to their own knowledge.  Omitting key issues in discussions of Rousseau’s ideas, professional qualifications or the special circumstances of schools in coastal and rural areas, is one thing. Pontificating about cognitive science and then ignoring what it says is quite another.

I can understand why Tiger Teachers want to share concepts like the limited capacity of working memory and skills not being divorced from knowledge.  Those concepts make sense of problems and have transformed their teaching.  But for many Tiger Teachers, their knowledge of cognitive science appears to be based on a handful of poorly understood factoids acquired second or third hand from other teachers who don’t have a good grasp of the field either. Most teachers aren’t going to know much about cognitive science; but that’s why most teachers don’t do presentations about it at national conferences or go into print to share their flimsy knowledge about it.  Failing to acquire a substantial body of knowledge about cognitive science makes its comprehension, application, analysis, synthesis and evaluation impossible.  The Tiger Teachers’ disregard for principles they claim are crucial is inconsistent, disingenuous, likely to lead to significant problems, and sets a really bad example for pupils. The Tiger Teachers need to re-write some of the lyrics of their Battle Hymn.

A tale of two Blobs

The think-tank Civitas has just published a 53-page pamphlet written by Toby Young and entitled ‘Prisoners of The Blob’. ‘The Blob’ for the uninitiated, is the name applied by the UK’s Secretary of State for Education, Michael Gove, to ‘leaders of the teaching unions, local authority officials, academic experts and university education departments’ described by Young as ‘opponents of educational reform’. The name’s not original. Young says it was coined by William J Bennett, a former US Education Secretary; it was also used by Chris Woodhead, first Chief Inspector of Ofsted in his book Class War.

It’s difficult to tell whether ‘The Blob’ is actually an amorphous fog-like mass whose members embrace an identical approach to education as Young claims, or whether such a diverse range of people espouse such a diverse range of views that it’s difficult for people who would like life to be nice and straightforward to understand all the differences.

Young says:

“They all believe that skills like ‘problem-solving’ and ‘critical thinking’ are more important than subject knowledge; that education should be ‘child-centred’ rather than ‘didactic’ or ‘teacher-led’; that ‘group work’ and ‘independent learning’ are superior to ‘direct instruction’; that the way to interest children in a subject is to make it ‘relevant’; that ‘rote-learning’ and ‘regurgitating facts’ is bad, along with discipline, hierarchy, routine and anything else that involves treating the teacher as an authority figure. The list goes on.” (p.3)

It’s obvious that this is a literary device rather than a scientific analysis, but that’s what bothers me about it.

Initially, I had some sympathy with the advocates of ‘educational reform’. The national curriculum had a distinctly woolly appearance in places, enforced group-work and being required to imagine how historical figures must have felt drove my children to distraction, and the approach to behaviour management at their school seemed incoherent. So when I started to come across references to educational reform based on evidence, the importance of knowledge and skills being domain-specific, I was relieved. When I found that applying findings from cognitive science to education was being advocated, I got quite excited.

My excitement was short-lived. I had imagined that a community of researchers had been busily applying cognitive science findings to education, that the literatures on learning and expertise were being thoroughly mined and that an evidence-based route-map was beginning to emerge. Instead, I kept finding references to the same small group of people.

Most fields of discourse are dominated by a few individuals. Usually they are researchers responsible for significant findings or major theories. A new or specialist field might be dominated by only two or three people. The difference here is that education straddles many different fields of discourse (biology, psychology, sociology, philosophy and politics, plus a range of subject areas) so I found it a bit odd that the same handful of names kept cropping up. I would have expected a major reform of the education system to have had a wider evidence base.

Evaluating the evidence

And then there was the evidence itself. I might be looking in the wrong place, but so far, although I’ve found a few references, I’ve uncovered no attempts by proponents of educational reform to evaluate the evidence they cite.

A major flaw in human thinking is confirmation bias. To represent a particular set of ideas, we develop a mental schema. Every time we encounter the same set of ideas, the neural network that carries the schema is activated. The more it’s activated, the more readily it’s activated in future. This means that any configuration of ideas that contradicts a pre-existing schema has, almost literally, to swim against the electrochemical tide. It’s going to take a good few reiterations of the new idea set before a strongly embedded pre-existing schema is likely to be overridden by a new one. Consequently we tend to favour evidence that confirms our existing views, and find it difficult to see things in a different way.

The best way we’ve found to counteract confirmation bias in the way we evaluate evidence is through hypothesis testing. Essentially you come up with a hypothesis and then try to disprove it. If you can’t, it doesn’t mean your hypothesis is right, it just means you can’t yet rule it out. Hypothesis testing as such is mainly used in the sciences, but the same principle underlies formal debating, the adversarial approach in courts of law, and having an opposition to government in parliament. The last two examples are often viewed as needlessly combative, when actually their job is to spot flaws in what other people are saying. How well they do that job is another matter.
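The logic of falsification is simple enough to write down. Here’s a bare-bones sketch of my own (the swans are the textbook example, not anything from the reformers):

```python
# Falsification in miniature: a hypothesis survives only while no
# observation contradicts it. Hypothesis and data are invented examples.

def falsified(hypothesis, observations):
    """Return the first observation that contradicts the hypothesis, if any."""
    for obs in observations:
        if not hypothesis(obs):
            return obs
    return None

# Hypothesis: 'all swans are white'
all_swans_white = lambda swan: swan["colour"] == "white"

swans = [{"colour": "white"}, {"colour": "white"}]
print(falsified(all_swans_white, swans))  # None -> not yet ruled out

swans.append({"colour": "black"})  # a single counter-example is enough
print(falsified(all_swans_white, swans))  # {'colour': 'black'} -> a dead end
```

Note what the `None` result does and doesn’t mean: surviving the test doesn’t make the hypothesis right, it just means it can’t yet be ruled out – which is exactly the asymmetry described above.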

It’s impossible to tell at first glance whether a small number of researchers have made a breakthrough in education theory, or whether their work is simply being cited to affirm a set of beliefs. My suspicion that it might be the latter was strengthened when I checked out the evidence.

The evidence

John Hattie synthesised over 800 meta-analyses of studies of student achievement. My immediate thought when I came across his work was of the well-documented problems associated with meta-analyses. Hattie does discuss these, but I’m not convinced he disposed of one key issue: the garbage-in-garbage-out problem. A major difficulty with meta-analyses is ensuring that all the studies involved use the same definitions for the constructs they are measuring, and I couldn’t find a discussion of what Hattie (or other researchers) mean by ‘achievement’. I assume that Hattie uses test scores as a proxy measure of achievement. This is fine if you think the job of schools is to ensure that children learn what somebody has decided they should learn. But that assumption poses problems. One is who determines what students should learn. Another is what happens to students who, for whatever reason, can’t learn at the same rate as the majority. And a third is how the achievement measured in Hattie’s study maps on to achievement in later life. What’s noticeable about the biographies of many ‘great thinkers’ – Darwin and Einstein are prominent examples – is how many of them didn’t do very well in school. It doesn’t follow that Hattie is wrong – Darwin and Einstein might have been even greater thinkers if their schools had adopted his recommendations – but it’s an outcome Hattie doesn’t appear to address.
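The garbage-in-garbage-out problem is easy to demonstrate with invented numbers. In this sketch (mine, not Hattie’s), pooling effect sizes across studies that define ‘achievement’ differently quietly shifts the estimate:

```python
# Garbage-in-garbage-out in miniature: pooling effect sizes is only
# meaningful if every study measured the same construct. All figures
# below are invented for illustration.

def pooled_effect(studies):
    """Naive fixed-effect pool: weight each effect size by sample size."""
    total_n = sum(s["n"] for s in studies)
    return sum(s["d"] * s["n"] for s in studies) / total_n

studies = [
    {"construct": "exam score",    "d": 0.6, "n": 200},
    {"construct": "exam score",    "d": 0.5, "n": 150},
    {"construct": "self-reported", "d": 1.2, "n": 50},  # different construct
]

# Pooling everything mixes two definitions of 'achievement':
print(pooled_effect(studies))  # ~0.64, inflated by the mixed-in construct

# Filtering to one construct first gives a defensible estimate:
same = [s for s in studies if s["construct"] == "exam score"]
print(pooled_effect(same))  # ~0.56
```

A real meta-analysis weights by variance rather than raw sample size, but the moral is the same: the pooled number is only as meaningful as the consistency of what went into it.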

Siegfried Engelmann and Wesley C Becker developed a system called Direct Instruction System for Teaching Arithmetic and Reading (DISTAR) that was shown to be effective in Project Follow Through – an evaluation of a number of educational approaches in the US education system over a 30-year period starting in the 1960s. There’s little doubt that Direct Instruction is more effective than many other systems at raising academic achievement and self-esteem. The problem is, again, who decides what students learn, what happens to students who don’t benefit as much as others, and what’s meant by ‘achievement’.

ED Hirsch developed the Core Knowledge sequence – essentially an off-the-shelf curriculum that’s been adapted for the UK and is available from Civitas. The US Core Knowledge sequence has a pretty obvious underlying rationale even if some might question its stance on some points. The same can’t be said of the UK version. Compare, for example, the content of US Grade 1 History and Geography with that of the UK version for Year 1. The US version includes Early People and Civilisations and the History of World Religion – all important for understanding how human geography and cultures have developed over time. The UK version focuses on British Pre-history and History (with an emphasis on the importance of literacy) followed by Kings and Queens, Prime ministers then Symbols and figures – namely the Union Jack, Buckingham Palace, 10 Downing Street and the Houses of Parliament – despite the fact that few children in Y1 are likely to understand how or why these people or symbols came to be important. Although the strands of world history and British history are broadly chronological, Y4s study Ancient Rome alongside the Stuarts, and Y6s the American Civil War potentially before the Industrial Revolution.

Daniel Willingham is a cognitive psychologist and the author of Why don’t students like school? A cognitive scientist answers questions about how the mind works and what it means for the classroom and When can you trust the experts? How to tell good science from bad in education. He also writes a column for American Educator magazine. I found Willingham informative on cognitive psychology. However, I felt his view of education was a rather narrow one. There’s nothing wrong with applying cognitive psychology to how teachers teach the curriculum in schools – it’s just that learning and education involve considerably more than that.

Kirschner, Sweller and Clark have written several papers about the limitations of working memory and its implications for education. In my view, their analysis has three key weaknesses; they arbitrarily lump together a range of education methods as if they were essentially the same, they base their theory on an outdated and incomplete model of memory, and they conclude that only one teaching approach is effective – explicit, direct instruction – ignoring the fact that knowledge comes in different forms.

Conclusions

I agree with some of the points made by the reformers:
• I agree with the idea of evidence-based education – the more evidence the better, in my view.
• I have no problem with children being taught knowledge. I don’t subscribe to a constructivist view of education – in the sense that we each develop a unique understanding of the world and everybody’s worldview is as valid as everybody else’s – although cognitive science has shown that everybody’s construction of knowledge is unique. We know that some knowledge is more valid and/or more reliable than other knowledge and we’ve developed some quite sophisticated ways of figuring out what’s more certain and what’s less certain.
• The application of findings from cognitive science to education is long overdue.
• I have no problem with direct instruction (as distinct from Direct Instruction) per se.

However, some of what I read gave me cause for concern:
• The evidence-base presented by the reformers is limited and parts of it are weak and flawed. It’s vital to evaluate evidence, not just to cite evidence that at face-value appears to support what you already think. And a body of evidence isn’t a unitary thing; some parts of it can be sound whilst other parts are distinctly dodgy. It’s important to be able to sift through it and weigh up the pros and cons. Ignoring contradictory evidence can be catastrophic.
• Knowledge, likewise, isn’t a unitary thing; it can vary in terms of validity and reliability.
• The evidence from cognitive science also needs to be evaluated. It isn’t OK to assume that just because cognitive scientists say something it must be right; cognitive scientists certainly don’t do that. Being able to evaluate cognitive science might entail learning a fair bit about cognitive science first.
• Direct instruction, like any other educational method, is appropriate for acquiring some types of knowledge. It isn’t appropriate for acquiring all types of knowledge. The problem with approaches such as discovery learning and child-led learning is not that there’s anything inherently wrong with the approaches themselves, but that they’re not suitable for acquiring all types of knowledge.

What has struck me most forcibly about my exploration of the evidence cited by the education reformers is that, although I agree with some of the reformers’ reservations about what’s been termed ‘minimal instruction’ approaches to education, the reformers appear to be ignoring their own advice. They don’t have extensive knowledge of the relevant subject areas, they don’t evaluate the relevant evidence, and the direct instruction framework they are advocating – certainly the one Civitas is advocating – doesn’t appear to have a structure derived from the relevant knowledge domains.

Rather than a rational, evidence-based approach to education, the ‘educational reform’ movement has all the hallmarks of a belief system that’s using evidence selectively to support its cause; and that’s what worries me. This new Blob is beginning to look suspiciously like the old one.