Biologically primary and biologically secondary knowledge: Merlin Donald and David Geary 2

In the previous post I summarised Merlin Donald’s model of the evolution of the human mind described in his Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition. In this post, I look at David Geary’s model set out in his book The Origin of Mind: Evolution of Brain, Cognition, and General Intelligence.  As I said previously, the two books not only have almost identical titles, they also deal with very similar material. But the similarities are superficial.  Donald concludes that there have been three major transitions in human cognition, deriving his conclusions from artifacts indicating cultural shifts.  Geary bases his model of knowledge on a set of constructs, each of which is the subject of intense debate: motivation to control, intelligence, and brain modularity.

Motivation to control

The first construct Geary introduces is motivation to control (p.3) – a feature he sees as fundamental to all species in their struggle for survival.   Although he provides examples of what he means by the term (e.g. pp.72-80), I couldn’t find an explanation of which brain functions facilitate it, and the components of the construct remain rather fuzzy.  ‘Control’ could apply only to the immediate environment (e.g. food, water, shelter, social support), or extend to wanting world domination.  And I recall being required to write an essay on ‘motivation’ for an academic assignment – because it’s a great example of a term that’s used by different people to refer to very different things.  Ironically, motivation could be seen as a folk construct (of which more later); we all know roughly what the term refers to, but it’s so broad that to be used for research purposes it needs to be deconstructed.

Intelligence

Like Donald, Geary reviews the archaeological evidence showing the increase in relative brain size as the genus Homo evolved from its primate ancestors.  For Geary, the significant increase in EQ meant that H sapiens had superior general intelligence.  This enabled it to outcompete other species of the genus Homo for mates, food and other resources – which is how H sapiens achieved ecological dominance.   

Like motivation, intelligence is a term used to refer to different things by different people.  It’s been controversial since Charles Spearman came up with the idea of general intelligence (g) in 1904.  But that hasn’t stopped elaborations such as Raymond Cattell’s proposal that fluid intelligence (gF) is biologically determined, and crystallised intelligence (gC) is the outcome of an interaction between gF and sociocultural factors such as education.  Intelligence is, I’d suggest, another folk construct. 

Spearman defined intelligence in terms of particular academic skills (I’ve blogged about his model here). Donald (wisely) avoids using the term except in relation to Darwin’s views on animal intelligence.  Geary discusses intelligence and related brain function at length, but I struggled to pin down exactly what it is about the brain of H sapiens that Geary believes resulted in its ecological dominance; I think it’s the development of the prefrontal cortex.  Geary draws attention to three levels of function in this brain area (p.211):

  • Monitoring and integrating information from posterior areas of the brain
  • Attentional control and inhibition of irrelevant information
  • Episodic memory and self-awareness.

The last of these three functions, Geary claims, gives humans what Endel Tulving called autonoetic awareness – enabling us to imagine ourselves in the past, present and future.  For Geary, this enables us to imagine a ‘perfect world’ and use problem-solving and motivation to control to try to achieve it (p.16).  He summarises (pp.304-5) his chapter on the evolution of intelligence in terms of Spearman’s g and Cattell’s gF and gC – again resorting to contested constructs.

Modularity

Geary’s model leans heavily on the concept of modules – areas of the brain that have evolved to process a specific type of information. Modules process information automatically and pre-consciously, in ways that during evolution increased the chances of an individual’s survival.  The automatic and pre-conscious processing has also resulted in inherent cognitive errors and biases.  The upshot is that we tend to configure our knowledge about the world in ways that aren’t always logical or rational; we default to folk biology (living things), folk physics (the physical world and how it works) and folk psychology (human behaviour and interactions).

The fact that some areas of the brain process some information automatically and pre-consciously (ie in modular fashion) isn’t in dispute – but the extent of the modularity is. Geary points out that the modules would have evolved in response to environmental factors that were invariant at the macro level, but human beings also have to cope with a variable micro-environment.  He suggests the modules, although evolved for a particular purpose, are soft (pp.11-122), ie they have some plasticity.  That’s very likely, but that characteristic by definition blurs the boundary between his categories of biologically primary and biologically secondary knowledge.

In addition to the motivation to control, intelligence, and evolved modules on which Geary’s model is founded, some other examples of his taking a construct for granted caught my attention: notably competition, the central executive, and folk biology.

Competition

For Geary, evolution revolves around competition – initially social competition for mates, but in modern societies for occupational status (p.336).  Competition is certainly an important factor in the process of evolution, but the central feature of Darwin’s model was advantageous adaptation to the environment rather than competition as such; as Donald points out, competition usually arises only when resources are scarce.  And competition doesn’t explain the co-operation and altruism found in many human societies – a notoriously knotty problem for evolutionary psychologists.

Central executive

Another knotty problem is that of consciousness. I mentioned in the previous post that Donald concludes connectionist models of information processing indicate the central executive function (and therefore consciousness) isn’t modular – in a dedicated brain area – but is an emergent feature of a distributed network. 

Geary agrees with Donald – to an extent: “… that novelty and conflict result in automatic attentional shifts and activation of the executive function is important because it addresses the homunculus question. The central executive does not activate itself, but rather is automatically activated when heuristic-based processes are not sufficient for dealing with current information patterns or tasks…” (p.215).  But I couldn’t find any reference in Geary’s book to connectionist models despite their importance in cognitive neurology, and he still appears to see the central executive as modular.

Folk biology

Geary makes frequent references to Charles Darwin and Alfred Wallace – 19th century contemporaries who each developed a theory of evolution.  I’ve referred above to Geary’s assumptions about the role of competition in Darwin’s model.

Another assumption Geary makes in relation to Darwin and Wallace (and Carl Linnaeus the taxonomist) is that their ideas must have been based on folk biology “driven by an interest in the natural world” (pp.188, 311).  Not only is this a somewhat tautological claim – if Geary’s model is right, all knowledge is ultimately built on folk knowledge – but he also overlooks the backgrounds of these eminent scientists.  Both of Darwin’s grandfathers (Erasmus Darwin and Josiah Wedgwood) were founder members of the Lunar Society, a discussion group whose members were leading scientists and industrialists, Darwin’s father was a respected doctor, Wallace’s father trained as a lawyer, Linnaeus’ father was a clergyman and amateur botanist, and Darwin, Wallace and Linnaeus all attended grammar schools.  So from an early age, all three would have been exposed to far more than folk knowledge.

What’s the difference between Donald’s and Geary’s models?

Key differences I noted were:

  • Donald sets out to create a coherent explanation of the evolution of human cognition.  Geary explores evidence that supports his model of human knowledge. 
  • Donald’s framework emerges from the archaeological, neurological and psychological evidence; the details of the changes are debatable, but the major shifts must have happened for the artifacts to exist.  In contrast, Geary tries to fit the evidence into a framework composed of broad – often contentious – constructs.
  • Donald dissects the heated debates associated with several constructs (e.g. modularity, laterality, speech and language, the central executive function, consciousness – but interestingly sidesteps intelligence).  Geary appears to take the constructs for granted. 

Conclusion

Neither of these books is an easy read.  And both necessarily involve some speculation because there are gaps in the archaeological evidence and in our knowledge about cognition. But Donald sets out his reasoning step-by-step, so the diligent reader should end up with a good grasp of the evidence for the evolution of human cognition and the brain – even if his model is a little outdated because archaeology and cognitive neurology have moved on in the past 30 years.  Geary, in contrast, repeatedly tries to link up his constructs, which results in a fair bit of repetition, and left me struggling to see the forest for the trees.

When I first read Geary’s The Origin of Mind my focus was inevitably on the factual information – which appeared pretty reliable. But I noticed he sidelined the debates about the implications of the factual information, and made assumptions about the constructs on which his model is based. My concern about Geary’s book is that teachers unfamiliar with cognitive neurology will be blinded by science (Donald has around 300 references, Geary cites well over 1000), but be unaware that Geary glosses over the reasons his key constructs are contentious, and that his model rests on assumptions.

Also, I couldn’t see the point of Geary’s model.  He assumes that by default students think in terms of folk biology, folk physics and folk psychology (biologically primary knowledge), so knowledge that’s not folk biology, physics or psychology (biologically secondary knowledge) needs to be actively taught.  It’s helpful for teachers to know that logical rational thought requires some effort because it’s swimming against the tide of the way human cognition works, but even Geary struggles to find a clear boundary between biologically primary and biologically secondary knowledge.  And teachers usually know what their students have learned with no apparent effort and what they’re having difficulty with.  So how does it help them to draw a somewhat questionable line between two types of knowledge? 

Lastly, despite Donald being an academic at a reputable university (Case Western Reserve), and despite his book having a similar title, dealing with similar content, drawing conclusions about education, and being published by a reputable publisher (Harvard University Press) more than a decade earlier, Geary doesn’t mention him.   I couldn’t help wondering why.

Biologically primary and biologically secondary knowledge: Merlin Donald and David Geary 1

David Geary is an evolutionary psychologist, whose theory about biologically primary and biologically secondary knowledge has been influential in some educational circles. 

The theory proposes that people acquire biologically primary knowledge with little effort due to the way our cognitive processes have evolved. Biologically primary knowledge is acquired using processes that are fast, frugal, simple and implicit.  It’s acquired naturally, in the course of development, and encompasses walking, talking, foraging for food, social interaction, and explanations about the world in the form of folk biology, folk physics and folk psychology.

Biologically secondary knowledge requires logical, rational thought – using processes that are slow, effortful, complex and explicit.  Unlike biologically primary knowledge, biologically secondary knowledge needs to be taught, and the main purpose of schools is to transmit biologically secondary knowledge.  I’ve previously critiqued Geary’s book The Origin of Mind: Evolution of Brain, Cognition, and General Intelligence here.

Geary’s theory pops up on Twitter from time to time, and during a recent discussion Oliver Caviglioli (@olicav) asked me what I thought of Merlin Donald’s work.  I wasn’t familiar with it, so read Donald’s Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition.  You can see Caviglioli’s summary of the two books here.  I found the contrast between Donald’s and Geary’s books intriguing. 

One of these books is not like the other

The two books not only have almost identical titles, the content is also very similar; both open with a discussion of the archaeological evidence for the evolution of the hominin brain, discuss human cognition, and conclude with the implications for the transmission of knowledge.  But those similarities are quite superficial; in other respects Donald and Geary take very different approaches to their subject matter. 

Donald’s purpose is to provide a coherent account of how human cognition and culture evolved in tandem; in 1991, when his book was published, there were many competing explanations for human cognitive function, which he evaluates.  Geary, in contrast, publishing over a decade later in 2005 when knowledge about cognitive function had made considerable progress, seeks to marshal evidence to support his hypothesis that there are two distinct types of knowledge.

Between them the authors refer to several issues that are the subject of heated debate, such as the extent of modularity and laterality in the brain, whether language is hard-wired or not, the nature of the central executive and consciousness, and the implications for pedagogy. Both authors come down on one side or the other of the debates.  Donald tackles the debates directly and systematically, taking the reader step-by-step through his reasoning.  I felt Geary glosses over the actual debates, and assumes the side that fits his model must be correct.  But first, a topic where both authors are in broad agreement – the archaeological evidence…

Brain evolution

Both authors open with an exploration of the archaeological evidence indicating how the modern human brain evolved.  Brains, of course, are composed of soft tissue, and decompose rapidly after death.  So the structure and function of early hominin brains has to be inferred from the size and internal shaping of the skull.  There’s a huge difference in both brain size and body size between early australopithecines (3-4m years ago) and Homo sapiens (from around 500 000 years ago).  To arrive at a fair comparison of brain sizes, researchers use an Encephalization Quotient (EQ) that denotes brain size in relation to body size.  Modern humans have an EQ which is almost 3 times that of our nearest non-human relative, the chimpanzee. 
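For readers who want the arithmetic behind EQ, a standard formulation (Jerison’s, for mammals – my addition here, not a formula quoted in either book) is:

\[
\mathrm{EQ} \;=\; \frac{E_{\text{observed}}}{E_{\text{expected}}}, \qquad E_{\text{expected}} \approx 0.12\,P^{2/3}
\]

where E is brain mass and P is body mass, both in grams. Plugging in commonly cited ballpark figures – roughly 1,350 g of brain on a 65 kg body for a modern human, and roughly 390 g on a 45 kg body for a chimpanzee – gives EQs of about 7 and about 2.5 respectively, which is the ‘almost 3 times’ ratio mentioned above.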

Donald and Geary agree that the increase in EQ with the emergence of H sapiens was relatively rapid and relatively recent, and led to its ecological dominance; H sapiens proliferated, whilst other contemporaneous species (or sub-species) such as the Neanderthals and Denisovans died out, despite evidence that they interbred with H sapiens.

The increase in EQ would have come about due to natural selection – genetic changes resulting in incremental changes in structure and function that gave H sapiens advantages in interacting with the environment. Geary identifies three types of environmental factor: climatic, ecological, and social.  There’s no evidence of major climatic or ecological changes since H sapiens first appeared, so he concludes the critical environmental change must have been social.  Donald agrees about the climate and ecology, but proposes the significant change was a cultural one. Both authors are interested in what changes to brain structure resulted in H sapiens’ ecological dominance, but after that they part company; not over the archaeological or neurological evidence, but over how they evaluate the evidence and what conclusions they draw from it.  In this post, I’ll attempt to summarise Donald’s narrative.  In the next post I’ll look at Geary’s, and highlight the differences between them.

Donald’s narrative: cultural change

Donald points out that evidence of the social organisation of hominins, and the artifacts they produced, are an indication of how the different species’ brains worked. The artifacts can be dated reasonably accurately, so we have a fair idea of the order in which they appeared, and which species produced them.  Donald traces back the chain of causality from the archaeological evidence to the brain function that would have been required to produce it. 

The brains of the earliest hominins (such as the australopithecines) appear to have been similar to those of the great apes; their EQ was within the same range and, like the apes, they used tools.  The big shift came with the arrival of H erectus around 2m years ago.  Donald proposes that H erectus embodies the first of three key cultural changes that point to three key changes in the function of the hominin brain.  These are transitions from:

  • episodic to mimetic culture
  • mimetic to mythic culture, and
  • mythic to theoretic culture.  

Episodic culture

Many animal species appear to have episodic memory – the memory of particular events and associated sights, sounds, smells etc. Donald calls early hominin culture episodic culture – very similar to that of chimps.  Both had communities about the same size, could use tools, could mimic conspecifics using tools, and had a similar vocal apparatus – chimps in the wild use around 35 different vocalisations. 

Mimetic culture

For Donald, the first major cultural change was from episodic to mimetic culture.  By mimetic, Donald doesn’t mean mimicry; many species are able to copy behaviours.  Nor does he mean imitation (p.168) – copying the behaviour but adapting it.  By mimetic Donald means the ability to use representational symbols – mainly to communicate with others.  This ability opens the door to more sophisticated communication using vocalisation and gestures, to more complex social structures, and to advanced tool-making and hunting techniques – all characteristics of H erectus.  Not only did H erectus have a significantly larger EQ, but it also produced sophisticated tools, used fire, and migrated over long distances.

Mythic culture

The second major shift was to what Donald calls mythic culture.  What he means is the integration of fragmented mimetic knowledge into coherent wholes that provided shared explanatory narratives for communities.  The ability to integrate knowledge also resulted in the ability to reconfigure it, enabling problem solving.  So mythic culture involved innovation, which characterised H sapiens, who made novel tools such as needles, and created clothing and cave paintings.  H sapiens also showed significant changes in skull and jaw shape, suggesting the development of a vocal apparatus that enabled speech, as distinct from mere vocalisation – although there is evidence that speech might have evolved earlier.   A species that could communicate via complex speech and language would have an obvious evolutionary advantage over species that couldn’t.

Theoretic culture

Donald’s third cultural shift happened with the recording of symbolic representations on clay or wax tablets, papyrus, parchment, paper – or more recently electronic media.  Modern human cognition, regardless of the sophistication of brain function, is limited by working memory – we can process only a handful of items of information simultaneously, and only for a few seconds.  So regardless of how much knowledge we have, individually or collectively, it’s difficult to recall enough information mentally to tackle very complex problems.  Writing (notably the use of an alphabet) changed all that.

Donald notes that Ancient Greece had an economy prosperous enough to allow some members of its society time to reflect on and expand existing knowledge.  And the cosmopolitan nature of Greek society prompted challenges to explanatory myths.  Both changes were facilitated by the development of a functional phonetic alphabet that enabled speech to be recorded, so people could read and critique earlier philosophical discourse.  Donald points out that accessing large quantities of written material required metalinguistic skills which in turn led to what he calls theoretic culture. 

Will brains evolve further?

The first two cultural shifts required an additional level of information processing in the brain: to enable the use of representational symbols in mimetic culture, and the integration of mimetic knowledge in mythic culture.   Donald makes the point that the development of a new brain function didn’t mean the functionality that evolved earlier was erased; he notes that in patients who have lost the ability to speak, the mimetic ability to communicate using representations such as gestures and symbols is often preserved.

Donald’s third shift – to theoretic culture – required the development of an external memory store in the form of written material.  This has implications for brain evolution too. Brain structure changes with use, and individuals whose brain structure facilitates the use of external memory will be at an evolutionary advantage in a highly technological environment, which could result in long-term changes to the structure and function of the brain of H sapiens. 

Disputed concepts

Donald’s model is at some points necessarily speculative because the archaeological record isn’t exhaustive, and what happened during long periods of time remains unaccounted for.   But overall, his model is strongly evidence-based.  He explores the archaeological evidence, and how it’s been interpreted.  He explains how the modern human brain works – and what can be inferred when parts of it don’t work following injury. He tackles head-on, and explores in detail, some of the most disputed concepts in the field of human cognition, three of which I’ve summarised below: whether thought is mediated by language, the extent of modularity in the brain, and the nature of consciousness.

Speech, language and thought

The development of speech and language was a pivotal point for the development of human cognition. Donald looks at: the lateralisation of the brain involved in handedness and language; how the vocal tract works to produce speech; theories about the extent to which language is hard-wired; and how speech and language are related to thought.  Contrary to much popular opinion, Donald argues that evidence as diverse as that from aphasic patients and mathematical theorists shows that internal symbolic representations precede linguistic representation.

Modularity

In this context, a module is an area of the brain dedicated to processing a particular type of information, such as the visual information involved in face recognition, or auditory information involved in understanding speech. Modular information processing is automatic and fast, and introspection into the process is difficult (or impossible) for the person doing the processing. Donald concludes that some processes are entirely modular, others aren’t, and for some processes, it’s unclear. 

Consciousness

Consciousness is a phenomenon that has perplexed philosophers for centuries.  We’re unaware of most of the information our brains process, so how do we become aware of some of it at some times?  The function of integrating and evaluating information is often attributed to the central executive – although when the term was first used in Baddeley & Hitch’s model of working memory it wasn’t clear whether this function was modular (ie in a dedicated brain area) or distributed. 

The central executive introduces an infinite regress problem; if the central executive decides what information to pay attention to, what is it that’s paying attention? This is the same problem posed by the old idea of a homunculus in the brain directing operations.  Donald concludes that connectionist models of information processing suggest the strongest neural signals are synonymous with attention, indicating that the central executive isn’t modular, but is an emergent feature of the brain’s neural network.  That is, the central executive and consciousness are not distinct brain processes – they are outcomes of brain processes.

Implications for education

Understanding complex knowledge via written information requires metalinguistic skills that have implications for pedagogy.  Students not only have to learn a body of knowledge, but to access more knowledge they need to learn to read and write, learn the conventions of scholarship, and how to evaluate and critique arguments and theory.  In terms of current educational debate, just to be clear, Donald isn’t suggesting that ‘skills’ are more important than ‘knowledge’ – he sees both as essential.

Donald sees storing knowledge externally in books (and now computer systems) as essentially providing a new cognitive architecture that enables ready access to vast amounts of knowledge.   And – again just to be clear – he doesn’t mean that to access knowledge we can ‘just Google it’; his book was published in 1991, before the internet was widely available. 

In the next post, I’ll attempt to summarise Geary’s narrative, and to highlight the differences between his and Donald’s conclusions. 

the best which has been thought/said/known/written – take your pick

I’ve seen Michael Young’s The Rise of the Meritocracy cited so many times recently, I thought I should read it.  Published in 1958, it’s written from the perspective of a sociological analysis looking back from 2034.

Young speculates on the long-term changes resulting from the 1944 Education Act.  The introduction of the 11+ test had made a grammar school education available to many bright children from families who couldn’t previously have afforded it.  Young suggests that in future, the UK will be governed not by those from historically wealthy families, but by those with most intellectual ability – a ‘meritocracy’.

One of the unintended and unwanted outcomes Young forecasts is that low-income families would no longer be able to console themselves with the idea that the rich were often stupid. In the brave new world, the elite would be both wealthy and clever.  So children who weren’t especially academically able would find themselves stuck in low-paid occupations. This would have significant social, economic and legislative implications.  Young is especially critical of comprehensive education.  Was the criticism serious or satirical?  I couldn’t tell.

Suspending disbelief

We’re all familiar with novels about the future.  But they are novels; we know to suspend our disbelief so we can focus on the author’s key themes.  Young’s book isn’t a novel.  A barrister and one-time director of research for the Labour Party, he wrote The Rise of the Meritocracy as a satirical socio-political treatise.  He had difficulty getting it published.  The Fabian Society turned it down, as did eleven other publishers.  Eventually a chance meeting with the founder of Thames & Hudson allowed it to see the light of day.

I found the book perplexing for several reasons:

References The Rise of the Meritocracy is written as an academic treatise, and cites references that support its arguments.  The pre-1958 references are probably authentic (I tried to track some down but failed; that’s not surprising, as pre-internet references are often not cited online).  The post-1958 references were originally fictitious, but a revised edition of the book was published in 1994 and its new introduction contains authentic post-1958 references.  I felt the references were neither serious nor satirical, but rather superfluous.

Satire   The book is intended to be satirical, but the satire would be lost on any reader unfamiliar with the detail of policy issues facing the Labour Party or the education system in the 1950s.  

Anachronisms Young couldn’t possibly have predicted the huge economic, social and political upheavals that would take place between 1958 and 2034.  But despite expecting his readers to imagine the book was written in 2034, it’s written very much from a 1950s perspective. It’s as if the 1944 Education Act and its consequences were the only significant changes during the following 90 years.

This produces some jarring anachronisms.  IQ, for example, crops up frequently in the text, despite single-figure IQ measures being called into question long before the 1950s and multi-dimensional assessments being in use since the beginning of the 20th century.

Another is the importance of trade unions.  Again, Young couldn’t have foreseen the forthcoming crises in the union movement, but I couldn’t tell if his predictions were serious or satirical, or both.

And then there’s the use of ‘he’ as the default pronoun, despite women’s equality being a very live issue at the time due to women having done ‘men’s’ work during WW2, and equal pay being very much on the political agenda. Women are marginalised in the workplace in Young’s vision of the meritocracy, so maybe he was being satirical.  Who can tell?

It’s a small world

By this time, I was puzzled by how frequently I’d encountered references to Young’s concept of ‘meritocracy’ recently.  The Rise of the Meritocracy might have made interesting and entertaining reading in the 1950s, but other than highlighting some issues about education and social mobility, it didn’t seem especially informative or currently relevant.

Then I reached page 159 (of 180 pages).  There, Young quotes from a fictitious ‘Chelsea Manifesto’ published by the Technicians Party in 2009. The Manifesto refers to “the best that has been thought and known in the world”, a modified quotation from, as Young puts it, “the almost forgotten Matthew Arnold”. What Arnold actually said in the preface to his book Culture and Anarchy was “the best which has been thought and said in the world” [my emphasis], but you could be forgiven for assuming Young was quoting Arnold directly.  That misquote provided a plausible explanation for why I’d seen Young (and Arnold) cited so frequently recently.

In January 2010 Civitas published a book by David Conway called Liberal Education and the National Curriculum. Conway deals with Matthew Arnold’s ideas in some detail – and quotes him correctly.

In May 2010 Michael Gove was appointed Education Secretary for the new Coalition government.  In 2011 the West London Free School opened, the first free school in the country to sign a funding agreement with the Secretary of State for Education – Michael Gove.  The West London Free School was co-founded by Toby Young, Michael Young’s son.  It was soon after this that Michael Gove began using the Arnold quote – or rather ‘started throwing mangled versions of it around in lofty speeches’ as Phil Beadle puts it in an article on the Teachwire site.

In 2011 Gove referred to “the best that has been thought and written” [my emphasis] in a speech to Cambridge University, and again in 2012 in a letter to Tim Oates, a director of Cambridge Assessment. In 2013 the same misquote appears in Gove’s (in)famous ‘Mr Men’ speech to teachers at Brighton College. And in 2014 in an interview with Anthony Horowitz, and in a speech to Policy Exchange.

Now, it could be that Michael Gove is a big fan of Matthew Arnold. But if so, why does he misquote him and miss so many opportunities to name-drop? I’m picturing a more likely explanation: that Toby Young mentioned his dad’s Arnold reference to Michael Gove, who thought it would make a good soundbite.  That would explain why Arnold and Young senior were suddenly back in vogue.

In 2014 Civitas published a somewhat less scholarly work – Toby Young’s Prisoners of the Blob. On the Civitas webpage about Young junior’s book, Arnold is quoted accurately – twice.  And interestingly, Toby refers to 1950s education and Harold Wilson describing comprehensive schools as ‘grammar schools for all’. 

Would Michael Young have agreed?  Having tried to untangle the fact from the fiction, the satire from the seriousness, and the quotes from the misquotes – I have no idea.

References

Arnold, Matthew (2006). Culture and Anarchy. Oxford World’s Classics.

Young, Michael (2017). The Rise of the Meritocracy. Routledge.

Hobbes’ Leviathan and the dangers of implicit assumptions

I’ve just read Thomas Hobbes’ Leviathan*.  All I knew about the book beforehand was Hobbes’ proposal that only a sovereign with absolute authority could prevent human life being ‘nasty, brutish and short’. This was puzzling because Hobbes lived through the English civil war, caused in large part by Charles I acting in an autocratic manner. Leviathan explains where Hobbes’ idea came from.

Hobbes was born near Malmesbury, Wiltshire, during the attempted invasion by the Spanish Armada in 1588. His father, a clergyman, disappeared from the scene following an assault on a parishioner, and young Thomas was supported by his uncle, who provided him with a good education. Hobbes proved an able scholar and became fluent in Greek and Latin, enrolling at Oxford University in 1601, later transferring to Cambridge, and graduating in 1608. He became a tutor to the Cavendish family, an association that was to last his whole life.  As civil war loomed in 1640, Hobbes fled to Paris, where for a couple of years he tutored the future Charles II.

Leviathan

Leviathan, Or The Matter, Form, & Power Of a Common-Wealth Ecclesiastical And Civil was written following a serious illness, and published in 1651 while Hobbes was still in exile.  The title is taken from the book of Job 41.33-34; ‘Leviathan’ is the name of a sea-monster (28.27): “…upon earth there is not his like, who is made without fear.  He beholdeth all high things; he is a king over all the children of pride.”

For Hobbes, all truth is God’s truth.  In the past, God had revealed his truth directly “as one man speaketh to another” (35.3), but now there were only two sources – nature and holy Scripture (the Bible).  In Leviathan Hobbes explicitly uses both sources to make his case.

The book is in four sections:  Of Man, Of Commonwealth, Of A Christian Commonwealth, and Of The Kingdom of Darkness.  In the first, Hobbes attempts a systematic analysis of human nature. That forms the basis for his exploration in the next section of what a collective commonwealth or social contract could look like, based on “the principles of nature only” (32.1). He then moves on to truth revealed in the Bible.  ‘Of a Christian Commonwealth’ introduces the principles of “supernatural revelations of the will of God” (32.1).  ‘Of The Kingdom of Darkness’ covers the misinterpretation of Scripture, ‘vain philosophy’, ‘fabulous traditions’, and cui bono (who benefits).

Hobbes’ argument

Having set out the characteristics of human nature in ‘Of Man’ Hobbes argues the natural state of man is one of war (13.9).  The only remedy, he concludes in ‘Of Commonwealth’, is for people to surrender some of their liberty to a sovereign monarch or assembly with absolute power, who would then be able to best protect them (21.9).

In ‘Of A Christian Commonwealth’ Hobbes draws on evidence from the Bible. He points out “God not only reigned over all men naturally by his might; but also had peculiar subjects” (35.3).  God had made covenants with these peculiar (special) subjects; first with Adam, then Abraham and his descendants.  The covenant with Abraham was renewed when God gave Moses the Law.  A new covenant had been made through the death and resurrection of Jesus – this time with Christian believers.  

Hobbes argues that God is the true sovereign of his special peoples, but that “…by the Kingdom of God, is properly meant a Common-wealth, instituted (by the consent of those which were to be subject thereto) for their civil government… God was King, and the high priest was to be (after the death of Moses) his sole viceroy, or lieutenant” (35.7).  The Law of Moses was both ecclesiastical and civil, and the high priest had both ecclesiastical and civil powers. God later allowed the powers to pass to a king, and Hobbes sees this structure of government continuing in the Christian era, despite God’s covenant changing substantially.

The evidence

Hobbes argues carefully, relies heavily on evidence, and counters common objections to his model of government.  But he frequently glosses over any evidence that contradicts his view.  Here are some examples…

From nature 

Hobbes was right that war had been a constant scourge throughout human history, and most people had led lives that were “solitary, poor, nasty, brutish and short”.  Many wars had doubtless occurred because monarchs didn’t have enough power to keep their subjects safe. But Hobbes takes these observations to their logical conclusion, even though logical conclusions aren’t inevitable in real life. After all, there had been times of peace, regions that managed to escape war for long periods, and not everyone’s life had been nasty, brutish and short.  Hobbes himself had led a reasonably comfortable (if very eventful) life, dying at the ripe old age of 91.

From the Old Testament 

To justify his argument for a sovereign rather than a priest being God’s viceroy, Hobbes cites events described in I Samuel 8. Samuel had appointed his sons as judges, but they “turned aside after lucre, and took bribes, and perverted judgement”.  The elders of Israel complained to Samuel and said “now make us a king to judge us like all the nations”.  Samuel consulted God, then pointed out in detail the downside of having a king – essentially ‘he’ll take all your stuff’. But the elders persisted, so God said “Hearken unto their voice, and make them a king”.

Hobbes recognises the elders’ complaint about the corruption of Samuel’s sons was a pretext, when really they wanted a king “like all the nations”.  The children of Israel had form when it came to being like other nations: worshipping foreign gods, making graven images, building altars in high places, etc. You can almost hear God sighing as he lets his people have what they ask for. Hobbes is aware that the elders were deposing the high priest as God’s viceroy or lieutenant, but sees that as OK because God agrees, and glosses over the ‘like all the nations’ point.

From the New Testament 

Hobbes is aware that the new covenant with Christian believers raised big questions about ecclesiastical and civil government. He acknowledges that: the new Kingdom of God is a spiritual one and won’t become an earthly one until Jesus returns; Christian believers don’t all live in the same geographical area with the same laws and the same king; there are new biblical instructions for appointing church leaders; and the Roman Catholic church had both ecclesiastical and civil powers, whereas Hobbes recognised the authority of the Church of England (33.1). How does he resolve those tensions?

Hobbes maintains God is still king over all, still appoints earthly viceroys or lieutenants, and God’s law remains both ecclesiastical and civil (42.10).  Churches should follow biblical principles for their governance (including voting for church leaders), but the job of the church is to persuade people of the truth, not to coerce them (42.8-10).  And the Roman Catholic church is merely a church; sovereigns can consult the Pope on matters of religion, but then “the Pope is in that point subordinate to them” (42.80).

But to justify his model Hobbes cites biblical passages exhorting Christians to obey those in authority because they’re ordained by God (42.10). For Hobbes “this obedience is simple” (20.16). But he overlooks corollary exhortations in the same passages: that husbands should love their wives as Christ loved the church, masters should treat their servants justly, and that the duty of those in authority is to promote good and prevent evil. Ironically, he also cites a response from Jesus to a question about authority that shows Jesus didn’t think obedience was at all simple.

The tribute question 

During Jesus’ life on earth, the inhabitants of Judea were required to pay taxes to the occupying Romans. The Pharisees and Herodians (supporters of Herod Antipas, the Jewish ruler), seeking to “entangle him in his talk”  (Matthew 22.15), asked Jesus whether it was lawful to pay tribute to Caesar.  It was a trick question – Jesus knew either ‘yes’ or ‘no’ would have been the wrong answer. So he requested a tribute coin and asked whose image and superscription was on it. The reply: “Caesar’s”. The coin was probably a Tiberian denarius, which bore abbreviations meaning “Caesar Augustus Tiberius, son of the Divine Augustus, Highest Priest”.

Jesus’ response – “Render unto Caesar the things which are Caesar’s and unto God the things which are God’s” – is sometimes interpreted as drawing a distinction between the secular and the spiritual.  But for first-century Jews, and for Hobbes, that distinction didn’t exist.  Jesus’ audience would have realised the significance of what he said; all things were God’s, so Caesar had power only because God permitted it.  On top of that, the coin carried the graven image of a man who claimed to be divine and a high priest – claims that amounted to blasphemy. Jesus was making the point that earthly rulers were also obliged to keep God’s law. But Hobbes doesn’t comment on the nuance of Jesus’ reply (20.16).

Hobbes’ response to issues such as rulers doing evil, or ordering people to do evil or to deny their faith, is that faith is a private (internal) matter that no ruler can control.  And if you disobey the ruler for good reason, you take the consequences, but ultimately that doesn’t matter because you’re answerable to God and your reward will be in heaven (43.23). The reason Hobbes skirts round evidence that contradicts his model becomes apparent in the last section of the book – ‘Of The Kingdom of Darkness’.

Universities, Aristotle and evidence

Up to this point, I’d seen Hobbes as a rationalist/empiricist. After all, he’d met Galileo and Descartes, emphasised reason, dismissed superstition, and based his argument on a systematic evaluation of evidence. Like most of his contemporaries he also believed in God and in the truth of the Bible, but not uncritically – he was aware of the issues around the authority of Scripture (33). But reading ‘Of The Kingdom of Darkness’, it dawned on me I’d misunderstood Hobbes’ worldview.

Hobbes is scathing about his university education.  He’s also very critical of Aristotle.  Initially, I assumed Hobbes’ complaint was that although Aristotle made errors, the university accepted Aristotle’s teaching uncritically; he says it didn’t teach proper philosophy, but rather ‘Aristotelity’ (46.13).

The penny didn’t drop until I reached Hobbes’ reference to “Aristotle, and other heathen philosophers” (46.32), even though he had previously complained the University taught Roman religion, Roman law, and the art of medicine, “and for the study of Philosophy it hath no otherwise place, then as a handmaid to the Roman Religion” (46.13). But Hobbes didn’t just think the ancient Greek and Roman philosophers were wrong about some things – he thought they couldn’t be right because they were heathens.

Implicit assumptions

When reasoning, we all start with assumptions. These are often implicit, either because we’re not fully aware of them, or we take it for granted that others share them.  I know from bitter experience that implicit assumptions can easily lead to wrong conclusions, or can result in disputes that could have been avoided had the assumptions on all sides been made explicit.  

Hobbes sees history as God’s plan unfolding, and his truth gradually being revealed. That plan included a new covenant with Christian believers, and God appointing earthly rulers with ecclesiastical and civil powers, with the church subservient to those rulers.  Conveniently for Hobbes’ model, that’s exactly what had happened when Henry VIII founded the Church of England in 1534.  Hobbes even views the Authorised Version of the Bible as canonical because James I decided it was (33.1).

Hobbes is critical of Aristotle because Aristotle’s religious beliefs (implicit assumptions) shaped his theories about the physical world – for example attributing the motion of inanimate objects to their inherent characteristics (46.24). And philosophers’ uncritical acceptance of Aristotle’s essentialism had led to absurd ideas about souls (46.15ff). 

But Hobbes had developed a blind spot when it came to the impact of his own religious beliefs on his thinking about government.  Hobbes’ conclusion that kings are divinely appointed is based only on evidence that supports that conclusion.  And his belief in his conclusion means he repeatedly overlooks evidence that contradicts it.

My implicit assumption that Hobbes’ worldview was a rational-empirical one, rather than one based on religious belief and confirmatory evidence only, was due to the opening chapters of Leviathan ticking the rational-empirical boxes.  I had to read a considerable amount of counter-evidence before it dawned on me I was wrong.  For me, Hobbes’ Leviathan has been an object lesson in checking implicit assumptions.

*I read the Oxford World’s Classics edition of Leviathan, edited by J.C.A. Gaskin and reissued in 2008. It follows Hobbes’ paragraph numbers and headings. You can also read the Project Gutenberg edition here. It has the paragraph headings, but not numbers. I also referred to the Authorised Version of the Bible (first published in 1611), which Hobbes would have been familiar with.

Civitas and coronavirus

Civitas recently published a paper entitled Is Coronavirus unprecedented? It’s a good question. The review is subtitled A brief history of the medicalisation of life, and the first six chapters offer a fascinating account of how disease in general, and epidemics in particular, have been perceived from the 4th century BC onwards. Evidence includes accounts from Thucydides, Bede, Boccaccio, Machiavelli, Defoe and Camus, describing epidemics such as typhus, bubonic plague, smallpox and cholera. The review encompasses models of medicine, citing Hippocrates, Lucretius, Galen, Chaucer, Bacon and Hobbes. The authors also examine the outcomes of attempts to prevent the spread of disease, such as the forced isolation of infected communities.

The lessons the authors seem to want us to learn are that pandemics are “part and parcel of human existence” (p.19); that the “startled overreaction” of governments to the current Coronavirus pandemic is a result of the “exaggerated pursuit of national health” (p.vii) and the medicalisation of modern life; and that measures to prevent the spread of pandemics often do more harm than good. There’s some truth in all of those conclusions, but the authors arrive at them only by overlooking several important factors. Let’s take each conclusion in turn.

pandemics are part and parcel of human existence
Until relatively recently, that was true. And people accepted it, but only because there was no alternative; as the authors point out “whether populations grew or shrank had little to do with medicine despite its best efforts” (p.39). But the acceptance of pandemics as a fact of life was a reluctant one, as indicated by historic responses to plagues. Infected individuals, households or communities were isolated, some people turned to strict religious observance, some fled from cities to the country if they could, and if they couldn’t, they’d often abandon themselves to a “‘shameful and disordered life’” (p.12). Plagues, although part and parcel of life, were seen as a scourge.

In recent decades things have changed. Smallpox has been eradicated, and progress is being made towards eradicating polio, malaria, syphilis, measles, rubella and rabies. Most people, throughout history, would probably have seen that as a good thing.

the ‘exaggerated pursuit of national health’ and the medicalisation of modern life
Has the attempt to eradicate some diseases led to the medicalisation of modern life? ‘Medicalisation’ of normal life does occur, notably in respect of responses to adverse life events or poor living conditions. People who feel sad or anxious are often considered to have ‘depression’ or ‘anxiety’, and to require medication, when they’re actually experiencing a normal response to circumstances. But doctors can’t always tell whether or not those people will recover spontaneously given time, and often medicate because they don’t have time to diagnose properly in a 10-minute appointment, support services have long waiting lists, and dealing with environmental causes is beyond their remit; at least medication can help patients get on with their lives in the meantime.

But a viral infection doesn’t need to be ‘medicalised’ to damage health – it does so regardless of how people categorise it. And medical knowledge about its infectivity, symptoms, and how to treat them is essential to governments making socio-economic decisions.

The authors seem to see the possibility of eradicating diseases as naively utopian, and as opening the door to authoritarianism: “After 1945, WHO programmes of disease eradication reinforced the authority of science and the medicalisation of life” (p.36). This prompts a rather odd conclusion: “Whether populations grew or shrank …changed utterly after 1945, and in not very well-understood ways” (p.39). On the contrary, the ways in which it changed are very well understood, but have been explored in fields other than theology and political science – the authors’ specialisms.

measures to prevent the spread of pandemics often do more harm than good
The review points out that the cordons sanitaires put in place to isolate infected communities and prevent plague spreading often caused additional problems. Trade ceased and food shortages occurred, triggering civil unrest. If the cordon were policed by the military following a time of conflict, the unrest could also be political (p.27). Isolation measures undoubtedly cause harm and do economic damage. But the authors blithely overlook the catastrophic damage caused by not isolating infected people. The disruption to normal life resulting from widespread death, sickness, and long-term health problems in survivors during a pandemic has been enormous.

The authors see Coronavirus as a “mild contagion” (p.34), and claim “governments embraced an epidemiological prediction of death rates of 1 per cent of the West’s population unless they locked down the economy, quarantined households and suspended all non-essential activity” (p.viii).

That’s not the case. The mortality rate for Coronavirus was estimated at 1% if nothing were done to prevent it. Lockdown wasn’t the only option. If the findings of Exercise Cygnus, carried out in 2016, had been acted on, and national and local plans put in place for responding to a highly contagious virulent infection, the UK could have had the capacity to test and trace, and to manufacture sufficient PPE, so lockdown could have been avoided entirely. But that didn’t happen, probably because in 2016 the UK government was focussed on Brexit rather than public health. The findings of Exercise Cygnus were classified, but were leaked by The Guardian in May 2020. The report indicated that the UK was poorly prepared for a serious epidemic. Lockdown was necessary only where countries lacked test and trace capability. Describing the pandemic as ‘unprecedented’ is a convenient way of distracting attention from that.

It’s also worth noting the review doesn’t mention the influenza pandemics of 1957 (Asian flu) and 1968 (Hong Kong flu), both after the formation of the NHS. In the UK, life for those uninfected carried on much as usual (although this Lancet article shows a typist wearing a mask).

There were reasons for the nation just carrying on. In the 1950s and 1960s, epidemics were the norm. There were annual outbreaks of measles, mumps, rubella, whooping cough and chicken pox. There were sporadic outbreaks of smallpox and diphtheria. Intensive care facilities were relatively basic so only a limited number of people would have benefited from hospital admission. In the 1957 Asian flu epidemic, the death rate was estimated at 0.3%, less than a third of the rate for Covid-19, but there were significant economic consequences. Factories, offices and mines closed, and sickness benefit payments amounted to £10m.

Even Alex Tabarrok, a libertarian economist, cites the growth rate of the US economy following the 1957 Asian flu pandemic as -4% in the last quarter of 1957 and -10% in the first quarter of 1958. But as he points out, many references to this recession don’t even mention the pandemic as a contributory cause.

conclusion

Pandemics have indeed been part and parcel of human existence, and will continue to be. However virulent or infective they are, they have a devastating effect on human wellbeing, by their impact on mortality rates, health or the economy. We have the technology and knowledge to minimise that damage, as happened in the SARS-CoV outbreak in 2002-4, and in several outbreaks of Ebola since it was first identified in 1976.

Inadequate preparation was identified as a cause of the damage done by the 1957 flu epidemic, and inadequate preparation was directly responsible for the lockdown put in place to limit the spread of Covid-19.

The authors refer to the “anxious insecurity” they claim has been caused by the “medicalisation of life” (p.39) but overlook the anxious insecurity, panic, grief, and economic devastation caused by disease that dogged human beings until the advent of modern medicine.

The authors of this report do something that I’ve seen increasingly recently. They begin with a belief, cite evidence that supports their belief, and overlook evidence to the contrary from relevant fields – in this case biology, medicine and economics. Another case of policy-based evidence, rather than evidence-based policy.

reference

Jones, DM & Webb, E (2020). Is Coronavirus unprecedented? A brief history of the medicalisation of life. Civitas.

Michaela: duty, loyalty and gratitude

duty and loyalty
In ‘National Identity’, his chapter in The Power of Culture, Michael Taylor explains that the Michaela Community School’s values are communitarian (p.78). Communitarianism in turn is based on the principle of self-governing small communities. The idea is that communities are essential for individuals to thrive, and in return for community support, individuals are expected to ‘give something back’. Michaela students’ obligations to the school, the wider community and the nation are framed in terms of duty.

Michael sees loyalty as a corollary of duty, and claims “The family and local community are an integral part of this, but the most logical point of our loyalty, whilst leaving plenty of room for critical analysis, should be to the nation”. He goes on, bizarrely, to frame rights in terms of possessing a passport: “As well as ensuring that pupils know that they have certain rights which are accorded to them by virtue of having a British passport, they also have a series of obligations and responsibility to their fellow citizens” (p.78). Do only people with passports have rights?

It’s clear that Michaela teachers feel a strong sense of duty toward their students. They’re committed to ensuring these young people grow into knowledgeable, civilised adults who lead fulfilling lives. But the emphasis in this book is on the students’ duty, rather than the teachers’. There are hints that’s because Michaela students tend to arrive with an awareness of their ‘rights’, but not of the responsibilities that go with them.

rights and responsibilities
Michaela doesn’t seem to think much of the contemporary emphasis on ‘rights’. Michael says that to “move away from the appalling world views and racism that have led to so much misery” is ‘admirable’ but that “embracing diversity in this country is often associated with a rejection of Britishness and in particular, Englishness” (p.74). And “we have gone too far in Britain in creating a culture where a significant number of people appear to believe that rights are not always mirrored by responsibilities” (p.78).

As a history teacher, Michael must be aware of how the current focus on rights came about. For centuries British people (in common with the rest of the world) either had rights granted (or withdrawn) by a powerful minority, or they had to fight for rights, sometimes at great cost. And not always against invaders – the powerful minorities were usually distinctly British, and in particular, English. Mass education and improved communication have resulted in people becoming increasingly fed up with the focus being on their responsibilities rather than their rights, and many feel it’s time that changed.

Why would the Michaela narrative (Michaela is keen on narratives) overlook the inequity inherent in British history? My guess is that it would call into question the school’s rather hierarchical view of society and the value of the high status positions students are expected to aim for.

I agree the contemporary emphasis on rights glosses over responsibilities. It’s possible that Michaela students are taught about their responsibilities and rights, but I didn’t spot any evidence of that. Instead, the school seems to have given the rights-and-responsibilities pendulum a hefty shove in the direction of responsibilities. That’s understandable, given the current climate, but isn’t going to help students comprehend their role in a democratic society.

the social contract and entitlement
Something noticeable by its absence from The Power of Culture is the concept of the social contract. That’s odd, because Michaela is keen on British culture, and the social contract is largely a British idea (e.g. Hobbes, Bacon, Locke) that underpins our constitution. The term social contract usually refers to a principle of national governance, but can be used to describe any social agreement between an individual and a group. Social contracts vary between individuals and change over time; they’re fluid, flexible arrangements that can be explicit (enshrined in law for example) or implicit (people might not be aware that there is a social contract until someone breaks it).


Why is the social contract missing from the Michaela model? I’d hazard a guess that’s because Locke and Rousseau subscribed to it, and they of course, are associated with ‘progressive’ education – a no-no for Michaela.

Michael claims “the antithesis of duty is entitlement” (p.78). I’m not sure duty has an antithesis as such, although a sense of entitlement can undermine a sense of duty. But as residents of the UK, Michaela students do have entitlements, and it’s OK to feel entitled to them; duties and entitlements can exist side-by-side. The social contract can include entitlements. In the UK, for example, all children are entitled to an education (although in English law it’s framed in terms of a parental duty). Children are entitled to a place at a state school if their parents request one. The state recruits and pays teachers to provide a suitable education for those children, which brings us to another key feature of Michaela culture – gratitude.

gratitude
Michaela students are expected to express gratitude for the work their teachers and other school staff do, via verbal ‘appreciations’ at lunchtimes (followed by two claps), and via written postcards (there are examples on pp.129-30 in Iona Thompson’s chapter ‘The Culture of Gratitude’). The emphasis is on how hard teachers work, how many hours they put in, and a question from a student at another school “But isn’t that your job Miss?” is described as ‘obnoxious’ (p.125).

I think it’s appropriate to make children aware they live in a country with a long democratic tradition, where primary and secondary education are free at the point of use, and that this isn’t the norm across the world. And it’s appropriate to hope they appreciate teachers’ commitment. But teachers volunteer for the job and they are paid. Students are unpaid conscripts who are required to be educated, not only for their own benefit, but also for the common good. Most students don’t have any option but to attend school, and their teachers are paid to provide them with a suitable education, so expecting students to express their gratitude formally seems a bit much.

Incidentally, I think it’s reasonable to expect students to say ‘please’ and ‘thank you’, because most cultures use such non-costly tokens to facilitate social interaction. But everyone knows ‘please’ and ‘thank you’ are tokens, and they’re easy to use even if you actually feel no obligation or gratitude whatsoever. If more costly tokens are expected (such as ‘appreciations’ or postcards), some students will be happy to oblige regardless of what they really feel, and students who don’t feel grateful, or struggle to express themselves, will feel under pressure to comply anyway. It reminds me of the little girl being interviewed about Sunday School who said she always answered questions with ‘Jesus’ or ‘God’ “because they like it when you say that”.

values, culture and knowledge
Michaela’s explanation of its values highlights a recurring feature of the self-styled ‘traditional’ teachers’ discourse. The teachers quite rightly emphasise the importance of knowledge in education. They draw attention to the difference between experts and novices, and point out that novices don’t usually have sufficient knowledge to mimic the behaviour of experts or ask the kinds of questions experts would ask, so ‘discovery’ is often an inefficient way of learning; direct instruction is usually more effective.

In the classroom students by definition are novices, and the teacher by definition is the (subject) expert. But many traditional teachers don’t apply the expert-novice distinction outside the classroom to areas where the teachers themselves are novices. So, cognitive science has been cited to justify particular pedagogical methods favoured by traditionalists, but the ‘cog sci’ is often based on snippets of information picked up second- or even third-hand from other teachers. The ‘cog sci’ has often been just plain wrong, because the teachers in question don’t have sufficient domain-specific knowledge.

So, despite Daniel Willingham carefully presenting “just about the simplest model of the mind possible” (Willingham 2009), his model has been wrongly interpreted as representing cognition as a whole. And teachers have been diligent in debunking some educational ‘myths’ (brain gym, discovery learning, learning styles) but have blithely replaced them with others such as;

-knowledge in long term memory is ‘secure’,
-knowledge in long term memory is always available and doesn’t take up any ‘space’ in working memory,
-all schemas are ‘chunked’ so a large schema forms only one item in working memory,
-all skills are domain-specific and can’t be transferred,
-children’s brains are the same as teachers’ brains.

Teachers with expertise in English literature seem especially prone to replacing the principles of cognitive science with principles from their own discipline. So much for skills being domain-specific.

It’s puzzling why the traditional teachers have consulted so few psychology teachers or cognitive scientists. My guess is that’s partly because experts are likely to say “it’s a bit more complicated than that”, and investigating the complications would involve the traditional teachers in more work (they see learning as ‘hard’). Another reason is they’d have to re-think their model of teaching and learning.

Cognitive science is a rather esoteric area, so teachers couldn’t be expected to know much about it (although there’s nothing stopping them getting an overview from an expert, or from an undergrad textbook – traditional teachers are keen on textbooks). But values and British culture aren’t especially esoteric, and are key features of public discourse, so you’d expect a school that’s published a book about them to be well-informed about their provenance. Instead, there are whole facets missing from their model.

I fail to see how Michaela can reconcile its claim that it wants students from deprived backgrounds to improve their life chances via education, with failing to question an inherently inequitable model of society, and ignoring the British history that’s resulted in that very deprivation.

references

Michaela Community School (2020). The Power of Culture, Katharine Birbalsingh (ed.). John Catt.

Willingham, DT (2009). Why Don’t Students Like School? Jossey-Bass.

Michaela: colonising the curriculum?

If all I’d known about the Michaela Community School was its day-to-day routine, I’d have raised little more than an eyebrow. That’s in part because day-to-day life at Michaela looks remarkably like day-to-day life at the grammar school I attended half a century ago. What prompted me to raise more than an eyebrow is the new book from the Michaela Community School, The Power of Culture.

As far as the day-to-day is concerned, the book is packed with positive, practical ideas. I noted particularly;
-creating liberating pathways for students
-taking a long term view of conduct
-catching the students being good
-not expecting them to ape experts
-presenting knowledge in context
-mini introductions to practical, useful non-academic knowledge
-the outside speaker programme
-whole-class marking
-no targets
-no performance related pay
-all school staff (including admin & cleaners) being involved.

On a day-to-day level, Michaela’s methods are obviously effective. Students learn, raise their expectations, improve their behaviour and get good exam results. It’s when it came to the school’s ethos (beliefs and values) that I felt the framework began to wobble.

The Michaela ethos might reflect the pre-existing beliefs of staff, but the school also appears to have resorted to a bit of post-hoc justification for its practices. Rather than practice emerging from a coherent, thought-through set of beliefs and values, I get the impression teachers have;
1. seen ineffective or counterproductive practices or values in other schools (students learn little, have low aspirations, and their behaviour is out of control),
2. reacted against those practices,
3. tried alternatives,
4. and only then identified beliefs and values that justify the alternatives.

The lack of coherence and thinking-through is important, because beliefs and values are taught explicitly at Michaela and can have a significant impact on students’ lives. In this post I focus on a key feature of the Michaela ethos highlighted in The Power of Culture – British history and culture.

British culture
Michaela has reacted strongly against calls to ‘decolonise’ the curriculum, as Katie Ashford explains in ‘Schools should teach Dead White Men’. Although her initial description of the aims of ‘decolonisation’ advocates is pretty accurate, I felt Katie goes on to caricature their position by citing extreme views. Some advocates of ‘decolonising’ might think ‘our society is entirely racist’ (p.59), be calling for the removal of dead white men from the curriculum (p.63), or want only black writers to be included (p.67), but most don’t. What they’re concerned about is the implicit assumptions underpinning the curriculum that can push our thinking in a particular direction without us being aware of it. They’re calling for a restructuring of the curriculum that views its content from an inclusive, egalitarian standpoint, rather than from the point of view of dominant cultures.

Michaela’s view, in contrast, is that each of their students is British, lives in England, and in order to participate fully in British/English life, needs to know about British/English history and culture, a point Michael Taylor expands on in ‘National Identity’.

What is Britishness?
Michael understands why schools celebrate cultural diversity. But he claims that is ‘often associated with the rejection of Britishness and in particular, Englishness’. Despite this, people ‘feel British and people feel English’ (p.74). For Michaela, a sense of British and English identity is engendered by the Union flag, the Queen’s birthday, St George’s Day, ‘important national songs’ (National Anthem, Jerusalem, I Vow to Thee my Country), Westminster Abbey, the Palace of Westminster, St Paul’s Cathedral and WW1 battlefields. I wouldn’t question the importance of students knowing about those symbols, but St George’s Day is the only one that pre-dates the colonial era – which lends weight to the decolonisers’ point.

Now, I feel as British and English as the next British/English person, but what makes me feel British/English are older, more egalitarian symbols; leaders being ‘first among equals’ (a principle espoused by, amongst others, Celts and Anglo-Saxons), observations such as “when Adam delved and Eve span, who was then the gentleman?” (John Ball, Peasants’ Revolt, 1381) and Civil War battlefields. For me, the symbols embraced by Michaela represent a social hierarchy that has a longstanding tendency to take away people’s stuff and give it to its posh mates, something that all Michaela students need to be aware of. They need to be aware of it because Michaela points its students in the direction of the upper echelons of that social hierarchy (Russell Group and Ivy League universities, civil service internships etc).

Clearly, questions need to be asked about why those from ethnic minorities and/or state schools are under-represented in high status professions. And students from ethnic minorities and/or state schools should indeed be supported to aim high academically. But questions also need to be asked about why certain professions have high status, and why other equally important ones don’t. As a community, we don’t need only high flyers. We need people who can do the nuts-and-bolts hands-on work that keeps the country going. Many of those jobs don’t have much social cachet, but are interesting, demanding, well-paid and essential. I’m not talking about menial work here; I’m asking why farming, engineering, manufacturing, retail management, local government or nursing, don’t have the same allure for Michaela as say, wealthy bankers (p.64) or the civil service (p.115).

Unity and diversity
Michaela, with some justification, wants to shift the focus from our differences to what we have in common, from the individual to the community. But in doing so it overlooks an important principle. One of the functions of a democracy is to safeguard the diversity of individuals; to protect our liberty to live as we think fit, free from arbitrary constraint (see previous post). Human diversity isn’t an optional extra; it’s vital for our standard of living and quality of life. Communities simply wouldn’t be able to adapt or develop if we were all the same.

And although people in Britain do have much in common, we are also inherently very diverse, a point that Michael glosses over. For example, he says “language, law and custom are all concrete realities that link people from Caithness to Cornwall” (p.79). But in Cornwall you might encounter a campaigner for Cornish independence whose child attends a Cornish-speaking nursery. In Caithness you’d be quite likely to bump into an ardent Scottish nationalist, speaking Gaelic, living under Scottish law, and practising customs unique to Scotland. There are historical reasons for that, which Michael as a history teacher must be aware of, but doesn’t mention. (His chapter on teaching history is well worth reading, incidentally).

One thing most cultures throughout human history have in common is that those with few resources have been exploited by those with more. And that doesn’t only entail some nations exploiting other nations; many have exploited others in their own community. It’s a feature shared by all cultures, and something they all end up trying to prevent. Getting students from ethnic minority and state school backgrounds into high status professions is one way to tackle inequality, but won’t effect much change if those same students are taught to revere symbols of the very system that has exploited them in the past – and is still exploiting them.

Michaela doesn’t seem to understand the problematic aspects of the political and social hierarchy. It’s as if the school has been so busy reacting against the prevailing focus in education on diversity, context and structural issues, it’s come up with an alternative model that ignores those factors completely.

Colonising the curriculum
There’s a good argument for students focusing on the history and literature of the country they live in, and as Katie points out there isn’t time to teach about all cultures in depth (p.70). But students don’t need to learn everything in depth. What they do need is an overview of world history and culture – from a world, rather than a British perspective.

But Michaela’s wider perspective isn’t a world one, it’s a Western/European one (pp. 53, 69, 71, 172). It’s as if agriculture, city states, administration, industry, trade, and arts and crafts didn’t exist prior to the ancient Greeks. I felt the Western/European perspective is epitomised in two sentences children are expected to learn. One is;

Shakespeare is widely recognised as the greatest writer of all time, and was a great dramatist. (p. 379)

Shakespeare is certainly considered by many to be the greatest writer of all time, but the word ‘recognised’ implies his status is a matter of fact, rather than a matter of opinion. Some ancient Greeks could be contenders for the title, especially if all their manuscripts were still in existence. And who knows what great dramatists preceded them?

The other sentence is the answer to the second of two questions:

What word means ‘the belief that there is one God’?
How were the Israelites different from the Canaanites? (p.197)

My childhood was steeped in Bible stories and my immediate answer to the second question was “the Canaanites lived in the land of Canaan; the Israelites invaded it”. But the answer students are expected to give is “the Israelites differ from the Canaanites because, whereas the Israelites were monotheistic, the Canaanites were polytheistic”. That’s certainly a difference, but it probably wouldn’t have been the one foremost in the minds of the Canaanites at the time – which again reinforces the decolonisers’ argument.

It’s possible Michaela staff are presenting students with a Western/European/British/English history and culture and Judeo-Christian beliefs from a critical perspective, but I didn’t spot any evidence of that. Instead, teachers appear to accept the current social hierarchy as a given – uncritically. And the criterion for ‘success’ (beyond academic achievement) is attaining high social status rather than leading a fulfilling and useful life. That’s ironic because the criterion for ‘success’ in the street culture familiar to many of Michaela’s students, is also high social status. I’m not convinced that the principles of loyalty to the nation and giving something back (p.78) will eradicate the inequities inherent in British culture.

Michaela culture – a Swiss cheese model?

The Michaela Community School was founded in 2014 by Katharine Birbalsingh (as Headteacher) and Suella Braverman (currently Attorney General). The school’s ‘no excuses’ approach to education generated much controversy, but their first GCSE results outperformed the national average and their Progress 8 score ranked them fifth nationally.

In 2016 the school published The Battle Hymn of the Tiger Teachers, a summary of the Michaela ethos, with contributions from its staff. I found it perturbing and blogged about it here. But those were early days. The school recently published Michaela: The Power of Culture, which I hoped would offer more insights into its success. I got as far as Jonathan Porter (deputy head) explaining the rationale for the school’s culture, in ‘Michaela – A School of Freedom’. I’ve had to take a break. Here’s why…

Liberty
Jonathan opens by claiming that we have a ‘romantic instinct’ that yearns for “emancipation rather than prescription”, for “a loosening rather than a tightening of the fence” (p.39). He says the romantic instinct has its origins, not in “ancient theory – which understood true freedom to mean virtuous self-government”, but in John Locke’s 17th century proposition that human beings in their natural state are ‘ungoverned and unconstrained’ (p.40). Jean-Jacques Rousseau largely concurred with Locke, and according to Jonathan, Rousseau’s views on education set out in Emile, or On Education (1762) have had a profound and detrimental influence on education in Britain.

Isaiah Berlin revisited Locke’s ideas in the 1950s. Berlin posited two types of liberty: negative liberty, which seeks to minimise the obstacles to people doing what they want to do; and positive liberty, the freedom to self-determine, which might require some input from the state. Berlin was wary of positive liberty due to the potential for state control. But Jonathan agrees with Charles Taylor that “…we cannot erase the view of positive freedom entirely, not least because our ability to exercise any freedom we might have hinges on certain ends” (p.45).

Michaela adopts a ‘no excuses’ principle for behaviour management and Jonathan sees this as grounded in the ‘ancient theory’ of virtuous self-government. His reasoning appears to be that children often make poor choices about how to use their liberty (he goes into detail about the temptations of social media), and that the ‘ancient theory’ had stood the test of time until Locke came along. Many of Jonathan’s claims stand up to scrutiny – but some don’t. Also, he tells only half the story – and the other half is important.

Virtuous self-government
As I understand it, the ‘ancient theory’ of virtuous self-government recognised that people (individually and collectively) were generally unhappy about external control, hence the ‘self-government’ bit. But self-government alone didn’t guarantee true liberty – that was possible only for those not enslaved to their passions, a thread running through the liberty discourse. That meant virtue was essential for individuals and communities to enjoy true freedom.

Something Jonathan overlooks is that many (at least from Judea to Greece) who subscribed to the ‘ancient theory’ also believed that human beings had fallen from a prior state of grace. The human task was to remedy that fall via sacrifice, rituals, good works etc. Deities and their earthly representatives (prophets, priests, kings et al.) were usually involved. Promoting the idea that human moral status is inherently flawed put the deities’ earthly representatives in positions of considerable power. But power structures don’t feature in Jonathan’s analysis.

Locke and Rousseau
Locke (and Rousseau) challenged the idea that we’re fundamentally sinful by nature and have to spend our lives making up for it. Instead, they proposed that whatever our moral status, we’re entitled to live our lives as we think fit, not as prescribed by social or religious institutions. Of course if we’re interacting with other people, our right to exercise our natural liberty is likely to conflict with someone else’s right to do the same, so we need some form of government to adjudicate, and some rules we all agree to comply with, to ensure a peaceful co-existence. This is the basis of Locke’s take on social contract theory, to which Rousseau also subscribed. Jonathan refers to social contract theory (p.40) but goes on, I felt, to caricature Locke’s liberty as Milton’s ‘licence’. Milton was right that for some “licence they mean when they cry liberty”, but that wasn’t what Locke and Rousseau meant. What they objected to wasn’t constraint per se, but arbitrary constraint – another point Jonathan refers to (p.40) but then bypasses.

Both Locke and Rousseau had direct experience of the doctrine of original sin being used to justify arbitrary constraint. The English civil war had begun shortly before Locke’s tenth birthday and his father served in the Parliamentary army. John was a bright lad and would have been well aware of what his father was fighting for. Rousseau had grown up in Calvinist Geneva but spent most of his adult life in Catholic France, so had seen the doctrine of original sin from two very different theological perspectives. Locke’s and Rousseau’s ideas about liberty were responses to major issues of their day, and were popular because the ancient theory of virtuous self-government, and more importantly its implementation, were quite evidently no longer fit for purpose.

Virtue and power
Virtuous self-government is an appealing idea, but even by the 5th century BCE it had become clear it was feasible only in relatively small, completely independent communities. By then, the population of Athens had grown too large for direct participation in decision-making. Thucydides recounts discussions about whether decisions should be made by only a proportion of the population, or by representatives. He also recounts the disagreements over who was ‘virtuous’.

By the 17th century CE, virtuous self-government had been found by many to be a necessary but insufficient foundation for society. You don’t need to believe in a deity to believe in virtue, but if virtuous self-government is the model a society has adopted, somebody ends up deciding what’s virtuous and what’s not. And that somebody is usually whoever has social or political power. After all, ‘virtue’ has been used to justify despotism, genocide, murder, torture and slavery – none of which feels particularly virtuous if you’re on the receiving end. The early Athenians argued that nature itself showed the strong should rule the weak, but unsurprisingly many of the tribes they tried to rule objected, on the grounds that they too wanted to govern themselves.

Of course by definition children don’t have sufficient experience or knowledge to make fully informed life choices. Locke considered the mind a tabula rasa; for him, it was important to ensure children’s early experiences were positive. Rousseau, in contrast, had been a student in the school of hard knocks and felt it was important for children to find out about reality for themselves. I think Michaela is right that children need guidance and support from adults, to be taught effective life strategies, and to learn self-control in order to best exercise their liberty. But Jonathan doesn’t ask who decides what’s virtuous, or what the ends of education are – key issues for Locke and Rousseau.

Arbitrary constraints
Jonathan mentions arbitrary constraints, but sees them as political constraints (p.46) rather than social ones. There’s an example in his discussion of character (p.49). He says; “If pupils at Michaela are just one minute late to school, they will receive a 30-minute detention at the end of the day. We do make exceptions, although these really are exceptions. Most days a handful of detentions will be given to pupils who slept through their alarms, didn’t pack their bags the night before, or left home late but didn’t run to catch the bus… Although we are forgiving, a future employer may not be”.

I understand why pupils should be expected to arrive at school on time – it’s inconvenient for everybody if they don’t. But one minute late? And although the school might make allowances for exceptional circumstances, it isn’t forgiving – pupils are punished for transgressions.

The justification for the no excuses approach to tardiness is that a future employer might expect down-to-the-minute punctuality. It’s true that some industries (e.g. transport, manufacturing) do operate at that level of punctuality – but in those industries lateness has direct, real-life, non-arbitrary consequences. It’s also true that many employers require employees to clock in and clock out, but they usually use flexitime, so arriving a minute later simply means leaving a minute later to compensate. And many employers, particularly in the type of employment Michaela encourages its students to aspire to, don’t monitor minutes or even hours, as long as the work gets done. So what is the ‘one minute late’ rule really about? There’s a fine line between discipline and control. It was a line Locke and Rousseau were aware of, but it’s not clear where Michaela’s line is.

It looks to me as if Michaela has chosen a ‘no excuses’ approach to school culture because it has certain administrative advantages, then justified that choice by appealing to authorities that support their position, such as the virtuous self-government model, Aristotle, Graeco-Roman tradition, 1000 years of history, and Edmund Burke (p.46ff). Rather than use theory from opposing authorities (e.g. Locke, Rousseau, Berlin) to test the school’s model for possible flaws, it caricatures opposing theories as responsible for licence, undermining the British education system, and allowing children unrestricted access to social media.

Virtuous self-government is an appealing idea, but it survived for 1000 years of history largely because it was shored up by religious and secular power hierarchies, with those at the top deciding what was virtuous and how far self-government extended – as Michaela is doing. But Michaela’s students will take their place in an adult world that relies on people negotiating outcomes; at the state level, in the workplace and between individuals. Will a ‘no excuses’ culture prepare them effectively for that?

Virtuous self-government and a ‘no excuses’ culture work for some people and some institutions, but the ancient Athenians, the contemporaries of Locke, Rousseau and Berlin, and state education systems from Prussia to the UK have all found that they don’t work for everybody – which is largely why those systems changed. Virtuous self-government and a ‘no excuses’ culture have face-value appeal, but as systems of governance they’re as full of holes as a Swiss cheese.

apprentice without a sorcerer

Cummings’ essay Some Thoughts on Education and Political Priorities highlights his admiration for experts, notably scientists, but this doesn’t prevent him making several classic novice errors. These errors, not surprisingly, lead Cummings to some conclusions contradicted by evidence he hasn’t considered. I’ve focused on four of them.

oversimplifying systems

Cummings knows that systems operate differently at different levels, and although all systems, as part of the physical world, involve maths and physics, you can’t reduce all systems to maths and physics (p.18). But his preoccupation with maths and physics, and his lack of attention to the higher levels of systems, suggest he can’t resist doing just that. In his essay maths is mentioned 473 times (almost 2 mentions per page) and physics 179 times. Science gets 507 references and quantum 238. In contrast, the arts get 8 mentions and humanities 16. Ironically, given his emphasis on complex systems, Cummings seems determined to view complex knowledge domains like education, politics, the humanities and arts only through the lenses of maths, physics and linear scales.
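For anyone who wants to reproduce that kind of tally, here’s a minimal sketch (my own, not the method used to generate the figures above); the filename and the tokenisation are assumptions, and exact counts will vary with how the essay is converted to plain text.

```python
# Rough term-frequency tally for a plain-text copy of the essay.
# "some_thoughts_on_education.txt" is a hypothetical filename.
import re
from collections import Counter

with open("some_thoughts_on_education.txt") as f:
    words = re.findall(r"[a-z]+", f.read().lower())

counts = Counter(words)
for term in ["maths", "physics", "science", "quantum", "arts", "humanities"]:
    print(term, counts[term])
```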

Cummings’ first degree is in history, but he knows a lot of scientific facts. How deep his understanding goes is another matter. He opens the section on a scientific approach to teaching practice with the famous ‘Cargo Cult’ speech in which Richard Feynman accused educational and psychological studies of mimicking the surface features of science but not applying the deep structure of the scientific method (p.70). Cummings’ criticism is well-founded; evidence has always influenced educational practice in the UK, but the level of rigour involved has varied considerably. Ironically, Cummings’ appeal to scientific evidence then itself sets off down the cargo-cult route.

misunderstanding key concepts: chunking vs schemata

Cummings claims “experts do better because they ‘chunk’ together lots of individual things in higher level concepts – networks of abstractions – which have a lot of compressed information and allow them to make sense of new information (experts can also use their networks to piece together things they have forgotten)” (p.71).

‘Chunking’ occurs when several distinct items of information are perceived and processed as one item. The research (e.g. Miller, 1956; de Groot, 1965; Anderson, 1996) shows it happens automatically after groups of low-level (simple) items with strongly similar features have been encountered very frequently, e.g. Morse code, words, faces, chess positions. I’ve not seen any research that shows the same phenomenon happening with information that’s associated but complex and dissimilar. And Cummings doesn’t cite any.
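As a minimal illustration of what chunking does to working-memory load (my own sketch with a made-up digit string, not an example from the research cited): the information is identical, but the number of items that have to be held at once is very different.

```python
# The same digit string held as eleven separate items (no chunks available)
# or as three familiar groups (chunks already stored in long-term memory).
digits = list("01632960485")                  # 11 items to hold individually
chunks = ["01632", "960", "485"]              # 3 items once the groups are familiar

print(len(digits), "items without chunking")  # 11
print(len(chunks), "items with chunking")     # 3
```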

Information that’s complex and dissimilar but frequently encountered together (e.g. Periodic Table, biological taxonomy, battle of Hastings) forms strong associations cognitively that are configured into a schema. What Cummings describes isn’t chunking; it’s the formation of a high level schema. Chunks are schemata, but not all schemata are chunks.

Cummings is right that experts abstract information to form high level schemata, but the information isn’t compressed as he claims. The abstractions are key features of aspects of the schema, e.g. key features of transition metals, birds or invasions. I can just about hold all the key features of birds in my working memory at once, but not at the same time as exceptions (e.g. ostrich, penguin) or features of different bird species. The prototypical features make it easier to retrieve associated information, but it isn’t retrieved all at once. If I think about the key features of birds, many facts about birds and their features spring to mind, but they do so sequentially, not at the same time. The limitations of working memory still apply.
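To make the contrast concrete, here’s a toy sketch (my own, with hypothetical content) of a ‘birds’ schema: the prototypical features cue associated facts and exceptions, but they’re visited one at a time rather than arriving in working memory as a single chunk.

```python
# A toy 'birds' schema: prototypical features, each with any exceptions
# stored against it. Retrieval below is sequential, one feature at a time.
bird_schema = {
    "has feathers": [],
    "lays eggs": [],
    "can fly": ["ostriches can't", "penguins can't"],
    "has a beak": [],
}

for feature, exceptions in bird_schema.items():
    print(feature, "- exceptions:", ", ".join(exceptions) or "none")
```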

The distinction between chunking and schema formation is important because schemata play a big part in expertise, e.g. Schank & Abelson (1977) and Rumelhart (1980). Despite their importance, Cummings refers to schemata only once, when he’s describing how his essay is structured (p.7). The omission is a significant one, with implications for Cummings’ model of how experts structure their knowledge.

experts vs novices

Experts in a particular field derive their expertise from a body of knowledge that’s been found to be valid and reliable. They construct that knowledge into schemata, or mental models. New knowledge can then be incorporated into the schemata, which might then need to be configured differently. Sometimes experts disagree strongly, not about the content of their schemata, but about how the content is configured.

The ensuing debates can go on for decades. A classic example is the debate between those who think correlations between intelligence test scores indicate that intelligence is a ‘something’ that ‘really exists’, and those who think the assumption that there’s a ‘something’ called intelligence shapes the choice of items in intelligence tests, so correlations should come as no surprise (see previous post). Another long-standing debate involves those who think universal patterns in the structure of language mean that language is hard-wired in the brain, versus others who think the patterns emerge from the way networks of neurons compute information.

Acquiring key information about an unfamiliar knowledge domain takes time and effort, and Cummings has obviously put in the hours. What’s more challenging is finding out how domain experts configure their knowledge – experts often take their schemata for granted and don’t make them explicit. Sometimes you need to ask directly (or be told) why knowledge is organized in a certain way, and if there are any crucial differences of opinion in the field.

Cummings doesn’t seem to have asked how experts structure their knowledge. Instead, he appears to have squeezed knowledge new to him (e.g. chunking) into his own pre-existing schema without checking whether his schema is right or wrong. Or, he’s adopted the first schema he’s agreed with (e.g. genes and IQ). He admits to basing his genes/IQ model largely on Robert Plomin’s Behavioural Genetics and talks by Stephen Hsu. He dismisses the controversies and takes Plomin and Hsu’s models for granted.

evaluating evidence

There are references to the scientific method in Cummings’ essay but they’re about data analysis, not the scientific method as such. A crucial step in the scientific method is evaluating evidence – analysing data for sure, but also testing hypotheses by weighing up the evidence for and against. This process isn’t about ‘balance’ – it’s about finding flaws in methods and reasoning in order to avoid confirmation bias.

But Cummings repeatedly accepts evidence in support of one thing or against another, without questioning it. I’d suggest he can’t question much of it because he doesn’t know enough about the field. Some that caught my eye are:

  • Assuming hunter-gatherers’ knowledge is “based on superstition (almost total ignorance of complex systems)” (p.1). Anthropology, which might claim otherwise, is, like other social sciences, summarily dismissed by Cummings.
  • Unsubstantiated claims such as “Aeronautics was confined to qualitative stories (like Icarus) until the 1880s when people started making careful observations and experiments about the principles of flight” (p.21). Da Vinci, Bacon, the Montgolfiers, Cayley? No mention.
  • Attributing European economic development between the 14th and 19th centuries to ‘markets and science’ and omitting the role of the Reformation, French Revolution, or Enclosure Acts (p.108).
  • Uncritical acceptance of Smith’s and Hayek’s speculative claims about the benefits of markets (p.106).
  • Overlooking systems constraints on growth – in corn yields, computing power etc. (pp.46, 231-2). No mention of the ubiquitous sigmoid curve.
  • Overlooking the Club of Rome’s Limits to Growth when discussing shortage and innovation (p.112).
  • Emphasising the importance of complex systems with no mention of systems theory as such (e.g. Bertalanffy’s general systems theory).
  • Ignoring important debates about construct validity e.g. intelligence and personality (p.49).

not just wrong

People are often wrong about things and usually a few minor errors don’t matter. In Cummings’ case they matter a great deal, partly because he’s so influential, but also because even tiny errors can have huge consequences. I chose the example of chunking because Cummings’ interpretation of it has been disproportionately influential in recent English education policy.

Daisy Christodoulou in Seven Myths about Education (2014) takes the assumption about chunking a step further. She’s right that chunking low-level associations such as times tables allows us to ‘cheat’ the limitations of working memory, but wrong to assume (like Cummings) high-level schemata do the same. And flat-out wrong to claim “we can summon up the information from long-term memory to working memory without imposing a cognitive load.” (Christodoulou p.19, my emphasis). Her own example (23,322 x 42) contradicts her claim.
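To see why her own example undermines the claim: even with the relevant number facts securely in long-term memory, a solver still has to generate intermediate results and hold them while working, e.g.

23,322 × 42 = (23,322 × 40) + (23,322 × 2) = 932,880 + 46,644 = 979,524

Each partial product has to sit in working memory until it’s combined with the next – which is a cognitive load, however well-learned the times tables involved.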

Christodoulou’s claim is based on Kirschner, Sweller & Clark’s 2006 paper ‘Why minimal guidance during instruction does not work’. The authors say; “The limitations of working memory only apply to new, yet to be learned information that has not been stored in long-term memory. New information such as new combinations of numbers or letters can only be stored for brief periods with severe limitations on the amount of such information that can be dealt with. In contrast, when dealing with previously learned information stored in long-term memory, these limitations disappear.” (Kirschner et al p.77).  The only evidence they cite is a 1995 review paper proposing an additional cognitive mechanism “long-term working memory”.

I have yet to read a proponent of Kirschner, Sweller & Clark’s model discuss the well-known limitations of long-term memory, summarised here. Greg Ashman, for example, following on from a useful summary of schemata, says;

“One way of thinking about the role of long-term memory in solving problems or dealing with new information is that entire schema can be brought readily into working memory and manipulated as a single element alongside any new elements that we need to process. The normal limits imposed on working memory fall away almost entirely when dealing with schemas retrieved from long-term memory – a key idea of cognitive load theory. This illustrates both the power of having robust schemas in long-term memory and the effortlessness of deploying them; an effortlessness that fools so many of us into neglecting the critical role long-term memory plays in learning”.

Many with expertise as varied as English, history, physics or politics have enthusiastically embraced findings from cognitive science that could improve the effectiveness of teaching. Or more accurately, they’ve embraced Kirschner, Sweller and Clark’s model of memory and learning. Some of the ‘cog sci’ enthusiasts have gone further. They’ve taken a handful of facts out of context, squeezed them into their own pre-existing schemata, and drawn conclusions that are at odds with the research. They’ve also assumed that if an expert in ‘cog sci’ makes a plausible claim it must be true, but haven’t evaluated the evidence cited by the expert – because they don’t have the relevant expertise; cognitive science is a knowledge domain unfamiliar to them.

Nevertheless, objections to the Kirschner, Sweller and Clark model are often dismissed as originating either in ideology or ignorance. Ironic, as despite emphasising the importance of knowledge, evidence and expertise, many of the proponents of ‘cog sci’ are patently novices selecting evidence to support a model that doesn’t stand up to scrutiny. Murray Gell-Mann is right that we need people who can take a crude look at the whole of knowledge (p.5), but the crude look should be one informed by a good grasp of the domains in question.

In 1797, Goethe published a poem entitled Der Zauberlehrling (Sorcerer’s Apprentice). It was a popular work, and became even more popular in 1940 when animated as part of Disney’s Fantasia, with Mickey Mouse playing the part of the apprentice who started something he couldn’t stop. The moral of the story is that a little knowledge can be a dangerous thing. Cummings has been portrayed as a brilliant eccentric and/or an evil genius. I think he’s an apprentice without a sorcerer.

references

Anderson, J (1996). ACT: A simple theory of complex cognition, American Psychologist, 51, 355-365.

Christodoulou, D (2014).  Seven Myths about Education.  Routledge.

de Groot, A D (1965).  Thought and Choice in Chess.  Mouton.

Kirschner, PA, Sweller, J & Clark, RE (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41, 75-86.

Miller, G (1956). The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, Psychological Review, 63, 81-97.

Rumelhart, DE (1980). Schemata: the building blocks of cognition. In R.J. Spiro et al. (eds) Theoretical Issues in Reading Comprehension.  Lawrence Erlbaum: Hillsdale, NJ.

Schank, RC & Abelson, RP (1977). Scripts, Plans, Goals and Understanding: an Inquiry into Human Knowledge Structures.  Lawrence Erlbaum: Hillsdale, NJ.


not all in the genes

Dominic Cummings’ 2013 essay Some Thoughts on Education and Political Priorities reveals his keen interest in the implications of intelligence research for education. His Endnote “Intelligence, IQ, genetics, and extreme abilities” (p.194) runs to 17 pages.

General Intelligence

If I’ve understood Cummings’ model of intelligence correctly, it goes like this: General Intelligence (‘g’) is a trait that’s largely genetically determined and can be measured as IQ. If we could identify the genes involved, we could spot those with high cognitive ability who are needed to find the solutions to the complex problems facing us.

There’s certainly robust evidence that cognitive ability is largely genetically determined (by multiple genes), remains stable, and is a good predictor of lifetime achievement (p.197). We do need people with high IQs to work on solutions to world problems. And children with high IQs need an appropriate education. I share Cummings’ frustration that DfE officials prioritised their notion of equality over the need to develop talent (p.64). But his model is also flawed at several levels. It includes three key components that are worth examining in more detail;

  • A hypothetical human trait – general intelligence
  • The correlation between factors within intelligence tests
  • IQ

Intelligence

Towards the end of the 19th century, researchers got very interested in measuring human characteristics. Some, such as height and weight, were easy to measure, but others – like ‘physiognomy’ or ‘eventuality’ – were trickier because it wasn’t obvious what the features of ‘physiognomy’ or ‘eventuality’ were.

[Image: phrenology chart from the People’s Cyclopedia of Universal Knowledge, 1883]

You can of course measure any human characteristic you fancy. You decide what the features of ‘adhesiveness’ or ‘ideality’ are and how to measure them, and hey presto! you’ve measured ‘adhesiveness’ or ‘ideality’. There might of course be some disagreement about the features of ‘adhesiveness’ or ‘ideality’ – or even about their very existence.

Also in the late 19th century, industrialised economies were desperate for a literate, numerate, ‘intelligent’ workforce. That requirement was one of the drivers for mass education.

In his 1904 review of measures of intellectual ability, the psychologist and statistician Charles Spearman decided intellectual ability could be measured using performance in: Classics, Common Sense, Pitch Discrimination, French, Cleverness, English, Mathematics, Pitch Discrimination among the uncultured, Music, Light Discrimination and Weight Discrimination (Spearman p.276). Essentially, he defined intelligence in terms of intellectual abilities. More recent measures such as Verbal Comprehension, Visual Spatial, Fluid Reasoning, Working Memory, and Processing Speed (Wechsler Intelligence Scale for Children – V) define intelligence in terms of cognitive abilities.

‘g’

Spearman went a step further. The positive correlations between the factors in his test convinced him “that there really exists a something that we may provisionally term ‘General Sensory Discrimination’ and similarly a ‘General Intelligence’” (Spearman p.272). And the correlations between scores in cognitive ability tests have convinced others of the existence of a ‘something’ we may provisionally term ‘general intelligence’.

I haven’t been able to find out if Spearman used ‘g’ to refer to the correlation between factors, or the hypothesized ‘something’, or both. Whichever it was, critics were quick to point out that correlation doesn’t indicate causality. A positive correlation between Spearman’s factors exists, certainly. Whether ‘general intelligence’ exists other than as a folk concept is another matter.
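As a minimal sketch of why the correlations by themselves settle nothing (my own simulation, not Spearman’s data): if you build a single shared influence into simulated test scores, a dominant first factor duly falls out of the correlation matrix. The statistics look like ‘g’ here because the common factor was put in by construction; real test scores could be correlated for other reasons.

```python
# Simulate five 'test' scores sharing a common influence, then show that the
# correlation matrix has a large first factor -- the pattern labelled 'g'.
import numpy as np

rng = np.random.default_rng(0)
shared = rng.normal(size=1000)                                   # built-in common influence
tests = np.column_stack(
    [0.7 * shared + 0.7 * rng.normal(size=1000) for _ in range(5)]
)

corr = np.corrcoef(tests, rowvar=False)                          # 5 x 5 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print("share of variance in first factor:", round(eigvals[0] / eigvals.sum(), 2))
```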

Critics also pointed out the circularity in Spearman’s argument. Intelligence tests were assumed to measure intelligence, but because no one knew what intelligence actually was, the tests also defined intelligence – even if they varied considerably. Spearman’s measures were very different to Binet & Simon’s, and neither bears much resemblance to the WISC, or to Raven’s Progressive Matrices. As Edwin Boring put it in 1923, “intelligence is what the tests test”.

IQ

In 1912, the German psychologist William Stern developed the concept of IQ – Intelligenzquotient. IQ (initially mental age divided by chronological age, expressed as a percentage) tells you how an individual’s test score compares to the average for the population. But the criticisms of ‘intelligence’ also apply to IQ. IQ tests undoubtedly measure aspects of cognitive ability, but we don’t know whether or not they measure a genetically determined trait we may call ‘intelligence’. Or even if such a trait exists.
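For concreteness, the original quotient worked like this: IQ = (mental age ÷ chronological age) × 100, so a ten-year-old performing at the level of a typical twelve-year-old scored (12 ÷ 10) × 100 = 120. Modern tests have replaced the quotient with scores scaled against a population average of 100, but the comparison-to-the-average logic – and the criticisms – apply either way.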

Advocates for general intelligence haven’t taken the criticisms lying down. Cummings quotes Robert Plomin’s dismissal of the circularity criticism: “…laypeople often read in the popular press that the assessment of intelligence is circular – intelligence is what intelligence tests assess. On the contrary, g is one of the most reliable and valid measures in the behavioral domain” (p.195).

It’s worth noting that Plomin uses g and intelligence interchangeably, even though intelligence is a hypothesized trait and he refers to g as a measure. There’s no doubt that g is reliable and valid when measuring some cognitive abilities. Whether those abilities represent a genetically determined trait we may term ‘intelligence’ is another matter – which Plomin goes on to admit: “It is less clear what g is and whether g is due to a single general process, such as executive function or speed of information processing, or whether it represents a concatenation of more specific cognitive processes…” It’s also worth noting that Plomin attributes the circularity argument to laypeople and the popular press, rather than to generations of doubting academic critics.

The implicit assumptions made by those emphasizing the importance of g and IQ are important because they can have unwanted and unintended outcomes. One is that correlations between factors might hold true at population level, but not always at the individual level. Deirdre Lovecky, who runs a resource centre in Providence, Rhode Island for gifted children with learning difficulties, reports in her book Different Minds having to pick ‘n’ mix sub-tests from different assessment instruments because individual children were scoring at ceiling on some sub-tests and at floor on others. How intelligent are those children? Their IQ scores wouldn’t tell us.
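A minimal sketch of the problem (hypothetical scores, not Lovecky’s data): two very different sub-test profiles can produce exactly the same composite score.

```python
# Hypothetical sub-test profiles: one even, one at ceiling on some sub-tests
# and at floor on others. The composite (the mean) is identical for both.
even_profile = [100, 100, 100, 100, 100]
spiky_profile = [145, 145, 100, 55, 55]

mean = lambda scores: sum(scores) / len(scores)
print(mean(even_profile), mean(spiky_profile))   # 100.0 100.0
```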

Also, hunting for hypothetical snarks can waste a huge amount of time and resource. It’s taken over a century for us not to be able to find out what ‘g’ is. Given the number of genes involved, you’d think by now people would have abandoned the search for a single causal factor. It’s a similar story for chronic fatigue syndrome (‘neurasthenia’ – 1869) and autism (‘autistic disturbances of affective contact’ – 1943); both perfectly respectable descriptive labels, but costly red herrings for researchers looking for a single cause.

Characteristics, traits, states, and behaviours

What convinces Cummings that intelligence, g and IQ are ‘somethings’ that really exist is evidence from behavioural genetics. Scientists working in this field have established beyond reasonable doubt that most of the variance in human intelligence, however you measure it, is accounted for by genetic factors. That shouldn’t be surprising. Intelligence is almost invariably defined in terms of cognitive ability, and cognitive ability emerges from characteristics such as visual and auditory discrimination, reaction time, and working memory capacity, all biological mechanisms largely determined by genes.
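For readers unfamiliar with the behavioural-genetics terminology: ‘variance accounted for’ is usually reported as a heritability estimate, roughly h² = variance associated with genetic differences ÷ total variance in the measured trait. So a heritability of 0.6 means that, in that population, about 60% of the differences between people on the measure are associated with genetic differences – a population-level statistic, not a statement about how fixed the trait is in any individual.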

But not all human characteristics are the same kind of thing. Some characteristics such as height and weight are clearly physical and are easily measured. For obvious reasons genes account for most of the variance in physical characteristics.

The term trait applies to physical characteristics but also to stable dispositional characteristics. Disposition refers to people’s behavioural tendencies – how introvert or extravert they are, what they like and dislike, do and don’t do etc. The evidence from behavioural genetics suggests that genes also account for most of the variance in stable traits.

States are also dispositional characteristics, but they’re temporary and usually emerge in response to environmental factors. So Joan might be extravert and prone to angry outbursts, and Felicity might be introverted and timid, but both of them are likely to become anxious if fire breaks out in the office they share. Their reactions to the fire are largely genetically determined, but are triggered by an environmental event.

Behaviours are things people do. They are undoubtedly influenced by genetic makeup, but occur primarily in response to environmental factors, because that’s the main function of behaviour. Joan might try to extinguish the fire and Felicity might take the nearest exit, but both behaviours would be in response to specific circumstances. If we were pre-programmed automatons, the human race wouldn’t have lasted very long.

In support of his genes-determine-intelligence argument Cummings cites Stephen Hsu, a physicist turned behavioural geneticist, who claims that much of the nature/nurture debate has been settled. Hsu’s right in respect of the genetic influence on traits. But that still leaves plenty of room for the environmental influence on states and behaviours. That has significant implications for Cummings’ model of education.

Genes, intelligence and education

The principal components of Cummings’ model of education are genes, intellectual ability, effective teaching, and exam results. But in real life many other factors impact on educational outcomes. Take Ryan, Joan’s nephew, for example.

Ryan lives with his mum, a single parent. She cares for her father, disabled following a work accident, and her mother who has complex health problems. They live in a former industrial town, currently in economic decline. Ryan’s parents’ relationship broke down due to the financial and time pressures on the family.

Ryan has average intellectual ability, but episodes of glue ear when he was younger left him with a slight speech and language delay. He struggled with maths and reading and was often reprimanded for not following instructions. He loved physical activities, but the regulatory education framework required Ryan, as a child who was ‘falling behind’, to do less practical activity and more arithmetic and phonics.

Ryan soon began to disengage from school. He was referred for speech and language therapy and to the educational psychologist, but both had lengthy waiting lists. By his teens, Ryan had a low reading age, was making slow progress academically, and skipped school whenever he could. His mum couldn’t find paid work to fit around caring for her parents, and was on medication for anxiety and depression.

Genes undoubtedly account for some challenges faced by Ryan and his family; his family’s health, his intellectual ability, and quite likely his glue ear. But environment plays a significant role in the shape of income, diet, viral infections, and national economic, social, and education policy. So do life events (so commonplace their importance is often overlooked); where the family happens to live, grandfather’s accident, parents’ break-up, which school is closest to home.

Then there are specific behaviours on the part of Ryan, his parents, grandparents, teachers – and government ministers. Specific behaviours are often framed as a ‘choice’, but that choice is often highly constrained by circumstances.

Choose your metrics

Cummings measures the effectiveness of the education system by exam results (although he questions the quality of the exams). Exam results are positively correlated with IQ, and IQ is largely genetically determined. So his choice of metric means Cummings places a disproportionate emphasis on the influence of genes on educational outcomes.

Of course there’s nothing wrong with IQ or exam results as metrics. If you want to find someone with good cognitive abilities, a modern intelligence test can identify them. If you want candidates with a mathematical ability of at least GCSE level, check out GCSE maths results.

But the choice of a single metric for something as complex as an education system shows an inadequate understanding of complex systems. And it raises the question of what education is about. If quality of life in local communities were the key metric, the education system would look very different. By bizarre coincidence, the gene pool of large populations produces people with a wide range of abilities and aptitudes, just what those populations need in order to thrive. That wide range of abilities and aptitudes should be cultivated. Cummings’ choice of metric means the exam-results tail wags the quality-of-life dog.

Accommodating a wide range of abilities and aptitudes doesn’t equate to having ‘low expectations’ for those with less than stellar exam results. There’s no virtue in people doing jobs they don’t enjoy and aren’t good at, and careers aren’t set in stone. An academic high flyer might become a superb potter, and a former train driver might get a PhD. If the education system doesn’t offer such opportunities, it’s to the detriment of us all.

Cummings would no doubt argue that his claims about education are evidence-based; he cites evidence for pedagogical approaches that improve exam results. But his starting point is an assumption that what the world needs is academic high flyers with high IQs and ‘extreme abilities’. He looks right past those with other abilities and aptitudes essential for communities to keep functioning. And past those who, through no fault of their own, can make only a very limited contribution to their communities but, like all of us, have a right to a decent quality of life.

Cummings first chooses his metric and then chooses his evidence – but only the evidence that supports it. Ironically, history is littered with examples of academic high flyers with high IQs and ‘extreme abilities’ causing chaos for the rest of us. Cummings’ use of evidence is the subject of the next post.

reference

Spearman, C.  (1904).  ‘General Intelligence’ objectively determined and measured.  The American Journal of Psychology, 15, 201-292.

acknowledgements

Image from People’s Cyclopedia of Universal Knowledge (1883) via Wikipedia https://en.wikipedia.org/wiki/Phrenology