In her book Seven Myths about Education, Daisy Christodoulou claims that a certain set of ideas dominant in English education is misguided and presents evidence to support her claim. She says “Essentially, the evidence here is fairly straightforward and derives mostly from cognitive psychology”.
Whilst reading Daisy’s book, I found it difficult at several points to follow her argument, despite the clarity of her writing style and the validity of the findings from cognitive psychology to which she appeals. It then occurred to me that Daisy and some of the writers she cites were using the same terminology to refer to different things, and different terminology to refer to the same thing. This is almost inevitable if you are drawing together ideas from different knowledge domains, but obviously definitions need to be clarified or you end up with people misunderstanding each other.
In the next few posts, I want to compare the model of cognition that Daisy outlines with a framework for analysing knowledge that’s been proposed by researchers in several different fields. I’ve gone into some detail because of the need to clarify terms.
why cognitive psychology?
Cognitive psychology addresses the way people think, so it has obvious implications for education. In Daisy’s view its findings challenge the assumptions implicit in her seven myths. In the final section of her chapter on myth 1, having recapped what Rousseau, Dewey and Freire have to say, Daisy provides a brief introduction to cognitive psychology. Or at least to the interface between information theory and cognitive psychology in the 1960s and 70s that produced some important theoretical models of human cognition. Typically, researchers would look at how people perceived or remembered things or solved problems, infer a model that explained how the brain must have processed the information involved, and then test it by running computer simulations. Not only did this approach give some insights into how the brain worked, it also meant that software might be developed that could do some of the perceiving, remembering or problem-solving for us. At the time, there was a good deal of interest in expert systems – software that could mimic the way experts thought.
Much of the earlier work in cognitive psychology had involved the biology of the brain. Researchers knew that different parts of the brain specialised in processing different types of information, that the parts were connected by nerve fibres (neurons) activated by tiny electrical impulses. A major breakthrough came when they realised the brain wasn’t constructed like a railway network, with the nerve fibres connecting parts of the brain as a track connects stations, but in complex networks that were more like the veins in a leaf. Another breakthrough came when they realised information isn’t stored and retrieved in the form of millions of separate representations, like books in a vast library, but in the patterns of connections between the neurons. It’s like the way the same pixels on a computer monitor can display an infinite number of images, depending on which pixels are activated. A third breakthrough occurred when it was found that the brain doesn’t start off with all its neurons already connected – it creates and dissolves connections as it learns. So connections between facts and concepts aren’t just metaphorical, they are biological too.
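The ‘patterns of connections’ idea is easier to see in a toy model than in prose. The sketch below is a minimal Hopfield-style network – a simplified model from the same research tradition, offered as my own illustration rather than anything discussed in the book – in which the same six units hold two different patterns in their connection weights, and a degraded cue still settles back onto the stored pattern:

```python
# A toy Hopfield-style network: the same set of units stores several
# patterns in its connection weights, loosely illustrating the idea
# that memories live in patterns of connections rather than in
# separately stored representations. An illustrative sketch only.

def train(patterns):
    """Hebbian learning: strengthen the connection between any two
    units that are active together in the stored patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, passes=5):
    """Repeatedly update each unit from its weighted inputs until the
    network settles on the stored pattern nearest the cue."""
    state = list(cue)
    for _ in range(passes):
        for i in range(len(state)):
            total = sum(w[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if total >= 0 else -1
    return state

# Two different 'memories' held by the same six units.
a = [1, 1, 1, -1, -1, -1]
b = [1, -1, 1, -1, 1, -1]
w = train([a, b])

# A degraded cue (one unit flipped) still settles on pattern a.
cue = [1, 1, -1, -1, -1, -1]
print(recall(w, cue) == a)  # → True
```

Nothing is stored ‘at an address’ here: delete the patterns and the memories survive in the weights, which is the point of the pixels analogy.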
Because it’s difficult to investigate functioning brains, computers offered a way of figuring out how information was being processed by the brain. Although this was a fruitful area of research in the 1960s and 70s, researchers kept running into difficulties. Problems arose because the human brain isn’t built like a computer; it’s more like a Heath Robinson contraption cobbled together from spare parts. It works after a fashion, and some parts of it are extremely efficient, but if you want to understand how it works, you have to get acquainted with its idiosyncrasies. The idiosyncrasies exist because the brain is a biological organ with all the quirky features that biological organs tend to have. Trying to figure out how it works from the way people use it has limitations; information about the biological structure and function of the brain is needed to explain why brains work in some rather odd ways.
Since the development of scanning techniques in the 1980s, the attention of cognitive science has shifted back towards the biological mechanisms involved. This doesn’t mean that the information theory approach is defunct – far from it – there’s been considerable interest in computational models of cognition and in cognitive errors and biases, for example. But the information theory and biological approaches are complementary; each approach makes more sense in the light of the other.
more than artificial intelligence
Daisy points out that “much of the modern research into intelligence was inspired and informed by research into artificial intelligence” (p.18). Yes, it was, but work on biological mechanisms, perception, attention and memory was going on simultaneously. Then “in the 1960s and 1970s researchers agreed on a basic mental model of cognition that has been refined and honed since then.” That’s one way of describing the sea change in cognitive science that’s happened since the introduction of scanning techniques, but it’s something of an understatement. Daisy then quotes Kirschner, Sweller and Clark: “working memory can be equated with consciousness”. In a way it can, but facts and rules and digits are only a tiny fraction of what consciousness involves, though you wouldn’t know that to read Daisy’s account. Then there’s the nature of long-term memory. According to Daisy “when we try to solve any problem, we draw on all the knowledge that we have committed to long-term memory” (p.63). Yes, we do in a sense, but long-term memory is notoriously unreliable.
What Daisy didn’t say about cognitive psychology is as important as what she did say. Aside from all the cognitive research that wasn’t about artificial intelligence, Daisy fails to mention a model of working memory that’s dominated cognitive psychology for 40 years – the one proposed by Baddeley and Hitch in 1974. Recent research has shown that it’s an accurate representation of what happens in the brain. But despite being a leading authority on working memory, Baddeley gets only one mention in an endnote in Daisy’s book (the same ‘more technical’ reference that Willingham cites – also in an endnote) and isn’t mentioned at all in the Kirschner, Sweller and Clark paper. At the ResearchED conference in Birmingham in April this year, one teacher who’d given a presentation on memory told me he’d never heard of Baddeley. I’m drawing attention to this not because I have a special interest in Baddeley’s model, but because omitting his work from a body of evidence about working memory is a bit like discussing the structure of DNA without mentioning Crick and Watson’s double helix, or discussing 19th century literature without mentioning Dickens. Also noticeable by her absence is Susan Gathercole, a professor of cognitive psychology at York, who researches working memory problems in children. Her work couldn’t be more relevant to education if it tried, but it’s not mentioned. Another missing name is Antonio Damasio, a neurologist who’s tackled the knotty problem of consciousness – highly relevant to working memory. Because of his background in biology, Damasio takes a strongly embodied view of consciousness; what we are aware of is affected by our physiology and emotions as well as our perceptions and memory. Daisy can’t write about everything, obviously, but it seemed odd to me that her model of cognition is drawn only from concepts central to one strand of one discipline at one period of time, not from an overview of the whole field.
It was also odd that she cited secondary sources when work by people who have actually done the relevant research is readily accessible.
does this matter?
On her blog, Daisy sums up the evidence from cognitive psychology in three principles: “working memory is limited; long-term memory is powerful; and we remember what we think about”. When I’ve raised the issue of memory and cognition being more complex than Willingham’s explicitly ‘very simple’ model, teachers who support Daisy’s thesis have asked me if that makes any difference.
Other findings from cognitive psychology don’t make any difference to the three principles as they stand. Nor do they make it inappropriate for teachers to apply those principles, as they stand, to their teaching. But they do make a difference to the conclusions Daisy draws about facts, schemata and the curriculum. Whether they refute the myths or not depends on those conclusions.
a model of cognition
If I’ve understood correctly, Daisy is saying that working memory (WM) has limited capacity and limited duration, but long-term memory (LTM) has a much greater capacity and duration. If we pay attention to the information in WM, it’s stored permanently in LTM. The brain ‘chunks’ associated information in LTM, so that several smaller items can be retrieved into WM as one larger item, in effect increasing the capacity of WM. Daisy illustrates this by comparing the difficulty of recalling a string of 16 numerals with a string of 16 letters:

the cat is on the mat
The numerals are difficult to recall, but the letters are easily recalled because our brains have already chunked those frequently encountered letter patterns into words, the capacity of WM is large enough to hold six words, and once the words are retrieved we can quickly decompose them into their component letters. So in Daisy’s model, memorising information increases the amount of information WM can handle.
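The chunking step can be sketched in a few lines of code. The word list, the greedy matching and the capacity figure of seven items are all illustrative assumptions on my part, not details from the book:

```python
# A toy illustration of 'chunking', assuming a working-memory capacity
# of roughly seven items (the classic 'seven plus or minus two'
# estimate). As raw letters the string exceeds the limit; chunked into
# words it fits comfortably.

WM_CAPACITY = 7

known_words = {"the", "cat", "is", "on", "mat"}  # stands in for LTM

def chunk(letters, vocabulary):
    """Greedily group a letter string into the longest known words."""
    chunks, i = [], 0
    while i < len(letters):
        for j in range(len(letters), i, -1):  # try longest match first
            if letters[i:j] in vocabulary:
                chunks.append(letters[i:j])
                i = j
                break
        else:
            chunks.append(letters[i])  # unknown letter stays one item
            i += 1
    return chunks

letters = "thecatisonthemat"          # 16 items: too many for WM
words = chunk(letters, known_words)   # 6 items: within capacity

print(len(letters) > WM_CAPACITY)     # → True
print(words)    # → ['the', 'cat', 'is', 'on', 'the', 'mat']
print(len(words) <= WM_CAPACITY)      # → True
```

The interesting design point is that `chunk` only works because the ‘knowledge’ in `known_words` already exists – which is exactly Daisy’s argument about memorised information.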
I was with her so far. It was the conclusions Daisy goes on to draw about facts, schemata and the curriculum that puzzled me. The aha! moment came when I re-read her comments on Bloom’s taxonomy of educational objectives. Bloom adopts a concept that’s important in many fields, including information theory and cognitive psychology. It’s the concept of levels of abstraction, sometimes referred to as levels of granularity.
levels of abstraction
Levels of abstraction form an integral part of some knowledge domains. Chemists are familiar with thinking about their subject at the subatomic, atomic and molecular levels; biologists with thinking about a single organism at the molecular, cellular, organ, system or whole body level; geographers and sociologists with thinking about a population at the household, city or national level. It’s important to note three things about levels of abstraction:
First, the same fundamental entities are involved at different levels of abstraction. The subatomic ‘particles’ in a bowl of common salt are the same particles whether you’re observing their behaviour as subatomic particles, as atoms of sodium and chlorine or as molecules of sodium chloride. Cells are particular arrangements of chemicals, organs are particular arrangements of cells, and the circulatory or respiratory systems are particular arrangements of organs. The same people live in households, cities or nations.
Secondly, entities behave differently at different levels of abstraction. Molecules behave differently to their component atoms (think of the differences between sodium, chlorine and sodium chloride), the organs of the body behave differently to the cells they are built from, and nations behave differently to the populations of cities and households.
Thirdly, what happens at one level of abstraction determines what happens at the next level up. Sodium chloride has its properties because it’s formed from sodium and chlorine – if you replaced the sodium with potassium you’d get a chemical compound that tastes very different to salt. And if you replaced the cells in the heart with liver cells you wouldn’t have a heart, you’d have a liver. The behaviour of nations depends on how the population is made up.
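The three points can be sketched with the population example: the same individual records underlie every level, each level is a particular arrangement of the level below, and the higher levels have properties the lower ones don’t. All names and groupings below are invented for illustration:

```python
# Viewing the same fundamental entities (people) at three levels of
# abstraction: household, city, nation. Illustrative data only.
from collections import defaultdict

# The fundamental entities: (name, household, city) records.
people = [
    ("Ann", "h1", "York"), ("Ben", "h1", "York"),
    ("Cal", "h2", "York"), ("Dee", "h3", "Leeds"),
    ("Eli", "h3", "Leeds"), ("Fay", "h4", "Leeds"),
]

def group_by(records, key_index):
    """View the same records at a coarser level of abstraction."""
    groups = defaultdict(list)
    for record in records:
        groups[record[key_index]].append(record[0])
    return dict(groups)

households = group_by(people, 1)            # household level
cities = group_by(people, 2)                # city level
nation = {"all": [p[0] for p in people]}    # national level

# First point: the same entities appear at every level.
assert sum(len(v) for v in households.values()) == len(people)

# Second point: each level has properties the one below lacks –
# a city has a population size; an individual person doesn't.
print({city: len(names) for city, names in cities.items()})
# → {'York': 3, 'Leeds': 3}
```

The third point falls out too: change a record at the bottom level (move Cal to Leeds, say) and the city-level picture changes with it.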
The levels of abstraction Bloom uses in his taxonomy are (starting from the bottom) knowledge, comprehension, application, analysis, synthesis and evaluation. In her model of cognition Daisy refers to several levels of abstraction, although she doesn’t call them that and doesn’t clearly differentiate between them. That might be intentional. She describes Bloom’s taxonomy as a ‘metaphor’ and says it’s a misleading one because it implies that ‘the skills are somehow separate from knowledge’ and that ‘knowledge is somehow less worthy and important’ (p.21). Whether Bloom’s taxonomy is accurate or not, it looks as if Daisy’s perception of it as a ‘metaphor’ and her focus on the current popular emphasis on higher-level skills mean that she overlooks the core principle implicit in Bloom’s taxonomy: that you can’t evaluate without synthesis, synthesise without analysis, analyse without application, or apply without comprehension. And you can’t do any of those things without knowledge. The various processes are described as ‘lower’ and ‘higher’ not because a value judgement is being made about their importance or because they involve different things entirely, but because the higher ones are derived from the lower ones in the taxonomy.
It’s possible, of course, that educational theorists have also got hold of the wrong end of the stick and have seen Bloom’s six levels of abstraction not as dependent on one another but as independent from each other. Daisy’s comments on Bloom explained why I’ve had some confusing conversations with teachers about ‘skills’. I’ve been using the term in a generic sense to denote facility in handling knowledge; the teachers have been using it in the narrow sense of specific higher-level skills required by the national curriculum.
Daisy appears to be saying that the relationship between knowledge and skills isn’t hierarchical. She provides two alternative ‘metaphors’; ED Hirsch’s scrambled egg and Joe Kirby’s double helix representing the dynamic, interactive relationship between knowledge and skills (p.21). I think Joe’s metaphor is infinitely better than Hirsch’s but it doesn’t take into account the different levels of abstraction of knowledge.
Bloom’s taxonomy is a framework for analysing educational objectives that are dependent on knowledge. In the next post, I look at a framework for analysing knowledge itself.