learning styles: a response to Greg Ashman

In a post entitled Why I’m happy to say that learning styles don’t exist, Greg Ashman says that one of the arguments I used in my previous post about learning styles “seems to be about the semantics of falsification”. I’m not sure that semantics is quite the right term, but the falsification of hypotheses certainly was a key point. Greg points out that “falsification does not mean proving with absolute certainty that something does not exist because you can’t do this and it would therefore be impossible to falsify anything”. I agree completely. It’s at the next step that Greg and I part company.

Greg seems to be arguing that because we can’t falsify a hypothesis with absolute certainty, sufficient evidence of falsification is enough to be going on with. That’s certainly true for science as a work-in-progress. But he then goes on to imply that if there’s little evidence that something exists, the lack of evidence for its existence is good enough to warrant us concluding it doesn’t exist.

I’m saying that because we can’t falsify a hypothesis with absolute certainty, we can never legitimately conclude that something doesn’t exist. All we can say is that it’s very unlikely to exist. Science isn’t about certainty, it’s about reducing uncertainty.

My starting point is that because we don’t know anything with absolute certainty, there’s no point making absolutist statements about whether things exist or not. That doesn’t get us anywhere except into pointless arguments.

Greg’s starting point appears to be that if there’s little evidence that something exists, we can safely assume it doesn’t exist, therefore we are justified in making absolutist claims about its existence.

Claiming categorically that learning styles, Santa Claus or fairies don’t exist is unlikely to have a massively detrimental impact on people’s lives. But putting the idea into teachers’ heads that good-enough falsification allows us to dismiss outright the existence of anything for which there’s little evidence is risky. The history of science is littered with tragic examples of theories being prematurely dismissed on the basis of little evidence – germ theory springing first to mind.

testing the learning styles hypothesis

Greg also says “a scientific hypothesis is one which makes a testable prediction. Learning styles theories do this.”

No they don’t. That’s the problem. Mathematicians can precisely define the terms in an equation. Philosophers can decide what they want the entities in their arguments to mean. Thanks to some sterling work on the part of taxonomists there’s now a strong consensus on what a swan, a crow or a duck-billed platypus is, rather than the appalling muddle that preceded it. But learning styles are not terms in an equation, or entities in philosophical arguments. They are not even like swans, crows or duck-billed platypuses; they are complex, fuzzy conceptual constructs. Unless you are very clear about how the particular constructs in your learning styles model can be measured, so that everyone who tests your model is measuring exactly the same thing, the hypotheses might be testable in principle but in reality it’s quite likely that no one has tested them properly. And that’s before you even get to what the conceptual constructs actually map on to in the real world.

This is a notorious problem for the social sciences. It doesn’t follow that all conceptual constructs are invalid, or that all hypotheses involving them are pseudoscience, or that the social sciences aren’t sciences at all. All it means is that social scientists often need to be a lot more rigorous than they have been.

I don’t understand why it’s so important for Daniel Willingham or Tom Bennett or Greg Ashman to categorise learning styles – or anything else for that matter – as existing or not. The evidence for the existence of Santa Claus, fairies or the Loch Ness monster is pretty flimsy, so most of us work on the assumption that they don’t exist. The fact that we can’t prove conclusively that they don’t exist doesn’t mean that we should be including them in lesson plans. But I’m not advocating the use of Santa Claus, fairies, the Loch Ness monster or learning styles in the classroom. I’m pointing out that saying ‘learning styles don’t exist’ goes well beyond what the evidence supports and, contrary to what Greg says in his post, implies that we can falsify a hypothesis with absolute certainty.

Absence of evidence is not evidence of absence. That’s an important scientific principle. It’s particularly relevant to a concept like learning styles, which is an umbrella term for a whole bunch of models encompassing a massive variety of allegedly stable traits, most of which have been poorly operationalized and poorly evaluated in terms of their contribution – or otherwise – to learning. The evidence about learning styles is weak, contradictory and inconclusive. I can’t see why we can’t just say that it’s weak, contradictory and inconclusive, so teachers would be well advised to give learning styles a wide berth – and leave it at that.

learning styles: what does Tom Bennett* think?

Tom Bennett’s disdain for learning styles is almost palpable, reminiscent at times of Richard Dawkins commenting on a papal pronouncement, but it started off being relatively tame. In May 2013, in a post on the ResearchEd2013 website coinciding with the publication of his book Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it, he asks ‘why are we still talking about learning styles?’ and claims “there is an overwhelming amount of evidence suggesting that learning styles do not exist, and that therefore we should not be instructing students according to these false preferences”.

In August the same year for his New Scientist post Separating neuromyths from science in education, he tones down the claim a little, pointing out that learning styles models are “mostly not backed by credible evidence”.

But the following April, Tom’s back with a vitriolic vengeance in the TES with Zombie bølløcks: World War VAK isn’t over yet. He rightly – and colourfully – points out that time or resources shouldn’t be wasted on initiatives that have not been demonstrated to be effective. And he’s quite right to ask “where were the educationalists who read the papers, questioned the credentials and demanded the evidence?” But Bennett isn’t just questioning, he’s angry.

He’s thinking of putting on his “black Thinking Hat of reprobation and fury”. Why? Because “it’s all bølløcks, of course. It’s bølløcks squared, actually, because not only has recent and extensive investigation into learning styles shown absolutely no correlation between their use and any perceptible outcome in learning, not only has it been shown to have no connection to the latest ways we believe the mind works, but even investigation of the original research shows that it has no credible claim to be taken seriously. Learning Styles are the ouija board of serious educational research” and he includes a link to Pashler et al to prove it.

Six months later, Bennett teams up with Daniel Willingham for a TES piece entitled Classroom practice – Listen closely, learning styles are a lost cause in which Willingham reiterates his previous arguments and Tom contributes an opinion piece dismissing what he calls zombie theories, ranging from red ink negativity to Neuro-Linguistic Programming and Multiple Intelligences.

why learning styles are not a neuromyth

Tom’s anger would be justified if he were right. But he isn’t. In May 2013, in Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it he says of the VAK model “And yet there is no evidence for it whatsoever. None. Every major study done to see if using learning style strategies actually work has come back with totally negative results” (p.144). He goes on to dismiss Kolb’s Learning Style Inventory and Honey and Mumford’s Learning Styles Questionnaire, adding “there are others but I’m getting tired just typing all the categories and wondering why they’re all so different and why the researchers disagree” (p.146). That tells us more about Tom’s evaluation of the research than it does about the research itself.

Education and training research has long suffered from a serious lack of rigour. One reason is that both are heavily derived fields of discourse; education and training theory draws on disciplines as diverse as psychology, sociology, philosophy, politics, architecture, economics and medicine. Education and training researchers need a good understanding of a wide range of fields. Taking all relevant factors into account is challenging, and in the meantime teachers and trainers have to get on with the job. So it’s tempting to get an apparently effective learning model out there ASAP, rather than make sure it’s rigorously tested and systematically compared to other learning models first.

Review paper after review paper has come to similar conclusions when evaluating the evidence for learning styles models:

• there are many different learning styles models, featuring many different learning styles
• it’s difficult to compare models because they use different constructs
• the evidence supporting learning styles models is weak, often because of methodological issues
• some models do have validity or reliability; others don’t
• people do have different aptitudes in different sensory modalities, but
• there’s no evidence that teaching/training all students in their ‘best’ modality improves performance.

If Tom hadn’t got tired typing he might have discovered that some learning styles models have more validity than the three he mentions. And if he’d read the Coffield review more carefully he would have found that the reason the models are so different is that they are based on different theories and use different (often poorly operationalized) constructs, and that researchers disagree for a host of reasons – a phenomenon he’d do well to get his head round if he wants teachers to get involved in research.

evaluating the evidence

Reviewers of learning styles models have evaluated the evidence by looking in detail at its content and quality and have then drawn general conclusions. They’ve examined, for example, the validity and reliability of component constructs, what hypotheses have been tested, the methods used in evaluating the models and whether studies have been peer-reviewed.

What they’ve found is that people do have learning styles (depending on how learning style is defined), but there are considerable variations in validity and reliability between learning styles models, and that overall the quality of the evidence isn’t very good. As a consequence, reviewers have been in general agreement that there isn’t enough evidence to warrant teachers investing time or resources in a learning styles approach in the classroom.

But Tom’s reasoning appears to move in the opposite direction: to start with the conclusion that teachers shouldn’t waste time or resources on learning styles, and to infer that:

• variable evidence means all learning styles models can be rejected
• poor quality evidence means all learning styles models can be rejected
• if some learning styles models are invalid and unreliable they must all be invalid and unreliable
• if the evidence is variable and poor and some learning styles models are invalid or unreliable, then learning styles don’t exist.

definitions of learning style

It’s Daniel Willingham’s video Learning styles don’t exist that sums it up for Tom. So why does Willingham say learning styles don’t exist? It all depends on definitions, it seems. On his learning styles FAQ page Willingham says:

I think that often when people believe that they observe obvious evidence for learning styles, they are mistaking it for ability. The idea that people differ in ability is not controversial—everyone agrees with that. Some people are good at dealing with space, some people have a good ear for music, etc. So the idea of “style” really ought to mean something different. If it just means ability, there’s not much point in adding the new term.

This is where Willingham lost me. Obviously, a preference for learning in a particular way is not the same as an ability to learn in a particular way. And I agree that there’s no point talking about style if what you mean is ability. The VAK model claims that preference is an indicator of ability, and the evidence doesn’t support that hypothesis.

But not all learning styles models are about preference; most claim to identify patterns of ability. That’s why learning styles models have proliferated; employers want a quick overall assessment of employees’ strengths and weaknesses when it comes to learning. Because the models encompass factors other than ability – such as personality and ways of approaching problem-solving – referring to learning styles rather than ability seems reasonable.

So if the idea that people differ in ability is not controversial, many learning styles models claim to assess ability, and some are valid and/or reliable, how do Willingham and Bennett arrive at the conclusion that learning styles don’t exist?

The answer, I suspect, is that they are equating learning styles with the VAK model, the version most widely used in primary education. It’s no accident that Coffield et al evaluated learning styles and pedagogy in post-16 learning; it’s the world outside the education system that’s the main habitat of learning styles models. It’s fair to say there’s no evidence to support the VAK model – and many others – and that it’s not worth teachers investing time and effort in them. But the evidence simply doesn’t warrant lumping together all learning styles models and dismissing them outright.

taking liberties with the evidence

I can understand that if you’re a teacher who’s been consistently told that learning styles are the way to go and then discover there’s insufficient evidence to warrant you using them, you might be a bit miffed. But Tom’s reprobation and fury doesn’t warrant him taking liberties with the evidence. This is where I think Tom’s thinking goes awry:

• If the evidence supporting learning styles models is variable, it’s variable. It means some learning styles models are probably rubbish but some aren’t. Babies shouldn’t be thrown out with bathwater.

• If the evidence evaluating learning styles is of poor quality, it’s of poor quality. You can’t conclude from poor quality evidence that learning styles models are rubbish. You can’t conclude anything from poor quality evidence.

• If the evidence for learning styles models is variable and of poor quality, it isn’t safe to conclude that learning styles don’t exist. Especially if review paper after review paper has concluded that they do – depending on your definition of learning styles.

I can understand why Willingham and Bennett want to alert teachers to the lack of evidence for the VAK learning styles model. But I felt Daniel Willingham’s claim that learning styles don’t exist was misleading and Tom Bennett’s vitriol unjustified. There’s a real risk in the case of learning styles of one neuromyth being replaced by another.

*Tom appears to have responded to this post here and here. With two more articles about zombies.

References
Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, 9, 105-119.

learning styles: the evidence

The PTA meeting was drawing to a close. The decision to buy more books for the library instead of another interactive whiteboard had been unanimous, and the conversation had turned to educational fads.

“Now, of course,” the headteacher was saying, “it’s all learning styles. We’re visual, auditory or kinaesthetic learners – you know, Howard Gardner’s Multiple Intelligences.” His comment caught my attention because I was familiar with Gardner’s managerial competencies, but couldn’t recall them having anything to do with sensory modalities and I didn’t know they’d made their way into primary education. My curiosity piqued, I read Gardner’s book Frames of Mind: The Theory of Multiple Intelligences. It prompted me to delve into his intriguing earlier account of working with brain-damaged patients – The Shattered Mind.

Where does the VAK model come from?

Gardner’s multiple intelligences model was clearly derived from his pretty solid knowledge of brain function, but wherever the idea of visual, auditory and kinaesthetic (VAK) learning styles had come from, it didn’t look as if it came from Gardner. A bit of Googling ‘learning styles’ kept bringing up the names Dunn and Dunn, but I couldn’t find anything on the VAK model’s origins. So I phoned a friend. “It’s based on Neuro-Linguistic Programming”, she said.

This didn’t bode well. Neuro-Linguistic Programming (NLP) is a therapeutic approach devised in the 1970s by Richard Bandler, a psychology graduate, and John Grinder, then an assistant professor of linguistics who, like Frank Smith, had worked in George magical-number-seven-plus-or-minus-two Miller’s lab and been influenced by Noam Chomsky’s ideas about linguistics.

If I’ve understood Bandler and Grinder’s idea correctly, they proposed that insights into people’s internal, subjective sensory representations can be gleaned from their eye movements and the words they use. According to their model, this makes it possible to change those internal representations to reduce anxiety or eliminate phobias. Although there are some valid elements in the theory behind NLP, evaluations of the model have in the main been critical and evidence supporting the effectiveness of NLP as a therapeutic approach has been notable by its absence (see e.g. Witkowski, 2010).

So the VAK Learning Styles model appeared to be an educational intervention derived from a debatable theory and a therapeutic technique that doesn’t work too well.

Evaluating the evidence

Soon after I’d phoned my friend, in 2004, Frank Coffield and colleagues published a systematic and rigorous evaluation of 13 learning styles models used in post-16 learning and found the reliability and validity of many of them wanting. They didn’t evaluate the VAK model as such, but did review the Dunn and Dunn Learning Styles Inventory, which is very similar, and it didn’t come out with flying colours. I mentally consigned VAK Learning Styles to my educational fads wastebasket.

Fast forward a decade. Teachers using social media were becoming increasingly dismissive of VAK Learning Styles and of learning styles in general. Their objections appeared to trace back to Tom Bennett’s 2013 book Teacher Proof. Tom doesn’t like learning styles. In Separating neuromyths from science in education, an article on the New Scientist website, he summarises his ‘hitlist’ of neuromyths. He claims the VAK model is “the most popular version” of the learning styles theory, and that it originated in Neil Fleming’s VARK (visual, auditory, read-write, kinaesthetic) concept. According to Fleming, a teacher from New Zealand, his model does indeed derive from Neuro-Linguistic Programming. Bennett says the Coffield review “found up to 71 learning styles had been described, mostly not backed by credible evidence”.

This is where things started to get a bit confusing. The Coffield review identified 71 different learning styles models and evaluated 13 of them against four basic criteria: internal consistency, test-retest reliability, construct validity and predictive validity. The results were mixed, ranging from one model that met all four criteria to two that met none. Five of the 13 use the words ‘learning style(s)’ in their name. They included Dunn and Dunn’s Learning Styles Inventory, which features visual, auditory, kinaesthetic and tactile (VAKT) modalities, but not Fleming’s VARK model nor the popular VAK Learning Styles model as such.
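To make one of those criteria concrete: test-retest reliability is usually estimated by correlating the scores the same respondents get on two occasions. Here’s a minimal sketch, with entirely made-up questionnaire scores (neither the data nor the threshold comes from the Coffield review):

```python
# Illustrative only: test-retest reliability as a Pearson correlation
# between two administrations of the same (hypothetical) questionnaire.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical scores for ten respondents, tested twice a few weeks apart.
time1 = [12, 18, 9, 22, 15, 30, 11, 25, 17, 20]
time2 = [14, 17, 10, 21, 13, 28, 12, 26, 19, 18]

r = pearson_r(time1, time2)
print(f"test-retest r = {r:.2f}")  # high r: respondents keep their rank order
```

A correlation around 0.7 or above is conventionally treated as acceptable; an instrument whose scores shuffle between administrations can’t be measuring a stable trait, whatever else it measures.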

Having cited John Hattie’s research on the effect size of educational interventions that found the impact of individualisation to be relatively low, Coffield et al concluded “it seems sensible to concentrate limited resources and staff efforts on those interventions that have the largest effect sizes” (p.134).

A later review of learning styles by Pashler et al (2008) took a different approach. The authors evaluated the evidence for what they call the meshing hypothesis; the claim that individualizing instruction to the learner’s style can enable them to achieve a better learning outcome. They found “plentiful evidence arguing that people differ in the degree to which they have some fairly specific aptitudes for different kinds of thinking and for processing different types of information” (p.105). But like the Coffield team, Pashler et al concluded “at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number” (p.105).
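The kind of evidence Pashler et al were looking for, and didn’t find, is a crossover interaction: learners classified by style, randomly assigned to instruction methods, with each group doing best under its matched method. A minimal sketch of that check, using invented group means (none of these numbers are from the paper):

```python
# Illustrative only: the pattern of results that would support the
# 'meshing' hypothesis. All means below are invented for the example.

mean_score = {
    # (learner_style, instruction_method): mean test score
    ("visual", "visual"): 72, ("visual", "auditory"): 65,
    ("auditory", "visual"): 64, ("auditory", "auditory"): 71,
}

def crossover(scores):
    """True if each style group outperforms under its matched method."""
    visual_benefit = scores[("visual", "visual")] - scores[("visual", "auditory")]
    auditory_benefit = scores[("auditory", "auditory")] - scores[("auditory", "visual")]
    return visual_benefit > 0 and auditory_benefit > 0

print(crossover(mean_score))  # True for this invented data
```

If one instruction method is simply better for everyone, there’s no crossover; the data support good teaching, not meshing. That’s the distinction Pashler et al found most studies failed to test.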

Populations, groups and individuals

The research by Coffield, Pashler and Hattie highlights a core challenge for any research relating to large populations: what is true at the population level might not hold for minority groups or specific individuals – and vice versa. Behavioural studies that compare responses to different treatments usually present results at the group level (see for example Pashler et al’s Fig 1). Results from individuals that differ substantially from the group are usually treated as ‘outliers’ and overlooked. But a couple of high or low scores in a small group can make a substantial difference to the mean. It’s useful to know how the average student behaves if you’re researching teaching methods or developing educational policy, but the challenge for teachers is that they don’t teach the average student – they have to teach students across the range, including the outliers.
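The arithmetic behind that point is easy to see with made-up numbers (the scores below are purely illustrative):

```python
# Illustrative only: in a small group, a couple of atypical scores can
# shift the mean substantially. All numbers are invented.

def mean(xs):
    return sum(xs) / len(xs)

group = [52, 55, 48, 50, 53, 49, 51, 54]   # eight fairly typical scores
with_outliers = group + [95, 92]           # the same class plus two outliers

print(mean(group))          # 51.5
print(mean(with_outliers))  # 59.9 - two students moved the mean by over 8 points
```

A result reported as a group mean can therefore say very little about what worked for any particular student in the group, which is exactly the teacher’s problem.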

So although it makes sense at the population level to focus on Hattie’s top types of intervention, those interventions might not yield the best outcomes for particular classes, groups or individual students. And although the effect sizes of interventions involving the personal attributes of students are relatively low, they are far from non-existent.

In short, reviewers have noted that:
• there is evidence to support the idea that people have particular aptitudes for particular types of learning,
and
• some learning styles models have some validity and reliability,
but
• there is little evidence that teaching children in their ‘best’ sensory modality will improve learning outcomes,
so
• given the limited resources available, the evidence doesn’t warrant teachers investing a lot of time and effort in learning styles assessments.

But you wouldn’t know that from reading some commentaries on learning styles. In the next couple of posts, I want to look at what Daniel Willingham and Tom Bennett have to say about them.

Bibliography
Bandler, R. & Grinder, J. (1975). The structure of magic I: A book about language and therapy. Science & Behaviour Books, Palo Alto.

Bandler, R. & Grinder, J. (1979). Frogs into Princes: The introduction to Neuro-Linguistic Programming. Eden Grove Editions (1990).

Bennett, T. (2013). Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it, Routledge.

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Fleming, N. & Mills, C. (1992). Not Another Inventory, Rather a Catalyst for Reflection. To Improve the Academy. Professional and Organizational Development Network in Higher Education. Paper 246.

Gardner, H. (1977). The Shattered Mind: The person after brain damage. Routledge & Kegan Paul.

Gardner, H. (1983). Frames of Mind: The theory of multiple intelligences. Fontana (1993).

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, 9, 105-119.

Witkowski, T. (2010). Thirty-Five Years of Research on Neuro-Linguistic Programming. NLP Research Data Base. State of the Art or Pseudoscientific Decoration? Polish Psychological Bulletin, 41, 58-66.

seven myths about education: finally…

When I first heard about Daisy Christodoulou’s myth-busting book in which she adopts an evidence-based approach to education theory, I assumed that she and I would see things pretty much the same way. It was only when I read reviews (including Daisy’s own summary) that I realised we’d come to rather different conclusions from what looked like the same starting point in cognitive psychology. I’ve been asked several times why, if I have reservations about the current educational orthodoxy, think knowledge is important, don’t have a problem with teachers explaining things and support the use of systematic synthetic phonics, I’m critical of those calling for educational reform rather than those responsible for a system that needs reforming. The reason involves the deep structure of the models, rather than their surface features.

concepts from cognitive psychology

Central to Daisy’s argument is the concept of the limited capacity of working memory. It’s certainly a core concept in cognitive psychology. It explains not only why we can think about only a few things at once, but also why we oversimplify and misunderstand, are irrational, are subject to errors and biases and use quick-and-dirty rules of thumb in our thinking. And it explains why an emphasis on understanding at the expense of factual information is likely to result in students not knowing much and, ironically, not understanding much either.

But what students are supposed to learn is only one of the streams of information that working memory deals with; it simultaneously processes information about students’ internal and external environment. And the limited capacity of working memory is only one of many things that impact on learning; a complex array of environmental factors is also involved. So although you can conceptually isolate the material students are supposed to learn and the limited capacity of working memory, in the classroom neither of them can be isolated from all the other factors involved. And you have to take those other factors into account in order to build a coherent, workable theory of learning.

But Daisy doesn’t introduce only the concept of working memory. She also talks about chunking, schemata and expertise. Daisy implies (although she doesn’t say so explicitly) that schemata are to facts what chunking is to low-level data: that just as students automatically chunk low-level data they encounter repeatedly, so they will automatically form schemata for facts they memorise, and the schemata will reduce cognitive load in the same way that chunking does (p.20). That’s a possibility, because the brain appears to use the same underlying mechanism to represent associations between all types of information – but it’s unlikely. We know that schemata vary considerably between individuals, whereas people chunk information in very similar ways. That’s not surprising, given that the information being chunked is simple and highly consistent, whereas schemata often involve complex, inconsistent information.
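The chunking idea itself is easy to illustrate: a letter string that matches patterns already stored in long-term memory takes up far fewer working-memory items than the same string seen as raw letters. A toy sketch (the acronym list and greedy grouping are my own simplification, not a model from the book):

```python
# Illustrative only: chunking reduces the number of working-memory items
# when the input matches patterns already stored in long-term memory.

KNOWN_CHUNKS = {"CIA", "FBI", "BBC"}  # hypothetical familiar acronyms

def items_to_hold(letters, known):
    """Greedily group letters into known 3-letter chunks; count items left."""
    items, i = 0, 0
    while i < len(letters):
        if letters[i:i+3] in known:
            items += 1      # one familiar chunk counts as one item
            i += 3
        else:
            items += 1      # an unfamiliar letter stays a single item
            i += 1
    return items

print(items_to_hold("CIAFBIBBC", KNOWN_CHUNKS))  # 3 chunks
print(items_to_hold("CIAFBIBBC", set()))         # 9 single letters
```

The contrast with schemata is that chunking like this is near-universal (most readers share the same acronyms), whereas the schemata people build for complex factual material differ from person to person.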

Experimental work involving priming suggests that schemata increase the speed and reliability of access to associated ideas and that would reduce cognitive load, but students would need to have the schemata that experts use explained to them in order to avoid forming schemata of their own that were insufficient or misleading. Daisy doesn’t go into detail about deep structure or schemata, which I think is an oversight, because the schemata students use to organise facts are crucial to their understanding of how the facts relate to each other.

migrating models

Daisy and teachers taking a similar perspective frequently refer approvingly to ‘traditional’ approaches to education. It’s been difficult to figure out exactly what they mean. Daisy focuses on direct instruction and memorising facts, Old Andrew’s definition is a bit broader and Robert Peal’s appears to include cultural artefacts like smart uniforms and school songs. What they appear to have in common is a concept of education derived from the behaviourist model of learning that dominated psychology in the inter-war years. In education it focused on what was being learned; there was little consideration of the broader context involving the purpose of education, power structures, socioeconomic factors, the causes of learning difficulties etc.

Daisy and other would-be reformers appear to be trying to update the behaviourist model of education with concepts that, ironically, emerged from cognitive psychology not long after it switched focus from the behaviourist model of learning to a computational one; the point at which the field was first described as ‘cognitive’. The concepts the educational reformers focus on fit the behaviourist model well because they are strongly mechanistic and largely context-free. The examples that crop up frequently in the psychology research Daisy cites usually involve maths, physics and chess problems. These types of problems were chosen deliberately by artificial intelligence researchers because they were relatively simple and clearly bounded; the idea was that once the basic mechanism of learning had been figured out, the principles could then be extended to more complex, less well-defined problems.

Researchers later learned a good deal about complex, less well-defined problems, but Daisy doesn’t refer to that research. Nor do any of the other proponents of educational reform. What more recent research has shown is that complex, less well-defined knowledge is organised by the brain in a different way to simple, consistent information. So in cognitive psychology the computational model of cognition has been complemented by a constructivist one, but it’s a different constructivist model to the social constructivism that underpins current education theory. The computational model never quite made it across to education, but early constructivist ideas did – in the form of Piaget’s work. At that point, education theory appears to have grown legs and wandered off in a different direction to cognitive psychology. I agree with Daisy that education theorists need to pay attention to findings from cognitive psychology, but they need to pay attention to what’s been discovered in the last half century not just to the computational research that superseded behaviourism.

why criticise the reformers?

So why am I critical of the reformers, but not of the educational orthodoxy? When my children started school, they, and I, were sometimes perplexed by the approaches to learning they encountered. Conversations with teachers painted a picture of educational theory that consisted of a hotch-potch of valid concepts, recent tradition, consequences of policy decisions and ideas that appeared to have come from nowhere, like Brain Gym and Learning Styles. The only unifying feature I could find was a social constructivist approach, and even on that opinions seemed to vary. It was difficult to tell what the educational orthodoxy was, or even if there was one at all. It’s difficult to critique a model that might not be a model. So I perked up when I heard about teachers challenging the orthodoxy using the findings from scientific research and calling for an evidence-based approach to education.

My optimism was short-lived. Although the teachers talked about evidence from cognitive psychology and randomised controlled trials, the model of learning they were proposing appeared as patchy, incomplete and incoherent as the model they were criticising – it was just different. So here are my main reservations about the educational reformers’ ideas:

1. If mainstream education theorists aren’t aware of working memory, chunking, schemata and expertise, that suggests there’s a bigger problem than just their ignorance of these particular concepts. It suggests that they might not be paying enough attention to developments in some or all of the knowledge domains their own theory relies on. Knowing about working memory, chunking, schemata and expertise isn’t going to resolve that problem.

2. If teachers don’t know about working memory, chunking, schemata and expertise, that suggests there’s a bigger problem than just their ignorance of these particular concepts. It suggests that teacher training isn’t providing teachers with the knowledge they need. To some extent this would be an outcome of weaknesses in educational theory, but I get the impression that trainee teachers aren’t expected or encouraged to challenge what they’re taught. Several teachers who’ve recently discovered cognitive psychology have appeared rather miffed that they hadn’t been told about it. They were all Teach First graduates; I don’t know if that’s significant.

3. A handful of concepts from cognitive psychology doesn’t constitute a robust enough foundation for developing a pedagogical approach or designing a curriculum. Daisy essentially reiterates what Daniel Willingham has to say about the breadth and depth of the curriculum in Why Don’t Students Like School?. He’s a cognitive psychologist and well-placed to show how models of cognition could inform education theory. But his book isn’t about the deep structure of theory, it’s about applying some principles from cognitive psychology in the classroom in response to specific questions from teachers. He explores ideas about pedagogy and the curriculum, but that’s as far as it goes. Trying to develop a model of pedagogy and design a curriculum based on a handful of principles presented in a format like this is like trying to devise courses of treatment and design a health service based on the information gleaned from a GP’s problem page in a popular magazine. But I might be being too charitable; Willingham is a trustee of the Core Knowledge Foundation, after all.

4. Limited knowledge: Rightly, the reforming teachers expect students to acquire extensive factual knowledge and emphasise the differences between experts and novices. But Daisy’s knowledge of cognitive psychology appears to be limited to a handful of principles discovered over thirty years ago. She, Robert Peal and Toby Young all quote Daniel Willingham on research in cognitive psychology during the last thirty years, but none of them, Willingham included, tells us what it is. If they did, it would show that the principles they refer to don’t scale up when it comes to complex knowledge. Nor do most of the teachers writing about educational reform appear to have much teaching experience. That doesn’t mean they are wrong, but it does call into question the extent of their expertise relating to education.

Some of those supporting Daisy’s view have told me they are aware that they don’t know much about cognitive psychology, but have argued that they have to start somewhere and it’s important that teachers are made aware of concepts like the limits of working memory. That’s fine if that’s all they are doing, but it’s not. Redesigning pedagogy and the curriculum on the basis of a handful of facts makes sense if you think that what’s important is facts and that the brain will automatically organise those facts into a coherent schema. The problem is of course that that rarely happens in the absence of an overview of all the relevant facts and how they fit together. Cognitive psychology, like all other knowledge domains, has incomplete knowledge but it’s not incomplete in the same way as the reforming teachers’ knowledge. This is classic Sorcerer’s Apprentice territory; a little knowledge, misapplied, can do a lot of damage.

5. Evaluating evidence: Then there’s the way evidence is handled. Evidence-based knowledge domains have different ways of evaluating evidence, but they all evaluate it. That means weighing up the pros and cons, comparing evidence for and against competing hypotheses and so on. Evaluating evidence does not mean presenting only the evidence that supports whatever view you want to get across. That might be a way of making your case more persuasive, but is of no use to anyone who wants to know about the reliability of your hypothesis or your evidence. There might be a lot of evidence telling you your hypothesis is right – but a lot more telling you it’s wrong. But Daisy, Robert Peal and Toby Young all present supporting evidence only. They make no attempt to test the hypotheses they’re proposing or the evidence cited, and much of the evidence is from secondary sources – with all due respect to Daniel Willingham, just because he says something doesn’t mean that’s all there is to say on the matter.

cargo-cult science

I suggested to a couple of the teachers who supported Daisy’s model that, ironically, it resembled Feynman’s famous cargo-cult analogy (p. 97). They pointed out that the islanders were using replicas of equipment, whereas the concepts from cognitive psychology were the real deal. I’d suggest that even if the Americans had left their equipment on the airfield and the islanders had known how to use it, that wouldn’t have resulted in planes bringing in cargo – because there were other factors involved.

My initial response to reading Seven Myths about Education was one of frustration that despite making some good points about the educational orthodoxy and cognitive psychology, Daisy appeared to have got hold of the wrong ends of several sticks. This rapidly changed to concern that a handful of misunderstood concepts is being used as ‘evidence’ to support changes in national education policy.

In his recent speech at the Education Reform Summit, Michael Gove referred to the “solidly grounded research into how children actually learn of leading academics such as ED Hirsch or Daniel T Willingham”. Daniel Willingham has published peer-reviewed work, mainly on procedural learning, but I could find none by ED Hirsch. It would be interesting to know what the previous Secretary of State for Education’s criteria for ‘solidly grounded research’ and ‘leading academic’ were. To me the educational reform movement doesn’t look like an evidence-based discipline, but bears all the hallmarks of an ideological system looking for evidence that affirms its core beliefs. This is no way to develop public policy. Government should know better.

seven myths about education: traditional subjects

In Seven Myths about Education, Daisy Christodoulou refers to the importance of ‘subjects’ and clearly doesn’t think much of cross-curricular projects. In the chapter on myth 5 ‘we should teach transferable skills’ she cites Daniel Willingham pointing out that the human brain isn’t like a calculator that can perform the same operations on any data. Willingham must be referring to higher-level information-processing because Anderson’s model of cognition makes it clear that at lower levels the brain is like a calculator and does perform essentially the same operations on any data; that’s Anderson’s point. Willingham’s point is that skills and knowledge are interdependent; you can’t acquire skills in the absence of knowledge and skills are often subject-specific and depend on the type of knowledge involved.

Daisy dislikes cross-curricular projects because students are unlikely to have the requisite prior knowledge from across several knowledge domains, are often expected to behave like experts when they are novices and get distracted by peripheral tasks. I would suggest those problems are indicators of poor project design rather than problems with cross-curricular work per se. Instead, Daisy would prefer teachers to stick to traditional subject areas.

traditional subjects

Daisy refers several times to traditional subjects, traditional bodies of knowledge and traditional education. The clearest explanation of what she means is on pp.117-119, when discussing the breadth and depth of the curriculum:

“For many of the theorists we looked at, subject disciplines were themselves artificial inventions designed to enforce Victorian middle-class values … They may well be human inventions, but they are very useful … because they provide a practical way of teaching … important concepts …. The sentence in English, the place value in mathematics, energy in physics; in each case subjects provide a useful framework for teaching the concept.”

It’s worth considering how the subject disciplines the theorists complained about came into being. At the end of the 18th century, a well-educated, well-read person could have just about kept abreast of most advances in human knowledge. By the end of the 19th century that would have been impossible. The exponential growth of knowledge made increasing specialisation necessary; the names of many specialist occupations, including the term ‘scientist’, were coined in the 19th century. By the end of the 20th century, knowledge domains/subjects existed that hadn’t even been thought of 200 years earlier.

It makes sense for academic researchers to specialise and for secondary schools to employ teachers who are subject specialists because it’s essential to have good knowledge of a subject if you’re researching it or teaching it. The subject areas taught in secondary schools have been determined largely by the prior knowledge universities require from undergraduates. That determines A level content, which in turn determines GCSE content, which in turn determines what’s taught at earlier stages in school. That model also makes sense; if universities don’t know what’s essential in a knowledge domain, no one does.

The problem for schools is that they can’t teach everything, so someone has to decide on the subjects and subject content that’s included in the curriculum. The critics Daisy cites question traditional subject areas on the grounds that they reflect the interests of a small group of people with high social prestige (p.110-111).

criteria for the curriculum

Daisy doesn’t buy the idea that subject areas represent the interests of a social elite, but she does suggest an alternative criterion for curriculum content. Essentially, this is frequency of citation. In relation to the breadth of the curriculum, she adopts the principle espoused by ED Hirsch (and Daniel Willingham, Robert Peal and Toby Young), of what writers of “broadsheet newspapers and intelligent books” (p.116) assume their readers will know. The writers in question are exemplified by those contributing to the “Washington Post, Chicago Tribune and so on” (Willingham p.47). Toby Young suggests a UK equivalent – “Times leader writers and heavyweight political commentators” (Young p.34). Although this criterion for the curriculum is better than nothing, its limitations are obvious. The curriculum would be determined by what authors, editors and publishers knew about or thought was important. If there were subject areas crucial to human life that they didn’t know about, ignored or deliberately avoided, the next generation would be sunk.

When it comes to the depth of the curriculum, Daisy quotes Willingham: “cognitive science leads to the rather obvious conclusion that students must learn the concepts that come up again and again – the unifying ideas of each discipline” (Willingham p.48). My guess is that Willingham describes the ‘unifying ideas of each discipline’ as ‘concepts that come up again and again’ to avoid going into unnecessary detail about the deep structure of knowledge domains; he makes a clear distinction between the criteria for the breadth and depth of the curriculum in his book. But his choice of wording, if taken out of context, could give the impression that the unifying ideas of each discipline are the concepts that come up again and again in “broadsheet newspapers and intelligent books”.

One problem with the unifying ideas of each discipline is that they don’t always come up again and again. They certainly encompass “the sentence in English, place value in mathematics, energy in physics”, but sometimes the unifying ideas involve deep structure and schemata taken for granted by experts but not often made explicit, particularly to school students.

Daisy points out, rightly, that neither ‘powerful knowledge’ nor ‘high culture’ is owned by a particular social class or culture (p.118). But she apparently fails to see that using cultural references as a criterion for what’s taught in schools could still result in the content of the curriculum being determined by a small, powerful social group; exactly what the traditional subject critics and Daisy herself complain about, though they are referring to different groups.

dead white males

This drawback is illustrated by Willingham’s observation that using the cultural references criterion means “we may still be distressed that much of what writers assume their readers know seems to be touchstones of the culture of dead white males” (p.116). Toby Young turns them into ‘dead white, European males’ (Young p.34, my emphasis).

What advocates of the cultural references model for the curriculum appear to have overlooked is that the dead white males’ domination of cultural references is a direct result of the long period during which European nations colonised the rest of the world. This colonisation (or ‘trade’ depending on your perspective) resulted in Europe becoming wealthy enough to fund many white males (and some females) engaged in the pursuit of knowledge or in creating works of art. What also tends to be forgotten is that the foundation for their knowledge originated with males (and females) who were non-whites and non-Europeans living long before the Renaissance. The dead white guys would have had an even better foundation for their work if people of various ethnic origins hadn’t managed to destroy the library at Alexandria (and a renowned female scholar). The cognitive bias that edits out non-European and non-male contributions to knowledge is also evident in the US and UK versions of the Core Knowledge sequence.

Core Knowledge sequence

Determining the content of the curriculum by the use of cultural references has some coherence, but cultural references don’t necessarily reflect the deep structure of knowledge. Daisy comments favourably on ED Hirsch’s Core Knowledge sequence (p.121). She observes that “The history curriculum is designed to be coherent and cumulative… pupils start in first grade studying the first American peoples, they progress up to the present day, which they reach in the eighth grade. World history runs alongside this, beginning with the Ancient Greeks and progressing to industrialism, the French revolution and Latin American independence movements.”

Hirsch’s Core Knowledge sequence might encompass considerably more factual knowledge than the English national curriculum, but the example Daisy cites clearly leaves some questions unanswered. How did the first American peoples get to America and why did they go there? Who lived in Europe (and other continents) before the Ancient Greeks and why are the Ancient Greeks important? Obviously the further back we go, the less reliable evidence there is, but we know enough about early history and pre-history to be able to develop a reasonably reliable overview of what happened. It’s an overview that clearly demonstrates that the natural environment often had a more significant role than human culture in shaping history. And one that shows that ‘dead white males’ are considerably less important than they appear if the curriculum is derived from cultural references originating in the English-speaking world. Similar caveats apply to the UK equivalent of the Core Knowledge sequence published by Civitas, the one that recommends that children in year 1 be taught about the Glorious Revolution and the significance of Robert Walpole.

It’s worth noting that few of the advocates of curriculum content derived from cultural references are scientists; Willingham is, but his background is in human cognition, not chemistry, biology, geology or geography. I think there’s a real risk of overlooking the role that geographical features, climate, minerals, plants and animals have played in human history, and of developing a curriculum that’s so Anglo-centric and culturally focused it’s not going to equip students to tackle the very concrete problems the world is currently facing. Ironically, Daisy and others are recommending that students acquire a strongly socially-constructed body of knowledge, rather than a body of knowledge determined by what’s out there in the real world.

knowledge itself

Michael Young, quoted by Daisy, aptly sums up the difference:

“Although we cannot deny the sociality of all forms of knowledge, certain forms of knowledge which I find useful to refer to as powerful knowledge and are often equated with ‘knowledge itself’, have properties that are emergent from and not wholly dependent on their social and historical origins.” (p.118)

Most knowledge domains are pretty firmly grounded in the real world, which means that the knowledge itself has a coherent structure reflecting the real world and therefore, as Michael Young points out, it has emergent properties of its own, regardless of how we perceive or construct it.

So what criteria should we use for the curriculum? Generally, academics and specialist teachers have a good grasp of the unifying principles of their field – the ‘knowledge itself’. So their input would be essential. But other groups have an interest in the curriculum; notably the communities who fund and benefit from the education system and those involved on a day-to-day basis – teachers, parents and students. 100% consensus on a criterion is unlikely, but the outcome might not be any worse than the constant tinkering with the curriculum by government over the past three decades.

why subjects?

‘Subjects’ are certainly a convenient way of arranging our knowledge and they do enable a focus on the deep structure of a specific knowledge domain. But the real world, from which we get our knowledge, isn’t divided neatly into subject areas; it’s an interconnected whole. ‘Subjects’ are facets of knowledge about a world that in reality is highly integrated and interconnected. The problem with teaching along traditional subject area lines is that students are very likely to end up with a fragmented view of how the real world functions, and to miss important connections. Any given subject area might be internally coherent, but there’s often no apparent connection between subject areas, so the curriculum as a whole just doesn’t make sense to students. How does history relate to chemistry or RE to geography? It’s difficult to tell while you are being educated along ‘subject’ lines.

Elsewhere I’ve suggested that what might make sense would be a chronological narrative spine for the curriculum. Learning about the Big Bang, the formation of galaxies, elements, minerals, the atmosphere and supercontinents through the origins of life to early human groups, hunter-gatherer migration, agricultural settlement, the development of cities and so on, makes sense of knowledge that would otherwise be fragmented. And it provides a unifying, overarching framework for any knowledge acquired in the future.

Adopting a chronological curriculum would mean an initial focus on sciences and physical geography; the humanities and the arts wouldn’t be relevant until later for obvious reasons. It wouldn’t preclude simultaneously studying languages, mathematics, music or PE of course – I’m not suggesting a chronological curriculum ‘first and only’ – but a chronological framework would make sense of the curriculum as a whole.

It could also bridge the gap between so-called ‘academic’ and ‘vocational’ subjects. In a consumer society, it’s easy to lose sight of the importance of knowledge about food, water, fuel and infrastructure. But someone has to have that knowledge and our survival and quality of life are dependent on how good their knowledge is and how well they apply it. An awareness of how the need for food, water and fuel has driven human history and how technological solutions have been developed to deal with problems might serve to narrow the academic/vocational divide in a way that results in communities having a better collective understanding of how the real world works.

the curriculum in context

I can understand why Daisy is unimpressed by the idea that skills can be learned in the absence of knowledge or that skills are generic and completely transferable across knowledge domains. You can’t get to the skills at the top of Bloom’s taxonomy by bypassing the foundation level – knowledge. Having said that, I think Daisy’s criteria for the curriculum overlook some important points.

First, although I agree that subjects provide a useful framework for teaching concepts, the real world isn’t neatly divided up into subject areas. Teaching as if it is means it’s not only students who are likely to get a fragmented view of the world, but newspaper columnists, authors and policy-makers might too – with potentially disastrous consequences for all of us. It doesn’t follow that students need to be taught skills that allegedly transfer across all subjects, but they do need to know how subject areas fit together.

Second, although we can never eliminate subjectivity from knowledge, we can minimise it. Most knowledge domains reflect the real world accurately enough for us to be able to put them to good, practical use on a day-to-day basis. It doesn’t follow that all knowledge consists of verified facts or that students will grasp the unifying principles of a knowledge domain by learning thousands of facts. Students need to learn about the deep structure of knowledge domains and how the evidence for the facts they encompass has been evaluated.

Lastly, cultural references are an inadequate criterion for determining the breadth of the curriculum. Cultural references form exactly the sort of socially constructed framework that critics of traditional subject areas complain about. Most knowledge domains are firmly grounded in the real world and the knowledge itself, despite its inherent subjectivity, provides a much more valid and reliable criterion for deciding what students should know than what people are writing about. Knowledge about cultural references might enable students to participate in what Michael Oakeshott called the ‘conversation of mankind’, but life doesn’t consist only of a conversation – at whatever level you understand the term. For most people, even in the developed world, life is just as much about survival and quality of life, and in order to optimise our chances of both, we need to know as much as possible about how the world functions, not just what a small group of people are saying about it.

In my next post, hopefully the final one about Seven Myths, I plan to summarise why I think it’s so important to understand what Daisy and those who support her model of educational reform are saying.

References

Peal, R. (2014). Progressively Worse: The Burden of Bad Ideas in British Schools. Civitas.
Willingham, D. (2009). Why Don’t Students Like School? Jossey-Bass.
Young, T. (2014). Prisoners of the Blob. Civitas.

seven myths about education: deep structure

deep structure and understanding

Extracting information from data is crucially important for learning; if we can’t spot patterns that enable us to identify changes and make connections and predictions, no amount of data will enable us to learn anything. Similarly, spotting patterns within and between facts, which enables us to identify changes and connections and make predictions, helps us understand how the world works. Understanding is a concept that crops up a lot in information theory and education. Several of the proposed hierarchies of knowledge have included the concept of understanding – almost invariably at or above the knowledge level of the DIKW pyramid. Understanding is often equated with what’s referred to as the deep structure of knowledge. In this post I want to look at deep structure in two contexts: when it involves a small number of facts, and when it involves a very large number, as in an entire knowledge domain.

When I discussed the DIKW pyramid, I referred to information being extracted from a ‘lower’ level of abstraction to form a ‘higher’ one. Now I’m talking about ‘deep’ structure. What’s the difference, if any? The concept of deep structure comes from the field of linguistics. The idea is that you can say the same thing in different ways; the surface features of what you say might be different, but the deep structure of the statements could still be the same. So the sentences ‘the cat is on the mat’ and ‘the mat is under the cat’ have different surface features but the same deep structure. Similarly, ‘the dog is on the box’ and ‘the box is under the dog’ share the same deep structure. From an information-processing perspective the sentences about the dog and the cat share the same underlying schema.
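The idea that different surface forms can share one deep structure can be sketched in code. This is a toy illustration of my own; the function name and the crude pattern-matching rule are invented for the purpose, not drawn from linguistics or from any of the books discussed:

```python
# Toy sketch: sentences with different surface features reduce to the
# same underlying relation. The pattern-matching is deliberately crude.

def deep_structure(sentence: str) -> tuple:
    """Reduce 'the X is on the Y' / 'the Y is under the X' to ('on', X, Y)."""
    words = sentence.lower().rstrip(".").split()
    if "on" in words:
        above, below = words[1], words[-1]   # "the cat is on the mat"
    elif "under" in words:
        below, above = words[1], words[-1]   # "the mat is under the cat"
    else:
        raise ValueError("unrecognised surface form")
    return ("on", above, below)

# Different surface features, same schema:
print(deep_structure("the cat is on the mat"))    # ('on', 'cat', 'mat')
print(deep_structure("the mat is under the cat")) # ('on', 'cat', 'mat')
print(deep_structure("the dog is on the box"))    # ('on', 'dog', 'box')
```

All four example sentences collapse to just two relations, which is exactly what an underlying schema does: it discards surface variation and keeps the structure.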

In the DIKW knowledge hierarchy, extracted information is at a ‘higher’ level, not a ‘deeper’ one. The two different terminologies are used because the concepts of ‘higher’ level extraction of information and ‘deep’ structure have different origins, but essentially they are the same thing. All you need to remember is that in terms of information-processing ‘high’ and ‘deep’ both refer to the same vertical dimension – which term you use depends on your perspective. Higher-level abstractions, deep structure and schemata refer broadly to the same thing.

deep structure and small numbers of facts

Daniel Willingham devotes an entire chapter of his book Why don’t students like school? to the deep structure of knowledge when addressing students’ difficulty in understanding abstract ideas. Willingham describes mathematical problems presented in verbal form that have different surface features but the same deep structure – in his opening example they involve the calculation of the area of a table top and of a soccer pitch (Willingham, p.87). What he is referring to is clearly the concept of a schema, though he doesn’t call it that.

Willingham recognises that students often struggle with deep structure concepts and recommends providing them with many examples and using analogies they’re familiar with. These strategies would certainly help, but as we’ve seen previously, because the surface features of facts aren’t consistent in terms of sensory data, students’ brains are not going to spot patterns automatically and pre-consciously in the way they do with consistent low-level data and information. To the human brain, a cat on a mat is not the same as a dog on a box. And a couple trying to figure out whether a dining table would be big enough involves very different sensory data to that involved in a groundsman working out how much turf will be needed for a new football pitch.

Willingham’s problems involve several levels of abstraction. Note that the levels of abstraction only provide an overall framework, they’re not set in stone; I’ve had to split the information level into two to illustrate how information needs to be extracted at several successive levels before students can even begin to calculate the area of the table or the football pitch. The levels of abstraction are:

• data – the squiggles that make up letters and the sounds that make up speech
• first-order information – letters and words (chunked)
• second-order information – what the couple is trying to do and what the groundsman is trying to do (not chunked)
• knowledge – the deep structure/schema underlying each problem.

To anyone familiar with calculating area, the problems are simple ones; to anyone unfamiliar with the schema involved, they impose a high cognitive load because the brain is trying to juggle information about couples, tables, groundsmen and football pitches and can’t see the forest for the trees. Most brains would require quite a few examples before they had enough information to be able to spot the two patterns, so it’s not surprising that students who haven’t had much practical experience of buying tables, fitting carpets, painting walls or laying turf take a while to cotton on.
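The shared schema behind the two word problems can be made concrete in code. In this minimal sketch (the dimensions are invented for illustration; they aren’t Willingham’s figures), everything that differs between the problems is surface detail, and what remains is a single operation:

```python
# Minimal sketch: both word problems share one deep structure,
# area = length * width. The dimensions below are illustrative only.

def area(length, width):
    """The schema shared by the table-top and football-pitch problems."""
    return length * width

# Surface form 1: is a 2 m x 1 m dining table big enough?
table_area = area(2, 1)      # 2 square metres

# Surface form 2: how much turf for a 105 m x 68 m pitch?
pitch_area = area(105, 68)   # 7140 square metres
```

Once a student has the schema, the couples, tables, groundsmen and pitches stop competing for space in working memory; each problem is just two numbers and one multiplication.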

visual vs verbal representations

What might help students further is making explicit the deep structure of groups of facts with the help of visual representations. Visual representations have one huge advantage over verbal representations. Verbal representations, by definition, are processed sequentially – you can only say, hear or read one word at a time. Most people can process verbal information at the same rate at which they hear it or read it, so most students will be able to follow what a teacher is saying or what they are reading, even if it takes a while to figure out what the teacher or the book are getting at. However, if you can’t process verbal information quickly enough, can’t recall earlier sentences whilst processing the current one, miss a word, or don’t understand a crucial word or concept, it will be impossible to make sense of the whole thing. In visual representations, you can see all the key units of information at a glance, most of the information can be processed in parallel and the underlying schema is more obvious.

The concept of calculating area lends itself very well to visual representation; it is a geometry problem after all. Getting the students to draw a diagram of each problem would not only focus their attention on the deep structure rather than its surface features, it would also demonstrate clearly that problems with different surface features can have the same underlying deep structure.

It might not be so easy to make visual representations of the deep structure of other groups of facts, but it’s an approach worth trying because it makes explicit the deep structure of the relationship between the facts. In Seven Myths about Education, one of Daisy’s examples of a fact is the date of the battle of Waterloo. Battles are an excellent example of deep structure/schemata in action. There is a large but limited number of ways two opposing forces can position themselves in battle, whoever they are and whenever and wherever they are fighting, which is why ancient battles are studied by modern military strategists. The configurations of forces and what subsequent configurations are available to them are very similar to the configurations of pieces and next possible moves in chess. Of course chess began as a game of military strategy – as a visual representation of the deep structure of battles.

Deep structure/underlying schemata are a key factor in other domains too. Different atoms and different molecules can share the same deep structure in their bonding and reactions and chemists have developed formal notations for representing that visually; the deep structure of anatomy and physiology can be the same for many different animals – biologists rely heavily on diagrams to convey deep structure information. Historical events and the plots of plays can follow similar patterns even if the events occurred or the plays were written thousands of years apart. I don’t know how often history or English teachers use visual representations to illustrate the deep structure of concepts or groups of facts, but it might help students’ understanding.

deep structure of knowledge domains

It’s not just single facts or small groups of facts that have a deep structure or underlying schema. Entire knowledge domains have a deep structure too, although not necessarily in the form of a single schema; many connected schemata might be involved. How they are connected will depend on how experts arrange their knowledge or how much is known about a particular field.

Making students aware of the overall structure of a knowledge domain – especially if that’s via a visual representation so they can see the whole thing at once – could go a long way to improving their understanding of whatever they happen to be studying at any given time. It’s like the difference between Google Street View and Google Maps. Google Street View is invaluable if you’re going somewhere you’ve never been before and you want to see what it looks like. But Google Maps tells you where you are in relation to where you want to be – essential if you want to know how to get there. Having a mental map of an entire knowledge domain shows you how a particular fact or group of facts fits into the big picture, and also tells you how much or how little you know.

Daisy’s model of cognition

Daisy doesn’t go into detail about deep structure or schemata. She touches on these concepts only a few times: once in reference to forming a chronological schema of historical events, then when referring to Joe Kirby’s double-helix metaphor for knowledge and skills, and again when discussing curriculum design.

I don’t know if Daisy emphasises facts but downplays deep structure and schemata to highlight the point that the educational orthodoxy does essentially the opposite, or whether she doesn’t appreciate the importance of deep structure and schemata compared to surface features. I suspect it’s the latter. Daisy doesn’t provide any evidence to support her suggestion that simply memorising facts reduces cognitive load when she says:

“So when we commit facts to long-term memory, they actually become part of our thinking apparatus and have the ability to expand one of the biggest limitations of human cognition” (p.20).

The examples she refers to immediately prior to this assertion are multiplication facts that meet the criteria for chunking – they are simple and highly consistent, and once chunked they would be treated as a single item by working memory. Whether facts like the dates of historical events meet the criteria for chunking, or whether they occupy less space in working memory once memorised, is debatable.

What’s more likely is that if more complex and less consistent facts are committed to memory, they are accessed more quickly and reliably than those that haven’t been memorised. Research evidence suggests that neural connections that are activated frequently become stronger and are accessed faster. Because information is carried in networks of neural connections, the more frequently we access facts or groups of facts, the faster and more reliably we will be able to access them. That’s a good thing. It doesn’t follow that those facts will occupy less space in working memory.

It certainly isn’t the case that simply committing hundreds or thousands of facts to memory will enable students to form a schema – or, if they do form one, that it will be the schema their teacher would like them to form. Teachers might need to be explicit about the schemata that link facts. Since hundreds or thousands of facts tend to be linked by several different schemata – you can arrange the same facts in different ways – being explicit about the different ways they can be linked might be crucial to students’ understanding.
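The point that the same facts can be linked by several different schemata can be made concrete with a small sketch. Here a handful of battle-related facts (the dates are real; the choice of facts and the “theme” labels are my own illustrative assumptions) are organised two ways – chronologically and thematically – without changing the facts themselves:

```python
from collections import defaultdict

# The same small set of facts, each tagged with a date and an
# (illustrative) theme label.
facts = [
    {"event": "Battle of Hastings", "year": 1066, "theme": "invasion"},
    {"event": "Norman conquest of Sicily", "year": 1091, "theme": "invasion"},
    {"event": "Battle of Agincourt", "year": 1415, "theme": "longbow warfare"},
    {"event": "Battle of Waterloo", "year": 1815, "theme": "coalition warfare"},
]

def group_by(facts, key):
    """Organise the same facts under a different schema (grouping key)."""
    groups = defaultdict(list)
    for fact in facts:
        groups[key(fact)].append(fact["event"])
    return dict(groups)

# Schema 1: a chronological arrangement, grouped by century.
by_century = group_by(facts, key=lambda f: f["year"] // 100 + 1)

# Schema 2: a thematic arrangement, grouped by the theme label.
by_theme = group_by(facts, key=lambda f: f["theme"])
```

The facts are identical in both arrangements; only the linking schema changes – which is exactly why a teacher might need to spell out which arrangement they want students to form.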

Essentially, deep structure/schemata play an important role in three ways:

First, students’ pre-existing schemata will affect their understanding of new information – they will interpret it in the light of the way they currently organise their knowledge. Teachers need to know about common misunderstandings as well as what they want students to understand.

Secondly, being able to identify the schema underlying one fact or small group of facts is the starting point for spotting similarities and differences between several groups of facts.

Thirdly, having a bird’s-eye view of the schemata involved in an entire knowledge domain increases students’ chances of understanding where a particular fact fits into the grand scheme of things – and their awareness of what they don’t know.

Having a bird’s-eye view of the curriculum can help too, because it can show how different subject areas are linked. Subject areas and the curriculum are the subjects of the next post.