the new SEN legislation and the Dunkirk spirit

In less than a week an event will take place that’s been awaited with excitement, apprehension, or in some cases with something approaching the Dunkirk spirit. On 1 September part 3 of the Children and Families Act 2014 comes into force. It’s been described as the biggest change to special educational needs in 30 years.

It won’t work. If I were a betting sort of person, I’d put money on the next government having to review the system again in a couple of years. How can I be so sure? Or so pessimistic? It’s because the ‘problem’ with special educational needs and disabilities (SEND) isn’t the special educational needs and disabilities, it’s the education system. And not just the SEN bit of it – it’s the education system as a whole. To find out why we need to go back in time…

we have a history

Education became compulsory in England in 1870. The new education system was essentially a one-size-fits-all affair focusing on reading, writing and arithmetic. Or more accurately one-size-fits-most; what took the government by surprise was the number of children turning up to school who didn’t fit the education system. Government essentially saw these ‘handicapped’ children as a problem, and its solution was to provide special schools for them. Although the solution made perfect sense, it wasn’t entirely successful. Handicapped children often ended up socially marginalised and sometimes institutionalised, and there were still children in mainstream schools who were struggling.

By the 1970s, the education system had changed considerably. There was more emphasis on an individualised education and local education authorities (LEAs), schools and teachers had a good deal of flexibility in the education they provided. The time was right for Margaret Thatcher as Secretary of State for Education to commission a review of the education of handicapped children, headed by Mary Warnock. The Warnock Committee reported in 1978. It defined special education as ‘provision not generally available in normal schools’ (p.45). In other words it saw the ‘problem’ of special education not as the children but as the educational provision available in mainstream schools. The committee’s recommendations fed into the 1981 Education Act that:

• assumed children would attend mainstream schools where possible
• did away with the old categories of handicap
• introduced the concept of ‘special educational needs’
• gave LEAs a duty to assess children’s special educational needs and to fund the additional provision required for their education.

The Act had the potential to transform the lives of children marginalised by the education system, but it clearly hasn’t done so – not in a good way, anyway. In the last 20 years we’ve had three SEN Codes of Practice, numerous inquiries, reports and tinkerings with SEN legislation and regulations. One select committee described the system as not fit for purpose. So…

what went wrong?

The Warnock recommendations were made in the context of a highly flexible education system. A contemporary account describes a fruitful collaboration between a school for children with visual impairment (VI) and a mainstream junior school, pioneered by a keen LEA officer (Hegarty & Pocklington, 1981). Children with VI were gradually integrated into the mainstream school and teachers trained each other. Everybody won.

In order to undertake such a project, LEAs, schools and teachers needed a fair amount of control over their time and budgets. Projects like this might have eventually been rolled out nationwide, except that within a decade the introduction of a compulsory national curriculum and standardised testing had begun to steer the education system back towards a one-size-fits-all approach. Within a few short years central government had essentially wrested the responsibility for education and its funding from local authorities and education had become a serious ‘political football’. Successive governments have focused on raising educational attainment as an indicator of their own effectiveness as a government and ironically that’s what’s resulted in SEN becoming a problem again in recent years.

Essentially, if you want an efficient one-size-fits-all education system and world-beating exam results it makes perfect sense to remove from the equation children who don’t fit into the system and are unlikely to do well in exams however hard everyone tries. That’s what the government did in the 1890s. If you want an education system that provides all children with an education suitable to their individual needs, you can forget about one-size-fits-all and world-beating exam results; you’ll need a lot of flexibility. That’s what the education system had developed into by the time of the Warnock committee. If you want both you’re likely to end up where we are now.

“Relativity” by MC Escher

The Warnock committee defined special educational needs in terms of the educational provision ‘generally available in normal schools’. By definition, the better the provision in normal schools, the smaller the number of children who would be deemed to have special educational needs. The committee couldn’t have emphasised the need for SEN training for all teachers more strongly if it had tried, but perversely, the education system appears to have taken a step in the opposite direction.

teacher training

The Warnock committee recommended the inclusion of SEN training in the initial teacher training (ITT) for all teachers. Following the 1981 Education Act, the assumption that many children with SEN would be taught in mainstream schools and that all teachers would be trained in SEN led to the cessation of many special needs teacher training courses. They obviously haven’t been replaced with comparable training in ITT. This, coupled with the retirement of special education teachers and a reduction of the number of children in special schools, has meant that the education system as a whole has suffered a considerable loss of SEN expertise.

Reviews of SEN provision have repeatedly reported concerns about there being insufficient emphasis on SEN in ITT. But it’s only since 2009 that Special Educational Needs Co-ordinators (SENCOs) have been required to be trained teachers, and only new SENCOs have been required to have SEN training. The current government has allocated additional funding for SEN qualifications (para 53) but only up until last year. This isn’t going to touch the problem. DfE figures for 2011 show that only around 7% of the total education workforce has SEN experience and/or training, and most of those people are concentrated in special schools. And special schools report ongoing difficulties recruiting suitably trained staff. This, despite the fact that the Warnock report 35 years ago pointed out that, based on historical data, around 20% of the school population could be expected to need additional educational provision at some time during their school career. The report made it clear that all teachers are teachers of children with special educational needs.

Teachers’ expertise, or lack of it, will have a big impact on the attainment of children with SEN, but that hasn’t prevented government from developing unrealistic targets for all children under the guise of raising aspirations.

expectations of attainment

I mentioned earlier that over the last three decades education has become a ‘political football’. Concern is often expressed over the proportion of young people who leave school functionally illiterate or innumerate or without qualifications, despite evidence that this proportion has remained pretty constant for many years. In the case of literacy, it’s remained stubbornly at around 17%, by bizarre coincidence not far from the equally stubborn 20% figure for children with SEN.

But the possibility that some of those young people might be in the position they’re in because of lack of expertise in the education system – or even because they are never going to meet government’s arbitrary attainment targets and that that might actually be OK – doesn’t seem to have occurred to successive governments. In her keynote address to the inaugural national conference of the Autism Education Trust in 2009, the then Minister for Schools and Learning, Sarah McCarthy-Fry, saw no reason why young people with autism shouldn’t achieve 5 A-C grade GCSEs. Some of course might do just that. For others such an aspiration bears no relation to their ability or aptitude, part of the definition of the ‘suitable education’ each child is required, by law, to receive.

Currently, funding for post-16 education requires young people to have or be studying for A-C grade GCSEs in both English and Maths. Post-16 providers are rolling their eyes. Although I can understand the reasoning behind this requirement, it’s an arbitrary target bearing no relation to the legal definition of a suitable education.

it’s the system

Currently, local authorities, schools and teachers are under pressure from the SEN system to make personalised, specialised educational provision for a small group of children, whilst at the same time the education system as a whole is pushing them in the opposite direction, towards a one-size-fits-all approach. This is a daft way to design a system and no matter how much effort individual professionals put in, it can’t work. But it isn’t the SEN system itself that needs changing, it’s teacher expertise and government expectations.

Over recent decades, successive governments have approached education legislation (and legislation in general, for that matter) not by careful consideration of the historical data and ensuring that the whole system is designed to produce the desired outcomes, but essentially by edict. A bit of the education system is wrong, so government has decreed that it should be put right, regardless of what’s causing the problem or the impact of changing part of the system without considering the likely consequences elsewhere.

In systems theory terms, this is known as sub-system optimization at the expense of systems optimization. That mouthful basically means that because all the parts of a system are connected, if you tweak one bit of it another bit will change, but not necessarily in a good way. Policy-makers refer to the not-in-a-good-way changes as unintended and unwanted outcomes.
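To make that concrete, here’s a deliberately artificial sketch (the two-stage pipeline, the capacities and the ‘backlog cost’ are all invented for illustration – they’re not taken from systems theory texts or from education data). Pushing one sub-system’s score as high as it will go makes the whole system perform worse:

```python
# Toy model of sub-system optimization at the expense of the system.
# Stage A produces items, stage B processes them; the numbers are invented.

def system_output(stage_a_output, stage_b_capacity=50, backlog_cost=0.5):
    """Whole-system outcome: stage B can only process up to its capacity;
    anything extra from stage A piles up as a backlog that costs the system."""
    processed = min(stage_a_output, stage_b_capacity)
    backlog = max(0, stage_a_output - stage_b_capacity)
    return processed - backlog_cost * backlog

# 'Optimizing' the sub-system: push stage A's output as high as possible.
print(system_output(100))  # 25.0 – stage A looks great, the system suffers
# Optimizing the whole system: match stage A's output to stage B's capacity.
print(system_output(50))   # 50.0 – lower local score, better overall outcome
```

Tweaking stage A in isolation produces exactly the ‘unintended and unwanted outcomes’ the policy-makers talk about.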

The new SEN legislation is a classic case of an attempt at sub-system optimization that’s doomed to fail. It requires the education, health and social care sectors to do some joined up thinking and extend the support offered to children with SEND for a further decade – until they are 25 – at a time when all three sectors are undergoing massive organisational change and simultaneously having their budgets cut. It introduces personal budgets at a time when all three sectors are changing their commissioning arrangements. It fails to address the lack of expertise in all three systems. (Recent reports have pointed out that teachers aren’t trained in SEN, GPs don’t have paediatric training and children’s social workers don’t know about child development.) It fails to address the fundamental systems design problems inherent in all three sectors; a one-size-fits-all education system, and health and social care sectors that focus on cure rather than prevention.

This approach to systems design isn’t just daft, it’s incompetent and reprehensibly irresponsible. People who have made hopeful noises about the new SEN system have tended to focus on the good intentions behind the legislation. I have no doubt about the good intentions or the integrity of the ministers responsible – Sarah Teather and Edward Timpson – but they have been swimming against a strong tide. Getting through the next few years will be tough. Fortunately, in the world of SEN there’s a lot of Dunkirk spirit – we’re going to need it.

References
Hegarty, S & Pocklington, K (1981). A junior school resource area for the visually impaired. In Swann, W (ed.) The Practice of Special Education. Open University Press/Basil Blackwell.
Warnock, H M (1978). Report of the Committee of Enquiry into the Education of Handicapped Children and Young People. HMSO.

truth and knowledge

A couple of days ago I became embroiled in a long-running Twitter debate about the nature of truth and knowledge, during which at least one person fell asleep. @EdSacredProfane has asked me where I ‘sit’ on truth. So, for the record, here’s what I think about truth and knowledge.

1. I think it’s safe to assume that reality and truth are out there. Even if they’re not out there and we’re all experiencing a collective hallucination we might as well assume that reality is real and that truth is true because if we don’t, our experience – whether real or imagined – is likely to get pretty unpleasant.

2. I’m comfortable with the definition of knowledge as justified true belief. But that’s a definition of an abstract concept. The extent to which people can actually justify or demonstrate the truth of their beliefs (collectively or individually) varies considerably.

3. The reason for this is the way perception works. All incoming sensory information is interpreted by our brains, and brains aren’t entirely reliable when it comes to interpreting sensory information. So we’ve devised methods of cross-checking what our senses tell us to make sure we haven’t got it disastrously wrong. One approach is known as the scientific method.

4. Science works on the basis of probability. We can never say for sure that A or B exists or that C definitely causes D. But for the purposes of getting on with our lives if there’s enough evidence suggesting that A or B exists and that C causes D, we assume those things to be true and justified to varying extents.

5. Even though our perception is a bit flaky and we can’t be 100% sure of anything, it doesn’t follow that reality is flaky or not 100% real. Just that our knowledge about it isn’t 100% reliable. The more evidence we’ve gathered, the more consistent and predictable reality looks. Unfortunately it’s also complicated, which, coupled with our flaky and uncertain perceptions, makes life challenging.
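Points 4 and 5 can be illustrated with a toy calculation (every number here – the prior, the likelihoods, the ten observations – is invented purely for illustration). Accumulating evidence pushes a belief towards practical certainty without ever quite reaching it:

```python
# A toy Bayesian update showing how evidence makes 'C causes D' a safe
# working assumption without ever reaching certainty.
prior = 0.5            # initial belief that C causes D
p_if_true = 0.8        # chance of observing D after C if the link is real
p_if_false = 0.3       # chance of observing D after C by coincidence

belief = prior
for _ in range(10):    # ten observations of D following C
    numerator = p_if_true * belief
    belief = numerator / (numerator + p_if_false * (1 - belief))

print(round(belief, 4))  # very close to 1, but never exactly 1
```

Justified to a varying extent, never proved – which is all the scientific method claims to offer.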

play: schools are for children, not children for schools

Some years ago, the TES carried an article about a primary school that taught its pupils how to knit. I learned to knit at school. My mum dutifully used my first attempt – a cotton dishcloth – for months despite its resemblance to a fishing net with an annoying tendency to ensnare kitchen utensils. The reason I was taught knitting was primarily in order to be able to knit. But the thrust of the TES article wasn’t about the usefulness of knitting. It was that it improved the children’s maths. It seemed that at some point since the introduction of mass education in England the relationship between schools and the real world had changed. The point of schools was no longer to provide children with knowledge (like maths) that would help them tackle real-world problems (like knitting), but vice versa – the point of useful real-world skills was now to support performance in school.

school readiness

I was reminded of the knitting article earlier this year, when Sir Michael Wilshaw, chief inspector of Ofsted, suggested to inspectors that not all early years settings are preparing children adequately for school. In a comment to the BBC he added:

“More than two-thirds of our poorest children – and in some of our poorest communities that goes up to eight children out of 10 – go to school unprepared,” he said. “That means they can’t hold a pen, they have poor language and communication skills, they don’t recognise simple numbers, they can’t use the toilet independently and so on.”

His comments prompted an open letter to the Telegraph complaining that Sir Michael’s instruction to inspectors to assess nurseries mainly in terms of preparation for school “betrays an abject (and even wilful) misunderstanding of the nature of early childhood experience.” One of the signatories was Sue Cowley, who recently blogged about the importance of play. Her post, like Sir Michael’s original comments, generated a good deal of discussion.

Old Andrew responded promptly. He commented: “This leads me to my one opinion on early years teaching methods: OFSTED are right to judge them by outcomes rather than acting as the ‘play police’ and seeking to enforce play-based learning”.

The two bloggers have homed in on different issues. Sue Cowley is concerned about the shift in focus from childhood experience to ‘school-readiness’; Old Andrew is relieved that Ofsted inspectors are no longer expected to ‘enforce play-based learning’. The online debate has also shifted from the original question implicit in Sir Michael’s comments and in the response in the letter to the Telegraph – what is the purpose of nurseries and pre-schools? – to a question posed by Old Andrew: “Is there any actual empirical evidence on the importance of play? All the “evidence” seems to be theoretical.”

empirical evidence

Responses from early years teachers to questions about evidence for the benefits of play are often along the lines of “I have the evidence of my own eyes”, which hasn’t satisfied the sceptics. Whether you think it’s a satisfactory answer or not depends on the importance you attach to direct observation.

The problem with direct observation is that it’s dependent on perception, which is notoriously unreliable. David Didau has blogged about some perceptual flaws here. He also mentions some of the cognitive errors that occur when people draw conclusions from observations. The scientific method has been developed largely to counteract the flaws in our perception and reasoning. But it doesn’t follow that direct observation is completely unreliable. Indeed, direct observation is the cornerstone of empirical evidence.

Here’s an example. Let’s say I’ve noticed that every time I use a particular brand of soap, my hands sting and turn bright red. It wouldn’t be unreasonable to conclude that I have an allergic response to an ingredient in the soap – but I wouldn’t know that for sure. There could be many causes for my red, stinging hands; the soap might be purely coincidental. The conclusions about causes I could draw solely from my direct observations would be pretty speculative.

But the direct observations themselves – identifying the brand of soap and what happened to my hands – would be a lot more reliable. It’s possible that I could have got the brand of soap wrong and could have imagined what happened to my hands, but those errors are much less likely than the errors involved in drawing conclusions about causality. I could easily increase the reliability of my direct observations by involving an independent observer. If a hundred independent observers all agreed that a particular brand of soap was associated with my and/or other people’s hands turning bright red, those observations wouldn’t be 100% watertight but they would be considered to be fairly reliable and might prompt the soap manufacturer to investigate further. Increasing the reliability of my conclusion about the causal relationship – that the soap caused an allergic reaction – would be more challenging.
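For what it’s worth, the arithmetic behind the independent-observer point looks like this (the 5% error rate is an assumption, chosen purely for illustration):

```python
# Toy arithmetic behind 'more independent observers, more reliability'.
# Assume each observer independently gets a simple direct observation
# wrong 5% of the time (an invented figure).
p_error = 0.05

def p_all_mistaken(n_observers):
    """Probability that every one of n independent observers errs at once
    (all agreeing on the *same* wrong observation is rarer still)."""
    return p_error ** n_observers

print(p_all_mistaken(1))    # a lone observer: 5% chance of error
print(p_all_mistaken(3))    # three observers: about 0.000125
print(p_all_mistaken(100))  # a hundred observers: vanishingly small
```

Which is why a hundred observers agreeing about red, stinging hands counts as fairly reliable evidence, even though none of them is individually infallible.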

is play another Brain Gym?

What intrigued me about the early years teachers’ responses was their reliance on direct observation as empirical evidence for the importance of play. Most professionals, if called upon to do so, can come up with some peer-reviewed research that supports the methods they use, even if it means delving into dusty textbooks they haven’t used for years. I could see Old Andrew’s point; if play is so important, why isn’t there a vast research literature on it? There are three characteristics of play that would explain both the apparent paucity of research and the teachers’ emphasis on direct observation.

First, play is a characteristic typical of most young mammals, and young humans play a lot. At one level, asking what empirical evidence there is for its importance is a pointless question – a bit like asking for evidence for the importance of learning or growth. Play, like learning and growth, is simply a facet of development.

Second, play, like most other mammalian characteristics, is readily observable – although you might need to do a bit of dissection to spot some of the anatomical ones. Traditionally, play has been seen as involving three types of skill; locomotor, object control and social interaction. But you don’t need a formal peer-reviewed study to tell you that. A few hours’ observation of a group of young children would be sufficient. A few hours’ observation would also reveal all the features of play Sue Cowley lists in her blog post.

Third, also readily apparent through direct observation is what children learn during play; the child who chooses to play with the shape-sorter every day until they can get all the shapes in the right holes first time, the one who can’t speak a word of English but is fluent after a few months despite little direct tuition, the one who initially won’t speak to anyone but blossoms into a mini-socialite through play. Early years teachers watch children learning through play every day, so it’s not surprising they don’t see the need to rely on research to tell them about its importance.

The features of play and what children can learn from it are not contentious; the observations of thousands of parents, teachers, psychologists, psychiatrists and anthropologists are largely in agreement over what play looks like and what children learn from it. This would explain why there appears to be little research on the importance of play; it’s self-evidently important to children themselves, as an integral part of human development and as a way of learning. In addition, much of the early research into play was carried out in the inter-war years. Try finding that online. Or even via your local library. Old Andrew’s reluctance to accept early years teachers’ direct observations as evidence might stem from his admission that he doesn’t “really have much insight into what small children are like.”

play-based education

The context of Old Andrew’s original question was Michael Wilshaw’s comments on school readiness and the response in the Telegraph letter. A recent guest post on his blog is critical of play-based learning, suggesting it causes problems for teachers higher up the food chain. Although Old Andrew says he’d like to see evidence for the importance of play in any context, what we’re actually talking about here is the importance of play in the education system.

Direct observation can tell us what play looks like and what children learn from it. What it can’t tell us about is the impact of play on development, GCSE results or adult life. For that, we’d need a more complex research design than just watching and/or recording before-and-after abilities. Some research has been carried out on the impact of play. Although there doesn’t appear to be a correlation between how much young mammals play and their abilities as adults, not playing does appear to impair responsiveness and effective social interaction. And we do know some things about the outcomes of the more complex play seen in children (e.g. Smith & Pellegrini, 2013).

Smith & Pellegrini agree that a prevailing “play ethos” has tended to exaggerate the evidence for the essential role of play (p.4) and that appears to be Old Andrew’s chief objection to the play advocates’ claims. Sue Cowley’s list describes play as ‘vital’, ‘crucial’ and ‘essential’. I can see how her choice of wording might give the impression to anyone looking for empirical evidence in the research literature that research findings relating to the importance of play in development, learning or education were more robust than they are. I can also see why someone observing the direct outcomes of play on a daily basis would see play as ‘vital’, ‘crucial’ and ‘essential’.

I agree with Old Andrew that Ofsted shouldn’t be enforcing play-based learning, or for that matter, telling teachers how to teach. There’s no point in training professionals and then telling them how to do their job. I also agree that if grand claims are being made for play-based learning or if it’s causing problems later on, we need some robust research or some expectation management, or both.

Having said that, it’s worth noting that for the best part of a century nursery and infant teachers have sung the praises of play-based learning. What’s easily overlooked by those who teach older children is the challenge facing early years teachers. They are expected to turn out ‘school-ready’ children who, in some cases and for whatever reason, have started nurseries, pre-schools and reception classes with little speech, who don’t understand a word of English, who can’t remember instructions, who have problems with dexterity, mobility and bowel and bladder control, or who find the school environment bewildering and frightening. Sometimes, the only way early years teachers can get children to engage or learn anything at all is through play. Early years teachers, as Sue Cowley points out, are usually advocates of highly structured, teacher-directed play. What’s more, they can see children learning from play in real time in front of them. The key question is not “what’s the empirical evidence for the importance of play?” but rather “if children play by default, are highly motivated to play and learn quickly from it, where’s the evidence for a better alternative?”

I’m all in favour of evidence-based practice, but I’m concerned that direct observation might be being prematurely ruled out. I’m also concerned that the debate appears to have shifted from the original one about preparation for school vs the erosion of childhood. This brings us back to the priorities of the school that taught knitting in order to improve children’s maths. Children obviously need to learn for their own benefit and for that of the community as a whole, but we need to remember that in a democracy school is for children, not children for school.

bibliography

Pellegrini, A & Smith PK (2005). The Nature of Play: Great Apes and Humans. Guilford Press.
Smith, PK & Pellegrini, A (2013). Learning through play. In Tremblay RE, Boivin M, Peters (eds). Encyclopedia of Early Childhood Development [online]. Montreal, Quebec: Centre of Excellence for Early Childhood Development and Strategic Knowledge Cluster on Early Child Development, 1-6. Available at http://www.child-encyclopedia.com/documents/Smith-PellegriniANGxp2.pdf Accessed 11.8.2014.

seven myths about education – what’s missing?

Old Andrew has raised a number of objections to my critique of Seven Myths about Education. In his most recent comment on my previous (and I had hoped, last) post about it, he says I should be able to easily identify evidence that shows ‘what in the cognitive psychology Daisy references won’t scale up’.

One response would be to provide a list of references showing step-by-step the problems that AI researchers ran into. That would take me hours, if not days, because I would have to trawl through references I haven’t looked at for over 20 years. Most of them are not online anyway because of their age, which means Old Andrew would be unlikely to be able to access them.

What is more readily accessible is information about concepts that have emerged from those problems, for example; personal construct theory, schema theory, heuristics and biases, bounded rationality and indexing, connectionist models of cognition and neuroconstructivism. Unfortunately, none of the researchers says “incidentally, this means that students are not necessarily going to develop the right schemata when they commit facts to long-term memory” or “the implications for a curriculum derived from cultural references are obvious”, because they are researching cognition, not education and probably wouldn’t have anticipated anyone suggesting either of these ideas. Whether Old Andrew sees the relevance of these emergent issues or not is secondary, in my view, to how Daisy handles evidence in her book.

concepts and evidence

In the last section of her chapter on Myth 1, Daisy takes us through the concepts of the limited capacity of working memory and chunking. These are well-established, well-tested hypotheses and she cites evidence to support them.

concepts but no evidence

Daisy also appears to introduce two hypotheses of her own. The first is that “we can summon up the information from long-term memory to working memory without imposing a cognitive load” (p.19). The second is that the characteristics of chunking can be extrapolated to all facts, regardless of how complex or inconsistent they might be; “So, when we commit facts to long-term memory they actually become part of our thinking apparatus and have the ability to expand one of the biggest limitations of human cognition” (p.20). The evidence she cites to support this extrapolation is Anderson’s paper – the one about simple, consistent information. I couldn’t find any other evidence cited to support either idea.

evidence but no concepts

Daisy does cite Frantz’s paper about Simon’s work on intuition. Two important concepts of Simon’s that Daisy doesn’t mention but Frantz does, are bounded rationality and the idea of indexing.

Bounded rationality refers to the fact that people can only make sense of the information they have. This supports Daisy’s premise that knowledge is necessary for understanding. But it also supports Freire’s complaint about which facts were being presented to Brazilian schoolchildren. Bounded rationality is also relevant to the idea of the breadth of a curriculum being determined by the frequency of cultural references. Simon used it to challenge economic and political theory.

Simon also pointed out that not only do experts have access to more information than novices do, they can access it more quickly because of their mental cross-indexing, ie the schemata that link relevant information. Rapid speed of access reduces cognitive load, but it doesn’t eliminate it. Chess experts can determine the best next move within seconds, but for most other experts, their knowledge is considerably more complex and less well-defined. A surgeon or an engineer is likely to take days rather than seconds to decide on the best procedure or design to resolve a difficult problem. That implies that quite a heavy cognitive load is involved.

Daisy does mention schemata but doesn’t go into detail about how they are formed or how they influence thinking and understanding. She refers to deep learning in passing but doesn’t tackle the issue Willingham raises about students’ problems with deep structure.

burden of proof

Old Andrew appears to be suggesting that I should assume that Daisy’s assertions are valid unless I can produce evidence to refute them. The burden of proof for a theory usually rests with the person making the claims, for obvious reasons. Daisy cites evidence to support some of her claims, but not all of them. She doesn’t evaluate that evidence by considering its reliability or validity or by taking into account contradictory evidence.

If Daisy had written a book about her musings on cognitive psychology and education, or about how findings from cognitive psychology had helped her teaching, I wouldn’t be writing this. But that’s not what she’s done. She’s used theory from one knowledge domain to challenge theory in another. That can be a very fruitful strategy; the application of game theory and ecological systems theory has transformed several fields. But it’s not helpful simply to take a few concepts out of context from one domain and apply them out of context to another domain.

The reason is that theoretical concepts aren’t free-standing; they are embedded in a conceptual framework. If you’re challenging theory with theory, you need to take a long hard look at both knowledge domains first to get an idea of where particular concepts fit in. You can’t just say “I’m going to apply the concepts of chunking and the limited capacity of working memory to education, but I shan’t bother with schema theory or bounded rationality or heuristics and biases because I don’t think they’re relevant.” Well, you can say that, but it’s not a helpful way to approach problems with learning, because all of these concepts are integral to human cognition. Students don’t leave some of them in the cloakroom when they come into class.

On top of that, the model for pedagogy and the curriculum that Daisy supports is currently influencing international educational policy. If the DfE considers the way evidence has been presented by Hirsch, Willingham and presumably Daisy to be ‘rigorous’, as Michael Gove clearly did, then we’re in trouble.

For Old Andrew’s benefit, I’ve listed some references. Most of them are about things that Daisy doesn’t mention. That’s the point.

references

Axelrod, R (1973). Schema Theory: An Information Processing Model of Perception and Cognition, The American Political Science Review, 67, 1248-1266.
Elman, J et al (1998). Rethinking Innateness: A Connectionist Perspective on Development. MIT Press.
Frantz, R (2003). Herbert Simon. Artificial intelligence as a framework for understanding intuition, Journal of Economic Psychology, 24, 265–277.
Kahneman, D, Slovic, P & Tversky, A (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press.
Karmiloff-Smith, A (2009). Nativism Versus Neuroconstructivism: Rethinking the Study of Developmental Disorders. Developmental Psychology, 45, 56–63.
Kelly, GA (1955). The Psychology of Personal Constructs. New York: Norton.

seven myths about education: finally…

When I first heard about Daisy Christodoulou’s myth-busting book in which she adopts an evidence-based approach to education theory, I assumed that she and I would see things pretty much the same way. It was only when I read reviews (including Daisy’s own summary) that I realised we’d come to rather different conclusions from what looked like the same starting point in cognitive psychology. I’ve been asked several times why, if I have reservations about the current educational orthodoxy, think knowledge is important, don’t have a problem with teachers explaining things and support the use of systematic synthetic phonics, I’m critical of those calling for educational reform, rather than those responsible for a system that needs reforming. The reason involves the deep structure of the models, rather than their surface features.

concepts from cognitive psychology

Central to Daisy’s argument is the concept of the limited capacity of working memory. It’s certainly a core concept in cognitive psychology. It explains not only why we can think about only a few things at once, but also why we oversimplify and misunderstand, are irrational, subject to errors and biases and use quick-and-dirty rules of thumb in our thinking. And it explains why an emphasis on understanding at the expense of factual information is likely to result in students not knowing much and, ironically, not understanding much either.

But what students are supposed to learn is only one of the streams of information that working memory deals with; it simultaneously processes information about students’ internal and external environment. And the limited capacity of working memory is only one of many things that impact on learning; a complex array of environmental factors is also involved. So although you can conceptually isolate the material students are supposed to learn and the limited capacity of working memory, in the classroom neither of them can be isolated from all the other factors involved. And you have to take those other factors into account in order to build a coherent, workable theory of learning.

But Daisy doesn’t introduce only the concept of working memory. She also talks about chunking, schemata and expertise. Daisy implies (although she doesn’t say so explicitly) that schemata are to facts what chunking is to low-level data. That is, just as students automatically chunk low-level data they encounter repeatedly, so they will automatically form schemata for facts they memorise, and the schemata will reduce cognitive load in the same way that chunking does (p.20). That’s a possibility, because the brain appears to use the same underlying mechanism to represent associations between all types of information – but it’s unlikely. We know that schemata vary considerably between individuals, whereas people chunk information in very similar ways. That’s not surprising if the information being chunked is simple and highly consistent, whereas schemata often involve complex, inconsistent information.

Experimental work involving priming suggests that schemata increase the speed and reliability of access to associated ideas and that would reduce cognitive load, but students would need to have the schemata that experts use explained to them in order to avoid forming schemata of their own that were insufficient or misleading. Daisy doesn’t go into detail about deep structure or schemata, which I think is an oversight, because the schemata students use to organise facts are crucial to their understanding of how the facts relate to each other.

migrating models

Daisy and teachers taking a similar perspective frequently refer approvingly to ‘traditional’ approaches to education. It’s been difficult to figure out exactly what they mean. Daisy focuses on direct instruction and memorising facts, Old Andrew’s definition is a bit broader and Robert Peal’s appears to include cultural artefacts like smart uniforms and school songs. What they appear to have in common is a concept of education derived from the behaviourist model of learning that dominated psychology in the inter-war years. In education it focused on what was being learned; there was little consideration of the broader context involving the purpose of education, power structures, socioeconomic factors, the causes of learning difficulties etc.

Daisy and other would-be reformers appear to be trying to update the behaviourist model of education with concepts that, ironically, emerged from cognitive psychology not long after it switched focus from the behaviourist model of learning to a computational one; the point at which the field was first described as ‘cognitive’. The concepts the educational reformers focus on fit the behaviourist model well because they are strongly mechanistic and largely context-free. The examples that crop up frequently in the psychology research Daisy cites usually involve maths, physics and chess problems. These types of problems were chosen deliberately by artificial intelligence researchers because they were relatively simple and clearly bounded; the idea was that once the basic mechanism of learning had been figured out, the principles could then be extended to more complex, less well-defined problems.

Researchers later learned a good deal about complex, less well-defined problems, but Daisy doesn’t refer to that research. Nor do any of the other proponents of educational reform. What more recent research has shown is that complex, less well-defined knowledge is organised by the brain in a different way to simple, consistent information. So in cognitive psychology the computational model of cognition has been complemented by a constructivist one, but it’s a different constructivist model to the social constructivism that underpins current education theory. The computational model never quite made it across to education, but early constructivist ideas did – in the form of Piaget’s work. At that point, education theory appears to have grown legs and wandered off in a different direction to cognitive psychology. I agree with Daisy that education theorists need to pay attention to findings from cognitive psychology, but they need to pay attention to what’s been discovered in the last half century, not just to the computational research that superseded behaviourism.

why criticise the reformers?

So why am I critical of the reformers, but not of the educational orthodoxy? When my children started school, they, and I, were sometimes perplexed by the approaches to learning they encountered. Conversations with teachers painted a picture of educational theory that consisted of a hotch-potch of valid concepts, recent tradition, consequences of policy decisions and ideas that appeared to have come from nowhere like Brain Gym and Learning Styles. The only unifying feature I could find was a social constructivist approach and even on that opinions seemed to vary. It was difficult to tell what the educational orthodoxy was, or even if there was one at all. It’s difficult to critique a model that might not be a model. So I perked up when I heard about teachers challenging the orthodoxy using the findings from scientific research and calling for an evidence-based approach to education.

My optimism was short-lived. Although the teachers talked about evidence from cognitive psychology and randomised controlled trials, the model of learning they were proposing appeared as patchy, incomplete and incoherent as the model they were criticising – it was just different. So here are my main reservations about the educational reformers’ ideas:

1. If mainstream education theorists aren’t aware of working memory, chunking, schemata and expertise, that suggests there’s a bigger problem than just their ignorance of these particular concepts. It suggests that they might not be paying enough attention to developments in some or all of the knowledge domains their own theory relies on. Knowing about working memory, chunking, schemata and expertise isn’t going to resolve that problem.

2. If teachers don’t know about working memory, chunking, schemata and expertise, that suggests there’s a bigger problem than just their ignorance of these particular concepts. It suggests that teacher training isn’t providing teachers with the knowledge they need. To some extent this would be an outcome of weaknesses in educational theory, but I get the impression that trainee teachers aren’t expected or encouraged to challenge what they’re taught. Several teachers who’ve recently discovered cognitive psychology have appeared rather miffed that they hadn’t been told about it. They were all Teach First graduates; I don’t know if that’s significant.

3. A handful of concepts from cognitive psychology doesn’t constitute a robust enough foundation for developing a pedagogical approach or designing a curriculum. Daisy essentially reiterates what Daniel Willingham has to say about the breadth and depth of the curriculum in Why Don’t Students Like School?. He’s a cognitive psychologist and well-placed to show how models of cognition could inform education theory. But his book isn’t about the deep structure of theory, it’s about applying some principles from cognitive psychology in the classroom in response to specific questions from teachers. He explores ideas about pedagogy and the curriculum, but that’s as far as it goes. Trying to develop a model of pedagogy and design a curriculum based on a handful of principles presented in a format like this is like trying to devise courses of treatment and design a health service based on the information gleaned from a GP’s problem page in a popular magazine. But I might be being too charitable; Willingham is a trustee of the Core Knowledge Foundation, after all.

4. Limited knowledge. Rightly, the reforming teachers expect students to acquire extensive factual knowledge and emphasise the differences between experts and novices. But Daisy’s knowledge of cognitive psychology appears to be limited to a handful of principles discovered over thirty years ago. She, Robert Peal and Toby Young all quote Daniel Willingham on research in cognitive psychology during the last thirty years, but none of them, Willingham included, tell us what it is. If they did, it would show that the principles they refer to don’t scale up when it comes to complex knowledge. Nor do most of the teachers writing about educational reform appear to have much teaching experience. That doesn’t mean they are wrong, but it does call into question the extent of their expertise relating to education.

Some of those supporting Daisy’s view have told me they are aware that they don’t know much about cognitive psychology, but have argued that they have to start somewhere and it’s important that teachers are made aware of concepts like the limits of working memory. That’s fine if that’s all they are doing, but it’s not. Redesigning pedagogy and the curriculum on the basis of a handful of facts makes sense if you think that what’s important is facts and that the brain will automatically organise those facts into a coherent schema. The problem is of course that that rarely happens in the absence of an overview of all the relevant facts and how they fit together. Cognitive psychology, like all other knowledge domains, has incomplete knowledge but it’s not incomplete in the same way as the reforming teachers’ knowledge. This is classic Sorcerer’s Apprentice territory; a little knowledge, misapplied, can do a lot of damage.

5. Evaluating evidence. Then there’s the way evidence is handled. Evidence-based knowledge domains have different ways of evaluating evidence, but they all evaluate it. That means weighing up the pros and cons, comparing evidence for and against competing hypotheses and so on. Evaluating evidence does not mean presenting only the evidence that supports whatever view you want to get across. That might be a way of making your case more persuasive, but is of no use to anyone who wants to know about the reliability of your hypothesis or your evidence. There might be a lot of evidence telling you your hypothesis is right – but a lot more telling you it’s wrong. But Daisy, Robert Peal and Toby Young all present supporting evidence only. They make no attempt to test the hypotheses they’re proposing or the evidence cited, and much of the evidence is from secondary sources – with all due respect to Daniel Willingham, just because he says something doesn’t mean that’s all there is to say on the matter.

cargo-cult science

I suggested to a couple of the teachers who supported Daisy’s model that ironically it resembled Feynman’s famous cargo-cult analogy (p. 97). They pointed out that the islanders were using replicas of equipment, whereas the concepts from cognitive psychology were the real deal. I suggested that even if the Americans had left their equipment on the airfield and the islanders had known how to use it, that wouldn’t have resulted in planes bringing in cargo – because there were other factors involved.

My initial response to reading Seven Myths about Education was one of frustration that despite making some good points about the educational orthodoxy and cognitive psychology, Daisy appeared to have got hold of the wrong ends of several sticks. This rapidly changed to concern that a handful of misunderstood concepts is being used as ‘evidence’ to support changes in national education policy.

In Michael Gove’s recent speech at the Education Reform Summit, he refers to the “solidly grounded research into how children actually learn of leading academics such as ED Hirsch or Daniel T Willingham”. Daniel Willingham has published peer-reviewed work, mainly on procedural learning, but I could find none by ED Hirsch. It would be interesting to know what the previous Secretary of State for Education’s criteria for ‘solidly grounded research’ and ‘leading academic’ were. To me the educational reform movement doesn’t look like an evidence-based discipline but bears all the hallmarks of an ideological system looking for evidence that affirms its core beliefs. This is no way to develop public policy. Government should know better.

the MUSEC briefings and Direct Instruction

Yesterday, I got involved in a discussion on Twitter about Direct Instruction (DI). The discussion was largely about what I had or hadn’t said about DI. Twitter isn’t the best medium for discussing anything remotely complex, but there’s something about DI that brings out the pedant in people, me included.

The discussion, if you can call it that, was triggered by a tweet about the most recent MUSEC briefing. The briefings, from Macquarie University Special Education Centre, are a great idea. A one-page round-up of the evidence relating to a particular mode of teaching or treatment used in special education is exactly the sort of resource I’d use often. So why the discussion about this one?

the MUSEC briefings

I’ve bumped into the briefings before. I read one a couple of years ago on the recommendation of a synthetic phonics advocate. It was briefing no.18, Explicit instruction for students with special learning needs. At the time, I wasn’t aware that ‘explicit instruction’ had any particular significance in education – other than denoting instruction that was explicit. And that could involve anything from a teacher walking round the room checking that students understood what they were doing, to ‘talk and chalk’, reading a book or computer-aided learning. The briefing left me feeling bemused. It was packed with implicit assumptions, and the references, presented online presumably for reasons of space, included one self-citation, a report that reached a different conclusion to the briefing, a 400-page book by John Hattie that doesn’t appear to reach the same conclusion either, and a paper by Kirschner, Sweller and Clark that doesn’t mention children with special educational needs. The references form a useful reading list for teachers, but hardly constitute robust evidence in support of the briefing’s conclusions.

My curiosity piqued, I took a look at another briefing, no.33 on behavioural optometry. I chose it because the SP advocates I’d encountered tended to be sceptical about visual impairments being a causal factor in reading difficulties, and I wondered what evidence they were relying on. I knew a bit about visual problems because of my son’s experiences. The briefing repeatedly lumped together things that should have been kept distinct, and came to conclusions that differed from those supported by the evidence it cited. I think I was probably unlucky with these first two, because some of the other briefings look fine. So what about the one on Direct Instruction, briefing no.39?

Direct Instruction and Project Follow Through

Direct Instruction (capitalised) is a scripted learning programme, now commercially available, developed by Siegfried Engelmann and Wesley Becker in the US in the 1960s, which performed outstandingly well in Project Follow Through (PFT).

The DI programme involved the scripted teaching of reading, arithmetic, and language to children between kindergarten and third grade. The PFT evaluation of DI showed significant gains in basic skills (word knowledge, spelling, language and math computation); in cognitive-conceptual skills (reading comprehension, math concepts, math problem solving) and in affect measures (co-operation, self-esteem, intellectual achievement, responsibility). A high school follow-up study by the sponsors of the DI programme showed that it was associated with positive long-term outcomes.

The Twitter discussion revolved around what I meant by ‘basic’ and ‘skills’. To clarify, as I understand it the DI programme itself involved teaching basic skills (reading, arithmetic, language) to quite young children (K-3). The evaluation assessed basic skills, cognitive-conceptual skills and affect measures. There is no indication in the evidence I’ve been able to access of how sophisticated the cognitive-conceptual skills or affect measures were. One would expect them to be typical of children in the K-3 age range. And we don’t know how long those outcomes persisted. The only evidence for long-term positive outcomes is from a study by the programme sponsors – not to be discounted, but not reliable enough to form the basis for a pedagogical method.

In other words, the PFT evaluation tells us that there were several robust positive outcomes from the DI programme. What it doesn’t tell us is whether the DI approach has the same robust outcomes if applied to other areas of the curriculum and/or with older children. Because the results of the evaluation are aggregated, it doesn’t tell us whether the DI programme benefitted all children or only some, or if it had any negative effects, or what the outcomes were for children with specific special educational needs or learning difficulties – the focus of MUSEC. Nor does it tell us anything about the use of direct instruction in general – what the briefing describes as a “generic overarching concept, with DI as a more specific exemplar”.

the evidence

The briefing refers to “a large body of research evidence stretching back over four decades testifying to the efficacy of explicit/direct instruction methods including the specific DI programs.” So what is the evidence?

The briefing itself refers only to the PFT evaluation of the DI programme. The references, available online, consist of:

• a summary of findings written by the authors of the DI programme, Becker & Engelmann,
• a book about DI – the first two authors were Engelmann’s students and worked on the original DI programme,
• an excerpt from the same book on a commercial site called education.com,
• an editorial from a journal called Effective School Practices, previously known as Direct Instruction News and published by the National Institute for Direct Instruction (Chairman S Engelmann),
• a paper about the different ways in which direct instruction is understood, published by the Center on Innovation and Improvement which is administered by the Academic Development Institute, one of whose partners is Little Planet Learning,
• the 400-page book referenced by briefing 18,
• the peer-reviewed paper also referenced by briefing 18.

The references, which I think most people would construe as evidence, include only one peer-reviewed paper. It cites research findings supporting the use of direct instruction in relation to particular types of material, but doesn’t mention children with special needs or learning difficulties. Another reference is a synthesis of peer-reviewed studies. All the other references involve organisations with a commercial interest in educational methods – not the sort of evidence I’d expect to see in a briefing published by a university.

My recommendation for the MUSEC briefings? Approach with caution.

seven myths about education: traditional subjects

In Seven Myths about Education, Daisy Christodoulou refers to the importance of ‘subjects’ and clearly doesn’t think much of cross-curricular projects. In the chapter on myth 5 ‘we should teach transferable skills’ she cites Daniel Willingham pointing out that the human brain isn’t like a calculator that can perform the same operations on any data. Willingham must be referring to higher-level information-processing because Anderson’s model of cognition makes it clear that at lower levels the brain is like a calculator and does perform essentially the same operations on any data; that’s Anderson’s point. Willingham’s point is that skills and knowledge are interdependent; you can’t acquire skills in the absence of knowledge and skills are often subject-specific and depend on the type of knowledge involved.

Daisy dislikes cross-curricular projects because students are unlikely to have the requisite prior knowledge from across several knowledge domains, are often expected to behave like experts when they are novices and get distracted by peripheral tasks. I would suggest those problems are indicators of poor project design rather than problems with cross-curricular work per se. Instead, Daisy would prefer teachers to stick to traditional subject areas.

traditional subjects

Daisy refers several times to traditional subjects, traditional bodies of knowledge and traditional education. The clearest explanation of what she means is on pp.117-119, when discussing the breadth and depth of the curriculum:

“For many of the theorists we looked at, subject disciplines were themselves artificial inventions designed to enforce Victorian middle-class values … They may well be human inventions, but they are very useful … because they provide a practical way of teaching … important concepts …. The sentence in English, the place value in mathematics, energy in physics; in each case subjects provide a useful framework for teaching the concept.”

It’s worth considering how the subject disciplines the theorists complained about came into being. At the end of the 18th century, a well-educated, well-read person could have just about kept abreast of most advances in human knowledge. By the end of the 19th century that would have been impossible. The exponential growth of knowledge made increasing specialisation necessary; the names of many specialist occupations, including the term ‘scientist’, were coined in the 19th century. By the end of the 20th century, knowledge domains/subjects existed that hadn’t even been thought of 200 years earlier.

It makes sense for academic researchers to specialise and for secondary schools to employ teachers who are subject specialists because it’s essential to have good knowledge of a subject if you’re researching it or teaching it. The subject areas taught in secondary schools have been determined largely by the prior knowledge universities require from undergraduates. That determines A level content, which in turn determines GCSE content, which in turn determines what’s taught at earlier stages in school. That model also makes sense; if universities don’t know what’s essential in a knowledge domain, no one does.

The problem for schools is that they can’t teach everything, so someone has to decide on the subjects and subject content that’s included in the curriculum. The critics Daisy cites question traditional subject areas on the grounds that they reflect the interests of a small group of people with high social prestige (pp.110-111).

criteria for the curriculum

Daisy doesn’t buy the idea that subject areas represent the interests of a social elite, but she does suggest an alternative criterion for curriculum content. Essentially, this is frequency of citation. In relation to the breadth of the curriculum, she adopts the principle espoused by ED Hirsch (and Daniel Willingham, Robert Peal and Toby Young), of what writers of “broadsheet newspapers and intelligent books” (p.116) assume their readers will know. The writers in question are exemplified by those contributing to the “Washington Post, Chicago Tribune and so on” (Willingham p.47). Toby Young suggests a UK equivalent – “Times leader writers and heavyweight political commentators” (Young p.34). Although this criterion for the curriculum is better than nothing, its limitations are obvious. The curriculum would be determined by what authors, editors and publishers knew about or thought was important. If there were subject areas crucial to human life that they didn’t know about, ignored or deliberately avoided, the next generation would be sunk.

When it comes to the depth of the curriculum, Daisy quotes Willingham; “cognitive science leads to the rather obvious conclusion that students must learn the concepts that come up again and again – the unifying ideas of each discipline” (Willingham p.48). My guess is that Willingham describes the ‘unifying ideas of each discipline’ as ‘concepts that come up again and again’ to avoid going into unnecessary detail about the deep structure of knowledge domains; he makes a clear distinction between the criteria for the breadth and depth of the curriculum in his book. But his choice of wording, if taken out of context, could give the impression that the unifying ideas of each discipline are the concepts that come up again and again in “broadsheet newspapers and intelligent books”.

One problem with the unifying ideas of each discipline is that they don’t always come up again and again. They certainly encompass “the sentence in English, place value in mathematics, energy in physics”, but sometimes the unifying ideas involve deep structure and schemata taken for granted by experts but not often made explicit, particularly to school students.

Daisy points out, rightly, that neither ‘powerful knowledge’ nor ‘high culture’ is owned by a particular social class or culture (p.118). But she apparently fails to see that using cultural references as a criterion for what’s taught in schools could still result in the content of the curriculum being determined by a small, powerful social group; exactly what the traditional subject critics and Daisy herself complain about, though they are referring to different groups.

dead white males

This drawback is illustrated by Willingham’s observation that using the cultural references criterion means “we may still be distressed that much of what writers assume their readers know seems to be touchstones of the culture of dead white males” (p.116). Toby Young turns them into ‘dead white, European males’ (Young p.34, my emphasis).

What advocates of the cultural references model for the curriculum appear to have overlooked is that the dead white males’ domination of cultural references is a direct result of the long period during which European nations colonised the rest of the world. This colonisation (or ‘trade’ depending on your perspective) resulted in Europe becoming wealthy enough to fund many white males (and some females) engaged in the pursuit of knowledge or in creating works of art. What also tends to be forgotten is that the foundation for their knowledge originated with males (and females) who were non-whites and non-Europeans living long before the Renaissance. The dead white guys would have had an even better foundation for their work if people of various ethnic origins hadn’t managed to destroy the library at Alexandria (and a renowned female scholar). The cognitive bias that edits out non-European and non-male contributions to knowledge is also evident in the US and UK versions of the Core Knowledge sequence.

Core Knowledge sequence

Determining the content of the curriculum by the use of cultural references has some coherence, but cultural references don’t necessarily reflect the deep structure of knowledge. Daisy comments favourably on ED Hirsch’s Core Knowledge sequence (p.121). She observes that “The history curriculum is designed to be coherent and cumulative… pupils start in first grade studying the first American peoples, they progress up to the present day, which they reach in the eighth grade. World history runs alongside this, beginning with the Ancient Greeks and progressing to industrialism, the French revolution and Latin American independence movements.”

Hirsch’s Core Knowledge sequence might encompass considerably more factual knowledge than the English national curriculum, but the example Daisy cites clearly leaves some questions unanswered. How did the first American peoples get to America and why did they go there? Who lived in Europe (and other continents) before the Ancient Greeks and why are the Ancient Greeks important? Obviously the further back we go, the less reliable evidence there is, but we know enough about early history and pre-history to be able to develop a reasonably reliable overview of what happened. It’s an overview that clearly demonstrates that the natural environment often had a more significant role than human culture in shaping history. And one that shows that ‘dead white males’ are considerably less important than they appear if the curriculum is derived from cultural references originating in the English-speaking world. Similar caveats apply to the UK equivalent of the Core Knowledge sequence published by Civitas, the one that recommends that children in year 1 be taught about the Glorious Revolution and the significance of Robert Walpole.

It’s worth noting that few of the advocates of curriculum content derived from cultural references are scientists; Willingham is, but his background is in human cognition, not chemistry, biology, geology or geography. I think there’s a real risk of overlooking the role that geographical features, climate, minerals, plants and animals have played in human history, and of developing a curriculum that’s so Anglo-centric and culturally focused it’s not going to equip students to tackle the very concrete problems the world is currently facing. Ironically, Daisy and others are recommending that students acquire a strongly socially-constructed body of knowledge, rather than a body of knowledge determined by what’s out there in the real world.

knowledge itself

Michael Young, quoted by Daisy, aptly sums up the difference:

“Although we cannot deny the sociality of all forms of knowledge, certain forms of knowledge which I find useful to refer to as powerful knowledge and are often equated with ‘knowledge itself’, have properties that are emergent from and not wholly dependent on their social and historical origins.” (p.118)

Most knowledge domains are pretty firmly grounded in the real world, which means that the knowledge itself has a coherent structure reflecting the real world and therefore, as Michael Young points out, it has emergent properties of its own, regardless of how we perceive or construct it.

So what criteria should we use for the curriculum? Generally, academics and specialist teachers have a good grasp of the unifying principles of their field – the ‘knowledge itself’. So their input would be essential. But other groups have an interest in the curriculum too; notably the communities who fund and benefit from the education system and those involved on a day-to-day basis – teachers, parents and students. Complete consensus on a criterion is unlikely, but the outcome might not be any worse than the constant tinkering with the curriculum by government over the past three decades.

why subjects?

‘Subjects’ are certainly a convenient way of arranging our knowledge and they do enable a focus on the deep structure of a specific knowledge domain. But the real world, from which we get our knowledge, isn’t divided neatly into subject areas, it’s an interconnected whole. ‘Subjects’ are facets of knowledge about a world that in reality is highly integrated and interconnected. The problem with teaching along traditional subject area lines is that students are very likely to end up with a fragmented view of how the real world functions, and to miss important connections. Any given subject area might be internally coherent, but there’s often no apparent connection between subject areas, so the curriculum as a whole just doesn’t make sense to students. How does history relate to chemistry or RE to geography? It’s difficult to tell while you are being educated along ‘subject’ lines.

Elsewhere I’ve suggested that what might make sense would be a chronological narrative spine for the curriculum. Learning about the Big Bang, the formation of galaxies, elements, minerals, the atmosphere and supercontinents through the origins of life to early human groups, hunter-gatherer migration, agricultural settlement, the development of cities and so on, makes sense of knowledge that would otherwise be fragmented. And it provides a unifying, overarching framework for any knowledge acquired in the future.

Adopting a chronological curriculum would mean an initial focus on sciences and physical geography; the humanities and the arts wouldn’t be relevant until later for obvious reasons. It wouldn’t preclude simultaneously studying languages, mathematics, music or PE of course – I’m not suggesting a chronological curriculum ‘first and only’ – but a chronological framework would make sense of the curriculum as a whole.

It could also bridge the gap between so-called ‘academic’ and ‘vocational’ subjects. In a consumer society, it’s easy to lose sight of the importance of knowledge about food, water, fuel and infrastructure. But someone has to have that knowledge and our survival and quality of life are dependent on how good their knowledge is and how well they apply it. An awareness of how the need for food, water and fuel has driven human history and how technological solutions have been developed to deal with problems might serve to narrow the academic/vocational divide in a way that results in communities having a better collective understanding of how the real world works.

the curriculum in context

I can understand why Daisy is unimpressed by the idea that skills can be learned in the absence of knowledge or that skills are generic and completely transferable across knowledge domains. You can’t get to the skills at the top of Bloom’s taxonomy by bypassing the foundation level – knowledge. Having said that, I think Daisy’s criteria for the curriculum overlook some important points.

First, although I agree that subjects provide a useful framework for teaching concepts, the real world isn’t neatly divided up into subject areas. Teaching as if it is means it’s not only students who are likely to get a fragmented view of the world, but newspaper columnists, authors and policy-makers might too – with potentially disastrous consequences for all of us. It doesn’t follow that students need to be taught skills that allegedly transfer across all subjects, but they do need to know how subject areas fit together.

Second, although we can never eliminate subjectivity from knowledge, we can minimise it. Most knowledge domains reflect the real world accurately enough for us to be able to put them to good, practical use on a day-to-day basis. It doesn’t follow that all knowledge consists of verified facts or that students will grasp the unifying principles of a knowledge domain by learning thousands of facts. Students need to learn about the deep structure of knowledge domains and how the evidence for the facts they encompass has been evaluated.

Lastly, cultural references are an inadequate criterion for determining the breadth of the curriculum. Cultural references form exactly the sort of socially constructed framework that critics of traditional subject areas complain about. Most knowledge domains are firmly grounded in the real world, and the knowledge itself, despite its inherent subjectivity, provides a much more valid and reliable criterion for deciding what students should know than what people are writing about. Knowledge about cultural references might enable students to participate in what Michael Oakeshott called the ‘conversation of mankind’, but life doesn’t consist only of a conversation – at whatever level you understand the term. For most people, even in the developed world, life is just as much about survival and quality of life, and in order to optimise our chances of both, we need to know as much as possible about how the world functions, not just what a small group of people are saying about it.

In my next post, hopefully the final one about Seven Myths, I plan to summarise why I think it’s so important to understand what Daisy and those who support her model of educational reform are saying.

References

Peal, R (2014). Progressively Worse: The Burden of Bad Ideas in British Schools. Civitas.
Willingham, D (2009). Why Don’t Students Like School? Jossey-Bass.
Young, T (2014). Prisoners of the Blob. Civitas.