Bold Beginnings – could do better

Bold Beginnings, an Ofsted report on the Reception curriculum, was published at the end of November. It caused a bit of a stir among Early Years teachers. I thought they might be over-reacting, an understandable tendency developed in response to endless assumptions that the children they teach ‘just play’. Last week, an open letter with over 1700 signatories questioning the report’s conclusions was published in the Guardian. An article in response wondered what all the fuss was about. So I read the report. Here’s what I thought. References in brackets are to the paragraph numbers.

The report was commissioned as part of a review of the curriculum. Forty-one primary schools judged good or outstanding in their last Ofsted inspection (86) were visited and asked to complete an online questionnaire.

Implicit assumptions

The first thing that struck me was the implicit assumptions on which the report is based. Implicit assumptions are sneaky things. For one thing, they’re assumptions; no one wheels out evidence to support them – and sometimes there isn’t any supporting evidence. For another thing, they’re implicit – no one spells them out, so they’re easy to miss. Sometimes the people making the assumptions aren’t aware that they’re making them. Here are three.

Falling behind  

The first implicit assumption appears in the first paragraph, which refers to the “painful and unnecessary consequences of falling behind their peers” (p.4). I find the idea of children ‘falling behind’ baffling. Falling behind what, exactly? The school population is, like any other large population, very varied. And then there’s the age range. Expecting the youngest children in a Reception class to be at the same level of attainment as the oldest flies in the face of everything we know about human development and population statistics. Then there’s “in 2016, around one third of children did not have the essential knowledge and understanding they needed to reach a good level of development [as defined by government] by the age of five” (6). Anyone with a basic knowledge of statistics would expect around 50% of children in a large population to be developing more slowly than average. The assumption that children can ‘fall behind’ and should ‘catch up’ is made by an education system designed around administrative convenience, not the educational needs of children.
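The point about averages can be made concrete in a few lines of Python (an illustrative sketch using made-up numbers: the mean of 100 and standard deviation of 15 are invented for the simulation and have nothing to do with actual EYFS data). In any large population whose scores are roughly symmetrically distributed, about half must, by definition, sit below the average:

```python
import random

# Simulate attainment scores for a large hypothetical population
# (mean 100, standard deviation 15: invented numbers, purely illustrative).
random.seed(42)
scores = [random.gauss(100, 15) for _ in range(100_000)]

mean = sum(scores) / len(scores)
below_average = sum(s < mean for s in scores) / len(scores)

# In a roughly symmetric distribution, about half the population is
# always below the average; that is simply what 'average' means.
print(f"Proportion below average: {below_average:.3f}")
```

Whatever the population’s overall level of attainment, ‘below average’ describes roughly half of it; children ‘falling behind’ the average is a statistical inevitability, not evidence of failure.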

Increased expectations in Year 1

“Reception and Year 1 teachers agreed that the vital, smooth transition from the foundation stage to Year 1 was difficult because the early learning goals were not aligned with the now-increased expectations of the national curriculum.” (p4 §8) The national curriculum isn’t a Law of Nature or an Act of God. It’s a system designed by human beings. There is no reason why early learning has to adapt to expectations for children in Year 1; Year 1 expectations could instead adapt to early learning. The report complains that “there is no clear curriculum in Reception” (p5 §3). There’s no reason why a clear Reception curriculum shouldn’t be developed, but the current lack of one might be because many children in Reception classes are below the statutory education age.

The curriculum

A third implicit assumption runs through the entire report. Despite the review being of the curriculum, the focus is relentlessly on reading, writing and mathematics – all fundamental, but only three of the skills children need to acquire to access a broad curriculum and understand how the world works.

Bold Beginnings appears to have been written by someone with little knowledge of what is taught and learned at the Early Years Foundation Stage. That might have been a deliberate choice to avoid the bias towards play-based pedagogy and child-initiated learning perceived by some headteachers (81), but it resulted in an impoverished analysis. The focus is on reading, writing and mathematics rather than the curriculum; play is mentioned numerous times but not discussed in detail; and the purpose of  education appears to be GCSE grades.

The three Rs

Schools are supposed to be places where children learn, and for Reception-age children there is much to learn: about physics, chemistry, biology, psychology, sociology, geography, history, music, art and drama. I’m not recommending formal subject areas for 4-5 year-olds, but I found it mystifying that the report makes only a passing reference to ‘science and the humanities’ (13) and ‘music and science’ (21). The report’s author doesn’t seem aware that at this age children are forming basic concepts about solids, liquids, gases, plants and animals, maps, timelines, rhythm, melody, art materials, scripts and roles that form the foundation of later learning (Rakison & Oakes, 2003). Instead, the author sees reading, writing and number as “the building blocks for all other learning” (7), completely overlooking all the learning children do that doesn’t involve reading, writing or numbers.

Although speaking and talking are mentioned in passing, language skills are seen in terms of their contribution to reading and writing (p4 §3), not as ends in themselves. Reading and writing are crucial skills, but the report overlooks the amount of spoken communication that goes on between human beings at all levels.

The report’s author is a big fan of systematic synthetic phonics, but I felt they painted themselves into a corner when discussing children’s books. It makes sense for reading schemes to introduce grapheme-phoneme correspondences (GPCs) one by one, as the report recommends, to secure children’s knowledge and build up their confidence. Books with unfamiliar GPCs are cautioned against because they encourage children to use other strategies, such as guessing (53, 54). But it isn’t clear how parents or teachers could avoid this if they read a wide range of stories to, or with, children.

And then there’s the mathematics. What’s actually discussed isn’t mathematics as such, or even arithmetic. It’s number. Number is obviously a foundational mathematical skill, but I couldn’t find any reference to shape, spatial relationships, or operations – all foundational mathematical concepts that most 4 year-olds are beginning to get to grips with.

Play

The report mentions play numerous times, but its role is seen as “primarily for developing children’s personal, social and emotional skills” (p4 §5). There are many references to teachers knowing how children learn through play, but what they know seems to be a mystery to the report’s author. There’s a rather breathless account of children dramatising the Three Billy Goats Gruff (35), suggesting that inspectors weren’t very familiar with an activity that’s probably been a feature of every nursery and infant class since at least the 1930s.

Achievement

The report appears to see achievement solely in terms of succeeding at the tasks set by schools, rather than in terms of children gaining a good knowledge and understanding of how the world works. For example: “The research is clear: a child’s early education lasts a lifetime. Done well, it can mean the difference between gaining seven Bs at GCSE compared with seven Cs” (5). Leaving aside the fact that the reference cited in the report refers to 8 GCSEs, not 7, and that a correlation doesn’t indicate a causal relationship, framing the importance of education solely in terms of GCSE results is troubling. The author of the report doubtless got at least seven B grades at GCSE, but that doesn’t appear to have equipped him or her with adequate research skills.

The research

Ofsted do not appear to be aware of the impact of their own inspections. For example, the statutory moderation of the Early Years Foundation Stage Profile comes in for some stick, one complaint being “a moderator expected to see three pieces of evidence for every separate sentence within the early learning goals” (77). I vividly recall my son’s Year 1 teachers complaining about the insistence of Ofsted in their previous inspection on exactly this. (My son wasn’t very happy about it either, asking why, if he’d shown he could do something, he then had to do it again.)

Then, in Annex B, we have the online questionnaire sent to schools. Q1 doesn’t have an ‘other’ box for anyone completing the form who isn’t a head, early years or reception teacher. And in Q2 there’s an elementary error that most primary school pupils would know to avoid. The narrow focus of the report is clear in Q12. This isn’t the first time I’ve seen an Ofsted questionnaire cause raised eyebrows. One teenager thought a questionnaire sent to families “looks like it was written by a Year 7”. It did too. I’d expect better research skills from a regulatory body.

The narrative

Bold Beginnings isn’t an objective, dispassionate analysis of the Reception curriculum. Instead it propagates a particular narrative that goes like this: 1) because long-term outcomes are better for children who attend pre-school provision, and attend it for longer, and 2) because teachers at good and outstanding primary schools believe that formal education begins in the Reception year, it stands to reason that 3) formal education should start in Reception and be shaped by the increased expectations for children in Year 1, and 4) reading, writing and number should be given greater emphasis. But the narrative doesn’t hold water. Here’s why.

1) Research (Sylva et al 2014) indicates that long-term educational outcomes are better for children who have attended pre-school provision, and attended it for longer. That’s the current informal provision. The research doesn’t support the assumption that the earlier formal education starts the better. As far as I’m aware, there’s no evidence that starting formal education later (in some countries age 6 or 7) has a detrimental impact on long-term outcomes.

2) “Nearly 95% of the school staff who responded to Ofsted’s survey questionnaire believed that Nursery and/or Reception signalled the start of school. Leaders clearly believe that the moment a child starts attending their school, in whatever capacity, their educational journey has begun. While Year 1 may be the official start, it is clear that the Reception Year is more commonly recognised as the beginning of a child’s formal education” (3). That’s interesting, but an education system shouldn’t be designed around beliefs, whoever holds them. Initial teacher education (ITE) tutors come in for criticism from some headteachers for their emphasis on play-based pedagogy and child-initiated learning (81), but the ITE tutors’ beliefs, however strongly evidence-based, don’t play any part in the Bold Beginnings narrative. The word ‘believed’ is used 14 times in this report. That’s probably 14 times too many.

3) There’s no reason why the EYFS curriculum shouldn’t shape the Year 1 curriculum, rather than vice versa.

4) There’s no reason not to improve reading, writing and mathematics in Reception classes, but they are not “the building blocks for all other learning” (7) and the report ignores the vast number of other building blocks routinely developed by Early Years teachers.

Conclusion

This is not a well-researched, objective assessment of the Reception curriculum. The research is inadequate, the evaluation of evidence leaves much to be desired, and the recommendations are based largely on the beliefs of teachers in a sample of 41 schools. Ofsted should be leading the way. Instead, they are falling behind.

 

References

Rakison, D.H. & Oakes, L.M. (eds) (2003). Early category and concept development: Making sense of the blooming, buzzing confusion. Oxford University Press.
Sylva, K., Melhuish, E., Sammons, P., Siraj, I., & Taggart, B. (2014). Students’ educational and developmental outcomes at age 16: Effective Pre-school, Primary and Secondary Education (EPPSE 3-16) Project. Department for Education.

cognitive science: the wrong end of the stick

A few years ago, some teachers began advocating the application of findings from cognitive science to education. There seemed to be something not quite right about what they were advocating but I couldn’t put my finger on exactly what it was. Their focus was on the limitations of working memory and differences between experts and novices. Nothing wrong with that per se, but working memory and expertise aren’t isolated matters.

Cognitive science is now a vast field, encompassing sensory processing, perception, cognition, memory, learning, and aspects of neuroscience. A decent textbook would provide an overview, but decent textbooks didn’t appear to have been consulted much. Key researchers (e.g. Baddeley & Hitch, Alloway, Gathercole), fields of research (e.g. the limitations of long-term memory, neurology), and long-standing contentious issues (e.g. nature vs nurture) rarely got a mention, even when highly relevant.

At first I assumed the significant absences were due to the size of the field to be explored, but as time went by that seemed less and less likely.  There was an increasing occurrence of teacher-B’s-understanding-of-teacher-A’s-understanding-of-Daniel-Willingham’s-simplified-model-of-working-memory, with some teachers getting hold of the wrong end of some of the sticks. I couldn’t understand why, given the emphasis on expertise, teachers didn’t seem to be looking further.

The penny dropped last week when I read an interview with John Sweller, the originator of Cognitive Load Theory (CLT), by Ollie Lovell, a maths teacher in Melbourne. Ollie has helpfully divided the interview into topics in a transcript on his website. The interview clarifies several aspects of cognitive load theory. In this post, I comment on some points that came up in the interview, and explain the dropped penny.

1.  worked examples

The interview begins with the 1982 experiment that led to Sweller’s discovery of the worked example effect. Ollie refers to the ‘political environment of education at the time’ being ‘heavily in favour of problem solving’. John thinks that however he’d presented the worked example effect, he’d be pessimistic about the response because ‘the entire research environment in those days was absolutely committed to problem solving’.

The implication that the education system had rejected worked examples was puzzling. During my education (1960s and 70s) you couldn’t move for worked examples. They permeated training courses I attended in the 80s, my children’s education in the 90s and noughties, and still pop up frequently in reviews and reports. True, they’re not always described as a ‘worked example’ but instead might be a ‘for instance’ or ‘here’s an example’ or ‘imagine…’. So where weren’t they? I’d be grateful for any pointers.

2 & 3. goal-free effect

Essentially, students told to ‘find out as much as you can’ about a problem performed better than those given specific instructions about what to find out. But the effect held only for problems with a small number of possible solutions – in this case physics problems. It wasn’t found for problems with a large number of possible solutions. But you wouldn’t know that if you’d only read teachers criticising ‘discovery learning’.

4. biologically primary and secondary skills

What’s determined by biology and what by the environment has been a hugely contentious issue in cognitive science for decades. Basically, we don’t yet know the extent to which learning is biologically or environmentally determined. But the contentiousness isn’t mentioned in the interview; it is marginalised by David Geary, the originator of the biologically primary/secondary concept, and John appears simply to assume that Geary’s theory is correct, presumably because it’s plausible.

John says it’s ‘absurd’ to provide someone with explicit instruction about what to do with their tongue, lips or breath when learning English. Ollie points out that’s exactly what he had to do when he learned Chinese. John claims that language acquisition by immersion is biologically primary for children but not for adults. This flies in the face of everything we know about language acquisition.

Adults can and do become very fluent in languages acquired via immersion, just as children do. Explicit instruction can speed up the process and help with problematic speech sounds, but it can’t make adults speak like natives. That’s because adults have to override very robust neural pathways laid down in childhood in response to the sounds children hear day in, day out (e.g. Patricia Kuhl’s ‘Cracking the speech code’). The evidence suggests that the differences between adult and child language acquisition are a matter of frequency of exposure, not type of skill. As Ollie says: “It’s funny isn’t it? How it can switch category. It’s just amazing.” Quite.

5. motivation

The discussion was summed up in John’s comment: “I don’t think you can turn Cognitive Load Theory into a theory of motivation, which in no way suggests that you can’t use a theory of motivation and use it in conjunction with cognitive load theory.”

6. expertise reversal effect

John says: “As expertise goes up, the advantage of worked examples go down, and as expertise continues to go up, eventually the relative effectiveness of worked examples and problems reverses and the problems are more helpful than worked examples”.

7. measures of cognitive load

John: “I routinely use self-report and I use self-report because it’s sensitive”. Other measures – secondary tasks, physiological markers – are problematic.

8. collective working memory effect

John: “In problem solving, you may need information and the only place you can get it from is somebody else.” He doesn’t think you can teach somebody to act collaboratively because he thinks social interaction is biologically primary knowledge. See 4 above.

9. The final section of the interview highlighted, for me, two features that emerge from much of the discourse about applying cognitive science to education:

  • The importance of the biological mechanisms and the weaknesses of analogy.
  • The frame of reference used in the discourse.

biological mechanisms

In the final part of the interview John asks an important question: Is the capacity of working memory fixed? He says: “If you’ve been using your working memory, especially in a particular area, heavily for a while, after a while, and you would have experienced this yourself, your working memory keeps getting narrower and narrower and narrower and after a while it just about disappears.”

An explanation for the apparent ‘narrowing’ of working memory is habituation, where the response of neurons to a particular stimulus diminishes if the stimulus is repeated. The best account I’ve read of the biological mechanisms in working memory is in a 2004 paper by Wagner, Bunge & Badre. If I’ve understood their findings correctly, signals representing sensory information coming into the prefrontal area of the brain are maintained for a few seconds until they degrade or are overridden by further incoming information. This is exactly what was predicted by Baddeley & Hitch’s phonological loop and visuo-spatial sketchpad. (Wagner, Bunge and Badre’s findings also indicate there might be more components to working memory than Baddeley & Hitch’s model suggests.)
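Habituation can be caricatured in a few lines of Python (a toy sketch with an invented decay rate, not a model of real neurons): if each repetition of the same stimulus damps the response by a constant factor, the response steadily diminishes, which is one plausible reading of the ‘narrowing’ John describes.

```python
# Toy model of habituation: each repeat of the same stimulus scales the
# neural response down by a constant factor. The decay rate of 0.7 is
# invented, purely for illustration.
def habituating_response(repeats: int, initial: float = 1.0, decay: float = 0.7) -> float:
    """Response strength after `repeats` presentations of the same stimulus."""
    return initial * decay ** repeats

# The response shrinks with every repetition; a novel stimulus would reset it.
responses = [habituating_response(n) for n in range(6)]
print([round(r, 3) for r in responses])
```

The point of the sketch is only that a diminishing response to repetition is a property of the stimulus history, not a shrinking of working memory capacity itself.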

John was using a figure of speech, but I fear it will only be a matter of time before teachers start referring to the ‘narrowing’ of working memory. This illustrates why it’s important to be aware of the biological mechanisms that underpin cognitive functions. Working memory is determined by the behaviour of neurons, not by the behaviour of analogous computer components.

frame of reference

John and Ollie were talking about cognitive load theory in education, so that’s what the interview focussed on, obviously.  But every focus has a context, and John and Ollie’s frame of reference seemed rather narrow. Ollie opens by talking about ‘the political environment of education at the time [1982]’ being ‘heavily in favour of problem solving’. I don’t think he actually means the ‘political environment of education at the time’ as such. Similarly John comments ‘the entire research environment in those days was absolutely committed to problem solving’. I don’t think he means ‘the entire research environment’ as such either.

John also observes: “It’s only been very recently that people started taking notice of Cognitive Load Theory. For decades I put papers out there and it was like putting them into outer-space, you know, they disappeared into the ether!” I first heard about Cognitive Load Theory in the late 80s, soon after Sweller first proposed it, via a colleague working in artificial intelligence. I had no idea, until recently, that Sweller was an educational psychologist. People have been taking notice of CLT, but maybe not in education.

Then there’s the biologically primary/secondary model. It’s ironic how little it refers to biology. We know a fair amount about the biological mechanisms involved in learning, and I’ve not yet seen any evidence suggesting two distinct mechanisms. The model appears to be based on the surface features of how people appear to learn, not on the deep structure of how learning happens.

Lastly, the example of language acquisition. The differences between adults and children learning languages can be explained by frequency of exposure and how neurons work; there’s no need to introduce a speculative evolutionary model.

Not only is cognitive load theory the focus of the interview, it also appears to be its frame of reference; political issues and knowledge domains other than education don’t get much of a look in.

the penny that dropped

Ever since I first heard about teachers applying cognitive science to education, I’ve been puzzled by their focus on the limitations of working memory and the characteristics of experts and novices. It suddenly dawned on me, reading Ollie’s interview with John, that what the teachers are actually applying isn’t so much cognitive science, as cognitive load theory. CLT, the limitations of working memory and the characteristics of experts and novices are important, but constitute only a small area of cognitive science. But you wouldn’t know that from this interview or most of the teachers advocating the application of cognitive science.  There’s a real risk, if CLT isn’t set in context, of teachers getting hold of the wrong stick entirely.

references

Geary, D. (2007).  Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology, in Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology, JS Carlson & JR Levin (Eds). Information Age Publishing.

Kuhl, P. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience 5, 831-843.

Wagner, A.D., Bunge, S.A. & Badre, D. (2004). Cognitive control, semantic memory and priming: Contributions from prefrontal cortex. In M.S. Gazzaniga (Ed.), The Cognitive Neurosciences (3rd edn). Cambridge, MA: MIT Press.

genes, environment and behaviour

There was considerable kerfuffle on Twitter last week following a blog post by David Didau entitled ‘What causes behaviour?’  The ensuing discussion resulted in a series of five further posts from David culminating in an explanation of why his views weren’t racist. I think David created problems for himself through lack of clarity about gene-environment interactions and through ambiguous wording. Here’s my two-pennyworth.

genes

Genes are regions of DNA that hold information about (mainly) protein production. As far as we know, that’s all they do. The process of using this information to produce proteins is referred to as genetic expression.

environment

The context in which genes are expressed. Before birth, the immediate environment in which human genes are expressed is limited, and is largely a chemical and biological one. After birth, the environment gets more complex as Urie Bronfenbrenner demonstrated.  Remote environmental effects can have a significant impact on immediate ones. Whether a mother smokes or drinks is influenced by genetic and social factors, and the health of both parents is often affected by factors beyond their control.

epigenetics

Epigenetic factors are environmental factors that can directly change the expression of genes; in some cases genes can effectively be ‘switched’ on or off. Some epigenetic changes can be inherited.

behaviour

Behaviour is a term that’s been the subject of much discussion by psychologists. There’s a useful review by Levitis et al. (2009). A definition of behaviour the authors decided reflected consensus is:

Behaviour is: the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal and/or external stimuli, excluding responses more easily understood as developmental changes.

traits and states

Trait is a term used to describe a consistent pattern in behaviour, personality etc. State is used to describe transient behaviours or feelings.

David Didau’s argument

David begins with the point that behavioural traits in adulthood are influenced far more by genes than by shared environments during childhood. He says: “Contrary to much popular wishful thinking, shared environmental effects like parenting have (almost) no effect on adults’ behaviour, characteristics, values or beliefs.* The reason we are like our parents and siblings is because we share their genes. *Footnote: There are some obvious exceptions to this. Extreme neglect or abuse before the age of 5 will likely cause permanent developmental damage, as will hitting someone in the head with a hammer at any age.”

In support he cites a paper by Thomas Bouchard (2004), a survey of research (mainly twin studies) about genetic influence on psychological traits: personality, intelligence, psychological interests, psychiatric illnesses and social attitudes. David rightly concludes that it’s futile for schools to try to teach ‘character’, because character (whatever you take it to mean) is a stable trait.

traits, states and outcomes

But he also refers to children’s behaviour in school, and behaviour encompasses both traits and states: stable patterns of behaviour and one-off specific behaviours. For David, school expectations can “mediate these genetic forces”, but only within school; “an individual’s behaviour will be, for the most part, unaffected by this experience when outside the school environment”.

He also refers to “how we turn out”, and how we turn out can be affected by one-off, even uncharacteristic, behaviours (on the part of children, parents, teachers and/or government). One-off actions can have a hugely beneficial or detrimental impact on long-term outcomes for children.

genes, environment and interactions

It’s easy to get the impression from the post that genetic influences (David calls them genetic ‘forces’ – I don’t know what that means) and environmental influences are distinct and act in parallel. He refers, for example, to “genetic causes for behaviour as opposed to environmental ones” (my emphasis), but concedes “there’s definitely some sort of interaction between the two”.

Obviously, genes and environment influence behaviour. What’s emerged from research is that the interactions between genetic expression and environmental factors are pretty complex. From conception, gene expression produces proteins; cells form, divide and differentiate, the child’s body develops and grows. Genetic expression obviously plays a major role in pre-natal development, but the proteins expressed by the genes very quickly form a complex biochemical, physiological and anatomical environment that impacts on the products of later genetic expression. This environment is internal to the mother’s body, but external environmental factors are also involved in the form of nutrients, toxins, activities, stressors etc. After birth, genes continue to be expressed, but the influence of the external environment on the child’s development increases.

Three points to bear in mind: 1) A person’s genome remains pretty stable throughout their lifetime. 2) The external environment doesn’t remain stable – for most people it changes constantly. Some of the changes will counteract others; rest and good nutrition can overcome the effects of illness, and beneficial events can mitigate the impact of adverse ones. So it’s hardly surprising that shared childhood environments have comparatively little, if any, effect on adult traits. 3) Genetic and environmental influences are difficult to untangle because of their complex interactions from the get-go. Annette Karmiloff-Smith* highlights the importance of gene-environment-behaviour interactions (Karmiloff-Smith, 1998).

Clearly, if you’re a kid with drive, enthusiasm and aspirations, but grow up on a sink estate in an area of high social and economic deprivation where the only wealthy people with high social status are drug dealers, you’re far more likely to end up with rather dodgy career prospects than a child with similar character traits who lives in a leafy suburb and attends Eton. (I’ve blogged elsewhere about the impact of life events on child development and long-term outcomes, in a series of posts starting here.)

In other words, parents and teachers might have little influence over behavioural traits, but they can make a huge difference to the outcomes for a child, by equipping them (or not) with the knowledge and strategies they need to make the most of what they’ve got. From other things that David has written, I don’t think he’d disagree.  I think what he is trying to do in this post is to put paid to the popular idea that parents (and teachers) have a significant long-term influence on children’s behavioural traits.  They clearly don’t.  But in this post he doesn’t make a clear distinction between behavioural traits and outcomes. I suggest that’s one reason his post resulted in so much heated discussion.

genes, environment and the scientific method

I’m not sure where his argument goes after he makes the point about character education. He goes on to suggest that anyone who queries his conclusions about the twin studies is dismissing the scientific method, which seems a bit of a stretch, and finishes the post with a series of ‘empirical questions’ that appear to reflect some pet peeves about current educational practices, rather than testing hypotheses about behaviour per se.

So it’s not surprising that some people got hold of the wrong end of the stick. The behavioural framework including traits, states and outcomes is an important one and I wish, instead of going off at tangents, he’d explored it in more detail.

*If you’re interested,  Neuroconstructivism by Mareschal et al and Rethinking Innateness by Elman et al. are well worth reading on gene-environment interactions during children’s development.  Not exactly easy reads, but both reward effort.

references

Bouchard, T. (2004). Genetic influence on human psychological traits. Current Directions in Psychological Science, 13, 148-151.

Elman, J.L., Bates, E.A., Johnson, M., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, MA: MIT Press.

Karmiloff-Smith, A. (1998). Development itself is the key to understanding developmental disorders. Trends in Cognitive Sciences, 2, 389-398.

Levitis, D.A., Lidicker, W.Z., & Freund, G. (2009). Behavioural biologists don’t agree on what constitutes behaviour. Animal Behaviour, 78(1), 103-110.

Mareschal, D., Johnson, M., Sirois, S., Spratling, M.W., Thomas, M.S.C. & Westermann, G. (2007). Neuroconstructivism: How the brain constructs cognition, Vol. I. Oxford University Press.

all snowflakes are unique: comments on ‘What every teacher needs to know about psychology’ (David Didau & Nick Rose)

This book and I didn’t get off to a good start. The first sentence of Part 1 (Learning and Thinking) raised a couple of red flags: “Learning and thinking are terms that are used carelessly in education.” The second sentence raised another one: “If we are to discuss the psychology of learning then it makes sense to begin with precise definitions.”   I’ll get back to the red flags later.

Undeterred, I pressed on, and I’m glad I did. Apart from the red flags and a few quibbles, I thought the rest of the book was great.  The scope is wide and the research is up-to-date but set in historical context. The three parts – Learning and Thinking, Motivation and Behaviour, and Controversies – provide a comprehensive introduction to psychology for teachers or, for that matter, anyone else. Each of the 26 chapters is short, clearly focussed, has a summary “what every teacher needs to know about…”, and is well-referenced.   The voice is right too; David Didau and Nick Rose have provided a psychology-for-beginners, written for grown-ups.

The quibbles? References that were in the text but not in the references section, or vice versa. A rather basic index. And I couldn’t make sense of the example on p.193 about energy conservation, until it dawned on me that a ‘re’ was missing from ‘reuse’. All easily addressed in a second edition, which this book deserves. A bigger quibble was the underlying conceptual framework adopted by the authors. This is where the red flags come in.

The authors are clear about why they’ve written the book and what they hope it will achieve. What they are less clear about is the implicit assumptions they make as a result of their underlying conceptual framework. I want to look at three implicit assumptions: about precise definitions, about the school population, and about psychological theory.

precise definitions

The first two sentences of Part 1 are:

“Learning and thinking are terms that are used carelessly in education. If we are to discuss the psychology of learning then it makes sense to begin with precise definitions.” (p.14)

What the authors imply (or at least what I inferred) is that there are precise definitions of learning and thinking. They reinforce their point by providing some. Now, ‘carelessly’ is a somewhat pejorative term. It might be fair to use it if there is a precise definition of learning and there is a precise definition of thinking, but people just can’t be bothered to use them. But if there isn’t a single precise definition of either…

I’d say terms such as ‘learning’, ‘thinking’, ‘teaching’, ‘education’ etc. (the list is a long one) are used loosely rather than carelessly. ‘Learning’ and ‘thinking’ are constructs that are more complex and fuzzier than say, metres or molar solutions. In marked contrast to the way ‘metre’ and ‘molar solution’ are used, people use ‘learning’ and ‘thinking’ to refer to different things in different contexts.   What they’re referring to is usually made clear by the context. For example, most people would consider it reasonable to talk about “what children learn in schools” even if much of the material taught in schools doesn’t meet Didau and Rose’s criterion of retention, transfer and change (p.14). Similarly, it would be considered fair use of the word ‘thinking’ for someone to say “I was thinking about swimming”, if what they were referring to was pleasant mental images of them floating in the Med, rather than the authors’ definition of a conscious, active, deliberative, cognitive “struggle to get from A to B”.

Clearly, there are situations where context isn’t enough, and precise definitions of terms such as ‘learning’ and ‘thinking’ are required; empirical research is a case in point. And researchers in most knowledge domains (maybe education is an exception) usually address this requirement by stating explicitly how they have used particular terms; “by learning we mean…” or “we use thinking to refer to…”.  Or they avoid the use of umbrella terms entirely. In short, for many terms there isn’t one precise definition. The authors acknowledge this when they refer to “two common usages of the term ‘thinking’”, but still try to come up with one precise definition (p.15).

Why does this matter? It matters because if it’s assumed there is a precise definition for labels representing multi-faceted, multi-component processes, that people use in different ways in different circumstances, a great deal of time can be wasted arguing about what that precise definition is. It would make far more sense simply to be explicit about how we’re using the term for a particular purpose, or exactly which facet or component we’re referring to.

Exactly this problem arises in the discussion about restorative justice programmes (p.181). The authors complain that restorative justice programmes are “difficult to define and frequently implemented under a variety of different names…” Those challenges could be avoided by not trying to define restorative justice at all; instead, people could be explicit about how they use the term – or use different terms for different programmes.

Another example is ‘zero tolerance’ (p.157). This term is usually used to refer to strict, inflexible sanctions applied in response to even the most minor infringements of rules; the authors cite as examples schools using ‘no excuses’ policies. However, zero tolerance is also associated with the broken windows theory of crime (Wilson & Kelling, 1982); that if minor misdemeanours are overlooked, antisocial behaviour will escalate. The broken windows theory does not advocate strict, inflexible sanctions for minor infringements, but rather a range of preventative measures and proportionate sanctions to avoid escalation. Historically, evidence for the effectiveness of both approaches is mixed, so the authors are right to be cautious in their conclusions.

What I want to emphasise is that there isn’t a single precise definition of learning, thinking, restorative justice, zero tolerance, or many other terms used in the education system, so trying to develop one is like trying to define apples-and-oranges. To avoid going down that path, we simply need to be explicit about what we’re actually talking about. As Didau and Rose themselves point out “simply lumping things together and giving them the same name doesn’t actually make them the same” (p.266).

all snowflakes are unique

Another implicit assumption emerges in chapter 25, about individual differences;

Although it’s true that all snowflakes are unique, this tells us nothing about how to build a snowman or design a better snowplough. For all their individuality, useful applications depend on the underlying physical and chemical similarities of snowflakes. The same applies to teaching children. Of course all children are unique…however, for all their individuality and any application of psychology to teaching is typically best informed by understanding the underlying similarities in the way children learn and develop, rather than trying to apply ill-fitting labels to define their differences. (p. 254)

For me, this analogy raises the question of what the authors see as the purpose of education, and completely ignores the nomothetic/idiographic (tendency to generalise vs tendency to specify) tension that’s been a challenge for psychology since its inception. It’s true that education contributes to building communities of individuals who have many similarities, but our evolution as a species, and our success at colonising such a wide range of environments, hinges on our differences. And the purpose of education doesn’t stop at the community level. It’s also about the education of individuals; this is recognised in the 1996 Education Act (borrowing from the 1944 Education Act), which expects a child’s education to be suitable to them as an individual.  For the simple reason that if it isn’t suitable, it won’t be effective.  Children are people who are part of communities, not units to be built into an edifice of their teachers’ making, or to be shovelled aside if they get in the way of the education system’s progress.

what’s the big idea?

Another major niggle for me was how the authors evaluate theory. I don’t mean the specific theories tested by the psychological research they cite; that would be beyond the scope of the book. Also, if research has been peer-reviewed and there’s no huge controversy over it, there’s no reason why teachers shouldn’t go ahead and apply the findings. My concern is about the broader psychological theories that frame psychologists’ thinking and influence what research is carried out (or not) and how. Didau and Rose demonstrate they’re capable of evaluating theoretical frameworks, but their evaluation looked a bit uneven to me.

For example, they note “there are many questions” relating to Jean Piaget’s theory of cognitive development (pp.221-223), but BF Skinner’s behaviourist model (pp.152-155) has been “much misunderstood, and often unfairly maligned”. Both observations are true, but because there are pros and cons to each of the theories, I felt the authors’ biases were showing. And David Geary’s somewhat speculative model of biologically primary and secondary knowledge and ability is cited uncritically at least a dozen times, overlooking the controversy surrounding two of its major assumptions – modularity and intelligence. The authors are up-front about their “admittedly biased view”.

educating the evolved mind: education

The previous two posts have been about David Geary’s concepts of biologically primary and secondary knowledge and abilities: evolved minds, and intelligence.  This post is about how Geary applies his model to education in Educating the Evolved Mind.

There’s something of a mismatch between the cognitive and educational components of Geary’s model.  The cognitive component is a range of biologically determined functions that have evolved over several millennia.  The educational component is a culturally determined education system cobbled together in a somewhat piecemeal and haphazard fashion over the past century or so.

The education system Geary refers to is typical of the schooling systems in developed industrialised nations, and according to his model, focuses on providing students with biologically secondary knowledge and abilities. Geary points out that many students prefer to focus on biologically primary knowledge and abilities such as sports and hanging out with their mates (p.52).   He recognises they might not see the point of what they are expected to learn and might need its importance explained to them in terms of social value (p.56). He suggests ‘low achieving’ students especially might need explicit, teacher driven instruction (p.43).

You’d think, if cognitive functions have been biologically determined through thousands of years of evolution, that it would make sense to adapt the education system to the cognitive functions, rather than the other way round. But Geary doesn’t appear to question the structure of the current US education system at all; he accepts it as a given. I suggest it might be worth taking a step back and re-thinking the education system itself in the light of how human cognition works, starting from the following principles:

1. communities need access to expertise

Human beings have been ‘successful’, in evolutionary terms, mainly due to our use of language. Language means it isn’t necessary for each of us to learn everything for ourselves from scratch; we can pass on information to each other verbally. Reading and writing allow knowledge to be transmitted across time and space. The more knowledge we have as individuals and communities, the better our chances of survival and a decent quality of life.

But, although it’s desirable for everyone to be a proficient reader and writer and to have an excellent grasp of collective human knowledge, that’s not necessary in order for each of us to have a decent quality of life. What each community needs is a critical mass of people with good knowledge and skills.

Also, human knowledge is now so vast that no one can be an expert on everything; what’s important is that everyone has access to the expertise they need, when and where they need it.  For centuries, communities have facilitated access to expertise by educating and training experts (from carpenters and builders to doctors and lawyers) who can then share their expertise with their communities.

2. education and training is not just for school

Prior to the development of mass education systems, most children’s and young people’s education and training would have been integrated into the communities in which they lived. They would understand where their new knowledge and skills fitted into the grand scheme of things and how it would benefit them, their families and others. But schools in mass education systems aren’t integrated into communities. The education system has become its own specialism. Children and young people are withdrawn from their community for many hours to be taught whatever knowledge and skills the education system thinks fit. The idea that good exam results will lead to good jobs is expected to provide sufficient motivation for students to work hard at mastering the school curriculum.  Geary recognises that it doesn’t.

For most of the millennia during which cognitive functions have been developing, children and young people have been actively involved in producing food or making goods, and their education and training was directly related to those tasks. Now it isn’t.  I’m not advocating a return to child labour; what I am advocating is ensuring that what children and young people learn in school is directly and explicitly related to life outside school.

Here’s an example: A highlight of the Chemistry O level course I took many years ago was a visit to the nearby Avon (make-up) factory. Not only did we each get a bag of free samples, but in the course of an afternoon the relevance of all that rote learning of industrial applications, all that dry information about emulsions, fat-soluble dyes, anti-fungal additives etc. suddenly came into sharp focus. In addition, the factory was a major local employer and the Avon distribution network was very familiar to us, so the whole end-to-end process made sense.

What’s commonly referred to as ‘academic’ education – fundamental knowledge about how the world works – is vital for our survival and wellbeing as a species. But knowledge about how the world works is also immensely practical. We need to get children and young people out, into the community, to see how their communities apply knowledge about how the world works, and why it’s important. The increasing emphasis in education in the developed world on paper-and-pencil tests, examination results and college attendance is moving the education system in the opposite direction, away from the practical importance of extensive, robust knowledge to our everyday lives.  And Geary appears to go along with that.

3. (not) evaluating the evidence

Broadly speaking, Geary’s model has obvious uses for teachers. There’s considerable supporting evidence for a two-phase model of cognition, ranging from Fodor’s specialised-and-stable versus general-and-unstable distinction to the System 1/System 2 model Daniel Kahneman describes in Thinking, Fast and Slow. Whether the difference between Geary’s biologically primary and secondary knowledge and abilities is as clear-cut as he claims is a different matter.

It’s also well established that in order to successfully acquire the knowledge usually taught in schools, children need the specific abilities that are measured by intelligence tests; that’s why the tests were invented in the first place. And there’s considerable supporting evidence for the reliability and predictive validity of intelligence tests. They clearly have useful applications in schools. But it doesn’t follow that what we call intelligence or g (never mind gF or gC) is anything other than a construct created by the intelligence test.

In addition, the fact that there is evidence that supports Geary’s claims doesn’t mean all his claims are true. There might also be considerable contradictory evidence; in the case of Geary’s two-phase model the evidence suggests the divide isn’t as clear-cut as he suggests, and the reification of intelligence has been widely critiqued. Geary mentions the existence of ‘vigorous debate’ but doesn’t go into details and doesn’t evaluate the evidence by actually weighing up the pros and cons.

Geary’s unquestioning acceptance of the concepts of modularity, intelligence and education systems in the developed world, increases the likelihood that teachers will follow suit and simply accept Geary’s model as a given. I’ve seen the concepts of biologically primary and secondary knowledge and abilities, crystallised intelligence (gC) and fluid intelligence (gF), and the idea that students with low gF who struggle with biologically secondary knowledge just need explicit direct instruction, all asserted as if they must be true – presumably because an academic has claimed they are and cited evidence in support.

This absence of evaluation of the evidence is especially disconcerting in anyone who emphasises the importance of teachers becoming research-savvy and developing evidence-based practice, or who posits models like Geary’s in opposition to the status quo. It is also at odds with the oft-cited requirement for students to acquire robust, extensive knowledge about a subject before they can understand, apply, analyse, evaluate or use it creatively. That requirement applies only to school children, it seems.

references

Fodor, J (1983).  The modularity of mind.  MIT Press.

Geary, D (2007).  Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology, in Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology, JS Carlson & JR Levin (Eds). Information Age Publishing.

Kahneman, D (2012).  Thinking, fast and slow.   Penguin.

evolved minds and education: intelligence

The second vigorously debated area that Geary refers to in Educating the Evolved Mind is intelligence. In the early 1900s statistician Charles Spearman developed a technique called factor analysis. When he applied it to measures of a range of cognitive abilities he found a strong correlation between them, and concluded that there must be some underlying common factor that he called general intelligence (g). General intelligence was later subdivided into crystallised intelligence (gC) resulting from experience, and fluid intelligence (gF) representing a ‘biologically-based ability to acquire skills and knowledge’ (p.25). The correlation has been replicated many times and is reliable –  at the population level, at least.  What’s also reliable is the finding that intelligence, as Robert Plomin puts it “is one of the best predictors of important life outcomes such as education, occupation, mental and physical health and illness, and mortality”.

The first practical assessment of intelligence was developed by French psychologist Alfred Binet, commissioned by his government to devise a way of identifying children in need of remedial education. Binet first published his methods in 1903, the year before Spearman’s famous paper on intelligence. The Binet-Simon scale (Theodore Simon was Binet’s assistant) was introduced to the US and translated into English by Henry H Goddard. Goddard had a special interest in ‘feeble-mindedness’ and used a version of Binet’s scale for a controversial screening test for would-be immigrants. The Binet-Simon scale was standardised for American children by Lewis Terman at Stanford University and published in 1916 as the Stanford-Binet test. Later, the concept of intelligence quotient (IQ – mental age divided by chronological age and multiplied by 100) was introduced, and the rest, as they say, is history.
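The ratio-IQ arithmetic mentioned above is simple enough to sketch (the ages in the example are hypothetical):

```python
def ratio_iq(mental_age, chronological_age):
    """Classic ratio IQ: mental age divided by chronological age, times 100."""
    return round(100 * mental_age / chronological_age)

# A hypothetical 8-year-old performing at the level of a typical 10-year-old:
print(ratio_iq(10, 8))  # 125
# Performing exactly at age level gives, by definition, a score of 100:
print(ratio_iq(8, 8))   # 100
```

The ratio definition was later replaced in most tests by ‘deviation IQ’, scored against the distribution of same-age peers, which is one reason the formula now appears mainly in historical discussions like this one.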

what’s the correlation?

Binet’s original scale was used to identify specific cognitive difficulties in order to provide specific remedial education. Although it has been superseded by tests such as the Wechsler Intelligence Scale for Children (WISC), what all intelligence tests have in common is that they contain a number of sub-tests that test different abilities. The 1905 Binet-Simon scale had 30 sub-tests and the WISC-IV has 15. Although the scores in sub-tests tend to be strongly correlated, Early Years teachers, Educational Psychologists and special education practitioners will be familiar with the child with the ‘spiky profile’ who has high scores on some sub-tests but low ones on others. Their overall IQ might be average, but that can mask considerable variation in cognitive sub-skills. Deidre Lovecky, who runs a resource centre in Providence, Rhode Island for gifted children with learning difficulties, reports in her book Different Minds having to essentially pick ‘n’ mix sub-tests from different assessment instruments because children were scoring at ceiling on some sub-tests and at floor on others. In short, Spearman’s correlation might be true at the population level, but it doesn’t hold for some individuals. And education systems have to educate individuals.
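The ‘spiky profile’ point – that an average overall score can mask wide variation across sub-tests – can be made with two hypothetical children (the sub-test scores below are invented for illustration, not real WISC data):

```python
import statistics

# Invented standard scores (population mean 100) on five sub-tests.
flat_profile  = [100, 98, 102, 101, 99]   # even performance
spiky_profile = [130, 70, 125, 72, 103]   # same average, huge spread

for scores in (flat_profile, spiky_profile):
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)
    print(f"mean={mean:.0f}, sd={spread:.1f}")

# Both profiles average 100, so a single 'overall IQ' figure looks
# identical -- but the spiky child is near ceiling on some sub-tests
# and far below average on others.
```

Both profiles yield the same composite figure; only the spread (or a look at the individual sub-tests) reveals the spikes, which is why a composite IQ alone can mislead for some individuals.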

is it valid?

A number of issues have been vigorously debated in relation to intelligence. One is its construct validity. There’s no doubt intelligence tests measure something – but whether that something is a single biologically determined entity is another matter. We could actually be measuring several biologically determined functions that are strongly dependent on each other. Or some biologically determined functions interacting with culturally determined ones. As the psychologist Edwin Boring famously put it way back in 1923, “intelligence is what the tests test”.

is it cultural?

Another contentious issue is the cultural factors implicit in the tests.  Goddard attempted to measure the ‘intelligence’ of European immigrants using sub-tests that included items culturally specific to the USA.  Stephen Jay Gould goes into detail in his criticism of this and other aspects of intelligence research in his book The Mismeasure of Man.  (Gould himself has been widely criticised so be aware you’re venturing into a conceptual minefield.)  You could just about justify culture-specificity in tests for children who had grown up in a particular culture, on the grounds that understanding cultural features contributed to overall intelligence. But there are obvious problems with the conclusions that can be drawn about gF in the case of children whose cultural background might be different.

I’m not going to venture into bell-curve territory because the vigorous debate in that area is due to how intelligence tests are applied, rather than the content of the tests. Suffice it to say that much of the controversy about application has arisen because of assumptions made about what intelligence tests tell us. The Wikipedia discussion of Herrnstein & Murray’s book is a good starting point if you’re interested in following this up.

multiple intelligences?

There’s little doubt that intelligence tests are valid and reliable measures of the core abilities required to successfully acquire the knowledge and skills taught in schools in the developed industrialised world; knowledge and skills that are taught in schools because they are valued in the developed industrialised world.

But as Howard Gardner points out in his (also vigorously debated) book Frames of mind: The theory of multiple intelligences, what’s considered to be intelligence in different cultures depends on what abilities are valued by different cultures. In the developed industrialised world, intelligence is what intelligence tests measure. If, on the other hand, you live on a remote Pacific Island and are reliant for your survival on your ability to catch fish and navigate across the ocean using only the sun, moon and stars for reference, you might value other abilities. What would those abilities tell you about someone’s ‘intelligence’? Many people place a high value on the ability to kick a football, sing in tune or play stringed instruments; what do those abilities tell you about ‘intelligence’?

it’s all about the constructs

If intelligence tests are a good measure of the abilities necessary for learning what’s taught in school, then fine, let’s use them for that purpose. What we shouldn’t be using them for is drawing conclusions about a speculative entity we’ve named ‘intelligence’. Or assuming, on the basis of those tests, that we can label some people more or less ‘intelligent’ than others, as Geary does, e.g.:

“Intelligent individuals identify and apprehend bits of social and ecological information more easily and quickly than do other people” (p.26)

and

“Individuals with high IQ scores learned the task more quickly than their less-intelligent peers” (p.59)

What concerned me most about Geary’s discussion of intelligence wasn’t what he had to say about accuracy and speed of processing, or about the reliability and predictive validity of intelligence tests, which are pretty well supported. It was the fact that he appears to accept the concepts of g, gC and gF without question. And the ‘vigorous debate’ that’s raged for over a century is reduced to ‘details to be resolved’ (p.25), which doesn’t quite do justice to the furore over the concept, or the devastation resulting from the belief that intelligence is a ‘thing’.  Geary’s apparently unquestioning acceptance of intelligence brings me to the subject of the next post; his model of the education system.

references

Gardner, H (1983). Frames of Mind: The theory of multiple intelligences. Fontana (1993).

Geary, D (2007).  Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology, in Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology, JS Carlson & JR Levin (Eds). Information Age Publishing.

Gould, SJ (1996).  The Mismeasure of Man.  WW Norton.

Lovecky, D V (2004).  Different minds: Gifted children with AD/HD, Asperger Syndrome and other learning deficits.  Jessica Kingsley.

evolved minds and education: evolved minds

At the recent Australian College of Educators conference in Melbourne, John Sweller summarised his talk as follows:  “Biologically primary, generic-cognitive skills do not need explicit instruction.  Biologically secondary, domain-specific skills do need explicit instruction.”

Biologically primary and biologically secondary cognitive skills

This distinction was proposed by David Geary, a cognitive developmental and evolutionary psychologist at the University of Missouri. In a recent blogpost, Greg Ashman refers to a chapter by Geary that sets out his theory in detail.

If I’ve understood it correctly, here’s the idea at the heart of Geary’s model:

*****

The cognitive processes we use by default have evolved over millennia to deal with information (e.g. about predators, food sources) that has remained stable for much of that time. Geary calls these biologically primary knowledge and abilities. The processes involved are fast, frugal, simple and implicit.

But we also have to deal with novel information, including knowledge we’ve learned from previous generations, so we’ve evolved flexible mechanisms for processing what Geary terms biologically secondary knowledge and abilities. The flexible mechanisms are slow, effortful, complex and explicit/conscious.

Biologically secondary processes are influenced by an underlying factor we call general intelligence, or g, related to the accuracy and speed of processing novel information. We use biologically primary processes by default, so they tend to hinder the acquisition of the biologically secondary knowledge taught in schools. Geary concludes the best way for students to acquire the latter is through direct, explicit instruction.

*****

On the face of it, Geary’s model is a convincing one.   The errors and biases associated with the cognitive processes we use by default do make it difficult for us to think logically and rationally. Children are not going to automatically absorb the body of human knowledge accumulated over the centuries, and will need to be taught it actively. Geary’s model is also coherent; its components make sense when put together. And the evidence he marshals in support is formidable; there are 21 pages of references.

However, on closer inspection the distinction between biologically primary and secondary knowledge and abilities begins to look a little blurred. It rests on some assumptions that are the subject of what Geary terms ‘vigorous debate’. Geary notes the debate, but because he plumps for one view without evaluating the supporting evidence or going into detail about competing theories, teachers unfamiliar with the domains in question could easily remain unaware of possible flaws in his model. In addition, Geary adopts a particular cultural frame of reference; essentially that of a developed, industrialised society that places high value on intellectual and academic skills. There are good reasons for adopting that perspective – and equally good reasons for not doing so. In a series of three posts, I plan to examine two concepts that have prompted vigorous debate – modularity and intelligence – and to look at Geary’s cultural frame of reference.

Modularity

The concept of modularity – that particular parts of the brain are dedicated to particular functions – is fundamental to Geary’s model.   Physicians have known for centuries that some parts of the brain specialise in processing specific information. Some stroke patients, for example, have been reported as being able to write but no longer able to read (alexia without agraphia), to be able to read symbols but not words (pure alexia), or to be unable to recall some types of words (anomia). Language isn’t the only ability involving specialised modules; different areas of the brain are dedicated to processing the visual features of, for example, faces, places and tools.

One question that has long perplexed researchers is how modular the brain actually is. Some functions clearly occur in particular locations and in those locations only; others appear to be more distributed. In the early 1980s, Jerry Fodor tackled this conundrum head-on in his book The modularity of mind. What he concluded is that at the perceptual and linguistic level functions are largely modular, i.e. specialised and stable, but at the higher levels of association and ‘thought’ they are distributed and unstable.  This makes sense; you’d want stability in what you perceive, but flexibility in what you do with those perceptions.

Geary refers to the ‘vigorous debate’ (p.12) between those who lean towards specialised brain functions being evolved and modular, and those who see specialised brain functions as emerging from interactions between lower-level stable mechanisms. Although he acknowledges the importance of interaction and emergence during development (pp. 14,18) you wouldn’t know that from Fig 1.2, showing his ‘evolved cognitive modules’.

At first glance, Geary’s distinction between stable biologically primary functions and flexible biologically secondary functions appears to be the same as Fodor’s stable/unstable distinction. But it isn’t.  Fodor’s modules are low-level perceptual ones; some of Geary’s modules in Fig. 1.2 (e.g. theory of mind, language, non-verbal behaviour) engage frontal brain areas used for the flexible processing of higher-level information.

Novices and experts; novelty and automation

Later in his chapter, Geary refers to research involving these frontal brain areas. Two findings are particularly relevant to his modular theory. The first is that frontal areas of the brain are initially engaged whilst people are learning a complex task, but as the task becomes increasingly automated, frontal area involvement decreases (p.59). Second, research comparing experts’ and novices’ perceptions of physical phenomena (p.69) showed that if there is a conflict between what people see and their current schemas, frontal areas of their brains are engaged to resolve the conflict. So, when physics novices are shown a scientifically accurate explanation, or when physics experts are shown a ‘folk’ explanation, both groups experience conflict.

In other words, what’s processed quickly, automatically and pre-consciously is familiar, overlearned information. If that familiar and overlearned information consists of incomplete and partially understood bits and pieces that people have picked up as they’ve gone along, errors in their ‘folk’ psychology, biology and physics concepts (p.13) are unsurprising. But it doesn’t follow that there must be dedicated modules in the brain that have evolved to produce those concepts.

If the familiar overlearned information is, in contrast, extensive and scientifically accurate, the ‘folk’ concepts get overridden and the scientific concepts become the ones that are accessed quickly, automatically and pre-consciously. In other words, the line between biologically primary and secondary knowledge and abilities might not be as clear as Geary’s model implies. Here’s an example: the ability to draw what you see.

The eye of the beholder

Most of us are able to recognise, immediately and without error, the face of an old friend, the front of our own house, or the family car. However, if asked to draw an accurate representation of those items, even if they were in front of us at the time, most of us would struggle. That’s because the processes involved in visual recognition are fast, frugal, simple and implicit; they appear to be evolved, modular systems. But there are people who can accurately draw what they see in front of them; some can do so ‘naturally’, others train themselves to do so, and still others are taught to do so via direct instruction. It looks as if the ability to draw accurately straddles Geary’s biologically primary and secondary divide. The extent to which modules are actually modular is further called into question by recent research involving the fusiform face area (FFA).

Fusiform face area

The FFA is one of the visual processing areas of the brain. It specialises in processing information about faces. What wasn’t initially clear to researchers was whether it processed information about faces only, or whether faces were simply a special case of the type of information it processes. There was considerable debate about this until a series of experiments found that various experts used their FFA for differentiating subtle visual differences within classes of items as diverse as birds, cars, chess configurations, x-ray images, Pokémon, and objects named ‘greebles’ invented by researchers.

What these experiments tell us is that an area of the brain apparently dedicated to processing information about faces is also used to process information about modern artifacts with features that require fine-grained differentiation in order to tell them apart. They also tell us that modules in the brain don’t seem to draw a clear line between biologically primary information such as faces (no explicit instruction required), and biologically secondary information such as x-ray images or fictitious creatures (where initial explicit instruction is required).

What the experiments don’t tell us is whether the FFA evolved to process information about faces and is being co-opted to process other visually similar information, or whether it evolved to process fine-grained visual distinctions, of which faces happen to be the most frequent example most people encounter.

We know that brain mechanisms have evolved and that has resulted in some modular processing. What isn’t yet clear is exactly how modular the modules are, or whether there is actually a clear divide between biologically primary and biologically secondary abilities. Another component of Geary’s model about which there has been considerable debate is intelligence – the subject of the next post.

Incidentally, it would be interesting to know how Sweller developed his summary, because it doesn’t quite map onto a concept of modularity in which the cognitive skills are anything but generic.

References

Fodor, J. (1983). The modularity of mind. MIT Press.

Geary, D. (2007). Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. In J.S. Carlson & J.R. Levin (Eds.), Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. Information Age Publishing.

Acknowledgements

I thought the image was from @greg_ashman’s Twitter timeline but can’t now find it.  Happy to acknowledge correctly if notified.