All snowflakes are unique: comments on ‘What every teacher needs to know about psychology’ (David Didau & Nick Rose)

This book and I didn’t get off to a good start. The first sentence of Part 1 (Learning and Thinking) raised a couple of red flags: “Learning and thinking are terms that are used carelessly in education.” The second sentence raised another one: “If we are to discuss the psychology of learning then it makes sense to begin with precise definitions.”   I’ll get back to the red flags later.

Undeterred, I pressed on, and I’m glad I did. Apart from the red flags and a few quibbles, I thought the rest of the book was great. The scope is wide and the research is up-to-date but set in historical context. The three parts – Learning and Thinking, Motivation and Behaviour, and Controversies – provide a comprehensive introduction to psychology for teachers or, for that matter, anyone else. Each of the 26 chapters is short and clearly focussed, has a “what every teacher needs to know about…” summary, and is well-referenced. The voice is right too; David Didau and Nick Rose have provided a psychology-for-beginners, written for grown-ups.

The quibbles? References that were in the text but not in the references section, or vice versa. A rather basic index. And I couldn’t make sense of the example on p.193 about energy conservation, until it dawned on me that a ‘re’ was missing from ‘reuse’. All easily addressed in a second edition, which this book deserves. A bigger quibble was the underlying conceptual framework adopted by the authors. This is where the red flags come in.

The authors are clear about why they’ve written the book and what they hope it will achieve. What they are less clear about are the implicit assumptions they make as a result of their underlying conceptual framework. I want to look at three of those implicit assumptions: about precise definitions, the school population and psychological theory.

precise definitions

The first two sentences of Part 1 are:

“Learning and thinking are terms that are used carelessly in education. If we are to discuss the psychology of learning then it makes sense to begin with precise definitions.” (p.14)

What the authors imply (or at least what I inferred) is that there are precise definitions of learning and thinking. They reinforce their point by providing some. Now, ‘carelessly’ is a somewhat pejorative term. It might be fair to use it if there is a precise definition of learning and there is a precise definition of thinking, but people just can’t be bothered to use them. But if there isn’t a single precise definition of either…

I’d say terms such as ‘learning’, ‘thinking’, ‘teaching’, ‘education’ etc. (the list is a long one) are used loosely rather than carelessly. ‘Learning’ and ‘thinking’ are constructs that are more complex and fuzzier than, say, metres or molar solutions. In marked contrast to the way ‘metre’ and ‘molar solution’ are used, people use ‘learning’ and ‘thinking’ to refer to different things in different contexts. What they’re referring to is usually made clear by the context. For example, most people would consider it reasonable to talk about “what children learn in schools” even if much of the material taught in schools doesn’t meet Didau and Rose’s criteria of retention, transfer and change (p.14). Similarly, it would be considered fair use of the word ‘thinking’ for someone to say “I was thinking about swimming”, if what they were referring to was pleasant mental images of themselves floating in the Med, rather than the authors’ definition of a conscious, active, deliberative, cognitive “struggle to get from A to B”.

Clearly, there are situations where context isn’t enough, and precise definitions of terms such as ‘learning’ and ‘thinking’ are required; empirical research is a case in point. And researchers in most knowledge domains (maybe education is an exception) usually address this requirement by stating explicitly how they have used particular terms: “by learning we mean…” or “we use thinking to refer to…”. Or they avoid the use of umbrella terms entirely. In short, for many terms there isn’t one precise definition. The authors acknowledge this when they refer to “two common usages of the term ‘thinking’”, but still try to come up with one precise definition (p.15).

Why does this matter? It matters because if it’s assumed there is a precise definition for labels representing multi-faceted, multi-component processes that people use in different ways in different circumstances, a great deal of time can be wasted arguing about what that precise definition is. It would make far more sense simply to be explicit about how we’re using the term for a particular purpose, or exactly which facet or component we’re referring to.

Exactly this problem arises in the discussion about restorative justice programmes (p.181). The authors complain that restorative justice programmes are “difficult to define and frequently implemented under a variety of different names…” Those challenges could be avoided by not trying to define restorative justice at all, but by people being explicit about how they use the term – or by using different terms for different programmes.

Another example is ‘zero tolerance’ (p.157). This term is usually used to refer to strict, inflexible sanctions applied in response to even the most minor infringements of rules; the authors cite as examples schools using ‘no excuses’ policies. However, zero tolerance is also associated with the broken windows theory of crime (Wilson & Kelling, 1982); that if minor misdemeanours are overlooked, antisocial behaviour will escalate. The broken windows theory does not advocate strict, inflexible sanctions for minor infringements, but rather a range of preventative measures and proportionate sanctions to avoid escalation. Historically, evidence for the effectiveness of both approaches is mixed, so the authors are right to be cautious in their conclusions.

What I want to emphasise is that there isn’t a single precise definition of learning, thinking, restorative justice, zero tolerance, or many other terms used in the education system, so trying to develop one is like trying to define apples-and-oranges. To avoid going down that path, we simply need to be explicit about what we’re actually talking about. As Didau and Rose themselves point out, “simply lumping things together and giving them the same name doesn’t actually make them the same” (p.266).

all snowflakes are unique

Another implicit assumption emerges in chapter 25, about individual differences:

Although it’s true that all snowflakes are unique, this tells us nothing about how to build a snowman or design a better snowplough. For all their individuality, useful applications depend on the underlying physical and chemical similarities of snowflakes. The same applies to teaching children. Of course all children are unique…however, for all their individuality, any application of psychology to teaching is typically best informed by understanding the underlying similarities in the way children learn and develop, rather than trying to apply ill-fitting labels to define their differences. (p. 254)

For me, this analogy raises the question of what the authors see as the purpose of education, and it completely ignores the nomothetic/idiographic (tendency to generalise vs tendency to specify) tension that has been a challenge for psychology since its inception. It’s true that education contributes to building communities of individuals who have many similarities, but our evolution as a species, and our success at colonising such a wide range of environments, hinge on our differences. And the purpose of education doesn’t stop at the community level. It’s also about the education of individuals; this is recognised in the 1996 Education Act (borrowing from the 1944 Education Act), which expects a child’s education to be suitable to them as an individual, for the simple reason that if it isn’t suitable, it won’t be effective. Children are people who are part of communities, not units to be built into an edifice of their teachers’ making, or to be shovelled aside if they get in the way of the education system’s progress.

what’s the big idea?

Another major niggle for me was how the authors evaluate theory. I don’t mean the specific theories tested by the psychological research they cite; that would be beyond the scope of the book. Also, if research has been peer-reviewed and there’s no huge controversy over it, there’s no reason why teachers shouldn’t go ahead and apply the findings. My concern is about the broader psychological theories that frame psychologists’ thinking and influence what research is carried out (or not) and how. Didau and Rose demonstrate they’re capable of evaluating theoretical frameworks, but their evaluation looked a bit uneven to me.

For example, they note “there are many questions” relating to Jean Piaget’s theory of cognitive development (pp.221-223), but BF Skinner’s behaviourist model (pp.152-155) has been “much misunderstood, and often unfairly maligned”. Both observations are true, but because there are pros and cons to each of the theories, I felt the authors’ biases were showing. And David Geary’s somewhat speculative model of biologically primary and secondary knowledge and ability is cited uncritically at least a dozen times, overlooking the controversy surrounding two of its major assumptions – modularity and intelligence. The authors are up-front about their “admittedly biased view”.

evolved minds and education: evolved minds

At the recent Australian College of Educators conference in Melbourne, John Sweller summarised his talk as follows:  “Biologically primary, generic-cognitive skills do not need explicit instruction.  Biologically secondary, domain-specific skills do need explicit instruction.”

[Image (sweller.png): Biologically primary and biologically secondary cognitive skills]

This distinction was proposed by David Geary, a cognitive developmental and evolutionary psychologist at the University of Missouri. In a recent blogpost, Greg Ashman refers to a chapter by Geary that sets out his theory in detail.

If I’ve understood it correctly, here’s the idea at the heart of Geary’s model:

*****

The cognitive processes we use by default have evolved over millennia to deal with information (e.g. about predators, food sources) that has remained stable for much of that time. Geary calls these biologically primary knowledge and abilities. The processes involved are fast, frugal, simple and implicit.

But we also have to deal with novel information, including knowledge we’ve learned from previous generations, so we’ve evolved flexible mechanisms for processing what Geary terms biologically secondary knowledge and abilities. The flexible mechanisms are slow, effortful, complex and explicit/conscious.

Biologically secondary processes are influenced by an underlying factor we call general intelligence, or g, related to the accuracy and speed of processing novel information. We use biologically primary processes by default, so they tend to hinder the acquisition of the biologically secondary knowledge taught in schools. Geary concludes the best way for students to acquire the latter is through direct, explicit instruction.

*****

On the face of it, Geary’s model is a convincing one.   The errors and biases associated with the cognitive processes we use by default do make it difficult for us to think logically and rationally. Children are not going to automatically absorb the body of human knowledge accumulated over the centuries, and will need to be taught it actively. Geary’s model is also coherent; its components make sense when put together. And the evidence he marshals in support is formidable; there are 21 pages of references.

However, on closer inspection the distinction between biologically primary and secondary knowledge and abilities begins to look a little blurred. It rests on some assumptions that are the subject of what Geary terms ‘vigorous debate’. Geary does note the debate, but because he plumps for one view, doesn’t evaluate the supporting evidence, and doesn’t go into detail about competing theories, teachers unfamiliar with the domains in question could easily remain unaware of possible flaws in his model. In addition, Geary adopts a particular cultural frame of reference; essentially that of a developed, industrialised society that places high value on intellectual and academic skills. There are good reasons for adopting that perspective; and equally good reasons for not doing so. In a series of three posts, I plan to examine two concepts that have prompted vigorous debate – modularity and intelligence – and to look at Geary’s cultural frame of reference.

Modularity

The concept of modularity – that particular parts of the brain are dedicated to particular functions – is fundamental to Geary’s model. Physicians have known for centuries that some parts of the brain specialise in processing specific information. Some stroke patients, for example, have been reported as being able to write but no longer able to read (alexia without agraphia), to be able to read symbols but not words (pure alexia), or to be unable to recall some types of words (anomia). Language isn’t the only ability involving specialised modules; different areas of the brain are dedicated to processing the visual features of, for example, faces, places and tools.

One question that has long perplexed researchers is how modular the brain actually is. Some functions clearly occur in particular locations and in those locations only; others appear to be more distributed. In the early 1980s, Jerry Fodor tackled this conundrum head-on in his book The modularity of mind. What he concluded is that at the perceptual and linguistic level functions are largely modular, i.e. specialised and stable, but at the higher levels of association and ‘thought’ they are distributed and unstable.  This makes sense; you’d want stability in what you perceive, but flexibility in what you do with those perceptions.

Geary refers to the ‘vigorous debate’ (p.12) between those who lean towards specialised brain functions being evolved and modular, and those who see specialised brain functions as emerging from interactions between lower-level stable mechanisms. Although he acknowledges the importance of interaction and emergence during development (pp. 14, 18), you wouldn’t know that from Fig 1.2, showing his ‘evolved cognitive modules’.

At first glance, Geary’s distinction between stable biologically primary functions and flexible biologically secondary functions appears to be the same as Fodor’s stable/unstable distinction. But it isn’t.  Fodor’s modules are low-level perceptual ones; some of Geary’s modules in Fig. 1.2 (e.g. theory of mind, language, non-verbal behaviour) engage frontal brain areas used for the flexible processing of higher-level information.

Novices and experts; novelty and automation

Later in his chapter, Geary refers to research involving these frontal brain areas. Two findings are particularly relevant to his modular theory. The first is that frontal areas of the brain are initially engaged whilst people are learning a complex task, but as the task becomes increasingly automated, frontal area involvement decreases (p.59). Second, research comparing experts’ and novices’ perceptions of physical phenomena (p.69) showed that if there is a conflict between what people see and their current schemas, frontal areas of their brains are engaged to resolve the conflict. So, when physics novices are shown a scientifically accurate explanation, or when physics experts are shown a ‘folk’ explanation, both groups experience conflict.

In other words, what’s processed quickly, automatically and pre-consciously is familiar, overlearned information. If that familiar and overlearned information consists of incomplete and partially understood bits and pieces that people have picked up as they’ve gone along, errors in their ‘folk’ psychology, biology and physics concepts (p.13) are unsurprising. But it doesn’t follow that there must be dedicated modules in the brain that have evolved to produce those concepts.

If the familiar overlearned information is, in contrast, extensive and scientifically accurate, the ‘folk’ concepts get overridden and the scientific concepts become the ones that are accessed quickly, automatically and pre-consciously. In other words, the line between biologically primary and secondary knowledge and abilities might not be as clear as Geary’s model implies. Here’s an example: the ability to draw what you see.

The eye of the beholder

Most of us are able to recognise, immediately and without error, the face of an old friend, the front of our own house, or the family car. However, if asked to draw an accurate representation of those items, even if they were in front of us at the time, most of us would struggle. That’s because the processes involved in visual recognition are fast, frugal, simple and implicit; they appear to be evolved, modular systems. But there are people who can draw accurately what they see in front of them; some can do so ‘naturally’, others train themselves to do so, and still others are taught to do so via direct instruction. It looks as if the ability to draw accurately straddles Geary’s biologically primary and secondary divide. The extent to which modules are actually modular is further called into question by recent research involving the fusiform face area (FFA).

Fusiform face area

The FFA is one of the visual processing areas of the brain. It specialises in processing information about faces. What wasn’t initially clear to researchers was whether it processed information about faces only, or whether faces were simply a special case of the type of information it processes. There was considerable debate about this until a series of experiments found that various experts used their FFA for differentiating subtle visual differences within classes of items as diverse as birds, cars, chess configurations, x-ray images, Pokémon, and objects named ‘greebles’ invented by researchers.

What these experiments tell us is that an area of the brain apparently dedicated to processing information about faces is also used to process information about modern artifacts with features that require fine-grained differentiation in order to tell them apart. They also tell us that modules in the brain don’t seem to draw a clear line between biologically primary information such as faces (no explicit instruction required) and biologically secondary information such as x-ray images or fictitious creatures (where initial explicit instruction is required).

What the experiments don’t tell us is whether the FFA evolved to process information about faces and is being co-opted to process other visually similar information, or whether it evolved to process fine-grained visual distinctions, of which faces happen to be the most frequent example most people encounter.

We know that brain mechanisms have evolved and that has resulted in some modular processing. What isn’t yet clear is exactly how modular the modules are, or whether there is actually a clear divide between biologically primary and biologically secondary abilities. Another component of Geary’s model about which there has been considerable debate is intelligence – the subject of the next post.

Incidentally, it would be interesting to know how Sweller developed his summary because it doesn’t quite map on to a concept of modularity in which the cognitive skills are anything but generic.

References

Fodor, J (1983). The modularity of mind. MIT Press.

Geary, D (2007). Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. In JS Carlson & JR Levin (Eds), Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. Information Age Publishing.

Acknowledgements

I thought the image was from @greg_ashman’s Twitter timeline but can’t now find it.  Happy to acknowledge correctly if notified.