the view from the signpost: learning styles

Discovering that some popular teaching approaches (Learning Styles, Brain Gym, Thinking Hats) have less-than-robust support from research has prompted teachers to pay more attention to the evidence for their classroom practice. Teachers don’t have much time to plough through complex research findings. What they want are summaries, signposts to point them in the right direction. But research is a work in progress. Findings are often not clear-cut but contradictory, inconclusive or ambiguous. So it’s not surprising that some signposts – ‘do use synthetic phonics’, ‘don’t use Learning Styles’ – often spark heated discussion. The discussions often cover the same ground. In this post, I want to look at some recurring issues in debates about synthetic phonics (SP) and Learning Styles (LS).

Take-home messages

Synthetic phonics is an approach to teaching reading that begins by developing children’s awareness of the phonemes within words, links the phonemes with corresponding graphemes, and uses the grapheme-phoneme correspondence to decode the written word. Overall, the reading acquisition research suggests that SP is the most efficient method we’ve found to date of teaching reading. So the take-home message is ‘do use synthetic phonics’.

What most teachers mean by Learning Styles is a specific model developed by Fleming and Mills (1992) derived from the theory behind Neuro-Linguistic Programming. It proposes that students learn better in their preferred sensory modality – visual, aural, read/write or kinaesthetic (VARK). (The modalities are often reduced in practice to VAK – visual, auditory and kinaesthetic.) But ‘learning styles’ is also a generic term for a multitude of instructional models used in education and training. Coffield et al (2004) identified no fewer than 71 of them. Coffield et al’s evaluation didn’t include the VARK or VAK models, but a close relative – Dunn and Dunn’s Learning Styles Questionnaire – didn’t fare too well when tested against Coffield’s reliability and validity criteria (p.139). Other models did better, including Allinson and Hayes’ Cognitive Styles Index, which met all the criteria.

The take-home message for teachers from Coffield and other reviews is that given the variation in validity and reliability between learning styles models, it isn’t worth teachers investing time and effort in using any learning styles approach to teaching. So far, so good. If the take-home messages are clear, why the heated debate?

Lumping and splitting

‘Lumping’ and ‘splitting’ refer to different ways in which people categorise specific examples; they’re terms used mainly by taxonomists. ‘Lumpers’ tend to use broad categories and ‘splitters’ narrow ones. Synthetic phonics proponents rightly emphasise precision in the way systematic, synthetic phonics (SSP) is used to teach children to read. SSP is a systematic approach, not a scattergun one: it involves building words up from phonemes rather than breaking words down into phonemes, and developing phonemic awareness rather than looking at pictures or word shapes. SSP advocates are ‘splitters’ extraordinaire – in respect of SSP practice at least. Learning styles critics, by contrast, tend to lump all learning styles together, often failing to make a distinction between LS models.

SP proponents also become ‘lumpers’ where other approaches to reading acquisition are concerned. Whether it’s whole language, whole words or mixed methods, it makes no difference… it’s not SSP. And both SSP proponents and LS critics are often ‘lumpers’ in respect of the research behind the particular take-home message they’ve embraced so enthusiastically. So what? Why does lumping or splitting matter?

Lumping all non-SSP reading methods together or all learning styles models together matters because the take-home messages from the research are merely signposts pointing busy practitioners in the right direction, not detailed maps of the territory. The signposts tell us very little about the research itself. Peering at the research through the spectacles of the take-home message is likely to produce a distorted view.

The distorted view from the signpost

The research process consists of several stages, including those illustrated in the diagram below.
theory to application
Each stage might include several elements. Some of the elements might eventually emerge as robust (green), others might turn out to be flawed (red). The point of the research is to find out which is which. At any given time it will probably be unclear whether some components at each stage of the research process are flawed or not. Uncertainty is an integral part of scientific research. The history of science is littered with findings initially dismissed as rubbish that later ushered in a sea-change in thinking, and others that have been greeted as the Next Big Thing that have since been consigned to the trash.

Some of the SP and LS research findings have been contradictory, inconclusive or ambiguous. That’s par for the course. Despite the contradictions, unclear results and ambiguities, there might be general agreement about which way the signposts for practitioners are pointing. That doesn’t mean it’s OK to work backwards from the signpost and make assumptions about the research. In the diagram, there’s enough uncertainty in the research findings to put a question mark over all potential applications. But all that question mark itself tells us is that there’s uncertainty involved. A minor tweak to the theory could explain the contradictory, inconclusive or ambiguous results and then it would be green lights all the way down.

But why does that matter to teachers? It’s the signposts that are important to them, not the finer points of research methodology or statistical analysis. It matters because some of the teachers who are the most committed supporters of SP or critics of LS are also the most vociferous advocates of evidence-based practice.

Evidence: contradictory, inconclusive or ambiguous?

Decades of research into reading acquisition broadly support the use of synthetic phonics for teaching reading, although many of the findings are ambiguous. One example is the study carried out in Clackmannanshire by Rhona Johnston and Joyce Watson. The overall conclusion is that SP leads to big improvements in reading and spelling, but closer inspection of the results shows they are not entirely clear-cut, and the study’s methodology has been criticised. But you’re unlikely to know that if you rely on SP advocates for an evaluation of the evidence. Personally, I can’t see a problem with saying ‘the research evidence broadly supports the use of synthetic phonics for teaching reading’ and leaving it at that.

The evidence relating to learning styles models is also not watertight, although in this case, it suggests they are mostly not effective. But again, you’re unlikely to find out about the ambiguities from learning styles critics. Tom Bennett, for example, doesn’t like learning styles – as he makes abundantly clear in a TES blog post entitled “Zombie bølløcks: World War VAK isn’t over yet.”

The post is about the VAK Learning Styles model. But in the ‘Voodoo teaching’ chapter of his book Teacher Proof, Bennett concludes of learning styles in general: “it is of course, complete rubbish as far as I can see” (p.147). Then he hedges his bets in a footnote: “IN MY OPINION”.

Tom’s an influential figure – government behaviour adviser, driving force behind the ResearchEd conferences and a frequent commentator on educational issues in the press. He’s entitled to lump together all learning styles models if he wants to, and to write colourful opinion pieces about them if he gets the chance, but presenting the evidence in terms of his opinion, and omitting evidence that doesn’t support it, is misleading. It’s also at odds with an evidence-based approach to practice. Saying there’s mixed evidence for the effectiveness of learning styles models doesn’t take more words than implying there’s none.

So why don’t supporters in the case of SP, or critics in the case of LS, say what the evidence says, rather than what the signposts say? I’d hazard a guess it’s because they’re worried that teachers will see contradictory, inconclusive or ambiguous evidence as providing a loophole that gives them licence to carry on with their pet pedagogies regardless. But the risk of looking at the signpost rather than the evidence is that one set of dominant opinions will be replaced by another.

In the next few posts, I’ll be looking more closely at the learning styles evidence and what some prominent critics have to say about it.

Note:

David Didau responded to my thoughts about signposts and learning styles on his blog. Our discussion in the comments section revealed that he and I use the term ‘evidence’ to mean different things. Using words in different ways could explain everything.

References
Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Fleming, N. & Mills, C. (1992). Not another invention, rather a catalyst for reflection. To Improve the Academy. Professional and Organizational Development Network in Higher Education. Paper 246.


there’s more to working memory than meets the eye

I’ve had several conversations on Twitter with Peter Blenkinsop about learning and the brain. At the ResearchEd conference on Saturday, we continued the conversation and discovered that much of our disagreement was because we were using different definitions of learning. Peter’s definition is that learning involves being able to actively recall information; mine is that it involves changes to the brain in response to information.

working memory

Memory is obviously essential to learning. One thing that’s emerged clearly from years of research into how memory works is that the brain retains information for a very short time in what’s known as working memory, and indefinitely in what’s called long-term memory – but that’s not all there is to it. I felt that advocates of direct instruction at the conference were relying on a model of working memory that was oversimplified and could be misleading. The diagram they were using looked like this:

simple model of memory

This model is attributed to Daniel Willingham. From what the teachers were saying, the diagram is simpler than most current representations of working memory because its purpose is to illustrate three key points:

• the capacity of working memory is limited and it holds information for a short time
• information in long-term memory is available for recall indefinitely and
• information can be transferred from working memory to long-term memory and vice versa.

So far, so good.

My reservation about the diagram is that if it’s the only diagram of working memory you’ve ever seen, you might get the impression that it shows the path information follows when it’s processed by the brain. From it you might conclude that:

• information from the environment goes directly into working memory
• if you pay attention to that information, it will be stored permanently in long-term memory
• if you don’t pay attention to it, it will be lost forever, and
• there’s a very low limit to how much information from the environment you can handle at any one time.

But that’s not quite what happens to information coming into the brain. As Peter pointed out during our conversation, simplifying things appropriately is challenging; you want to simplify enough to avoid confusing people, but not so much that they might misunderstand.

In this post, I’m going to try to explain the slightly bigger picture of how brains process information, and where working memory and long-term memory fit in.

sensory information from the external environment

All information from the external environment comes into the brain via the sense organs. The incoming sensory information is on a relatively large scale, particularly if it’s visual or auditory information; you can see an entire classroom at once and hear simultaneously all the noises emanating from it. But individual cells within the retina or the cochlea respond to tiny fragments of that large-scale information: lines at different angles, areas of light and dark and colour, minute changes in air pressure. Information from the fragments is transmitted via tiny electrical impulses from the sense organs to the brain. The brain then chunks the fragments together to build larger-scale representations that closely match the information coming in from the environment. As a result, what we perceive is a fairly accurate representation of what’s actually out there. I say ‘fairly accurate’ because perception isn’t 100% accurate, but that’s another story.

chunking

The chunking of sensory information takes place via networks of interconnected neurons (long spindly brain cells). The brain forms physical connections (synapses) between neighbouring neurons in response to novel information. The connections allow electrical activation to pass from one neuron to another. The connections work on a use-it-or-lose-it principle: the more they are used, the stronger they get; if they’re not used much, they weaken and disappear. Not surprisingly, toddlers have vast numbers of connections, but that number diminishes considerably during childhood and adolescence. That doesn’t mean we have to keep remembering everything we ever learned or we’ll forget it; it’s a way of ensuring that the brain can process efficiently the types of information from the environment that it’s most likely to encounter.
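The use-it-or-lose-it principle can be sketched as a toy update rule, loosely in the spirit of Hebbian learning. This is an illustrative analogy only – the function, constants and pruning threshold below are all invented, and real synaptic plasticity is far more complex:

```python
# Toy 'use it or lose it' rule for a single connection (hypothetical
# constants; not a model of real synapses).

def update_strength(strength, used, gain=0.2, decay=0.05, prune_at=0.01):
    """Strengthen a connection when it's used; let it decay when it isn't.
    Returns None once the connection is weak enough to be pruned."""
    if used:
        strength += gain * (1.0 - strength)  # push towards a ceiling of 1.0
    else:
        strength *= 1.0 - decay              # gradual weakening
    return None if strength < prune_at else strength

# A frequently used connection gets stronger...
s = 0.5
for _ in range(10):
    s = update_strength(s, used=True)

# ...while an unused one weakens and is eventually pruned (returns None).
w = 0.5
while w is not None:
    w = update_strength(w, used=False)
```

The asymmetry between the two loops is the point: repetition strengthens, disuse prunes.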

working memory

Broadly speaking, incoming sensory information is processed in the brain from the back towards the front. It’s fed forward into areas that Alan Baddeley has called variously a ‘loop’, ‘sketchpad’ and ‘buffer’. Whatever you call them, they are areas where very limited amounts of information can be held for very short periods while we decide what to do with it. Research evidence suggests there are different loops/sketchpads/buffers for different types of sensory information – for example Baddeley’s most recent model of working memory includes temporary stores for auditory, visuospatial and episodic information.

Baddeley’s working memory model

The incoming information held briefly in the loops/sketchpads/buffers is fed forward again to frontal areas of the brain where it’s constantly monitored by what’s called the central executive – an area that deals with attention and decision-making. The central executive and the loops/sketchpads/buffers together make up working memory.

long-term memory

The information coming into working memory activates the more permanent neural networks that carry information relevant to it – what’s called long-term memory. The neural networks that make up long-term memory are distributed throughout the brain. Several different types of long-term memory have been identified but the evidence points increasingly to the differences being due to where neural networks are located, not to differences in the biological mechanisms involved.

Information in the brain is carried in the pattern of connections between neurons. The principle is similar to the way pixels represent information on a computer screen; that information is carried in the patterns of pixels that are activated. This makes computer screens – and brains – very versatile; they can carry a huge range of different types of information in a relatively small space. One important difference between the two processes is that pixels operate independently, whereas brain cells form physical connections if they are often activated at the same time. The connections allow fast, efficient processing of information that’s encountered frequently.

For example, say I’m looking out of my window at a pigeon. The image of the pigeon falling on my retina will activate the neural networks in my brain that carry information about pigeons; what they look like, sound like, feel like, their flight patterns and feeding habits. My thoughts might then wander off on to related issues; other birds in my garden, when to prune the cherry tree, my neighbour repairing her fence. If I glance away from the pigeon and look at my blank computer screen, other neural networks will be activated, those that carry information about computers, technology, screens and rectangles in general. I will no longer be thinking about pigeons, but my pigeon networks will still be active enough for me to recall that I was looking at a pigeon previously and I might glance out of the window to see if it is still there.
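The pigeon example can be sketched in code: the same small set of ‘units’ can carry different items, each represented purely as a pattern of activation, and a partial cue reactivates the best-matching pattern. Everything here – the patterns, the overlap rule – is a deliberately crude illustration, not a model of real neural networks:

```python
# Items stored as patterns of activation over the same eight 'units',
# analogous to pixels carrying different images on one screen.
patterns = {
    "pigeon":   {0, 2, 5},  # hypothetical activation patterns
    "computer": {1, 2, 7},
    "fence":    {3, 4, 6},
}

def recall(active_units):
    """Return the stored item whose pattern overlaps the cue the most."""
    return max(patterns, key=lambda item: len(patterns[item] & active_units))

# A partial cue still activates the right network.
print(recall({0, 5}))  # 'pigeon'
```

Note that ‘pigeon’ and ‘computer’ share a unit (2) yet remain distinct items – the information is in the whole pattern, not in any single unit.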

Every time my long-term neural networks are activated by incoming sensory information, they are updated. If the same information comes in repeatedly the connections within the network are strengthened. What’s not clear is how much attention needs to be paid to incoming information in order for it to update long-term memory. Large amounts of information about the changing environment are flowing through working memory all the time, and evidence from brain-damaged patients suggests that long-term memory can be changed even if we’re not paying attention to the information that activates it.

the central executive

Incoming sensory information and information from long-term memory are fed forward to the central executive. The function of the central executive is a bit like that of a CCTV control room. According to Antonio Damasio, it monitors, evaluates and responds to information from three main sources:

• the external environment (sensory information)
• the internal environment (body states) and
• previous representations of the external and internal environments (carried in the pattern of connections in neural networks).

One difference is that the loops/sketchpads/buffers and the system that monitors them consist of networks of interconnected neurons, not TV screens (obviously). Another is that there isn’t anybody watching the brain’s equivalent of the CCTV screens – it’s an automated process. We become aware of information in the loops/sketchpads/buffers only if we need to be aware of it – so we are usually conscious of what’s happening in the external environment, and of any significant internal or external changes.

The central executive constantly compares the streams of incoming information, and responds to them via networks of neurons that feed back information to other areas of the brain. If the environment has changed significantly, or an interesting or threatening event occurs, or we catch sight of something moving on the periphery of our field of vision, or experience sudden discomfort or pain, the feedback from the central executive ensures that we pay attention to that, rather than anything else. It’s important to note that information from the body includes information about our overall physiological state, including emotions.

So a schematic general diagram of how working memory fits in with information processing in the brain would look something like this:

how working memory fits in with information processing in the brain

It’s important to note that we still don’t have a clear map of the information processing pathways. Researchers keep coming across different potential loops/sketchpads/buffers and there’s evidence that the feedback and feed-forward pathways are more complex than this diagram shows.

I began this post by suggesting that an over-simplified model of working memory could be misleading. I’ll explain my reasons in more detail in the next post, but first I want to highlight an important implication of the way incoming sensory information is handled by the brain.

pre-conscious processing

A great deal of sensory information is processed by the brain pre-consciously. Advocates of direct instruction emphasise the importance of chunking information because it increases the capacity of working memory. A popular example is the way expert chess players can hold simultaneously in working memory several different configurations of chess pieces, chunking being seen as something ‘experts’ do. But it’s important to remember that the brain chunks information automatically if we’re exposed to it frequently enough. That’s how we recognise faces, places and things – most three year-olds are ‘experts’ in their day-to-day surroundings because they have had thousands of exposures to familiar faces, places and things. They don’t have to sit down and study these things in order to chunk the fragments of information that make up faces, places and things – their visual cortex does it automatically.
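The capacity point can be made concrete with a toy example: the same sixteen digits overwhelm working memory as raw items, but reduce to four familiar chunks once they are recognised as dates (the digit string and the grouping here are purely illustrative):

```python
# Chunking: the same information held as fewer, larger units.
digits = "1914191819391945"

raw_items = list(digits)  # 16 separate items to hold in working memory

# Recognised as four familiar dates, it's only 4 chunks.
chunks = [digits[i:i + 4] for i in range(0, len(digits), 4)]

print(len(raw_items))  # 16
print(chunks)          # ['1914', '1918', '1939', '1945']
```

The chunked version only helps if the dates are already familiar – which is the sense in which a three-year-old is already an ‘expert’ at the faces and places they see every day.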

This means that a large amount of information going through young children’s working memory is already chunked. We don’t know to what extent the central executive has to actively pay attention to that information in order for it to change long-term memory, but pre-conscious chunking does suggest that a good deal of learning happens implicitly. I’ll comment on this in more detail in my next post.