Can’t help it, root causes and strict discipline: part 2

The second of two posts analysing Old Andrew’s view of the behaviour of children with special educational needs.

special educational needs

In the Can’t Help It model that Old Andrew satirises in Charlie and the Inclusive Chocolate Factory, special educational needs (SEN) are conflated with disability. The child is seen as “ill with ADHD” or “on the autistic spectrum”. And we’ve all seen discussions about whether children ‘really have SEN’. According to one newspaper a 2010 Ofsted report claimed that “many of these pupils did not actually suffer from any physical, emotional or educational problems”.

The SEND Code of Practice defines special educational needs in terms of the “facilities of a kind generally provided for others of the same age in mainstream schools or mainstream post-16 institutions” (p.14). In other words, the definition of SEN is a piece of string. If the facilities generally provided are brilliant, there will be hardly any children with SEN. If they are generally inadequate, there will be many children with SEN.

special educational needs and disability

Another post referred to by Old Andrew is The Blameless Part 3: the Afflicted.   He again pillories Can’t Help It as assuming “if a child is behaving badly in a lesson they must secretly be unable to do the work, and that the most likely reason a child might be unable to keep up with their peers is some form of disability or illness”.

Andrew asks why “a child unable to do their school work would misbehave rather than simply say they couldn’t do it”. This completely overlooks communication difficulties, ranging from children being physically unable to put the words together when under stress, to feeling intense apprehension about the consequences of drawing the problem to the teacher’s attention in public, such as jeers from peers or the teacher saying ‘you can do it, you’re just not trying’ (I’ve lost count of the number of times I’ve heard that statement masquerading as ‘high expectations’).

The second charge Andrew levels at Can’t Help It is the assumption that “medical or psychological conditions directly cause involuntary incidents of poor behaviour.” Leaving aside the question of who decides what constitutes poor behaviour, Andrew draws attention to the circular reasoning that Can’t Help It entails. If a medical or psychological condition is defined in terms of behaviour, then the behaviour must be explained in terms of a medical or psychological condition.

That’s a fair criticism, but it doesn’t mean there are no medical or psychological conditions involved. Old Andrew goes on to question the existence of ‘proprioception disorder’, linking it, bizarrely, to a Ship of Fools definition of purgatory. Impaired proprioception is well established scientifically. A plausible mechanism is the variation in function of the different kinds of sensory receptor in the skin and muscles. (The best description of it I’ve found is in the late great Donna Williams’ book Nobody Nowhere.) Whether Andrew has heard of ‘proprioception disorder’, and whether or not it’s formally listed as a medical disorder, are irrelevant to whether variations in proprioceptive function are causal factors in children’s behaviour.

It’s the Can’t Help It model that has led, in Andrew’s opinion, to a “Special Needs racket”. I’d call it a mess rather than a racket, but a mess it certainly is.  And it’s not just about children who don’t have ‘genuine disabilities’.  Mainstream teachers are expected to teach 98% of the school population but most are trained to teach only the 70% in the middle range. If teachers don’t have the relevant expertise to teach the 15% or so of children whose performance, for whatever reason, is likely to be more than one standard deviation below average, it’s hardly surprising that they label those children as having special educational needs and expect local authorities to step in with additional resources.
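The “15% or so” figure follows straight from the normal distribution: roughly 15.9% of a normally distributed population falls more than one standard deviation below the mean. Here’s a quick sketch of the arithmetic (my own illustration, using Python’s standard library, not anything from Andrew’s posts):

```python
from statistics import NormalDist

# Fraction of a normal distribution lying more than one
# standard deviation below the mean.
fraction_below = NormalDist().cdf(-1)
print(f"{fraction_below:.1%}")  # prints 15.9%
```

The exact percentage depends, of course, on performance actually being normally distributed, which for many measures it only approximately is.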

children as moral agents

Old Andrew questions an assumption he thinks is implicit in Can’t Help It – that the child is ‘naturally good’. I think he’s right to question it, not because children are or are not naturally good, but because morality is only tangentially relevant to what kinds of behaviours teachers want or don’t want in their classrooms, and completely irrelevant to whether or not children can meet those expectations. The good/bad behavioural continuum is essentially a moral one, and thus open to debate.

The third post Old Andrew linked to was Needs. He suggests that framing behaviour in terms of needs “absolves people of responsibility for their actions”. He points out the difficulty of determining what children’s needs are and how to meet them, and goes on to consider an ‘extreme example’ of a school discovering that many of its pupils were starving. If the school feels it has a moral duty to the children, it would feed all those who were starving. But if the school attributes bad behaviour to going without food, it would “cease looking for the most famished child to feed first and start feeding the worst behaved… We would be rewarding the worst behaved child with something they wanted”. Andrew concludes: “Imagine how more contentious other types of ‘help’ (like extra attention, free holidays, help in lessons or immunity from punishment) might be… Whenever we view human beings as something other than moral agents we are likely to end up advocating solutions which are in conflict with both our consciences and our knowledge of the human mind.”

Andrew has raised some valid points about how we figure out what needs are and how they are best met, and about the Can’t Help It model. But his alternative is to frame behaviour in terms of a simplistic behaviourist model (reward and punishment), and human beings as moral agents with consciences and minds. In short, his critique, and the alternative he posits, are based on his beliefs. He’s entitled to hold those beliefs, of course, but they don’t necessarily form an adequate framework for determining what behaviour schools want, what behaviour is most beneficial to most children in the short and long term, or how schools should best address the behaviour of children with special educational needs (as legally defined).

Andrew seems to view children as moral agents who can control their behaviour regardless of what disability they might have. The moral agents aspect of his model rests on unsupported assumptions about human nature. The behavioural control aspect is called into question by research indicating that the frontal lobes of the brain don’t fully mature until the early 20s. Moral agency and behavioural control in young people are controversial topics.

conclusion

The Can’t Help It model is obviously flawed and the Strict Discipline model rests on questionable assumptions. The Root Cause model, in contrast, recognises that preventing unwanted behaviours might require an analysis of the behaviour expected of children, and the reasons children aren’t meeting those expectations. It’s an evidence-based model. It doesn’t rest on beliefs or absolve children of all responsibility. It can identify environmental factors that contribute to unwanted behaviour, and can provide children with strategies that increase their ability to control what they do.  To me, it looks like the only model that’s likely to be effective.

Can’t help it, root causes and strict discipline: part 1

This week Old Andrew, renowned education blogger, has drawn attention to some of his old posts about children with special educational needs. He identifies two conceptual models that focus on children’s behaviour – and overlooks a third.  In this post, I describe the models and why teachers might adopt one and not the others.

the model muddle

In Charlie and the Inclusive Chocolate Factory Andrew satirises a particular conceptual model of children’s behaviour. I’ll call  the model Can’t Help It. This view is that children identified as having special educational needs, or those from deprived backgrounds, are not responsible for behaving in ways that are unwanted by those around them. The Can’t Help It model is the one that claims criminals abused or neglected in childhood can’t be held responsible for breaking the law. I don’t doubt it’s a view held by some people, and I can understand the temptation to satirise it. It’s flawed because almost everyone could identify some adverse experience in childhood that explains why they behave in ways that distress others.

But satirising Can’t Help It is risky, because of its similarity to another conceptual model, which I’ll call Root Cause. The two models have similar surface features, but a fundamentally different deep structure. The Root Cause model claims that all behaviour has causes and if we want to prevent unwanted behaviour we have to address the causes. If we don’t do that the behaviour is likely to persist. (Ignoring causal factors is a frequent cause of re-offending; prisoners are often released into a community that prompted them to engage in criminal behaviour in the first place).

I’ve never encountered Can’t Help It as such. What I have encountered frequently is something of a hybrid between Can’t Help It and Root Cause. People are aware that there might be causes for unwanted behaviour and that those causes should be addressed, but  have no idea what the causes are or how to deal with them.

If the TES Forum is anything to go by, this is often true for teachers in mainstream schools who’ve had no special educational needs or disability training. They don’t want to apply the usual reward/punishment approach in the case of a kid with a diagnosis of ADHD or autism, because they know it might be ineffective or make the problem worse. But they know next to nothing about ADHD or autism, so haven’t a clue how to proceed. In some cases the school appears to have just left the teacher to get on with it and is hoping for the best. Teachers in this position can’t apply Root Cause because they don’t know how, so tend to default to either Can’t Help It or to a third model I’ll call Strict Discipline.

Strict Discipline has a long history, dating back at least to Old Testament times. It also has a long history of backfiring. Children have a strong sense of fairness and will resent punishments they see as unfair or disproportionate. The resentment can last a lifetime. A Strict Discipline approach needs a robust evidential framework if it’s going to be effective in both the short and long term. In Charlie and the Inclusive Chocolate Factory, Old Andrew rightly eschews Can’t Help It and appears to opt for Strict Discipline, bypassing Root Cause entirely; he describes Charlie, despite “eating nothing but bread and cabbage for six months”, as “polite and well-behaved”.

Good behaviour

This evaluation of Charlie’s behaviour raises the question of what constitutes ‘well-behaved’. Teachers who identify as ‘traditional’ often refer to ‘good’ and ‘bad’ behaviour as if both are self-evident. Inevitably, behaviour isn’t that simple. ‘Traditional’ teachers appear to see behaviour on a linear continuum. At one pole is strict adherence to social norms – whatever they are deemed to be in a particular environment. At the other is complete license, assumed to result in extreme anti-social activities.

The flaws of this behaviour continuum are immediately apparent because it’s based on assumptions. The norms set by a particular teacher or school are assumed to be reasonable and attainable by all children. Those are big assumptions, as shown by the variation in different schools’ expectations and in the behaviour of children.

Even very young children are aware of different behavioural expectations. What’s allowed in Miss Green’s class isn’t tolerated in Mr Brown’s. They can do things in their grandparents’ home that their parents wouldn’t like, and that would be completely unacceptable in school. That doesn’t make Mr Brown’s expectations or those of the school right, and everybody else wrong. We all have to behave in different ways in different environments. Most children intuitively pick up and respond appropriately to these variations in expectations, but some don’t. By definition autistic children struggle to make sense of what they are expected to do, and children with attentional deficits get distracted from the task in hand.

It doesn’t follow that children with autism or ADHD should be permitted to behave how they like, nor have all their ‘whims’ catered for. Nor does it follow that every child should be expected to behave in exactly the same way. What it does mean is that if a child exhibits behaviour that’s problematic for others, the causes of the problematic behaviour should be identified and appropriate action taken. In some cases, schools and teachers do not appear to know what that appropriate action should be.

In the next post I’ll look at the flaws in the Strict Discipline model in relation to children with special educational needs.

cognitive science: the wrong end of the stick

A few years ago, some teachers began advocating the application of findings from cognitive science to education. There seemed to be something not quite right about what they were advocating but I couldn’t put my finger on exactly what it was. Their focus was on the limitations of working memory and differences between experts and novices. Nothing wrong with that per se, but working memory and expertise aren’t isolated matters.

Cognitive science is now a vast field; encompassing sensory processing, perception, cognition, memory, learning, and aspects of neuroscience. A decent textbook would provide an overview, but decent textbooks didn’t appear to have been consulted much. Key researchers (e.g. Baddeley & Hitch, Alloway, Gathercole), fields of research (e.g. limitations of long-term memory, neurology), and long-standing contentious issues (e.g. nature vs nurture) rarely got a mention even when highly relevant.

At first I assumed the significant absences were due to the size of the field to be explored, but as time went by that seemed less and less likely.  There was an increasing occurrence of teacher-B’s-understanding-of-teacher-A’s-understanding-of-Daniel-Willingham’s-simplified-model-of-working-memory, with some teachers getting hold of the wrong end of some of the sticks. I couldn’t understand why, given the emphasis on expertise, teachers didn’t seem to be looking further.

The penny dropped last week when I read an interview with John Sweller, the originator of Cognitive Load Theory (CLT), by Ollie Lovell, a maths teacher in Melbourne. Ollie has helpfully divided the interview into topics in a transcript on his website. The interview clarifies several aspects of cognitive load theory. In this post, I comment on some points that came up in the interview, and explain the dropped penny.

1.  worked examples

The interview begins with the 1982 experiment that led to Sweller’s discovery of the worked example effect. Ollie refers to the ‘political environment of education at the time’ being ‘heavily in favour of problem solving’. John thinks that however he’d presented the worked example effect, he’d be pessimistic about the response because ‘the entire research environment in those days was absolutely committed to problem solving’.

The implication that the education system had rejected worked examples was puzzling. During my education (1960s and 70s) you couldn’t move for worked examples. They permeated training courses I attended in the 80s, my children’s education in the 90s and noughties, and still pop up frequently in reviews and reports. True, they’re not always described as a ‘worked example’ but instead might be a ‘for instance’ or ‘here’s an example’ or ‘imagine…’. So where weren’t they? I’d be grateful for any pointers.

2 & 3. goal-free effect

Essentially, students told to ‘find out as much as you can’ about a problem performed better than those given specific instructions about what to find out. But only in relation to problems with a small number of possible solutions – in this case physics problems. The effect wasn’t found for problems with a large number of possible solutions. But you wouldn’t know that if you’d only read teachers criticising ‘discovery learning’.

4. biologically primary and secondary skills

What’s determined by biology and what by the environment has been a hugely contentious issue in cognitive science for decades. Basically, we don’t yet know the extent to which learning is biologically or environmentally determined. But the contentiousness isn’t mentioned in the interview; it’s marginalised by David Geary, the originator of the biologically primary and secondary concept, and John appears simply to assume Geary’s theory is correct, presumably because it’s plausible.

John says it’s ‘absurd’ to provide someone with explicit instruction about what to do with their tongue, lips or breath when learning English. Ollie points out that’s exactly what he had to do when he learned Chinese. John claims that language acquisition by immersion is biologically primary for children but not for adults. This flies in the face of everything we know about language acquisition.

Adults can and do become very fluent in languages acquired via immersion, just as children do. Explicit instruction can speed up the process and help with problematic speech sounds, but can’t make adults speak like a native. That’s because the adults have to override very robust neural pathways laid down in childhood in response to the sounds the children hear day-in, day-out (e.g. Patricia Kuhl’s ‘Cracking the speech code‘). The evidence suggests that differences between adult and child language acquisition are a frequency of exposure issue, not a type-of-skill issue. As Ollie says: “It’s funny isn’t it?  How it can switch category. It’s just amazing.”  Quite.

5. motivation

The discussion was summed up in John’s comment: “I don’t think you can turn Cognitive Load Theory into a theory of motivation which in no way suggests that you can’t use a theory of motivation and use it in conjunction with cognitive load theory.”

 6. expertise reversal effect

John says: “As expertise goes up, the advantage of worked examples go down, and as expertise continues to go up, eventually the relative effectiveness of worked examples and problems reverses and the problems are more helpful than worked examples”.

7. measures of cognitive load

John: “I routinely use self-report and I use self-report because it’s sensitive”. Other measures – secondary tasks, physiological markers – are problematic.

8. collective working memory effect

John: “In problem solving, you may need information and the only place you can get it from is somebody else.” He doesn’t think you can teach somebody to act collaboratively because he thinks social interaction is biologically primary knowledge. See 4 above.

9. The final section of the interview highlighted, for me, two features that emerge from much of the discourse about applying cognitive science to education:

  • The importance of the biological mechanisms and the weaknesses of analogy.
  • The frame of reference used in the discourse.

biological mechanisms

In the final part of the interview John asks an important question: Is the capacity of working memory fixed? He says: “If you’ve been using your working memory, especially in a particular area, heavily for a while, after a while, and you would have experienced this yourself, your working memory keeps getting narrower and narrower and narrower and after a while it just about disappears.”

An explanation for the apparent ‘narrowing’ of working memory is habituation, where the response of neurons to a particular stimulus diminishes if the stimulus is repeated. The best account I’ve read of the biological mechanisms in working memory is in a 2004 paper by Wagner, Bunge & Badre. If I’ve understood their findings correctly, signals representing sensory information coming into the prefrontal area of the brain are maintained for a few seconds until they degrade or are overridden by further incoming information. This is exactly what was predicted by Baddeley & Hitch’s phonological loop and visual-spatial sketchpad. (Wagner, Bunge and Badre’s findings also indicate there might be more components to working memory than Baddeley & Hitch’s model suggests.)
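To make the habituation idea concrete, here’s a toy model (entirely my own illustration, not anything from the interview or the paper): treat each repetition of a stimulus as scaling the neural response down by a constant factor.

```python
def habituated_response(initial=1.0, decay=0.7, repetitions=5):
    """Toy model of habituation: the response to a repeated stimulus
    shrinks by a constant factor with each presentation."""
    responses = []
    response = initial
    for _ in range(repetitions):
        responses.append(response)
        response *= decay
    return responses

print(habituated_response())  # each value smaller than the last
```

Needless to say, real neurons don’t follow a tidy exponential, but the qualitative picture (a response that fades with repetition and recovers after a break) is the point.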

John was using a figure of speech, but I fear it will only be a matter of time before teachers start referring to the ‘narrowing’ of working memory. This illustrates why it’s important to be aware of the biological mechanisms that underpin cognitive functions. Working memory is determined by the behaviour of neurons, not by the behaviour of analogous computer components.

frame of reference

John and Ollie were talking about cognitive load theory in education, so that’s what the interview focussed on, obviously.  But every focus has a context, and John and Ollie’s frame of reference seemed rather narrow. Ollie opens by talking about ‘the political environment of education at the time [1982]’ being ‘heavily in favour of problem solving’. I don’t think he actually means the ‘political environment of education at the time’ as such. Similarly John comments ‘the entire research environment in those days was absolutely committed to problem solving’. I don’t think he means ‘the entire research environment’ as such either.

John also observes: “It’s only been very recently that people started taking notice of Cognitive Load Theory. For decades I put papers out there and it was like putting them into outer-space, you know, they disappeared into the ether!” I first heard about Cognitive Load Theory in the late 80s, soon after Sweller first proposed it, via a colleague working in artificial intelligence. I had no idea, until recently, that Sweller was an educational psychologist. People have been taking notice of CLT, but maybe not in education.

Then there’s the biologically primary/secondary model. It’s ironic how little it refers to biology. We know a fair amount about the biological mechanisms involved in learning, and I’ve not yet seen any evidence suggesting two distinct mechanisms. The model appears to be based on the surface features of how people appear to learn, not on the deep structure of how learning happens.

Lastly, the example of language acquisition. The differences between adults and children learning languages can be explained by frequency of exposure and how neurons work; there’s no need to introduce a speculative evolutionary model.

Not only is cognitive load theory the focus of the interview, it also appears to be its frame of reference; political issues and knowledge domains other than education don’t get much of a look in.

the penny that dropped

Ever since I first heard about teachers applying cognitive science to education, I’ve been puzzled by their focus on the limitations of working memory and the characteristics of experts and novices. It suddenly dawned on me, reading Ollie’s interview with John, that what the teachers are actually applying isn’t so much cognitive science, as cognitive load theory. CLT, the limitations of working memory and the characteristics of experts and novices are important, but constitute only a small area of cognitive science. But you wouldn’t know that from this interview or most of the teachers advocating the application of cognitive science.  There’s a real risk, if CLT isn’t set in context, of teachers getting hold of the wrong stick entirely.

references

Geary, D. (2007). Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. In J.S. Carlson & J.R. Levin (Eds.), Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. Information Age Publishing.

Kuhl, P. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience 5, 831-843.

Wagner, A.D., Bunge, S.A. & Badre, D. (2004). Cognitive control, semantic memory and priming: Contributions from prefrontal cortex. In M.S. Gazzaniga (Ed.), The Cognitive Neurosciences (3rd edn.). Cambridge, MA: MIT Press.

a philosophy conference for teachers

Yesterday was a bright, balmy autumn day. I spent it at Thinking about teaching: a philosophy conference for teachers at the University of Birmingham. Around 50 attendees, and the content Tweeted on #EdPhilBrum. And I met in person no fewer than five people I’ve previously met only on Twitter: @PATSTONE55, @ded6ajd, @sputniksteve, @DSGhataura and @Rosalindphys. In this post, a (very) brief summary of the presentations (I missed the last one, by Joris Vleighe) and my personal impressions.

Geert Biesta: Teachers, teaching and the beautiful risk of education

The language we use to describe education is important. English doesn’t have words to accurately denote what Biesta considers to be key purposes of education, but German does:

  • Ausbildung (‘qualification’) – cultivation, knowledge & skills
  • Bildung (‘socialisation’) – developing identity in relation to tradition
  • Erziehung (‘subjectivisation’) – grown-up engagement with the world.

These facets are distinct but overlap; focussing on purposes individually can result in:

  • obsession with qualifications
  • strong socialisation – conformity
  • freedom as license.

Education has an interruptive quality that allows the integration of its purposes. The risk of teaching is that the purposes might not be achieved because the student is an active subject, not a ‘pure object’.

Judith Suissa: ‘Seeing like a state?’ Anarchism, philosophy of education and the political imagination

Anarchist tools allow us to question fundamental assumptions about the State, often not questioned by those who do question particular State policies. State education per se is rarely questioned, for example.

Anarchism is often accused of utopianism, but utopianism has different meanings and can serve to ‘relativise the present’.

Andrew Davis:  The very idea of an ‘evidence based’ teaching method. Educational research and the practitioner

One model of ‘evidence based’ teaching is summarised as ‘it works’. But what is the ‘it’? Even a simple approach like ‘sitting in rows’ can involve many variables. ‘It works’ bypasses the need for professional judgement and overlooks the distinction between instrumental and relational understanding (Skemp). Children should have relational cognitive maps; new knowledge needs a place.

Regulative rules apply to activities whose existence is independent of the rules e.g. driving on the left-hand side of the road.

Constitutive rules are rules on which the activity depends for its existence, e.g. the rules of chess. Many educational functions involve constitutive rules and collective intentions.

Joe Wolff:  Fake news and twisted values: political philosophy in the age of social media

Fake news and twisted values can emerge for different reasons.

  • innocent mistakes: out of context citations, misattributed authorship, different criteria in use e.g. life expectancy
  • propaganda: Trojan Horse speeches, manipulation of information
  • peer reviewed literature: errors, the replication crisis, difficulties with access and readability

writing for philosophers

Philosophy isn’t my field, but lately I’ve been dabbling in it increasingly often. The main obstacle to accessibility hasn’t been the concepts, but the terminology. I’ve ploughed through a dense argument, stopping sometimes several times a sentence to find out what the words refer to, only to discover eventually that I’ve been reading about a familiar concept called something else in another domain, but explained in ways that address philosophers’ queries and objections. I now call these texts writing for philosophers. An example is Daniel Dennett’s Consciousness Explained. I struggled with the words, only to realise I’d already read Antonio Damasio explaining similar ideas but writing for biologists.

Like @PATSTONE55 I was expecting this conference to consist of presentations for philosophers and that I’d struggle to keep up. But it wasn’t and we didn’t.  Instead there were very accessible presentations for teachers. And themes that, as Pat also found, were very familiar, or at least had been familiar some decades ago.  It felt like a rather glitchy time warp flipping between the 1970s and the present. In the present context, the themes felt distinctly subversive.  Three key themes emerged for me.

context is everything

Everything is related: education is a multi-purpose process, underpinned by political assumptions; it’s relational; and evaluating evidence isn’t straightforward. The disjointed educational policy ‘ideas’ that have dominated the education landscape for several decades are usually a failure waiting to happen. They waste huge amounts of time and money, have contributed to teacher shortages and have caused needless problems for students. In systems theory they’d be catchily termed sub-system optimisation at the expense of system optimisation, often shortened to suboptimisation. Urie Bronfenbrenner wasn’t mentioned yesterday, but he addressed the issue of social ecology in the 70s in his ecological systems theory of child development.

implicit assumptions are difficult to detect

It’s easy to focus on one purpose of education and ignore others, easy to assume the status quo can’t be questioned, easy to see what’s there and difficult to spot what’s missing, and all too easy to forget what things look like from a child’s perspective.

We all make implicit assumptions, but because they are implicit and assumptions, it’s fiendishly difficult for us to make our own assumptions explicit. Sometimes a fresh pair of eyes is enough, sometimes a colleague from another discipline can help, but sometimes we need a radical, unorthodox perspective like anarchism.

space and time are essential for reflection

Most people can learn facts (I use the term loosely) pretty quickly, but putting the facts in context might require developing or changing a schema and you can’t do that while you’re busy learning other facts. It’s no accident that thinkers from Aristotle to Darwin did their best thinking whilst walking. Neurons need downtime to make and strengthen connections. There’s a limit to how much time children (or adults) can spend actively ‘learning’. Too much time can be counterproductive.

Yesterday’s conference offered a superb space for reflection. Thought-provoking and challenging ideas, motivated and open-minded participants, an excellent venue and some of the best conference food ever – the gluten-free/vegetarian/vegan platters were amazing. Thanks to the Philosophy of Education Society of Great Britain for organising it.  Couldn’t have been more timely.


Improving reading from Clackmannanshire to West Dunbartonshire

In the 1990s, two different studies began tracking the outcomes of reading interventions in Scottish schools. One, run by Joyce Watson and Rhona Johnston, then at the University of St Andrews, started in 1992/3 in schools in Clackmannanshire, which hugs the River Forth just to the east of Stirling. The other began in 1998 in West Dunbartonshire, with the Clyde on one side and Loch Lomond on the other, west of Glasgow. It was led by Tommy MacKay, an educational psychologist with West Dunbartonshire Council, who also lectured in psychology at the University of Strathclyde.

I’ve blogged about the Clackmannanshire study in more detail here. It was an experiment involving 13 schools and 300 children divided into three groups, taught to read using synthetic phonics, analytic phonics or analytic phonics plus phonemic awareness. The researchers measured and compared the outcomes.

The West Dunbartonshire study had a more complex design, involving five different studies and ten strands of intervention over ten years in all pre-schools and primary schools in the local authority area (48 schools and 60,000 children). As in Clackmannanshire, analytic phonics was used as a control for the synthetic phonics experimental group. The study also had an aim: to eradicate functional illiteracy in school leavers in West Dunbartonshire. It very nearly succeeded; Achieving the Vision, the final report, shows that by the time the study finished in 2007 only three children were deemed functionally illiterate. (Thanks to @SaraJPeden on Twitter for the link.)

Five studies, ten strands of intervention

The main study was a multiple-component intervention using a cross-lagged design. The four supporting studies were:

  • Synthetic phonics study (18 schools)
  • Attitudes study (24 children from earlier RCT)
  • Declaration study (12 nurseries & primaries in another education authority area)
  • Individual support study (24 secondary pupils).

The West Dunbartonshire study was unusual in that it addressed multiple factors already known to affect reading attainment, but that are often sidelined in interventions focusing on the mechanics of reading. The ten strands were (p.14):

Strand 1: Phonological awareness and the alphabet

Strand 2: A strong and structured phonics emphasis

Strand 3: Extra classroom help in the early years

Strand 4: Fostering a ‘literacy environment’ in school and community

Strand 5: Raising teacher awareness through focused assessment

Strand 6: Increased time spent on key aspects of reading

Strand 7: Identification of and support for children who are failing

Strand 8: Lessons from research in interactive learning

Strand 9: Home support for encouraging literacy

Strand 10: Changing attitudes, values and expectations

Another unusual feature was that the researchers were looking not only for statistically significant improvements in reading, but for significant improvements in a wider sense:

“statistical significance must be viewed in terms of wider questions that were primarily social, cultural and political rather than scientific – questions about whether lives were being changed as a result of the intervention; questions about whether children would leave school with the skills needed for a successful career in a knowledge society; questions about whether ‘significant’ results actually meant significant to the participants in the research or only to the researcher.” (p.16)

The researchers also recognized the importance of ownership of the project throughout the local community, everyone “from the leader of the Council to the parents and the children themselves identifying with it and owning it as their own project”. (p.7)

In addition they were aware that a project following students through their entire school career would need to survive inevitable organisational challenges. Despite the fact that West Dunbartonshire was the second poorest council in Scotland, the local authority committed to continue funding the project;

The intervention had to continue and to succeed through virtually every major change or turmoil taking place in its midst – including a total restructuring of the educational directorate, together with significant changes in the Council. (p.46)

Results

The results won’t surprise anyone familiar with the impact of synthetic phonics: there were significant improvements in the reading ability of children in the experimental group. What was remarkable was the impact of the programme on children who didn’t participate. Raw scores for pre-school assessments improved noticeably between 1997 and 2006, and there were many reports from parents that the intervention had stimulated an interest in reading in older siblings.

One of the most striking results was that at the end of the study, there were only three pupils in secondary schools in the local authority area with reading ages below the level of functional literacy (p.31). That’s impressive when compared to the 17% of school leavers in England considered functionally illiterate. So why hasn’t the West Dunbartonshire programme been rolled out nationwide? Three factors need to be considered in order to answer that question.

1. What is functional literacy?

The 17% figure for functional illiteracy amongst school leavers is often presented as ‘shocking’ or a ‘failure’ on the part of the education system. These claims are valid only if those making them have evidence that higher levels of school-leaver literacy are attainable. The evidence cited often includes literacy levels in other countries or studies showing very high percentages of children being able to decode after following a systematic synthetic phonics (SSP) programme. Such evidence is akin to comparing apples and oranges because:

– Many languages are orthographically more transparent than English (there’s a more direct correspondence between graphemes and phonemes). The functional illiteracy figure of 17% (or thereabouts) holds for the English-speaking world, not just England, and has done so since at least the end of WW2 – and probably earlier, given literacy levels in older adults. (See Rashid & Brooks (2010) and McGuinness (1998).)

– Both the Clackmannanshire and West Dunbartonshire studies resulted in high levels of decoding ability. Results were less stellar when it came to comprehension.

– It depends what you mean by functional literacy. This was a challenge faced by Rashid & Brooks in their review; measures of functional literacy have varied, making it difficult to identify trends across time.

In the West Dunbartonshire study, children identified as having significant reading difficulties followed an intensive three-month individual support programme in early 2003. This involved 91 children in P7, 12 in P6 and 1 in P5. By 2007, 12 pupils at secondary level were identified as still not having reached functional literacy levels; their reading ages ranged between 6y 9m and 8y 10m (p.31). By June 2007, only three children had scores below the level of functional literacy. (Two others missed the final assessment.)

The level of functional literacy used in the West Dunbartonshire study was a reading age of at least 9y 6m on the Neale Analysis of Reading Ability (NARA-II). I couldn’t find an example online, but there’s a summary here. The tasks are rather different to the level 1 tasks in the National Adult Literacy Survey carried out in the USA in 1992 (NCES, p.86).

A reading/comprehension age of 9y 6m is sufficient for getting by in adult life: reading a tabloid newspaper or filling in simple forms. Whether it’s sufficient for doing well in GCSEs (reading age 15y 7m), getting a decent job in later life, or having a good understanding of how the world works is another matter.

2. What were the costs and benefits?

Overall, the study cost £13 per student per year, or 0.5% of the local authority’s education budget (p.46), which doesn’t sound very much. But for 60 000 students over a ten-year period it adds up to almost £8m, a significant sum. I couldn’t find details of the overall reading abilities of secondary school students when the study finished in 2007, and haven’t yet tracked down any follow-up studies showing the impact of the interventions on the local community.
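As a quick sanity check, the almost-£8m figure follows directly from the numbers above. Here is a back-of-the-envelope sketch; only the per-student cost comes from the report, the rest is simple arithmetic:

```python
# Back-of-the-envelope check on the reported cost figures.
cost_per_student_per_year = 13   # £, reported in Achieving the Vision (p.46)
students = 60_000                # children covered by the initiative
years = 10                       # duration of the study

total = cost_per_student_per_year * students * years
print(f"£{total:,}")  # £7,800,000 – i.e. almost £8m over the decade
```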

Also, we don’t know what difference the study would have made to adult literacy levels in the area. Adult literacy levels are usually presented as averages and, in the case of the US National Adult Literacy Survey, included those with disabilities. Many children with disabilities in West Dunbartonshire would have been attending special schools, and the study appears to have involved only mainstream schools. Whether the impact of the study is sufficient to persuade cash-strapped local authorities to invest in it is unclear.

3. Could the interventions be implemented nationwide?

One of the strengths of Achieving the Vision is that it explores the limitations of the study in some detail (p.38ff). Another strength of the study was that the researchers were well aware of the challenges that would have to be met for the intervention to achieve its aims. These included issues with funding; the local Council, although supportive, was working within a different funding framework to the Scottish Executive Education Department. The funding issues had a knock-on impact on staff seconded to the project, who had no guarantee of employment once the initial funding ran out. The study was further affected by industrial action and by local authority restructuring. How many projects would have access to the foresight, tenacity and collaborative abilities of those leading the West Dunbartonshire initiative?

Conclusion

The aim of the West Dunbartonshire initiative was to eradicate functional illiteracy in an entire local authority area. The study effectively succeeded in doing so – in mainstream schools, and if functional illiteracy is taken to mean a reading/comprehension age below 9y 6m. Synthetic phonics played a key role. Synthetic phonics is frequently advocated as a remedy for functional illiteracy in school leavers and in the adult population. The West Dunbartonshire study shows, pretty conclusively, that synthetic phonics plus individual support plus a comprehensive local-authority-backed focus on reading can result in significant improvements in reading ability in secondary school students. Does it eradicate functional illiteracy in school leavers or in the adult population? We don’t know.

References

Johnston, R & Watson, J (2005). The Effects of Synthetic Phonics Teaching on Reading and Spelling Attainment: A Seven Year Longitudinal Study. The Scottish Executive. http://www.gov.scot/Resource/Doc/36496/0023582.pdf

MacKay, T (2007). Achieving the Vision: The Final Research Report of the West Dunbartonshire Literacy Initiative.

McGuinness, D (1998). Why Children Can’t Read and What We Can Do About It. Penguin.

NCES (1993). Adult Literacy in America. National Center for Education Statistics.

Rashid, S & Brooks, G (2010). The levels of attainment in literacy and numeracy of 13- to 19-year-olds in England, 1948–2009. National Research and Development Centre for adult literacy and numeracy.


Science, postmodernism and the real world

In a recent blogpost, Postmodernism is killing the social sciences, Eoin Lenihan recommends that the social sciences rely on the scientific method “to produce useful and reliable evidence, or objective truths”. Broadly, I agree with Eoin, but had reservations about the ‘objective truths’ he refers to. In response to a comment on Twitter I noted:

[tweet screenshot]

which was similar to a point made by Eoin, “postmodernism originally was a useful criticism of the Scientific Method or dominant narratives and a reminder of the importance of accounting for the subjective experiences of different people and groups.”

Ben Littlewood took issue with me;

[tweet screenshot]

In the discussion that followed I said science couldn’t claim to know anything for sure. Ben took issue with that too. The test question he asked repeatedly was:

[tweet screenshots]

For Ben,

[tweet screenshot]

Twitter isn’t the best medium for a discussion of this kind, and I suspect Ben and I might have misunderstood each other. So here, I’m setting out what I think. I’d be interested in what he (and Eoin) has to say.

reason and observation

Something that has perplexed philosophers for millennia is what our senses can tell us about the world. Our senses tell us there’s a real world out there, that it’s knowable, and that we all experience it in more or less the same way. But our senses can deceive us, we can be mistaken in our reasoning, and different people can experience the same event in different ways. So how do we resolve the tension between figuring out what’s actually out there and what we perceive to be out there, between reason and observation, rationalism and empiricism?

Human beings (even philosophers) aren’t great at dealing with uncertainty, so philosophers have tended to favour one pole of the reason-observation axis over the other. As Karl Popper observes in his introduction to Conjectures and Refutations, some (e.g. Plato, Descartes, Spinoza, Leibniz) have opted for the rationalist view, in contrast to the empiricism of, for example, Aristotle, Bacon, Locke, Berkeley, Hume and Mill. (I refer to Popper throughout this post because of his focus on the context and outcomes of the scientific method.)

The difficulty with both perspectives, as Popper points out, is that philosophers have generally come down on one side or the other; either reason trumps observation or vice versa. But the real world isn’t like that; both our reason and our observations tend to be flawed, and both are needed to work out what’s actually out there, so there’s no point trying to decide which is superior. The scientific method developed largely to avoid the errors we tend to make in reasoning and observation.

hypotheses and observations

The scientific method tests hypotheses against observations. If the hypothesis doesn’t fit the observations, we can eliminate it from our enquiries.

It’s relatively easy to rule out a specific hypothesis – because we’re matching only one hypothesis at a time to observations.   It’s much more difficult to come up with an hypothesis that turns out to be a good fit with observations – because our existing knowledge is always incomplete; there might be observations about which we currently have no knowledge.

If  an hypothesis is a good fit with our observations, we can make a working assumption that the hypothesis is true – but it’s only a working assumption. So the conclusions science draws from hypotheses and observations have varying degrees of certainty. We have a high degree of certainty that the earth isn’t flat, we have very little certainty about what causes schizophrenia, and what will happen as a consequence of climate change falls somewhere between the two.

Given the high degree of certainty we have that the earth isn’t flat, why not just say, as Ben does, that we’re certain about it and call it an objective fact? Because doing so, in a discussion about the scientific method and postmodernism, opens a can of pointless worms. Here are some of them.

– What level of certainty would make a conclusion ‘certain’? 100%, 75%, 51%?

– How would we determine the level of certainty? It would be feasible to put a number on an evaluation of the evidence (for and against), but that would get us into the kind of arguments about methodology that have surrounded p values. And would an hypothesis with 80% support be considered certain, while a competing hypothesis with 75% support was prematurely eliminated?

– Who would decide whether a conclusion was certain or not? You could bet your bottom dollar it wouldn’t be the people at the receiving end of a morally suspect idea that had nonetheless reached an arbitrary certainty threshold. The same questions apply to deciding whether something is a ‘fact’ or not.

– Then there’s ‘objectivity’. Ironically, there’s a high degree of certainty that objectivity, in reasoning and observation, is challenging for us even when armed with the scientific method.
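The threshold worm can be made concrete with a toy sketch. The numbers and the 0.80 cut-off are hypothetical, not drawn from any real study; the point is only that a hard certainty threshold turns a small difference in support into a categorical verdict:

```python
# Toy illustration of an arbitrary certainty threshold (hypothetical numbers).
def verdict(support: float, threshold: float = 0.80) -> str:
    """Binarise a graded degree of support into 'certain' vs 'not certain'."""
    return "certain" if support >= threshold else "not certain"

# Two competing hypotheses with similar levels of evidential support:
print(verdict(0.80))  # certain – clears the bar
print(verdict(0.75))  # not certain – eliminated outright
```

The graded information (80 vs 75) is thrown away at the threshold; that is the sense in which a competing hypothesis could be ‘prematurely eliminated’.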

life in the real world

All these problematic worms can be avoided by not making claims about ‘100% certainty’ and ‘objective facts’ in the first place. Because it’s so complex, and because our knowledge about it is incomplete, the real world isn’t a 100%-certain-objective-fact kind of a place. Scientists are accustomed to working with margins of error and probabilities that would likely give philosophers and pure mathematicians sleepless nights. As Popper implies in The Open Society and its Enemies, the human craving for certainty has led to a great deal of knowledge of what’s actually out there, but also to a preoccupation with precise definitions and the worst excesses of scholasticism – “treating what is vague as if it were precise“.*

I declined to answer Ben’s ‘simple question’ because in the context of the discussion it’s the wrong kind of question. It raises further questions about what is meant by certainty, objectivity and facts, to which a yes/no answer can’t do justice. I suspect that if I’d said ‘yes, it is certain that the earth isn’t flat’, Ben would have said ‘there you are, science can be certain about things’ and the can of pointless worms would have been opened. Which brings me on to my comment about postmodernism: that the root cause of postmodernism was the belief that science can produce objective truth.

postmodernism, science and objective truth

The 19th and 20th centuries were characterised by movements in thinking that were in large part reactions against previous movements. The urbanisation and mechanisation of the industrial revolution prompted Romanticism. Positivism (belief in verification using the scientific method) was in part a reaction to Romanticism, as was Modernism (questioning and rejecting traditional assumptions). Postmodernism, with its emphasis on scepticism and relativism, was, in turn, a reaction to Modernism and Positivism, which is why I think claims about objective truth (as distinct from the scientific method per se) are a root cause of postmodernism.

I would agree with Eoin that postmodernism, taken to its logical conclusion, has had a hugely detrimental impact on the social sciences. At the heart of the problem, however, is not postmodernism as such, but the logical conclusion bit. That’s because the real world isn’t a logical-conclusion kind of a place either. I can’t locate where he says it, but at one point Popper points out that the world of philosophers and mathematicians (and, I would add, many postmodernists) isn’t like the real world. Philosophy and mathematics are highly abstracted fields; philosophers and mathematicians explore principles abstracted from the real world. That’s OK as far as it goes. Clearing away messy real-world complications and looking at abstracted principles has resulted in some very useful outcomes.

It’s when philosophers and mathematicians start inappropriately imposing on the real world ideas such as precise definitions, objective truths, facts, logical conclusions and pervasive scepticism and relativism that things go awry, because the real world isn’t a place where you can always define things precisely, be objective, discover true truths, follow things to their logical conclusion, or be thoroughly sceptical and relativistic. Philosophy and mathematics have obviously made major contributions to the scientific method, but they are not the scientific method. The job of the scientific method is to reduce the risk of errors, not to reveal objective truths about the world. It might do that, but if we can’t be sure whether it has or not, it’s pointless to make such claims. It’s equally pointless to conclude that if we can’t know anything for certain, everything must be equally uncertain, or that if everything is relative, everything has equal weight. It isn’t and it doesn’t.

My understanding of the scientific method is that it has to be fit for purpose; good enough to do its job. Not being able to define everything exactly, or arrive at conclusively objective truths, facts and logical conclusions doesn’t mean that we can be sure of nothing. Nor does it mean that anything goes. Nor that some sort of ‘balance’ between positivism and postmodernism is required.

We can, instead, evaluate the evidence, work with conclusions that appear reasonably certain, and correct errors as they become apparent. The simple expedient of acknowledging that the real world is complex and messy but not intractably so, and that the scientific method can, at best, produce a best guess at what’s actually out there, bypasses pointless arguments about exact definitions, objectivity, truth and logicality. I’d be interested to know what Ben thinks.

Note

* Popper is quoting F.P. Ramsey, a close friend of Wittgenstein (The Open Society and its Enemies, vol. II, p. 11).

References

Popper K. (2003).  The Open Society and its Enemies vol. II: Hegel and Marx, Routledge (first published 1945).

Popper, K. (2002).  Conjectures and Refutations, Routledge (first published 1963).


genes, environment and behaviour

There was considerable kerfuffle on Twitter last week following a blog post by David Didau entitled ‘What causes behaviour?’ The ensuing discussion prompted a series of five further posts from David, culminating in an explanation of why his views weren’t racist. I think David created problems for himself through a lack of clarity about gene-environment interactions and through ambiguous wording. Here’s my two-pennyworth.

genes

Genes are regions of DNA that hold information about (mainly) protein production. As far as we know, that’s all they do. The process of using this information to produce proteins is referred to as genetic expression.

environment

The context in which genes are expressed. Before birth, the immediate environment in which human genes are expressed is limited, and is largely a chemical and biological one. After birth, the environment gets more complex, as Urie Bronfenbrenner demonstrated. Remote environmental effects can have a significant impact on immediate ones: whether a mother smokes or drinks, for example, is influenced by genetic and social factors, and the health of both parents is often affected by factors beyond their control.

epigenetics

Epigenetic factors are environmental factors that can directly change the expression of genes; in some cases genes can be effectively ‘switched’ on or off. Some epigenetic changes can be inherited.

behaviour

Behaviour is a term that’s been the subject of much discussion by psychologists. There’s a useful review by Levitis et al here. A definition of behaviour the authors decided reflected consensus is:

Behaviour is: the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal and/or external stimuli, excluding responses more easily understood as developmental changes.

traits and states

Trait is a term used to describe a consistent pattern in behaviour, personality etc. State is used to describe transient behaviours or feelings.

David Didau’s argument

David begins with the point that behavioural traits in adulthood are influenced far more by genes than by shared environments during childhood. He says: “Contrary to much popular wishful thinking, shared environmental effects like parenting have (almost) no effect on adults’ behaviour, characteristics, values or beliefs.* The reason we are like our parents and siblings is because we share their genes. *Footnote: There are some obvious exceptions to this. Extreme neglect or abuse before the age of 5 will likely cause permanent developmental damage as will hitting someone in the head with a hammer at any age.”

In support he cites a paper by Thomas Bouchard, a survey of research (mainly twin studies) on genetic influence on psychological traits: personality, intelligence, psychological interests, psychiatric illnesses and social attitudes. David rightly concludes that it’s futile for schools to try to teach ‘character’, because character (whatever you take it to mean) is a stable trait.

traits, states and outcomes

But he also refers to children’s behaviour in school, and behaviour encompasses both traits and states: stable patterns of behaviour and one-off specific behaviours. For David, school expectations can “mediate these genetic forces”, but only within school: “an individual’s behaviour will be, for the most part, unaffected by this experience when outside the school environment”.

He also refers to “how we turn out”, and how we turn out can be affected by one-off, even uncharacteristic behaviours (on the part of children, parents and teachers and/or government).   One-off actions can have a hugely beneficial or detrimental impact on long-term outcomes for children.

genes, environment and interactions

It’s easy to get the impression from the post that genetic influences (David calls them genetic ‘forces’ – I don’t know what that means) and environmental influences are distinct and act in parallel. He refers, for example, to “genetic causes for behaviour as opposed to environmental ones” (my emphasis), but concedes “there’s definitely some sort of interaction between the two”.

Obviously, both genes and environment influence behaviour. What’s emerged from research is that the interactions between genetic expression and environmental factors are pretty complex. From conception, gene expression produces proteins; cells form, divide and differentiate, and the child’s body develops and grows. Genetic expression obviously plays a major role in pre-natal development, but the proteins expressed by the genes very quickly form a complex biochemical, physiological and anatomical environment that impacts on the products of later genetic expression. This environment is internal to the mother’s body, but external environmental factors are also involved, in the form of nutrients, toxins, activities, stressors etc. After birth, genes continue to be expressed, but the influence of the external environment on the child’s development increases.

Three points to bear in mind:

1) A person’s genome remains pretty stable throughout their lifetime.

2) The external environment doesn’t remain stable – for most people it changes constantly. Some of the changes will counteract others; rest and good nutrition can overcome the effects of illness, and beneficial events can mitigate the impact of adverse ones. So it’s hardly surprising that shared childhood environments have comparatively little, if any, effect on adult traits.

3) Genetic and environmental influences are difficult to untangle because of their complex interactions from the get-go. Annette Karmiloff-Smith* highlights the importance of gene-environment-behaviour interactions here.
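The second point can be illustrated with a toy simulation. The variances below are made up purely for illustration (this is not the method the twin studies use, and not evidence for the claim); the sketch only shows why a fixed shared-childhood term can end up as a small slice of adult trait variance once decades of fluctuating unique environment accumulate:

```python
import random

random.seed(0)

# Toy model (made-up variances, illustration only): an adult 'trait' built
# from a stable genetic term, a one-off shared-childhood-environment term,
# and thirty years of fluctuating unique environment.
def simulate_person():
    genes = random.gauss(0, 1.0)        # fixed for life
    shared_env = random.gauss(0, 0.5)   # childhood home, neighbourhood, school
    unique_env = sum(random.gauss(0, 0.3) for _ in range(30))  # yearly shocks
    return genes, shared_env, unique_env

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

people = [simulate_person() for _ in range(10_000)]
genes, shared, unique = zip(*people)

total = variance(genes) + variance(shared) + variance(unique)
for name, part in (("genes", genes), ("shared env", shared), ("unique env", unique)):
    print(f"{name}: {variance(part) / total:.0%} of trait variance")
```

With these made-up numbers the shared-environment slice comes out much smaller than either the genetic term or the accumulated unique environment, which is all the sketch is meant to show.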

Clearly, if you’re a kid with drive, enthusiasm and aspirations, but grow up on a sink estate in an area of high social and economic deprivation where the only wealthy people with high social status are drug dealers, you’re far more likely to end up with rather dodgy career prospects than a child with similar character traits who lives in a leafy suburb and attends Eton. (I’ve blogged elsewhere about the impact of life events on child development and long-term outcomes, in a series of posts starting here.)

In other words, parents and teachers might have little influence over behavioural traits, but they can make a huge difference to outcomes for a child, by equipping them (or not) with the knowledge and strategies they need to make the most of what they’ve got. From other things David has written, I don’t think he’d disagree. I think what he’s trying to do in this post is put paid to the popular idea that parents (and teachers) have a significant long-term influence on children’s behavioural traits. They clearly don’t. But in this post he doesn’t make a clear distinction between behavioural traits and outcomes, and I suggest that’s one reason his post resulted in so much heated discussion.

genes, environment and the scientific method

I’m not sure where his argument goes after he makes the point about character education. He goes on to suggest that anyone who queries his conclusions about the twin studies is dismissing the scientific method, which seems a bit of a stretch, and finishes the post with a series of ‘empirical questions’ that appear to reflect some pet peeves about current educational practices, rather than testing hypotheses about behaviour per se.

So it’s not surprising that some people got hold of the wrong end of the stick. The behavioural framework encompassing traits, states and outcomes is an important one, and I wish that, instead of going off at tangents, he’d explored it in more detail.

*If you’re interested,  Neuroconstructivism by Mareschal et al and Rethinking Innateness by Elman et al. are well worth reading on gene-environment interactions during children’s development.  Not exactly easy reads, but both reward effort.

references

Bouchard, T. (2004).  Genetic influence on human psychological traits.  Current Directions in Psychological Science, 13, 148-151.

Elman, J. L., Bates, E.A., Johnson, M., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking Innateness: A Connectionist Perspective on Development.  Cambridge, MA: MIT Press.

Karmiloff-Smith A (1998). Development itself is the key to understanding developmental disorders. Trends in Cognitive Sciences, 2, 389-398.

Levitis, D.A., Lidicker, W.Z., & Freund, G. (2009). Behavioural biologists do not agree on what constitutes behaviour. Animal Behaviour, 78 (1), 103-110.

Mareschal, D., Johnson, M., Sirois, S., Spratling, M.W., Thomas, M.S.C. & Westermann, G. (2007). Neuroconstructivism: How the brain constructs cognition, Vol. I. Oxford University Press.