Robert Peal has posted a series of responses to critics of his book Progressively Worse here. The second is on ‘data and dichotomies’. In this post I want to comment on some of the things he says about data and evidence.
when ‘evidence doesn’t work’
Robert* refers back to a previous post entitled ‘When evidence doesn’t work’ summarising several sessions at the ResearchED conference held at Dulwich College last year. He rightly draws attention to the problem of hard-to-measure outcomes, and to which outcomes we decide to measure in the first place. But he appears to conclude that there are some things – ideology, morality, values – that are self-evidently good or bad and that are outside the remit of evidence.
In his response to critics, Robert claims that one reason ‘evidence doesn’t work’ is because “some of the key debates in education are based on value judgements, not efficacy.” This is certainly true – and those key debates have resulted in a massive waste of resources in education over the past 140 years. There’s been little consensus on what long-term outcomes people want from the education system, what short-term outcomes they want, what pedagogies are effective and how effectiveness can be assessed. If a decision as to whether Shakespeare ‘should’ be studied at GCSE is based on value judgements it’s hardly surprising it’s been the subject of heated debate for decades. Robert’s conclusion appears to be that heated debate about value judgements is inevitable because values aren’t things that lend themselves to being treated as evidence. I disagree.
I think he draws this conclusion because his view of data is rather limited. Data don’t just consist of ‘things we can easily measure’ like exam results (Robert’s second reason why ‘evidence doesn’t work’). They don’t have to involve measuring things at all; qualitative data can be very informative. Let’s take the benefits of studying Shakespeare in school. Robert asks “Can an RCT tell us, for example, whether secondary school pupils benefit from studying Shakespeare?” If it was carefully controlled it could, though we would have to tackle the question of what outcomes to measure. But randomised controlled trials are only one of many methods for gathering data. Collecting qualitative data from a representative sample of the population about the impact studying Shakespeare had had on their lives could give some insights, not only into whether Shakespeare should be studied in school, but into how his work should be studied, and into whether people should have the opportunity to undertake some formal study of Shakespeare in later life if they wanted to. People might appreciate actually being asked.
I don’t know whether Robert sees me as what he refers to as a ‘data bore’, but if he does I accept the epithet as a badge of honour. For the record, however, not only has a skinny latte never passed my lips, but the word ‘nuanced’ has never done so either (not in public, at least). Nor do I have a “lofty disdain for anything so naïve as ‘having an opinion’”.
I’m more than happy for people to have opinions, to express them, and for them to be taken into account when education policy is being devised. But not all opinions are equal. They range from professional, expert opinion derived from thorough theoretical knowledge and familiarity with a particular research literature, through well-informed personal opinion, to someone simply liking or not liking something without having a clue why. I would not want to receive medical treatment based on a vox pop carried out in my doctor’s waiting room, nor do I want a public sector service to be designed on a similar basis. If it is, then the people who voice their opinions most loudly are likely to get what they want, leaving the rest of us, ‘data bores’ included, to work on the damage limitation.
rationality and values
Robert appears to have a deep suspicion of rationality. He says “rational man believes that they can make their way in the world without recourse to the murky business of ideology and morality, or to use a more contemporary term, ‘values’.” He also says it was ‘terrific’ to hear Sam Freedman expound the findings of Jonathan Haidt and Daniel Kahneman “about the dominance of the subconscious, emotional part of our minds, over the logical, conscious part.” He could add Antonio Damasio to that list. There’s little doubt that our judgement and decision-making are dominated by the subconscious emotional part of our minds. That doesn’t mean it’s a good thing.
Ideology, morality and values can inspire people to do great things, and rationality can inflict appalling damage, but it’s not always like that. Every significant step that’s ever been taken towards reducing infant mortality, maternal mortality, disease, famine, poverty and conflict, and every technological advance ever made, has involved people using the ‘logical conscious part’ of their minds as well as, or instead of, the ‘subconscious emotional part’. Those steps have sometimes involved a lifetime’s painstaking work in the teeth of bitter opposition. In contrast, many of the victims of ideology, morality and values lie buried where they fell on the world’s battlefields.
Robert’s last point about data is that they are “simply not able to ‘speak for themselves’. Its voice is always mediated by human judgement.” That’s not quite the impression given on page 4 of his book when referring to a list of statistics he felt showed there was a fundamental problem in British education. In the case of these statistics, ‘the bare figures are hard to ignore’.
Robert is quite right that the voice of the data is always mediated by human judgement, but we have devised ways of interpreting data that make them less susceptible to bias. The data are perfectly capable of speaking for themselves, if we know how to listen to them. Clearly the researcher, like the historian, suffers from selection bias, but some fields of discourse, unlike history it seems, have developed robust methodologies to address it. The biggest problem faced by the data is that they can’t get a word in edgeways because of all the opinion being voiced.
According to this tweet from Civitas…
Robert says he has responded to criticism in blogs by Tim Taylor, Guy Woolnough and myself. I’m doubtless biased, but the comment most closely resembling ‘venom’ that I could find was actually in a scurrilous tweet from Debra Kidd, shown in Robert’s third response to his critics. Debra, shockingly for a teacher, uses a four-letter-word to describe Robert’s description of state schools as ‘a persistent source of national embarrassment’. She calls it ‘tosh’. If Civitas thinks that’s venom, it clearly has little experience of academia, politics or the playground. Rather worrying on all counts, if it’s a think tank playing a significant role in education reform.
* I felt we should be on first name terms now we’ve had a one-to-one conversation about statistics.
§ Image courtesy Christian Fischer from Britannica Kids.
It’s not really a venomous data bore; it’s a metallic wood-boring beetle. It’s not really metallic either, it just looks like it. Nor does the beetle bore wood; its larvae do. Words can be so misleading.