play or direct instruction in early years?

One of the challenges levelled at advocates of the importance of play for learning in the Early Years Foundation Stage (EYFS) has been the absence of solid evidence for its importance. Has anyone ever tested this theory? Where are the randomised controlled trials?

The assumption that play is an essential vehicle for learning is widespread and has for many years dominated approaches to teaching young children. But is it anything more than an assumption?  I can understand why critics have doubts.  After all, EY teachers tend to say “Of course play is important. Why would you question that?” rather than “Of course play is important (Smith & Jones, 1943; Blenkinsop & Tompkinson, 1972).”  I think there are two main reasons why EY teachers tend not to cite the research.

why don’t EY teachers cite the research?

First, the research about play is mainly from the child development literature rather than the educational literature. There’s a vast amount of it and it’s pretty robust, showing how children use play to learn how the world works: What does a ball do? How does water behave? What happens if…?  If children did not learn through play, much of the research would have been impossible.

Second, you can observe children learning through play. In front of your very eyes. A kid who can’t post all the bricks in the right holes at the beginning of a play session can do so at the end. A child who doesn’t know how to draw a cat when they sit down with the crayons can do so a few minutes later.

Play is so obviously the primary vehicle for learning used by young children, that a randomised controlled trial of the importance of play in learning would be about as ethical as one investigating the importance of food for growth, or the need to hear talk to develop speech.

what about play at school?

But critics have another question: children can play at home – why waste time playing in school when they could use that time to learn something useful, like reading, writing or arithmetic? Advocates for learning through play often argue that a child has to be developmentally ‘ready’ before they can successfully engage in such tasks, and that play facilitates that developmental ‘readiness’. By developmentally ‘ready’, they’re not necessarily referring to some hypothetical, questionable Piagetian ‘stages’, but to whether the child has developed the capability to carry out the educational tasks. You wouldn’t expect a six-month-old to walk – their leg muscles and sense of balance wouldn’t be sufficiently well developed. Nor would you expect the average 18-month-old to read – they wouldn’t have the necessary language skills.

Critics might point out that a better use of time would be to teach the tasks directly. “These are the shapes you need to know about.” “This is how you draw a cat.” Why not ‘just tell them’ rather than spend all that time playing?

There are two main reasons why play is a good vehicle for learning at the Early Years stage. One is that young children are highly motivated to play. Play involves a great deal of trial-and-error, an essential mechanism for learning in many contexts. The variable reinforcement that happens during trial-and-error play is strongly motivating for mammals, and human beings are no exception.

The other reason is that, during play, there is a great deal of incidental learning going on. When posting bricks, children develop manual dexterity and learn about colour, number, texture, materials, shapes and angles. Drawing involves learning about shape, colour, 2-D representation of 3-D objects and, again, manual dexterity. Approached as play, both activities could also expand a child’s vocabulary and enable them to learn how to co-operate, collaborate or compete with others. Play offers a high learning return for a small investment of time and resources.

why not ‘just tell them’?

But isn’t ‘just telling them’ a more efficient use of time?   Sue Cowley, a keen advocate of the importance of play in Early Years, recently tweeted a link to an article in Psychology Today by Peter Gray, a researcher at Boston College. It’s entitled “Early Academic Training Produces Long-Term Harm”.

This is a pretty dramatic claim, and for me it raised a red flag – or at least an amber one. I’ve read through several longitudinal studies about children’s long-term development and they all have one thing in common: they show that the impact of early experiences (good and bad) is often moderated by later life events. ‘Delinquents’ settle down and become respectable married men with families; children from exemplary middle-class backgrounds get in with the wrong crowd in their teens and go off the rails; the improvements in academic achievement resulting from a language programme in kindergarten have all but disappeared by third grade. The findings set out in Gray’s review article didn’t square with those of other longitudinal studies. Also, review articles can sometimes skate over crucial methodological details that call the conclusions of the original studies into question.

what the data tell us

So I was somewhat sceptical about Dr Gray’s claims – until I read the references (at least, three of the references – I couldn’t access the second). The studies he cites compared outcomes from three types of pre-school programme: High/Scope, direct instruction (including the DISTAR programme), and a traditional nursery pre-school curriculum. Some of the findings weren’t directly related to long-term outcomes but caught my attention:

  • In first, second and third grades, school districts used retention in grade rather than special education services for children experiencing learning difficulties (Marcon).
  • Transition (in this case grade 3 to 4) was followed by a dip in children’s academic performance (Marcon).
  • Because of the time that had elapsed since the original interventions, there had been ample opportunity for methodological criticisms to be addressed and resolved (Schweinhart & Weikart).
  • Mothers’ educational level was a significant factor (as in other studies) (Schweinhart & Weikart).
  • Small numbers of teachers were involved, so individual teachers could have had a disproportionate influence (Schweinhart & Weikart).
  • The lack of cited evidence supporting the Common Core State Standards (Carlsson-Paige et al.).

Essentially, the studies cited by Dr Gray found that educational approaches featuring a significant element of child-initiated learning result in better long-term outcomes overall (including high school graduation rates) than those featuring direct instruction. The reasons aren’t entirely clear. Peter Gray and some of the researchers suggested that the home visits featured in all the programmes might have played a significant role: if parents had bought into a programme’s ethos (likely, given regular home visits from teachers), then children expected to focus on academic achievement both at school and at home might have had fewer opportunities for early incidental learning about social interaction – learning that could shape their behaviour in adulthood.

The research findings provided an unexpected answer to a question I have repeatedly asked of proponents of Engelmann’s DISTAR programme (featured in one of the studies) but to which I’ve never managed to get a clear answer: what were the long-term outcomes of the programme? Initially, children who had followed direct instruction programmes performed significantly better in academic tests than those who hadn’t, but the gains disappeared after a few years, and the long-term outcomes included more years in special education and, later, significantly more felony arrests and assaults with dangerous weapons.

This wasn’t what I was expecting. What I was expecting was the pattern that emerged from the Abecedarian study: academic gains after early intervention peter out after a few years, but there are marginal long-term benefits. Transient and marginal improvements are not to be sniffed at. ‘Falling behind’ early on at school can have a devastating impact on a child’s self-esteem, and only a couple of young people choosing college rather than teenage parenthood or petty crime can make a big difference to a neighbourhood.

The most likely reason for the tail-off in academic performance is that the programme was discontinued, but the overall worse outcomes for the direct instruction children than for those in the control group are counterintuitive. Of course it doesn’t follow that direct instruction caused the worse outcomes. The results of the interventions are presented at the group level; it would be necessary to look at the pathways followed by individuals to identify why they dropped out of high school or were arrested.
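
A toy sketch in Python (the numbers are invented and come from none of the studies) of why group-level figures under-determine individual pathways:

```python
# Two hypothetical stories that produce identical group-level arrest rates.
# All numbers are invented for illustration only.

# Story 1: the whole direct-instruction group is slightly more likely to be arrested.
story_1 = {"direct instruction": [1] * 13 + [0] * 87, "control": [1] * 7 + [0] * 93}

# Story 2: most of the direct-instruction group looks just like the control group,
# but a small subgroup (perhaps shaped by factors unrelated to the curriculum)
# accounts for all of the extra arrests.
story_2 = {"direct instruction": [1] * 7 + [0] * 87 + [1] * 6, "control": [1] * 7 + [0] * 93}

for name, groups in [("Story 1", story_1), ("Story 2", story_2)]:
    rates = {group: sum(flags) / len(flags) for group, flags in groups.items()}
    print(name, {group: f"{rate:.0%}" for group, rate in rates.items()})

# Both stories print the same rates (13% vs 7%); only individual-level pathway data
# could tell us which story, if either, is closer to the truth.
```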

conclusion

There’s no doubt that early direct instruction improves children’s academic performance in the short term. That’s a desirable outcome, particularly for children who would otherwise ‘fall behind’. However, from these studies, direct instruction doesn’t appear to have the long-term impact sometimes claimed for it: that it will address the problem of ‘failing’ schools, that it will significantly reduce functional illiteracy, or that early intervention will eradicate the social problems that cause so much misery and perplex governments. In fact, these studies suggest that direct instruction results in worse outcomes. Hopefully, further research will tell us whether that is a valid finding and, if so, why it happened.

I’ve just found a post by Greg Ashman drawing attention to a critique of the High/Scope studies.  Worth reading.  [edit 21/4/17]

References

Carlsson-Paige, N., McLaughlin, G.B. and Almon, J.W. (2015). “Reading Instruction in Kindergarten: Little to Gain and Much to Lose”. Published online by the Alliance for Childhood. http://www.allianceforchildhood.org/sites/allianceforchildhood.org/files…

Gray, P. (2015). “Early Academic Training Produces Long-Term Harm”. Psychology Today. https://www.psychologytoday.com/blog/freedom-learn/201505/early-academic-training-produces-long-term-harm

Marcon, R.A. (2002). “Moving up the grades: Relationship between preschool model and later school success”. Early Childhood Research & Practice, 4(1). http://ecrp.uiuc.edu/v4n1/marcon.html

Schweinhart, L.J. and Weikart, D.P. (1997). “The High/Scope Preschool Curriculum Comparison Study through age 23”. Early Childhood Research Quarterly, 12, pp. 117-143. https://pdfs.semanticscholar.org/c339/6f2981c0f60c9b33dfa18477b885c5697e1d.pdf

Sue Cowley is a robust advocate of the importance of play in learning: https://suecowley.wordpress.com/2014/08/09/early-years-play-is/

A tale of two Blobs

The think-tank Civitas has just published a 53-page pamphlet written by Toby Young and entitled ‘Prisoners of The Blob’. ‘The Blob’, for the uninitiated, is the name applied by the UK’s Secretary of State for Education, Michael Gove, to ‘leaders of the teaching unions, local authority officials, academic experts and university education departments’, described by Young as ‘opponents of educational reform’. The name’s not original. Young says it was coined by William J Bennett, a former US Education Secretary; it was also used by Chris Woodhead, the first Chief Inspector of Ofsted, in his book Class War.

It’s difficult to tell whether ‘The Blob’ is actually an amorphous fog-like mass whose members embrace an identical approach to education, as Young claims, or whether such a diverse range of people espouse such a diverse range of views that it’s difficult for people who would like life to be nice and straightforward to understand all the differences.

Young says:

“They all believe that skills like ‘problem-solving’ and ‘critical thinking’ are more important than subject knowledge; that education should be ‘child-centred’ rather than ‘didactic’ or ‘teacher-led’; that ‘group work’ and ‘independent learning’ are superior to ‘direct instruction’; that the way to interest children in a subject is to make it ‘relevant’; that ‘rote-learning’ and ‘regurgitating facts’ is bad, along with discipline, hierarchy, routine and anything else that involves treating the teacher as an authority figure. The list goes on.” (p.3)

It’s obvious that this is a literary device rather than a scientific analysis, but that’s what bothers me about it.

Initially, I had some sympathy with the advocates of ‘educational reform’. The national curriculum had a distinctly woolly appearance in places; enforced group-work, and being required to imagine how historical figures must have felt, drove my children to distraction; and the approach to behaviour management at their school seemed incoherent. So when I started to come across references to educational reform based on evidence, and to the importance of knowledge and of skills being domain-specific, I was relieved. When I found that applying findings from cognitive science to education was being advocated, I got quite excited.

My excitement was short-lived. I had imagined that a community of researchers had been busily applying cognitive science findings to education, that the literatures on learning and expertise were being thoroughly mined and that an evidence-based route-map was beginning to emerge. Instead, I kept finding references to the same small group of people.

Most fields of discourse are dominated by a few individuals. Usually they are researchers responsible for significant findings or major theories. A new or specialist field might be dominated by only two or three people. The difference here is that education straddles many different fields of discourse (biology, psychology, sociology, philosophy and politics, plus a range of subject areas), so I found it a bit odd that the same handful of names kept cropping up. I would have expected a major reform of the education system to have had a wider evidence base.

Evaluating the evidence

And then there was the evidence itself. I might be looking in the wrong place, but so far, although I’ve found a few references, I’ve uncovered no attempts by proponents of educational reform to evaluate the evidence they cite.

A major flaw in human thinking is confirmation bias. To represent a particular set of ideas, we develop a mental schema. Every time we encounter the same set of ideas, the neural network that carries the schema is activated. The more it’s activated, the more readily it’s activated in future. This means that any configuration of ideas that contradicts a pre-existing schema has, almost literally, to swim against the electrochemical tide. It’s going to take a good few reiterations of the new idea set before a strongly embedded pre-existing schema is likely to be overridden by a new one. Consequently, we tend to favour evidence that confirms our existing views, and find it difficult to see things in a different way.

The best way we’ve found to counteract confirmation bias in the way we evaluate evidence is through hypothesis testing. Essentially, you come up with a hypothesis and then try to disprove it. If you can’t, it doesn’t mean your hypothesis is right; it just means you can’t yet rule it out. Hypothesis testing as such is mainly used in the sciences, but the same principle underlies formal debating, the adversarial approach in courts of law, and having an opposition to the government in parliament. The last two examples are often viewed as needlessly combative, when actually their job is to spot flaws in what other people are saying. How well they do that job is another matter.
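
As a concrete illustration of the ‘try to disprove it’ principle, here is a minimal Python sketch of a permutation test. The scores and group labels are invented for the example – they come from none of the studies discussed here:

```python
# Toy permutation test: can we rule out 'the gap between groups is just chance'?
# All numbers below are invented for illustration only.
import random

random.seed(1)

programme = [68, 72, 75, 71, 80, 77, 69, 74]   # hypothetical scores: children who followed a programme
comparison = [70, 66, 73, 65, 71, 68, 72, 64]  # hypothetical scores: children who didn't

observed_gap = sum(programme) / len(programme) - sum(comparison) / len(comparison)

# The rival explanation we try to knock down: the programme makes no difference,
# so the group labels are arbitrary. Count how often randomly shuffled labels
# produce a gap at least as large as the one observed.
pooled = programme + comparison
n = len(programme)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    gap = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
    if gap >= observed_gap:
        extreme += 1

print(f"observed gap: {observed_gap:.2f} points, p = {extreme / trials:.3f}")
# A small p-value makes 'it was just chance' hard to sustain. The hypothesis has
# survived one attempt to disprove it, but that still doesn't prove it is right.
```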

It’s impossible to tell at first glance whether a small number of researchers have made a breakthrough in education theory, or whether their work is simply being cited to affirm a set of beliefs. My suspicion that it might be the latter was strengthened when I checked out the evidence.

The evidence

John Hattie conducted a meta-analysis of over 800 studies of student achievement. My immediate thought when I came across his work was of the well-documented problems associated with meta-analyses. Hattie does discuss these, but I’m not convinced he disposed of one key issue: the garbage-in-garbage-out problem. A major difficulty with meta-analyses is ensuring that all the studies involved use the same definitions for the constructs they are measuring, and I couldn’t find a discussion of what Hattie (or other researchers) mean by ‘achievement’. I assume that Hattie uses test scores as a proxy measure of achievement. This is fine if you think the job of schools is to ensure that children learn what somebody has decided they should learn. But that assumption poses problems. One is who determines what students should learn. Another is what happens to students who, for whatever reason, can’t learn at the same rate as the majority. And a third is how the achievement measured in Hattie’s study maps on to achievement in later life. What’s noticeable about the biographies of many ‘great thinkers’ – Darwin and Einstein are prominent examples – is how many of them didn’t do very well in school. It doesn’t follow that Hattie is wrong – Darwin and Einstein might have been even greater thinkers if their schools had adopted his recommendations – but it’s a question Hattie doesn’t appear to address.
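
To illustrate the worry with a toy example (entirely invented numbers, not a re-analysis of Hattie’s data), here is a short Python sketch of inverse-variance pooling in which two of the four studies attach the label ‘achievement’ to a different construct:

```python
# Toy sketch of the garbage-in-garbage-out problem in meta-analysis.
# All effect sizes and variances are invented for illustration only.

# (study, effect size d, variance of d, what 'achievement' actually measured)
studies = [
    ("A", 0.60, 0.04, "standardised test score"),
    ("B", 0.55, 0.05, "standardised test score"),
    ("C", 0.10, 0.04, "teacher-rated engagement"),  # different construct, same label
    ("D", 0.05, 0.05, "teacher-rated engagement"),
]

def pooled_effect(subset):
    """Fixed-effect (inverse-variance weighted) mean effect size."""
    weights = [1.0 / var for _, _, var, _ in subset]
    return sum(w * d for w, (_, d, _, _) in zip(weights, subset)) / sum(weights)

print("All four studies pooled:", round(pooled_effect(studies), 2))      # ~0.33
print("Test-based studies only:", round(pooled_effect(studies[:2]), 2))  # ~0.58
print("Engagement studies only:", round(pooled_effect(studies[2:]), 2))  # ~0.08

# The headline pooled figure sits between two quite different answers, because
# 'achievement' meant different things in different studies; averaging hides that.
```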

Siegfried Engelmann and Wesley C Becker developed a system called Direct Instruction System for Teaching Arithmetic and Reading (DISTAR) that was shown to be effective in Project Follow Through – an evaluation of a number of educational approaches in the US education system over a 30-year period starting in the 1960s. There’s little doubt that Direct Instruction is more effective than many other systems at raising academic achievement and self-esteem. The problem is, again, who decides what students learn, what happens to students who don’t benefit as much as others, and what’s meant by ‘achievement’.

E.D. Hirsch developed the Core Knowledge sequence – essentially an off-the-shelf curriculum that’s been adapted for the UK and is available from Civitas. The US Core Knowledge sequence has a pretty obvious underlying rationale, even if some might question its stance on some points. The same can’t be said of the UK version. Compare, for example, the content of US Grade 1 History and Geography with that of the UK version for Year 1. The US version includes Early People and Civilisations and the History of World Religion – all important for understanding how human geography and cultures have developed over time. The UK version focuses on British Pre-history and History (with an emphasis on the importance of literacy), followed by Kings and Queens, Prime Ministers, then Symbols and Figures – namely the Union Jack, Buckingham Palace, 10 Downing Street and the Houses of Parliament – despite the fact that few children in Y1 are likely to understand how or why these people or symbols came to be important. Although the strands of world history and British history are broadly chronological, Y4s study Ancient Rome alongside the Stuarts, and Y6s the American Civil War potentially before the Industrial Revolution.

Daniel Willingham is a cognitive psychologist and the author of Why don’t students like school? A cognitive scientist answers questions about how the mind works and what it means for the classroom and When can you trust the experts? How to tell good science from bad in education. He also writes a column for American Educator magazine. I found Willingham informative on cognitive psychology. However, I felt his view of education was a rather narrow one. There’s nothing wrong with applying cognitive psychology to how teachers teach the curriculum in schools – it’s just that learning and education involve considerably more than that.

Kirschner, Sweller and Clark have written several papers about the limitations of working memory and their implications for education. In my view, their analysis has three key weaknesses: they arbitrarily lump together a range of education methods as if they were essentially the same; they base their theory on an outdated and incomplete model of memory; and they conclude that only one teaching approach is effective – explicit, direct instruction – ignoring the fact that knowledge comes in different forms.

Conclusions

I agree with some of the points made by the reformers:
• I agree with the idea of evidence-based education – the more evidence the better, in my view.
• I have no problem with children being taught knowledge. I don’t subscribe to a constructivist view of education – in the sense that we each develop a unique understanding of the world and everybody’s worldview is as valid as everybody else’s – although cognitive science has shown that everybody’s construction of knowledge is unique. We know that some knowledge is more valid and/or more reliable than other knowledge and we’ve developed some quite sophisticated ways of figuring out what’s more certain and what’s less certain.
• The application of findings from cognitive science to education is long overdue.
• I have no problem with direct instruction (as distinct from Direct Instruction) per se.

However, some of what I read gave me cause for concern:
• The evidence-base presented by the reformers is limited and parts of it are weak and flawed. It’s vital to evaluate evidence, not just to cite evidence that at face value appears to support what you already think. And a body of evidence isn’t a unitary thing; some parts of it can be sound whilst other parts are distinctly dodgy. It’s important to be able to sift through it and weigh up the pros and cons. Ignoring contradictory evidence can be catastrophic.
• Knowledge, likewise, isn’t a unitary thing; it can vary in terms of validity and reliability.
• The evidence from cognitive science also needs to be evaluated. It isn’t OK to assume that just because cognitive scientists say something it must be right; cognitive scientists certainly don’t do that. Being able to evaluate cognitive science might entail learning a fair bit about cognitive science first.
• Direct instruction, like any other educational method, is appropriate for acquiring some types of knowledge. It isn’t appropriate for acquiring all types of knowledge. The problem with approaches such as discovery learning and child-led learning is not that there’s anything inherently wrong with the approaches themselves, but that they’re not suitable for acquiring all types of knowledge.

What has struck me most forcibly about my exploration of the evidence cited by the education reformers is that, although I agree with some of the reformers’ reservations about what’s been termed ‘minimal instruction’ approaches to education, the reformers appear to be ignoring their own advice. They don’t have extensive knowledge of the relevant subject areas, they don’t evaluate the relevant evidence, and the direct instruction framework they are advocating – certainly the one Civitas is advocating – doesn’t appear to have a structure derived from the relevant knowledge domains.

Rather than a rational, evidence-based approach to education, the ‘educational reform’ movement has all the hallmarks of a belief system that’s using evidence selectively to support its cause; and that’s what worries me. This new Blob is beginning to look suspiciously like the old one.