synthetic phonics, dyslexia and natural learning

Too intense a focus on the virtues of synthetic phonics (SP) can, it seems, result in related issues getting a bit blurred. I discovered that some whole language supporters do appear to have been ideologically motivated but that the whole language approach didn’t originate in ideology. And as far as I can tell we don’t know if SP can reduce adult functional illiteracy rates. But I wouldn’t have known either of those things from the way SP is framed by its supporters. SP proponents also make claims about how the brain is involved in reading. In this post I’ll look at two of them: dyslexia and natural learning.

Dyslexia

Dyslexia started life as a descriptive label for the reading difficulties adults can develop due to brain damage caused by a stroke or head injury. Some children were observed to have similar reading difficulties despite otherwise normal development. The adults’ dyslexia was acquired (they’d previously been able to read) but the children’s dyslexia was developmental (they’d never learned to read). The most obvious conclusion was that the children also had brain damage – but in the early 20th century when the research started in earnest there was no easy way to determine that.

Medically, developmental dyslexia is still only a descriptive label meaning ‘reading difficulties’ (causes unknown, might/might not be biological, might vary from child to child). However, dyslexia is now also used to denote a supposed medical condition that causes reading difficulties. This new usage is something that Diane McGuinness complains about in Why Children Can’t Read.

I completely agree with McGuinness that this use isn’t justified and has led to confusion and unintended and unwanted outcomes. But I think she muddies the water further by peppering her discussion of dyslexia (pp. 132-140) with debatable assertions such as:

“We call complex human traits ‘talents’”.

“Normal variation is on a continuum but people working from a medical or clinical model tend to think in dichotomies…”.

“Reading is definitely not a property of the human brain”.

“If reading is a biological property of the brain, transmitted genetically, then this must have occurred by Lamarckian evolution.”

Why debatable? Because complex human traits are not necessarily ‘talents’; clinicians tend to be more aware of normal variation than most people; reading must be a ‘property of the brain’ if we need a brain to read; and the research McGuinness refers to didn’t claim that ‘reading’ was transmitted genetically.

I can understand why McGuinness might be trying to move away from the idea that reading difficulties are caused by a biological impairment that we can’t fix. After all, the research suggests SP can improve the poor phonological awareness that’s strongly associated with reading difficulties. I get the distinct impression, however, that she’s uneasy with the whole idea of reading difficulties having biological causes. She concedes that phonological processing might be inherited (p.140) but then denies that a weakness in discriminating phonemes could be due to organic brain damage. She’s right that brain scans had revealed no structural brain differences between dyslexics and good readers. And in scans that show functional variations, the ability to read might be a cause, rather than an effect.

But as McGuinness herself points out reading is a complex skill involving many brain areas, and biological mechanisms tend to vary between individuals. In a complex biological process there’s a lot of scope for variation. Poor phonological awareness might be a significant factor, but it might not be the only factor. A child with poor phonological awareness plus visual processing impairments plus limited working memory capacity plus slow processing speed – all factors known to be associated with reading difficulties – would be unlikely to find those difficulties eliminated by SP alone. The risk in conceding that reading difficulties might have biological origins is that using teaching methods to remediate them might then be called into question – just what McGuinness doesn’t want to happen, and for good reason.

Natural and unnatural abilities

McGuinness’s view of the role of biology in reading seems to be derived from her ideas about the origin of skills. She says:

“It is the natural abilities of people that are transmitted genetically, not unnatural abilities that depend upon instruction and involve the integration of many subskills”. (p.140, emphasis McGuinness)

This is a distinction often made by SP proponents. I’ve been told that children don’t need to be taught to walk or talk because these abilities are natural and so develop instinctively and effortlessly. Written language, in contrast, is a recent man-made invention; there hasn’t been time to evolve a natural mechanism for reading, so we need to be taught how to do it and have to work hard to master it. Steven Pinker, who wrote the foreword to Why Children Can’t Read, seems to agree. He says “More than a century ago, Charles Darwin got it right: language is a human instinct, but written language is not” (p.ix).

Although that’s a plausible model, what Pinker and McGuinness fail to mention is that it’s also a controversial one. The part played by nature and nurture in the development of language (and other abilities) has been the subject of heated debate for decades. The reason for the debate is that the relevant research findings can be interpreted in different ways. McGuinness is entitled to her interpretation but it’s disingenuous in a book aimed at a general readership not to tell readers that other researchers would disagree.

Research evidence suggests that the natural/unnatural skills model has got it wrong. The same natural/unnatural distinction was made recently in the case of part of the brain called the fusiform gyrus. In the fusiform gyrus, visual information about objects is categorised. Different types of objects, such as faces, places and small items like tools, have their own dedicated locations. Because those types of objects are naturally occurring, researchers initially thought their dedicated locations might be hard-wired.

But there’s also a word recognition area. And in experts, the faces area is also used for cars, chess positions, and specially invented items called greebles. To become an expert in any of those things you require some instruction – you’d need to learn the rules of chess or the names of cars or greebles. But your visual system can still learn to accurately recognise, discriminate between and categorise many thousands of items like faces, places, tools, cars, chess positions and greebles simply through hours and hours of visual exposure.

Practice makes perfect

What claimants for ‘natural’ skills also tend to overlook is how much rehearsal goes into them. Most parents don’t actively teach children to talk, but babies hear and rehearse speech for many months before they can say recognisable words. Most parents don’t teach toddlers to walk, but it takes young children years to become fully stable on their feet despite hours of daily practice.

There’s no evidence that as far as the brain is concerned there’s any difference between ‘natural’ and ‘unnatural’ knowledge and skills. How much instruction and practice knowledge or skills require will depend on their transparency and complexity. Walking and bike-riding are pretty transparent; you can see what’s involved by watching other people. But they take a while to learn because of the complexity of the motor co-ordination and balance involved. Speech and reading are less transparent and more complex than walking and bike-riding, so take much longer to master. But some children require intensive instruction in order to learn to speak, and many children learn to read with minimal input from adults. The natural/unnatural distinction is a false one and it’s as unhelpful as assuming that reading difficulties are caused by ‘dyslexia’.

Multiple causes

What underpins SP proponents’ reluctance to admit biological factors as causes for reading difficulties is, I suspect, an error often made when assessing cause and effect. It’s an easy one to make, but one that people advocating changes to public policy need to be aware of.

Let’s say for the sake of argument that we know, for sure, that reading difficulties have three major causes, A, B and C. The one that occurs most often is A. We can confidently predict that children showing A will have reading difficulties. What we can’t say, without further investigation, is whether a particular child’s reading difficulties are due to A. Or if A is involved, that it’s the only cause.
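A toy simulation makes the point concrete. To be clear, the prevalences below are invented purely for illustration, not real data; the point is the reasoning, not the numbers. Even when A is by far the commonest cause, a given struggling reader may not have A at all, and some who do have A have other causes as well:

```python
import random

random.seed(1)

# Invented prevalences for three independent causes of reading
# difficulties (illustrative numbers only, not real data).
p_a, p_b, p_c = 0.10, 0.04, 0.03

children = []
for _ in range(100_000):
    # Each child independently may have any combination of causes.
    causes = {name for name, p in (("A", p_a), ("B", p_b), ("C", p_c))
              if random.random() < p}
    children.append(causes)

# Assume (again, for illustration) that any cause produces difficulties.
struggling = [c for c in children if c]
with_a = [c for c in struggling if "A" in c]
only_a = [c for c in with_a if c == {"A"}]

print(len(struggling), "struggling readers")
print(len(with_a), "of those involve cause A")
print(len(only_a), "have A as the only cause")
```

In a run like this, a substantial minority of struggling readers turn out to have no A at all, and some of those with A also have B or C, so remediating A alone wouldn’t be expected to help every child.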

We know that poor phonological awareness is frequently associated with reading difficulties. Because SP trains children to be aware of phonological features in speech, and because that training improves word reading and spelling, it’s a safe bet that poor phonological awareness is also a cause of reading difficulties. But because reading is a complex skill, there are many possible causes for reading difficulties. We can’t assume that poor phonological awareness is the only cause, or that it’s a cause in all cases.

The evidence that SP improves children’s decoding ability is persuasive. However, the evidence also suggests that 12–15% of children will still struggle to learn to decode using SP. And that around 15% of children will struggle with reading comprehension. Having a method of reading instruction that works for most children is great, but education should benefit all children, and since the minority of children who struggle are the ones people keep complaining about, we need to pay attention to what causes reading difficulties for those children – as individuals. In education, one size might fit most, but it doesn’t fit all.

Reference

McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.


41 thoughts on “synthetic phonics, dyslexia and natural learning”

  1. Your use of the PSC figure is highly disingenuous. For a start, you are assuming that all YR/1 pupils are being effectively taught SP whereas we know, from previous NFER reports on the PSC that teachers themselves are reporting that they are not teaching SP effectively as they still mix the use of SP for word identification with their much loved ‘other strategies’. If, instead of taking England as a whole, you looked at schools which do teach SP properly you will find quite different percentages of children failing, or being slower, to learn.

    I find it curious that having had ‘phonics’ consistently held up as the prime method for teaching dyslexics for many years; in fact since Sam Orton’s research into dyslexia and the development of the Orton Gillingham programme in the 1930s, now that ‘phonics’ has become ‘mainstream’ we are suddenly being told that it ‘fails’ dyslexics.

    I think that it would have been more honest to have flagged this as a critique of McGuinness rather than imply that her views on dyslexia represent those of all SP advocates. She is not the onlie begetter of SP and her views are not uncritically accepted in every aspect by advocates. Perhaps some reference to Elliot & Grigorenko might have been useful.

    I still think you are stretching the definition of ‘natural’ by using it to cover all learned skills.

    • “If, instead of taking England as a whole, you looked at schools which do teach SP properly you will find quite different percentages of children failing, or being slower, to learn.”

I would love to read this to gain a balanced view. Is it possible for you to provide some pointers? Would be much appreciated. Thanks

    • Reply to Maggie:

      Your use of the PSC figure is highly disingenuous. For a start, you are assuming that all YR/1 pupils are being effectively taught SP whereas we know, from previous NFER reports on the PSC that teachers themselves are reporting that they are not teaching SP effectively as they still mix the use of SP for word identification with their much loved ‘other strategies’.

      In that case any citation of the PSC figure is disingenuous, surely. We should wait for however long it takes to get instruction up to scratch before drawing any conclusions.

If, instead of taking England as a whole, you looked at schools which do teach SP properly you will find quite different percentages of children failing, or being slower, to learn.

      Maggie, I completely accept that SP might not be being taught properly in many schools. But since we don’t have reliable data on schools which do teach SP properly, we can’t draw any conclusions about them. My main point in this series of posts is that SP advocates make a big deal out of there being scientific evidence to support a lot of claims, but when you look more closely, some of the evidence is a bit shaky. Shaky evidence isn’t a problem per se – it can point you to what needs to be researched next – but it’s important that people know it’s shaky.

      I find it curious that having had ‘phonics’ consistently held up as the prime method for teaching dyslexics for many years; in fact since Sam Orton’s research into dyslexia and the development of the Orton Gillingham programme in the 1930s, now that ‘phonics’ has become ‘mainstream’ we are suddenly being told that it ‘fails’ dyslexics.

      I don’t understand what you mean there.

      I think that it would have been more honest to have flagged this as a critique of McGuinness rather than imply that her views on dyslexia represent those of all SP advocates. She is not the onlie begetter of SP and her views are not uncritically accepted in every aspect by advocates. Perhaps some reference to Elliot & Grigorenko might have been useful.

      There are lots of people I could have quoted about dyslexia, but McGuinness’s view appears to be quite widely shared by people who don’t think dyslexia exists. I appreciate that not all SP advocates might agree with McGuinness but the only reason I read her in the first place was because they kept quoting her and I kept being urged to read her books. Some SP supporters might have criticised her ideas but I can’t ever recall observing any doing so.

      I still think you are stretching the definition of ‘natural’ by using it to cover all learned skills.

      I’m not stretching it, I’m referring to examples that have been cited by SP proponents. It might just be a small, vociferous number, but I’ve been involved in several debates about it and I’ve always found myself in the minority. Often a minority of one.

      • “We should wait for however long it takes to get instruction up to scratch before drawing any conclusions.”
        Or Hell has frozen over; which I think will be sooner.

        “I don’t understand what you mean there.”

If you don’t understand what I mean then I suggest that you are not following the debate very closely. Before ‘phonics’ became ‘mainstream’ it was the method used to ‘remediate’ dyslexics. Now it is frequently being suggested that the same children, the percentage who haven’t learned to read competently, are being failed by ‘phonics’. I have even seen ‘dyslexia specialists’ say this. It makes no sense that a remedy which has been considered paramount for many decades has suddenly become the cause of ‘dyslexia’.

        “I appreciate that not all SP advocates might agree with McGuinness but the only reason I read her in the first place was because they kept quoting her and I kept being urged to read her books.”

Probably because, for our generation, McGuinness is the first academic to have gathered all the evidence into some handy books which are easy to refer people to (as opposed to hundreds and hundreds of research studies which all contribute pieces to the overall picture). I am perfectly happy to read a critique of her work but I wouldn’t want an uninformed reader to get the impression that she is the sole and unquestionable authority on all things SP.

        (How do you get your quotes in bold?)

      • Reply to Maggie

        “I don’t understand what you mean there.”

If you don’t understand what I mean then I suggest that you are not following the debate very closely. Before ‘phonics’ became ‘mainstream’ it was the method used to ‘remediate’ dyslexics. Now it is frequently being suggested that the same children, the percentage who haven’t learned to read competently, are being failed by ‘phonics’. I have even seen ‘dyslexia specialists’ say this. It makes no sense that a remedy which has been considered paramount for many decades has suddenly become the cause of ‘dyslexia’.

        What I didn’t understand was your comment:

        “I find it curious that having had ‘phonics’ consistently held up as the prime method for teaching dyslexics for many years; in fact since Sam Orton’s research into dyslexia and the development of the Orton Gillingham programme in the 1930s, now that ‘phonics’ has become ‘mainstream’ we are suddenly being told that it ‘fails’ dyslexics.”

        I agree that I might not have been following the debate about remediating ‘dyslexia’ very closely. I wasn’t aware that phonics had been recommended for teaching dyslexics, nor that it was being claimed phonics fails them. Now I know that, I get your point.

        (How do you get your quotes in bold?)

        I use the WordPress comments editor. Don’t know how others do it.

  2. If it were possible to “start fresh” today, it would be hard to make a case for the terms, Synthetic Phonics, Dyslexia, or Natural Learning. But that’s NOT possible.

The term, Phonics, is deeply embedded in usage. The best that can be done is to nudge recognition of the structure and substance of the Alphabetic Code and to promote schemes and programmes that reliably teach children how to handle the Code.

The term, “Systematic Phonics” is embedded in the UK national curriculum, so the only sensible debate is how best to operationalize the curriculum. The situation is different in the US and in other English-speaking countries where neither the “Systematic” nor the “Synthetic” modifiers have traction. In all countries “Balanced Literacy”/Mixed Methods rule. Phonics proponents again won the skirmish but lost the battle, and the war goes on.

    The best route for cleaning up the terminology and ending the war is more intensive analysis of the Screening Check results.

    Ditto with respect to “Dyslexia.”

I don’t see that the “Natural-Unnatural” Learning distinction is bothering anyone but you, Sue. My reading of Diane McGuinness is that she didn’t make a “big deal” of the distinction per se. In trying to explain a technically complicated matter in terms that “everyone” can understand, the only way to do so is to “cut corners” here and there. Of course, to more sophisticated specialists, what is an inconsequential shortcut is a “critical error.” One can argue either way; personally, I’d cut Diane some slack in each of the quotes you list. Even were she “wrong,” it’s a “never mind” matter, it seems to me.

    The nature/nurture debate really isn’t relevant to schooling. Schools are stuck with/blessed with the kids that parents send them, and by statute public schools can’t rid themselves of kids for instructional reasons. This makes the teachers’ job today an impossible burden, but that’s a whole nother story.

    • Reply to Dick:

      The best route for cleaning up the terminology and ending the war is more intensive analysis of the Screening Check results.
      Ditto with respect to “Dyslexia.”

      Not if there’s no agreement on the ‘prescription’ for the ‘diagnosis’.

I don’t see that the “Natural-Unnatural” Learning distinction is bothering anyone but you, Sue.

      It bothers a lot of people, Dick. It’s bothered people since Chomsky came up with it in respect of language, since Pinker picked up the baton, and, as I pointed out in my post, it’s bothered people looking at visual processing.

My reading of Diane McGuinness is that she didn’t make a “big deal” of the distinction per se.

      Oh yes she does. For her it’s a key justification for SP. Why else would she get Pinker to write the Foreword?

      In trying to explain a technically complicated matter in terms that “everyone” can understand, the only way to do so is to “cut corners” here and there.

      No it isn’t. That’s just sloppy writing. It’s quite possible to avoid blinding readers with science, but still be accurate.

      Of course, to more sophisticated specialists, what is an inconsequential shortcut is a “critical error.” One can argue either way; personally, I’d cut Diane some slack in each of the quotes you list. Even were she “wrong,” it’s a “never mind” matter, it seems to me.

      It’s not an inconsequential shortcut at all. It’s being used to justify a policy emphasis on direct instruction. And ‘never mind’ errors are amplified the further they travel from home.

      The nature/nurture debate really isn’t relevant to schooling. Schools are stuck with/blessed with the kids that parents send them, and by statute public schools can’t rid themselves of kids for instructional reasons. This makes the teachers’ job today an impossible burden, but that’s a whole nother story.

      What makes the teachers’ job an impossible burden is the use of education as a political football. An entire industry has grown up around measuring the education system, which is pointless, counterproductive and a vast waste of public money.

I think the ‘natural/unnatural’ distinction is bothering Sue because she will not agree that there is a difference between skills, such as walking and communicating through language which most young humans will learn without any instruction whatsoever; nobody teaches babies to crawl or shuffle on their bottoms, they don’t even demonstrate the technique, yet most babies end up moving around in such a way, but you could expose children to the written word for ever and ever and without instruction they are most unlikely to learn to read it. By Sue’s reasoning it appears that one could say that *anything at all* which a human does is ‘natural’ because all human activities exploit human physical and neurological capabilities.

      I feel that one’s stance on what is and isn’t ‘natural’ is a matter of opinion rather than fact and that a distinction between a skill which doesn’t need to be taught and one which does is a perfectly reasonable one.

      • Reply to Maggie

I think the ‘natural/unnatural’ distinction is bothering Sue because she will not agree that there is a difference between skills, such as walking and communicating through language which most young humans will learn without any instruction whatsoever; nobody teaches babies to crawl or shuffle on their bottoms, they don’t even demonstrate the technique, yet most babies end up moving around in such a way,

        It isn’t just me it bothers, Maggie. It’s been a major debate in biology, psychology and linguistics at least since Darwin. And the nature/nurture debate itself goes back at least to the Ancient Greeks (‘does not nature itself teach…?’ etc). What the research shows is that the skills people usually think of as instinctive are kick-started by reflexes. Reflexes definitely are instinctive and hard-wired. But the development of skills is shaped by the affordances and constraints inherent in the body and in the environment. Crawling is something you find you can do if you lie on your stomach and wave your arms and legs around enough. Bottom-shuffling is something you find you can do once you learn to sit upright. Our body structure makes walking a more efficient form of mobility than crawling and allows the use of hands, so sooner or later, most children will do that. They also see most other people doing it. You don’t always need to have things demonstrated to discover that you can do them, but demonstrations can help.

        Babies begin to make sense of the speech they hear around them because their brains are hard-wired to categorise sounds. Speech sounds are a subset of sounds that piggy-back on that capability. But spoken language, like mobility, is shaped by the constraints and affordances of the body and the environment. Babies all over the world make the same set of speech sounds initially – ma-ma, ba-ba etc. That’s because they are the easiest sounds to articulate and the ones that babies are most likely to produce by chance. The ma- or ba- is detected by the baby’s brain which then matches it with the set of speech sounds it’s already acquired and hey presto! you’ve got self-regulated feedback.

        but you could expose children to the written word for ever and ever and without instruction they are most unlikely to learn to read it.

        All learning requires information. It doesn’t necessarily require instruction. Babies can acquire a great deal of information about mobility and speaking just by watching, listening and trial-and-error. But if they don’t have enough information they don’t develop the skills. If you kept a baby strapped in a cot, it wouldn’t ‘naturally’ learn to walk. And deaf babies don’t ‘naturally’ learn to speak. The information required to learn to read is the match between the squiggles on the page and the speech patterns they represent. All you need is enough information and rehearsal to be able to decipher the squiggles.

        I vividly remember learning to read. We learned letter-sound correspondences, briefly, and then I was given a book, Tip. The teacher read the words first and then I copied her. ‘Tip’. That was easy. ‘Here is Tip’. That was OK, although ‘here’ was a bit weird. ‘Come, Tip, come’. What??? ‘Come’ took a few attempts. Then I was away. All I needed was for an adult to read the unfamiliar words and a few rehearsals, and I rapidly built up a sight vocabulary. I also rapidly figured out the way English spelling works, so it wasn’t long before I could decode new words myself. My daughter learned in exactly the same way. Francis Spufford describes precisely the same process in The Child that Books Built; he learned to read from The Hobbit.

        My son learned to read the same way, although it took him longer because he couldn’t discriminate between some speech sounds and because his eye movements are a law unto themselves. For a long time, he used to have to ask me what unfamiliar words were. Now he doesn’t need to, even though he’s reading at adult level, because he can decode for himself. I only get asked when he doesn’t know what words mean.

In order to learn to read like this, all children will obviously need information about what the squiggles represent. They will then need to practise deciphering the squiggles until that process is automated. Some children will require so much practice, they will probably give up or get ‘left behind’, so enhancing their phonological awareness and making the alphabetic code explicit makes a lot of sense. But for many children it isn’t a necessary pre-requisite for reading.

        By Sue’s reasoning it appears that one could say that *anything at all* which a human does is ‘natural’ because all human activities exploit human physical and neurological capabilities.

        So if all human activities exploit human physical and neurological capabilities, why make a distinction between ‘natural’ and ‘unnatural’ learning at all? How does it help? As someone observed on Twitter, it’s learning that’s natural. How much instruction and how much rehearsal people need to learn a particular skill varies depending on the skill and the person. That’s what makes teaching so interesting and rewarding – being able to tailor the instruction and rehearsal so that each child learns effectively.

        I feel that one’s stance on what is and isn’t ‘natural’ is a matter of opinion rather than fact and that a distinction between a skill which doesn’t need to be taught and one which does is a perfectly reasonable one.

        I agree that it’s perfectly reasonable to make a distinction between a skill which needs to be taught explicitly and one which doesn’t. But I don’t see how framing that distinction in terms of natural/unnatural helps. Nor do I see how the natural/unnatural distinction could be simply a matter of opinion, in the light of the considerable amount of research that’s looked at which behaviours are instinctive and which aren’t – from imprinting, to language to reading facial expressions. There might be disagreements in those areas, but those disagreements are based on a vast body of factual information.

And if people think it is simply a matter of opinion, they shouldn’t be using it to justify the amount or type of instruction that children require. The natural/unnatural distinction is one that’s been patiently (and not so patiently) explained to me again and again by SP advocates, and used by them to justify an emphasis on direct instruction and to dismiss progressive approaches in education.

So Sue. I’m a newbie to your blog and I haven’t gone back to read all your previous blogs, but what I’ve heard from you is a lot about what you oppose, and (I think) I understand what you are saying and at least in part I think I understand why you are saying it. What I haven’t heard is anything about what you advocate operationally about primary schooling or the best bet direction to take to improve the enterprise.

        One doesn’t have to be either logical or incremental to critique any person or any matter. But such critique doesn’t net logical incrementalism, or at least I’ve never seen it happen.

        Can you say a bit about how best to proceed, both with those kids who are and are not passing the Screening Check? And if you would alter the current UK statutory inclusion of “SP” and throw out the Screening Check, what would you replace them with?

      • Reply to Dick

        I think we’re all in agreement that reading is important. The literacy level of the community is important for our economic prosperity, and the literacy level of the individual can make a big difference to their quality of life.

        Obviously, the sooner a child learns to read the better, especially since they are expected to access much of the curriculum through written material. Many people involved with the education system make the implicit assumption that every child should be able to read fluently by the time they’re about 7. But what the data tell us is that not every child is likely to be able to read fluently by then. I think we need a three-pronged approach to this challenge.

        First, I think we need to de-stigmatise not being a good reader. You don’t need to be able to read fluently to learn stuff. You can learn vast amounts from the spoken word, images, drama, and these days, audio-visuals. Poor reading needn’t be an obstacle to learning and shouldn’t be a reason for ‘failure’.

        Secondly, we need to support children’s spoken language. Just immersing children in language, spoken and written, isn’t likely to result in making every child a good reader, but neither is SP alone. But the schools my children attended have had nothing like the same focus on speech that my own primary education had. Free reading, listening to stories, music, singing, drama, memorisation, recitation and writing are all very likely to address some of the reasons children are still struggling after SP and why some struggle with functional literacy. As far as I know English schools have never used SP AND immersion in language.

I think you are right that analysing the screening check in detail would give us more information about why a particular child was having difficulties; and in principle it should give us enough information to investigate properly and come up with an instruction programme that remediated the underlying problem. But it won’t, because most teachers won’t know how to analyse the data, and even if they do, the underlying causes are likely to be developmental ones that need specialist input – and good luck with that in the UK. Which brings me on to my third prong:

        Thirdly, teachers need to understand why SP is important and how it fits into the bigger picture. Just making them use it isn’t enough, because, as you and Maggie have pointed out, they aren’t using it properly and many actively disagree with it. On top of that, specialist advice is really difficult to access here. My son’s infant school got so fed up with not being able to get children to speech and language therapists, they have trained all their teaching staff in speech and language development. And they all use Makaton.

        Without all three factors in place, I can’t see how we are ever going to meet the challenge of making every child at least a competent reader. If teachers understood children’s speech and language development properly, SP wouldn’t need to be compulsory, nor would the screening check. This is just government trying to do literacy on the cheap – which has long been a guaranteed way of ensuring that more public money is wasted.

  3. Thanks for another interesting blog post, Sue. Dehaene has some stuff that might be relevant in his book ‘Reading in the Brain’.

    Dehaene’s take on the neuroscience of reading (as I understand it – very much a layperson’s understanding) proposes that writing takes the form it does because brain pathways and areas that evolved for more primitive purposes have been recycled to process written symbols. In brief, the primate brain has the capacity to respond to certain configurations of line, and contains areas in which these are processed when we view and scan our environment. We use the same areas when scanning writing, which bears similarities to line patterns seen in nature, and parts of the brain are ‘recycled’ to store letter and word forms. Perhaps one implication of this is that we are ‘naturally’ disposed to being able to read – we have the equipment – but we have to teach that equipment to do the job of responding to writing by learning the significance of the letter and word forms. This would indicate that learning to read is a social activity – it has to be aided by others ‘in the know’ – but that we are naturally disposed to be able to do so (we have the capacity to see what our teachers are getting at).

    Dehaene describes dyslexia and concludes:
    ” In conclusion, we should perhaps question the very idea of a single cause for dyslexia. The problem that faces us is complex and cannot easily be reduced to a single well-defined cause. At the interface between nature and culture, our ability to read stems from a fortunate array of circumstances. Reading instruction capitalizes on the prior presence of efficient connections between visual and phonological processors. I therefore think it very likely that dyslexia arises from a joint deficit of vision and language. The weakness itself probably rests somewhere at the crossroads between invariant visual recognition and phonemic processing. As I will now go on to show, brain imaging supports the claim that the crux of the problem often lies at the interface between vision and speech, inside the web of connections found in the left temporal lobe.”
    pp. 242–3, ‘Reading in the Brain’, Stanislas Dehaene, 2009.

    It is also interesting that Dehaene describes the situation in which the dyslexic subject has intensive training in grapheme/phoneme correspondences such that s/he can decode nonwords correctly but, despite this success: “slow reading betrays them – some children require over 300 milliseconds per letter. This speed is comparable to that of adults with pure alexia due to lesions of the occipito-temporal letterbox area” (p239). As the studies involved nonwords, it would appear that there is more to dyslexia than a lack of phonics training, and that it can exist in transparent orthographies if reading is so slowed by laborious decoding that fluency is lost.

    David Share has looked into reading problems that are shared by readers of both shallow and opaque orthographies. In English many problems are attributed to the opacity of the orthography. However, reading problems also exist where the orthography is shallow. Pupils are able to pick up the alphabetic code of these languages quickly but may still experience problems. See his paper: http://www.edu.haifa.ac.il/personal/dshare/Share_Anglocentricities_2008.pdf
    which I reflected on here: https://community.tes.co.uk/reading_theory_and_practice/b/weblog/archive/2014/04/22/perils-and-eccentricities.aspx

    It might be useful to add to this material the observations made by Robert Port, who has found that speech is not held in memory in the form of strings of discrete phonemes. Pre-readers and illiterate adults have great difficulty extracting phonemes from the flow of speech; it becomes easy only once a person has learned to read, as a result of literacy training. So there may be a case for saying that phonemic awareness is a result of cultural input – it is not ‘natural’. But neither is it ‘unnatural’, surely, if the vast majority are able to do it, once shown.
    http://www.academia.edu/3206681/How_are_words_stored_in_memory_Beyond_phones_and_phonemes

  4. Well, you dodge my questions, Sue, but you do say what you advocate, which is “good enough.”

    First, I think we need to de-stigmatise not being a good reader
    Ah, yes. And just how will we effect the “de-stigmatisation”? That is, how much time will it take and how will we know when we’ve done it?

    It’s true that InfoTech is trending back to “Pictures” and “Speech”. Consider the “directions” that accompany the “Setup” of computers and printers. Usually today, they are all icons and symbols with little or no text. But “reading” the directions demands a lot of “brain spinning” that is actually more complicated than reading text.

    And “computers” can read to you, if you can’t read to yourself. But as yet, they can’t “communicate” with you, because they are just “barking the words.”

    For these reasons, for the foreseeable future the absence of reading capability will be an individual liability and a public stigma.

    Secondly, we need to support children’s spoken language.
    That, and pocket change will buy you a cup of coffee. The recognition is explicit in the “Simple View of Reading” that was appended to Sir Jim Rose’s Report, now going on nearly a decade ago.

    The thing is, the view is under-simplified for research purposes and over-simplified for instructional purposes. We do NOT have to do anything with children’s spoken language to begin teaching them how to handle the Alphabetic Code. If they can participate in everyday conversation, speaking in full sentences, they are ready, and “phonemic awareness” activities are an unnecessary use of instructional time.

    Moreover, children of school age acquire new vocabulary at a much faster rate without any formal vocabulary instruction than schools can possibly provide with formal vocabulary instruction. What they lack is technical/academic lexicons of domains/school subjects, which “language immersion” doesn’t impact.

    Re analysis of the Screening Check. Teachers don’t have to analyse the results. Specialists know how to do it; they just haven’t done it. The analyses won’t shed any light on developmental disabilities, but they will illuminate how to eliminate pseudo-disabilities.

    Thirdly, teachers need to understand why SP is important and how it fits into the bigger picture.
    Good luck with that one. You would first have to end the “reading wars” to make it happen. And again it isn’t necessary. It’s what teachers are “adding to SP” that is the rub. Were they to “do less” with the “bigger picture,” there would be “no problem.”

    Every teacher believes that s/he is doing the right thing in teaching kids how to read, and we know that at a national level in England most are doing “good enough.” But some schools and teachers aren’t. The Screening Check data show clearly that they are “out there,” but the analyses to date don’t show where they are. All I’m trying to say is, the EVIDENCE is there, allyagotta do is look at it.

    • Well, you dodge my questions, Sue, but you do say what you advocate, which is “good enough.”

      I thought I had answered them. Again:

      1. Can you say a bit about how best to proceed, both with those kids who are and are not passing the Screening Check?

      The kids who don’t pass the screening check need further investigation. Those who do need to do lots of language work.

      2. And if you would alter the current UK statutory inclusion of “SP” and throw out the Screening Check, what would you replace them with?

      I wouldn’t replace them with anything. Allowing government to dictate a single pedagogical approach that teachers should take not only rides roughshod over children’s individual differences, it undermines the point of having teachers and it’s ideologically risky.

      Does that clarify?

      First, I think we need to de-stigmatise not being a good reader
      Ah, yes. And just how will we effect the “de-stigmatisation”? That is, how much time will it take and how will we know when we’ve done it?

      It wouldn’t take long if we dismantled the educational assessment industry. What would you prefer? Children who get ‘left behind’, become frustrated and see themselves as failures because government has decided reading is the most appropriate way for them to learn, or children who know a lot, but might take longer to learn to read than the assessment industry thinks is normative?

      For these reasons, for the foreseeable future the absence of reading capability will be an individual liability and a public stigma.

      Most children don’t have ‘an absence of reading capability’. They just vary in their reading capability. I’m not saying we give up teaching them to read, I’m just saying we don’t make reading difficulty an obstacle to learning.

      Secondly, we need to support children’s spoken language.

      We do NOT have to do anything with children’s spoken language to begin teaching them how to handle the Alphabetic Code. If they can participate in everyday conversation, speaking in full sentences, they are ready, and “phonemic awareness” activities are an unnecessary use of instructional time.

      Those are big ‘if’s. Have you listened to any 5/6 year olds recently? Their speech is immature. Their pronunciation is approximate. Their vocab is limited. Their grammar is all over the place. It makes sense to explain the alphabetic code to them, but it’s pretty clear that SP inherently trains in phonemic awareness, so wouldn’t take up any additional time.

      Moreover, children of school age acquire new vocabulary at a much faster rate without any formal vocabulary instruction than schools can possibly provide with formal vocabulary instruction. What they lack is technical/academic lexicons of domains/school subjects, which “language immersion” doesn’t impact.

      That depends on what language you’re immersing them in. Extensive talking extends vocab and brings in what you call technical/academic lexicons informally and in context.

      Re analysis of the Screening Check. Teachers don’t have to analyse the results. Specialists know how to do it; they just haven’t done it. The analyses won’t shed any light on developmental disabilities, but they will illuminate how to eliminate pseudo-disabilities.

      So how is that supposed to work? The specialists work with schools, or do they publish their findings and teachers read and apply them? And what if the results show patterns that do shed light on developmental anomalies, such as persistent conflation of similar phonemes or graphemes?

      Thirdly, teachers need to understand why SP is important and how it fits into the bigger picture.
      The Screening Check data show clearly that they are “out there,” but the analyses to date don’t show where they are. All I’m trying to say is, the EVIDENCE is there, allyagotta do is look at it.

      There’s not much point looking at the data if they don’t tell us which teachers aren’t implementing SP properly. And I get the impression that a lot of teachers either don’t understand SP or are not convinced by it yet. Given the garbled accounts I’ve seen of it and the methodological issues around the research, that’s hardly surprising.

      • I think perhaps the most important question revolves around what implementing SP ‘properly’ looks like. Presumably the most proper implementation of SP is one that ensures the ultimate aim of skilled reading is well-served. As reading depends on a combination of skills it is essential that this single subskill does not replace skilled reading as the aim. It may be essential to reading that pupils can use the alphabetic code. It is not necessarily essential that they are taught the code intensively and exclusively, at a pace dictated by the presence of the PSC and the programmes used, with the possibility that individual pupils’ needs are neglected or that the other subskills that contribute to reading are pushed out. While there is evidence that SSP is effective for getting children decoding it has self-evident limitations. These limitations seem to be largely ignored or dismissed by the pro-SSP ‘camp’ in their anxiety to ensure an undiluted concentration on the method. They are right in believing that SSP is a valuable strategy. It’s the current implementation that is not proven to be ‘proper’.

  5. Comment on Sue’s comment of March 2

    1. Full agreement. It’s good to start on a note of full agreement. Agreement is possible a lot of the time, but typically people don’t keep at it and start yelling at each other or the topic changes—or something else happens.

    2. The full agreement didn’t last long.

    2a For better or worse, we have to live with the governments we have at the moment. You are on record in this reply and elsewhere as agreeing with the UK government’s commitment to teach all kids (or as many as possible) how to read by the end of KS 1. At some time in the future, it will become recognized that it would have been better to focus on the Alphabetic Code–the link between written and spoken language–but never mind. “Phonics” is deeply embedded in usage, and whatever it might have been titled, the Yr 1 Screening Check is a “good enough” measure of whether an individual has been taught/learned how to read per the Alphabetic Code.

    The results of the Screening Check to date are “promising and provocative.” Despite widespread opposition throughout EdLand and Higher EdLand, and clumsy, albeit well-intended, fumbling by the DfE, the highest possible score on the Screen is the modal score, and per our agreement on 1 above, “more analysis is needed.” That’s as good as it gets in EdLand–and anywhere else for that matter.

    Unless you have some implementable ideas on how to “fix” governments, you’re “crying in your beer”–or whatever we’re drinking.

    2b. Do you have any ideas on how to eliminate the testing industry? Actually, the Screening Check flies in the face of the testing industry. The industry likes long achievement tests that force all examinees into the bell-shaped distribution that you very rightly object to. The “Check” that you would chuck is the best bet for circumventing the industry.

    2c. Absence of reading capability. Full agreement.

    2d. Child language. You miss the point here, but you are not alone. You are focusing on the children’s language lacunae. Allyagotta do is look at the minimal prerequisite for beginning formal instruction in how to use the Alphabetic Code. It would be foolhardy instruction to try to “explain the Alphabetic Code” to little kids. And the aim is not to “teach the Code.” The point is to teach kids HOW TO USE the Code in interacting with text so they can “read speech” in the same way they “speak prose.”

    2e. Not quite. “Extensive talking” is a good thing at all ages, but general conversation does not touch the technical lexicon I was referencing. By definition, Tech Lex is outside the boundaries of an individual’s Gen Lex. Tech Lex enters the person’s Gen Lex only after it has been taught/learned.

    2f. How does the analysis work? Very simply. All schools think they are doing “the right thing”–otherwise they would be doing “something different.” The Screening Check results to date indicate they SHOULD be doing something different. They are “out there” and the data to identify where they are have largely been collected, but no one as yet has compiled that information.

    The results to date also indicate that there are schools that are doing “100%,” but they haven’t been identified either. You–and a lot of other people–say, Chuck the Check. Better to say, Check the Check while checking the instruction. Since schools and teachers say they have already identified “the kids’ needs,” something is haywire. It could be the kids’ “developmental differences.” That’s not the direction the data to date point, but transparency of the “analysis” will illuminate the “cause.”

    2g. The point of the analysis. The analysis will do exactly what you want to accomplish. However, it won’t focus on “proper implementation of SSP.” The analysis will illuminate the next “best bet” action to accomplish the intent of teaching as many children as possible how to handle the Alphabetic Code–the link between written and spoken English.

    Back to full agreement with what you are saying.

    • 1. good to start on a note of full agreement. Agreement is possible a lot of the time, but typically people don’t keep at it and start yelling at each other or the topic changes—or something else happens.
      2. The full agreement didn’t last long.

      It was full agreement on one point.

      2a For better or worse, we have to live with the governments we have at the moment. You are on record in this reply and elsewhere as agreeing with the UK government’s commitment to teach all kids (or as many as possible) how to read by the end of KS 1.

      I’m not on record anywhere as saying that. I think SP is a good approach for starting teaching children to read. I am strongly opposed to government being actively involved in education except in the broadest sense of making resources available for the population to be educated and ensuring that the infrastructure is in place for that to happen. I certainly don’t support it mandating particular pedagogical approaches. Actively making education a political football is a very risky strategy.

      Unless you have some implementable ideas on how to “fix” governments, you’re “crying in your beer”–or whatever we’re drinking.

      There are some things that aren’t ‘fixable’. Sometimes the best you can do is limit the damage.

      2b. Do you have any ideas on how to eliminate the testing industry? Actually, the Screening Check flies in the face of the testing industry. The industry likes long achievement tests that force all examinees into the bell-shaped distribution that you very rightly object to.

      What the assessment industry in the UK likes is a curve that’s increasingly skewed to the right year on year. And I don’t object to the bell curve per se – how could I, when it occurs all over the place in large populations? I’m just aware that it has some unsavoury associations.

      The “Check” that you would chuck is the best bet for circumventing the industry.

      How?

      The assessment industry I referred to is the one that emerged in the wake of the Education Reform Act 1988, when the responsibility for education in England essentially shifted from local to central government. If we set it up, we can dismantle it again. The pendulum is currently swinging back in favour of local responsibility – now that successive governments have demonstrated that one-size-fits-all central control isn’t any better.

      2d. Child language. You miss the point here, but you are not alone. You are focusing on the children’s language lacunae. Allyagotta do is look at the minimal prerequisite for beginning formal instruction in how to use the Alphabetic Code. It would be foolhardy instruction to try to “explain the Alphabetic Code” to little kids. And the aim is not to “teach the Code.” The point is to teach kids HOW TO USE the Code in interacting with text so they can “read speech” in the same way they “speak prose.”

      I don’t see how you can teach kids how to use the code while avoiding explaining how it works, or avoiding training them to detect phonemes and graphemes. How could you possibly do one without the other?

      2e. “Extensive talking” is a good thing at all ages, but general conversation does not touch the technical lexicon I was referencing. By definition, Tech Lex is outside the boundaries of an individual’s Gen Lex. Tech Lex enters the person’s Gen Lex only after it has been taught/learned.

      But surely the distinction is one of ‘how terms are used’ rather than Gen Lex and Tech Lex being two distinct categories. There’s no reason why Tech Lex can’t be acquired implicitly – that would depend on what the general conversations were about. Or are we back to the natural/unnatural dichotomy again?

      2f. How does the analysis work?

      I can see that the Phonics Check could be a useful source of data. I just can’t see why we need a mandatory phonics check to get those data.

      • 2a. Sorry I misunderstood you. What do you think is a better instructional approach? Not governmental – instructional.

        2b. The thing is, to accomplish the intent of teaching all kids to read–or to learn anything else–the scores will all pile up at the high end. This is something the test industry can’t tolerate because there is no basis for comparative norms.

        The “normal distribution” is actually “abnormal” in instruction. It is observed only when the determinants of the phenomenon are random. Effective instruction is anything but random.

        The shape of a distribution has nothing to do with population size. It’s all about the replicability of the phenomenon. Only if the occurrence of the phenomenon is randomly determined–multiple and complex–will a bell-shaped distribution be observed.
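        That claim is, in effect, the central limit theorem in action. A minimal sketch, where the scores and the number of contributing factors are purely illustrative assumptions rather than real assessment data:

```python
import random
import statistics

random.seed(1)

# Each simulated "score" is the sum of 30 small, independent, random
# influences -- the kind of multiple, complex, random determination
# described above. The central limit theorem says such sums are
# approximately normally distributed, whatever the shape of the
# individual influences.
scores = [sum(random.uniform(-1, 1) for _ in range(30)) for _ in range(10_000)]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)

# For a bell-shaped (normal) distribution, about 68% of values fall
# within one standard deviation of the mean.
within_one_sd = sum(abs(s - mean) <= sd for s in scores) / len(scores)
print(round(within_one_sd, 2))  # close to 0.68
```

        Replace those random influences with one dominant, systematic determinant and the bell shape disappears, which is the point being made about effective instruction.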

        2d. Take a look at the Promethean Trust’s “Dancing Bears” and/or Piper Books’ BRI, ARI and MRI for two examples of different ways to go about it. They each have websites with examples that you can google for.

        2e. Nope. Tech Lex and Gen Lex are distinct categories. The distinctions reference individuals, but they can also be compiled by academic discipline/school subject. Each discipline/domain has its own Technical Lexicon. It’s what makes the languages of school subjects differ from one another. All of the disciplines use a common General Lexicon of 10,000ish words. It’s the words that are common only in the discipline under consideration that distinguish it from other disciplines.

        2f. OK. We can end on a note of agreement with your first sentence! The reason we need a full Yr 1 population check, is to identify kids who need further instruction in how to handle the Alphabetic Code, so that they can get it in Yr 2. The needed instruction isn’t currently happening reliably. The Check is a valuable probe for investigating why and where it isn’t happening.

      • Reply to Dick Schutz:

        2b. The thing is, to accomplish the intent of teaching all kids to read–or to learn anything else–the scores will all pile up at the high end. This is something the test industry can’t tolerate because there is no basis for comparative norms.
        The “normal distribution” is actually “abnormal” in instruction. It is observed only when the determinants of the phenomenon are random. Effective instruction is anything but random.

        I understand the normal distribution. I also understand how it can become skewed by a particular factor. But what governments (and the education test industry) seem to want is maximum skewing due to optimum instruction. They don’t want a normal distribution.

        The shape of a distribution has nothing to do with population size.

        Oh, it does. The smaller the population, the less likely factors are to be distributed randomly.

        It’s all about the replicability of the phenomenon. Only if the occurrence of the phenomenon is randomly determined–multiple and complex–will a bell-shaped distribution be observed.

        And that’s my point. If you were to assess ‘reading ability’ in the adult population or in children of the same age, and used as your criterion an age-related norm, you’d get a bell-shaped curve. However, if your criterion is an absolute standard of reading, the curve would be skewed depending on whether the ‘reading ability’ criterion was easier or harder than the age-related norm, yes? The curve would also start skewing to the right if you were successful in improving that absolute reading ability in the population you were assessing.

        So UK Key Stage 2 reading results form a distribution that’s skewed to the right. They must do because around 80% of children score above the ‘expected’ standard – which was once the age-related norm. Interestingly, I couldn’t find a graph that showed the whole distribution of scores; results seem to be presented graphically only for the proportion of children above that magic level 4.

        When ‘functional literacy’ has been assessed in large samples of the adult population of the US, the result has been a curve that’s roughly bell-shaped. However, this curve is skewed to the left, which is what Diane McGuinness was complaining about (https://nces.ed.gov/pubs93/93275.pdf, p.17). The curve is bell-shaped because ‘functional literacy’ is affected by multiple factors that are, to all intents and purposes, ‘random’. And as I said in my blog, the NCES report goes into some detail about what those random factors are.
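        The contrast between these curves can be sketched with a toy simulation: when instruction lifts most test-takers to a test’s ceiling, scores pile up at the top and the remaining tail of lower scorers pulls the mean below the median. All the numbers here are illustrative assumptions, not real NCES or Key Stage data:

```python
import random
import statistics

random.seed(42)

# Hypothetical test scored out of 10. Underlying ability is roughly
# normal, and effective instruction adds a large, systematic boost,
# so most pupils hit the test's ceiling.
instructed = [min(10.0, random.gauss(5, 1.5) + 4) for _ in range(10_000)]

# The pile-up at the ceiling drags the mean below the median:
# the distribution is no longer bell-shaped.
print(statistics.median(instructed) > statistics.mean(instructed))
```

        Whether you see a bell curve or a pile-up therefore depends on whether the criterion is an age-related norm or an absolute standard, which is the distinction being drawn above.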

        2e. Nope. Tech Lex and Gen Lex are distinct categories. The distinctions reference individuals, but they can also be compiled by academic discipline/school subject. Each discipline/domain has its own Technical Lexicon. It’s what makes the languages of school subjects differ from one another. All of the disciplines use a common General Lexicon of 10,000ish words. It’s the words that are common only in the discipline under consideration that distinguish it from other disciplines.

        Yes I get that. What I’m saying is that there is no reason why teachers can’t use Tech Lex words in general conversation. And those words don’t always require explanation; their meaning can be made clear through using them. I guess I’m thinking of this from the perspective of UK primary education where teachers teach all subjects, and Early Years where you often have to show rather than tell. And categories are often distinct only at their cores, not at their boundaries.

        2f. OK. We can end on a note of agreement with your first sentence! The reason we need a full Yr 1 population check, is to identify kids who need further instruction in how to handle the Alphabetic Code, so that they can get it in Yr 2. The needed instruction isn’t currently happening reliably. The Check is a valuable probe for investigating why and where it isn’t happening.

        But we don’t need a full Yr 1 population check to do that. Teachers should be perfectly capable of doing it themselves. And if we need a national check because they’re not capable of doing so, why is it being left up to schools to identify the type of further instruction needed for the Y2 children who don’t pass the check?

        I get the strong impression that the UK government presents data in a way that shows education policy in the best light (of course), so the focus is on the kids for whom the policy works. But all the complaints are about the kids for whom the policies don’t work. Despite this, there’s no systematic, exhaustive analysis going on into why the policies don’t work, government just assumes it’s because of inadequate instruction.

  6. Comment on nemocracy’s March 2 comment.

    Putting the onus on “implementation” isn’t productive and it’s unfair to schools and teachers. Ultimately, the “failure” could be traced back to the Big Bang, but the tracing would still leave us clueless re what to do to “intervene” with kids who have been identified (with or without the Screening Check) as having “reading problems”–or however you want to say it.

    Schools and teachers are caught in the Reading Wars and some kids are inadvertent casualties of the war. Teachers are “doing what they think is right.” So are kids.

    “Mistakes have been made.” But schools and teachers are foot-soldier implementers. The valuable function of Science and Technology is that it provides a mechanism for self-correcting and moving on. But it’s the cumulative science that is corrected and the “how to” that advances. If we throttle the self corrective mechanism and attribute results to “implementation failure,” nothing is learned and those least responsible for the failure are unfairly abused in the process.

    • I think you may have misunderstood me, Dick. I was using the word ‘implementation’ to refer to the way SSP is taught in schools where schools abide strictly by the prescription coming from government: first, fast and only SSP, as per approved programmes and principles, and preparation of children to pass the PSC. In effect, the government is doing the implementing.

      If the changes implemented by the government – designed to eradicate illiteracy – do not succeed, then it is reasonable, not unfair, to look at the structure of the implementation and make further changes based on research as necessary. If blame is cast it should be cast on government – from which the implementation derives – not the teachers (as you say, the foot soldiers; as I might say, the de-skilled professionals). Such research would be productive, aiming at improved ways of working towards the eradication of illiteracy, and would therefore benefit children with reading problems (the whole point of the enterprise). And science and technology would obviously be part of the equation.

      Schools and teachers don’t seem much interested in the reading wars. As the NFER survey shows, they simply want to support children in reading: the reading wars are waged where pro-SSP groupings argue strongly for the exclusive use of SSP first and fast, manufacturing enemies from anyone who says, “Hold on a minute… “, and expresses reservations about evidence and practice. Some go as far, in their efforts to demonise the questioners, or perhaps to stoke the dying embers, as to call them ‘phonics denialists’.

      Where the government policy has been successfully implemented it may well be that teachers are, specifically, NOT doing what they think is right, and more and more teachers will be in this group the more successful implementation becomes. They will be doing what is prescribed.

      • Paragraph 1. Actually, all we really know is that whatever the schools where all the kids “pass” the Check are doing is “good enough” that the kids pass. There are no doubt variations in what these schools are doing. And there are indeed schools and teachers whose kids didn’t pass the Check who believe they are doing what you say they should be doing.

        Further analysis of the available Screening Check data can untangle what is going on. That’s the point I’ve been trying to make.

        Actually, the facts on the ground indicate that what the UK government has done so far has been “good enough.” In retrospect, “mistakes were made,” but they haven’t consequentially affected progress in the accomplishment of the intent to teach all children how to handle the Alphabetic Code by the end of KS 1.

        The US government has abandoned its previous comparable intent as “unreasonable,” so in comparison the UK government looks very good–for what that’s worth.

        Indeed there may be some wrong-headed teachers, as I could well be myself. The methodology of Science and Technology is a tried and true mechanism for sorting out such matters. All I’ve been trying to say is, “Let’s get on with it.”

  7. It’s when we free ourselves of the delusion that the check necessarily encourages schools to teach SSP in ways that lead to skilled reading that we will be ready to “get on with it”, Dick, by which I mean decide what to do next to best serve the interests of children.

    • I hear what you are saying. Everyone is for what “best serves the interest of the children.” The question is “what’s best?” – so the Reading War goes on, and on, certainly not serving the best interests of anyone. Your “getting on with it” accomplishes nothing more than perpetuating the War.

      Actually, the facts on the ground are that the Check IS encouraging teachers to teach children how to handle the Alphabetic Code–the link between written and spoken English. Further analysis of the data can increase the reliability of this accomplishment. Are you opposed to further analysis of the data?

      • So you are saying that in order to call a truce in the reading wars we abandon looking for what’s best for children and go along with the situation as it stands – in fact promote the continuation and intensification of the government-prescribed SSP teaching? That sounds more like surrender than a truce, except that the reading war is a fabrication – there are not 2 opposing sides battling each other for supremacy and for undiluted control of the reading curriculum. SSP proponents like to frame it like this and call any approach that isn’t their favoured SSP approach Whole Language, or Whole Language in disguise, but the views expressed on this blog and in other contexts which do not fully support government policy are not trying to substitute Whole Language for SSP; they are questioning aspects of SSP and aspects of its implementation and the supporting philosophy.

        In fact the ‘denialists’ are already ‘getting on’ with the task in hand. Seeing the flaws in the check and looking at the research so far into its effect on teaching reading, they are questioning its usefulness and drawing attention to its limitations. Is teaching the alphabetic code in this way the right way? Are good check results followed by good reading results? The data will be used to help answer these questions. Further analysis of the data is fine as long as it asks the right questions. It’s a waste of time to interrogate the data to find out which SSP programme performs the best if the question of whether using a programme is wise is not answered first.

  8. Comment on Sue’s March 4 comment

    2b. Well, Sue, your statistical facts are tangled. For starters, what you term “skewed to the right” is actually “skewed to the left” – and your overall interpretation is “off.” No problem. Statistical terminology and logic are remote and tangential to the present colloquy, and trying to untangle statistics instruction is as complicated as trying to untangle reading instruction. I’m dropping further comment about 2b.
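As an aside, the tail-versus-bulge terminology can be checked numerically: a distribution is ‘skewed to the left’ (negative skew) when its long tail points left, even though the bulge sits on the right. A minimal sketch with made-up scores (every number here is hypothetical):

```python
import statistics

# Hypothetical check scores: most bunch near the top of the scale,
# a minority trail off towards zero -- a long LEFT tail.
scores = [40, 40, 39, 39, 38, 38, 37, 36, 35, 30, 22, 10, 3, 0]

mean = statistics.fmean(scores)
median = statistics.median(scores)
sd = statistics.stdev(scores)

# Rough Fisher-Pearson moment coefficient of skewness.
n = len(scores)
g1 = sum((x - mean) ** 3 for x in scores) / n / sd ** 3

# g1 < 0 and median > mean: the tail, not the bulge, names the skew.
print(f"mean={mean:.1f}, median={median}, skewness={g1:.2f}")
```

A negative coefficient here corresponds to ‘skewed to the left’ in the standard terminology.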

    2e. What I’m saying is that there is no reason why teachers can’t use Tech Lex words in general conversation.
    Actually, teachers do use Tech Lex words in their instruction. What they don’t do is recognize that the Tech Lex they are using is not within the Tech Lex of at least some of the students they are teaching.

    This holds for Primary teachers and for Reception(/Kindergarten) teachers. SWRL research I was associated with stumbled into this realization when Reception teachers kept telling us, “Some of our children just aren’t ready for reading instruction.” When asked why not, they said, “They don’t understand even the simplest things I’m telling them.” “OK,” said we, “We can deal with that.”

    We hadn’t thought of it, but there are Instructional Concepts that all Reception teachers use – terms like shapes (circle), colors (purple), positions (first), and so on. These are easily taught if they aren’t in the kids’ Tech Lex. So we developed the instruction so that teachers could, in two to three weeks at most, “fix” the readiness problem.

    From there, we realized that the same consideration applied to each KS 1 and 2 school subject and so compiled a Tech Lex for Elementary Schooling. But that’s a whole nother story.

    2f. Teachers should be perfectly capable of doing it themselves.
    Ah, yes. They indeed are capable. The fact that home-schooler parents with only secondary school credentials are doing it proves that it’s not in the teachers (and it’s not in the kids); it’s in the instruction.

    When teachers say they “already have the information” they are largely right. The kids largely “have the information.” Yet the facts on the ground are that whatever some schools and teachers are doing isn’t reliably teaching a large number of kids how to handle the Alphabetic Code. We could argue why this is (Answer: it’s everybody’s fault and no one is responsible), or whether it’s enough (Answer: No it’s not. It’s a necessity, but not a sufficiency. Instruction, and learning, and life go on) until hell freezes over and after that. OR we could just look at data already collected to resolve the confusion.

    It’s anomalous for me, from the US, to be praising the good work of present and past UK governments while y’all are banging your heads, but the facts on the ground merit praise. What the future holds is anyone’s guess.

    • 2b. Well, Sue, your statistical facts are tangled. For starters, what you term “skewed to the right” is actually “skewed to the left” – and your overall interpretation is “off.” No problem. Statistical terminology and logic are remote and tangential to the present colloquy, and trying to untangle statistics instruction is as complicated as trying to untangle reading instruction. I’m dropping further comment about 2b.

      You’re absolutely correct, I’ve been talking about tails whilst thinking about bulges. That still doesn’t affect my main point. It could be pure coincidence that reading difficulties in the English-speaking population have remained stubbornly at 17% for decades, or that 15% of Year 2 teachers have failed to instruct children properly in the alphabetic code. Those proportions could be completely unrelated to the fact that 16% is the proportion of people you’d expect to have reading difficulties in a large population. But it could also indicate that a whole bunch of random variables are involved that one-size-fits-all instruction alone isn’t likely to address. The data about the functionally illiterate adults in the NCES study suggests the latter.
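For reference, the ~16% figure mentioned above is the proportion of a normal distribution lying more than one standard deviation below the mean; it can be computed from the standard normal CDF (a sketch, not tied to any particular dataset):

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Proportion expected to fall more than 1 SD below the mean.
below_minus_1sd = normal_cdf(-1.0)   # ~0.159, i.e. ~16%
# The remainder: above the mean, or within 1 SD below it.
rest = 1 - below_minus_1sd           # ~0.841, i.e. ~84%

print(f"below -1 SD: {below_minus_1sd:.1%}, remainder: {rest:.1%}")
```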

      It’s quite possible that my overall interpretation is ‘off’. But I’d be interested to know how.

      2e. What I’m saying is that there is no reason why teachers can’t use Tech Lex words in general conversation.

      Actually, teachers do use Tech Lex words in their instruction. What they don’t do is recognize that the Tech Lex they are using is not within the Tech Lex of at least some of the students they are teaching.

      We hadn’t thought of it, but there are Instructional Concepts that all Reception teachers use – terms like shapes (circle), colors (purple), positions (first), and so on. These are easily taught if they aren’t in the kids’ Tech Lex. So we developed the instruction so that teachers could, in two to three weeks at most, “fix” the readiness problem.

      Hang on, you told me that Tech Lex and Gen Lex are distinct categories and that Tech Lex is determined by the words that make disciplines distinct – that aren’t the 10k words used across disciplines. Now you’re telling me Tech Lex can refer to words that are unfamiliar to particular small children.

      Although I’m willing to believe that there are Reception teachers who assume that all children will be familiar with the terminology of shape and colour and position, I’ve never encountered any. The point I was making is that those concepts do not have to be taught explicitly; they can be taught implicitly through usage. After all, that’s how most children learn about shape, colour and position before they arrive at school/nursery/kindergarten. Doesn’t mean it’s the best way to teach them – just that people do learn stuff implicitly.

      2f. Not at all sure what you meant by your last point.

  9. Comment on nemocracy’s March 4 comment.

    I hear what you are saying, but I’m trying to get past the “tis/taint.” The best way to do that is to look at the data.

    “The data will be used to help answer these questions.”

    Hey, I think we may be agreeing again!

    Further examination of the data already collected will answer all the questions you rightly raise–or fully illuminate operational best-next-steps. That’s as good as Science and Technology methodology gets.

  10. 2b It’s quite possible that my overall interpretation is ‘off’. But I’d be interested to know how.
    Well, it’s off in all sorts of ways. Statistical distributions depict effects, not causes. There are all sorts of distributions–bimodal (yes-no), truncated (cut off). You have to be specific about what effect you are considering.

    In the present instance we are considering the effect of teaching children how to handle the Alphabetic Code – the link between written and spoken English – as measured by the Screening Check. The shape of the distribution depicts the “facts on the ground.” What the distribution depicts is a highly skewed distribution. This is good news instructionally, because it tells us that the intent is largely being met at a national level. Since the “Treatment” is instruction, the “wild guess” possible causes that you and others keep bringing up can be ruled out. The replicability of the distribution over calendar years (with national gains) further confirms the conclusion: it’s the instruction.

    (The reason that clinging to causes-that-can-be-dismissed is counter-productive is that it thwarts operational action to further clarify how to accomplish the intent.)

    However, the long tail of the distribution that extends all the way down to zero (with a bump up at zero) tells us that there is still a lot of “uncontrolled variance.” The possibility that “it’s in the kids, or their parents” is ruled out (except for segments of “Special Needs” and the possibility of SES) by cross-checking these categorizations at the LEA level. The LEA level again further supports the conclusion: “It’s in the instruction.”

    What we do not yet know is what is going on at the school and class level of analysis. The data have largely been collected; they just haven’t been analyzed. All I’ve been trying to say is: do the analysis.

    To continue to argue points that have been resolved by the EVIDENCE earlier in the Natural Experiment is not in the best interests of anyone other than the disputants.

    2e. First paragraph. Your current understanding of Gen Lex and Tech Lex is correct. The substance differs both across disciplines/school subjects (math and music) and with expertise in the discipline (primary school math and secondary school math).

    Second paragraph. Certainly schooling Tech Lex can be and is learned implicitly. (As soon as an individual has been taught how to handle the Alphabetic Code, it can be self-taught.) And certainly teachers recognize individual differences in students (usually to a fault, but that’s a whole nother story).

    The point is, focusing on the “minimal child prerequisites” for initiating reading instruction, the prerequisites can be reduced to “speaking in whole sentences in everyday conversation” and a couple of weeks of Reception instruction. All of the checklists and “accountability” fal-de-ral of Early Years, and the Phonemic Awareness phase of SSP, can be discarded. That’s the EVIDENCE, but who is looking?

    2f. Which paragraph(s) need clarification?

    • 2b It’s quite possible that my overall interpretation is ‘off’. But I’d be interested to know how.

      Well, it’s off in all sorts of ways. Statistical distributions depict effects, not causes. There are all sorts of distributions – bimodal (yes-no), truncated (cut off). You have to be specific about what effect you are considering.

      I’m aware that there are different kinds of distributions, but ‘reading ability’ is a normal distribution. And what I am saying is that that kind of distribution is an effect caused by the random distribution of multiple factors.
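The claim that a normal distribution is the effect of many random factors combining is the central limit theorem in action; a toy simulation (the twelve ‘factors’ and their uniform distribution are purely illustrative):

```python
import random

random.seed(1)

# Each simulated 'reading ability' is the sum of 12 independent
# random factors, each uniform on [0, 1]. The sum has mean 6 and
# SD 1, and by the central limit theorem is approximately normal.
def ability() -> float:
    return sum(random.random() for _ in range(12))

samples = [ability() for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

print(f"mean ~ {mean:.2f}, sd ~ {var ** 0.5:.2f}")
```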

      
      In the present instance we are considering the effect of teaching children how to handle the Alphabetic Code – the link between written and spoken English – as measured by the Screening Check. The shape of the distribution depicts the “facts on the ground.” What the distribution depicts is a highly skewed distribution. This is good news instructionally, because it tells us that the intent is largely being met at a national level.

      Indeed. And what I’m drawing attention to is the ‘largely’ aspect. The ‘largely’ is remarkably close to 84%, which, in a normal distribution, is the proportion of the population with a ‘reading ability’ above the mean plus those within 1 SD below it. To me, that figure looks too close to be a coincidence.

      Since the “Treatment” is instruction, the “wild guess” possible causes that you and others keep bringing up can be ruled out. The replicability of the distribution over calendar years (with national gains) further confirms the conclusion: it’s the instruction.

      But the ‘treatment’ doesn’t seem to be making much difference to the proportion of the population below the required reading ability/functional literacy level/whatever you like to call it. As for the ‘wild guess’ probable causes – they’re not ‘wild guesses’ at all. The NCES adult literacy data listed characteristics of the functionally illiterate population that would have impacted on functional literacy. And we know that impaired auditory processing, visual processing, speech/language skills and working memory impact on reading ability.

      
      (The reason that clinging to causes-that-can-be-dismissed is counter-productive is that it thwarts operational action to further clarify how to accomplish the intent.)

      However, the long tail of the distribution that extends all the way down to zero (with a bump up at zero) tells us that there is still a lot of “uncontrolled variance.” The possibility that “it’s in the kids, or their parents” is ruled out (except for segments of “Special Needs” and the possibility of SES) by cross-checking these categorizations at the LEA level. The LEA level again further supports the conclusion: “It’s in the instruction.”

      How?

      

      The point is, focusing on the “minimal child prerequisites” for initiating reading instruction, the prerequisites can be reduced to “speaking in whole sentences in everyday conversation” and a couple of weeks of Reception instruction. All of the checklists and “accountability” fal-de-ral of Early Years, and the Phonemic Awareness phase of SSP, can be discarded. That’s the EVIDENCE, but who is looking?

      But you haven’t come up with any evidence, apart from a phonics check that tells us ‘the intent is largely being met’. And still no evidence of the impact on functional literacy at 16. Again, I’m not saying SP doesn’t improve adult functional literacy, just that I haven’t found any evidence that it does.

  11. It’s true that conventional standardized reading tests do purport to measure the reified abstraction “reading ability.” And it’s true that the results of such tests form a normal distribution. But this is not the “on the ground” matter at hand. What you or I would like the matter to be, or think it should be, is a whole nother story. The story would be a matter of opinion rather than a matter of grounded statistics.

    A distribution in which the modal score is the highest possible score on the test is unprecedented in achievement testing and is far removed from a “normal distribution.” “Normal distributions” are symmetrical; they do not have long tails. If you don’t see the long tail on the Screening Check distributions, you oughta take another look.

    The Yr 1 Screening Check does not purport to measure functional literacy at 16. The Check is a psychometrically sound measure “good enough” to identify children who need further instruction in how to handle the Alphabetic Code – the link between written and spoken English. Children – or anyone else – who have this capacity can read any text with comprehension equal to what it would be were the communication spoken.

    I’ve said that we could learn a lot by administering the Check to Teens who lack “functional literacy.” That inquiry would be the route to take if anyone were concerned with testing the logic underlying the Screening Check. By the time the present cohort of primary school children reach age 16, “functional literacy” will have a whole nother meaning. But whatever that meaning turns out to be, children who have been taught how to handle the Alphabetic Code by the end of KS 1 will have a leg up.

    • It’s true that conventional standardized reading tests do purport to measure the reified abstraction “reading ability.” And it’s true that the results of such tests form a normal distribution. But this is not the “on the ground” matter at hand.

      It’s precisely the ‘on the ground’ matter at hand. I’ve come across many people who believe in the reified abstraction ‘dyslexia’ – as in whether someone ‘has dyslexia’, or not. It’s possible that there are also people out there who think ‘reading ability’ is a single thing that you either have or haven’t got, but I’ve never met any. Most of us are so familiar with reading that we are well aware that a standardized reading test shows us what we know already, that there’s a wide variation in people’s ability to read. And that the variation can be due to a number of causes.

      What you or I would like the matter to be, or think it should be is a whole nother story. The story would be a matter of opinion rather than a matter of grounded statistics.

      We know that if you’ve never seen the printed word, you won’t be able to read. Or if you’re unaware of the letter-sound correspondences. Or that you are unlikely to become a good reader if you’ve got no books at home, don’t do any free reading at school and are totally preoccupied by things other than reading. Or that you might struggle if your speech is a bit wonky or your vocabulary is limited. Those are not matters of opinion; they are matters of fact. Most of those issues can be addressed by appropriate instruction, I agree, but that doesn’t alter the fact that people’s reading ability varies considerably for a variety of reasons.

      A distribution in which the modal score is the highest possible score on the test is unprecedented in achievement testing and is far removed from a “normal distribution.” “Normal distributions” are symmetrical; they do not have long tails. If you don’t see the long tail on the Screening Check distributions, you oughta take another look.

      Dick, I have referred to the screening check distribution in my post. I linked to Dorothy Bishop’s comments about the dips just before and after the 32-item ‘pass’ score. I mentioned the peak at the far end of the tail representing the almost 6000 children who scored 0. I pointed out that the distribution could indicate two populations; children who ‘get’ phonics and those who don’t. Or children who have been taught really well, or really badly.

      Whether the modal score is the highest possible score on a test is determined by what the test is testing. If the test was one of colour perception or shape recognition, most 6/7 year-olds would do well on that too. In a class of children who had thoroughly rote-learned multiplication tables or spellings, you’d get a similar distribution if you tested them, but most teachers don’t bother to tabulate the data and turn them into a graph and go “the modal score is the highest possible score, that’s unprecedented in achievement testing!” because they don’t need to and it isn’t. Not in their achievement tests, anyway.

      But ceiling and floor effects are by no means unprecedented in educational testing, as I’m sure you’re aware. The reason we don’t tend to use tests with ceiling or floor effects for particular populations is because they don’t tell us much about the kids at the top and bottom of the range.
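A ceiling effect of the kind described can be simulated: when underlying scores cluster near the top of a capped scale, the modal observed score is the maximum even though underlying ability still varies (the mean, SD and 40-item cap below are all made up):

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical underlying 'ability': normal, mean 38, SD 5,
# measured by a 40-item check, so observed scores are clipped to 0-40.
raw = [random.gauss(38, 5) for _ in range(10000)]
checked = [min(40, max(0, round(x))) for x in raw]

modal_score, modal_n = Counter(checked).most_common(1)[0]

# The ceiling piles roughly a third of scores onto the top mark,
# so the modal score is the maximum possible score.
print(f"modal score: {modal_score} ({modal_n} of {len(checked)})")
```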

      The Yr 1 Screening Check does not purport to measure functional literacy at 16.

      Of course it doesn’t. That was the point of my post. But the importance of SP is often presented, not in terms of its impact on word-reading and spelling, but in terms of its supposed impact on functional literacy. Here’s Kerry Hempenstall, in an example @MaggieDownie cited on Twitter yesterday. Poor literacy apparently prevents 50% of Australians ‘participating effectively in society’. The data from the survey he cites show that 20% of professionals and 30% of managers also have ‘inadequate literacy’. That might mean that professionals and managers could do better – but their inadequate literacy skills clearly haven’t stopped them participating effectively in society.

      The Check is a psychometrically sound measure “good enough” to identify children who need further instruction in how to handle the Alphabetic Code – the link between written and spoken English. Children – or anyone else – who have this capacity can read any text with comprehension equal to what it would be were the communication spoken.

      That’s great. And I understand that the issues around comprehension might be due to knowledge, vocabulary etc.

      I’ve said that we could learn a lot by administering the Check to Teens who lack “functional literacy.” That inquiry would be the route to take if anyone were concerned with testing the logic underlying the Screening Check. By the time the present cohort of primary school children reach age 16, “functional literacy” will have a whole nother meaning. But whatever that meaning turns out to be, children who have been taught how to handle the Alphabetic Code by the end of KS 1 will have a leg up.

      But the point of my post was that SP advocates often present its main advantage in terms of functional literacy at 16, whatever that happens to be at any given time. Yet the data show that roughly the same proportion of children ‘fail’ the phonics check, were ‘underachievers’ in the Clacks study, and emerge from school as functionally illiterate. The proportions might be pure coincidence, and the children in those minorities might be there for different reasons. The phonics check might well help us unpack those reasons, but so far, it looks to me like the SP leg-up isn’t enough to impact on functional illiteracy rates.

      The point I’m making is that despite SP advocates often citing adult functional illiteracy rates as justifying the use of SP programmes, there doesn’t appear to be any evidence showing that SP reduces adult functional illiteracy rates. If it hasn’t by the time the current Y3 cohort reach 16, there’s a high likelihood that people opposed to the use of SP will go ‘I told you so. Waste of time and money’ and a valuable initiative will be abandoned.

  12. Celebration time, Sue! I agree with everything you say here. (Well, I still have a few quibbles and raised eyebrows here and there, but they’re not worth mentioning). Seems to me the loooong exchange of comments here offers N=1 experimental proof of an important “if-then” proposition: IF rational parties can manage to exchange views long enough to find out what the other party is “really trying to say,” THEN (and only then?) will their consequential points of agreement and disagreement be evident.

    A good part of the Reading War has been about people on both sides citing “research” while ignoring the “facts on the ground” experimental evidence that goes unanalyzed while the War persists. This is where we currently find ourselves with respect to the UK Screening Check inquiry.

    The thing is, your blog is titled “synthetic phonics, dyslexia, and natural learning.” Re-reading it now from the perspective of your present comment, I understand (I think) what you were trying to communicate. But “comprehension” has little to do with the “author’s intent.” Had your comment here been the “precipitating event” for the colloquy, the interchange could have been avoided.

    “The Alphabetic Code, Initial Reading Instruction, and ‘Evidence’” (or other words to that effect) would have been a less misleading caption, and the present comment a more apt body for the blog.

    But no nevermind. This blog is already a blog behind your current blogs. More blogs are needed!

    • Celebration time, Sue! I agree with everything you say here. (Well, I still have a few quibbles and raised eyebrows here and there, but they’re not worth mentioning).

      Gosh!

      Seems to me the loooong exchange of comments here offers N=1 experimental proof of an important “if-then” proposition: IF rational parties can manage to exchange views long enough to find out what the other party is “really trying to say,” THEN (and only then?) will their consequential points of agreement and disagreement be evident.

      I’d say that’s the point of debate. To tease out and test out what the other party is saying. If both parties are saying the same thing using different words, or different things using the same words, areas of agreement and disagreement will become apparent only if they stick at it.

      A good part of the Reading War has been about people on both sides citing “research” while ignoring the “facts on the ground” experimental evidence that goes unanalyzed while the War persists. This is where we currently find ourselves with respect to the UK Screening Check inquiry.

      Couldn’t agree more.

      The thing is, your blog is titled “synthetic phonics, dyslexia, and natural learning.” Re-reading it now from the perspective of your present comment, I understand (I think) what you were trying to communicate. But “comprehension” has little to do with the “author’s intent.” Had your comment here been the “precipitating event” for the colloquy, the interchange could have been avoided. “The Alphabetic Code, Initial Reading Instruction, and ‘Evidence’” (or other words to that effect) would have been a less misleading caption, and the present comment a more apt body for the blog.

      But those are the terms in which interchanges are often framed. I wanted to take a closer look at the frames.

      But no nevermind. This blog is already a blog behind your current blogs. More blogs are needed!

      I’m impressed that you stuck with the discussion, Dick. I’ve found it really helpful.
