synthetic phonics and functional literacy: the missing link

According to Diane McGuinness in Why Children Can’t Read, first published in 1997, California’s low 4th grade reading scores prompted the state in 1996 to revert to using phonics rather than ‘real books’ for teaching reading. McGuinness, like the legislators in California, clearly expected phonics to make a difference to reading levels. It appears to have had little impact (NCES, 2013). McGuinness would doubtless point out that ‘phonics’ isn’t systematic synthetic phonics, and that might have made a big difference. Indeed it might. We don’t know.

Synthetic phonics and functional literacy

Synthetic phonics is important because it can break a link in a causal chain that leads to functional illiteracy:

• poor phonological awareness ->
• poor decoding ->
• poor reading comprehension ->
• functional illiteracy and low educational attainment

The association between phonological awareness and reading difficulties is well established. And obviously, if you can’t decode text you won’t understand it, and if you can’t understand text, your educational attainment won’t be very high.

SP involves training children to detect, recognise and discriminate between phonemes, so we’d expect it to improve phonological awareness and decoding skills, and that’s exactly what studies have shown. But as far as I can tell, we don’t know what impact SP has on the rest of the causal chain: on functional literacy rates in school leavers or on overall educational attainment.

This is puzzling. The whole point of teaching children to read is so they can be functionally literate. The SP programmes McGuinness advocates have been available for at least a couple of decades, so there’s been plenty of time to assess their impact on functional literacy. One of them, Phono-graphix (developed by a former student of McGuinness’s, now her daughter-in-law), has been the focus of several peer-reviewed studies, all of which report improvements, but none of which appears to have assessed the impact on functional literacy by school leaving age. SP proponents have pointed out that this might be because they’ve had enough difficulty getting policy-makers to take SP seriously, let alone fund long-term pilot studies.

The Clackmannanshire study

One study that did involve SP and followed the development of literacy skills over time was carried out in Clackmannanshire in Scotland by Rhona Johnston and Joyce Watson, then based at the University of Hull and the University of St Andrews respectively.

They compared three reading instruction approaches implemented in Primary 1 and tracked children’s performance in word reading, spelling and reading comprehension up to Primary 7. The study found very large gains in word reading (3y 6m; fig 1) and spelling (1y 9m; fig 2) for the group of children who’d had the SP intervention. The report describes reading comprehension as “significantly above chronological age throughout”. What it’s referring to is a 7-month advantage in P1 that had reduced to a 3.5-month advantage by P7.

A noticeable feature of the Clackmannanshire study is that scores were presented as group means, although boys’ and girls’ scores and those of advantaged and disadvantaged children were differentiated. One drawback of aggregating scores this way is that it can mask effects within the groups. So an intervention might be followed by a statistically significant average improvement that’s caused by some children performing much better than others.

This is exactly what we see in the data on ‘underachievers’ (fig 9). Despite large improvements at the group level, by P7, 5% of children were more than two years behind their chronological age norm for word reading, 10% for spelling and 15% for reading comprehension. The improvements in group scores on word reading and spelling increased with age – but so did the proportion of children who were more than two years behind. This is an example of the ‘Matthew effect’ that Keith Stanovich refers to: children who can decode read more, so their reading improves, whereas children who can’t decode don’t read, so don’t improve. For the children in the Clackmannanshire study as a group, SP significantly improved word reading and spelling and slightly improved their comprehension, but it didn’t eliminate the Matthew effect.
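To make the aggregation problem concrete, here’s a minimal sketch in Python using invented numbers (not the Clackmannanshire data): a class in which most children improve substantially while a small group slips further behind still shows a large improvement in the group mean.

```python
# Hypothetical illustration only - the numbers are invented, not taken from
# the Clackmannanshire study. Scores are months ahead (+) or behind (-) the
# chronological age norm.
import statistics

before = [0] * 25 + [-24] * 5   # 30 children; 5 already two years behind
after = [18] * 25 + [-30] * 5   # most gain 18 months; the same 5 slip further

print(statistics.mean(before))        # -4 : group starts slightly behind
print(statistics.mean(after))         # 10 : group mean improves markedly
print(sum(s <= -24 for s in after))   # 5  : the 'underachievers' are still there
```

The group-level gain is real, but on its own it says nothing about the children in the tail.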

The phonics check

There’s a similar within-group variation in the English KS1 phonics check, introduced in 2012. Ignoring the strange shape of the graph in 2012 and 2013 (though Dorothy Bishop’s observations are worth reading), the percentage of Year 2 children who scored below the expected standard was 15% in 2013 and 12% in 2014. The sharp increase at the cut-off point suggests that there are two populations of children – those who grasp phonics and those who don’t. Or that most children have been taught phonics properly but some haven’t. There’s also a spike at the end of the long tail of children who don’t quite ‘get’ phonics for whatever reason, representing the 5783 children who scored 0.
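As a purely hypothetical illustration of the ‘two populations’ reading of that graph (the proportions and score distributions below are invented, not the real check data), mixing a large group of children who can mostly decode the items with a small group who mostly can’t produces just this kind of shape: a pile-up near full marks, a long low tail, and a spike at zero.

```python
# Hypothetical two-population mixture - invented figures, not the actual
# phonics check results.
import random

random.seed(1)
scores = []
for _ in range(10_000):
    if random.random() < 0.88:          # children who 'get' phonics
        s = random.gauss(38, 2)
    else:                               # children who don't, for whatever reason
        s = random.gauss(10, 9)
    scores.append(min(max(round(s), 0), 40))   # clamp to the 0-40 mark range

print(sum(s >= 32 for s in scores))   # most clear the (assumed) cut-off of 32
print(sum(s < 32 for s in scores))    # a distinct low-scoring tail remains
print(sum(s == 0 for s in scores))    # including a spike of zero scores
```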

It’s clear that SP significantly improves children’s ability to decode and spell – at the group level. But we don’t appear to know whether that improvement is due to children who can already decode a bit getting much better at it, or to children who previously couldn’t decode learning to do it, or both, or if there are some children for whom SP has no impact.

And I have yet to find evidence showing that SP reduces the rates of functional illiteracy that McGuinness, politicians and the press complain about. The proportion of school leavers who have difficulty with reading comprehension has hovered around 17% for decades in the US (NCES, 2013) and in the UK (Rashid & Brooks, 2010). A similar proportion of children in the US and the UK populations have some kind of learning difficulty. And according to the Warnock report that figure appears to have been stable in the UK since mass education was introduced.

The magical number 17 plus or minus 2

There’s a likely explanation for that 17% (or thereabouts). In a large population, some features (such as height, weight, IQ or reading ability) are the outcome of what are essentially random variables. If you measure one of those features across the population and plot a graph of your measurements, they will form what’s commonly referred to as a normal distribution – with the familiar bell curve shape. The curve will be symmetrical around the mean (average) score. Not only does that tell you that 50% of your population will score above the mean and 50% below it, it also enables you to predict what proportion of the population will be significantly taller/shorter, lighter/heavier, more/less intelligent or better/worse at reading than average. Statistically, around 16% of the population will score more than one standard deviation below the mean. Those people will be significantly shorter/lighter/less intelligent or have more difficulties with reading than the rest of the population.
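For anyone who wants to check that figure, here’s a minimal sketch in Python (assuming scipy is available): the proportion of a normal distribution lying more than one standard deviation below the mean is roughly 15.9%, which is where the ‘around 16%’ comes from.

```python
# Proportion of a normal distribution more than one standard deviation below
# the mean - the source of the 'around 16%' figure.
from scipy.stats import norm

print(norm.cdf(-1))                  # ~0.159, i.e. roughly 16% of the population
print(norm.cdf(1) - norm.cdf(-1))    # ~0.683 lie within one SD of the mean
```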

Bell curves tend to ring alarm bells, so I need to make it clear what I am not saying. I’m not saying that problems with reading are due to a ‘reading gene’ or to biology or IQ and that we can’t do anything about them. What I am saying is that if reading ability in a large population is the outcome of not just one factor, but many factors that are to all intents and purposes random, then it’s a pretty safe bet that around 16% of children will have a significant problem with it. What’s important for that 16% is figuring out what factors are causing reading problems for individual children within that group. There are likely to be several different causes, as the NCES (1993) study found. So a child might have reading difficulties due to persistent glue ear as an infant, an undiagnosed developmental disorder, a mother with mental health problems who hardly speaks to them, having no books at home, or a family that dismisses reading as pointless. Or all of the above. SP might help, but is unlikely to address all of the obstacles to word reading, spelling and comprehension that such a child faces.

The data show that SP enables 11 year-olds as a group to make huge gains in their word reading and spelling skills. That’s brilliant. Let’s use synthetic phonics.

The data also show that SP doesn’t eliminate reading comprehension problems for at least 15% of 11 year-olds – or the word reading problems of around 15% of 6-7 year-olds. That could be due to some SP programmes not being taught systematically enough, intensively enough or for long enough. But it could be due to other causes. If so, those causes need to be identified and addressed or the child’s functional literacy will remain at risk.

I can see why the Clackmannanshire study convinced the UK government to recommend then mandate the use of SP for reading instruction in English schools (things are different in Scotland), but I haven’t yet found a follow-up study that measured literacy levels at 16, or the later impact on educational attainment; and the children involved in the study would now be in their early 20s.

What concerns me is that if more is being implicitly claimed for SP than it can actually deliver or if it fails to deliver a substantial improvement in the functional literacy of school leavers in a decade’s time, then it’s likely to be seen as yet another educational ‘fad’ and abandoned, regardless of the gains it brings in decoding and spelling. Meanwhile, the many other factors involved in reading comprehension are at risk of being marginalised if policy-makers pin their hopes on SP alone. Which just goes to show why nationally mandated educational policies should be thoroughly piloted and evaluated before they are foisted on schools.


References

McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.
NCES (1993). Adult Literacy in America. National Center for Education Statistics.
NCES (2013). Trends in Academic Progress. National Center for Education Statistics.
Rashid, S. & Brooks, G. (2010). The levels of attainment in literacy and numeracy of 13- to 19-year-olds in England, 1948–2009. National Research and Development Centre for Adult Literacy and Numeracy.


41 thoughts on “synthetic phonics and functional literacy: the missing link”

  1. I have to say that I don’t particularly disagree with your analysis on this one, Sue. But I would ask, what ‘method’ of teaching reading *does* have peer-reviewed studies showing superior long-term effects over ‘other methods’?

    As you surely know, research studies don’t just happen; there has to be a rationale, interest and funding. If none are forthcoming the research isn’t done.
    As long ago as 2005 the Education Select Committee recommended that a full study should be undertaken into the initial teaching of reading. The recommendation was ignored by the government in favour of a watered-down ‘pilot’, which, as I recall, was based on the then current guidance.

    • The point I’m making is that if, as the functional literacy data suggest, the causes of functional illiteracy are many and varied, no one method of teaching reading is likely to have a major impact.

      That isn’t to say that it doesn’t matter how we teach children to read; the evidence shows pretty conclusively that SP significantly improves word reading and spelling and comprehension for some children. And word reading and spelling and comprehension are important skills.

      What concerns me is that although there have been endless complaints about the 15-20% of children who struggle with their literacy, numeracy or education in general, we don’t focus sufficient resources on finding out what causes those problems. What’s ironic is that if we did, all children would be likely to benefit.

      Government policy-making and research funding decisions are major problem areas, you’re quite right.

  2. Again, a well-argued and informative blog. Thank you. I agree that there is much about SP to commend it as a tool for teaching reading. However, you have shown that it is unlikely to eradicate illiteracy, and I also agree with that. One factor may be that while SSP is an ideal method to teach decoding for shallow orthographies, it does not completely solve decoding problems for the English deep orthography. There are some self-evident problems with using it, unsupported by other approaches, in this context.

    As regards the evidence from the Clackmannanshire study I want to point out that this study is flawed. Initially the research team used control groups, who were taught using analytic phonics, against which to compare the SSP group. When it appeared that the SSP group were making greater gains than the AP groups all the children were transferred to the SSP programme for ethical reasons. This meant that there was no control group and results had to be compared to wider norms. As I see it, comparison to such norms exacerbates the problem of the specific profile of the group. We don’t know if a comparison group, matched in all possible ways bar teaching method, would have been ‘average’ or otherwise.

    For more on the Clack study:
    https://community.tes.co.uk/reading_theory_and_practice/b/weblog/archive/2014/06/01/how-secure-is-the-clackmannanshire-study.aspx

      • The Ellis & Moss paper is indeed “especially interesting.” It tells us a lot about the sociology of academia as well as the workings of EdLand. It also provides evidence for the proposition that the “missing link” is “the English Alphabetic Code combined with reliance on Reification.”

        Were I doing a “League Table” ranking (sitting half a world away, on a “rich multi-lateral data set”) it would go:

        1. UK Parliament
        2. Reception-Yr 1 students in England
        3. Reception-Yr 1 teachers
        4. DfE and Ed publishers (tied)
        5. Academia

        My analytics are very different from those of Ellis-Moss. The Academy bears responsibility for providing the “evidence base” for the now-abandoned-by-law-but-not-by-practice “Searchlights Model,” as well as providing the weak-theoretical “Simple View of Reading,” appended like an albatross to the Rose Report. Academia was very late to the party in critiquing Clack, and has done a tepid blasting of the study.

        Ellis-Moss hold Nick Gibb individually responsible, failing to note that “Systematic Synthetic Phonics” gained governmental support under the Labour Government, which has been maintained under the present Coalition government.

        Despite the actions of the “actees” ranked below them, and the political lobbying of their leadership, the Reception and Year 1 teachers and kids have done remarkably well. Each kid-cohort has improved in core Alphabetic Code capability in the three years screening has been done. The improvement will be markedly accelerated if and when the Academy gets around to looking at the data that are in plain sight.

        “Mistakes have been made” by the DfE and the publishing sector, but the mistakes haven’t been sufficiently consequential to derail progress in building a strong foundation that can lead to real rather than reified “functional literacy.”

        Ellis-Moss conclude with the rhetorical imperative: “The research community needs to act fast.” Don’t hold your breath for that action or expect much from the action if and when it comes.

        Having said all that, although Ellis and Moss are preaching to the choir, it’s a good sermon. They get a lot of things right, and it would benefit all if the research community were to act fast. My crystal ball just doesn’t see that happening.

  3. Question: What is the “missing link” between Synthetic Phonics and Functional Literacy?
    Short Answer: The English Alphabetic Code combined with reliance on Reification

    Longer Answer:
    To use your terminology, it’s all about logicalincrementalism—or the lack thereof. EdLand is short on logic and long on accretion, but that holds for all institutional communities.

    The thing is, children learn to speak prose, but they have to be taught how to read communication. The link between written and spoken communication is the English Alphabetic Code. Although commonly maligned, the substance and the structure of the Code make it the world’s common core language, but these characteristics also make it one of the world’s most complicated languages.
    Logically, it would make sense to teach kids entering school speaking prose how to handle the Alphabetic Code, the link between their spoken language capability and their latent written communication capability. But this is EdLand. Psychologists commonly leapfrog over the substance and structure of the Code. EdLanders and citizenry communicate it as “phonics”, or at best as “systematic synthetic phonics,” leading to fuzzy communication within EdLand and fogged communication externally.

    The “missing link” is further complicated by the fact that some children learn the rudiments of handling the Alphabetic Code without any (apparent) formal instruction. EdLand calls them Gifted. Others acquire “workarounds” to inadvertent defective instruction that permit them to function passably. EdLand calls them Proficient. Others, who are biologically or instructionally scuffed, never learn. Since all children, with few exceptions, enter school speaking the minimal prose needed to make initial instruction leading to functional literacy feasible, the differences are school-induced. However, they aren’t attributed to instructional deficits; they are viewed as deficits in the kids, their parents, or the “culture.”

    Long Answer: Outside the bounds of comment to a blogpost.

    Your post flags so many matters that warrant deliberation that I’ll not go beyond the first paragraph.

    Re Diane McGuinness. Her 1997 book is historically important, but a more current and comprehensive reference is her 3-book trilogy: Growing a Reader; Language Development and Learning to Read; Early Reading Instruction. The trilogy unpacks the quandary conveyed in your post.

    Re California: A concrete example of my answer to the “missing link.” “Whole Language” was implemented in CA in 1986. This was a decade after Goodman’s article. Prior to 1986, primary reading instruction relied on a textbook and workbook for each student in Yr-Grades 1-3, morphing into a “literature text” per grade in Grades 4-6. The 1986 change was a bonanza for publishers and was welcomed by teachers because it “freed them from the textbook.”

    A decade later, “Whole Language” was generally recognized as a failure, and it was “back to phonics.” The thing is, the Alphabetic Code was never “there” to go back to, and the schools had all the “real books” on their hands. Five years later, publishers “implemented” the change by adding a smattering of “phonics” (that was less than the “phonics” they ostensibly went back to) to the gear they had been using. They added more gear for the teacher and classroom and called it Balanced Literacy. Although the US and CA have since gone through the gymnastics of “No Child Left Behind” and “Race to the Top”, the textbooks haven’t changed. Meanwhile, the tests, which never matched the texts, didn’t change to match the rhetoric. It isn’t at all surprising that reading test scores have remained largely “flat,” although each year governmental and school officials have found some way to spin some part of the results as a “gain.” The “gains” have no consistency or progression, but who remembers from year to year.

    That’s an example of “absence of the Alphabetic Code and reliance on Reification.”

    • Thanks for your comment. Responses to a few points:

      “Children learn to speak prose, but they have to be taught how to read communication.” This is a frequent meme amongst SP proponents and one that I’m planning to discuss in my next post. I completely agree that the alphabetic code is the link between spoken and written language (at least for languages that use it). But as you point out, many children learn to read without being taught the alphabetic code explicitly.

      “Since all children with few exceptions enter school speaking minimal prose to make initial instruction leading to functional literacy feasible, the differences are school-induced. However, they aren’t attributed to instructional deficits; they are viewed as deficits in the kids, their parents, or the “culture.””

      How are the differences ‘school-induced’ if only a tiny minority of children in a class struggle with reading? Or with synthetic phonics come to that? Your view is that some children learn to read despite poor teaching. My view is that some children don’t learn to read despite good teaching. I can see why teachers might want to shed the blame for some kids not learning to read, but if 27 out of 30 children in a class are reading fine and 3 aren’t, I think there are good reasons for not suspecting ‘instructional deficits’.

      Re Diane McGuinness: I tried to read “Growing a Reader” but gave up in despair. In my post I’ve picked up on only two points McGuinness makes in “Why Children Can’t Read”, but I could have mentioned dozens. Much of the content of her books is factual; but much of it is speculation, misunderstanding and sweeping generalisation. I find that too frustrating to persevere with.

      Re California: For example in Growing a Reader (2004) McGuinness refers again to California’s performance. She says it had a functional illiteracy rate of nearly 60%. That sounds terrible – unless you know that McGuinness sets her functional literacy standard higher than the NAEP assessment. And that California’s fall from grace wasn’t statistically significant.

      Thank you for explaining about the textbooks. That does clarify what was meant by ‘phonics’.

      Re The missing link: If I’ve understood correctly, you’re saying that we don’t have evidence showing the impact of SP on functional literacy because in practice, the alphabetic code has never been rigorously and systematically taught and its impact later evaluated. If that’s what you are saying, I’d agree.

      But that’s the problem. If we don’t actually know what the impact would be, we can’t assume that SP would reduce functional illiteracy. And if the functional illiteracy rate is around 16% across the English-speaking world and has remained at that level for decades, it does suggest that multiple factors (not just phonological awareness or awareness of the alphabetic code) are involved.

      • How are the differences ‘school-induced’ if only a tiny minority of children in a class struggle with reading?
        All (+/-) children differ with the pre-requisites. They are spread out by what happens in school. You can call it the “Matthew Effect” metaphorically, but operationally it’s the School Instruction Effect. It’s not in the kids or in the Bible, it’s in the instruction.

        Your view is that some children learn to read despite poor teaching. My view is that some children don’t learn to read despite good teaching.

        I think I said (or should have said) “inadvertent mal-instruction,” which seems to me to reconcile the two views.

        Re Diane McGuinness’ contribution
        “Reading comprehension” is a function of what the reader brings to the text. When McGuinness is good, she is good par excellence, but she, like all of us, has blind sides and clinkers. To exorcise the clinkers would take us far beyond the scope of the colloquy here.

        But that’s the problem. If we don’t actually know what the impact would be, we can’t assume that SP would reduce functional illiteracy.
        Actually, we have the data! It’s in the results of the Yr 1 Screening Check, and the KS1 Reading Test. But the metrics have to be analyzed much more intensively than has been done to date, and no one (+/-) seems at all interested in doing this.

      • Me: How are the differences ‘school-induced’ if only a tiny minority of children in a class struggle with reading?

        DS: All (+/-) children differ with the pre-requisites. They are spread out by what happens in school. You can call it the “Matthew Effect” metaphorically, but operationally it’s the School Instruction Effect. It’s not in the kids or in the Bible, it’s in the instruction.

        Me: Your view is that some children learn to read despite poor teaching. My view is that some children don’t learn to read despite good teaching.
        DS: I think I said (or should have said) “inadvertent mal-instruction,” which seems to me to reconcile the two views.

        Me: You’re saying that the variation in outcome is only due to instruction? How would you rule out all the other possible variables?

        Me: But that’s the problem. If we don’t actually know what the impact would be, we can’t assume that SP would reduce functional illiteracy.

        DS: Actually, we have the data! It’s in the results of the Yr 1 Screening Check, and the KS1 Reading Test. But the metrics have to be analyzed much more intensively than has been done to date, and no one (+/-) seems at all interested in doing this.

        Me: But decoding in Yr1 and the KS 1 reading test are not assessments of functional literacy at 16. Where are those data?

      • The format is getting a little hard to follow, so I’ll try to answer the two questions I think you are asking.
        1. How can we rule out possible variables other than “instruction”?
        The same way we do in all scientific experimentation. By confirming the replicability of the “If-Then” relationship. That is: “If” (variations in instruction)/“Then” (differences in the dependent variable, i.e. the Screening Check). Should there be further doubt, the population of classrooms and schools is sufficiently large that we can draw random samples to test the conclusion that “it’s the instruction.”

        Then when we act on the “evidence” we’ve confirmed, we find that Voila! That’s it.
        If we can’t say That’s It!, we conclude “experimenter error,” and it’s back to the drawing board. But minimally, the experiment will provide clues re the “best bet” of how to correct and move on.

        The point is, we are testing the instruction, not the kids. This, in my view, is what we should be doing.

        2. Decoding in Yr1 and the KS 1 reading test are not assessments of functional literacy at 16. Where are those data?
        The logic is that the foundations for functional literacy (whatever it means) are best laid down with instruction during the sensitive developmental period of +/- age 4-6.

        Schools can and sometimes do mess up instruction after kids have been taught how to handle the Alphabetic Code. But kids don’t “lose capability” they’ve acquired. That is, completing formal instruction in reading per se is an asset that kids personally and schools institutionally can build on. True, “a rich curriculum” is important all the way through, but if the “how to” of a “rich curriculum” isn’t specified, it’s happy-talk that messes up the entire shoes-on-the-ground instruction.

        To specifically answer your question: We don’t have the data you ask for, but we don’t need them. We should be keeping an eye on each cohort of kids as they go through schooling, but we have to instruct the primary kids we have now, not those we had earlier or that we will have later. By the time the current cohort gets to Age 16, “functional literacy” will have a whole nother meaning.

        There are indeed kids in the schooling pipeline now who have been inadvertently instructionally scuffed and are regarded as “dyslexic.” It’s feasible to “fix the problem” – but not by debating the meaning of “dyslexia.”

      • Me: How can we rule out possible variables other than “instruction”?

        DS: The same way we do in all scientific experimentation. By confirming the replicability of the “If-Then” relationship. That is: “If” (variations in instruction)/“Then” (differences in the dependent variable, i.e. the Screening Check). Should there be further doubt, the population of classrooms and schools is sufficiently large that we can draw random samples to test the conclusion that “it’s the instruction.”

        It’s true that ‘if’ there are variations in instruction ‘then’ there will be differences in the dependent variable, and those differences will emerge in the Phonics Check. But there are many possible causes for impaired decoding ability, and you seem to be saying that instruction must be able to overcome all of them. Is that what you are saying?

        If we want to test the instruction, not the kids, then we need to see what happens when we control for all other major variables. I don’t think we’ve ever done that.

        2. Me: Decoding in Yr1 and the KS 1 reading test are not assessments of functional literacy at 16. Where are those data?

        DS: The logic is that the foundations for functional literacy (whatever it means) are best laid down with instruction during the sensitive developmental period of +/- age 4-6.

        I understand the ‘logic’. I am completely persuaded that SP significantly improves the decoding ability of some children. It wouldn’t surprise me if it improved the decoding ability of most children. But the data tell us that it doesn’t improve the decoding ability of all children. And we don’t know what impact it has on the functional literacy of adults.

        I have yet to meet an SP advocate who is ambivalent about SP improving adult functional literacy. Or one who says that improvements to decoding are enough. But from what you’re saying, if some children’s decoding is still a bit iffy when they are 11, or if SP doesn’t make any difference to the functional literacy of English school students in 10 years’ time, then that must be because instruction wasn’t good enough. That looks like a watertight argument, except it rests on a very wobbly implicit assumption that every child could decode given the appropriate instruction.

        As for there being a “sensitive developmental period of +/- age 4-6”, I’d be interested to see some data to support that assertion too.

  4. Dick, you said, “Despite the actions of the “actees” ranked below them, and the political lobbying of their leadership, the Reception and Year 1 teachers and kids have done remarkably well. Each kid-cohort has improved in core Alphabetic Code capability in the three years screening has been done. The improvement will be markedly accelerated if and when the Academy gets around to looking at the data that are in plain sight.” I take it that you mean that scores in the phonics screening check have improved. This probably does indicate that children are increasingly being taken through phonics training at the recommended pace. And we know teachers are reporting more teaching of nonwords in preparation for the check. Despite all this, it is far less clear that children are consequently improved readers. Until we know that, it is important to hold fire on the conclusion that the phonics check is improving reading. The NFER interim report concludes that some children who fail the phonics check nevertheless pass the KS1 reading assessment and that a school’s enthusiasm for phonics, or lack thereof, does not seem to affect the KS1 results.

    I am interested in your viewpoint. You seem to be saying that the phonics programmes do not teach the alphabetic code. We’ve discussed this before on the TES blog and I have to admit I was puzzled then about what you were getting at. Could you give some details as to how the phonics programmes are failing to teach pupils to ‘handle the alphabetic code’, and how you think this should be taught?

    TES blog discussion here: https://community.tes.co.uk/reading_theory_and_practice/b/weblog/archive/2015/01/16/the-relationship-of-synthetic-phonics-to-comprehension.aspx

    • I take it that you mean that scores in the phonics screening check have improved.
      Yes. That holds at the National Level. At the LEA level, all that we see is variability–with strong indications that the variability is a function of the instruction that schools and teachers are actually providing–irrespective of what they say they are doing about “Phonics”, think they are doing, or their attitude to “Phonics” or to the Screening Check.

      What I’ve been trying to say, in as many different ways, in as many different places as I can find: Look at the data. It’s largely been collected, and its examination will illuminate “best bet” next steps.

      You seem to be saying that the phonics programmes do not teach the alphabetic code.
      Well, I’d turn it around a bit. The Alphabetic Code is the link between written and spoken English, and yagotta use the Code to “read right.” (There ARE work-arounds, but promoting these for novices isn’t smart. Some young children learn how to handle the Code “easily”; others have much more difficulty. If the child can speak prose, the differences are only in rate of acquisition + school induced unintended mal-instruction.)

      Although the Alphabetic Code is the link between written and spoken communication, like any Code (or cipher) there are oodles of other considerations beyond the Code, but entangling reified abstractions like “comprehension,” “meaning” and “functional literacy” isn’t smart.

      The “proof” of any Programme is the “pudding.” The UK Alphabetic Code Screening Check is a “good enough” metric for testing the pudding of the formal instruction in reading per se. If we look more closely at the “evidence” of the Natural Experiment underway, using the Screening Check as the probe, we won’t have to speculate about Programmes, Phonics, and a lot of other contemporary entanglements.

      • Surely the Alphabetic Code is simply a pronunciation guide to the symbols of written English; the pronunciation is what links it to spoken English. The only difference between this and a dictionary is that it is presented the other way round (in charts) and uses normal letters rather than IPA symbols.

      • Dick, what data can we expect to get from the results of the phonics check?

        I can see that it will tell us if pupils have been taught and learnt the phonic correspondences necessary for decoding the nonwords in the check. We could also learn something from their responses to the real words – but this is slippery, for if the real words are pronounced correctly this could either be the result of applying the alphabetic code or of the fact that the child knows the word/s, while if pronounced incorrectly, this may nevertheless not indicate lack of knowledge of correspondences. This feature of the check makes it unfit for the purpose of giving a clear assessment of progress in phonics. And while these details may be useful to teachers for planning further work, I believe results are reported upwards simply as a mark – analysts at NFER or the DfE won’t get the useful details; they will just get a mark which is not even a reliable measure of phonics use.

        So how will this data tell us “best bet” next steps?

        Analysis of programmes used against check results might indicate that some programmes are more successful than others for teaching the correspondences (if analysis can factor out or allow for the real words). Then that programme could be selected to be used exclusively, with the decision as to how to teach phonics taken completely out of the hands of schools. Perhaps this is the hope – identification of a highly effective teacher-proof text book for phonics. But would this be satisfactory, given that the programme would only cover phonics with no allowance made for other factors? Factors such as the thorny issue of differing levels of ‘prose’ use – highly influential, not only on the accuracy of decoding but also on the educational benefits obtained from reading. Is it wise to de-skill teachers by making their professional judgements redundant?

        Surely the ultimate aim of phonics check and programme should be to improve reading. Unless it is shown that these improve reading there is no justification for either. They greatly reduce the choices teachers can make based on their first-hand experience of their pupils, their knowledge of pupils’ levels of language and understanding, and their assessments of pupils’ abilities to get to reading through a synthetic phonics route. This is a massive sacrifice to make unless it is proven to be of real, tangible benefit – the attainment of at least functional literacy for an increased percentage of pupils. Sue’s blogpost above places a question mark over the belief that SSP can eradicate illiteracy.

        In fact we have the data linking the phonic check results with which schools best achieve the ultimate aim of teaching reading, from the NFER interim report. But unfortunately, you choose to dismiss this softer data when you assert that results are simply and directly dictated by instructional fidelity to programmes. According to the softer data schools where this fidelity is weak do not suffer in their KS1 results, ie they are nevertheless able to deliver the good reading which is the ultimate aim. It is true that this data depends on the subjective views of teachers, schools and the surveying body. This does not justify ignoring it. Finding “best bet” next steps should, at the very least, introduce some standard means of assessing these subjective judgements in order to measure the importance of phonics as compared to the importance of teacher judgement (possibly to use other methods alongside). Otherwise we lose sight of the ultimate aim of effective reading instruction through a dangerous assumption that effective reading instruction must always be through a strictly applied and prescribed phonics route.

        The UK Alphabetic Code Screening Check is NOT a “good enough” metric for testing the pudding of the formal instruction in reading. It is a flawed check which does not provide the data needed to assess pupil progress in reading.

        We have already discussed this exhaustively on the TES blog, and I have no wish to hijack Sue’s blog, which deals with wider issues, any further. I’m signing out at this point.

  5. I’m not an academic or a teacher so I’ll have to write in layman’s terms. I hope you don’t mind me butting in on an academic debate but I have two thoughts or questions:
    1. Can we compare apples and pears to shed light on illiteracy beyond decoding, i.e. what are the causes of illiteracy in countries with a transparent orthography? How are they tackled? Maybe that can help English speaking countries resolve remaining literacy problems, once they’ve implemented good phonics across the board? (I really don’t think that is happening now, in my own experience.)
    2. It seems to me that many issues get lumped together in this debate. Can we calculate (using data on neurodevelopmental problems, poverty, neglect, genetics) just which and how many children are still at risk of literacy problems (after good phonics)? I’d have thought research in neuroscience has a lot to offer in terms of disentangling different types of problems and the causes of them in this debate.

    • Good points!
      In languages with one-to-one Rules of Grapheme-Phoneme Correspondence (a transparent orthography) reading per se can be, and is, reliably taught in a matter of weeks or months. This would hold for English, were it only the 40ish Correspondences of “Phonics first, fast, and only.” The complications arise largely in the Advanced/Extended/Complex Code.

      In one-to-one Correspondence Codes, “dyslexia”/reading problem isn’t a matter of “accuracy.” It’s a matter of “fluency.”

      Re untangling. When we don’t have consensus about either what constitutes “literacy problems” or what constitutes “good phonics,” looking to other data bases, however good they may be in their own right, further tangles rather than untangles matters.

      The UK Alphabetic Code Screening Check is a “good enough” measure for identifying children at instructional risk. The data show that a good portion of kids with “neurological needs” really don’t need “special education” in reading–they aren’t “reading disabled.” Indications are that more of these children would be “enabled” were they not inadvertently mal-instructed, but the instruction they are receiving hasn’t been examined.

      It doesn’t require research or “data” to know that poverty, neglect, and other social abuses scuff kids. The only indicator available for these variables is “Free School Meals.” Screening Check data show a relationship, but the data provide no support that the relationship is causal.

      • Thank you for your reply Dick. I coach children, dyslexic or otherwise, who struggle with reading using the excellent SP training I undertook last year (and wish that all teachers did).

        I can see good SP (with comprehensive teaching of the complex code) will help nearly all of the children who come to me. But what it also does is highlight where there are additional problems which tend to get lumped in with dyslexia and are often not picked out from the (presenting) reading problem or treated adequately.

        So what hinders fluency in one-to-one correspondence codes? That, to me, might prove to be really useful information for helping children with problems beyond reading. (And it might help to stem the almost inevitable expansion of various diagnoses of other ‘issues’ if they continue to incorporate reading.)

    • Lucy, you might be interested in work done by David Share, looking at the range of skills needed for effective reading beyond decoding. His theory is that reading research has been so pre-occupied with problems presented by the deep orthography of English that other issues have been missed or dismissed. I have written a blogpost about this with the necessary references to Share’s work. I suspect you may not agree with my conclusions, but you might find the journey interesting. Here:
      https://community.tes.co.uk/reading_theory_and_practice/b/weblog/archive/2014/04/22/perils-and-eccentricities.aspx

  6. Response to Marlene Greenwood.

    The technical title of the English Alphabetic Code is “Rules of Grapheme/Phoneme Correspondence.” The “Rules” are NOT the generalizations that are the currency in psychology and other social sciences. And they are not the “Phonics Rules” of spelling and reading embedded in some instructional schemes.

    The action or “link” in the Code is provided by the Correspondences, not by either the phonemes or the graphemes. This is the sense in which the term, “Rules” is used.

    It’s also worth noting that there are English Morphology Rules, such as prefixes, suffixes, plurals, and suffixes internal to words that change the meaning of base words. English is a Grapho-Phonemic language and much of the misunderstood “irregularity” is straightened out by adding “Morphologics” to “Phonics.”

    There is also the consideration of Syllabification, since syllables are the smallest pronounceable unit of spoken English. English is Alphabetic-based, not Syllable-based, so a Syllabary or onset-rime progression is not the best way to initiate reading instruction. However, syllabification considerations warrant later instructional attention.

    There are other complications (or simplifications, if used as such). Punctuation marks, for one. Font size and characteristics for another. And reading on the Internet introduces a whole nother set of complications/simplifications.

    It’s “smart” to place the highest priority on the Alphabetic Code, because teaching kids how to use the Code in written communication is foundational to “everything else.” If this instruction is messed up, “everything else” gets messed up.

    It seems to me that there are more differences in comparing the Alphabetic Code to a Dictionary than there are similarities. Each is a “stand-alone” reference. However, the use of the Code makes sense only in the context of the foregoing considerations. It seems to me that the Code is more comparable to the Periodic Table of Elements or the Genetic Code. That is, it’s useful to specialists, but doesn’t play much of a role in everyday life, where we use the artifacts that have evolved or been engineered to be “fit for purpose.” A Dictionary, on the other hand, is fit for the purposes of non-specialist duffers who know how to read and also know the conventions of reading a dictionary.

    We’re all duffers in some reading. I can read, but there are some words I have to “look up.” I was just reading an article that used the word “belletristic.” Since the phrase was “belletristic disdain” I got the drift, but I at first misread the word as “balletristic” since “ballet” is a word in my lexicon, and I didn’t make the connection with “belles lettres.” Slowing down, I corrected the vowel Correspondence, but I still had only a dim notion of the communication. “Looking it up” involved only a few clicks on the keyboard, and I now understood it as a neatly turned phrase. I’ll be surprised if I ever encounter the word again, and I don’t plan to ever use it, but the dictionary made the reading more pleasurable.

    Incidentally, SpellCheck just told me I had misspelled the word with 2 “t’s”, a carry-over from my “ballet” confusion. I corrected that misspelling, but other typos likely remain.

  7. But there are many possible causes for impaired decoding ability, and you seem to be saying that instruction must be able to overcome all of them. Is that what you are saying?

    If we (people in general and the UK government in particular) want to teach all kids to read by the end of KS1, the answer is Yes. It’s not me saying it, it’s what the intent entails. The dither is in the term “overcome.” The results of the Screening Check in England indicate that many of the “possible causes” HAVE been overcome–ELL for example–and the results have also illuminated next steps for cleaning up other messy matters–“special needs” and “dyslexia,” for example.

    “Poverty” is the only popular “possible cause” for which there is still any relationship, and the Check results strongly suggest that the relationship is NOT causal. Pursuing the matter requires only analysis at the school and class level–which is what I’ve been pitching.

    If we want to test the instruction, not the kids, then we need to see what happens when we control for all other major variables. I don’t think we’ve ever done that.

    I’m saying we have to live with the “major variables” we have. That is, we can’t “control” poverty, for example. Efforts to control it statistically lead to all kinds of nonsense. All wegotta do is look more carefully at the “evidence” right in front of us in the Natural Experiment probe of the UK Screening Check.

    from what you’re saying, if some children’s decoding is still a bit iffy when they are 11, or if SP doesn’t make any difference to the functional literacy of English school students in 10 years’ time, then that must be because instruction wasn’t good enough. That looks like a watertight argument, except it rests on a very wobbly implicit assumption that every child could decode given the appropriate instruction.
    Actually, the assumption isn’t implicit. The proposition is being put to the test before our eyes in the Natural Experiment. Moreover, I’m saying that it’s feasible to “clean up” the older “iffy” kids now. Moreover plus, I’m saying that the “functional literacy” EdLand is concerned with today isn’t even up to contemporary Communication Technology, let alone prevailing conditions 16 years out.

    • Possible causes for impaired decoding

      The ‘intent’ might be to teach all kids to read by the end of KS1, but that doesn’t mean it’s a realistic intent. Of those children eligible for the phonics test, 12% didn’t meet the expected standard and almost 6000 scored 0. What evidence do we have that the ‘intent’ is actually feasible? Do we know what proportion of children we can teach to read English, or what proportion can become functionally literate?

      How has the Phonics Check illuminated the next steps for cleaning up special needs and dyslexia?

      Controlling for major variables

      But the evidence is that 12% of children in Y2 weren’t able to decode 80% or more of target words/non-words. What does that tell us about the major variables?

      Implicit assumption that every child can decode

      If the assumption is being tested, then the results so far are telling us that it’s wrong. What evidence do you have about ‘cleaning up’ the older ‘iffy’ kids?

  8. Hi Suzy, loving these blogs, sorry I can’t cope with working my way through all the comments. Just wanted to point out that, where later literacy is concerned, surely one of the keys is that children have to enjoy reading, because the majority of reading they do (over the course of their lives) will need to be outside of school? This, I suspect, is what some people are missing – that some children basically stop reading later on because they stop enjoying it. I saw that someone here said they don’t go backwards after they’ve learned how to do it, but if they stop practising then surely that is a risk? As far as I’m aware, various studies have shown a very powerful link between taking pleasure from reading, doing more of it, and getting better at it. Looking forward to the next instalment! 🙂

    p.s. I don’t buy the faulty instruction line either.

  9. What evidence do we have that the ‘intent’ [to teach all kids to read] is actually feasible?
    The “evidence” is in play in the Natural Experiment. There is more data there than has been analyzed. And with the widespread belief that the intent is unrealistic, it’s remarkable that the Experiment is still in progress.

    How has the Phonics Check illuminated the next steps for cleaning up special needs and dyslexia?
    Re: “special needs.” The results to date indicate that many children classified as special needs don’t have those “needs” insofar as reading is concerned. The next step is further analysis of the differences in “needs” categories, and action consistent with the differences.

    Re “dyslexia”: The Screening Check is applicable to individuals of any age who have a “reading problem.” Quick application of the Check can determine if the “cause” is absence of capability to handle the Alphabetic Code. If that’s it, the “problem” can be fixed. If that’s “not it,” more troubleshooting can be done. In any case, addressing “dyslexia” at the present time entails instruction. The only question is, What instruction? That’s an empirical matter.

    the evidence is that 12% of children in Y2 weren’t able to decode 80% or more of target words/non-words. What does that tell us about the major variables?
    Several things:
    1. Schools nationally aren’t doing much to “intervene” in Y2.
    2. If kids are inadvertently mal-instructed in Reception and Yr 1, they are increasingly “tough cases.”
    3. The relative difficulty of the items that comprise the Check matches the pattern of difficulty for the Yr 1 overall and Yr 2 non-pass.

    • What evidence do we have that the ‘intent’ [to teach all kids to read] is actually feasible?
      So despite there being no evidence whatsoever, it seems, that a group of kids representing the normal range have ever all been taught to read well enough for them all to do more than extract simple information from a newspaper report, you’re willing to claim the evidence is there, but we just haven’t analysed it yet? Please note I am not saying that we know it’s impossible to teach all kids to read, or that we shouldn’t aim for that – what I’m saying is that the data we have so far say it’s unlikely that we’ll be able to do it.

      Special needs

      In the UK, ‘special needs’ are defined in terms of the educational provision available in most schools. If a child needs provision additional to what’s generally available, they have SEN. So whether a child is classified as having SEN is a matter of provision, rather than factors within the child. The Phonics Check results to date show that a significant percentage of children in Y2 cannot effectively decode text. So whatever provision is being made in their schools it isn’t effective in 10%-15% of cases. I completely agree that the next step is further analysis, and appropriate action, but that, in itself, involves additional educational provision.

      ‘Dyslexia’
      If the cause of the problem isn’t absence of capability to handle the Alphabetic Code, what troubleshooting do you have in mind? What other causes of ‘dyslexia’ might there be?

      12% of Y2 children not meeting the expected standard
      1. What do you mean by ‘intervene’?
      2. I can see why ‘mal-instructed’ children might have problems with decoding as a result, but these are children who have been taught SP for two years running. And they are in Y2. They’re not school-hardened 11 year-olds with attitude.
      3. How is ‘difficulty’ determined? We keep reducing it until we get a 100% pass-rate?

  10. The usual danger of some circular arguments here: the claim that children who don’t meet the ‘expected standard’ haven’t been taught SP properly becomes a tautology (thus excluding by conceptual fiat the empirical possibility that the best possible SP programme won’t ‘work’ for all children).
    Or ‘Dyslexia doesn’t exist’ because the pupils allegedly in the set of ‘dyslexics’ actually weren’t taught SP properly (again, this becomes an unfalsifiable claim, and therefore deeply suspect). By the way, I’m not assuming any particular position in the debate about whether dyslexia ‘exists’.

  11. Dick, you wrote:
    “All wegotta do is look more carefully at the “evidence” right in front of us in the Natural Experiment probe of the UK Screening Check.”

    What do you mean by ‘Natural Experiment’?
    Are you saying that the Phonics Screening Check is an experiment in which all the 6 year old children in England are its guinea pigs?
    or:
    Are you saying that mandatory teaching of synthetic phonics is the ‘Natural Experiment’ and the phonic screening check is the evidence for the experiment?

    I don’t think I like the idea of children in English schools being the subject of an experiment!

  12. Pingback: On Teaching Reading | ThinkSayWriteCheck

  13. Have no fear about the Natural Experiment, Marlene. The thing about a Natural Experiment is that there is no artificial imposition of any kind at all. The results of the Screening Check nationally indicate that there is variability in the reading instruction that LEAs are providing. That is, LEAs are getting different results that are not a function of commonly believed, possible “major” biosocial variables (a relationship with “free school meals” excepted). However, no analysis has as yet been done of what instruction is being provided “shoes on the ground” in schools and classrooms. By comparing the differences among schools and classes we can find out such things as:
    –do differences in “SP” and “Mixed Methods” make any difference in student learning
    –are there differences among “SP Programmes”
    –does the difference in training that schools chose to provide in the Match Funding make any difference
    –were differences in schools choosing to participate in the Match Funding and the patterns of their purchases consequential

    None of these questions entails any additional burden on either school personnel or students – it’s all natural, but it also yields experimental evidence.

  14. Reply to Professor Davis–
    What we have here isn’t a tautology. We have the results of a psychometrically sound Alphabetic Code Screening Check. If you question the Alphabetic Code or what constitutes a psychometrically sound instrument, we can talk about that, but the Code and the Check are “on the ground conditions.”

    Conceptually as well as empirically, it is indeed possible that +/- all kids cannot be taught how to handle the Alphabetic Code and the other conventions entailed in written communication. That proposition is being tested in the Natural Experiment in progress.

    At this point, we know that there are differences in SP instructional schemes and programmes, and differences in school and teacher implementation, but we don’t know whether these differences are consequential. The data to clarify these uncertainties are (largely) “there,” but they have not as yet been analyzed–That is the point I’ve been trying to pitch.

    The logic just stated also holds for “dyslexia.” The Screening Check can also be used as a probe to clarify what is at present a messy conceptual and empirical matter. I’m also for that inquiry, which is cheap and presently feasible.

    I haven’t made any claims beyond the above that I’m aware of. Do you oppose empirical inquiry?
    If not, is there a better, immediately feasible alternative methodology?

  15. Response to Sue’s February 25 comment.

    What evidence do we have that the ‘intent’ [to teach all kids to read] is actually feasible?
    The data to provide more evidence than we now have has largely been collected, but it has not as yet been analyzed. That’s all I’ve been trying to say.

    ‘Dyslexia’
    If the cause of the problem isn’t absence of capability to handle the Alphabetic Code, what troubleshooting do you have in mind? What other causes of ‘dyslexia’ might there be?

    In doing any troubleshooting, you have to take it one step at a time. The first step is identifying individuals who flunk the Alphabetic Code Check. I have no idea how many there will be at various ages, but that information will be useful in its own right. Completing that initial probe will open up probes of the “work-arounds” that dyslexics use to cope, and also probes of the learned helplessness that the syndrome inherently entails.

    What other causes of ‘dyslexia’ might there be?
    You name it. I’m putting my chips on instruction, but it’s an empirical question. The best methodology for untangling the matter, in my view, is what I’ve been talking about. If there’s a better way, I’ll buy into it in a London minute.

    1. What do you mean by ‘intervene’?
    RTI was coined as “Response to Intervention.” It typically turns out to be “Really Terrible Instruction.”

    I can see why ‘mal-instructed’ children might have problems with decoding as a result, but these are children who have been taught SP for two years running. And they are in Y2. They’re not school-hardened 11 year-olds with attitude.
    I agree 100%. The thing is, whatever schools think they are doing to “teach SP” just ain’t altogether right, insofar as handling the Alphabetic Code is concerned. HTs, by and large, think their teachers are prepared, and Reception and Yr 1 teachers, by and large, think they know the results of the Screening Check without the Check. And all think they’re doing what’s best for each and every kid. If what schools and teachers believe were sound, all kids, except those excluded for physical reasons, would be reading all 40 items on the Check with allowance for “little-kid carelessness.” Those that Yr 1 teachers missed, Yr 2 teachers should be able to pick up.

    Further analysis of Screening Check results that have been collected will clarify why “what should be happening” is “not happening.”

    3. How is ‘difficulty’ determined? We keep reducing it until we get a 100% pass-rate?
    I’m referring to the “item difficulty” of the 40 items on the Check. This important information is buried deep within the Tech Reports for each administration of the Check. For 2013, see:
    https://logicalincrementalism.wordpress.com/2015/02/23/synthetic-phonics-and-functional-literacy-the-missing-link/#comment-491

    The Figure is on pages 14-15 and misleadingly labeled “Facility.” You can see by inspection that the Yr 1 difficulty closely matches the Yr 2 difficulty. And if you compare 2013 with 2012, you’ll see the same findings. Whatever the Yr 2 teachers are doing, the “intervention” ain’t intervening.

    The relative difficulty of the 40 items is also worth noting. For example, the “debate” over “pseudo-words” and “real words” was/is much ado about little. It’s in the Alphabetic Code and the instruction.
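
    To make the item-difficulty point concrete, here is a minimal sketch, in Python, of how facility values (the proportion of children reading each item correctly) could be computed and compared between Yr 1 and Yr 2 – assuming hypothetical item-level data; the published Tech Reports give only aggregated figures, so the file and columns below are invented:

        # Hypothetical sketch (Python/pandas): compute item facility (proportion
        # correct) for each of the 40 Check items and compare Yr 1 takers with
        # Yr 2 re-takers. The file and column layout are assumptions: one row per
        # child, columns item_01..item_40 scored 0/1, plus a 'year_group' column
        # taking exactly two values ("Yr1", "Yr2").
        import pandas as pd

        responses = pd.read_csv("item_level_responses.csv")  # hypothetical file
        item_cols = [f"item_{i:02d}" for i in range(1, 41)]

        facility = responses.groupby("year_group")[item_cols].mean().T  # rows: items

        # If the two columns track each other closely, the Yr 2 'intervention'
        # hasn't changed which items children find hard.
        print(facility.round(2))
        print("Yr1 vs Yr2 facility correlation:",
              round(facility.iloc[:, 0].corr(facility.iloc[:, 1]), 2))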

      • I really must protest at your efforts to dismiss the real-words-in-the-check issue, Dick.

        This is what the technical report says:
        “The phonics screening check consists of 20 real words and 20 pseudo-words. The pseudo-words provide the purest assessment of phonic decoding because they are new to all children, so there is no unintended bias based on visual memory of words or vocabulary knowledge. The pseudo-words are presented alongside a picture prompt (a picture of an imaginary creature) and children are asked to name the type of creature. This approach makes it clear to children that they are reading a pseudo-word, which they should not expect to be able to match to their existing vocabulary. The real words include between 40 per cent and 60 per cent less common words, which children are less likely to have read previously. Less common words are included so that the majority of children will need to decode using phonics rather than rely on sight memory of words they have seen before.”

        This admits that between 40 per cent and 60 per cent of the real words are more common words which children are likely to have read previously. Therefore with these words there is a definite chance they will have been memorised, not decoded using phonics. Therefore these words do not efficiently test phonic knowledge. With the 40-60 per cent less common real words it is expected that children will need to use decoding rather than rely on memory. Yet administrators are told that such words must be pronounced correctly (same old example – blow must rhyme with low not cow), so a plausible decoding with the ‘wrong’ pronunciation is marked as a failure. Therefore these words do not efficiently test phonic knowledge either. It is more or less admitted here that the check is not a ‘pure’ check of phonic decoding and, unless the authors of the technical report are going wrong somewhere, it is clear that the real-word issue exists.

        Do you mean, when you say we have the data, that teachers giving and marking the test have the data for their own pupils? This will be true if they have kept good records of which words were incorrectly pronounced and in what ways, although where real words are pronounced correctly they will be guessing (pretty well-informed guessing if the tester is the child’s teacher) whether this is because the word has been memorised or decoded correctly. However, I’m pretty sure I’m right in saying that this detailed information is not passed upwards. The data may possibly be out there but it hasn’t been collected for analysis.

        What we do know from the NFER analysis is that schools don’t have to be super keen on phonics to get their pupils through the KS1 reading test. This casts a question mark over your basic thesis, Dick. Your basic thesis, and it is a belief, seems to be that if phonics is taught correctly all pupils will learn to read, and its converse – if pupils do not learn to read it is because phonics has not been taught correctly. But the phonics check only checks phonics, and if all schools gain a 100% pass rate it will only be shown that the children are learning phonics. Yes, that’s good, but let’s not get carried away until it is shown that the 100% can also read effectively for meaning. It has not been proved impossible that an over-emphasis on phonics will get in the way of the teaching and learning of other essential reading skills, and it has not been proved impossible that a differently configured approach to reading might serve pupils better. Answers to these questions will not come out of the natural experiment, because that is only testing whether well-taught phonics teaches phonics well. In fact, it will not even tell us that until the real words are thrown out.

    • Dick, thanks for the link to the technical report. It’s very helpful regarding the reliability and validity of the phonics check itself, but doesn’t answer any of my questions about the link between SP and functional literacy.

      Me: What evidence do we have that the ‘intent’ [to teach all kids to read] is actually feasible?
      DS: The data to provide more evidence than we now have has largely been collected, but it has not as yet been analyzed. That’s all I’ve been trying to say.

      If you mean the data from the Phonics Check, are you saying that identifying the difficulties of children who failed the check will tell us whether they are likely to be able to decode or not? Even if we find that with appropriate support they do learn to decode efficiently, we are still left with the issue of reading comprehension.

      Also, who will analyse the data? The technical report seems to be saying that it will be left up to schools what approaches they use with children who don’t reach the expected standard. This is just bizarre. What we have is a method that’s improved the decoding of some children (hurrah!), but hasn’t yet resolved the problem of the stubborn 10%+ that everyone complains about.

      Troubleshooting

      DS: If what schools and teachers believe were sound, all kids, except those excluded for physical reasons, would be reading all 40 items on the Check with allowance for “little-kid carelessness.” Those that Yr 1 teachers missed, Yr 2 teachers should be able to pick up.

      Agreed. But as far as I’m aware, that’s never happened whatever method teachers have used. At what point would you look at the data and say ‘this isn’t down to poor instruction, there’s something else involved’? Or would you go on blaming instruction ad infinitum?

  16. Comment on Sue’s comment of Feb 26.
    Even if we find that with appropriate support they do learn to decode efficiently, we are still left with the issue of reading comprehension.
    Ah yes. And that’s VERY good news. “Reading comprehension” is a reified construct that has tangled up EdLand for a century. In written communication as in spoken communication, “meaning” is what it’s all about. But in spoken communication, the ambiguity is readily resolved. We recognize that there are different audiences, different topics, different treatments of topics, different attitudes, and so on. In reading instruction, all of these considerations get lumped into one reified construct, and are turned into “wars.”

    Understanding/comprehension of a “dyslexic” is easy to troubleshoot. Here’s the “test/instrument” and protocol.
    1. Put a text in front of the person, say “Read this and tell me what it says.”
    You’ll get different answers, depending upon the assets the examinee brings to the table and which text you use. The analysis of the results will tell you why the examinee is who/what the individual is, and why the individual does and does not do what the individual does. That’s allyagotta do. The “diagnosis” is simple. It’s the “prescription” that gets complicated, but it too can feasibly be resolved by further troubleshooting.

    What we have is a method that’s improved the decoding of some children (hurrah!), but hasn’t yet resolved the problem of the stubborn 10%+ that everyone complains about.
    I’d put it a bit differently by using the “evidence” of the SP results. What we have [SP] is a method that is being variously implemented by schools and teachers, most of whom oppose the method without further instructional augmentation. They also believe they “already know” the results of the test, which is a “good enough” test of whether an individual can handle the Alphabetic Code – the link between written and spoken language. The Screening Check can be used to troubleshoot the etiology of any individual who has a “reading problem.”

    At what point would you look at the data and say ‘this isn’t down to poor instruction, there’s something else involved’? Or would you go on blaming instruction ad infinitum?
    I would self-correct the instant there was any evidence, “It’s not instruction.” We [generic] find ourselves in a situation where we’ve blamed the students, their parentage, and their culture pretty close to infinitum. Since schooling is a matter of instructing individuals, not of breeding individuals, focusing on instruction first seems sensible to me.

    • Ooops. The comment is on Sue’s Feb 28 comment, not Feb 26. I comprehend the calendar but don’t fully understand the mechanics of formatting to keep the commenting from getting tangled in form.

    • Me: Even if we find that with appropriate support they do learn to decode efficiently, we are still left with the issue of reading comprehension.

      DS: Ah yes. And that’s VERY good news… Understanding/comprehension of a “dyslexic” is easy to troubleshoot. Here’s the “test/instrument” and protocol. 1. Put a text in front of the person, say “Read this and tell me what it says.” You’ll get different answers, depending upon the assets the examinee brings to the table and which text you use. The analysis of the results will tell you why the examinee is who/what the individual is, and why the individual does and does not do what the individual does. That’s allyagotta do. The “diagnosis” is simple. It’s the “prescription” that gets complicated, but it too can feasibly be resolved by further troubleshooting.

      I’m not suggesting that ‘reading comprehension’ is a single ‘thing’ that people can or can’t do; obviously there are many reasons why someone might not be able to understand a text. Are you saying that people who question SP because it has a limited impact on reading comprehension do think it’s a single thing?

      Me: What we have is a method that’s improved the decoding of some children (hurrah!), but hasn’t yet resolved the problem of the stubborn 10%+ that everyone complains about.

      DS: I’d put it a bit differently by using the “evidence” of the SP results. What we have [SP] is a method that is being variously implemented by schools and teachers, most of whom oppose the method without further instructional augmentation. They also believe they “already know” the results of the test, which is a “good enough” test of whether an individual can handle the Alphabetic Code – the link between written and spoken language. The Screening Check can be used to troubleshoot the etiology of any individual who has a “reading problem.”

      It can; doesn’t mean it will.

      Me: At what point would you look at the data and say ‘this isn’t down to poor instruction, there’s something else involved’? Or would you go on blaming instruction ad infinitum?

      DS: I would self-correct the instant there was any evidence, “It’s not instruction.” We [generic] find ourselves in a situation where we’ve blamed the students, their parentage, and their culture pretty close to infinitum. Since schooling is a matter of instructing individuals, not of breeding individuals, focusing on instruction first seems sensible to me.

      But what would that evidence look like?

      Teachers seem to have come in for a fair amount of flak over the years. Just because focussing on instruction seems sensible, it doesn’t mean educational achievement is the outcome primarily of instruction. All the evidence points to multiple factors being involved that teachers can’t always compensate for.

  17. Comment on Nemocracy’s comment of Feb 28 [I think I got the date right this time.]

    Here’s the thing. There are all kinds of “issues” with the “Yr 1 Phonics Check.” Every word in the phrase can be contested. But if we examine the EVIDENCE to date, the instrument is a psychometrically sound instrument that is “good enough” to screen individuals who have not been taught how to handle the Alphabetic Code. Further, the instrument provides a probe for sound investigation of how best to instruct students – those who pass the screen and those who don’t. This holds for individuals of all ages. Further, further, this investigation can lead to the untangling of confusion that has tangled schooling in the English Speaking World for at least a century and is currently a general political issue nationally in the most prominent of these countries.

    All I’m trying to say is: “More examination of the EVIDENCE of the Screening Check is needed.”

    Specific to your argument: it would be very difficult for “people” to accept that pseudo-words should be used exclusively to screen students who should be reading only “real texts.” As it turns out, the results of the Screening Check are the same with real words as they are with pseudo-words. The evidence for that is in the item difficulties. The cumulative evidence for the Screening Check has regularly strengthened its psychometric credentials and integrity. Earlier “possible flaws” have turned out to be pseudo-issues. Allyagotta do is look at the data, accept the issues, and move on. That’s the essence of science and technology.

    • In reply to Dick:

      One can only get evidence for use (or not) of a specific practice by isolating that practice from any others that can come into play. In order to check a child’s use of phonics you have to present them with items for which they have to use phonics to have any chance of getting them correct. This is what the authors of the technical report mean when they talk about the pseudowords being the ‘purest’ items. You haven’t actually dealt with this point, Dick. You have talked about the check being psychometrically sound. Well, how can that be so when the design doesn’t ensure that the check purely checks phonics?

      That is the declared purpose of the check. You put it a little differently, suggesting it is to identify which children have not been taught to handle the alphabetic code. Either way it really doesn’t tell anyone much about the value of alphabetic code knowledge, how much of a contribution such knowledge makes to reading skill, and whether other approaches might suit children who have problems *learning* via the alphabetic code (one can teach a skill for ever without it being learnt). At every turn this initiative fails to justify itself. Every reading session is a chance for a teacher to check how well a child is progressing with phonics and reading, and how well they are responding to the present emphasis on phonics and/or using other strategies in support. Sadly the teacher’s privileged access to this knowledge is not recognised.

      Yes, the scores in the real words may, up to this point, have echoed the scores in the pseudowords. What does that tell us? If I wanted to be facetious I could argue that it shows that pupils have to be good at remembering real words in order to be able to decode pseudowords. This simple, contingent fact does not reliably predict the future or identify a necessary link. Unless we know what strategy children are using with the real words the overall score does not give a full picture of their phonic skills. A psychometrically sound test cannot be based on such shaky foundations.

      Thank you, however, for highlighting the reason the real words are included in the check. You’re right. It’s a PR job. Were the check to include only pseudowords the limitations of phonics for reading real words and the inherent absurdity of the check would be more obvious. A nonword phonics check might be useful alongside other assessments, to explore (probe, if you like) what strategies a struggler is using, but it would not be necessary where children are progressing well in their reading.

      I don’t think we should ‘accept the issues’. The ‘issues’ are significant. You want the evidence to be examined, but the evidence is flawed. Paradoxically it would be an extremely easy task to ‘purify’ the evidence. Why isn’t this being done? If the scores on the real words pretty much correspond with the scores on the pseudowords, as you say, increasing the number of pseudowords and getting rid of the real ones would not impair the comparisons of year on year results. Aah, perhaps it would impair the credibility of the check in the minds of the general public.
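
      If item-level results were ever released, the ‘echoing’ of real-word and pseudo-word scores would be easy to examine directly. A minimal sketch, in Python, under that assumption (the file and column names below are invented for illustration):

          # Hypothetical sketch (Python/pandas): how closely do real-word and
          # pseudo-word subscores track each other per child, and who scores much
          # higher on real words? File and column names are invented for illustration.
          import pandas as pd

          df = pd.read_csv("check_item_scores.csv")  # hypothetical file
          pseudo = df[[f"pseudo_{i:02d}" for i in range(1, 21)]].sum(axis=1)
          real = df[[f"real_{i:02d}" for i in range(1, 21)]].sum(axis=1)

          print("Correlation (real vs pseudo subscore):", round(real.corr(pseudo), 2))

          # A large positive gap (real well above pseudo) is consistent with real
          # words being recognised from memory rather than decoded.
          gap = real - pseudo
          print(gap.describe().round(2))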

      • Sorry for the delay in responding. I lost track of the thread.

        It’s true that a better measure “coulda” been developed. And it’s also true that “teachers already know.” And so on. Lots of “shoulda, couldas,” but these beg what “is.”

        The whole point is that the Screening Check is psychometrically “good enough” to screen children who need further instruction in how to handle the Alphabetic Code.
        Use of the Check to date has identified children who do not pass the screen. The data have been analyzed at the national level and at the LEA level. The results to date are that gains are being made at the national level over the years. At the LEA level, LEAs vary in their instructional accomplishments, both at the end of Yr 1 and at Yr 2. All indications are that the differences stem from the instructional practices of schools and teachers rather than from the biosocial characteristics of school settings and students. The data for pursuing these indications have largely been collected but have not been analyzed. The interests of children, parents, and citizenry would be served by the low-cost analysis. The analysis will also contribute to clarifying the matters of “Phonics” and “Functional Literacy” – the point of the thread.

        That’s all I’ve been trying to say. Three months later, the situation hasn’t changed.
