genes, environment and behaviour

There was considerable kerfuffle on Twitter last week following a blog post by David Didau entitled ‘What causes behaviour?’  The ensuing discussion resulted in a series of five further posts from David culminating in an explanation of why his views weren’t racist. I think David created problems for himself through lack of clarity about gene-environment interactions and through ambiguous wording. Here’s my two-pennyworth.

genes

Genes are regions of DNA that hold information about (mainly) protein production. As far as we know, that’s all they do. The process of using this information to produce proteins is referred to as genetic expression.

environment

The context in which genes are expressed. Before birth, the immediate environment in which human genes are expressed is limited, and is largely a chemical and biological one. After birth, the environment gets more complex as Urie Bronfenbrenner demonstrated.  Remote environmental effects can have a significant impact on immediate ones. Whether a mother smokes or drinks is influenced by genetic and social factors, and the health of both parents is often affected by factors beyond their control.

epigenetics

Epigenetic factors are environmental factors that can directly change the expression of genes; in some cases genes can be effectively ‘switched’ on or off. Some epigenetic changes can be inherited.

behaviour

Behaviour is a term that’s been the subject of much discussion by psychologists. There’s a useful review by Levitis et al. here. A definition of behaviour the authors decided reflected consensus is:

Behaviour is: the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal and/or external stimuli, excluding responses more easily understood as developmental changes.

traits and states

Trait is a term used to describe a consistent pattern in behaviour, personality etc. State is used to describe transient behaviours or feelings.

David Didau’s argument

David begins with the point that behavioural traits in adulthood are influenced far more by genes than by shared environments during childhood. He says: “Contrary to much popular wishful thinking, shared environmental effects like parenting have (almost) no effect on adults’ behaviour, characteristics, values or beliefs.* The reason we are like our parents and siblings is because we share their genes. *Footnote: There are some obvious exceptions to this. Extreme neglect or abuse before the age of 5 will likely cause permanent developmental damage as will hitting someone in the head with a hammer at any age.”

In support he cites a paper by Thomas Bouchard, a survey of research (mainly twin studies) about genetic influence on psychological traits: personality, intelligence, psychological interests, psychiatric illnesses and social attitudes. David rightly concludes that it’s futile for schools to try to teach ‘character’ because character (whatever you take it to mean) is a stable trait.

traits, states and outcomes

But he also refers to children’s behaviour in school, and behaviour encompasses traits and states: stable patterns of behaviour and one-off specific behaviours. For David, school expectations can “mediate these genetic forces”, but only within school; “an individual’s behaviour will be, for the most part, unaffected by this experience when outside the school environment”.

He also refers to “how we turn out”, and how we turn out can be affected by one-off, even uncharacteristic behaviours (on the part of children, parents and teachers and/or government).   One-off actions can have a hugely beneficial or detrimental impact on long-term outcomes for children.

genes, environment and interactions

It’s easy to get the impression from the post that genetic influences (David calls them genetic ‘forces’ – I don’t know what that means) and environmental influences are distinct and act in parallel. He refers, for example, to “genetic causes for behaviour as opposed to environmental ones” (my emphasis), but concedes “there’s definitely some sort of interaction between the two”.

Obviously, genes and environment influence behaviour. What’s emerged from research is that the interactions between genetic expression and environmental factors are pretty complex. From conception, gene expression produces proteins; cells form, divide and differentiate, the child’s body develops and grows. Genetic expression obviously plays a major role in pre-natal development, but the proteins expressed by the genes very quickly form a complex biochemical, physiological and anatomical environment that impacts on the products of later genetic expression. This environment is internal to the mother’s body, but external environmental factors are also involved in the form of nutrients, toxins, activities, stressors etc. After birth, genes continue to be expressed, but the influence of the external environment on the child’s development increases.

Three points to bear in mind:

1) A person’s genome remains pretty stable throughout their lifetime.

2) The external environment doesn’t remain stable – for most people it changes constantly. Some of the changes will counteract others; rest and good nutrition can overcome the effects of illness, beneficial events can mitigate the impact of adverse ones. So it’s hardly surprising that shared childhood environments have comparatively little, if any, effect on adult traits.

3) Genetic and environmental influences are difficult to untangle because of their complex interactions from the get-go. Annette Karmiloff-Smith* highlights the importance of gene-environment-behaviour interactions here.

Clearly, if you’re a kid with drive, enthusiasm and aspirations, but grow up on a sink estate in an area of high social and economic deprivation where the only wealthy people with high social status are drug dealers, you’re far more likely to end up with rather dodgy career prospects than a child with similar character traits who lives in a leafy suburb and attends Eton. (I’ve blogged elsewhere about the impact of life events on child development and long-term outcomes, in a series of posts starting here.)

In other words, parents and teachers might have little influence over behavioural traits, but they can make a huge difference to the outcomes for a child, by equipping them (or not) with the knowledge and strategies they need to make the most of what they’ve got. From other things that David has written, I don’t think he’d disagree.  I think what he is trying to do in this post is to put paid to the popular idea that parents (and teachers) have a significant long-term influence on children’s behavioural traits.  They clearly don’t.  But in this post he doesn’t make a clear distinction between behavioural traits and outcomes. I suggest that’s one reason his post resulted in so much heated discussion.

genes, environment and the scientific method

I’m not sure where his argument goes after he makes the point about character education. He goes on to suggest that anyone who queries his conclusions about the twin studies is dismissing the scientific method, which seems a bit of a stretch, and finishes the post with a series of ‘empirical questions’ that appear to reflect some pet peeves about current educational practices, rather than testing hypotheses about behaviour per se.

So it’s not surprising that some people got hold of the wrong end of the stick. The framework of traits, states and outcomes is an important one, and I wish that, instead of going off at tangents, David had explored it in more detail.

*If you’re interested,  Neuroconstructivism by Mareschal et al and Rethinking Innateness by Elman et al. are well worth reading on gene-environment interactions during children’s development.  Not exactly easy reads, but both reward effort.

references

Bouchard, T. (2004).  Genetic influence on human psychological traits.  Current Directions in Psychological Science, 13, 148-151.

Elman, J. L., Bates, E.A., Johnson, M., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking Innateness: A Connectionist Perspective on Development.  Cambridge, MA: MIT Press.

Karmiloff-Smith A (1998). Development itself is the key to understanding developmental disorders. Trends in Cognitive Sciences, 2, 389-398.

Levitis, D.A., Lidicker, W.Z., & Freund, G. (2009). Behavioural biologists do not agree on what constitutes behaviour. Animal Behaviour, 78 (1), 103-110.

Mareschal, D., Johnson, M., Sirois, S., Spratling, M.W., Thomas, M.S.C. & Westermann, G. (2007). Neuroconstructivism: How the brain constructs cognition, Vol. I. Oxford University Press.

All snowflakes are unique: comments on ‘What every teacher needs to know about psychology’ (David Didau & Nick Rose)

This book and I didn’t get off to a good start. The first sentence of Part 1 (Learning and Thinking) raised a couple of red flags: “Learning and thinking are terms that are used carelessly in education.” The second sentence raised another one: “If we are to discuss the psychology of learning then it makes sense to begin with precise definitions.”   I’ll get back to the red flags later.

Undeterred, I pressed on, and I’m glad I did. Apart from the red flags and a few quibbles, I thought the rest of the book was great.  The scope is wide and the research is up-to-date but set in historical context. The three parts – Learning and Thinking, Motivation and Behaviour, and Controversies – provide a comprehensive introduction to psychology for teachers or, for that matter, anyone else. Each of the 26 chapters is short, clearly focussed, has a summary “what every teacher needs to know about…”, and is well-referenced.   The voice is right too; David Didau and Nick Rose have provided a psychology-for-beginners, written for grown-ups.

The quibbles? References that were in the text but not in the references section, or vice versa. A rather basic index. And I couldn’t make sense of the example on p.193 about energy conservation, until it dawned on me that a ‘re’ was missing from ‘reuse’. All easily addressed in a second edition, which this book deserves. A bigger quibble was the underlying conceptual framework adopted by the authors. This is where the red flags come in.

The authors are clear about why they’ve written the book and what they hope it will achieve. What they are less clear about is the implicit assumptions they make as a result of their underlying conceptual framework. I want to look at three implicit assumptions: about precise definitions, the school population and psychological theory.

precise definitions

The first two sentences of Part 1 are:

“Learning and thinking are terms that are used carelessly in education. If we are to discuss the psychology of learning then it makes sense to begin with precise definitions.” (p.14)

What the authors imply (or at least what I inferred) is that there are precise definitions of learning and thinking. They reinforce their point by providing some. Now, ‘carelessly’ is a somewhat pejorative term. It might be fair to use it if there is a precise definition of learning and there is a precise definition of thinking, but people just can’t be bothered to use them. But if there isn’t a single precise definition of either…

I’d say terms such as ‘learning’, ‘thinking’, ‘teaching’, ‘education’ etc. (the list is a long one) are used loosely rather than carelessly. ‘Learning’ and ‘thinking’ are constructs that are more complex and fuzzier than, say, metres or molar solutions. In marked contrast to the way ‘metre’ and ‘molar solution’ are used, people use ‘learning’ and ‘thinking’ to refer to different things in different contexts. What they’re referring to is usually made clear by the context. For example, most people would consider it reasonable to talk about “what children learn in schools” even if much of the material taught in schools doesn’t meet Didau and Rose’s criterion of retention, transfer and change (p.14). Similarly, it would be considered fair use of the word ‘thinking’ for someone to say “I was thinking about swimming”, if what they were referring to was pleasant mental images of themselves floating in the Med, rather than the authors’ definition of a conscious, active, deliberative, cognitive “struggle to get from A to B”.

Clearly, there are situations where context isn’t enough and precise definitions of terms such as ‘learning’ and ‘thinking’ are required; empirical research is a case in point. And researchers in most knowledge domains (maybe education is an exception) usually address this requirement by stating explicitly how they have used particular terms: “by learning we mean…” or “we use thinking to refer to…”. Or they avoid the use of umbrella terms entirely. In short, for many terms there isn’t one precise definition. The authors acknowledge this when they refer to “two common usages of the term ‘thinking’”, but still try to come up with one precise definition (p.15).

Why does this matter? It matters because if it’s assumed there is a precise definition for labels representing multi-faceted, multi-component processes, that people use in different ways in different circumstances, a great deal of time can be wasted arguing about what that precise definition is. It would make far more sense simply to be explicit how we’re using the term for a particular purpose, or exactly which facet or component we’re referring to.

Exactly this problem arises in the discussion about restorative justice programmes (p.181). The authors complain that restorative justice programmes are “difficult to define and frequently implemented under a variety of different names…” Those challenges could be avoided by not trying to define restorative justice at all, but by people being explicit about how they use the term – or by using different terms for different programmes.

Another example is ‘zero tolerance’ (p.157). This term is usually used to refer to strict, inflexible sanctions applied in response to even the most minor infringements of rules; the authors cite as examples schools using ‘no excuses’ policies. However, zero tolerance is also associated with the broken windows theory of crime (Wilson & Kelling, 1982); that if minor misdemeanours are overlooked, antisocial behaviour will escalate. The broken windows theory does not advocate strict, inflexible sanctions for minor infringements, but rather a range of preventative measures and proportionate sanctions to avoid escalation. Historically, evidence for the effectiveness of both approaches is mixed, so the authors are right to be cautious in their conclusions.

What I want to emphasise is that there isn’t a single precise definition of learning, thinking, restorative justice, zero tolerance, or many other terms used in the education system, so trying to develop one is like trying to define apples-and-oranges. To avoid going down that path, we simply need to be explicit about what we’re actually talking about. As Didau and Rose themselves point out, “simply lumping things together and giving them the same name doesn’t actually make them the same” (p.266).

all snowflakes are unique

Another implicit assumption emerges in chapter 25, about individual differences:

Although it’s true that all snowflakes are unique, this tells us nothing about how to build a snowman or design a better snowplough. For all their individuality, useful applications depend on the underlying physical and chemical similarities of snowflakes. The same applies to teaching children. Of course all children are unique…however, for all their individuality, any application of psychology to teaching is typically best informed by understanding the underlying similarities in the way children learn and develop, rather than trying to apply ill-fitting labels to define their differences. (p. 254)

For me, this analogy raised the question of what the authors see as the purpose of education, and it completely ignores the nomothetic/idiographic (tendency to generalise vs tendency to specify) tension that’s been a challenge for psychology since its inception. It’s true that education contributes to building communities of individuals who have many similarities, but our evolution as a species, and our success at colonising such a wide range of environments, hinges on our differences. And the purpose of education doesn’t stop at the community level. It’s also about the education of individuals; this is recognised in the 1996 Education Act (borrowing from the 1944 Education Act), which expects a child’s education to be suitable to them as an individual – for the simple reason that if it isn’t suitable, it won’t be effective. Children are people who are part of communities, not units to be built into an edifice of their teachers’ making, or to be shovelled aside if they get in the way of the education system’s progress.

what’s the big idea?

Another major niggle for me was how the authors evaluate theory. I don’t mean the specific theories tested by the psychological research they cite; that would be beyond the scope of the book. Also, if research has been peer-reviewed and there’s no huge controversy over it, there’s no reason why teachers shouldn’t go ahead and apply the findings. My concern is about the broader psychological theories that frame psychologists’ thinking and influence what research is carried out (or not) and how. Didau and Rose demonstrate they’re capable of evaluating theoretical frameworks, but their evaluation looked a bit uneven to me.

For example, they note “there are many questions” relating to Jean Piaget’s theory of cognitive development (pp.221-223), but BF Skinner’s behaviourist model (pp.152-155) has been “much misunderstood, and often unfairly maligned”. Both observations are true, but because there are pros and cons to each of the theories, I felt the authors’ biases were showing. And David Geary’s somewhat speculative model of biologically primary and secondary knowledge and ability is cited uncritically at least a dozen times, overlooking the controversy surrounding two of its major assumptions – modularity and intelligence. The authors are up-front about their “admittedly biased view”.

educating the evolved mind: education

The previous two posts have been about David Geary’s concepts of primary and secondary knowledge and abilities: evolved minds and intelligence. This post is about how Geary applies his model to education in Educating the Evolved Mind.

There’s something of a mismatch between the cognitive and educational components of Geary’s model.  The cognitive component is a range of biologically determined functions that have evolved over several millennia.  The educational component is a culturally determined education system cobbled together in a somewhat piecemeal and haphazard fashion over the past century or so.

The education system Geary refers to is typical of the schooling systems in developed industrialised nations, and according to his model, focuses on providing students with biologically secondary knowledge and abilities. Geary points out that many students prefer to focus on biologically primary knowledge and abilities such as sports and hanging out with their mates (p.52). He recognises they might not see the point of what they are expected to learn and might need its importance explained to them in terms of social value (p.56). He suggests ‘low achieving’ students especially might need explicit, teacher-driven instruction (p.43).

You’d think, if cognitive functions have been biologically determined through thousands of years of evolution, that it would make sense to adapt the education system to the cognitive functions, rather than the other way round. But Geary doesn’t appear to question the structure of the current US education system at all; he accepts it as a given. I suggest that, given how human cognition works, it might be worth taking a step back and re-thinking the education system itself in the light of the following principles:

1. communities need access to expertise

Human beings have been ‘successful’, in evolutionary terms, mainly due to our use of language. Language means it isn’t necessary for each of us to learn everything for ourselves from scratch; we can pass on information to each other verbally. Reading and writing allow knowledge to be transmitted across time and space. The more knowledge we have as individuals and communities, the better our chances of survival and a decent quality of life.

But, although it’s desirable for everyone to be a proficient reader and writer and to have an excellent grasp of collective human knowledge, that’s not necessary in order for each of us to have a decent quality of life. What each community needs is a critical mass of people with good knowledge and skills.

Also, human knowledge is now so vast that no one can be an expert on everything; what’s important is that everyone has access to the expertise they need, when and where they need it.  For centuries, communities have facilitated access to expertise by educating and training experts (from carpenters and builders to doctors and lawyers) who can then share their expertise with their communities.

2. education and training is not just for school

Prior to the development of mass education systems, most children’s and young people’s education and training would have been integrated into the communities in which they lived. They would understand where their new knowledge and skills fitted into the grand scheme of things and how it would benefit them, their families and others. But schools in mass education systems aren’t integrated into communities. The education system has become its own specialism. Children and young people are withdrawn from their community for many hours to be taught whatever knowledge and skills the education system thinks fit. The idea that good exam results will lead to good jobs is expected to provide sufficient motivation for students to work hard at mastering the school curriculum.  Geary recognises that it doesn’t.

For most of the millennia during which cognitive functions have been developing, children and young people were actively involved in producing food or making goods, and their education and training was directly related to those tasks. Now it isn’t. I’m not advocating a return to child labour; what I am advocating is ensuring that what children and young people learn in school is directly and explicitly related to life outside school.

Here’s an example: A highlight of the Chemistry O level course I took many years ago was a visit to the nearby Avon (make-up) factory. Not only did we each get a bag of free samples, but in the course of an afternoon the relevance of all that rote learning of industrial applications, all that dry information about emulsions, fat-soluble dyes, anti-fungal additives etc. suddenly came into sharp focus. In addition, the factory was a major local employer and the Avon distribution network was very familiar to us, so the whole end-to-end process made sense.

What’s commonly referred to as ‘academic’ education – fundamental knowledge about how the world works – is vital for our survival and wellbeing as a species. But knowledge about how the world works is also immensely practical. We need to get children and young people out, into the community, to see how their communities apply knowledge about how the world works, and why it’s important. The increasing emphasis in education in the developed world on paper-and-pencil tests, examination results and college attendance is moving the education system in the opposite direction, away from the practical importance of extensive, robust knowledge to our everyday lives.  And Geary appears to go along with that.

3. (not) evaluating the evidence

Broadly speaking, Geary’s model has obvious uses for teachers. There’s considerable supporting evidence for a two-phase model of cognition, ranging from Fodor’s distinction between specialised, stable and general, unstable processes, to the System 1/System 2 model Daniel Kahneman describes in Thinking, Fast and Slow. Whether the difference between Geary’s biologically primary and secondary knowledge and abilities is as clear-cut as he claims is a different matter.

It’s also well established that in order to successfully acquire the knowledge usually taught in schools, children need the specific abilities that are measured by intelligence tests; that’s why the tests were invented in the first place. And there’s considerable supporting evidence for the reliability and predictive validity of intelligence tests. They clearly have useful applications in schools. But it doesn’t follow that what we call intelligence or g (never mind gF or gC) is anything other than a construct created by the intelligence test.

In addition, the fact that there is evidence that supports Geary’s claims doesn’t mean all his claims are true. There might also be considerable contradictory evidence; in the case of Geary’s two-phase model the evidence suggests the divide isn’t as clear-cut as he suggests, and the reification of intelligence has been widely critiqued. Geary mentions the existence of ‘vigorous debate’ but doesn’t go into details and doesn’t evaluate the evidence by actually weighing up the pros and cons.

Geary’s unquestioning acceptance of the concepts of modularity, intelligence and education systems in the developed world, increases the likelihood that teachers will follow suit and simply accept Geary’s model as a given. I’ve seen the concepts of biologically primary and secondary knowledge and abilities, crystallised intelligence (gC) and fluid intelligence (gF), and the idea that students with low gF who struggle with biologically secondary knowledge just need explicit direct instruction, all asserted as if they must be true – presumably because an academic has claimed they are and cited evidence in support.

This absence of evaluation of the evidence is especially disconcerting in anyone who emphasises the importance of teachers becoming research-savvy and developing evidence-based practice, or who posits models like Geary’s in opposition to the status quo. The absence of evaluation is also at odds with the oft-cited requirement for students to acquire robust, extensive knowledge about a subject before they can understand, apply, analyse, evaluate or use it creatively. That requirement applies only to school children, it seems.

references

Fodor, J (1983).  The modularity of mind.  MIT Press.

Geary, D. (2007). Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. In J.S. Carlson & J.R. Levin (Eds.), Educating the evolved mind. Information Age Publishing.

Kahneman, D (2012).  Thinking, fast and slow.   Penguin.

evolved minds and education: intelligence

The second vigorously debated area that Geary refers to in Educating the Evolved Mind is intelligence. In the early 1900s the psychologist Charles Spearman developed a statistical technique called factor analysis. When he applied it to measures of a range of cognitive abilities he found that they were all positively correlated, and concluded that there must be some underlying common factor, which he called general intelligence (g). General intelligence was later subdivided into crystallised intelligence (gC) resulting from experience, and fluid intelligence (gF) representing a ‘biologically-based ability to acquire skills and knowledge’ (p.25). The correlation has been replicated many times and is reliable – at the population level, at least. What’s also reliable is the finding that intelligence, as Robert Plomin puts it, “is one of the best predictors of important life outcomes such as education, occupation, mental and physical health and illness, and mortality”.
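
To get a feel for Spearman’s reasoning, here’s a minimal simulated sketch (all numbers invented; nothing to do with Spearman’s actual data). If a set of test scores all reflect a single latent factor plus noise, every pairwise correlation comes out positive and one factor accounts for most of the shared variance:

```python
# Minimal sketch of the 'positive manifold' (invented loadings and data).
# Five hypothetical cognitive tests all load on one latent factor, g.
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                          # hypothetical test-takers
g = rng.normal(size=n)                            # latent general factor
loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65])  # assumed factor loadings
scores = np.outer(g, loadings) + rng.normal(scale=0.5, size=(n, 5))

corr = np.corrcoef(scores.T)              # every off-diagonal entry is positive
eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, largest first
print(corr.round(2))
print("share of variance on first factor:", round(eigvals[0] / eigvals.sum(), 2))
```

Spearman’s inference, of course, runs the other way – from observed correlations to a hypothesised g – which is exactly where the construct-validity questions discussed below come in.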

The first practical assessment of intelligence was developed by French psychologist Alfred Binet, commissioned by his government to devise a way of identifying the additional needs of children in need of remedial education. Binet first published his methods in 1903, the year before Spearman’s famous paper on intelligence. The Binet-Simon scale (Theodore Simon was Binet’s assistant) was introduced to the US and translated into English by Henry H Goddard. Goddard had a special interest in ‘feeble-mindedness’ and used a version of Binet’s scale for a controversial screening test for would-be immigrants. The Binet-Simon scale was standardised for American children by Lewis Terman at Stanford University and published in 1916 as the Stanford-Binet test. Later, the concept of intelligence quotient (IQ – mental age divided by chronological age and multiplied by 100) was introduced, and the rest, as they say, is history.
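
In case the arithmetic of the original ratio IQ isn’t obvious, here it is written out, with an invented example:

$$\mathrm{IQ} = \frac{\text{mental age}}{\text{chronological age}} \times 100, \qquad \text{e.g. } \frac{12}{10} \times 100 = 120$$

So a ten-year-old performing at the level of a typical twelve-year-old scores 120, and a child performing exactly at age level always scores 100.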

what’s the correlation?

Binet’s original scale was used to identify specific cognitive difficulties in order to provide specific remedial education. Although it has been superseded by tests such as the Wechsler Intelligence Scale for Children (WISC), what all intelligence tests have in common is that they contain a number of sub-tests that test different abilities. The 1905 Binet-Simon scale had 30 sub-tests and the WISC-IV has 15. Although the scores in sub-tests tend to be strongly correlated, Early Years teachers, Educational Psychologists and special education practitioners will be familiar with the child with the ‘spiky profile’ who has high scores on some sub-tests but low ones on others. Their overall IQ might be average, but that can mask considerable variation in cognitive sub-skills. Deidre Lovecky, who runs a resource centre in Providence, Rhode Island for gifted children with learning difficulties, reports in her book Different Minds having to essentially pick ‘n’ mix sub-tests from different assessment instruments because children were scoring at ceiling on some sub-tests and at floor on others. In short, Spearman’s correlation might be true at the population level, but it doesn’t hold for some individuals. And education systems have to educate individuals.
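
To make the masking concrete, here are two invented sub-test profiles. The means are identical, so a composite score treats the two children as equivalent, even though one profile has a fourteen-point spread:

```python
# Two hypothetical sets of sub-test scaled scores (population mean is 10).
# Same average, very different profiles - the composite masks the variation.
import statistics

flat  = [10, 10, 9, 11, 10, 10, 10, 10, 11, 9]
spiky = [17, 3, 16, 4, 15, 5, 14, 6, 13, 7]

for name, profile in [("flat", flat), ("spiky", spiky)]:
    print(name, "mean:", statistics.mean(profile),
          "spread:", max(profile) - min(profile))
```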

is it valid?

A number of issues have been vigorously debated in relation to intelligence. One is its construct validity. There’s no doubt intelligence tests measure something – but whether that something is a single biologically determined entity is another matter. We could actually be measuring several biologically determined functions that are strongly dependent on each other. Or some biologically determined functions interacting with culturally determined ones. As the psychologist Edwin Boring famously put it way back in 1923, “intelligence is what the tests test”; i.e. intelligence is whatever the tests test.

is it cultural?

Another contentious issue is the cultural factors implicit in the tests.  Goddard attempted to measure the ‘intelligence’ of European immigrants using sub-tests that included items culturally specific to the USA.  Stephen Jay Gould goes into detail in his criticism of this and other aspects of intelligence research in his book The Mismeasure of Man.  (Gould himself has been widely criticised so be aware you’re venturing into a conceptual minefield.)  You could just about justify culture-specificity in tests for children who had grown up in a particular culture, on the grounds that understanding cultural features contributed to overall intelligence. But there are obvious problems with the conclusions that can be drawn about gF in the case of children whose cultural background might be different.

I’m not going to venture into bell-curve territory because the vigorous debate in that area is due to how intelligence tests are applied, rather than the content of the tests. Suffice it to say that much of the controversy about application has arisen because of assumptions made about what intelligence tests tell us. The Wikipedia discussion of Herrnstein & Murray’s book is a good starting point if you’re interested in following this up.

multiple intelligences?

There’s little doubt that intelligence tests are valid and reliable measures of the core abilities required to successfully acquire the knowledge and skills taught in schools in the developed industrialised world; knowledge and skills that are taught in schools because they are valued in the developed industrialised world.

But as Howard Gardner points out in his (also vigorously debated) book Frames of mind: The theory of multiple intelligences, what’s considered to be intelligence in different cultures depends on what abilities are valued by different cultures. In the developed industrialised world, intelligence is what intelligence tests measure. If, on the other hand, you live on a remote Pacific Island and are reliant for your survival on your ability to catch fish and navigate across the ocean using only the sun, moon and stars for reference, you might value other abilities. What would those abilities tell you about someone’s ‘intelligence’? Many people place a high value on the ability to kick a football, sing in tune or play stringed instruments; what do those abilities tell you about ‘intelligence’?

it’s all about the constructs

If intelligence tests are a good measure of the abilities necessary for learning what’s taught in school, then fine, let’s use them for that purpose. What we shouldn’t be using them for is drawing conclusions about a speculative entity we’ve named ‘intelligence’. Or assuming, on the basis of those tests, that we can label some people more or less ‘intelligent’ than others, as Geary does, e.g.:

“Intelligent individuals identify and apprehend bits of social and ecological information more easily and quickly than do other people” (p.26)

and

“Individuals with high IQ scores learned the task more quickly than their less-intelligent peers” (p.59)

What concerned me most about Geary’s discussion of intelligence wasn’t what he had to say about accuracy and speed of processing, or about the reliability and predictive validity of intelligence tests, which are pretty well supported. It was the fact that he appears to accept the concepts of g, gC and gF without question. And the ‘vigorous debate’ that’s raged for over a century is reduced to ‘details to be resolved’ (p.25), which doesn’t quite do justice to the furore over the concept, or the devastation resulting from the belief that intelligence is a ‘thing’. Geary’s apparently unquestioning acceptance of intelligence brings me to the subject of the next post: his model of the education system.

references

Gardner, H (1983). Frames of Mind: The theory of multiple intelligences. Fontana (1993).

Geary, D. (2007). Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. In J.S. Carlson & J.R. Levin (Eds.), Educating the evolved mind. Information Age Publishing.

Gould, SJ (1996).  The Mismeasure of Man.  WW Norton.

Lovecky, D V (2004).  Different minds: Gifted children with AD/HD, Asperger Syndrome and other learning deficits.  Jessica Kingsley.

evolved minds and education: evolved minds

At the recent Australian College of Educators conference in Melbourne, John Sweller summarised his talk as follows:  “Biologically primary, generic-cognitive skills do not need explicit instruction.  Biologically secondary, domain-specific skills do need explicit instruction.”

[Image: slide from Sweller’s talk – biologically primary and biologically secondary cognitive skills]

This distinction was proposed by David Geary, a cognitive developmental and evolutionary psychologist at the University of Missouri. In a recent blogpost, Greg Ashman refers to a chapter by Geary that sets out his theory in detail.

If I’ve understood it correctly, here’s the idea at the heart of Geary’s model:

*****

The cognitive processes we use by default have evolved over millennia to deal with information (e.g. about predators, food sources) that has remained stable for much of that time. Geary calls these biologically primary knowledge and abilities. The processes involved are fast, frugal, simple and implicit.

But we also have to deal with novel information, including knowledge we’ve learned from previous generations, so we’ve evolved flexible mechanisms for processing what Geary terms biologically secondary knowledge and abilities. The flexible mechanisms are slow, effortful, complex and explicit/conscious.

Biologically secondary processes are influenced by an underlying factor we call general intelligence, or g, related to the accuracy and speed of processing novel information. We use biologically primary processes by default, so they tend to hinder the acquisition of the biologically secondary knowledge taught in schools. Geary concludes the best way for students to acquire the latter is through direct, explicit instruction.

*****

On the face of it, Geary’s model is a convincing one.   The errors and biases associated with the cognitive processes we use by default do make it difficult for us to think logically and rationally. Children are not going to automatically absorb the body of human knowledge accumulated over the centuries, and will need to be taught it actively. Geary’s model is also coherent; its components make sense when put together. And the evidence he marshals in support is formidable; there are 21 pages of references.

However, on closer inspection the distinction between biologically primary and secondary knowledge and abilities begins to look a little blurred. It rests on some assumptions that are the subject of what Geary terms ‘vigorous debate’. Geary does note the debate but, because he plumps for one view, he doesn’t evaluate the supporting evidence or go into detail about competing theories, so teachers unfamiliar with the domains in question could easily remain unaware of possible flaws in his model. In addition, Geary adopts a particular cultural frame of reference; essentially that of a developed, industrialised society that places high value on intellectual and academic skills. There are good reasons for adopting that perspective, and equally good reasons for not doing so. In a series of three posts, I plan to examine two concepts that have prompted vigorous debate – modularity and intelligence – and to look at Geary’s cultural frame of reference.

Modularity

The concept of modularity – that particular parts of the brain are dedicated to particular functions – is fundamental to Geary’s model. Physicians have known for centuries that some parts of the brain specialise in processing specific information. Some stroke patients, for example, have been reported as being able to write but no longer able to read (alexia without agraphia), to be able to read symbols but not words (pure alexia), or to be unable to recall some types of words (anomia). Language isn’t the only ability involving specialised modules; different areas of the brain are dedicated to processing the visual features of, for example, faces, places and tools.

One question that has long perplexed researchers is how modular the brain actually is. Some functions clearly occur in particular locations and in those locations only; others appear to be more distributed. In the early 1980s, Jerry Fodor tackled this conundrum head-on in his book The modularity of mind. What he concluded is that at the perceptual and linguistic level functions are largely modular, i.e. specialised and stable, but at the higher levels of association and ‘thought’ they are distributed and unstable.  This makes sense; you’d want stability in what you perceive, but flexibility in what you do with those perceptions.

Geary refers to the ‘vigorous debate’ (p.12) between those who lean towards specialised brain functions being evolved and modular, and those who see specialised brain functions as emerging from interactions between lower-level stable mechanisms. Although he acknowledges the importance of interaction and emergence during development (pp. 14, 18), you wouldn’t know that from Fig 1.2, showing his ‘evolved cognitive modules’.

At first glance, Geary’s distinction between stable biologically primary functions and flexible biologically secondary functions appears to be the same as Fodor’s stable/unstable distinction. But it isn’t.  Fodor’s modules are low-level perceptual ones; some of Geary’s modules in Fig. 1.2 (e.g. theory of mind, language, non-verbal behaviour) engage frontal brain areas used for the flexible processing of higher-level information.

Novices and experts; novelty and automation

Later in his chapter, Geary refers to research involving these frontal brain areas. Two findings are particularly relevant to his modular theory. The first is that frontal areas of the brain are initially engaged whilst people are learning a complex task, but as the task becomes increasingly automated, frontal area involvement decreases (p.59). Second, research comparing experts’ and novices’ perceptions of physical phenomena (p.69) showed that if there is a conflict between what people see and their current schemas, frontal areas of their brains are engaged to resolve the conflict. So, when physics novices are shown a scientifically accurate explanation, or when physics experts are shown a ‘folk’ explanation, both groups experience conflict.

In other words, what’s processed quickly, automatically and pre-consciously is familiar, overlearned information. If that familiar and overlearned information consists of incomplete and partially understood bits and pieces that people have picked up as they’ve gone along, errors in their ‘folk’ psychology, biology and physics concepts (p.13) are unsurprising. But it doesn’t follow that there must be dedicated modules in the brain that have evolved to produce those concepts.

If the familiar overlearned information is, in contrast, extensive and scientifically accurate, the ‘folk’ concepts get overridden and the scientific concepts become the ones that are accessed quickly, automatically and pre-consciously. In other words, the line between biologically primary and secondary knowledge and abilities might not be as clear as Geary’s model implies.  Here’s an example; the ability to draw what you see.

The eye of the beholder

Most of us are able to recognise, immediately and without error, the face of an old friend, the front of our own house, or the family car. However, if asked to draw an accurate representation of those items, even if they were in front of us at the time, most of us would struggle. That’s because the processes involved in visual recognition are fast, frugal, simple and implicit; they appear to be evolved, modular systems. But there are people who can draw accurately what they see in front of them; some can do so ‘naturally’, others train themselves to do so, and still others are taught to do so via direct instruction. It looks as if the ability to draw accurately straddles Geary’s biologically primary and secondary divide. The extent to which modules are actually modular is further called into question by recent research involving the fusiform face area (FFA).

Fusiform face area

The FFA is one of the visual processing areas of the brain. It specialises in processing information about faces. What wasn’t initially clear to researchers was whether it processed information about faces only, or whether faces were simply a special case of the type of information it processes. There was considerable debate about this until a series of experiments found that various experts used their FFA for differentiating subtle visual differences within classes of items as diverse as birds, cars, chess configurations, x-ray images, Pokémon, and objects named ‘greebles’ invented by researchers.

What these experiments tell us is that an area of the brain apparently dedicated to processing information about faces is also used to process information about modern artifacts with features that require fine-grained differentiation in order to tell them apart. They also tell us that modules in the brain don’t seem to draw a clear line between biologically primary information such as faces (no explicit instruction required), and biologically secondary information such as x-ray images or fictitious creatures (where initial explicit instruction is required).

What the experiments don’t tell us is whether the FFA evolved to process information about faces and is being co-opted to process other visually similar information, or whether it evolved to process fine-grained visual distinctions, of which faces happen to be the most frequent example most people encounter.

We know that brain mechanisms have evolved and that has resulted in some modular processing. What isn’t yet clear is exactly how modular the modules are, or whether there is actually a clear divide between biologically primary and biologically secondary abilities. Another component of Geary’s model about which there has been considerable debate is intelligence – the subject of the next post.

Incidentally, it would be interesting to know how Sweller developed his summary because it doesn’t quite map on to a concept of modularity in which the cognitive skills are anything but generic.

References

Fodor, J (1983).  The modularity of mind.  MIT Press.

Geary, D. (2007). Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. In J.S. Carlson & J.R. Levin (Eds.), Educating the evolved mind. Information Age Publishing.

Acknowledgements

I thought the image was from @greg_ashman’s Twitter timeline but can’t now find it.  Happy to acknowledge correctly if notified.

magic beans, magic bullets and crypto-pathologies

In the previous post, I took issue with a TES article that opened with fidget-spinners and closed by describing dyslexia and ADHD as ‘crypto-pathologies’ – presumably an analogy with cryptozoology, the study of animals that exist only in folklore. But dyslexia and ADHD are not the equivalent of bigfoot and unicorns.

To understand why, you have to unpack what’s involved in diagnosis.

diagnosis, diagnosis, diagnosis

Accurate diagnosis of health problems has always been a challenge because:

  • Some disorders* are difficult to diagnose. A broken femur, Bell’s palsy or measles are easier to figure out than hypothyroidism, inflammatory bowel disease or Alzheimer’s.
  • It’s often not clear what’s causing the disorder. Fortunately, you don’t have to know the immediate or root causes for successful treatment to be possible. Doctors have made the reasonable assumption that patients presenting with the same signs and symptoms§ are likely to have the same disorder.

Unfortunately, listing the signs and symptoms isn’t foolproof because:

  • some disorders produce different signs and symptoms in different patients
  • different disorders can have very similar signs and symptoms.

some of these disorders are not like the others…

To complicate the picture even further, some signs and symptoms are qualitatively different from the aches, pains, rashes or lumps that indicate disorders obviously located in the body;  they involve thoughts, feelings and behaviours instead. Traditionally, human beings have been assumed to consist of a physical body and non-physical parts such as mind and spirit, which is why disorders of thoughts, feelings and behaviours were originally – and still are – described as mental disorders.

Doctors have always been aware that mind can affect body and vice versa. They’ve also long known that brain damage and disease can affect thoughts, feelings, behaviours and physical health. In the early 19th century, mental disorders were usually identified by key symptoms. The problem was that the symptoms of different disorders often overlapped. A German psychiatrist, Emil Kraepelin, proposed instead classifying mental disorders according to syndromes, or patterns of co-occurring signs and symptoms. Kraepelin hoped this approach would pave the way for finding the biological causes of disorders. (In 1906, Alois Alzheimer found the plaques that caused the dementia named after him, while he was working in Kraepelin’s lab.)

Kraepelin’s approach laid the foundations for two widely used modern classification systems for mental disorders: the Diagnostic and Statistical Manual of Mental Disorders, published by the American Psychiatric Association, currently in its 5th edition (DSM-5), and the Classification of Mental and Behavioural Disorders in the World Health Organisation’s International Classification of Diseases, currently in its 10th edition (ICD-10).

Kraepelin’s hopes for his classification system have yet to be realised. That’s mainly because the brain is a difficult organ to study. You can’t poke around in it without putting your patient at risk. It’s only in the last few decades that scanning techniques have enabled researchers to look more closely at the structure and function of the brain, and the scans require interpretation –  brain imaging is still in its infancy.

you say medical, I say experiential

Kraepelin’s assumptions about distinctive patterns of signs and symptoms, and about their biological origins, were reasonable ones. His ideas, however, were almost the polar opposite of those of his famous contemporary, Sigmund Freud, who located the root causes of mental disorders in childhood experience. The debate has raged ever since, and it persists largely because of the plasticity of the brain. Brains change in structure and function over time, and several factors contribute to the changes:

  • genes – determine underlying structure and function
  • physical environment e.g. biochemistry, nutrients, toxins – affects structure and function
  • experience – the brain processes information, and information changes the brain’s physical structure and biochemical function.

On one side of the debate is the medical model: in essence, it assumes that the causes of mental disorders are primarily biological, often due to a ‘chemical imbalance’. There’s evidence to support this view; medication can improve a patient’s symptoms. The problem with the medical model is that it tends to assume:

  • a ‘norm’ for human thought, feelings and behaviours – disorders are seen as departures from that norm
  • the cause of mental disorders is biochemical and the chemical ‘imbalance’ is identified (or not) through trial-and-error – errors can be catastrophic for the patient
  • the cause is located in the individual.

On the other side of the debate is what I’ll call the experiential model (often referred to as anti-psychiatry or critical psychiatry). In essence it assumes the causes of unwanted thoughts, feelings or behaviours are primarily experiential, often due to adverse experiences in childhood. The problem with that model is that it tends to assume:

  • the root causes are experiential and not biochemical
  • the causes are due to the individual’s response to adverse experiences
  • first-hand reports of early adverse experiences are always reliable, which they’re not.

labels

Kraepelin’s classification system wasn’t definitive – it couldn’t be, because no one knew what was causing the disorders. But it offered the best chance of identifying distinct mental health problems – and thence their causes and treatments. The disorders identified in Kraepelin’s system, the DSM and ICD, were – and most still are – merely labels given to clusters of co-occurring signs and symptoms.  People showing a particular cluster are likely to share the same underlying biological causes, but that doesn’t mean they do share the same underlying causes or that the origin of the disorder is biological.

This is especially true for signs and symptoms that could have many causes. There could be any number of reasons for someone hallucinating, withdrawing, feeling depressed or anxious – or having difficulty learning to read or maintain attention.  They might not have a medical ‘disorder’ as such. But you wouldn’t know that to read through the disorders listed in the DSM or ICD. They all look like bona fide, well-established medical conditions, not like labels for bunches of symptoms that sometimes co-occur and sometimes don’t, and that have a tendency to appear or disappear with each new edition of the classification system.  That brings us to the so-called ‘crypto-pathologies’ referred to in the TES article.

Originally, terms like dyslexia were convenient and legitimate shorthand labels for specific clusters of signs or symptoms. Dyslexia means difficulty with reading, as distinct from alexia which means not being able to read at all; both problems can result from stroke or brain damage. Similarly, autism was originally a shorthand term for the withdrawn state that was one of the signs of schizophrenia – itself a label.  Delusional parasitosis is also a descriptive label (the parasites being what’s delusional, not the itching).

reification

What’s happened over time is that many of these labels have become reified – they’ve transformed from mere labels into disorders widely perceived as having an existence independent of the label. Note that I’m not saying the signs and symptoms don’t exist. There are definitely children who struggle with reading regardless of how they’ve been taught; with social interaction regardless of how they’ve been brought up; and with maintaining focus regardless of their environment. What I am saying is that there might be different causes, or multiple causes, for clusters of very similar signs and symptoms.  Similar signs and symptoms don’t mean that everybody manifesting those signs and symptoms has the same underlying medical disorder –  or even that they have a medical disorder at all.

The reification of labels has caused havoc for decades with research. If you’ve got a bunch of children with different causes for their problems with reading, but you don’t know what the different causes are, so you lump all the children together according to their DSM label; or another bunch with different causes for their problems with social interaction, but lump them all together; or a third bunch with different causes for their problems maintaining focus, but lump them all together; you are not likely to find common causes in each group for the signs and symptoms. It’s this failure to find distinctive features at the group level that has been largely responsible for claims that dyslexia, autism or ADHD ‘don’t exist’, or that treatments that have evidently worked for some individuals must be spurious because they don’t work for other individuals or for the heterogeneous group as a whole.
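
Here’s a minimal sketch of that problem (invented numbers, not real data): two subgroups share a diagnostic label but have different underlying deficits, and pooling them produces a group profile that matches neither subgroup:

```python
# Hypothetical standard scores (population mean 100) on two measures,
# for two subgroups given the same label for different reasons.
import numpy as np

rng = np.random.default_rng(1)
# subgroup A: marked auditory processing deficit, normal visual processing
group_a = rng.normal(loc=[70, 100], scale=10, size=(50, 2))
# subgroup B: normal auditory processing, marked visual processing deficit
group_b = rng.normal(loc=[100, 70], scale=10, size=(50, 2))

pooled = np.vstack([group_a, group_b])
print("pooled means (auditory, visual):", pooled.mean(axis=0).round(1))
# The pooled group shows two modest-looking average dips (~85, ~85),
# although every individual has one severe deficit - so no single
# 'common cause' shows up at the group level.
```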

crypto-pathologies

Oddly, in his TES article, Tom refers to autism as an ‘identifiable condition’ but to dyslexia and ADHD as ‘crypto-pathologies’, even though the diagnostic status of autism in the DSM and ICD is on a par with that of ADHD and of dyslexia – ‘specific learning disorder with impairment in reading’, with dyslexia recognised as an alternative term (DSM), or ‘dyslexia and alexia’ (ICD). Delusional parasitosis, despite having the same diagnostic status and a plausible biological mechanism for its existence, is dismissed as ‘a condition that never was’.

Tom is entitled to take a view on diagnosis, obviously. He’s right to point out that reading difficulties can be due to lack of robust instruction, and inattention can be due to the absence of clear routines. He’s right to dismiss faddish, simplistic (but often costly) remedies. But the research is clear that children can have difficulties with reading due to auditory and/or visual processing impairments (search Google Scholar for ‘dyslexia visual auditory’), that they can have difficulties maintaining attention due to low dopamine levels – exactly what Ritalin addresses (Iversen, 2006) – and that they can experience intolerable itching that feels as if it’s caused by parasites.

But Tom doesn’t refer to the research, and despite provisos such as acknowledging that some children suffer from ‘real and grave difficulties’ he effectively dismisses some of those difficulties as crypto-pathologies and implies they can be fixed by robust teaching and clear routines  –  or that they are just imaginary.  There’s a real risk, if the research is by-passed, of ‘robust teaching’ and ‘clear routines’ becoming the magic bullets and magic beans he rightly despises.

Notes

*Disorder implies a departure from the norm.  At one time, it was assumed the norm for each species was an optimal set of characteristics.  Now, the norm is statistically derived, based on 95% of the population.

§ Technically, symptoms are indicators of a disorder experienced only by the patient and signs are detectable by others.  ‘Symptoms’ is often used to include both.

Reference

Iversen, L. (2006). Speed, Ecstasy, Ritalin: The science of amphetamines. Oxford University Press.

white knights and imaginary dragons: Tom Bennett on fidget-spinners

I’ve crossed swords – or more accurately, keyboards – with Tom Bennett, the government’s behaviour guru tsar adviser, a few times, mainly about learning styles. And about Ken Robinson. Ironic, really, because broadly speaking we’re in agreement. Ken Robinson’s ideas about education are woolly and often appear to be based on opinion rather than evidence; there’s clear evidence that teachers who use learning styles, thinking hats and brain gym are probably wasting their time; synthetic phonics helps children read; whole-school behaviour policies are essential for an effective school; and so on…

My beef with Tom has been his tendency to push his conclusions further than the evidence warrants. Ken Robinson is ‘the butcher given a ticker tape parade by the National Union of Pigs‘.  Learning Styles are ‘the ouija board of serious educational research‘.  What raised red flags for me this time was a recent TES article by Tom, prompted by the latest school-toy fad: fidget-spinners.

fidget-spinners

Tom begins with the claims that fidget-spinners can help children concentrate. He says “I await the peer-reviewed papers from the University of Mickey Mouse confirming these claims”, assuming that he knows what the evidence will be before he’s even seen it.  He then introduces the idea that ‘such things’ as fidget-spinners might help children with an ‘identifiable condition such as autism or sensory difficulties’, and goes on to cite comments from several experts about fidget-spinners in particular and sensory toys in general. We’re told “…if children habitually fidget, the correct path is for the teacher to help the child to learn better behaviour habits, unless you’ve worked with the SENCO and the family to agree on their use. The alternative is to enable and deepen the unhelpful behaviour. Our job is to support children in becoming independent, not cripple them with their own ticks [sic]”.

If a child’s fidgeting is problematic, I completely agree that a teacher’s first course of action should be to help them stop fidgeting, although Tom offers no advice about how to do this. I’d also agree that the first course of action in helping a fidgety child shouldn’t be to give them a fidget-toy.

There’s no question that children who just can’t seem to sit still or keep their hands still, or who incessantly chew their sleeves, are seeking sensory stimulation; that’s what those activities are, by definition. It doesn’t follow that allowing children to walk about, or to use fidget or chew toys, will ‘cripple them with their own ticks’. These behaviours are not tics, and they usually extinguish spontaneously over time. If they’re causing disruption in the classroom, questions need to be asked about school expectations and the suitability of the school’s provision for the child, not about learning unspecified ‘better behaviour habits’.

mouthwash

Tom then devotes an entire paragraph to, bizarrely, Listerine. His thesis is that sales of antiseptic mouthwash soared due to an advertising campaign persuading Americans that halitosis was a serious social problem. His evidence is a blogpost by Sarah Zhang, a science journalist.  Sarah’s focus is advertising that essentially invented problems to be cured by mouthwash or soap. Neither she nor Tom mentions the pre-existing obsession with cleanliness that arose from the realisation, before antibiotics were discovered, that a primary cause of death and debility was bacterial infection, which could be significantly reduced by alcohol rubs, boiling and soap.

itchy and scratchy

The Listerine advertising campaign leads Tom to consider ‘fake or misunderstood illnesses’ that he describes as ‘charlatan’. His examples are delusional parasitosis (the belief that one’s skin itches because it’s infested with parasites) and Morgellons (the belief that the itching is caused by fibres). Tom says “But there are no fibres or parasites. It’s an entirely psycho-somatic condition. Pseudo sufferers turn up at their doctors scratching like mad, some even cutting themselves to dig out the imaginary threads and crypto-bugs. Some doctors even wearily prescribe placebos and creams that will relieve the “symptoms”. A condition that never was, dealt with by a cure that won’t work. Spread as much by belief as anything else, like fairies.”

Here, Tom is pushing the evidence way beyond its limits. The fact that the bugs or fibres are imaginary doesn’t mean the itching is imaginary. The skin contains several different types of tactile receptor that send information to various parts of the brain; the tactile sensory system is complex, so there are several points at which a ‘malfunction’ could occur.  And the fact that busy GPs – who for obvious reasons don’t have the time or resources to examine the functioning of a patient’s neural pathways at the molecular level – wearily prescribe a placebo says as much about the transmission of medical knowledge in the healthcare system as it does about patients’ beliefs.

crypto-pathologies

Tom refers to delusional parasitosis and Morgellons as ‘crypto-pathologies’ – whatever that means – and then introduces us to some crypto-pathologies he claims are encountered in school: dyslexia and ADHD. As he points out, dyslexia and ADHD are indeed labels for ‘a collection of observed symptoms’. He’s right that some children with difficulty reading might simply need good reading tuition, and that those with attention problems might simply need a good relationship with their teacher and clear routines. As he puts it, “…our diagnostic protocol is often blunt. Because we’re unsure what it is we’re diagnosing, and it becomes an ontological problem”.  He then says “This matters when we pump children with drugs like Ritalin to stun them still”.

Again, some of Tom’s claims are correct, but others are not warranted by the evidence. In the UK, Ritalin is usually prescribed by a paediatrician or psychiatrist after an extensive assessment of the child, and its effects should be carefully monitored. It’s a stimulant that increases available levels of dopamine and norepinephrine, and it often enhances the ability to concentrate. It isn’t ‘pumped into’ children and it doesn’t ‘stun them still’. In the UK at least, NICE guidelines indicate it should be used as a last resort. The fact that its use has doubled in the last decade is a worrying trend, but the increase is more likely to be due to the crisis in child and adolescent mental health services than to an assumption that all attention problems in children are caused by a supposed medical condition we call ADHD.

Tom, rightly, targets bullshit. He says it matters because “many children suffer from very real and very grave difficulties, and it behoves us as their academic and social guardians to offer support and remedy when we can”. Understandably he wants to drive his point home. But superficial analysis and use of hyperbole risk real and grave difficulties being marginalised at best and ridiculed at worst by teachers who don’t have the time/energy/inclination to check out the detail of what he claims.

Specialist education, health and care services for children have been in dire straits for many years, and the situation isn’t getting any better. This means teachers are likely to have little information about the underlying causes of children’s difficulties in school. If teachers take what Tom says at face value, there’s a real risk that children with real difficulties – whether they need to move their fingers or chew in order to concentrate, experience unbearable itching, struggle to read because of auditory, visual or working memory impairments, or have dopamine levels that prevent them from concentrating – will be seen by some as having ‘crypto-conditions’ that can be resolved by good teaching and clear routines; and if the difficulties aren’t resolved, then the condition must be ‘psycho-somatic’.  Using evidence to make some points but ignoring it to make others means that the slings and arrows Tom hurls at the snake-oil salesmen, and at the white knights galloping to save us from imaginary dragons, are quite likely to be used as ammunition against the very children he seeks to help.