Evidence II: The mathematics strikes back

So just over 12 months ago, I blogged about the ‘Evidence for Learning’ [E4L] Toolkit, which was then newly available to Australian teachers as an accessible resource that purports to break down research in order to provide a metric of “what works”. (At this juncture I’m reminded of Dylan Wiliam’s warning that ‘everything works somewhere, and nothing works everywhere’.) Anyhow, discussion about evidence is back on educational radars once more.

In my post last year I referred to the work of my colleague, James Ladwig, who, at that time, blogged about why Australia does not yet have the research infrastructure for a truly credible, independent National Evidence Base for educational policy. James has returned to the topic of evidence, writing about what is going wrong with ‘evidence-based’ policies and practices in schools in Australia:

Now just think about how many times you have seen someone say this or that practice has this or that effect size without also mentioning the very restricted nature of the studied ‘cause’ and measured outcome.

Simply ask ‘effect on what?’ and you have a clear idea of just how limited such meta-analyses actually are.

This is all very topical because yesterday’s report from the Review to Achieve Educational Excellence in Australian Schools recommends (recommendation 5.5) the establishment of a national research and evidence institute to drive better practice and innovation. To me, as an educational researcher, this sounds very good, depending, of course, on how evidence is defined and understood.

In many educational contexts in Australia the work of John Hattie (Professor of Education at the University of Melbourne and the current chair of AITSL) is understood as constituting “evidence” of what works in education. Hattie’s preeminent book, Visible Learning, famously summarises the largest ever synthesis of meta-analyses of quantitative measures of the effect of different factors on educational outcomes. In defence of his work’s legacy and influence, Hattie describes himself as a statistician and not a theoretician (Knudsen, 2017). But statistics are neither theory-free nor value-neutral; they rest on various assumptions. Before making claims on the strength of the mathematics, one needs to understand the assumptions underpinning the calculations being performed.

Simpson (2017) unpacks and describes the mathematical assumptions that underpin “effect size”, which in Hattie’s work constitutes the gauge by which one determines whether an educational intervention is worth implementing. For those who can’t access Simpson’s article, the SoundCloud embed below is an interview with Simpson. In this interview he talks through the mathematics that underpins effect sizes, and explains why, in his analysis, using them to determine the effectiveness of a given educational intervention is a category error. (He goes so far as to say that using effect size to determine the suitability of an educational intervention is akin to using your cat’s weight to determine its age.)
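For orientation, here is the standard formulation of the standardised effect size (Cohen’s d) that this debate turns on. This is textbook notation rather than a quotation from Simpson’s paper:

$$
d = \frac{\bar{x}_T - \bar{x}_C}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}
$$

where \(\bar{x}_T\) and \(\bar{x}_C\) are the treatment and control group means, \(s_T\) and \(s_C\) their standard deviations, and \(n_T\) and \(n_C\) their sizes. The crucial part is the denominator: because the raw difference in means is divided by the spread of the particular samples studied, d is not a property of the intervention alone. Narrower samples, more sensitive outcome measures, or restricted ranges all shrink the denominator and inflate d, which is why Simpson argues that effect sizes from differently designed studies cannot simply be compared or averaged.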


Simpson (as he explains in the interview) started examining the calculations underpinning effect sizes, and critiquing their use in policy, after a chance encounter with a philosopher got him “thinking deeply” about the meaning and uses of such statistical methods. Given the traction that “effect sizes” and “what works” (understood narrowly in the context of “evidence-based” practice) have in current discourse, it is very important to understand what these terms mean. Simpson (himself a Professor of Education and a former high school maths teacher) explains effect sizes and the limitations of meta-analyses and meta-meta-analyses in terms that those with only a passing familiarity with statistics can understand. All this to say: I strongly recommend listening to this interview if you have any interest in educational research at all. (Listening to the podcast, or reading the paper, will make the title of this post make more sense. As a philosopher/sociologist I can’t communicate the maths the way that Simpson is able to.)
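To make the range-restriction point concrete, here is a minimal Python sketch. It is my own illustration with invented numbers, not Simpson’s code: the same five-point raw gain yields a “small” effect size in a broad sample and a “large” one in a narrow sample, with no change to the intervention whatsoever.

```python
import numpy as np

rng = np.random.default_rng(0)

def cohens_d(treatment, control):
    """Standardised mean difference: (mean_T - mean_C) / pooled SD."""
    n_t, n_c = len(treatment), len(control)
    pooled_var = ((n_t - 1) * treatment.var(ddof=1)
                  + (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2)
    return (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

# The same intervention: a fixed raw gain of 5 test points for everyone.
gain = 5

# Sample 1: students drawn from the full ability range (SD of about 15 points).
broad_control = rng.normal(loc=50, scale=15, size=1000)
broad_treated = broad_control + gain

# Sample 2: a homogeneous, range-restricted group (SD of about 5 points).
narrow_control = rng.normal(loc=50, scale=5, size=1000)
narrow_treated = narrow_control + gain

print(cohens_d(broad_treated, broad_control))    # ~0.33 -- a "small" effect
print(cohens_d(narrow_treated, narrow_control))  # ~1.0  -- a "large" effect
```

The intervention and the raw gain are identical in both cases; only the spread of the sample differs, yet the effect size triples. Averaging such numbers across studies, as meta-meta-analyses do, treats them as if they measured the same thing.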

Educational philosophy is much maligned, and has been removed from many teacher education programs; however, in an era when we have built machines that can learn, it is more important than ever to think deeply and carefully about the purposes of learning and schooling. While I think the development of a national education institute for research and evidence could be a good thing, without serious consideration of the purposes of education, the implications of the policies that we implement, and what, exactly, constitutes evidence, such an institute will not make any difference at all to the status quo.

References

Knudsen, H. (2017). John Hattie: I’m a statistician, I’m not a theoretician. Nordic Journal of Studies in Educational Policy, 3(3), 253–261. https://doi.org/10.1080/20020317.2017.1415048

Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy, 32(4), 450–466. https://doi.org/10.1080/02680939.2017.1280183
