What use is research evidence?

Geoff Wake, Professor of Education

Last week I attended the world’s biggest gathering of maths educators, ICME (the International Congress on Mathematical Education), in Sydney. The congress, which convenes every four years, brought together some 2,500 of us, more than half of whom identified as researchers. It was a brilliant experience and provided many opportunities to indulge in conversations with like-minded educators from across the world. And to be truthful, it’s the international nature of the event that makes it so special and hugely interesting.

What might we learn from research evidence? What evidence might we draw on to improve teaching and learning? There was a plenary panel that addressed just that question. Its title was: What counts as evidence in mathematics education? To be honest I was particularly looking forward to finding out what might be said about Randomised Controlled Trials (RCTs) in this session, and more widely, throughout the conference, given that such trials have been central to my own research for about ten years now. And I’m afraid that on reflection I am disappointed.  

U-turn
According to some, large-scale RCTs are the “gold standard” of educational research. Whilst I’ve always challenged such thinking, I came away from my ICME week taking the opposite position, arguing that evidence from RCTs may be as good as it gets.

Let me explain this volte-face. First, a little more context: during the week, very little RCT research was reported or even referred to. What is going on? Well, I’m sure that a limiting factor is that these large-scale studies are expensive and demanding of time and research expertise. Most research carried out in mathematics education is much smaller in scale and scope. Whilst small studies can provide massively helpful insights into all aspects of the teaching and learning of mathematics, they rarely provide evidence of change at scale. Improving system-wide outcomes requires considerable expertise in the design of both intervention and research programmes, AND the resources to carry out the detailed research that is needed.

In the plenary panel I referred to, Adrian Simpson, of Durham University, offered a critique of the claim that large-scale RCTs are the “gold standard” of research evidence, and explained why we should be wary of it. As I have already said, I have always had sympathy with such a stance. However, if such trials are taken together with all the other research evidence we can muster, then I am of the view that they have a major contribution to make. This is particularly the case if they are conducted at scale and on more than a single occasion. And of course, as Adrian Simpson pointed out, we need to look beyond the headline impact reporting and carefully understand the particular implementation details: what was actually done?

Even more convincing are those intervention designs that are researched in multiple studies. If we can amass a body of evidence across a number of RCTs and smaller, more detailed studies, then perhaps we can really begin to understand “what works”.

In our current Mastering Maths Study, we aim to find convincing evidence that might just inform approaches to improving outcomes for GCSE students. We have a once-in-a-lifetime opportunity to do this through the current Education Endowment Foundation effectiveness trial.  

If you haven’t already joined us in this endeavour, please contact us by email or nominate one or two teachers here. This is our opportunity to add another tile to the research evidence mosaic, helping ensure that it really does provide something of a “gold standard”.

Author information

Based in the Observatory for Mathematical Education at the University of Nottingham, Geoff is leading the Mastering Maths programme. 

Observatory for Mathematical Education team

Geoff Wake on X

Observatory for Mathematical Education on LinkedIn

Mastering Maths