School of Education

Design and Analysis of Education Trials

Location
C49, Dearing Building, Jubilee Campus
Date(s)
Tuesday 31st January 2017 (16:30-18:00)
Contact
Please visit Eventbrite to register

Description

A Centre for Research in Mathematics Education Seminar

Presented by Dr Adetayo Kasim, Research Statistician at the Wolfson Research Institute for Health and Wellbeing, Durham University

Evidence-based policy is encouraged in all areas of public service, particularly in education, where it is intended to improve the educational attainment of disadvantaged children. Educational stakeholders want to know "how much of a difference an intervention has made" and whether the intervention effect is large or small, meaningful or trivial (Valentine and Cooper 2003). But what constitutes evidence of the effectiveness or efficacy of an intervention, and how such evidence should be measured, is controversial, particularly when that evidence rests on null hypothesis significance testing. The validity of scientific conclusions, including their reproducibility, depends on more than statistical methods alone. An appropriately chosen study design, a properly conducted analysis and correct interpretation of statistical results also play key roles in ensuring that conclusions are sound and that the uncertainty surrounding them is represented properly (Wasserstein and Lazar 2016).

I will discuss some statistical challenges in the design and analysis of education trials. Most trials focus predominantly on establishing whether an intervention is superior to a comparison group, but does a lack of statistical significance mean the intervention is equivalent to the comparison group? What are the implications of non-compliance and missing data for the analysis of education trials? Is it time for adaptive designs in education trials, to ensure that the right pupils are targeted? How practical is Bayesian analysis for education trials? Lastly, do we need a new metric to improve the communication of results to education stakeholders? My discussion will focus on statistical perspectives rather than the educational context.

References

Valentine, J. C. & Cooper, H. (2003). Effect size substantive interpretation guidelines: Issues in the interpretation of effect sizes. Washington, DC: What Works Clearinghouse.

Ronald L. Wasserstein & Nicole A. Lazar (2016): The ASA's statement on p-values: context, process, and purpose, The American Statistician, DOI:10.1080/00031305.2016.1154108
