How do you know which educational research to trust? Here are the questions to ask

I have previously drawn attention to differences in expert opinion over the usefulness of statistical significance testing, particularly in regard to randomised controlled trials (RCTs). Now it's time to look at what can go wrong with RCTs, and the questions you need to ask when judging the trustworthiness of the associated research findings.

What is an RCT?

Connolly et al. (2017) describe an RCT as "a trial of a particular educational programme or intervention to assess whether it is effective; it is a controlled trial because it compares the progress made by those children taking the programme or intervention with a comparison or control group of children who do not and who continue as normal; and it is randomised because the children have been randomly allocated to the groups being compared."

What can go wrong with RCTs?

A lot, unfortunately. Ginsburg and Smith (2016) reviewed 27 RCTs that met the minimum standards of the US-based What Works Clearinghouse and found that 26 had serious threats to their usefulness. These included:

  • Developer associated. In 12 of the 27 RCT studies (44 percent), the authors had an association with the curriculum's developer.
  • Curriculum intervention not well implemented. In 23 studies (85 percent), implementation fidelity was threatened because the RCT took place in the first year of curriculum implementation; a National Research Council (NRC) study warns that it may take up to three years to implement a substantially different curriculum.
  • Unknown comparison curricula. In 15 of 27 studies (56 percent), the comparison curricula are either never identified, or outcomes are reported only for two or more comparison curricula combined. Without understanding the comparison's characteristics, we cannot interpret the intervention's effectiveness.
  • Instructional time greater for treatment than for control group. In eight of nine studies for which the total time of the intervention was available, the treatment time differed substantially from that for the comparison group. In these studies, we cannot separate the effects of the intervention curriculum from the effects of the differences in the time spent by the treatment and control groups.
  • Limited grade coverage. In 19 studies (70 percent), a curriculum covering two or more grades does not have a longitudinal cohort, so cumulative effects across grades cannot be measured.
  • Assessment favours content of the treatment. In 5 studies (19 percent), the assessment was designed by the curriculum developer and is likely aligned in favour of the treatment.
  • Outdated curricula. In 19 studies (70 percent), the RCTs were carried out on outdated curricula.

So what should you do?

In the recently published book The Trials of Evidence-Based Education, Gorard, See and Siddiqui (2017) suggest the following:

First, check whether there is a clear presentation of the research findings: are they presented simply and clearly, with all the relevant data provided?

Second, check whether the research is using effect sizes as the way of presenting the scale of the findings. If significance testing is being used and p values are being quoted, you may wish to pause for a moment (although remember that effect sizes have their own problems).
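If you are unsure what an effect size actually is, the short sketch below may help. It computes one common effect size, Cohen's d (the difference between group means divided by a pooled standard deviation), using made-up post-test scores; the figures and function name are purely illustrative, not taken from any of the studies discussed here.

```python
import statistics

def cohens_d(intervention_scores, control_scores):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(intervention_scores), len(control_scores)
    mean_diff = statistics.mean(intervention_scores) - statistics.mean(control_scores)
    s1 = statistics.stdev(intervention_scores)
    s2 = statistics.stdev(control_scores)
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return mean_diff / pooled_sd

# Illustrative (made-up) post-test scores for an intervention and a control group
intervention = [68, 72, 75, 70, 74, 69, 73]
control = [65, 66, 70, 64, 68, 67, 66]
print(f"Effect size (Cohen's d): {cohens_d(intervention, control):.2f}")
```

A reader can interpret the resulting number directly as a standardised difference between the groups, which is why effect sizes travel better across studies than p values do.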

Third, check where the research design sits in the hierarchy of designs for answering causal questions. At the top of the hierarchy are studies where participants are randomly allocated between groups; below that, participants matched between groups; below that, naturally occurring groups; below that, a single group studied using before-and-after data; and at the bottom of the hierarchy, studies based on case studies alone.

Fourth, check the scale of the study: for example, are at least 100 pupils involved?

Fifth, look out for missing information - how many subjects/participants dropped out of the study? As a rule of thumb, the higher the percentage level of completion of the research, the more trustworthy the findings. As noted in The Trials of Evidence-Based Education, a study with 200 participants and a 100% completion rate is likely to be more trustworthy than a study with 300 participants and a 67% completion rate.
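A quick bit of arithmetic (purely illustrative) shows why: both studies end up with roughly the same number of completers, but the larger study has lost a third of the people it recruited, and those who dropped out may differ systematically from those who stayed.

```python
def completers(recruited, completion_rate):
    """Number of participants who actually finished the study."""
    return round(recruited * completion_rate)

# The two hypothetical studies from the example above
print(completers(200, 1.00))  # 200 completers, nobody lost
print(completers(300, 0.67))  # ~201 completers, a third of the sample lost along the way
```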

Sixth, check the data quality. Standardised tests provide higher-quality data than, say, questionnaire data, with impressionistic data providing the weakest evidence for causal questions. Make sure the outcomes being studied are specified in advance. Look at whether there are likely to be any errors in the data caused by inaccuracy or missing data.

And finally...

It's easy to be intimidated by quantitative research studies, but the key is to ask the right questions: are effect sizes being used? Are subjects randomly allocated between the control and the intervention groups? Has missing data been kept to a minimum? Are standardised measures of assessment used? Are the evaluators clearly separate from the implementers? If the answer to all of these questions is yes, you can have a reasonable expectation that the research findings are trustworthy.

Further reading

Connolly, P., Biggart, A., Miller, S., O'Hare, L., & Thurston, A. (2017). Using Randomised Controlled Trials in Education. London: Sage.

Ginsburg, A., & Smith, M. S. (2016). Do Randomized Controlled Trials Meet the "Gold Standard"? American Enterprise Institute. Retrieved March 18, 2016.

Gorard, S., See, B., & Siddiqui, N. (2017). The Trials of Evidence-Based Education. London: Routledge.

This is an edited version of a blog that first appeared here.
