Focus on ... determining what research evidence to trust
A new guide from the US-based Mathematica Policy Research's Center for Improving Research Evidence explains to educators how to tell which types of research evidence support claims about effectiveness, ordering them from the weakest (anecdotal) to the strongest (causal).
The guide gives examples of common sources for each type of evidence, such as marketing materials, grey literature, and independent evaluations, and provides a clear explanation of the difference between correlational and causal evidence – something even experienced researchers are prone to forget. It states:
'Correlational evidence can identify the relationship between an educational condition or initiative – such as using an educational technology – and a specific outcome, such as student maths test scores. This type of evidence can be useful as a starting point when learning about a technology, but cannot conclusively demonstrate that a technology gets results. This is because it cannot rule out other possible explanations for the differences in outcomes between technology users and non-users. Correlational evidence is often misinterpreted and used to demonstrate success.'
Causal analysis is the only way to determine effectiveness with confidence. The guide says:
'This [causal] type of analysis compares apples to apples by ensuring the only difference between the group that received the program and a comparison group is the program itself. An otherwise identical comparison group tells us what would have happened without the program; we can then say that differences in outcomes between the groups were caused by the program.'
The report focuses on deciding which technologies to use, recognising that evidence about the available options is needed, for example, to make the best use of technology budgets.
You can download a PDF copy of the report via the link below: