
For more than a decade, evidence was a vague buzzword in EdTech (educational technology) circles. The mixed bag of EdTech evidence claims included product endorsements and vendor-collected testimonials, independent reviews by teachers and parents, and academic impact evaluations of effectiveness and efficacy. Fast forward to 2023, and a reinvigorated focus on efficacy is touted as the breakout point for the future of EdTech.

But for many EdTech companies, scientific evidence is becoming an albatross around their necks. What counts as ‘evidence’ for an educational app or platform? An effect size of 0.92 on students’ learning? Positive feedback from thousands of children?

Without a doubt, evidence in itself is always a positive thing: it is better to have some evidence than none. But the question of how to determine whether an EdTech product is excellent, good or inadequate fuels a divisive myth: namely, that there is only one way of demonstrating evidence of learning and educational impact.

On one side of the debate, what works is defined in terms of efficacy, and evidence is measured in a hierarchical fashion, with randomised controlled trials (RCTs) at the top of the pyramid. On the other side of the debate is the view that an RCT-based definition of evidence propels a research monoculture that is ‘detrimental to the rigour and vigour of educational research’ (Biesta et al., 2022, p. 2).
