What research do you need to make evidence-informed decisions about edtech?
Every year at BETT I play a little game: I go up to stalls and ask how they know whether their product makes a difference to what or how well children learn. In the past, I have been met with blank stares or told that "children just love it". This year was different, though: I was impressed by the number of companies that could talk about how research had informed the design of their product and how they had been taking baseline measures and looking for changes after children had used it. Some even talked about plans for more rigorous research.
Evidence in edtech is getting some traction, but the debate now rages about the quality of the evidence being used. The Education Endowment Foundation has set a high standard for rigorous, independent research into "what works" in education. Next to this benchmark, much of the effectiveness research used by edtech companies looks flawed and biased.
The debate around evidence in education technology ping-pongs between two uncomfortable truths:
- Improving learning is hard, and it is difficult to be sure that an approach "works" without very rigorous research
- It is impractical to expect the makers of thousands of edtech products to conduct rigorous impact evaluations of academic quality.
So if we want to make good, evidence-based decisions about using technology in teaching and learning, how do we move forward? What should reasonably be expected of an edtech business that takes evidence seriously? How can we support this increased enthusiasm for evidence without asking businesses to do the impossible?
The types of evidence
The first step is being clear on what different types of evidence tell us and what they don't. Rigorous quantitative experiments, such as randomised controlled trials, are powerful tools that can indicate whether a particular approach has "worked". If well designed and accompanied by strong qualitative research, they can also reveal why an approach has worked (or not), who it works for and how it works. For example, a trial of the Parent Engagement Project, which uses text messages to get parents more involved in their children's schooling, not only demonstrated positive results but also helped to explain where, why and how this sort of intervention is most useful.
By undertaking several experiments over time, an evidence base builds up that, at its best, can reveal "design principles" (or core components). For example, analysis of the literature on computer-assisted learning shows that it is most effective when used as an in-class tool or as mandatory homework support. These design principles can help both developers and purchasers of edtech.
This approach contrasts with a more common model, whereby a specific intervention from a single organisation is tested. If a positive effect is found, it is kite-marked as "effective" and the organisation is encouraged to scale up. There are a few reasons why, for edtech in particular, the design-principles approach is more appropriate:
- Most edtech innovators are businesses, and businesses have a tendency to go bust. We should not pin our hopes of spreading effective practice on businesses managing to navigate schools' sales cycles
- Edtech is constantly evolving: as individual products are upgraded or implemented with different populations, previous results lose their validity. Broader design principles are likely to have better longevity
- There is so much edtech that it is just not practical to expect more than a few per cent of companies to test their approaches in the most rigorous way
- Individual experiments are vulnerable to "false positives", where an effect is found due to statistical noise rather than genuine impact: run enough analyses at a standard significance threshold and, by chance alone, roughly one in twenty will appear to "work". This is particularly the case for analysis of large data sets, typical of those produced by new technologies. Only repetition of trials in varied contexts can give us reassurance of genuine effects.
Design principles can also come from the vast pedagogical literature that tells us how children learn. The EDUCATE programme – led by UCL's Institute of Education in partnership with Nesta, BESA and F6S – is helping edtech companies to engage with this research and build it into their products.
Rapid cycle testing and other lighter-touch evaluation approaches are probably the most common and practical form of evidence used for product development. New ideas are tested on small samples over short time frames to inform the next stage of design. The results from these tests do not tell us that impact has occurred, but that is not their purpose. Their purpose is to improve each small step of a design process. If these tests are done thoughtfully, involving educators, they increase the chance that the final design will be effective.
I come last to probably the most common form of evidence in edtech (and probably the one considered least rigorous): teacher reviews. Edtech, more than most education projects, risks not working because it simply isn't used or is not used well. Teacher feedback determines whether that first step towards impact can be taken. Have others in your situation found a way to integrate a particular technology into their teaching? A "yes" does not guarantee impact, but a "no" pretty much guarantees its absence.
What should we expect of an edtech company?
So to return to the question I asked at the beginning of this blog: what should we be expecting from edtech companies that take evidence seriously?
- Companies should have a clear understanding of where their product sits in the complex education ecosystem of schools, students, teachers and parents. Companies should understand what assumptions they are making about how their product creates impact.
- Companies should be using all of the available evidence, both from impact evaluations and pedagogical research, to design their products and track the quality of what they do.
- Where the evidence is weak and there is not yet consensus on which "design principles" lead to impact, companies should aim to build more evidence. However, where the benefit of this research is broader than an individual company, public or philanthropic money should fund research efforts so that it is high quality and independent.
- Companies should be responsive to reviews and rapid cycle tests involving educators and students.
- Companies should not overclaim. Teacher reviews and rapid cycle testing are good practice and maximise the potential for impact, but they are not evidence of impact in and of themselves. However, these evaluations can be used as an indicator of where impact is more likely and can help justify further investment in more rigorous research.
Many companies are already moving in this direction. This is a critical time when attitudes and expectations can be shaped. Please get in touch with your ideas and suggestions for supporting an evidence-informed approach to edtech.
Amy Solder is the project lead for education and Lucy Heady is the director of