By Amy Solder on Saturday, 24 February 2018
Category: Expert Insights

What research do you need to make evidence-informed decisions about edtech?

Every year at BETT I play a little game: I go up to stalls and ask how they know whether their product makes a difference to what or how well children learn. In the past, I have been met with blank stares or told that "children just love it". This year was different, though: I was impressed by the number of companies that could talk about how research had informed the design of their product and how they had been taking baseline measures and looking for changes after children had used it. Some even talked about plans for more rigorous research.

Evidence in edtech is gaining some traction, but the debate now rages about the quality of the evidence being used. The Education Endowment Foundation has set a high standard for rigorous, independent research into "what works" in education. Next to this benchmark, much of the effectiveness research used by edtech companies is flawed and biased.

The debate around evidence in education technology ping pongs between two uncomfortable truths:

  1. Improving learning is hard, and it is difficult to be sure that an approach "works" without very rigorous research.
  2. It is unrealistic to expect the thousands of edtech products on the market to conduct rigorous impact evaluations of academic quality.

So if we want to make good, evidence-informed decisions about using technology in teaching and learning, how do we move forward? What can reasonably be expected of an edtech business that takes evidence seriously? How can we support this growing enthusiasm for evidence without asking businesses to do the impossible?

The types of evidence

The first step is being clear on what different types of evidence tell us and what they don't. Rigorous experimental quantitative research, such as randomised controlled trials, is a powerful tool that can indicate whether a particular approach has "worked". If well designed and accompanied by strong qualitative research, such trials can also reveal why an approach has worked (or not), who it works for and how it works. For example, a trial of the Parent Engagement Project, which uses text messages to get parents more involved in their children's schooling, not only demonstrated positive results but also helped us understand where, why and how this sort of intervention is most useful.

By undertaking several experiments over time, an evidence base builds up that, at its best, can reveal "design principles" (or core components). For example, analysis of the literature on computer-assisted learning shows that it is most effective when used as an in-class tool or as mandatory homework support. These design principles can help both developers and purchasers of edtech pursue strategies that are more likely to have an impact.

This approach contrasts with a more common model whereby a specific intervention from an organisation is tested. If a positive effect is found, it is kite-marked as "effective" and the organisation is encouraged to scale up. For edtech in particular, there are a few reasons why this approach will not solve our evidence problem.

Design principles can also come from the vast pedagogical literature that tells us how children learn. The EDUCATE programme – led by UCL's Institute of Education in partnership with Nesta, BESA and F6S – is helping edtech startups to use existing evidence as they test and adapt their products, as well as helping them to collect new evidence.

Rapid cycle testing and other lighter touch evaluation approaches are probably the most common and practical form of evidence used for product development. New ideas are tested on small samples in short time frames to inform the next stage of design. The results from these tests do not tell us that impact has occurred, but that is not their purpose. Their purpose is to improve each small step of a design process. If these tests are done thoughtfully, involving educators, they increase the chance that the final design will be effective.

I come last to what is probably the most common form of evidence in edtech (and probably the one considered least rigorous): teacher feedback. As with rapid cycle testing, teacher feedback cannot be considered evidence of impact and should not be used as if it were. But it does have two very important uses: helping companies improve their products, and helping teachers navigate the confusing array of products on offer.

Edtech, more than most education projects, risks not working because it simply isn't used or is not used well. Teacher feedback determines whether that first step towards impact can be taken. Have others in your situation found a way to integrate a particular technology into their teaching? A "yes" does not guarantee impact, but a "no" pretty much rules it out.

What should we expect of an edtech company?

So to return to the question I asked at the beginning of this blog: what should we expect from edtech companies that claim to be serious about evidence? We propose five expectations:

  1. Companies should have a clear understanding of where their product sits in the complex education ecosystem of schools, students, teachers and parents. Companies should understand what assumptions they are making about how their product creates impact.
  2. Companies should be using all of the available evidence, both from impact evaluations and pedagogical research, to design their products and track the quality of what they do.
  3. Where the evidence is weak and there is not yet consensus on which "design principles" lead to impact, companies should aim to build more evidence. However, where the benefit of this research is broader than an individual company, public or philanthropic money should fund research efforts so that it is high quality and independent.
  4. Companies should be responsive to reviews and rapid cycle tests involving educators and students.
  5. Companies should not overclaim. Teacher reviews and rapid cycle testing are good practice and maximise potential for impact, but they are not evidence of impact in and of themselves. However, these evaluations can be used as an indicator of where impact is more likely and can help justify further investment in more rigorous research.

Many edtech companies aspire to or meet these expectations, but more support is needed. Research needs to be made digestible and accessible to companies and we also need a clear research agenda that shows where the consensus is and where the biggest gaps in the evidence are. The Education Endowment Foundation plans to update its review of the literature in edtech this year, which is promising.

This is a critical time when attitudes and expectations can be shaped. Please get in touch with your ideas and suggestions for supporting an edtech sector that delivers real impact for students based on evidence.

Amy Solder is the project lead for education and Lucy Heady is the director of impact at Nesta. This piece is an edited version of a blog that originally ran on Nesta's website.
