I recently joined the School and University Research Enquiry (SURE) group, a research partnership that connects several schools in Glasgow with the University of Strathclyde to exchange knowledge and conduct new research. The aim is to promote a more evidence-informed approach to educational decision-making and practice.

With this in mind, I thought it would be helpful to write a brief overview of evidence-based education, including some of the main publications and debates. 

What is it?

Evidence-based education is the idea that research of various kinds should inform decisions about teaching and learning. It is often conceived of as an alternative to teaching practice which is guided by intuition and/or experience.

An educator's job includes a huge amount of decision-making. For example, what should be taught today or tomorrow? What type of homework should be set and when? How can a teacher maintain discipline and engage their pupils? Evidence-based education aims to tackle these questions pragmatically, based on past findings; it is sometimes referred to as a "what works" approach.

Take the example of homework. A traditional view might be that the teacher should give students whatever they judge to be useful, or whatever is "the way it's done" (or whatever is lying around the office, is quick to mark, or is in the textbook/revision guide!). An evidence-based alternative would be to look at this issue from the perspective of research which has shown that some strategies lead to more effective and durable learning than others. The cognitive psychology of memory, for example, tells us that learners remember more if there is a delay before they practise the material they mastered in class. It also shows that they remember more if they do a closed-book test rather than copying from notes. The teacher may therefore decide to set a practice test after a one-week delay.

The "what works" approach usually refers to techniques or interventions that boost attainment (as measured by some form of test or exam). But evidence could inform many other types of decision – not just homework and memory.

As a model, this borrows from the philosophy behind evidence-based medicine. We would probably take it for granted that a doctor should select a treatment that has been shown by reliable (and replicated) research to be the most effective, rather than be guided by tradition (leeches, anyone?) or their gut feeling about what ought to work. In the same way, it is argued, teachers should look to the evidence rather than relying on their personal preferences or even their classroom experience. This insistence on evidence may have the incidental advantage of making educational practices less vulnerable to fads, such as the learning styles myth.

Sounds great! So everyone agrees with this…?

No. It has many critics, and their points are well worth taking on board.

Firstly, the idea that education can derive a model of effective practice from medicine is open to doubt. Learning is not really like curing an illness – it's cumulative, has no defined end point, and there are important subtleties, such as how well it can be transferred to new situations. The entire approach could be seen as over-simplistic.

Secondly, what works for one group might not work for all. One example comes from Kalyuga (2007), who described the 'expertise reversal effect', whereby tasks that are effective with beginners become ineffective – or at least inefficient – when used with more advanced learners. Another example, much discussed in recent years, is that homework appears to be more effective for secondary students than for primary pupils (Cooper et al, 2006). This is not a killer blow to the idea of evidence-based practice, but it does suggest that the use of evidence must be cautious and thoughtful – we can't apply one-size-fits-all solutions.

Thirdly, there are concerns about the validity of some of the evidence used. Education is a notoriously tricky area to research; for ethical reasons it is often necessary to rely on correlations and secondary data, leaving some findings open to confounding variables. Meanwhile, a lot of the research evidence from cognitive psychology – in areas such as working memory and learning – is based on laboratory studies with university students, not classroom environments and school pupils. That doesn't make it inherently bad research, but does mean that we should be cautious about generalising from it.

Finally – and linked to the previous point – some people argue that the evidence referred to in this approach is often positivist (i.e. it applies the methods of the natural sciences to societal questions, including how people learn). Many educators and learning researchers, however, subscribe to a social constructivist view of learning (i.e. one in which human development is seen as an essentially social process, built around developing shared understandings).

Key literature

There is a lot of literature in this field, including both empirical research studies and reviews. For anyone who is new to this area, these are a few very useful publications to get you started. In the main they come from proponents of the idea, but I've also included some key critiques:

American Psychological Association's top 20 ways to apply psychology in the classroom
Broader than most, the APA's guide includes issues such as creativity, classroom management and growth mindset, as well as strategies that impact on learning more directly.

Biesta (2007)
In this paper, Gert Biesta criticises evidence-based practice and questions the assumption that closely-controlled lab work has ever contributed much to society (!). He argues that it tends to link to top-down approaches where administrators and governments say that strategies work on the basis of lab research, when they may not do so in a specific context. Additionally, the notion of something working doesn't address philosophical issues of who it works for, and to what social end.

Coe et al, What makes great teaching?
Coe et al's 2014 report for the Sutton Trust is useful in that it goes beyond the cognitive evidence and considers issues such as classroom climate, teacher knowledge levels and how teachers can improve. Otherwise, it draws on a similar body of research to Dunlosky et al (2013; see below).

The Sutton Trust also supports the Education Endowment Foundation's 'Teaching and Learning Toolkit', which provides a useful (if rather undiscriminating) visual guide to evidence-based strategies in terms of cost, lasting impact and the security of the supporting research.

Dunlosky et al (2013), Improving Students' Learning With Effective Learning Techniques
The authors are psychologists and memory researchers, and this paper reviews a number of different findings from cognitive psychology. In particular, it endorses the use of retrieval practice (the 'testing effect') and distributed practice (the 'spacing effect'), while noting that techniques such as re-reading and highlighting are generally ineffective as study strategies.

Hattie's Index of Teaching and Learning Strategies
Australian researcher John Hattie is probably the biggest name in this field. He synthesised numerous meta-analyses of educational research and built a list of interventions together with their average statistical effect size – the higher the effect size, the greater the average benefit. He takes an effect size of 0.4 as a "hinge point" – above this, interventions fall into (roughly) the top half, i.e. they are among the more effective interventions (a short worked example of how an effect size is calculated follows below).

The work is also helpful in identifying some interventions that have tended not to make a large impact. It has its flaws, both conceptual and statistical, but it's a useful starting place for finding out about several important strategies.
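
As a brief aside on what these effect sizes mean: the most common measure is the standardised mean difference (often reported as Cohen's d) – the gap between two groups' average scores, divided by their pooled standard deviation. The short Python sketch below is purely illustrative: the test scores are invented, the function is my own, and real meta-analyses aggregate many studies and apply further statistical corrections.

```python
# Illustrative only: a minimal calculation of Cohen's d, the standardised
# mean difference that underlies effect sizes like those in Hattie's lists.
# The scores below are invented purely to show the arithmetic.
import statistics

def cohens_d(treatment, control):
    """Difference between group means, divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    # Pooled standard deviation across the two groups
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical end-of-term test scores for two classes
intervention_class = [68, 72, 75, 70, 74, 71]
control_class = [65, 69, 70, 66, 68, 67]

d = cohens_d(intervention_class, control_class)
print(f"Effect size d = {d:.2f}")  # roughly 1.85 for these made-up scores
print("Above the 0.4 hinge point" if d > 0.4 else "Below the 0.4 hinge point")
```

On these invented numbers the effect comes out far above the 0.4 hinge point – unrealistically so; most genuine classroom interventions in Hattie's synthesis sit well below 1.0.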

The Learning Scientists
An excellent blog run by four cognitive psychologists who study learning and memory. It is aimed at students and teachers, and makes the science highly accessible without dumbing it down.

Marzano's top ten
It's useful to be aware of the work of Marzano et al (2001), one of the earlier evidence-based summaries of effective teaching interventions. The strategies they endorse include analogies and metaphors, student-generated study notes, and feedback/formative assessment. However, there have been important new findings since it came out, and some of the key research questions have moved on, so it is now somewhat dated.

NCEE's What Works Clearinghouse
The National Center for Education Evaluation and Regional Assistance in the USA offers the 'What Works Clearinghouse'. It usefully reviews studies of the efficacy of educational interventions, but the focus tends to be on large-scale programmes – for example the "Great Explorations in Math and Science® (GEMS®) Space Science Sequence" curriculum – rather than on specific techniques that teachers could use in class. This makes its findings less immediately applicable for many teachers.

Zhao (2017) What works may hurt
In his paper, Zhao refers to the analogy of evidence-based medicine and borrows a further concept – that of side effects. From this perspective, an intervention may "work" from a learning point of view, but it could have any number of side effects. Just as with a drug, any benefits must be evaluated in that context. For example, an intervention that boosts learning over the short-term could also harm motivation over the longer term.

Is all of this a threat to teachers?

It is worth considering: does all of this amount to self-proclaimed experts telling us what to do (or what not to do)? At times that might be a valid concern, but the whole point of making education more evidence-based is that the evidence is (or can be) open to scrutiny. You may not agree with all of the conclusions from the sources above, but their arguments are probably backed up by a more thorough factual base than the opinion of a staffroom colleague. And if you are unsure, then you are free to scrutinise and evaluate the sources for yourself.

A problem, certainly, lies with teachers' access to information. If teachers can't or won't access the evidence themselves, this puts a lot of power in the hands of central institutions, which may try to push inappropriate programmes and interventions. Teachers (and schools more broadly) are in a stronger position to ward this off if they not only learn about the evidence, but are also aware of its limitations.

For this to happen, practitioners require journal access, CPD time and the skills to critique the research methods and statistics used. How can that be achieved? This BERA report sets out a vision of schools and colleges as "research-rich environments in which to work" (p.5). It's a radical idea, and one that asks us to reconsider the very nature of what teacher professionalism involves.

References

Biesta, G. (2007). Why "what works" won't work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1–22.

Coe, R., Aloisi, C., Higgins, S., & Major, L. E. (2014). What makes great teaching? Review of the underpinning research. Accessed 14 May 2017 at http://www.suttontrust.com/wp-content/uploads/2014/10/What-makes-great-teaching-FINAL-4.11.14.pdf

Cooper, H., Robinson, J. C., & Patall, E. A. (2006). Does homework improve academic achievement? A synthesis of research, 1987–2003. Review of Educational Research, 76(1), 1–62.

Hattie, J. (2013). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. London: Routledge.

Kalyuga, S. (2007). Expertise reversal effect and its implications for learner-tailored instruction. Educational Psychology Review, 19, 509–539. doi: 10.1007/s10648-007-9054-3

Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom Instruction That Works: Research-Based Strategies for Increasing Student Achievement. Alexandria, VA: ASCD.

Zhao, Y. (2017). What works may hurt: Side effects in education. Journal of Educational Change, 18(1), 1–19.

This is an edited version of a piece that originally ran on Jonathan Firth's own blog. Jonathan is the author of Psychology in the Classroom and tweets at @JW_Firth.