The 'Impact' of my research

Tom Welch, former head of research at SSAT, shares his professional journey and his vision for an action research framework that embraces the broadest definition of educational research.

I had meant to stay in university life until retirement. Happily immersed in research and dissemination, I hadn't considered the 'impact' of my research beyond peer review. Then came an epiphany…

"Will anyone actually read my 30,000 words on the transmission and reception of Piagetian theory in UK teacher training over a thirty year period?" I wanted to make an impact on actual students in actual schools, and after five years of work, I'd realised this wasn't going to do it. Abseiling down from the dreaming spires, I decided to directly engage with school practitioners.

I briefly became a sort of itinerant researcher, peddling practical research experience from school to school, talking in enthusiastic terms about "what research could do for you". I was convinced I could answer the questions they wanted answering, and produce insight that would be of real use in the classroom.

It's hard to explain how demoralising it is to have an epiphany repeatedly thrown back in your face, albeit legitimately. It became clear to me that academia and education did not share an understanding of what 'research' meant. Was it a set of detailed observations of classroom practice? Was it the manipulation of an independent variable, while controlling extraneous variables, in order to assess any impact on the dependent variable? Was it a drilling down of student data in order to reframe classroom challenges in a meaningful way? Would it involve longitudinal study of the impact of wholesale change or short-term analysis of iterative refinements of classroom practice?

I realised that I could not assume that all educational practitioners held research in the same high regard as I did. I sharpened up my act considerably after a year of abject business forecasts. My answer to the question of what research is remained, very clearly, that it should encompass all of the above where appropriate and relevant; I merely stated this more overtly. Teachers began to engage much more readily with the ideas. However, I also learnt that the representation of the term 'research' within educational discourse was shifting from something that was done by the profession to something that was done to it: a passive profession receiving the products of research (often costly products) that answered questions others had framed. The best way for me to help schools engage in their own research, it seemed, was to drop the term 'research' altogether.

This trend seemed to reach its apotheosis with media reports of the government-commissioned paper Building Evidence into Education in March 2013. The paper, by Dr Ben Goldacre, seemed to offer education a definition of research that it could agree on – the randomised controlled trial. A flurry of social and formal media reported on the model of research that was needed to confirm what worked in education – a heavily précised and often inaccurate account of the paper. The silver bullet would include national trials leading to amalgamated data, national agencies to collect and analyse those data, and the production of definitive answers that could be translated into classroom practice across Britain.

The upsurge in randomised controlled trials in education is a good thing, with the usual caveats about quality of execution, but the trend towards 'randomised controlled trial' and 'research' becoming synonymous worries me greatly. It worries me because of what such a narrow definition would negate. As a profession, teachers should be interested not only in whether something works, but in how and why it works. Similarly, the profession should be interested not only in testing hypotheses but in generating new questions that need answering.

As a methodological pluralist, I believe that any definition of educational research should include action research, for several reasons. Firstly, action research fully appreciates the power of local context. While national, amalgamated data smooths out certain differences between schools and students in an attempt to offer findings more or less applicable to all, it necessarily obscures the importance of school context and makes assumptions about the 'average' student contained within. The crucial data on how and why something works must be generated within a culturally sensitive framework that acknowledges the shared meanings salient to the specific groups of students involved in the research. These groups will not necessarily fit into the top-down categories that organise nationally reported exam results. I have worked in numerous schools where students from estate X and students from hinterland Y face very different barriers to education and very different cultural norms in terms of where education fits into their lives, yet all are categorised as white British FSM in the national amalgamated data. In your school, do you talk of the problem with boys, or the problem with our boys? I would argue for the latter. The local challenges that subgroups of your students face would never be captured by a nationwide study, and the subgroups may not contain enough students for a local randomised controlled trial to be desirable or appropriate, either.

Secondly, a narrow definition of research would lead to an increasing gap between the technicians and academics carrying out the research and those who work in the classroom. I would argue that the people best placed to measure how and why something works, and those best able to generate new hypotheses, are practitioners themselves. They are immersed in context-rich data on a daily basis and can generate thick description with an ease that visiting academics can only dream of. Above any other framework, it is action research that places practitioners at the centre of the research process. The recognition of a challenge leads to the planning of a change, the enacting of that change and the noting of its consequences. The data produced is then reflected upon before the iterative cycle of action research begins again. This reflection – an exploratory element key to the action research model – must, I would argue, take place in the classroom and be undertaken by practitioners. If it is not, then the richest source of data on the how and why is muted, and hypothesis generation is in danger of becoming guesswork from outside the profession.

Thirdly, when research is carried out by and for the profession, it aids the process of dissemination enormously. Papers can be written in a style allied to practitioner discourse rather than formal academic prose. They can be written from within the value system of the profession and so remain relevant to practitioners. National and regional conferences and networks already exist where papers and thoughts could be shared, as do various social media channels and a more formal practitioner press. In short, the dissemination architecture is already in place; it simply needs populating. While context-rich and culturally sensitive research findings would not be wholly transferable from institution to institution, they would serve as a rich starting point for schools wishing to engage in more in-school research themselves. A research-informed profession could become the norm.

As a way forward, I would call on teachers to respect randomised controlled trials, which very effectively answer the questions appropriate to that methodological framework, but ask them not to assume that this is research in its purest form and that all else is useless. They undertake research, of sorts, every day and should learn to value it – an action research framework can help them to do just that. It empowers a profession to refine its craft, and it enthuses the individual teacher, who is able to turn their complex professional judgement to something that is not immediately reducible to exam results.

I would also ask the media, certain academics and others close to government to accept the value of all research that does not make claims beyond its data. The spotlight needs shifting on to a little-reported passage from Ben Goldacre's paper: "… sometimes people think that trials can answer everything, or that they are the only form of evidence. This isn't true, and different methods are useful for answering different questions. Randomised trials are very good at showing that something works; they're not always so helpful for understanding why it worked …'Qualitative' research – such as asking people open questions about their experiences – can help give a better understanding of how and why things worked, or failed, on the ground. This kind of research can also be useful for generating new questions about what works best, to be answered with trials."

We must all embrace this broad definition of research, one that applies different methodological frameworks appropriately to different situations in order to answer, and to generate, different questions.

This article was first published in the TES in October 2015 and we are grateful for their permission to reproduce it here.