“Evidence-based,” a currently popular concept, assumes that identifying high-quality interventions with valid positive results will improve educational outcomes at scale. Clearinghouses (CHs) advance this process by setting their own scientific criteria, evaluating studies that meet the required quality, synthesizing the study results, and issuing recommendations.
To probe the consistency of what “evidence-based” means across CHs, Cook and colleagues recently examined 12 educational clearinghouses to (1) compare their effectiveness criteria, (2) estimate how consistently they evaluate the same program, and (3) analyze why their evaluations differ.
How variable are CHs in their effectiveness criteria? All the CHs favor randomized controlled trials (RCTs) as the preferred experimental design, but they vary in how they judge whether an RCT is well enough implemented to merit the highest study-quality ranking.
CHs treat quasi-experimental designs even more variably than RCTs, applying separate standards to different design categories. They also place different emphasis on ancillary causal factors such as independent replication and long-lasting intervention effects.