a nice article from chris said discusses how we might alter publication rules (and the granting requirements of donor organizations) in a way that moves us closer to good, useful research – specifically, by looking more toward the importance of the question and the rigor of the method used to answer it. i am, of course, fully in favor of focusing on important (in this case, policy-relevant) questions; rigorous design and implementation (in this case, with an eye toward scale-up potential); solid data collection (no really, good regressions don’t fix bad data); and publishing results that aren’t necessarily the sexiest but that will ultimately move our understanding of what works forward in important ways.
> Granting agencies should reward scientists who publish in journals that have acceptance criteria that are aligned with good science. In particular, the agencies should favor journals that devote special sections to replications, including failures to replicate. More directly, the agencies should devote more grant money to submissions that specifically propose replications. Moreover — and this is a fairly radical step that many good scientists I know would disagree with — I would like to see some preference given to fully “outcome-unbiased” journals that make decisions based on the quality of the experimental design and the importance of the scientific question, not the outcome of the experiment. This type of policy naturally eliminates the temptation to manipulate data towards desired outcomes.
(addition 30.04.2012: http://www.overcomingbias.com/2012/04/who-wants-unbiased-journals.html)
if we start taking replications more seriously in social science experiments, we may need to be more precise with our terms. there are a few possible variants/meanings of “replication,” potentially making it difficult for experimenters, donors, consumers of research, and other stakeholders to speak clearly with one another and set expectations.
- one potential meaning is a program/experiment conducted in one location with one set of implementers, repeated in the same place with different implementers (say, the government versus an NGO). call this internal replication (?).
- another type of replication would transplant the program/experiment to a different context, making either minor adjustments (such as language) or more substantive adjustments based on lessons learned from the first pass and a local stakeholder analysis. some range of this is external replication; it’s hard to know at what degree of modification we should stop calling it a replication and instead call it a new or extended experiment inspired by another, rather than selling it as a replication.
- (of course, an internal replication, depending on the number of lessons learned on the first go-round and the modifications required for the second set of implementers to have a go, might itself actually be a new or extended experiment rather than a replication. again, the line would be fuzzy, but presumably some simple criteria/framework could be delineated)