*this is a joint post with suvojit, here.
.
avoiding “we saw the evidence and we made a decision…”
“…and that decision was: given that the evidence didn’t confirm our priors or show a program to be a success, to try to downplay and hide the evidence.”
.
before we dig into that statement (based-on-a-true-story-involving-people-like-us), we start with a simpler, more obvious one: many people are involved in evaluations. we use the word ‘involved’ rather broadly. our central focus in this post is people who may block the honest presentation of evaluation results.
.
in any given evaluation, several groups of organizations and people have a stake in the program or policy being assessed. most obviously, there are researchers and implementers. there are also participants. and, for much of the global development ecosystem, there are funders of the program, who may be separate from the funders of the evaluation. both sets of funders may work through sub-contractors and consultants, bringing yet others on board.
.
our contention is that not all of these actors are explicitly acknowledged in the current transparency movement in social science evaluation, with implications for the later acceptance and use of the results. the focus is often on a contract between researchers and evidence consumers as a sign that, in ben olken’s terms, researchers are not nefarious and power-hungry, statistically speaking (2015). to achieve its objectives, the transparency movement requires more than committing to a core set of analyses ex ante (through pre-analysis or commitment-to-analysis plans) and study registration.
.
to make sure that research is conducted openly at all phases, transparency must include engaging all stakeholders — perhaps particularly those who can block the honest sharing of results. this is in line with, for example, EGAP’s third research principle on rights to review and publish results. we return to some ideas for encouraging this at the end of the post.
.
now, back to the opening statement, a subversion of the goal of evidence-informed decision-making. there are many interesting ways that stakeholders may try to dodge an honest sharing of results once they know what the results are. one is to claim that the audience — whether officials or the general public — will not be able to make sense of the results, so anything confusing or, really, unexpected needs to be pruned from the public report. instead, all the not-as-hoped results can be relegated to internal, rather than public, learning.
.
decision-makers may indeed need brief synopses (written or otherwise) rather than being presented with a long report. different combinations and permutations of the evidence may be presented to different stakeholders using different modes of communication, in line with what is salient to them.
.
however, this is not a suitable excuse for failing to make the full set of findings public. moreover, an assessment of what stakeholders can and cannot interpret that fails to account for how they say they want to receive evidence misses a key point of participation and partnership. it may also reveal our (mis-)estimation of policymakers’ intelligence and of the complex policy challenges they encounter as part of their daily work.
.
we’ve talked elsewhere about committing to a decision process informed by evidence. in this post, we are after something even simpler: for key stakeholders to commit ex ante to making the results of a commissioned study public, irrespective of their priors regarding the intervention being studied. of course, the research should be deemed technically sound. assuming that it is, the goal is to encourage the honest sharing of results regardless of their direction.
.
in theory, everyone party to a good ex ante evaluation (and an ex post one, though there may be slightly less stakeholder engagement, or the degree of engagement could vary depending on the study’s emerging results) is aware that the estimated effect of an intervention on an outcome of interest can be as hoped, the opposite, null, or otherwise mixed and confusing. in practice, everyone has a prior, which may involve not just an educated hypothesis but an emotional commitment to a particular outcome.
.
so what can help reduce the impulse and the potential to cover up unexpected results?
1. better explanation of research processes and norms. in some cases, key actors within commissioning agencies may be initially enthusiastic about the idea of evaluation without fully understanding what it — and a measurement and results focus more generally — really entails. here, one often makes the mistake of focusing on agency capacity rather than the capacity of individuals within these agencies. by capacity, we refer not only to technical know-how of evaluation methods but also to familiarity with research processes and norms. disparities in capacity can lead to serious contradictions within the same agency in the way research findings are treated.
.
too often, though, efforts at “capacity-building” and other modes of education for individuals within agencies about evaluation focus on evaluation designs and analysis. this comes at the expense of explaining the research process, the variety of possible evaluation outcomes, and norms around transparent reporting of results. patrick dunleavy recently outlined the process of storyboarding research from the get-go to improve working in teams and to help visualize the end-product. such a process may be useful for a broader array of stakeholders than the research team, so that the whole process (the whole magic of “analysis and writing up”) can be made transparent. this represents a potentially softer, friendlier, and more feasible alternative to drafting the entire report in advance, as humphreys et al. attempted in their paper on fishing. it also may allow more of the process to be visible, rather than just the final reporting structure.
.
2. invest time in bringing all stakeholders to understand and agree on the research objectives and processes. many research studies (especially evaluations) have a committee of advisers to steer the process. these are critical stakeholders, in addition to those who commission and carry out the research. ideally, all of those involved — including this committee of advisers — would reach a common understanding of the research objectives and the methods to be followed. this would also include identifying the policy messages to draw from the study and strategies for engagement.
.
however, common ground is sometimes elusive, as these wider groups do not always arrive early on at a fruitful working arrangement or a basic understanding of the research process. establishing clearly understood objectives and a shared understanding of research processes may be time-consuming, but it is invaluable when seen in the context of decision-making and transparency over research findings that may not match everyone’s priors.
.
3. formal commitment to results reporting across stakeholders. right now, commitments to analyses and results reporting exist between researchers and the public or, really, other researchers. but researchers are not the only ones determining the content of results reporting — and thus reporting requires additional sets of (public? formal? registered?) commitments. these could, like pre-analysis plans or commitment-to-analysis plans, take the form of committing to a core set of analyses and reporting on those results. they could also take the form of MOUs that are less technical than ex ante analysis plans but still represent a commitment to reporting a certain set of results regardless of the direction of those results.
.
in any case, the goal is to move the commitment from being between researchers (and perhaps mostly intelligible to researchers) to also involving study commissioners, other stakeholders with the power to block the publication of findings, and the public (such as the taxpayers funding the program).
.
4. early engagement with decision-makers. if decision-makers are a primary audience for the evaluation, and if communicating with decision-makers is seen as a barrier to a complete, nuanced presentation of evaluation findings, then engaging with them early on may help. we recognise the time constraints of decision-makers and the importance of clarity in messaging. but clarity of presentation and complexity of results need not be zero-sum.
.
one way to reduce this tension and to better communicate complex or complicated findings to decision-makers is to engage them in the evaluation from the very beginning, so that the potential for nuanced findings can be gradually introduced. a passive policy audience at the end of an evaluation — one whose only role has been to turn up and listen to research findings at a workshop — leaves limited space for complexity, nuance, and caveats in messaging. and assuming that evaluation findings must assert only simple findings and straightforward recommendations is hugely problematic, since we are talking about evaluations in social systems. as such, getting early buy-in and opening channels to gradually introduce results are important.
.
with these steps in place, our based-on-a-true-story colleagues would have stood a better chance of avoiding the scenario we referred to at the beginning of this post. an early commitment to the research process and an agreement on the way forward would have helped prime key stakeholders to the possibility that research findings might be a mixed bag — which necessitates a nuanced dissemination strategy, but not the burying of unfavorable results.