A different version of this post appears here.
For several months, I have intended to write a post about what it actually means to do research that is ‘policy relevant,’ as it seems to be a term that researchers can self-ascribe* to their work without stating clearly what this entails or if it is an ex ante goal that can be pursued. I committed to writing about it here, alluded to writing about it here, and nearly stood up to the chicken of Bristol in the interim. Now, here goes a first pass. To frame this discussion, I should point out that I exist squarely in the applied space of impact evaluation (work) and political economy and stakeholder analysis (dissertation), so my comments may only apply in those spheres.
The main thrust of the discussion is this: we (researchers, donors, folks generally bought into the evidence-informed decision-making enterprise) should parse what passes for ‘policy relevant’ into ‘policy adjacent’ (or ‘policy examining’?) and ‘decision relevant’ (or ‘policymaker relevant’), so that it is clear what we are all trying to say and do. Conducting research on a policy does not automatically make that research ‘policy relevant,’ or, more specifically, decision-relevant. It is, indeed, ‘policy adjacent’: it walks and works alongside a real, live policy to do empirical work and answer interesting questions about whether and why that policy brought about the intended results. But this does not necessarily make it relevant to policymakers and stakeholders trying to make prioritization, programmatic, or policy decisions. In fact, by the time an evaluation reports, it may be politically and operationally hard to make major changes to the program or policy, regardless of the evaluation outcome.
This is where more clarity (and perhaps humility) is needed.
I think this distinction was, in part, what Tom Pepinsky wrestled with when he said that it is the murky and quirky (delightful!) questions “that actually influence how they [policymakers / stakeholders] make decisions” in each of their own murky and quirky settings. These questions may be narrow, operational, and linked to a middle-range or program theory (of change), as compared to grander, paradigmatic questions and big ideas. (Interestingly, and to be thought through carefully, this seems to be the opposite of Marc Bellemare’s advice on making research in agricultural economics more policy-relevant, in which he suggests pursuing bigger questions, partly because agricultural economists are often housed in ‘hard’ or ‘life’ science departments and thus deal with different standards and expectations.)
I am less familiar with how Tom discusses what is labelled as highly policy-relevant (the TRIP policymaker survey and seeing whether policymakers are aware of a given big-thinking researcher’s big idea) and much more familiar with researchers simply getting to declare that their work is relevant to policy because it is in some way adjacent to a real! live! policy. Jeff Hammer has pointed out that even though researchers in applied work on development are increasingly working on ‘real’ policies and programs, they are not necessarily in a better position to help high-level policymakers choose the best way forward. This needs to be taken seriously, though it is not surprising that a chief minister asks over-arching allocative questions (invest in transport or infrastructure?), whereas researchers may work with lower-level bureaucrats, NGO managers, or even street-level/front-line workers, who have more modest goals of improving the workings and (cost-)effectiveness of an existing program or trying something new.
What is decision-relevant in a particular case will depend very much on the position of the stakeholder with whom the researcher-evaluator is designing the research questions and evaluation. Early engagement and co-creation of the research questions, along with a plan for how the evidence will be used, are what I consider prerequisites to doing decision-relevant work (see, e.g., the beginning of Suvojit’s and my discussion of actually planning to use evidence to make decisions). Intention matters in being decision-relevant, to my way of thinking, and so, therefore, does deciding whose decision you are trying to inform.
I should briefly say that I think plenty of policy-adjacent work is immensely valuable and useful in informing thinking, future planning, and approaches. One of my favorite works, for example, The Anti-Politics Machine, offers a careful vivisection (as Ferguson calls it) of a program without actually guiding the officials deciding what to do next. Learning what is and is not working (and why) is critically important. Ferguson’s book is a profound, policy-adjacent work (being about a real program), but it did not set out to be directly decision-relevant, nor is it. It still adds tremendous value to thinking about how we should approach development, but a given bureaucrat is unlikely to be able to use it to make a programmatic decision.
But here is where I get stuck and muddled, which is one of the reasons I put off writing this for so long. At some stage of my thinking, I felt that being decision-relevant, like being policy-adjacent, required working on real, live policies and programs. In fact, in a July 2014 attempt at writing this post, I was quite sympathetic to Howard White’s argument in a seminar that a good way to avoid doing ‘silly IE’ (sillIE©?) is to evaluate real programs and policies, even though being about a real program is no automatic buffer against silliness.
But I increasingly wonder if I am wrong about decision-relevance. Perhaps the main criterion is instead working with a decision-maker to sort out what decision needs to be made. One possible outcome of such work is learning that a particular way forward is definitely not worth pursuing: that there is a serious and insurmountable design failure (~inefficacy) rather than an implementation failure (~ineffectiveness). A clear-cut design failure firmly closes a door on a way forward, which is important in decision-making processes (provided stakeholders are willing to have a closed door be a possible result of an evaluation). For example, one might (artificially) test a program or policy idea in a crucial or Sinatra case setting: if the idea can’t make it there, it can’t make it anywhere (Gerring, attributed to Yates). Door closed, decision option removed. One might also deliver an intervention in what H.L. Mencken called a ‘horse-doctor’s dose’ (as noted here): if that whopping strong version of the program or policy doesn’t do it, it certainly won’t do it at the more likely level of administration. A similar view appears in Running Randomized Evaluations, which notes that ‘proof-of-concept evaluations’ can show that even “a gold-plated, best-case-scenario version of the program is not effective.” Door closed, decision option removed.
Even more mind-bending, Ludwig, Kling, and Mullainathan suggest that researchers may approximate the ‘look’ of a policy to test the underlying mechanism (rather than the entirety of the policy’s causal chain and its potential for implementation snafus) and, again, directly inform a prioritization, programmatic, or policy decision. As they note, “in a world of limited resources, mechanism experiments concentrate resources on estimating the parameters that are most decision relevant,” serving as a ‘first screen’ as to whether a policy is even worth trying. Again, this offers an opportunity to close a door and remove a decision option. It is hard to argue that this is not decision-relevant and would not inform policy, even if the experiment does not evaluate a real policy, is not carried out by the people who would take the policy to scale, and so on. The (controversial) suggestion is that, done well, a mechanism experiment showing that a policy mechanism does not bring about the desired change even under ideal or hyper-ideal conditions (and taking an appropriate time trajectory into account) could justify dismissing that policy option on the basis of a single study.
But the key criterion of early stakeholder involvement and clarifying the question that needs to be answered remains central to this approach to decision-relevance. So does having an identified set of stakeholders intended to be the immediate users of the evidence. And, finally, the role of middle-range or programmatic theory (of change), with clearly identified mechanisms of how a program/policy is meant to lead to an outcome, is critical to being decision-relevant.
To return to the opening premise, it does not seem helpful to label all evaluation research associated with a real-world policy or program as ‘policy relevant.’ Being policy relevant is often seen as desirable in the current state of (impact) evaluation work, but this does not mean that all policy-adjacent research projects should self-label as such. Self-labeling is easy when it is not entirely clear what ‘policy relevance’ means, and it spreads the term too thin. To gain clarity, it helps to parse studies that are policy adjacent from those that are decision-relevant. Being relevant to decisions or policymakers demands not just stakeholder engagement (another loose term) but stakeholder identification of the questions they need answered in order to make a prioritization, programmatic, or policy decision.
There must, therefore, be clear and tangible decision-makers who intend to make use of the generated evidence to work towards a pre-stated decision goal, including, possibly, a decision to shut the door on a particular policy/program option. While being policy-adjacent requires working alongside a real-world policy, being decision-relevant may not; it does, however, need to intend, ex ante, to inform a specific policy/program decision and to engage appropriately with stakeholders to that end.
This is far from a complete set of thoughts — I have more reading to do on mechanisms and more thinking to do about when murky and quirky decisions can be reasonably made for a single setting based on a single study in that murky and quirky setting. Nevertheless, the argument that there should be some clear standards for when the term ‘policy relevant’ can be applied and what it means holds.
*In the same somewhat horrifying way that a person might self-ascribe connoisseur status or a bar might self-label as a dive. No no no, vomit.