have evidence, will… um, erm (2 of 2)

this is the second in a series of posts with suvojit (and cross-posted here), initially planned as a series of two but growing to six…

reminder: the scenario 
in our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.

and yet. the role of evidence in decision-making of this kind is unclear.

in response, we argued for something akin to patton’s utilisation-focused evaluation. such an approach assesses the “quality” or “rigor” of evidence by considering how well it addresses the questions and purposes relevant to the decision at hand, using the most appropriate tools and timings for the particular political-economic moment, including the capacity of decision-makers to act on the evidence.

(this parallels woolcock’s definition of rigor, here. to the extent that we focus on a scenario in which donors and the M(e)E team design an evaluation intended to inform scale-up decisions (of effectiveness (not efficacy) across the relevant geographies and with likely implementers, say), we sidestep some – though not all – of pritchett’s (and sandefur’s) critiques of rigor vis-a-vis the multiple dimensions of context.)

in this post, we continue to explore this scenario, which sets up a seemingly ideal case of evidence-informed decision-making (donor-commissioned, directly instrumental evaluations (rather than symbolic or conceptual)). we move beyond designing usable/useful evaluations to ask what might facilitate donors making decisions that are, if not necessarily “right,” at least informed, reasoned and justifiable given available evidence. to this end, we present a potential set of criteria to (begin a conversation on how to) set up a process that can yield thoughtful, reasoned and ‘fair’ decisions that take evidence into account.

to begin, we ask: what does influence decision-making at present?

what does drive decision-making?

the recent semantic shift from “evidence-based” to “evidence-informed” decision-making reflects a brewing recognition among evidence nerds that decisions aren’t — can’t be (?), shouldn’t be (??)** — made in a strictly technocratic way. most political scientists and policymakers — and certainly politicians — have known this for a very long time.

politics are constitutive of policymaking. full stop. it is naive to proceed under any illusions about evidence replacing politics or, more generally, the need to think. researchers and M&Eers can learn more about these processes: party platforms and ideologies, election cycles and decision timetables — and potentially understand how to leverage them — but they don’t go away no matter how hard we wish for technocratic decision-making.

participants at a 2012 conference on evidence-based policy-making generally agreed that “evidence is a relatively minor factor in most policy maker’s decision making” and that “many other factors” influence the decisions made. additional factors in policy decision-making include:

  • inertia, path-dependence, habit
  • administrative feasibility to implement
  • decision-maker and public values, ideologies and perceptions about the way things are and ought to be
  • political benefit/cost of adding or removing a visible program
  • alignment of program’s expected impact trajectory with political cycles, opportunity windows
  • political manipulation
  • personal & professional ambition, the interests of powerful advocates and lobbyists
  • justifying past budgets and decisions
  • personal and expert experience, gut feelings

given that all this (and more) is usually part of any decision-making reality, we try to lay out, below, an approach to guide decision-making.

deliberative process
our proposal draws heavily on norman daniels’s work on “accountability for reasonableness” (A4R), a rawlsian-influenced approach to procedural justice with regard to distributing resources scarcer than the needs that require fulfilling.***** daniels asks whether, in the absence of clearly fair outcomes or principles, a fair process could be established in a particular context.

to this end, A4R pursues “pure (if imperfect) procedural justice” – a process by which, in the absence of clear principles**** of decision-making (for example, strictly following the results of a cost-effectiveness analysis** or giving complete priority to the worst-off), ex ante agreement on the process of decision-making will lead to outcomes that can be accepted as “fair.”***

in this case, we ask how we could shape the decision-making deliberation process ex ante so that, regardless of the decision taken by designated decision-makers, all stakeholders feel the decision is ‘fair’ because the process was deemed fair, even if the decision was not their favored outcome. daniels proposes four criteria to guide the formation of such a process.

below, we introduce the basic criteria. we will look at each of these in greater detail in a set of future posts. (get excited!)

1. relevant reasons

what types of reasons will be considered “relevant,” and therefore permissible, in decision-making? these reasons, once agreed, could also influence the types of data collected in the evaluation itself. we are not proposing that each of the criteria be given an ex ante weight so that there is a precise algorithm for decision-making, only that it be agreed in advance what is on and off the table.

another key consideration, of course, is who will be involved in setting the relevant reasons and who will be involved in the actual decision-making. would there, for example, be a mechanism for public participation or comment?

2. transparency

how transparent should the decision-making process be, including those reasons deemed relevant for decision-making? should everything be made public, or does that make it too difficult to speak honestly? some stakeholders will need ‘cover’ and may not want their full views publicized. might a commitment to transparency scare away implementing organisations from trying out innovative ideas for fear of failure – especially if it might be publicly known?

a commitment to transparency includes deciding the extent to which each of the following will be made public, and at what point in time: the agreed relevant reasons, the process of deliberation, and whether a full transcript or just a summary of the deliberation is released.

3. revisability

in the initial A4R framework, based on health insurance, the revisability criterion related to appeals made given new evidence. for donor programmes that employ a particular technology that makes it prohibitively expensive to scale, we can imagine that a breakthrough that lowers the price of the technology should also lead the donor to revisit their decision not to scale.

another twist on revisability in the case of development programmes could be an appeals process for members of the public or civil society who were part of the programme pilot, to argue for (or against) the programme being continued.

4. enforceability

the enforceability criterion requires that someone have the institutional authority to make sure that the other conditions are met.

summary / conclusion

we recognise there are many unresolved questions in operationalising the considerations we offer – relevant reasons, transparency, revisability and enforceability. but this is an attempt to lay out considerations that matter and begin a much-needed conversation. thinking along the lines we propose, and discussing how to expand on it, is, we feel, the first step towards operationalising support for decisions that are not only evidence-informed but also reasonable.

in sum, we want to call for better preparing evidence for decision-making (previous post) but also remind ourselves to recognise the actual – and desirable – place of evidence in these decisions (this post).

  • * we would like to acknowledge the sounding-board excellence of arjun, payal, sameer and urmy, as representatives of MNDC.
  • ** as a case of strictly following CEA, and why evidence perhaps shouldn’t be – setting aside whether it can be – the only driving reason for a decision, consider the 1990s’ Oregon effort at strictly technocratic priority-setting for medicaid.
  • *** daniels notes that the validity of the approach is premised on the supposition that we can better agree on a fair process than on principles – this premise needs to be empirically documented and tested to move the conversation forward.
  • **** see daniels on the ‘four unsolved rationing problems’ with regard to health decision-making at a population level.
  • ***** daniels’ ideas have yet to be tested empirically.

it’s all just a little bit of history repeating

(soundtrack.)

this is an idea i have been turning over for some time in my head but will finally put out there since i am too exhausted to do any other work or to care any longer if it makes sense (that said, it does seem that some seemingly smart people have suggested similar things – e.g. here and here and here – so i am hoping it is not such a crazy suggestion).

i should also preface this by saying that i read historical non-fiction for fun, found monuments men to be a page-turner, thought the ancient egypt exhibit at the field museum was probably the coolest thing ever — so thinking about making history exciting (not just in an indiana jones way) is, well, exciting. in addition, i’ve heard a few too many way-off statements of late, about what reagan did and did not do or how the germans bombed the US at pearl harbor…

i am beginning to think that history curricula – in high school, undergrad – should be taught backwards, from current affairs to the ancient past. in part, this is because in all the classes i took, we never reached the present day. american history usually got us to truman or eisenhower and then i’d sort of fill in the vietnam era from music, books and movies. in european history, we’d similarly make it to yalta and the beginning of decolonization and then we’d run out of time in the school year after spending a month on gandhi.

this leaves out an awful lot of pretty important stuff with immediate consequences for what’s going on in the world today. and while it is important to know about the cradle of civilization and the greek city-states… some of the stuff from the past 20 years is pretty important too — including important for being a national and global citizen. and, by being connected to current events, it seems more easily graspable than memorizing a line of kings starting from 1066.

it’s understandable that it is fundamentally easier to teach from the beginning onward — it makes chronological sense, it’s how most of the books already do it and it moves us from a point of less information to a point of overwhelming amounts of information involving ~193+ countries and wars around the world rather than the world wars. at which point in [enter present year] would one start? which part of the world or with which event? would these choices seem to reveal a political stance or ideological preference?

these are tough questions. but i’d like to see more people give it a try, peeling back from a key event to its antecedents, to discover how we got to where we are now. in so doing, it may be that history ‘matters’ more and can better sustain student attention, and also that concepts like the counterfactual could be taught in a way that the seemingly inevitable forward march of events does not allow, by forcing questions about the causes-of-effects rather than the effects-of-causes. moreover, this approach may better make the point of the political-ness of the telling of history – it is not just a march of facts but selected facts or near-facts or things-that-sound-like-they-could-be-facts, selected by particular people at particular points in time. this, too, is an important lesson.

have evidence, will… um, erm? (1 of 2)

this is a joint post with suvojit chattopadhyay, also cross-posted here.

commissioning evidence

among those who talk about development & welfare policy/programs/projects, it is tres chic to talk about evidence-informed decision-making (including the evidence on evidence-informed decision-making and the evidence on the evidence on…[insert infinite recursion]).

this concept — formerly best-known as evidence-based policy-making — is contrasted with faith-based or we-thought-really-really-hard-about-this-and-mean-well-based decision-making. it is also contrasted with the (sneaky) strategy of policy-based evidence-making. using these approaches may lead to not-optimal decision-making, adoption of not-optimal policies and subsequent not-optimal outcomes.

in contrast, proponents of the evidence-informed decision-making approach believe that, through this approach, decision-makers are able to make sounder judgments about which policies provide the best way forward, which may not, and which should perhaps be repealed or revised. this may lead them to make decisions on policies according to these judgments, which, if properly implemented or rolled back, may, in turn, improve development and welfare outcomes. it is also important to bear in mind, however, that it is not evidence alone that drives policymaking. we discuss this idea in more detail in our next post.

in this post, we work with a scenario where evidence is accepted as an important determinant of decision-making and this is acknowledged, at least broadly, by stakeholders who make explicit (or implicit) commitments to ‘use’ the evidence generated to drive their decisions. as good as this may sound, there are barriers to making decisions informed by evidence. one is the limited stock of accessible, well-considered data and rigorous analyses, including the stock in readable-yet-appropriately-nuanced, relevant, timely forms. several organizations’ raison d’etre is to increase this supply of ‘much needed’ evidence. another barrier is lack of demand among decision-makers for (certain types of rigorous) evidence (not just for the per diems that come with listening to evidence) – including evidence that could come back positive or negative.

we don’t disagree that both supply and demand for high-quality evidence are important issues. but these two posts are not about those scenarios. rather, we focus on a scenario in which there is, at least, the demand for commissioning evidence.

key examples are donor agencies, big (I)NGOs (BINGOs, if we must) or even government ministries that engage in evidence-generating activities, particularly when the stated goal is to make decisions about piloted programs (continue funding, scale-up, scrap, etc) or make significant tweaks to on-going programs. this should be the ‘easiest’ case of using evidence to inform a decision, where demand for evidence leads to the generation of a supply of by-definition-relevant evidence.

and yet, from what we have seen and experienced, even agencies that have made it to this seemingly enlightened precipice of evidence-informed decision-making don’t know, at a practical level, what to do with that evidence once they’ve got it. we are not suggesting that those inside such agencies are not skilled at reading and interpreting evidence. rather, we suggest that so much attention has been given to supplying and demanding evidence that use has been overlooked.

absent attention on use, how generated evidence informs decision-making, if it does at all, is something of a mystery. absent a plan for use, it can also be mysterious (or, at least, not transparent) as to why the agency bothered to commission the evidence-generation at all. we suspect that better considered evidence and better plans for use can improve the use of evidence. our hunches drive these two blog posts.

in this post, we make two main points.

one, we hold that a careful formative stage during which stakeholders are engaged to help develop research questions while remaining mindful of the policy process can help generate evidence that those stakeholders will know how to use. there is overlap and complementarity between our suggestions and the recent ideas of Monitoring, Structured experiential Learning & Evaluation (MeE; Pritchett, Samji & Hammer) and Problem-Driven Iterative Adaptation (PDIA; Andrews, Pritchett & Woolcock). however, here, we remain focused on planning for evaluation and setting the questions.

two, and relatedly, we advocate for more careful planning of how the generated evidence will be used in decision-making, regardless of the outcomes. in our next post, we take seriously that evidence is far from the only decision-making criterion. we discuss how evidence might be fit into a fair, deliberative process of decision-making by agencies and what such a process might entail.

at the outset, we recognize that there is a poor one-to-one mapping of the results of a single rigorous study or paper with policy changes (e.g. and also fun). in these two posts, however, we stay focused on studies that are set up specifically to guide future decisions and thus *should*, by definition, be immediately relevant to policy/programmatic funding/scaling decisions.

formative work: assessing needs and interests of decision-makers and other stakeholders

an early and wise step, we think, in planning an evaluation that is not only policy-associated (we looked at a real, live policy!) but explicitly policy-relevant in terms of decision-making is to identify what kinds of decisions may be made at the end of the evaluation (i.e. what will be informed) and who may be involved. ‘involved’ includes elite decision-makers and possible policy champions and heroes; it also includes middle- and street-level bureaucrats who will implement the policy/program if that is the decision taken (see, e.g. here and here on getting buy-in beyond visible leaders).

among those who talk about demand-generation for evidence, there’s increasing recognition that stakeholder buy-in for the process of using evidence (not just for the policy under investigation) is required early on. but there seems to be less talk about actually asking stakeholders what they want to know in order to make decisions. we don’t suggest that what stakeholders deem most interesting should define the limits of what will be collected, analyzed and presented. many decision-makers won’t spontaneously crave rigorous impact evaluation.

there is plenty of evidence that decision-makers are heavily influenced by stories, images, even immersive experiences. this is not categorically bad and it certainly should not be ignored or discounted. rather, in addition to the types of data and analyses readily labelled as rigorous in the impact evaluation arena, we can be creative about collecting and analyzing additional types of data in more rigorous ways, positioned within a counterfactual framework. because, in the end, incorporating stakeholder preferences for the kinds of evidence they need to drive policy change would enhance the quality of the evidence-generation process.

another consideration relates to asking what magnitude of impacts decision-makers feel they need to see to be confident in making their decisions. we don’t suggest this is an easy question to ask — nor to answer. we only suggest that it could be a useful exercise to undertake (as with all our suggestions, empirical evidence from process data about decision-making would be very helpful).
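
as one illustration of how an answer to that question could feed directly into evaluation planning, here is a minimal sketch of a standard power calculation. the numbers are hypothetical and the use of statsmodels is our assumption (not anything from the scenario above); the point is simply that the smallest effect a decision-maker says would actually change their decision can be translated into a required sample size before the evaluation is commissioned.

```python
# minimal sketch (hypothetical numbers): turning a decision-maker's stated
# "smallest impact that would change our decision" into a required sample size.
from statsmodels.stats.power import TTestIndPower

# suppose stakeholders say a 0.15 standard-deviation improvement in the outcome
# is the smallest effect that would justify scaling the programme (assumed value)
minimum_meaningful_effect = 0.15  # in standardised (Cohen's d) units

solver = TTestIndPower()
n_per_arm = solver.solve_power(
    effect_size=minimum_meaningful_effect,
    alpha=0.05,   # conventional significance level
    power=0.80,   # conventional power target
)
print(f"participants needed per study arm: {n_per_arm:.0f}")
```

the specific tool matters less than the exercise: it forces an explicit conversation, before data collection begins, about what magnitude of change would actually be decision-relevant and whether the evaluation can plausibly be powered to detect it.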

a related exercise is to honestly assess reasoned expectations for the elapsed time between introducing an intervention and the potential expression of relevant impacts. the evaluation should be planned accordingly, as too short an evaluation period may not allow the relevant outcomes to emerge.

planning to use evidence

it often seems that commissioners of evidence (and even those who generate the evidence) don’t actively consider how the evidence will actually be used in design or funding or whatever decisions will be made. there seems to be even less consideration of how the evidence will be used regardless of what the outcome is – positive, negative, mixed, null (a point made by, among others in other fora, Jeannie Annan, here). this may be one reason null and negative results seem to go unaddressed.

if there is a (potentially imposed) desire to commission rigorous evidence, one might assume there is genuine equipoise (or uncertainty, also here) about the efficacy, effectiveness or cost-effectiveness of a policy/program. yet much of the talk about early buy-in is actually about the program and the potential to validate a flagship programme and justify related spending through evaluation — not about the value of the evaluation process itself for learning. we don’t think this represents the best use of evaluation resources.

an exercise early in the formative phase during which decision-makers consider how the evidence will help them make a decision may be useful – particularly if they are asked to consider scenarios in which the evidence is clearly positive, clearly negative or null, mixed, fuzzy or indeterminate. this might also help to clarify research questions that should be asked as part of an evaluation.

in a recent blog post, dr. ian goldman suggests getting decision-maker buy-in by asking “departments to submit proposals for evaluations so that they will want to use the findings.” this is an important step. but it does not mean that proposal-submitters have considered how they will use the evidence if it comes back anything but unequivocally positive for the policy/program/project in question.

dr. goldman also proposes asking departments to design “improvement plans” after their evaluations are complete. we’d like to hear more about this process. but we suspect that drafting such a plan early in the formative stage might actually inform some of the research questions, thus better linking the evaluation to action plans for improvement. for example, sophie at oxfam has written about IE results that left them with an “evidence puzzle” rather than a clear idea of how to improve the program. we don’t know if an early exercise in drafting an “improvement plan” would have yielded less puzzling outcomes — but that is an empirical question.

we hope that agencies doing such formative work will document and share the processes and their experiences.

be honest about the full theory of change for using evidence

in a good evaluation, positive validation is not the only possible outcome. therefore, the commissioning agency should honestly consider whether, if the results come back null or negative, the agency would actually be willing to pull or roll back the policy. in many cases, programs have political cachet and entitlement value regardless of the objective welfare benefits delivered. rolling back will not be a politically viable option in such cases. while it is important to build the general evidence base about policy/program cost/effectiveness, when an agency asks for evidence towards a particular decision that it isn’t actually willing to make, we are not sure the evaluation should go forward.

or, at least, we are uncertain whether it should go forward as a yes/no question, where a negative result implies stopping the program. we suspect that evaluation will start to be more appreciated by decision-makers if designed to compare the effectiveness of option A versus option B in delivering the favored program, rather than only examining whether option A works (and why). the former set-up provides ways forward regardless of the outcome; the latter may, in the political sense, not.

moving forward

in sum, we think that careful formative and needs-assessment work on what decision-makers (and potential implementers) want to see to be convinced and what types of evidence will inform decision-making may lead to the generation of evidence that is not only policy-related but genuinely policy-relevant. when an agency or ministry specifically commissions an evaluation with the stated goal of using it in decision-making, this seems particularly important. doing this work well will require collaboration between commissioners, implementers and evaluators.

in the next post, we (humbly) consider the overall role evidence plays in decision-making and consider how it might fit into an overall fair and deliberative process.