have evidence, will… um, erm (5 of 6, revisibility)

this is part of a series of joint posts with suvojit. it is also cross-posted at people, spaces, deliberation.

throughout this series of posts (1, 2, 3, 4), we have considered two main issues. first, how can evidence and evaluation be shaped to be made more useful – that is, directly useable – in guiding decision-makers to initiate, modify, scale-up or drop a program? or, as recently pointed out by Jeff Hammer, how can we better evaluate opportunity costs between programs to aid in making decisions? second, given that evidence will always be only part of a policy/programmatic decision, how can we ensure that decisions are made (and perceived to be made) fairly?

for such assurance, we primarily rely on Daniels’ framework for promoting “accountability for reasonableness” (A4R) among decision-makers. if the four included criteria are met, Daniels argues, it brings legitimacy to deliberative processes and, he further argues, consequent fairness to the decision and coherence to decisions over time.

the first two criteria set us up for the third: first, decision-makers agree ex ante to constrain themselves to relevant reasons (determined by stakeholders) in deliberation and, second, make public the grounds for a decision after the deliberation. these first two, we argue, can aid organizational learning and coherence in decision-making over time by setting and using precedent – an issue that has been bopping around the blogosphere this week.

these criteria, and an approach ensuring A4R more generally, are also a partial response to increasing calls for donor transparency, made loudly in Mexico City this week via the Global Partnership for Effective Development Co-operation. these calls focus on the public availability of data as the key ingredient of donor (and decision-maker) transparency. we concur on their importance. but we argue that data availability is incomplete without an inclusive process of setting relevant reasons for how those data are used (recognizing that they will always only be part of the process) and without making the decision criteria public as well.

the publicity and transparency around decision-making opens the door for A4R’s third criterion (and the subject of this post): the possibility to appeal and revise decisions. as Daniels notes, this condition “closes the loop between decision-makers and those who are affected by their policies.”

as a quick reminder of our guiding scenario: we specifically focus on the scenario of an agency deciding whether to sustain, scale, or shut down a given program after piloting it with an accompanying evaluation — commissioned explicitly to inform that decision.

in most decision-making of this kind, some stakeholders — often would-be beneficiaries — will not agree with the decision and even feel or be adversely affected. while we suggest that stakeholders be involved in the earlier process of setting relevant reasons, a grievance-redressal or dispute-resolution mechanism, as provided by the revisibility criterion, gives these stakeholders an opportunity to voice their perspectives, based on the original grounds of the decision.

they can do this because the decision criteria are made public, via criterion 2. this “visible and public” space for further deliberation gives stakeholders a route “back into the policy formulation process.” stakeholders can use evidence available to them to advocate a certain way forward; it also allows them to revisit the decision-making criteria and the decisions those criteria fostered. stakeholders therefore have the opportunity to make a case for a change in the decision.

why might past decisions be questioned? since the appeals process is largely based on the original decision criteria, appeals arise when circumstances around those reasons have changed. for example, in considering relevant reasons, feasibility was one category of criteria we proposed, such as government’s capacity to scale a program or its interest in the program. one can imagine that over time, over changes in regime, and over changes in politics and policy, the original answers to these criteria could change, opening space for appeals. an additional set of proposed relevant reasons related to cost, effectiveness, and cost-effectiveness. the costs of technologies and materials may change over time, or fresh evidence could emerge about the long-term benefits of a program. this alters the original cost-benefit ratio, again opening a space for appeals against the original decision.

such appeals may come from members of civil society (or government) that would like to see the program brought back to life (or to see it go away). these may also come from donors themselves wanting to look at their decision-making over time and implement changes in line with the changing context.

Daniels is careful to note, and we emphasize, that the power and purpose of this criterion is not that citizens will always overturn prior decisions.* decisions on limits are requisite, as needs generally outstrip resources. rather, the revisibility criterion allows for reconsideration of and reflection on those decisions by those knowledgeable about the topic and empowered to alter decisions, if seen fit and feasible. this can, Daniels notes, bring further legitimacy to decision-making processes and, again, improved decision-making over time.

we want to stress that these deliberations over decision-making and their ‘revisibility’ have to be situated in a rational and ethical decision-making framework, predicated on meeting needs fairly when not all can be met (distinct from, say, a legal framework). appeals will have to be judged on the original merits of the arguments as well as with recognition that aid resources have limits (although obviously, a different argument can be made that aid budgets should simply be bigger). moreover, appeals need to be judged by people who understand the original decision and have the power to change it, if that is the decision taken. when decision-making criteria are set, they set the roadmap for a possible appeals process and should be accordingly discussed and agreed upon.

we started this series of posts by admitting the limited role evidence plays in decision-making — even when those commissioning evidence intend specifically to inform that decision. we considered how planning for decision-making can help in the production of more useful evidence and also how decisions can be made fairly, through the delineation of relevant reasons, the publicity of the decision criteria ultimately used, and now, the possibility of revisiting decision criteria and revising decisions.

our thoughts in this series of posts should not make fair decision-making seem like an impossible task. not all aspects of each of these considerations can be taken into account – the constraints of the real world are not lost on us and A4R remains an ideal, though we think one that can be approached. in our final post of this series, we therefore attempt to close the loop by looking at enforcement – asking how these ideas can be enforced and decision-makers held accountable.

*see, e.g., Richard Horton’s recent slide about limit-breaking decisions by courts and their effects on health care systems, as in cases like Colombia. experiments with health courts may be instructive. picture via @fanvictoria, citing @richardhorton1.

have evidence, will… um, erm? (4 of 6, going public)

this is a joint post with suvojit. it is also posted on people, spaces, deliberation.

in our last post, we discussed how establishing “relevant reasons” for decision-making ex ante may enhance the legitimacy and fairness of deliberations on resource allocation. we also highlighted that setting relevant decision-making criteria can inform evaluation design by indicating what evidence needs to be collected.

we specifically focus on the scenario of an agency deciding whether to sustain, scale or shut down a given program after piloting it with an accompanying evaluation — commissioned explicitly to inform that decision. our key foci are both how to make evidence useful to informing decisions and how, recognizing that evidence plays a minor role in decision-making, to ensure decision-making is done fairly.

for such assurance, we primarily rely on Daniels’ framework for promoting “accountability for reasonableness” (A4R) among decision-makers. if the four included criteria are met, Daniels argues, it will bring legitimacy to deliberations and, he further argues, consequent fairness to the decision.

in this post, we continue with the second criterion for ensuring A4R: the publicity of decisions taken, drawing on the first criterion (relevant reasons). we consider why transparency – that is, making decision criteria public – enhances the fairness and coherence of those decisions. we also consider what ‘going public’ means for learning.

disclaimer: logistical uncertainties / room for conversation and experimentation

from the outset, we acknowledge the many unanswered questions about how much publicity or transparency suffices for fairness and how to carry it out.

  • should all deliberations be opened to the public? made available ex post via transcripts or recordings? or is semi-transparency — explicitly and publicly announcing ex post the criteria deemed necessary and sufficient to take the final decision — acceptable, while the deliberation itself remains behind closed doors?
  • who is the relevant public?
  • can transparency be passive – making the information available to those who seek it out – or does fairness require a more active approach?
  • what does ‘available’ or ‘public’ mean in contexts of low-literacy and limited media access?

we do not address these questions — which are logistical and empirical as well as moral — here. as the first-order concern, we consider why this criterion matters.

 

fairness in specific decisions

any decision about resource allocation and limit-setting will be contrary to the preferences of some stakeholders – both those at and not at the decision table. in our scenario, for example, some implementers will have invested some quantity of blood, sweat and tears into piloting a program and may, as a result, have opinions on whether the program should continue; others, comfortable in their inaction (as a result of a lack of directives or funds, or just plain neglect), will now have to participate in a scale-up. there will be participants who benefited during the pilot – and those who would have done so if the program were scaled – who may prefer to see the program maintained.

these types of unmet preferences shape Daniels’ central concern: what can an agency* say to those people whose preferences are not met by a decision to convince them that, indeed, the decision “seems reasonable and based on considerations that take… [their] welfare into account?”** being able to give acceptable explanations to stakeholders about a decision is central to fairness.

 

coherence across decisions

the acceptability of the criteria for a given decision contributes to the fairness of that decision. but the long-run legitimacy of decision-makers benefits from consistency and coherence in organizational policy. transparency, and the explicitness it requires, can foster this.

once reasons for a decision are made public, it becomes more difficult to treat similar cases differently – the use of ‘precedent’ in judicial cases aptly illustrates this phenomenon. treating like cases alike is an important requirement of fairness. Daniels envisions that a series of explicated decisions can function as an organizational counterpart of ‘case law’. future decision-makers can draw on past deliberations to establish relevant reasons, and deviations from past decisions would need to be justified by relevant reasons.

 
implications for learning, decision-making and evaluations

if all decision-makers acknowledge that, at least, the final reasons for their decisions will be publicly accessible, how might that change the way they commission an evaluation and set about using the evidence from it?

first, it should encourage a review of past deliberations to help determine currently relevant reasons. second, it might encourage decision-makers and evaluators to consider as relevant those reasons and measures that will be explainable and understandable to the public(s) when justifying their decisions.

  • in planning evaluations, decision-makers and researchers will have to consider the clarity of their methods of data collection and analysis — effectively, will it pass a ‘grandmother test’? moreover, does it pass such a test when that granny is someone affected by your allocative decision? remember the central question that makes this criterion necessary: what can an agency say to those whose preferences are not met to convince them that, indeed, the decision “seems reasonable and based on considerations that take… [their] welfare into account?”
  • there are reasons that decision-makers might shy away from transparency. in his work on health plans, Daniels notes that such organizations speculated that transparency would invite media attacks and litigation. in our pilot-and-evaluate scenario, some implementers may not be comfortable with publicizing pilots that may fail, or with raising the expectations of beneficiaries who are part of pilots.
  • the fear of failure may influence implementers, which may lead to low-risk/low-innovation pilots. again, this relates to an important question raised above that we did not answer: when and how much transparency suffices for fairness?

 

in our last blog, we stressed the importance of engaging stakeholders in setting ‘relevant reasons’ before a project begins, as a key step towards fair deliberative processes as well as a way of shaping evaluations to be useful for decision-making. ensuring publicity and transparency of the decision-making criteria strengthens the perception of a fair and reasonable process in individual cases and over time.

this also sets the stage for an appeals process, where stakeholders can use evidence available to them to advocate a certain way forward; it also allows for stakeholders to revisit the decision-making criteria and the decisions they fostered – the subject of our next post in this series.

***

*we note that donors don’t often actually have to answer directly to implementers and participants for their decisions. we do not, however, dismiss this as a terrible idea.

**we are explicitly not saying ‘broader’ welfare because we are not endorsing a strictly utilitarian view that the needs of some can be sacrificed if the greater good is enhanced, no matter where or how that good is concentrated.

have evidence, will… um, erm (2 of 2)

this is the second in a series of posts with suvojit (and cross-posted here), initially planned as a series of two but growing to six…

reminder: the scenario 
in our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.

and yet. the role of evidence in decision-making of this kind is unclear.

in response, we argued for something akin to patton’s utilisation-focused evaluation. such an approach assesses the “quality” or “rigor” of evidence by considering how well it addresses the questions and purposes needed for decision-making, with the most appropriate tools and timings to facilitate decision-making in a particular political-economic moment, including the capacity of decision-makers to act on evidence.

(this parallels woolcock’s definition of rigor, here. to the extent that we focus on a scenario in which donors and the M&E team design an evaluation intended to inform scale-up decisions (of effectiveness (not efficacy) across the relevant geographies and with likely implementers, say), we sidestep some – though not all – of pritchett (and sandefur’s) critiques of rigor vis-a-vis the multiple dimensions of context.)

in this post, we continue to explore this scenario, which sets up a seemingly ideal case of evidence-informed decision-making (donor-commissioned, directly instrumental evaluations (rather than symbolic or conceptual)). we move beyond designing usable/useful evaluations to ask what might facilitate donors making decisions that are, if not necessarily “right,” at least informed, reasoned and justifiable given available evidence. to this end, we present a potential set of criteria to (begin a conversation on how to) set up a process that can yield thoughtful, reasoned and ‘fair’ decisions that take evidence into account.

to begin, we ask what does influence decision-making at present.

what does drive decision-making?

the recent semantic shift from “evidence-based” to “evidence-informed” decision-making reflects a brewing recognition among evidence nerds that decision-making isn’t — can’t be (?), shouldn’t be (??)** — made in a strictly technocratic way. most political scientists and policymakers — and certainly politicians — have known this for a very long time.

politics are constitutive of policymaking. full stop. it is naive to proceed under any illusions about evidence replacing politics or the need to think more generally. researchers and M&Eers can learn more about these processes – party platforms and ideologies; election cycles and decision timetables – and potentially understand how to leverage them, but these forces don’t go away no matter how hard we wish for technocratic decision-making.

participants at a 2012 conference on evidence-based policy-making generally agreed that “evidence is a relatively minor factor in most policy maker’s decision making” and that “many other factors” influence the decisions made. additional factors in policy decision-making include:

  • inertia, path-dependence, habit
  • administrative feasibility to implement
  • decision-maker and public values, ideologies and perceptions about the way things are and ought to be
  • political benefit/cost of adding or removing a visible program
  • alignment of program’s expected impact trajectory with political cycles, opportunity windows
  • political manipulation
  • personal & professional ambition, the interests of powerful advocates and lobbyists
  • justifying past budgets and decisions
  • personal and expert experience, gut feelings

given that all this (and more) is usually part of any decision-making reality, we try to lay out, below, an approach to guide decision-making.

deliberative process
our proposal draws heavily on norman daniels’ work on “accountability for reasonableness” (A4R), a rawlsian-influenced approach to procedural justice with regard to distributing resources that are scarcer than the needs requiring fulfillment.***** daniels asks whether, in the absence of clearly fair outcomes or principles, a fair process could be established in a particular context.

to this end, A4R pursues “pure (if imperfect) procedural justice” – a process by which, in the absence of clear principles**** of decision-making (for example, strictly following the results of a cost-effectiveness analysis** or giving complete priority to the worst-off), ex ante agreement on the process of decision-making will lead to outcomes that can be accepted as “fair.”***

in this case, we ask how we could shape the decision-making deliberation process ex ante so that, regardless of the decision taken by designated decision-makers, all stakeholders feel the decision is ‘fair’ because the process was fair, even if the outcome was not the one they favored. daniels proposes four criteria to guide the formation of such a process.

below, we introduce the basic criteria. we will look at each of these in greater detail in a set of posts in future. (get excited!)

1. relevant reasons

what types of reasons will be considered “relevant,” and therefore permissible, in decision-making? these reasons, once agreed, could also influence the types of data collected in the evaluation itself. we are not proposing that each of the criteria be given an ex ante weight so that there is a precise algorithm for decision-making, only that it be agreed in advance what is on and off the table.

another key consideration, of course, is who will be involved in setting the relevant reasons and who will be involved in the actual decision-making. would there, for example, be a mechanism for public participation or comment?

2. transparency

how transparent should the decision-making process be, including the reasons deemed relevant for decision-making? should everything be made public, or does that make it too difficult for participants to speak honestly? some stakeholders will need ‘cover’ and to not have their full views publicized. might a commitment to transparency scare away implementing organisations from trying out innovative ideas for fear of failure – especially if it might be publicly known?

a commitment to transparency includes deciding the extent to which each of the following will be made public, and at what point in time: the determined relevant reasons, the process of deliberation, and the full transcript or just a summary of the deliberation.

3. revisibility

in the initial A4R framework, developed for health insurance, the revisibility criterion related to appeals made in light of new evidence. for donor programmes that employ a particular technology that renders scale-up prohibitively expensive, we can imagine that a breakthrough that lowers the price of the technology should lead the donor to revisit their decision not to scale.

another twist on revisibility in the case of development programmes could be an appeals process for the public / civil society who were part of the programme pilot, to argue for (or against) the programme being continued.

4. enforceability

the enforceability criterion requires that someone have the institutional authority to make sure that the other conditions are met.

summary / conclusion

we recognise there are many unresolved questions in operationalising the considerations we offer – relevant reasons, transparency, revisibility and enforceability. but this is an attempt to lay out the considerations that matter and to begin a much-needed conversation. thinking along the lines we propose, and discussing how to expand on them, is, we feel, the first step towards supporting decisions that are not only evidence-informed but also, moreover, reasonable.

in sum, we want to call for better preparing evidence for decision-making (previous post) but also remind ourselves to recognise the actual – and desirable – place of evidence in these decisions (this post).

  • * we would like to acknowledge the sounding-board excellence of arjun, payal, sameer and urmy, as representatives of MNDC.
  • ** as a case of strictly following CEA – and why evidence perhaps shouldn’t be (setting aside whether it can be) the only driving reason for a decision – consider the 1990s’ Oregon effort at strictly technocratic priority-setting for medicaid.
  • *** daniels notes that the validity of the approach is premised on the supposition that we can better agree on a fair process than on principles – this premise needs to be empirically documented and tested to move the conversation forward.
  • **** see daniels on the ‘four unsolved rationing problems’ with regard to health decision-making at a population level.
  • ***** daniels’ ideas have yet to be tested empirically.