have evidence, will… um, erm (5 of 6, revisibility)

this is part of a series of joint posts with suvojit. it is also cross-posted at people, spaces, deliberation.

throughout this series of posts (1, 2, 3, 4), we have considered two main issues. first, how can evidence and evaluation be shaped to be made more useful – that is, directly usable – in guiding decision-makers to initiate, modify, scale-up or drop a program? or, as recently pointed out by Jeff Hammer, how can we better evaluate opportunity costs between programs, to aid in making decisions? second, given that evidence will always be only part of a policy or programmatic decision, how can we ensure that decisions are made (and perceived to be made) fairly?

for such assurance, we primarily rely on Daniels’ framework for promoting “accountability for reasonableness” (A4R) among decision-makers. if the four included criteria are met, Daniels argues, it brings legitimacy to deliberative processes and, he further argues, consequent fairness to the decision and coherence to decisions over time.

the first two criteria set us up for the third: first, decision-makers agree ex ante to constrain themselves to relevant reasons (determined by stakeholders) in deliberation and, second, make public the grounds for a decision after the deliberation. these first two, we argue, can aid organizational learning and coherence in decision-making over time by setting and using precedent – an issue that has been bopping around the blogosphere this week.

these criteria, and an approach ensuring A4R more generally, are also a partial response to increasing calls for donor transparency, made loudly in Mexico City this week via the Global Partnership for Effective Development Co-operation. these calls focus on the public availability of data as the key ingredient of donor (and decision-maker) transparency. we concur on its importance. but we argue that it is incomplete without an inclusive process of setting relevant reasons for how those data are used (recognizing that they will always be only part of the process) and without making the decision criteria public as well.

the publicity and transparency around decision-making open the door for A4R’s third criterion (and the subject of this post): the possibility to appeal and revise decisions. as Daniels notes, this condition “closes the loop between decision-makers and those who are affected by their policies.”

as a quick reminder of our guiding scenario: we specifically focus on the scenario of an agency deciding whether to sustain, scale, or shut down a given program after piloting it with an accompanying evaluation — commissioned explicitly to inform that decision.

in most decision-making of this kind, some stakeholders — often would-be beneficiaries — will not agree with the decision and may even feel, or be, adversely affected. while we suggest that stakeholders be involved in the earlier process of setting relevant reasons, a grievance-redressal or dispute-resolution mechanism, as provided by the revisibility criterion, gives these stakeholders an opportunity to voice their perspectives, based on the original grounds of the decision.

they can do this because the decision criteria are made public, via criterion 2. this “visible and public” space for further deliberation gives stakeholders a route “back into the policy formulation process.” stakeholders can use evidence available to them to advocate a certain way forward; it also allows them to revisit the decision-making criteria and the decisions those criteria fostered. stakeholders therefore have the opportunity to make a case for a change in the decision.

why might past decisions be questioned? since the appeals process is largely based on the original decision criteria, appeals arise when circumstances around those reasons have changed. for example, in considering relevant reasons, feasibility was one category of criteria we proposed, such as the government’s capacity to scale a program or its interest in the program. one can imagine that over time, over changes in regime, and over changes in politics and policy, the original answers to these criteria could change, opening space for appeals. an additional set of proposed relevant reasons related to cost, effectiveness, and cost-effectiveness. the costs of technologies and materials may change over time, or fresh evidence could emerge about the long-term benefits of programs. this alters the original cost-benefit ratio, again opening a space for appeals against the original decision.

such appeals may come from members of civil society (or government) that would like to see the program brought back to life (or to see it go away). these may also come from donors themselves wanting to look at their decision-making over time and implement changes in line with the changing context.

Daniels is careful to note, and we emphasize, that the power and purpose of this criterion is not that citizens will always overturn prior decisions.* decisions on limits are requisite, as needs generally outstrip resources. rather, the revisibility criterion allows for reconsideration of and reflection on those decisions by those knowledgeable about the topic and empowered to alter decisions, if seen fit and feasible. this can, Daniels notes, bring further legitimacy to decision-making processes and, again, improved decision-making over time.

we want to stress that these deliberations over decision-making and their ‘revisibility’ have to be situated in a rational and ethical decision-making framework, predicated on meeting needs fairly when not all can be met (distinct from, say, a legal framework). appeals will have to be judged on the original merits of the arguments as well as with the recognition that aid resources have limits (although, obviously, a different argument can be made that aid budgets should simply be bigger). moreover, appeals need to be judged by people who understand the original decision and have the power to change it, if that is the decision taken. when decision-making criteria are set, they set the roadmap for a possible appeals process and should be discussed and agreed upon accordingly.

we started this series of posts by admitting the limited role evidence plays in decision-making — even when those commissioning evidence intend specifically to inform that decision. we considered how planning for decision-making can help in the production of more useful evidence and also how decisions can be made fairly, through the delineation of relevant reasons, the publicity of the decision criteria ultimately used, and now, the possibility of revisiting those criteria and revising decisions.

our thoughts in this series of posts should not make fair decision-making seem like an impossible task. not all aspects of each of these considerations can be taken into account – the constraints of the real world are not lost on us and A4R remains an ideal, though we think one that can be approached. in our final post of this series, we therefore attempt to close the loop by looking at enforcement – asking how these ideas can be enforced and decision-makers held accountable.

*see, e.g., Richard Horton’s recent slide about limit-breaking decisions by courts and their effects on health care systems, as in cases like Colombia. experiments with health courts may be instructive. picture via @fanvictoria, citing @richardhorton1.



i’m not sure that means what you think it means (gold standard)

some thoughts, from peter byass, here, for the next time you want to refer to a technique as the ‘gold standard’ and what may be behind such a guarantee:

The verbal autopsy literature has extensively used and abused the concept of “gold standards” for validating cause of death determination. Metallurgists would say that 100% pure gold is an impossibility; the highest possible quality is normally certified as being 99.9% gold, while most of the quality-assured gold we encounter on an everyday basis ranges from 37% to 75% purity. It is perhaps also worth reflecting that 99% pure gold is an extremely soft and somewhat impractical material. Cause of death, on the spectrum of measurable biomedical phenomena, is also a somewhat soft commodity. For that reason, any approach to assessing cause of death involves alloying professional expertise with the best evidence in order to generate robust outcomes.

h/t jq

have evidence, will… um, erm? (4 of 6, going public)

this is a joint post with suvojit. it is also posted on people, spaces, deliberation.

in our last post, we discussed how establishing “relevant reasons” for decision-making ex ante may enhance the legitimacy and fairness of deliberations on resource allocation. we also highlighted that setting relevant decision-making criteria can inform evaluation design by clarifying what evidence needs to be collected.

we specifically focus on the scenario of an agency deciding whether to sustain, scale or shut down a given program after piloting it with an accompanying evaluation — commissioned explicitly to inform that decision. our key foci are both how to make evidence useful to informing decisions and how, recognizing that evidence plays a minor role in decision-making, to ensure decision-making is done fairly.

for such assurance, we primarily rely on Daniels’ framework for promoting “accountability for reasonableness” (A4R) among decision-makers. if the four included criteria are met, Daniels argues, it will bring legitimacy to deliberations and, he further argues, consequent fairness to the decision.

in this post, we continue with the second criterion for ensuring A4R: making public the decisions taken, drawing on the first criterion, relevant reasons. we consider why transparency – that is, making decision criteria public – enhances the fairness and coherence of those decisions. we also consider what ‘going public’ means for learning.

disclaimer: logistical uncertainties / room for conversation and experimentation

from the outset, we acknowledge the many unanswered questions about how much publicity or transparency suffices for fairness and how to carry it out.

  • should all deliberations be opened to the public? made available ex post via transcripts or recordings? or is semi-transparency — explicitly and publicly announcing ex post the criteria deemed necessary and sufficient to take the final decision — acceptable, while the deliberation remains behind closed doors?
  • who is the relevant public?
  • can transparency be passive – making the information available to those who seek it out – or does fairness require a more active approach?
  • what does ‘available’ or ‘public’ mean in contexts of low-literacy and limited media access?

we do not address these questions — which are logistical and empirical as well as moral — here. as the first-order concern, we consider why this criterion matters.

 

fairness in specific decisions

any decision about resource allocation and limit-setting will be contrary to the preferences of some stakeholders – both those at and not at the decision table. in our scenario, for example, some implementers will have invested some quantity of blood, sweat and tears into piloting a program and may, as a result, have opinions on whether the program should continue; others, comfortable in their inaction (as a result of a lack of directives or funds, or just plain neglect), will now have to participate in a scale-up. there will be participants who benefited during the pilot – and those who would have done so if the program were scaled – who may prefer to see the program maintained.

these types of unmet preferences shape Daniels’ central concern: what can an agency* say to those people whose preferences are not met by a decision to convince them that, indeed, the decision “seems reasonable and based on considerations that take… [their] welfare into account?”** being able to give acceptable explanations to stakeholders about a decision is central to fairness.

 

coherence across decisions

the acceptability of criteria for a given decision contributes to the fairness of that decision. but the long-run legitimacy of decision-makers benefits from consistency and coherence in organizational policy. transparency, and the explicitness it requires, can foster this.

once reasons for a decision are made public, it becomes more difficult to treat similar cases differently – the use of ‘precedent’ in judicial cases aptly illustrates this phenomenon. treating like cases alike is an important requirement of fairness. Daniels envisions that a series of explicated decisions can function as an organizational counterpart of ‘case law’. future decision-makers can draw on past deliberations to establish relevant reasons, and deviations from past decisions would need to be justified by relevant reasons.

implications for learning, decision-making and evaluations

if all decision-makers acknowledge that, at least, the final reasons for their decisions will be publicly accessible, how might that change the way they commission an evaluation and set about using the evidence from it?

first, it should encourage a review of past deliberations to help determine currently relevant reasons. second, it might encourage decision-makers and evaluators to consider as relevant those reasons and measures that will be explainable and understandable to the public(s) when justifying their decisions.

  • in planning evaluations, decision-makers and researchers will have to consider the clarity of methods of data collection and analysis — effectively, will it pass a ‘grandmother test’? moreover, does it pass such a test when that granny is someone affected by your allocative decision? remember the central question that makes this criterion necessary: what can an agency say to those whose preferences are not met by a decision that, indeed, the decision “seems reasonable and based on considerations that take… [their] welfare into account?”
  • there are reasons that decision-makers might shy away from transparency. in his work on health plans, Daniels notes that such organizations speculatively feared media attacks and litigation. in our pilot-and-evaluate scenario, some implementers may not be comfortable with publicizing pilots that may fail, or with raising the expectations of beneficiaries that are part of pilots.
  • the fear of failure may influence implementers; this may lead to low-risk/low-innovation pilots. again, this is an important consideration raised above, in the questions we did not answer: when and how much transparency suffices for fairness?

 

in our last blog, we stressed the importance of engaging stakeholders in setting ‘relevant reasons’ before a project begins, as a key step towards fair deliberative processes as well as a way of shaping evaluations to be useful for decision-making. ensuring publicity and transparency of the decision-making criteria strengthens the perception of a fair and reasonable process, both in individual cases and over time.

this also sets the stage for an appeals process, where stakeholders can use evidence available to them to advocate a certain way forward, and can revisit the decision-making criteria and the decisions they fostered – the subject of our next post in this series.

***

*we note that donors often don’t actually have to answer directly to implementers and participants for their decisions. we do not, however, dismiss this as a terrible idea.

**we are explicitly not saying ‘broader’ welfare because we are not endorsing a strictly utilitarian view that the needs of some can be sacrificed if the greater good is enhanced, no matter where or how that good is concentrated.