have evidence, will… um, erm (2 of 2)

this is the second in a series of posts with suvojit (and cross-posted here), initially planned as a series of two but growing to six…

reminder: the scenario 
in our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.

and yet. the role of evidence in decision-making of this kind is unclear.

in response, we argued for something akin to patton’s utilisation-focused evaluation. such an approach assesses the “quality” or “rigor” of evidence by how well it addresses the questions and purposes that matter for decision-making, using the most appropriate tools and timing for a particular political-economic moment, and taking into account the capacity of decision-makers to act on the evidence.

(this parallels woolcock’s definition of rigor, here. to the extent that we focus on a scenario in which donors and the M&E team design an evaluation intended to inform scale-up decisions (of effectiveness (not efficacy) across the relevant geographies and with likely implementers, say), we sidestep some – though not all – of pritchett’s (and sandefur’s) critiques of rigor vis-a-vis the multiple dimensions of context.)

in this post, we continue to explore this scenario, which sets up a seemingly ideal case of evidence-informed decision-making: donor-commissioned, directly instrumental evaluations (rather than symbolic or conceptual ones). we move beyond designing usable/useful evaluations to ask what might help donors make decisions that are, if not necessarily “right,” at least informed, reasoned and justifiable given the available evidence. to this end, we present a potential set of criteria to (begin a conversation on how to) set up a process that can yield thoughtful, reasoned and ‘fair’ decisions that take evidence into account.

to begin, we ask what actually influences decision-making at present.

what does drive decision-making?

the recent semantic shift from “evidence-based” to “evidence-informed” decision-making reflects a brewing recognition among evidence nerds that decision-making isn’t — can’t be (?), shouldn’t be (??)** — made in a strictly technocratic way. most political scientists and policymakers — and certainly politicians — have known this for a very long time.

politics are constitutive of policymaking. full stop. it is naive to proceed under any illusion that evidence will replace politics, or the need to think more generally. researchers and M&Eers can learn more about the processes at play (party platforms and ideologies, election cycles and decision timetables) and potentially understand how to leverage them, but the politics don’t go away no matter how hard we wish for technocratic decision-making.

participants at a 2012 conference on evidence-based policy-making generally agreed that “evidence is a relatively minor factor in most policy makers’ decision making” and that “many other factors” influence the decisions made. additional factors in policy decision-making include:

  • inertia, path-dependence, habit
  • administrative feasibility to implement
  • decision-maker and public values, ideologies and perceptions about the way things are and ought to be
  • political benefit/cost of adding or removing a visible program
  • alignment of a program’s expected impact trajectory with political cycles and opportunity windows
  • political manipulation
  • personal & professional ambition, the interests of powerful advocates and lobbyists
  • justifying past budgets and decisions
  • personal and expert experience, gut feelings

given that all this (and more) is usually part of any decision-making reality, we try to lay out, below, an approach to guide decision-making.

deliberative process
our proposal draws heavily on norman daniels’s work on “accountability for reasonableness” (A4R), a rawlsian-influenced approach to procedural justice with regard to distributing resources scarcer than the needs that require fulfilling.***** daniels asks whether, in the absence of clearly fair outcomes or principles, a fair process could be established in a particular context.

to this end, A4R pursues “pure (if imperfect) procedural justice” – a process by which, in the absence of clear principles**** of decision-making (for example, strictly following the results of a cost-effectiveness analysis** or giving complete priority to the worst-off), ex ante agreement on the process of decision-making will lead to outcomes that can be accepted as “fair.”***
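to make the first of those example principles concrete, here is a minimal, hypothetical sketch (all programmes, costs and effects are invented) of what ‘strictly following the results of a cost-effectiveness analysis’ would mean as a decision rule: rank options by cost per unit of outcome and fund down the list until the budget runs out.

```python
# hypothetical illustration of a strictly CEA-driven decision rule.
# every programme, cost and effect size below is invented.

budget = 1_000_000  # total funds available (USD)

# (programme, total cost in USD, outcome units delivered, e.g. DALYs averted)
programmes = [
    ("bednets",        400_000, 8_000),
    ("deworming",      150_000, 2_000),
    ("cash transfers", 700_000, 5_000),
]

# rank by cost per outcome unit (lower is better) and fund greedily
for name, cost, effect in sorted(programmes, key=lambda p: p[1] / p[2]):
    if cost <= budget:
        budget -= cost
        print(f"fund {name}: ${cost / effect:.0f} per outcome unit")
    else:
        print(f"skip {name}: exceeds remaining budget")
```

the point of A4R is precisely that most real decisions cannot, and arguably should not, be reduced to a rule like this; the sketch only shows what the ‘strict’ pole looks like.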

in this case, we ask how we could shape the deliberation process ex ante so that, regardless of the decision taken by the designated decision-makers, all stakeholders feel the decision is ‘fair’ because the process was deemed fair, even if the outcome was not the one they favored. daniels proposes four criteria to guide the formation of such a process.

below, we introduce the basic criteria. we will look at each of these in greater detail in a set of future posts. (get excited!)

1. relevant reasons

what types of reasons will be considered “relevant,” and therefore permissible, in decision-making? these reasons, once agreed, could also influence the types of data collected in the evaluation itself. we are not proposing that each criterion be given an ex ante weight so that there is a precise algorithm for decision-making, only that what is on and off the table will be agreed in advance (sketched below).

another key consideration, of course, is who will be involved in setting the relevant reasons and who will be involved in the actual decision-making. would there, for example, be a mechanism for public participation or comment?
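as a purely hypothetical sketch of what such an ex ante agreement might look like (every reason and label below is invented), the ‘on/off the table’ decision could be captured in a simple structured register that deliberation then has to respect:

```python
# hypothetical "relevant reasons" register, agreed before the evidence arrives.
# the reasons and their statuses are invented for illustration.

relevant_reasons = {
    "cost-effectiveness results":         "on the table",
    "administrative feasibility":         "on the table",
    "equity / priority to the worst-off": "on the table",
    "alignment with the election cycle":  "off the table",
    "justifying past budgets":            "off the table",
}

def admissible(reason: str) -> bool:
    """a reason may enter deliberation only if it was agreed ex ante."""
    return relevant_reasons.get(reason) == "on the table"

print(admissible("justifying past budgets"))  # -> False
```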

2. transparency

how transparent should the decision-making process be, including the reasons deemed relevant for decision-making? should everything be made public, or does that make it too difficult to speak honestly? some stakeholders will need ‘cover’ and to not have their full views publicized. might a commitment to transparency scare away implementing organisations from trying out innovative ideas for fear of failure – especially if it might become publicly known?

a commitment to transparency includes deciding the extent to which each of the following will be made public, and at what point in time: the agreed relevant reasons, the process of deliberation, and whether a full transcript or just a summary of the deliberation is released.

3. revisability

in the initial A4R framework, developed around health insurance, the revisability criterion related to appeals made in light of new evidence. for a donor programme that employs a particular technology that renders it prohibitively expensive to scale, we can imagine that a breakthrough lowering the price of the technology should lead the donor to revisit its decision not to scale.

another twist on revisability in the case of development programmes could be an appeals process for the public / civil society who were part of the programme pilot, to argue for (or against) the programme being continued.

4. enforceability

the enforceability criterion requires that someone have the institutional authority to make sure that the other conditions are met.

summary / conclusion

we recognise there are many unresolved questions in operationalising the considerations we offer – relevant reasons, transparency, revisability and enforceability. but this is an attempt to lay out the considerations that matter and to begin a much-needed conversation. thinking along the lines we propose, and discussing how to expand on them, is, we feel, the first step towards operationalising support for decisions that are not only evidence-informed but also reasonable.

in sum, we want to call for better preparing evidence for decision-making (previous post) but also remind ourselves to recognise the actual – and desirable – place of evidence in these decisions (this post).

  • * we would like to acknowledge the sounding-board excellence of arjun, payal, sameer and urmy, as representatives of MNDC.
  • ** as a case of strictly following CEA, and of why evidence perhaps shouldn’t be (setting aside whether it can be) the only driving reason for a decision, consider the 1990s’ Oregon effort at strictly technocratic priority-setting for medicaid.
  • *** daniels notes that the validity of the approach is premised on the supposition that we can better agree on a fair process than on principles – this premise needs to be empirically documented and tested to move the conversation forward.
  • **** see daniels on the ‘four unsolved rationing problems’ with regard to health decision-making at a population level.
  • ***** daniels’ ideas have yet to be tested empirically.

Mo money, mo problems? AMF does not make Givewell’s top-three for 2013 #giving season

This blog is a cross-post with Suvojit. Update 21 December: the conversation has also continued here.

Recently, Givewell has revised its recommendation on one of its previously top-ranked ‘charities,’ the Against Malaria Foundation (AMF), which focuses on well-tracked distributions of bednets. Givewell “find[s] outstanding giving opportunities and publish[es] the full details of our analysis to help donors decide where to give.” This approach seems to have succeeded in moving donors beyond tragic stories and heart-wrenching fundraising images, towards effectiveness and funding gaps.

In its latest list, AMF does not rank amongst the top three recommended charities. Here, based on the experience with AMF, we outline the apparent results of Givewell’s attention to AMF, consider the possible lessons, and ask whether Givewell has learnt from this episode and taken clear steps to change its ranking methods to avoid similar mishaps in future. As it stands, around US$10m now lies parked (transparently and, hopefully, temporarily) with AMF as a result of its stalled distributions, a fact for which Givewell shares some responsibility.

Givewell lays out its thinking on revising AMF’s recommendation in detail. As a quick recap of that blog post: when Givewell looked at AMF two years ago, AMF was successfully delivering bednets at the small- to medium-scale (up to hundreds of thousands of nets in some cases) through partnerships with NGOs (only the delivery of health products such as bednets and cash transfers meets Givewell’s current eligibility criteria). Following Givewell’s rating, a whole bunch of money came in, bumping AMF into a new scale, with new stakeholders and constraints. The big time hasn’t been going quite so well (as yet).

This is slippery ground for a rating service seeking credibility in the eyes of its donors. Currently, Givewell ranks charities on several criteria, including: strong evidence of the intervention’s effectiveness and cost-effectiveness; whether a funding gap exists and resources can be absorbed; and the transparency of activities and accountability to donors.

In its younger/happier days, AMF particularly shone on transparency and accountability. Recognizing that supplies of bednets are often diverted and don’t reach the intended beneficiaries, AMF is vigilant about providing information on ‘distribution verification’ as well as on households’ continued use and upkeep of nets.

These information requirements – shiny at the small scale – create a glare at large scale, which is part of the problem AMF now faces. ‘Scale’ generally means ‘government’ unless you are discussing a country like Bangladesh with nationwide NGO networks. The first hurdle between information and governments is that the required data can be politically sensitive. Information on distribution and use is great for accountability to donors, but it can be threatening to government officials, who want to appear to be doing a good job (and/or may benefit from distributing nets to particular constituents or adding a positive price, etc.).

As a second, equally important, hurdle: even if government agencies intend to carry out the distribution as intended (proper targeting, etc.), data collection has high costs (monetary, personnel, and otherwise) – especially when carried out country-wide. AMF doesn’t actually fund or support the collection of the data on distribution and use that it requires of implementing agencies. AMF is probably doing this to keep its own costs low, instead passing collection costs and burdens on to the local National Malaria Control Programmes (NMCPs), which is definitely not the best way to make friends with the government. Many government bureaucracies in Sub-Saharan Africa are constrained not only in funds but also in the capacity to collect and manage data about their own activities.

What do these data needs mean for donors and what do they mean for implementers? For donors, whose resources are scarce, information on transparency and delivery can guide where to allocate money they wish to give. Givewell, by grading on transparency of funding flows and activities, encourages NGOs to compete on these grounds. Donors feel they have made a wise investment and the NGOs that have invested in transparency and accountability benefit from increased visibility.

At issue is an apparent tension between focusing on transparency and the ability to achieve impact on the ground. If the donor, and possibly Givewell, do not fully take into account institutions (formal and informal), organizational relationships and bureaucratic politics, the problem of a small organization being unable to replicate its own successful results at scale may resurface. Givewell says that it vets a given charity, but it is not clear what role potential implementing partners play in this process. Givewell likely needs to account for the views of stakeholders critical to implementation, including those people and organizations that may become more important given a scale-up. The fact that NMCPs (or the relevant counterparts) as well as bilaterals and multilaterals are hesitant to work with AMF could have been weighed into Givewell’s algorithm.
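To illustrate what “weighed into Givewell’s algorithm” might look like (a purely hypothetical sketch: the criteria, weights and scores below are invented and are not Givewell’s actual method), partner buy-in could enter a composite score alongside the existing criteria:

```python
# hypothetical composite charity score that includes partner buy-in.
# criteria, weights and scores are invented; not Givewell's actual method.

weights = {
    "evidence of effectiveness": 0.35,
    "funding gap / absorption":  0.25,
    "transparency":              0.20,
    "partner buy-in":            0.20,  # e.g. NMCPs, bilaterals, multilaterals
}

def composite_score(scores: dict) -> float:
    """weighted average of 0-10 criterion scores."""
    return sum(weights[c] * scores.get(c, 0) for c in weights)

# a charity strong on evidence and transparency but weak on buy-in:
print(composite_score({
    "evidence of effectiveness": 9,
    "funding gap / absorption":  8,
    "transparency":              9,
    "partner buy-in":            3,
}))  # -> 7.55
```

On a scoring like this, a charity that shines on evidence and transparency but lacks buy-in from the stakeholders critical to implementation would be flagged before, rather than after, the money arrives.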

Givewell seems to be listening and recognizing these challenges: first, by its publicly reasoned response to AMF’s performance; second, by posting reviews (in particular, this recent review by Dr. de Savigny); and third, by updating its selection criteria for 2013 to include a consideration of scalability. de Savigny’s review addresses AMF’s strategies for working with governments, both coordinating with donor governments and supporting ‘recipient’ governments in determining data needs and collecting data.

What else can Givewell do now? Expand the criteria beyond need, evidence base (intervention and organization) and commitment to transparency by also including:

  1. Feedback from previous implementing partners.

  2. Specific project proposals from applicants, in which they lay out a plan to implement their activity in a specific country. Potential funding recipients should think through and detail their government engagement strategy and gain statements of buy-in from likely implementing partners – global and local – in that context.

  3. Givewell should more carefully calibrate how much money goes to organizations for proposed projects. Funding based on engagement in a particular country can help avoid the problem of getting too much too fast: funding can be pegged to the requirements of the specific project that has been put up, for which the organization has demonstrated need and absorptive capacity (see the sketch below).
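As a sketch of the calibration in point 3 (the function and figures are our own hypothetical illustration, not a Givewell formula), a grant could simply be capped at the smallest of the project budget, the organization’s absorptive capacity and the remaining funding gap:

```python
# hypothetical grant-calibration rule: peg funding to the specific project.
# names and figures are invented for illustration.

def calibrated_grant(project_budget: float,
                     absorptive_capacity: float,
                     funding_gap: float) -> float:
    """fund no more than the project needs, the organization can absorb,
    and the gap left unfilled by other funders."""
    return min(project_budget, absorptive_capacity, funding_gap)

# e.g. a $5m proposal, $3m of absorptive capacity, $4m still unfunded:
print(calibrated_grant(5_000_000, 3_000_000, 4_000_000))  # -> 3000000
```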