with great equipoise comes great responsibility?

i had the pleasure of giving two talks through the 3ie seminar series over the past few weeks, in london (chaired by the wonderful daniel philips) and in delhi (chaired by the great stuti tripathi and ably discussed by the always-exciting colin bangay). i was also able to present at the london school of hygiene and tropical medicine — big thanks to catherine goodman and clare chandler for arranging and for great conversation. many thanks to everyone who participated (and everyone who has listened to me talk about my thesis along the way). [slides for the interested.]

.

the presentations were on aspects of my thesis research, which centers on the experience of ‘doing’ the affordable medicines facility-malaria (amfm) in ghana. as often happens, the global narrative around the amfm and the decision to pilot-and-evaluate-and-decide was intriguing and became an important point of discussion. indeed, the pull of the story itself was a major drag on my getting on with other aspects of my thesis (with apologies to my committee). what i present below is less about my actual work (which relates more to national stakeholders opting in to the pilot and its implementation) and more about the global narrative, which is abbreviated and stylized in my telling.

.

i set aside the precise definition of equipoise that is most relevant in social science and development evaluations – whether we should be talking about clinical equipoise (or efficacy equipoise?), some flavor of policy equipoise (or (relative) cost-efficiency uncertainty), or even operational equipoise (uncertainty around whether this thing can actually be implemented in this context, let alone produce the intended results).

.

rather, i begin with the assumption that meaningful uncertainty is a good starting point for commissioning an evaluation and also possibly a key part of the ethical justification for particular approaches to evaluation, such as random assignment. the former is of more interest to me here.

.

an important question following from this interest – one that the amfm raises pointedly – is what obligation, if any, follows from the establishment of equipoise (that a community of thinkers and/or implementers has meaningful uncertainty about a proposed program on a theoretical or practical level).

.

let’s look at the amfm. the goal was to make use of existing global and national public and private sector supply chains (from pharmaceutical manufacturers to small pharmaceutical sellers) to dramatically increase access to high-quality antimalarial treatment and, in turn, improve the (appropriate) use of such treatment and reduce the malaria burden.

.

this generated a situation of ‘active equipoise’ (read: sometimes heated controversy). some of it was more ideological: should the private sector be used to deliver public health, for example? i set this aside here. some of it was practical: if we use this specific mechanism to deliver a health commodity, will the subsidies (‘co-payments’) involved be captured along the supply chain or passed on to the end-user? will people not only obtain the high-quality, recommended anti-malarial treatments once they are made more accessible but also use them (appropriately) and, finally, reduce the malaria burden?

.

given this degree of uncertainty about putting a theoretically ‘elegant’ (an oft-used epithet for the amfm) mechanism into practice, a decision was taken to pilot (at national scale, in 7 countries, for 1.5 years, so the application of the term ‘pilot’ is debatable) and to commission an independent evaluation that would inform the decision to continue, modify, scale, or terminate the initiative. specifically, the global fund agreed to host the initiative for this pilot period and the evaluation was intended to inform the fate of the amfm, at least in the global fund’s portfolio. i am not going to wade into the confusion about how the decision was ultimately made (also here) because i want to focus on an earlier point in time: the design of the evaluation itself, given its intended decision-informing function.

.

note that there were three key points of (less-ideological) debate at the global level that prompted the pilot-evaluate-decide approach, which can be plotted along a theory of change and also along a supply chain:

  • implementation feasibility and the possibility of supply-chain capture (and drug adulteration)
  • the translation of access into (appropriate) use
  • the translation of use into reduced malaria burden

.

before going on, please note that i am not arguing that all evaluation or research needs to lead to a decision or even have this as a goal. rather, i am asking: once it is determined that we will commission an evaluation to inform our decision – a pinnacle of evidence-informed decision-making – what are our (researchers’, evaluators’, funders’, decision-makers’) obligations (ethical or otherwise)?

.

for a variety of reasons, the global fund decided that they wanted the pilot to run for 1.5 years and, following from this decision, set four benchmarks deemed achievable (through modelling work) within that timeframe that would define success. these related to gains in availability, price, market share (gained against less effective but cheaper and more familiar anti-malarial treatments), and household use. even though the link between use and malaria burden was a key point of uncertainty, this was determined to be beyond the scope of the evaluation from the outset (which people might agree or disagree with). at some point in the process, household surveys were dropped from the evaluation plan as well, cutting off the potential to make rigorous (or, really, any) statements about whether access translated into use.

.

a result of this, it seems, is that many global stakeholders have been able to use the results of the independent evaluation (which suggest that at least in 5 of the 7 pilot countries, moderate to high success in access was achieved) to support whatever position they had initially. (the story at the national level seems a bit different: whether because of experiential learning or the evaluation results or path dependency or other factors, many national-level stakeholders seem to have wound up more supportive of the initiative than they were initially – something which warrants further investigation.)

.

a key question is how we should feel about the issue of the evaluation – again, explicitly intended to inform a decision – not being set up to address the key points of controversy. disappointed? angry? ethically outraged (note, to the extent that money and not just principle matters, that this evaluation had a $10 million price tag and that the overall piloting process rang in around $460 million)? this issue of appropriateness and outrage was a key point of discussion, particularly in the delhi seminar.

.

i certainly don’t have an answer but the question merits further debate. if an evaluation is commissioned to address specific points of controversy (uncertainty, equipoise) and explicitly to inform a decision, what are the obligations and responsibilities (whether practical or moral):

  • of the evaluation design to address the controversy (in a way meaningful for those identified as key stakeholders or decision-makers)?
  • to use the evidence generated to make the decision? (and to put in place processes to help make this so)

.

for those of us who push for evidence to play a role in decision-making, these seem important questions to debate. i hope we start to.

on reporting processes and details

this is a joint post with urmy shukla.

.

in this blog, we argue for the importance of in-depth reporting on implementation processes, evaluation processes, and relevant contextual details of interventions and linked evaluations. this will facilitate research transparency, learning, and assessments of the potential for generalizability beyond the original study setting (learning lessons from ‘there’ for ‘here,’ but not necessarily promoting the strict and exact duplication of a program from one setting to another, in line with an understanding of external validity that is appropriate for the social sciences in development).

.

we start with a hypothetical scenario of an intervention and associated evaluation, based on too-frequent experiences in the impact evaluation space. we hope that it doesn’t sound familiar to those of you who have been involved in evaluation or have tried to make sense of evaluation results — but suspect that it will.

.

a research team, connected to a larger research and evaluation organization, ran a study on an intervention. for reasons of statistical and political significance, they have deemed it sufficiently successful and worthy of scaling up, at least in a very specific new setting.

.

the intervention sought to overcome the following problem, which has both supply-side and demand-side dimensions: people in malarious areas may procure a bednet (whether for free or for a positive price), but they do not always follow through with maintenance (re-treatment or replacement).

.

for supply, the private sector only sporadically offers retreatment and replacement, and it is expensive, while the public sector does not always have supplies available. the intervention, therefore, concentrates provision of this service at a specific time and place through temporary service centers.

.

for demand, people with nets often don’t understand the need for retreatment and, even if they do, continually put off doing so. the intervention, therefore, includes a non-monetary incentive for which there is local demand (in this case, soap), to be picked up at the time of net retreatment.

.

this is, altogether, a relatively straightforward (or ‘technocratic’ in woolcock’s classification) but potentially powerful intervention that can improve the private and public good. as such, researchers in the research organization would like to try this intervention (with associated impact evaluation) in other locations, in which they suspect net retreatment and replacement faces a similar set of challenges.

.

however, when the research team in charge of the external replication looked back at the original reports from this experiment, they discovered relatively little information about how the intervention was designed and implemented. the publication is mum on the process itself and any lessons learned, including challenges faced and whether the researchers would have done something differently with the advantage of 20/20 hindsight. moreover, there aren’t many internal notes that lay out the operations of the intervention or the evaluation. what exists as tacit knowledge remains reserved for elite seminar discussions or cathartic gossip over beers.

.

this hypothetical raises two key problems: (1) research transparency and (2) the potential for learning and assessing generalizability.

.

research transparency: from implementation to data collection

while the current focus of research transparency movements (across and within disciplines) is on making data and code available for internal/statistical replication, a critical piece about process is missing. how was the evaluation run? how was the intervention run? what challenged and what enabled the success of the intervention in a particular setting? from the hypothetical scenario, this includes questions such as:

  • who was supposed to organize and run the service centers and who actually did so?
  • did the evaluation put in place any kind of monitoring that would not likely have been present if the implementers were acting alone? should this properly be considered part of the intervention?
  • how was the procurement of soap supply managed and were there any relevant challenges?
  • how was soap determined to be a good incentive in this setting in the first place?

.

the research team should ideally be able to refer to a report or working paper, or at least a set of internal notes, to guide them. but a lack of documentation means that neither evidence users nor even those within the research organization know the answers to these questions. this isn’t just an issue of operations and redundant work, but one of research transparency and ethics: understanding what an intervention actually included, and what is required for it to be successful.

.

understanding the intricacies of both the implementation and the study setting requires systematic documentation of relevant factors (ideally informed by a theory of change), as well as ensuring that both quantitative and qualitative ‘process’ data are collected with the same rigor as ‘evaluation’ (baseline/endline) data. going beyond a bare-bones theory of change (and including theoretical mechanisms, implementation processes, and contextual interactions) requires extra work. this responsibility should, admittedly, fall on both researchers and donors/commissioners — to ensure that study teams have the necessary financial resources, time, and research capacity to effectively and systematically collect and process this information.

.

reporting for learning and generalizing

the ‘active ingredients’ of programs (if not whole programs) tested in one setting can also be tried in other settings (other times, geographies, scales, etc). indeed, some may say this is a key goal of policy-relevant evaluations. such trials may be done on a one-off basis or as part of a more systematic approach to external/field replications to learn whether some interventions are indeed effective in a variety of settings.

.

neither can be done well if details about how the active ingredients were implemented and measured are not reported. this, in turn, is quite difficult without tools to measure and document processes and decisions made along the way. but this needs to be sorted out, because reporting on implementation and evaluation experiences and challenges is central to a learning agenda in the social sciences and in programmatic and policy work.

.

this argument is not new. lincoln and guba call for “narrative developed about the [setting so that] judgments about the degree of fit or similarity may be made by others who wish to apply all or part of the findings elsewhere.” it seems that similar concerns motivated woolcock to close his paper on external validity with a call for more case study work. thickness and richness of description, rather than thinness, helps users of evidence learn and make assessments and adjustments in light of their own setting. this description, guided by a good theory of change, can directly address some key challenges to external validity, such as site selection and partner selection biases.

.

in sum

failure to prioritize intervention details that track along a detailed theory of change could be detrimental, leading to ill-advised implementation and/or a locally inappropriate intervention. in the case of the bed-net intervention, failure to report on challenges (e.g., the need for extra community buy-in, developing efficient supply chains, clarity about what government workers could not handle alone, etc.), as well as enabling factors (e.g., community-level awareness of proper bed-net usage, the operational strength and local reputation of the implementing partner, etc.), could mean that the research team conducts a study (and intervention) that, at the least, inefficiently uses both research and implementation resources, and at the most, has negative unintended consequences. ultimately, conversations that combine research transparency and policy recommendations should prioritize high-quality, systematic, and readily available data from all parts of the impact evaluation cycle.

.

our current efforts to be ‘rigorous’ while conducting evaluations are insufficient. we must also be rigorous in our efforts to document evaluation and implementation processes, to report on these, and to critically and openly reflect on how we might approach the same problem differently in the future. this level of transparency, though initially daunting, will only improve the potential for better-informed and better-implemented policies, aiding us in transferring lessons from ‘here’ to ‘there’.

thinking through funnels of attrition

when first introduced to the idea of a funnel of attrition (my early attempt at a slightly more nuanced and symmetric — but still generic — version is here), i largely thought of it as a useful heuristic for thinking about sample size calculations, by being forced to think about issues of awareness and take-up as well as a few steps along a causal chain between initial participation or use and longer-term outcomes of interest.
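
.

to make that heuristic concrete, here is a minimal sketch (mine, not from the original funnel diagrams) of how stage-by-stage retention guesses might feed a back-of-the-envelope recruitment calculation; the stage names and rates below are entirely hypothetical.

```python
# a minimal sketch (hypothetical stages and rates) of using a funnel of
# attrition to adjust sample-size planning for awareness, take-up, and drop-off.

STAGES = [
    ("aware of the program", 0.80),      # share of assigned households who hear about it
    ("take it up", 0.60),                # share of aware households who enrol
    ("still engaged at endline", 0.70),  # share of enrolled households retained
]

def required_recruitment(n_needed_at_end: int) -> float:
    """work backwards up the funnel: how many must be recruited so that
    roughly n_needed_at_end observations remain at the final stage."""
    n = float(n_needed_at_end)
    for _, rate in reversed(STAGES):
        n /= rate
    return n

# suppose a power calculation (done elsewhere) says we need ~500 engaged
# households at endline to detect the effect of interest
print(round(required_recruitment(500)))  # ~1488 households to recruit
```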

.

more recently (including here), i have tried to use it as a tool for articulating assumptions in a theory of change about where people might ‘fall out of’ (or never join) an intervention, thus leaving the funnel. more specifically, i tried (along with colleagues) using it as the goal of a conversation with implementing partners (that is, “let’s map out the funnel of attrition”), tackling the question from multiple perspectives. (various perspectives were brought in using personae, which i created beforehand, relying partially on average results from the baseline as well as some stylizing to try to bring certain features into the conversation. at first i feared they were overstylized but, in the end, i think i had too little detail. i reviewed my notes from the inmates are running the asylum and was reminded of the importance of specificity, even at the expense of accuracy.)

.

i liked this idea for guiding a conversation because the funnel of attrition is a little more straightforward than a full theory of change but, in constructing it, you still end up articulating some central assumptions, which can feed into thinking about why change may (or may not) happen. it seems like a handy building block in a well-considered theory of change.

.

the intervention in question is a multi-year engagement, with escalating levels of commitment required and increasingly difficult requirements. it initially struck me that, given all the different program elements, it would be relatively easy to discuss some of the different points at which it could become difficult to stay engaged. this turned out not to be true. i played devil’s advocate far more than i had initially planned, effectively interviewing the personae with pretty specific questions that ended up reflecting my own thinking about the difficulties of sustained engagement more than revealing many new ideas. this doesn’t mean the exercise was useless in getting others to think through these issues but i had certainly imagined it going a little differently.

.

my revised thinking is that it may help to have an early draft of the funnel of attrition to guide the conversation, even if (or hoping that) this gets trashed and remade in the discussion process — it just helps to have something tangible. this draws on an interesting suggestion from steve montague of even building a few ‘errors’ into an early draft understanding of a theory of change to present to implementers to help provoke discussion. my working plan for future engagements with the same implementing team is to walk through this draft funnel of attrition from multiple, quotidian perspectives, with specific thought about factors that might facilitate, reinforce, or hinder progression down the funnel (meaning, staying with the program and realizing the intended changes).

.

this need for tangibility is why i have found myself struggling — somewhat to my surprise — to put together a draft funnel of attrition. one of the key issues is whether each step is necessary to move to the next; this issue is nicely dealt with in rick davies’s effort at a different diagram of attrition. this gets around the problem of whether, say, changes in attitude are required to precede changes in behavior.

.

but it doesn’t fully get around the challenges of a multi-year, multi-phase program and how to represent this. as noted, different stages of the program present different challenges, so it seems difficult to sum this all up as ‘maintenance of participation.’ this glossing over three years of maintaining participation also seems undesirable from the point of view of facilitating the articulation of reasons for dropping out along the way. things also get muddled as to whether to put outputs and outcomes between different phases of the program. again, one issue is necessity and sufficiency, which can therefore be addressed as per davies’s suggestion of layered circles. another issue is that it is getting to be just a really long funnel, which is more overwhelming to look at but also a more nuanced guide to conversation and thinking. a rough sketch of what i mean follows below.
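
.

as one way of making the ‘really long funnel’ tangible, here is a hypothetical sketch (the phases, steps, and retention rates are invented, not drawn from the actual program) of laying out a multi-year, multi-phase engagement as a single funnel with numbered rungs:

```python
# a hypothetical sketch (invented phases, steps, and rates) of a multi-year,
# multi-phase program represented as one long funnel of attrition.

PHASES = {
    "year 1: enrolment": [
        ("hears about the program", 0.85),
        ("signs up", 0.70),
    ],
    "year 2: escalating commitment": [
        ("completes the harder requirements", 0.60),
        ("stays engaged through the year", 0.80),
    ],
    "year 3: maintenance": [
        ("renews participation", 0.75),
    ],
}

def walk_funnel(starting_cohort: int) -> None:
    """print cumulative retention down the funnel, numbering each rung so a
    group discussion can point to, say, 'the drop between steps 3 and 4'."""
    n, step = float(starting_cohort), 0
    for phase, steps in PHASES.items():
        print(phase)
        for label, rate in steps:
            step += 1
            n *= rate
            print(f"  step {step}: {label} -> ~{n:.0f} remain")

walk_funnel(1000)
```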

.

we’ll see what comes out!

*addendum from 2 march 2016: the approach of considering enablers and ‘blockers’ at each step of the funnel of attrition went pretty well, even though the funnel we used focused almost entirely on the programmatic phases and activities involved in maintaining/using the program over the course of three years. this means that, for this particular activity, we didn’t really discuss how activities are expected to translate into a series of outputs and outcomes, but it still provided fodder for about two hours of discussion. one small thing that would have made the exercise easier would have been adding numbers to the spaces between the ‘rungs’ of the funnel so that it was easier to keep everyone in the same spot.

for theories of change and study planning, assumptions don’t make an ass of u & me

two nice posts came out yesterday that relate directly or tangentially to building theories of change. david evans wrote about, inter alia, what lies behind a null finding, here. marcus jenal wrote here about how complexity shouldn’t stop us from building theories of change up front, so long as we stand ready to adapt (parts* of) them. these two posts sort of collided in my head as ways of thinking about how tocs link to study design (as well as program planning) — thanks for inspiring me to write on a saturday morning!

.

plenty has been written on the relevance of tocs, and a good place to start if you’re catching up is with craig‘s work, such as here. he, marcus, and others highlight how central assumptions about why things may not work are to a good theory of change. one reason to spend time carefully hashing out the assumptions is that they help strengthen both research design and program planning, hence the blog title (the received wisdom is that when you assume, you do indeed make an ass out of u + me).

.

what excites me about building theories of change (or conceptual models, as i originally learned about them) is getting to draw simultaneously on more formal theory, on practical lessons from the empirical literature (which is most useful when empirical papers offer basic information and lessons about mundane (read: fascinating and useful) implementation details), and on a healthy dose of common sense, where ‘common’ includes drawing on a variety of perspectives.

.

getting a good start on a toc (even if planning to adapt and iterate throughout) is important if you, like me (with vegard, as we try to lay out in our work-in-progress here), see almost every other aspect of program planning and evaluation design as flowing from it: the content of questionnaires, the timing of data collection, which methods are most appropriate for answering which types of questions (links in the toc), what monitoring plans are needed, the enabling factors on which program design can draw and enhance, the contextual constraints a program can try to loosen, and even the way that the final report may look and the story it will try to tell about what met expectations, what didn’t, and why.

.

marcus’s post has some useful ideas about building tocs, including the (new-to-me) cynefin framework and accommodating competing hypotheses about how change might occur in early toc iterations. i have also written a bit about ways to try to enhance the toc-building process, here (and also some lessons from putting this into practice, here). these and other exercises will (perhaps implicitly) shed light on the ways that a program may not work as expected.

.

another set of useful ideas, especially in light of david’s post (and the paper that inspired it), can be drawn from cartwright and hardie. the book is worth checking out, as i skip over many of their ideas related to toc-building (not what they call it) to focus on one they call the ‘pre-mortem.’

.

less morbidly and more relevantly, we might call it a pre-null exercise. this type of thought experiment is useful because well-considered theories of change incorporate, through assumptions, hypotheses/theories of no change — that is, all the reasons why the expected pathways to change may get blocked or washed out or never be walked at all, culminating in a null result. the existence and tractability of such roadblocks and breakdowns are important lessons to learn from any research project. this is why thinking through the assumptions component of a toc early on is so critical, so that research designs can build in ways to catch potential reasons for no change.

.

the basic pre-null thought exercise is, at the beginning of project and study design, to imagine yourself at the end of an analysis, with a lack of significant results. think through and list the possible reasons for this (beyond a true null) and then incorporate them into both program and study design (especially as these two seem to be (again?) moving closer together, see here and also samii on deep engagement).

.

david’s post gives some nice broad categories to consider: (1) lack of implementation fidelity or other implementation snafus (making it particularly important for a toc to include implementer incentives as well as the beneficiary viewpoint, as advocated by pritchett et al, among others), (2) altered behavior among ‘beneficiaries’ over time, (3) general equilibrium effects, or (4) that the intervention may work differently for various sub-groups (yes, sub-group assumptions should appear in a toc!).

.

trying to anticipate these different ways we might end up with null results means they can be better represented as toc assumptions and, accordingly, incorporated into study and questionnaire design — and we all end up wiser as a result.

.

i think it is fair to say, broadly, that this and many other thought exercises go undone during the study design phase of rigorous evaluation and research (i like, for example, this posner et al paper for its effort to do some of this work ex post but of course wish it had — and think much of it could have — happened before the study). these efforts certainly go unreported and perhaps even untracked by researchers themselves, not just in the academic literature but, perhaps more upsettingly, in study reports that have fewer restrictions on words.

.

i am hoping that exercises like a pre-null thought experiment will be useful to researchers planning studies. what i am struggling to figure out is why they aren’t happening much now.

.

here are some of my working hypotheses:

  • lack of time during program and/or study planning stages.
  • lack of clarity about toc-building (or conceptual modelling or whatever term you fancy) as being a key goal of formative work and deep stakeholder engagement (or more general lack of formative work and meaningful stakeholder engagement).
  • lack of funding for this kind of toc-building work and engagement.
  • lack of clarity about what constitutes a good theory of change and how it links to broader study and program design.
  • lack of (sociological) imagination or a sense of not needing to employ this during study design.
  • limited discussion of implementation lessons-learned (including during the pilot phase) in the empirical literature and little value (or actual disincentives) placed on sharing implementation details — good, bad, and ugly — that can inform future tocs.
  • under-valuing of theory-of-change-building (along with needs assessment and diagnostics?) as part of formal research education (these are things that can be taught; you don’t need to learn them only during your first research project, though certainly some of the lessons may only hit home then).

.

the follow-up question is, of course, how we can start to try to do better, such that inexplicable nulls become a bit more endangered.

.

*i note ‘parts’ because while, from a learning perspective, we might want to be able to scrap all our initial conceptions, from an accountability (and, actually, learning) perspective we probably want to hold some things, such as goals, fixed.

i feel like an #oddeven party pooper (reducing and working are not the same)

there are two nice, evidence-informed op-ed pieces out today on delhi’s odd-even scheme to try to reduce air pollution (here and here). the results are heartening because i didn’t have a good sense of whether a two-week window of implementing a policy — to which there were many exceptions — was long enough to detect a statistically significant change in meaningful measures of pollution. nor, admittedly, did i feel that i was breathing cleaner air the past two weeks. as one of the articles points out, much of the anecdotal chatter has been about clearer roads, not clearer skies.

.

since i live in delhi, am certainly affected by the air quality, and worry about my health accordingly (plume tells me every day that the situation is dire), i was pretty pleased to wake up to the headline “yes delhi, it worked.” and what has indeed happened is that good evidence (rigorously obtained, as laid out by suvojit) has been generated of a statistically significant reduction (18%) in nasty particulate matter (pm 2.5) during the hours the intervention was in effect.
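
.

for readers curious what sits behind an estimate like that, here is a stylized sketch of the difference-in-differences logic the researchers describe (pollution rose both inside and outside delhi, but rose less inside, during the restricted hours); the readings below are made up for illustration and are not their data or code.

```python
# a stylized sketch (made-up readings, not the researchers' data or code) of the
# difference-in-differences comparison: the change in pm 2.5 in delhi versus the
# change just outside delhi over the same period, during the restricted hours.

# hypothetical mean pm 2.5 readings (micrograms per cubic meter), 8am-8pm
delhi_before, delhi_during = 280.0, 300.0        # delhi monitors, before vs during odd-even
outside_before, outside_during = 260.0, 330.0    # monitors just outside delhi

delhi_change = delhi_during - delhi_before        # +20: delhi still got worse...
outside_change = outside_during - outside_before  # +70: ...but the surrounding region got much worse
did_estimate = delhi_change - outside_change      # -50: delhi's relative reduction

relative_reduction = did_estimate / delhi_before
print(f"difference-in-differences estimate: {did_estimate:+.0f} "
      f"({relative_reduction:.0%} relative to delhi's pre-period mean)")
```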

.

this was a policy that i wanted to see work, so i am pleased that the evidence shows a reduction in the particulate matter that is driving many of my good friends out of the city (alongside many other woes). but we must be careful — whether something “worked” is more subjective than is the evidence of a reduction, which greenstone and colleagues have nicely and rapidly documented.

.

if models had predicted a 50% reduction, we wouldn’t have been so thrilled about 18%. if the government had said that every little bit counts and that even a 5% reduction would be counted by them as a success and a reason to commit to continuing the program, then indeed, 18% is quite impressive.

.

moving forward, as delhi tries to clean up its act and hopefully become a model for the rest of the country, clarifying up-front decision-points and definitions of success will be important. for the next pilots — because delhi desperately needs such measures — how will we declare, in a rigorous and defensible way, that a policy effort ‘worked’ well enough to be scaled and continued? those of us interested in promoting the use of rigorous evidence and evaluation to inform decision-making need to be slightly cautious in our interpretations and celebrations of victory when we haven’t said up front what we’ll count as a triumph.

.

*as an addendum (31 jan 2016), it is not clear that the researchers themselves penned the title ‘yes delhi, it worked.’ giving the benefit of the doubt, i am hoping that the researchers submitted something more along the lines of ‘yes delhi, odd-even reduced pollution’ and that the newspaper itself opted to change it. but the point holds that success is subjective and therefore requires a definition, preferably ex ante.


delhi’s #oddeven plan had a significant effect on pollution

i agree that this is a potentially good sign about individual citizens being willing to engage in collective action – note also that many were most excited about the lessened traffic, which is a good reason to think about inspiring collective action in ways that bring about both a public good and a private gain, allowing the pursuit of direct and indirect policy goals. my sense is that there is still a long way to go in convincing people that the pollution is a problem and is getting worse.

Suvojit Chattopadhyay

Researchers Michael Greenstone, Santosh Harish and Anant Sudarshan have some news for us. Hard data that shows that the Odd-Even plan reduced pollution by significant levels in Delhi. The headline: this study finds there was an 18% reduction in PM 2.5 due to the pilot during the hours that the rule was in effect. The effect size is truly staggering, and is quite unusual for studies that use such rigorous methodology to look at the impact of policy interventions.

Starting January 1, while absolute pollution levels increased both inside and outside Delhi (for atmospheric reasons, as noted by other commentators), the increase in fine particle levels in Delhi was significantly less than in the surrounding region. Overall, there was a 10-13 per cent relative decline in Delhi.

Around 8 am, the gap between Delhi’s pollution and that in neighbouring regions begins to form and steadily increases until mid afternoon. As temperatures…


gratitude.

though it feels far less monumental than perhaps it should, i have completed the electronic submission of my thesis, which is a big milestone in calling the thing done or, more accurately, me degreed, regardless of how much more work there is to do.

.

i am sure i have forgotten many people but just in case some people don’t actually get around to checking out the thesis itself — a profound but simple ‘thank you’. here are my acknowledgments:

.

a thesis seems like a lone and lonely process, with only data and tea (or stronger) to keep you company, right up until you realize how many people you have to thank. no matter how i’ve tried to keep tabs, i am sure i have forgotten people – if you know you played a role, please give yourself a pat on the back.

 .

this is an empirical dissertation based almost entirely on primary data, which would not exist without willing respondents. in tamale, this includes many private-sector retailers who gave their time to answer a lot of tiresome questions. these answers, in turn, would not have materialized without the long-standing support of a core survey team, with special thanks to abass adam yidana, damba mohammed majeed, and alidu osman tuunteya. n tuma. in accra, many people not only consented to be interviewed but have been patient guides, kept in touch, and helped this thesis over its long trajectory. these include: george amofah, kwabena asante, dennis sena awitty, frank boateng, samuel boateng, alex dodoo, keziah malm, yuniwo nfor, louis nortey, daniel norgbedzie, elianne oei, ellen sam, and sylvester segbaya. alex dodoo and daniel norgbedzie have gone above and beyond. there would literally be no words (or numbers) without you.

.

i would not have been in ghana without the trust and support of günther fink and julia raifman, and i would not have survived ghana without the moral, emotional, intellectual, and nutritional support and levity of becky antwi, slawa rokicki, mollie barnathan, liz schultz venable, pace phillips, suvojit chattopadhyay, usamatu salifu, salifu amidu, abubakari bukari, lindsey o’shaughnessy, lolo dessein, aqil esmail, michael polansky, sam polley, emmanuel okyere, and rachel strohm. innovations for poverty action-ghana provided much needed infrastructural support and connections; jeff mosenkis has egged me on from headquarters. nathan blanchet has been a guide on ghana and to this whole process.

 .

this thesis as a completed product would not exist without michael reich. from inspiring the ideas that went in, to providing (mostly) patient guidance and forcing me to articulate my own ideas beyond, ahem, “a fucking mess” to something that is hopefully readable and possibly even, with time, enjoyable: thank you. you’ve pulled me back from the brink more than once and words don’t suffice. i know sometimes your papers take up to thirty drafts; this has taken many more and you’ve been there throughout.

.

günther fink, jessica cohen, and barbara heil: thank you for keeping me in line and inspired. günther, your enthusiasm, and barbara (mom #2), your persistence, have made a huge difference.

 .

to the swapportive team of shahira ahmed, corrina moucheraud, pamela scorza, and elif yavuz: thanks for keeping me going on so many levels. corrina moucheraud, in particular, has listened to and read many ideas and drafts that constitute what follows, though with far less brevity than her counsel. elif, you’ve been there, reminding me that they don’t teach kingdon in europe and that anything i do with it better be good.

 .

to an assortment of men in cambridge — thank you. john quattrochi, who helped with everything from surviving a wide variety of the perils of working abroad, to early engagement with ideas (“is that what you’re trying to say?”), to getting my defense in place, to making sure the final touches were set. peter rockers, for your early skepticism and patience. jeremy barofsky, for encouragement, even sometimes by example. guy harling, for answering every stupid question i could think of while only occasionally reminding me that there are no stupid questions, only stupid people. zubin shroff, for listening and read-throughs.

 .

victoria fan, livia montana, rifat hasan, and jen manne-goehler have been sounding boards of one sort or another at various times.

 .

to the team at the center for geographic analysis, in particular jeff blossom (near and far!) and sumeeta srinivasan: i would have been lost without you.

 .

jesse bump and ashley fox have constituted a political economy crisis unit and have pulled me together and pushed me forward on more than one occasion. thanks for being key stakeholders.

 .

thank you to an intellectually and emotionally supportive community in delhi, with particular thanks to payal hathi, james pickett, and suvojit chattopadhyay for suffering through chapter drafts. bhuvana anand, shreya ray, sangita vyas, urmy shukla, jessica pickett, diane coffey, dean spears, shagun sabarwal, and markus olapade have all engaged with these ideas and the ideas are better for it. subha ganguly shahi and avi kishore have come in with key moral support.

 .

michael schulman, ian reiley, and liz richardson contributed to this being readable. nikolaos zahariadis and owen barder strengthened ideas. catherine goodman, sarah tougher, melisse murray, prashant yadav, and nora petty have been stand-by and stand-up amfm resources. marcia inhorn and norm daniels have been important mentors and models.

 .

several coffeeshops and restaurants have provided clean, well-lighted places over the years: trident and render in boston; andala and voltage in cambridge; mike’s and swad in tamale; loulou’s beignets in the woodlands; and maison des desserts, coast café, and latitude in delhi. thank you for the tea refills and unhurried surface area. and thank you to seventh heaven in rishikesh for an extended stay and support.

.

for my family, thanks for understanding this whole ‘abroad’ thing as best as possible and, in particular, to aunt janet for patient engagement with early drafts of the manuscript.

 .

finally, a huge thank you to my parents for absolutely everything from the mundane to the massive, from the decision to travel to details to debates to disasters (real and imagined) to deadlines to drafts-upon-drafts to the defense — even though you almost certainly never wanted to know a thing about malaria policy in ghana. tusen takk.

.

chapter I, for those curious about this thing we’ve built (all mistakes my own).