on science, from eula biss’s on immunity.

a nice reminder from eula biss (via on immunity: an inoculation) that science is a series of building blocks, with small tests and then bigger ones to see if each brick helps us reach higher and see farther.

Science is, as scientists like to say, “self-correcting,” meaning that errors in preliminary studies are, ideally, revealed in subsequent studies. One of the primary principles of the scientific method is that the results of a study must be reproducible. Until the results of a small study are duplicated in a larger study, they are little more than a suggestion for further research. Most studies are not incredibly meaningful on their own, but gain or lose meaning from the work that has been done around them… This doesn’t mean that published research should be disregarded but that, as John Ioannidis concludes, “what matters is the totality of the evidence” (p. 133)…

Thinking of our knowledge as a body suggests the harm that can be done when one part of that body is torn from its context. Quite a bit of this sort of dismemberment goes on in discussions about vaccination, when individual studies are often used to support positions or ideas that are not supported by the body as a whole… When one is investigating scientific evidence, one must consider the full body of information (p. 135).

center and periphery in doing development differently

i have spent almost three weeks back in TX, which was supposed to be, in part, a time of immense productivity in front of our fireplace (yes, it is chilly here — probably enough to warrant a fire, but still). i brought home big batches of reading but am taking back far more of it with me to delhi than i had planned.


nevertheless, i did finally make it through duncan green’s post on his immediate thoughts on doing development differently from matt andrews and team. so, that’s only three months behind schedule.


many things are, of course, striking and exciting about this movement, including the idea of rapid iterations to promote (experiential) learning and tweaks, the importance of morale and relationships, and the time horizon.


but the most striking thing had to do with immersion, deep study and deep play*.

deep study of the system, based on continuous observation and listening. In Nicaragua, UNICEF sent public officials out to try and access the public services they were administering, and even made the men carry 30lb backpacks to experience what it’s like being pregnant! This is all about immersion, rather than the traditional ‘fly in, fly out’ consultant culture.

the idea is, it seems, to strike a blow at the ‘consultant culture’ of folks from dc, london and geneva parachuting in to solve problems (there’s probably an interesting discussion to be had about the relevance of area studies in this approach). but that is for another time. what is most immediately striking is that duncan doesn’t report on unicef folks making consultants visiting nicaragua from NYC head out to remote areas and try to access services with pregnant-backpacks.


if i read the anecdote correctly (is there more written about this somewhere?), the target was public officials, which i take to mean nicaraguan civil servants and politicians based in the capital or another metropole. which is an important (re-)lesson. being from X country doesn’t automatically make you knowledgeable about all areas and details of X country (duh). probably many of us have sat with civil servants who talk about ‘the hinterlands’ and ‘backwards’ areas and who seem quite surprised at what they find there, if they visit at all. there is a vast difference between the high-level and the street-level, between big decisions about adopting and championing a policy and the many small decisions involved in implementing that idea. implementation is, as always, profoundly local. (this idea, incidentally, also applies to study design and the relationships between PIs, their research assistants and the field teams.)


this all suggests that, maybe, doing development differently (and probably doing evaluation differently) also has to do with shifting ideas about center and periphery (globally as well as nationally), about who has relevant knowledge, and thinking about immersion for program designers and decision-makers of a variety of types, whether from the country in question or not. this, in part, raises questions about who is doing the iteration and learning and how lessons are passed up as well as down different hierarchies (and spread horizontally). looking forward to hearing and thinking more.


*it’s hard to resist a geertz reference, since ‘continual observation and listening’ sounds an awful lot like ‘participant-observation,’ a study technique that almost *never* comes up in ‘mixed-methods’ evaluation proposals.

Win a prize guessing how much trial registration reduces publication bias!

Originally posted on Berkeley Initiative for Transparency in the Social Sciences:

Does trial registration make an impact on publication bias? Knowing the answer could earn you a cash prize!


Macartan Humphreys (Columbia, Political Science) and collaborators Albert Fang and Grant Gordon are doing research on how publication (and publication bias) changed after the introduction of registration in clinical trials. They also want you to guess what the changes were. The bidder with the closest guess will win a $200 cash prize. Click here to read more and enter a guess.

Enthusiastic supporters of research transparency are often keen on advocating for the registration of trial experiments. But in the social sciences the practice remains fairly rare and its impact on publication bias is relatively unknown. Fortunately, social scientists can learn from their peers in the medical sciences who have been required to register their medical trials since 2005. The research of Humphreys et al. will look to see if there was a…


buffet of champions: what kind do we need for impact evaluations and policy?

i realize that the thesis of “we may need a new kind of champion” sounds like a rather anemic pitch for guardians of the galaxy. moreover, it may lead to inflated hopes that i am going to propose that dance-offs be used more often to decide policy questions. while i don’t necessarily deny that this is a fantastic idea (and would certainly boost c-span viewership), i want to quickly dash hopes that this is the main premise of this post. rather, i am curious why “we” believe that policy champions will be keen on promoting and using impact evaluation (and subsequent evidence syntheses of these) and to suggest that another range of actors, which i call “evidence” and “issue” champions, may be more natural allies.

there has been a recurring storyline in recent literature and musings on (impact) evaluation and policy- or decision-making:

  • first, the aspiration: the general desire of researchers (and others) to see more evidence used in decision-making (let’s say both judgment and learning) related to aid and development so that scarce resources are allocated more wisely and/or so that more resources are brought to bear on the problem.
  • second, the dashed hopes: the realization that data and evidence currently play a limited role in decision-making (see, for example, the report on the evidence on evidence-informed policy-making as well as here).
  • third, the new hope: the recognition that “policy champions” (also “policy entrepreneurs” and “policy opportunists”) may be a bridge between the two.
  • fourth, the new plan of attack: bring “policy champions” and other stakeholders into the research process much earlier in order to get uptake of evaluation results into the debates and decisions. this even includes bringing policy champions (say, bureaucrats) on as research PIs.

there seems to be a sleight of hand at work in the above formulation, and it is somewhat worrying in terms of equipoise and the possible use of the range of results that can emerge from an impact evaluation study. said another way, it seems potentially at odds with the idea that the answer to an evaluation is unknown at the start of the evaluation.

while i am not sure that “policy champion” has been precisely defined (and, indeed, this may be part of the problem), this has been done for the policy entrepreneur concept. so far as i can tell, the entrepreneurial (brokering, middle-man, risk-taking) role in policy-making was first articulated by david e. price in 1971. the idea was repeated and refined in the 1980s and then became more commonplace in 1990s’ discussions of public policy, in part through the work of john kingdon. (there is also a formative and informative 1991 piece by nancy roberts and paula king.) much of the initial discussion, it seems, came out of studying US national and state-level congressional politics, but the ideas have been repeatedly shown to have merit in other deliberative settings. much of the initial work also focused on agenda-setting — which problems and solutions gain attention — but similar functions are also important in the adoption and implementation of policy solutions.

kingdon is fairly precise about the qualities of a policy entrepreneur — someone who has, as kingdon calls it, a pet policy that they nurture over years, waiting for good moments of opportunity to suggest their policy as the solution to a pressing problem.

  • first, such a person must have a “claim to a hearing” — that is, at least behind-the-scenes, people must respect and be willing to listen to this person on this topic (especially if this person is not directly in a position with decision-making power).
  • second, such a person must have networks and connections as well as an ability to bargain and negotiate within them. this is a person that can broker ideas across diverse groups of people, can “soften-up” people to the entrepreneur’s preferred policy solution, etc.
  • third, such a person must have tenacity, persistence and a willingness to risk personal reputation and resources for a policy idea.

in kingdon’s and others’ conception, a policy entrepreneur has to work at selling their idea over a long period of time (which is presumably why weissert (1991) also introduced the idea of policy opportunists, who only start to champion ideas once those ideas make it to the deliberating table and seem likely to move forward).

in short, policy entrepreneurs (and, through the sloppy use of near-synonyms, policy champions) believe strongly in a policy solution for some reason and have put time, effort, and reputation into moving the idea forward. note the nebulous use of “some reason” — i have not found a definition that specifies that policy entrepreneurs must come to promote a policy through a particular impetus. glory, gold, god, goodness, and (g’)evidence all seem to be viable motivators to fit the definition.

my question is: is this what we need to support the use of research (and, specifically, impact evaluations and syntheses thereof) in decision-making? it is not clear to me that it is. policy entrepreneurs are people already sold on a particular policy solution, whereas the question behind much evaluation work is ‘is this the best policy solution for this context?’ (recognizing the importance of contextual and policy, if not clinical, uncertainty about the answer in order for an evaluation to be worthwhile).

it seems to me, then, that what we (researchers and evaluators) actually need are people deeply committed to one of two things: (1) the use of data and evidence in general (“evidence champions” or, at least loosely, technocrats) as an important tool in sound decision-making and/or (2) a particular issue or problem (“issue champions” — no doubt a sexier phrase is available). i’ll spend more time on the second.

an “issue champion,” for example, may be someone who has similar qualities to a policy entrepreneur but, rather than using claims to a hearing, a network, and tenacity to bring forward a policy solution, s/he uses these tools to bring attention to a problem — say, malaria mortality. this person feels that malaria is a problem that must be solved — and is open to finding the most (cost-)effective solution to the problem (or means to do a good job with implementing that solution). s/he is not, by contrast, someone already committed to believing that prevention, diagnostics, or treatment in any particular form or at any particular price is the best way forward until s/he has seen evidence of this in a relevant context. this is different from a “policy champion” who has, for example, been pushing for universal bednet coverage for the past 20 years.

this is not to say that you don’t want the bednet champion to be well aware of your study and even to have input into defining the research questions and approving the research design (in fact, this seems vital in lending credibility and usefulness to the results). but the way the study is structured will be important to whether the bednet champion is open to taking up the range of possible results from your study. if your question is: does approach A or approach B result in more efficient distribution of bednets? then yes, both sets of results will be interesting to the bednet champion. but if the question is more of the type: are bednets the most cost-effective approach to addressing malaria mortality in our country? then the bednet champion is likely to be particularly interested in trumpeting only one set of results: those that are significantly in favor of bednets as a solution to the malaria problem.
the malaria champion (or general evidence enthusiast), on the other hand, may be more open to thinking about how to interpret and use the range of possible results from the study, which may also be mixed, inconclusive, or even negative. (throughout this discussion, i recognize that malaria, like all problems in human and economic development, doesn’t have a silver-bullet answer and that, therefore, “A or not-A”-type evaluation questions will only get us so far in getting the right mix of tools in the right place at the right time. i.e., the answer is likely neither that bednets do no good nor that they are the only thing needed to tackle malaria.)

the worry, then, with the policy champion is that they are already committed to a policy solution. will they change their mind on the basis of one study? probably not (nor, necessarily, should they. but a meta-analysis may not sway them either). but insofar as “we” want decision-makers to learn about our evidence and to consider it in their deliberations, it may be issue, rather than policy, champions that are particularly important. they may make use of the results regardless of what they are. we cannot necessarily expect the same of the policy champion. of course, a small army of evidence champions is also helpful.

i do want to stress that it is critical to have policy champions and other stakeholders involved early in the research-design process, so that the right questions can be asked and the politically and contextually salient outcomes and magnitudes considered. but as an ally in the evaluation process and, say, a potential PI on an evaluation, it seems that the issue champions are the folks likely to stick with it.

and, yes, issue champions should probably have some moves ready, in case of a dance-off (as there will always be factors beyond evidence and data influencing decisions).

hey, you’re sick? can we know where you live? (simple experiments, complicated designs)

i have been working on a blog for 3ie (*now posted, here!), based on a delhi seminar talk i gave recently, in turn based on this paper and this blog on using text messages to increase completion of anti-malarial treatment. not all of the material i initially wrote will appear in the final blog — which is good, as it is more focused now. nevertheless, i thought i would share the excised part that i wrote about the actual experimental set-up. part of the point i made in the seminar — and which i intend to make in the blog — was that a ‘simple’ and ‘hands-off’ intervention (meaning minimal input from either the surveyors or our hosts, medical vendors, in encouraging or facilitating take-up) can require an all-the-more complicated study design.


rainy season!


the study was meant to be a proof-of-concept — it is the first randomized evaluation of a direct-to-patient (rather than to health workers) text messaging program for malaria in sub-saharan africa. as guided by our funder, CHAI (as an operational research project for the Affordable Medicines Facility – malaria (AMFm)), we wanted to learn about the minimum of supportive moving parts required to get patients enrolled into a text messaging system of reminders to complete their medication. a relatively simple mHealth intervention – much of the intervention work was done by a computer, a modem, some lines of code, and occasionally an electric generator (it was the rainy season, after all) – required a fairly intricate experimental set-up.
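for a flavor of what ‘a computer, a modem, and some lines of code’ can amount to, here is a minimal sketch in python. to be clear: the actual system was a purpose-built application, and everything below — the dosing times, the message text, the `send_sms` stand-in, the 50/50 assignment — is invented for illustration.

```python
import random
from datetime import datetime, timedelta

random.seed(0)  # reproducible toy run

REMINDER = "Please remember to take your malaria medicine until it is finished."
DOSE_HOURS = [8, 20]   # assumed twice-daily dosing times
COURSE_DAYS = 3        # a standard ACT course

def send_sms(phone: str, text: str) -> None:
    # stand-in for the real GSM-modem interface
    print(f"SMS to {phone}: {text}")

def enroll(phone: str, now: datetime) -> list[tuple[str, datetime]]:
    """when a patient calls or 'flashes' in, randomize them to an arm and,
    if assigned to reminders, schedule texts across the 3-day course."""
    if random.random() < 0.5:   # control arm: enrolled but no reminders
        return []
    return [(phone, (now + timedelta(days=d)).replace(hour=h, minute=0))
            for d in range(1, COURSE_DAYS + 1)
            for h in DOSE_HOURS]

# a real dispatch loop would sleep until each job came due; here we just
# print the schedule it would work through
for phone, due in sorted(enroll("+233244000000", datetime.now()),
                         key=lambda job: job[1]):
    print(f"{due:%a %H:%M} ->", end=" ")
    send_sms(phone, REMINDER)
```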


diligently organizing follow-up


it was a deviation for ipa-ghana in terms of not-household-or-firm-based surveying, and it took a lot of logistical coordination (as shown in the photo), attention to timetables, and a willingness on the part of the surveyors to set out without perfectly clear instructions.


things worked roughly like this: with participating anti-malarial vendors, we (the research team) worked to maintain the façade of a separate flyer system inviting people to enroll in a mobile health information system, distinct from the surveyor talking to people obtaining an antimalarial medication. during that interview about the medicines purchased and price paid, if the respondent proved willing and eligible, the surveyor asked how we could find them at home. and for their phone number. and for directions to a neighbor’s house. and the neighbor’s phone. (i have written previously about the fun of trying to track people down based on the directions they give.) just in case we decided to follow up to see how they were doing. and the surveyors were purposefully cagey about whether and when this would be happening and to what end.


[i am still amazed that so many people agreed to this (thank you, participants!) — if someone approached me in a drug store while i was sick and asked me to chat and then how to find me at home, even my deepest beliefs in survey karma would probably be abandoned in favor of some fairly unbecoming language.]


but, in fact, we had a very precise purpose and timing in mind: to find the respondent between ~58 and 72 hours after we met them in a drug shop. with some basic assumptions about when they would start their antimalarial medication, we wanted to land up at people’s homes just after they had completed their medication. respondents often didn’t give very precise directions to their homes, resulting in a lot of motorcycle-mounted wild goose chases – but we also achieved a follow-up rate of 99% within our specified time window. phew! this is really a tribute to the commitment and perseverance of the survey team and our field managers.
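(the window arithmetic itself was the easy part. a toy sketch, with an invented enrollment time, of the plan each tracking team was effectively working from:)

```python
from datetime import datetime, timedelta

def followup_window(enrolled_at: datetime) -> tuple[datetime, datetime]:
    # assuming the patient starts the 3-day course the day we meet them,
    # aim to arrive between ~58 and 72 hours later, just after the last dose
    return enrolled_at + timedelta(hours=58), enrolled_at + timedelta(hours=72)

# invented example: met at a drug shop on a monday, mid-morning
start, end = followup_window(datetime(2011, 7, 4, 10, 30))
print(f"visit between {start:%A %H:%M} and {end:%A %H:%M}")
# -> visit between Wednesday 20:30 and Thursday 10:30
```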

that may not mean quite what you think it means: john henry and americana edition

occasionally on this site, i try to provide some background on phrases and cliches in social science and global health (such as here and here). it is a small public service to help folks avoid being sicilians yelling “inconceivable!” (or starting land wars in asia, if at all possible).

today, the john henry effect.


the john henry effect is a reactive effect we could find in the comparison group of an experiment (or any non-intervention group) when the comparison group is aware it is not receiving treatment. with this knowledge, they might react by working harder to compensate for not having the intervention. the effect, apparently, also includes the reaction amongst the ‘non-treated’ of becoming discouraged at not having received the intervention and working less hard, though i am less familiar with this usage. in any case, we could just call them ‘reactive effects’ and, given all the other cultural roles and meanings of john henry, i wonder if we just should.
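to make the compensation story concrete, here is a toy simulation — all the numbers are invented, purely to show the direction of the bias. if aware controls work harder, the estimated effect shrinks even though the true effect is unchanged:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 2.0     # what the treatment actually adds to the outcome
COMPENSATION = 1.5    # extra effort from controls who know they're controls

def outcome(treated: bool, aware_control: bool) -> float:
    y = random.gauss(10, 2)        # baseline outcome
    if treated:
        y += TRUE_EFFECT
    elif aware_control:
        y += COMPENSATION          # the john henry effect
    return y

def estimated_effect(controls_aware: bool, n: int = 5000) -> float:
    treat = [outcome(True, False) for _ in range(n)]
    ctrl = [outcome(False, controls_aware) for _ in range(n)]
    return statistics.mean(treat) - statistics.mean(ctrl)

print(f"blinded controls: {estimated_effect(False):.2f}")  # close to 2.0
print(f"aware controls:   {estimated_effect(True):.2f}")   # close to 0.5, biased down
```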

the point of this post is not about the john henry effect but about john henry. first, though, a small point. david mckenzie‘s post on the john henry effect (and that we shouldn’t be too worried about it) concludes “often our best approach may be to try and reduce the likelihood of such effects in the first place – while it can be hard (or impossible) to hide from the treatment group the fact they are getting a treatment, in many cases the control group need not know they are controls.”

this seems at odds with mckenzie’s apparent support in other places for public randomization (example here) – in which case, the comparison group would very well know that they were not receiving the treatment. (the problem, in part, is that we have limited scope in the way of placebos in social science work. ethics aside, we simply don’t know how to give you a malaria-bednet-that-isn’t-really-protective in the way that i can give you a lookalike pill that has no active pharmaceutical ingredients. which is, perhaps, another argument for testing treatment variants against each other rather than treatment against just ‘business as usual’/nothing new.)


in any case, the real point of this post is about john henry the man/myth. from a recent conversation with a colleague, it was clear that, for him/her, the john henry effect could have just as easily been named for the researcher that discovered the effect or the site at which it was first noted (as in the hawthorne experiments).

which is fair enough. john henry is an element of american folklore (though there may well be counterpart or antecedent stories in other cultures, and i would be delighted to hear about them), so why should anyone else be clued in?

however, i had to sing a song about john henry in a 5th grade choir performance about american tall tales (quite possibly the last time i was permitted to sing on stage), so i am fully qualified to provide some background on john henry.


it seems (mostly according to here and here) that john henry was likely a real man — definitely black, possibly born a slave. he worked for the railroads following the civil war (in the late 1860s and 1870s). he was well-suited to this work as a “steel driving man”: from existing accounts, he was both quite tall and muscular. most accounts say he worked for the C&O Railroad (chesapeake & ohio) and many accounts put his work as drilling through the big bend mountain in west virginia, where it was decided it was more expedient to make a tunnel rather than go around the mountain (alternatively, he worked on the nearby lewis tunnel under similar circumstances).

“as the story goes, john henry was the strongest, fastest, most powerful man working on the rails. he used a 14-pound hammer to drill 10 to 20 feet in a 12-hour day – the best of any man on the rails. one day, a salesman came to camp, boasting that his steam-powered machine could outdrill any man. a race was set: man against machine. john henry won, the legend says, driving 14 feet to the drill’s nine. he died shortly after, some say from exhaustion, some say from a stroke.”

another account, allegedly from an eyewitness, collected by sociologist guy johnson in the 1920s, runs:

“when the agent for the steam drill company brought the drill here, john henry wanted to drive against it. he took a lot of pride in his work and he hated to see a machine take the work of men like him. well, they decided to hold a test to get an idea of how practical the steam drill was. the test went on all day and part of the next day. john henry won. he wouldn’t rest enough, and he overdid. he took sick and died soon after that.”

john henry became the subject of ballads and work/hammer songs (e.g. here and here) and an important touchstone for the american labor and civil rights movements. he is a lot more than a possible effect in social experiments!


as a closing thought: when we discuss john henry effects, we mostly think about his working hard in compensation for not having the treatment (a machine) — or even proving that the treatment was unnecessary because of pride in the status quo. we think less about the fact that he died from it. given this part of the story, should we find john henry effects, we may want to consider not just that they might mess up our effect estimation — but that harms could be coming to groups not receiving interventions if they are over-compensating in this way (more akin to how john henryism and sojourner truthism are used in sociology and health psychology (e.g. here and here) to describe the african-american experience and weathering).

learning by asking: a modest proposal to engage those who did the doing

this post was originally published at innovations for poverty action. it represents an effort to follow through on some themes i have raised before and hope to continue to raise.

most development interventions are carried out and delivered by local research staff and residents. such implementation is rarely a straightforward ‘technical’ operation but, rather, there is social and political nuance in translating an idea into practice. on-the-ground partners therefore often have important insights about research engagement and implementation processes. but researchers often do not solicit such feedback as a requisite part of their work. moreover, even when such questions are asked, the results are often not synthesized into the lessons learnt from the project, either in terms of setting the future research agenda or in terms of tweaking on-going programs and interventions.

the Preserving ACTs (PACT) impact evaluation used text messages (via this code, built by @eokyere and @slawarokicki) to encourage the completion of the most effective anti-malarial treatment (artemisinin-based combination therapies, or ACTs (e.g.)) in and around Tamale, Ghana. as the study drew to a close, we wanted to thank the malaria-drug vendors for implementing the text-message intervention under investigation and for hosting the study’s surveyors. in addition, we wanted to solicit their feedback about the project and research process. our goals were to better understand their experience of working with the research team and the study process, their experience of the intervention process, and whether, how, and with what changes they thought the intervention should be scaled following the study.


photo credit: alidu osman tuunteya

in mid- to late-2011, one of our field managers went back to all 73 vendors – hospitals, clinics, pharmacists, and licensed chemical shops, both public and private – that had hosted our study, to thank them and to solicit their feedback on the study process and PACT intervention. 65 vendors were available and willing to be interviewed. these follow-ups took place in the third and fourth week of October 2011 and provide a unique set of data for considering the implementation of this and future studies. they also shed some light on the program that provided the context and basis for this operational research study – the Affordable Medicines Facility – malaria (AMFm) — discussed more at the end of this post.

as a token of our appreciation for their hosting PACT and our surveyors (and the only incentive provided to vendors for hosting the study and intervention), we provided a thank you note / certificate of participation. a few of the vendors noted that our follow-up was appreciated, such as “we thought you left without a goodbye” and  “I really never thought you would be back again after that long time.  But it is great you still remembered us and even come to officially say goodbye to us.”

as described in the paper, drug vendors played an important role in the study, which spanned much of the rainy season in northern Ghana as well as the Muslim fasting period of Ramadan, as mentioned in some of the follow-up interviews.

first, vendors were in charge of distributing the flyers to patients acquiring a malaria treatment. from this flyer, the patient would know to call or ‘flash’ (ring once for no charge) into our ‘mobile health information system,’ through which they were randomized to receive reminder text messages.

second, vendors provided an IPA surveyor a place to sit and talk to patients about their anti-malarial treatment and, if eligible, to recruit them into the study. Sometimes this meant giving them a chair in a hospital waiting room; other times, it meant the IPA team providing a chair for a surveyor to sit in the limited space offered by a one-man or one-woman drug shop.

given the study design, the distribution of flyers and the text messages were kept conceptually and, as feasible, spatially separate from the IPA surveyor and his or her questions about the patient’s malaria medication. we included a question about this in the follow-up: the full range of vendor responses can be viewed here [link to the spreadsheet], as well as an overview of how responses were coded.* in short, the codes used to categorize responses emerged from the responses themselves in light of themes or ideas repeated across several answers.
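as a schematic of that coding step — the themes, keywords, and responses below are invented for illustration, not drawn from the actual spreadsheet — the process amounts to tallying recurring ideas across free-text answers:

```python
# invented themes and keyword lists, for illustration only
THEMES = {
    "credibility": ["credibility", "respect", "advice"],
    "sales":       ["sales", "business", "stock"],
    "dosage":      ["dosage", "complete", "finish"],
}

responses = [
    "the text messages increased our credibility with patients",
    "business was normal but i earned respect from individuals",
    "people still do not complete the dosage without reminders",
]

# a response gets a code for each theme whose keywords it echoes
for r in responses:
    codes = [theme for theme, kws in THEMES.items()
             if any(kw in r.lower() for kw in kws)]
    print(codes, "<-", r)
```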

given the sample-size needs of the study, surveyors were only placed at vendors that had ACTs in stock and were selling at a reasonable volume. this often meant longer stays at some of the hospitals, clinics, and popular pharmacies and LCSs but shorter stays with other vendors. some of the vendors expressed disappointment about this, such as, “I felt the surveyors did not spend much time in my shop.  You should have stationed a surveyor at my shop for at least three weeks or 4 weeks” and “I did not experience much due to a lack of patients during the study period at my shop.”

experiences with PACT.

though we did not explicitly ask vendors if PACT addressed a problem they thought was important, three-quarters volunteered that non-completion of ACTs was a problem they faced and many welcomed the support of a program like PACT to try to encourage people to finish their medication and to strengthen whatever advice they already gave to their patients – for example, saying that the text messages “made patients to take my advice more seriously” and that they “increase our credibility and patients to stick to these advice we give as a health service provider”.

others felt that the text messages led to patients feeling they were receiving additional concern for their wellbeing. one noted that PACT “gave patients the courage to come back to us and ask questions of the drugs given to them, and then some do come back later to report other issues” and another that “patients that did initially not want to participate have come later to commend the text reminder.”

prior to agreeing to host PACT, some vendors were concerned that the flyer and survey process would interrupt patient flow, affect their business, or lead to an over-focus on malaria patients at the expense of other clients. after the study, about 40% felt their sales stayed normal during the study, while about 20% felt their sales had increased, which they attributed to PACT. nobody reported a disruption in their sales. about 40% did note that the questionnaires administered by surveyors were long and were uncomfortable for patients who were sick and tired – hence their being at a drug vendor in the first place.

about 20% also felt their status or credibility increased when they were offering the program, such as “business was normal but on social grounds, I earned respect from individuals that were sent SMS…  It was a remarkable experience of the concern they achieved from your end. So, I gained more credibility to my advantage.” some of these benefits may, of course, reflect a novelty effect and would not offer any one vendor an advantage if the program scaled-up to all vendors.

there were also a few unexpected benefits likely not to be replicated in a scaled-up version of the program — for example, one vendor reported that “your surveyor also helped me track a thief that came to steal from me”!

several vendors highlighted that it’s not always easy for them when a study like this – which facilitates their providing an additional service to patients and clients — ends, noting, for example, that “some patients still ask me if they will get the SMS” and that “now I have to explain to my clients that I cannot send SMS to remind them now, which is sort of sad”.

continuing and scaling up the program.

most vendors (97%) explicitly said that it would be good if the PACT program – or something like it – could be extended beyond Tamale and the time period of the study, and also to other medications. for example, one noted “it should be scaled up so that at least I know my family that does not reside here in Tamale would also benefit from this great experience and education” while another said “the program should be nationwide because I know this attitude” – “the bad attitude of not completing the dosage” – “is widespread” and another similarly stated “the issue of people not completing their dosage is nationwide, so the program must be nationwide”.

about half saw no predictable barriers to continuing a program like PACT on their own; nearly a quarter thought that during busy times, it might be hard to make time to explain the program to patients and enroll them. 42% said they would need more training. half noted that materials (such as flyers) and money for credit would need to be provided to them, and several noted that extra components, such as t-shirts, would help motivate them and advertise the program better. for example, “I would also love to be given [something extra], like t-shirts, that could help motivate the vendors to do the work diligently.”

vendors also had a variety of suggestions on how enrollment into the text messaging system could be encouraged. one set of suggestions included removing any costs to patients, such as making the enrollment line toll-free. to clarify what happened in practice: if patients texted into the program, the cost of a text message was deducted from their phone balance. ‘flashing’ (dialing and allowing one ring) into the system did not cost patients any money, but phones (rather than our messaging system) required a positive balance to place the call.

more generally, 37 vendors (60%) suggested ways of increasing enrollment into the text messaging program, including the vendor or PACT team doing the actual enrollment as well as providing more information to patients about the program and why it was important. for example, one noted “if people are educated well, they will charge their phones and enroll later.” given the goal of the study — to see whether and how many people would enroll with a limited amount of encouragement and input — vendors provided limited (and surveyors provided no) information to the patient along with the flyer. in retrospect, however, some vendors saw this as a missed opportunity to educate patients about an important problem.

fourteen of the vendors raised concern that the text messages excluded illiterate patients – some suggested using voice calls as well as TV and radio to better educate about the importance of completing an ACT once it is started.

distinctions with AMFm (and implied suggestions for future AMFm-like programs).

the Affordable Medicines Facility – malaria (AMFm) provided the context for the PACT study, although the text messaging program was meant to be seen as independent of the ‘green leaf’ efforts explicit to AMFm marketing. as such, we asked questions of vendors to assess whether they saw the programs as distinct. In so doing, they ended up commenting on differences in the program in a way that may shed light on future AMFm-like efforts. for example, “programs for the ‘green leaf’ have more TV and radio coverage than your PACT program, but then you made more impact of encouraging people to take their drugs.  The other programs only encourage buying and awareness” or that with AMFm, “people do not get education on usage or encouragement to complete dosage”.

to clarify, the AMFm-linked advertisements focused on telling people to act fast when they suspected they had malaria, to look for the “green-leaf” quality logo, and what price these logoed ACTs should cost. as such, one vendor noted “the ‘green leaf’ programs don’t have any sort of attachment to the patient but your intervention seemed to involve and appreciate the patients more, so they felt respected and began to place some emphasis on their health.”

also, it is worth noting that elements of PACT seemed to alter the way patients viewed ACTs as well as the perceptions and stocking decisions of the vendors themselves. overall, 65% of vendors reported such changes. as part of AMFm, vendors should have received trainings provided by various government and professional bodies, as well as the mass media “green leaf” advertisements — but this seems to have not always been the case, at least by the time the PACT study was carried out.

for example, one vendor noted that “people are now more confident in the ACT drugs” and that “I now keep more stock of ACTs and on days that I do not have ACTs, the clients disturb me so much.  So for now, I make sure to keep more stock of ACTs.” Another said, “to be honest, it was the start of your study and the subsequent positioning of a surveyor at your shop that made me to purchase ACTs, and now I have many varieties of ‘green leaf’ drugs” while another reported “we were a bit skeptical about the new ACTs, so we had few in stock, but your presence also made us want to be more serious and also keep different types of ACTs.”

overall, the data presented above and in the spreadsheet (available below) suggest there is much that can be learned from soliciting feedback from program implementers, which can be taken into account in designing future studies and interventions. this type of engagement can ensure that humanity and humility remain part of the implementation and learning process.

you can download the spreadsheet with more information and responses here (excel format).

*where a vendor directly identified themselves or their workplace in their response, we have slightly altered the response, using italics in place of the missing word or phrase.

**important thanks go to Jeff Mosenkis, Corrina Moucheraud, Elianne Oei, John Quattrochi, and the late Elif Yavuz for encouragement in writing-up and thinking-through these results. the data and photographs above were collected by Usama Salifu and Alidu Osman Tuunteya. the biggest thanks go to the vendors for taking the time and energy to provide additional insight.