hey, you’re sick? can we know where you live? (simple experiments, complicated designs)

i have been working on a blog for 3ie (now posted, here!), based on a delhi seminar talk i gave recently, in turn based on this paper and this blog on using text messages to increase completion of anti-malarial treatment. not all of the material i initially wrote will appear in the final blog, which is good: it is more focused now. nevertheless, i thought i would share the excised part about the actual experimental set-up. part of the point i made in the seminar, and which i intend to make in the blog, was that a ‘simple’ and ‘hands-off’ intervention (meaning minimal input from either the surveyors or our hosts, the medicine vendors, in encouraging or facilitating take-up) can require an all-the-more complicated study design.


rainy season!

the study was meant to be a proof of concept: it is the first randomized evaluation of a direct-to-patient (rather than to health workers) text messaging program for malaria in sub-saharan africa. as guided by our funder, CHAI (this was an operational research project for the Affordable Medicines Facility – malaria (AMFm)), we wanted to learn about the minimum of supportive moving parts required to get patients enrolled in a text messaging system of reminders to complete treatment. a relatively simple mHealth intervention (much of the intervention work was done by a computer, a modem, some lines of code, and occasionally an electric generator; it was the rainy season, after all) required a fairly intricate experimental set-up.
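those ‘lines of code’ essentially turned each enrollment into a schedule of reminder send-times. purely as a hypothetical sketch (the study’s actual message timing, count, and wording are in the paper, not here):

```python
from datetime import datetime, timedelta

def reminder_schedule(enrolled_at: datetime,
                      course_days: int = 3,
                      per_day: int = 2) -> list[datetime]:
    """Build send times for adherence reminders over an assumed
    three-day, twice-daily antimalarial course (hypothetical timing,
    not the study's actual protocol)."""
    return [enrolled_at + timedelta(hours=12 * k)
            for k in range(1, course_days * per_day + 1)]

# example: patient enrolls at the drug shop on monday at 10:00
sends = reminder_schedule(datetime(2011, 6, 6, 10, 0))
# 6 reminders, one every 12 hours, the last 72 hours after enrollment
```

in the field, a process like this ran on the modem-attached computer and handed each due message to the modem for sending.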


diligently organizing follow-up

it was a deviation for ipa-ghana in terms of not-household-or-firm-based surveying, and it took a lot of logistical coordination (as shown in the photo), attention to timetables, and a willingness on the part of the surveyors to set out without perfectly clear instructions.


things worked roughly like this: with participating anti-malarial vendors, we (the research team) maintained the façade of a separate flyer system inviting people to enroll in a mobile health information system, distinct from the surveyor talking to people obtaining an anti-malarial medication. during that interview about the medicines purchased and prices paid, if the respondent proved willing and eligible, the surveyor asked how we could find them at home. and for their phone number. and for directions to a neighbor’s house. and the neighbor’s phone number. (i have written previously about the fun of trying to track people down based on the directions they give.) just in case we decided to follow up to see how they were doing. the surveyors were purposefully cagey about whether and when this would happen, and to what end.


[i am still amazed that so many people agreed to this (thank you, participants!): if someone approached me in a drug store while i was sick and asked me to chat and then how to find me at home, even my deepest beliefs in survey karma would probably be abandoned in favor of some fairly unbecoming language.]


but, in fact, we had a very precise purpose and timing in mind: to find the respondent between roughly 58 and 72 hours after we met them in a drug shop. with some basic assumptions about when they would start their anti-malarial medication, we wanted to land at people’s homes just after they had completed their medication. respondents often didn’t give very precise directions to their homes, resulting in a lot of motorcycle-mounted wild goose chases, but we also achieved a follow-up rate of 99% within our specified time window. phew! this is really a tribute to the commitment and perseverance of the survey team and our field managers.
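for the curious, the follow-up window is easy to compute from the interview timestamp. this sketch uses the 58- and 72-hour bounds from above; the example date and the implicit three-day-course assumption are mine, for illustration only:

```python
from datetime import datetime, timedelta

def follow_up_window(interview_time: datetime) -> tuple[datetime, datetime]:
    """Return the earliest and latest times to visit a respondent at
    home: 58-72 hours after the drug-shop interview, i.e. just after a
    (presumed) three-day course of medication should be finished."""
    return (interview_time + timedelta(hours=58),
            interview_time + timedelta(hours=72))

# example: met at the shop on monday at 10:00 ->
# visit between wednesday 20:00 and thursday 10:00
earliest, latest = follow_up_window(datetime(2011, 6, 6, 10, 0))
```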

Published by hlanthorn

ORCID ID: 0000-0002-1899-4790

2 thoughts on “hey, you’re sick? can we know where you live? (simple experiments, complicated designs)”

  1. Good to read these results, and I really appreciate the transparency in methods and data. In the spirit of follow-up questions, here are a few thoughts, based on this 3ie blog, the research article on the PLOS ONE site, and Howard’s post on effect size:
    1. What is your sense of the implementation/policy significance of the 4.9 percentage point difference between control and the short reminder message treatment? In a scale up, what is the implication in terms of costs of the intervention and the number of additional individuals being healthy as a result of adherence?
    2. The article discusses the limitation of reaching those at the lowest socio-economic status – those that may not have mobile phones. But amongst those that do – those that enrolled for the study – what portion would constitute the “last mile?”
    3. If we put the HH head education and the Wealth Quintile data together, what can we say about the cost-benefit of these reminders being targeted, as opposed to being universal?
    4. Finally, any idea why the long message seems to have annoyed women into reduced adherence when it seems to have done just as well with men?


    1. (this reply is still awaiting moderation on the 3ie blog but is the same…)

      Hi Suvojit: Thanks for these important and interesting questions and apologies for the delay in response. And, thanks for the shout-out on attempts towards transparency and lesson-learning.

      Unfortunately, part of each of the answers below will be that we don’t know. But I will take a stab nevertheless, in hopes of clarifying and also providing suggestions for future research to consider during the design phase.

      1. I think the immediate policy implication of this study, as per the proof-of-concept idea, is the binary: mhealth / text messages seem ‘to work.’ That is, they are able to play some role, even in a ‘tough’ setting. Therefore, they are a tool worth considering and further tweaking in the pursuit of adherence (including not only dose completion, the focus of the present study, but also using the right drug in the right dose at the right time and on the appropriate schedule, and only treating diagnosed malaria with anti-malarial medication).

      As for whether 5 percentage points matter in the pursuit of 100% appropriate use (a pursuit that will surely require a variety of tools in its arsenal): yes, I think they do. That said, we did not have a pre-specified benchmark of what would be considered ‘successful.’ But we moved the needle in a tough battle that has no silver bullet.

      On the cost end, most of our costs were associated with the development of the text message system. Once the platform is established (particularly if the government were to adopt a platform into which researchers and programmers could just ‘plug in’), the marginal cost of a text message is pretty small, so the text messages alone stand a good chance of scoring well on cost-efficiency.

      However, some of the feedback from vendors, the discussion at the Delhi seminar, and some of the interesting work going on in Kenya and elsewhere suggest expanding the package and therefore the cost:
      – messaging to drug vendors as well,
      – increasing training for vendors,
      – better educational materials and flyers for vendors to complement the text messages, and so on.

      All of this could increase cost as well as effectiveness, and it is hard to predict how the cost-efficiency ratio would shake out. I think it is something worth testing and rigorously assessing (ideally against some pre-specified benchmarks about what levels of cost-efficiency would warrant scaling up the program).
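      As a back-of-the-envelope illustration of how a fixed platform cost and a small marginal cost per message combine into a cost per additional adherent patient (every input below is hypothetical, not a figure from the study):

```python
def cost_per_additional_adherent(n_patients: int,
                                 msgs_per_patient: int,
                                 cost_per_msg: float,
                                 fixed_cost: float,
                                 effect: float) -> float:
    """Total program cost divided by the number of additional patients
    completing treatment, given an adherence effect in proportion terms.
    All inputs are hypothetical illustrations."""
    total = fixed_cost + n_patients * msgs_per_patient * cost_per_msg
    return total / (n_patients * effect)

# illustrative: 10,000 patients, 6 messages each at $0.01/message,
# $5,000 platform development cost, 4.9 pp effect on completion
print(round(cost_per_additional_adherent(10_000, 6, 0.01, 5_000, 0.049), 2))
# -> 11.43 (dollars per additional completed course)
```

      The point of the sketch is structural: at small scale the fixed platform cost dominates, so cost-efficiency improves sharply as the platform is reused or adopted at scale.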

      You also raise an important question on the outcome measure — one that Simon also raised at the seminar but not one I am equipped to answer. Our outcome was dose completion, and self-reported at that. We are not in a position to make causal attributions about the next (intended) link in the theory of change: are people healthier (and more quickly so) as a result of finishing their pills (and, further along, do school and work attendance pick up and does mortality drop)?

      In general, perhaps the malaria/research/impact eval community needs to think about whether appropriate use and adherence are sufficient outcomes or whether studies are only meaningful if they attempt to answer these next links in the causal chain (especially for the most vulnerable, children and pregnant women). These will be more costly studies (they will require blood samples, among other things), but if they are the only results that really matter to decision-makers, then researchers should seek these outcomes. This is a discussion worth having and one area where trying to standardize some outcomes across studies could really make a difference for later evidence syntheses. Vivalt makes a few similar points in her paper.

      2. If I interpret your question correctly, “the last mile” (amongst those who do own phones) could be defined several different ways, at several different links in the causal chain (since not all of these links seem to be holding for all people, it’s hard to say which constitutes the last mile at present — all require attention).

      It could be those who never registered with the SMS system, in which case, future efforts may need to focus on assisting with registration (logistically or financially). This would suggest more active involvement, perhaps, from drug shop vendors in helping with registration.

      Or, we could consider the last mile to be those who were interested enough to register but then didn’t take action as a result. If literacy (or mobile literacy) is the culprit, then IVR is one option; having the vendor, again, take a more active role in previewing the messages could also help. If people opened, read, and understood the messages and still didn’t act, then we might assume that more effort needs to be put into the content in order to encourage the desired change in behavior.

      3. I am not 100% certain I understand this question. First, I caution against reading too much into any of the sub-group analyses, because power is shaky and it is hard to separate signal from noise. Second, to clarify the question: do you mean that the messages may not be targeted precisely to sick individuals because someone else in the household may be the one receiving and reading the message? The answer would depend on understanding more about intra-household communication, about illness as well as about incoming messages, which, again, I am not equipped to answer. However, it is important to understand in this context and would constitute useful follow-up work that recognizes that m-health is still very much part of human systems and exchange.
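      To make the power concern concrete, here is a rough normal-approximation sample-size calculation for detecting a difference of about 5 percentage points between two proportions. The 60% control adherence rate, alpha, and power below are illustrative assumptions, not figures from the study:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_control: float, p_treat: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for a two-sided test of two
    proportions, using the normal approximation."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = z.inv_cdf(power)           # quantile for the desired power
    var = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return ceil((z_a + z_b) ** 2 * var / (p_control - p_treat) ** 2)

# illustrative: 60% adherence in control vs 64.9% with reminders
print(n_per_arm(0.60, 0.649))  # -> 1530 per arm
```

      A sub-group (say, women in one treatment arm) has only a fraction of the enrolled sample but needs roughly this many observations on its own, which is why sub-group estimates are so noisy.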

      4. The answer to this question partially repeats from above: sub-group analysis within a single treatment arm is pretty shaky on power, so be cautious in interpreting those results. If there is indeed a causal link between women, long messages, and dose completion, I don’t think we can jump to the conclusion that it was due to annoyance (I know you were being funny) because we just don’t have the data. Unfortunately, the open-ended questions that tried to solicit feedback about how people felt when they received the messages, or how the messages could be improved, are very heavy on “I don’t know” answers. That’s definitely something to try to improve in future research. Vignettes could, potentially, be an interesting avenue to explore if people have a hard time articulating their own opinions on the spot. I am sure there are other creative tools that could be piloted, and I would be delighted to hear about them. Teasing out these links definitely emphasizes the importance of incorporating a variety of inquiry methods in the same study to understand the full causal chain.

      More generally, no, I don’t think we had a sufficiently strong theory of change underlying why men and women might react differently to the messages (especially one clearly linked to sex or gender roles, as distinct from control of the household phone and literacy), or whether men and women would react differently when the patient was themselves versus their child. Having articulated this better in advance, and building in associated questionnaire items, might have made this planned sub-group analysis more robust, as would, of course, having the money to keep recruiting so that we were better powered for these important sub-group analyses.

      Hope this all helps to clarify some of the remaining uncertainty and offers some useful lessons for future research. Of course, feel free to write back and tell me that I didn’t interpret your questions correctly or to add some additional ones!

