pirates in the news! (also, kids, if you’re a famous pirate, you still maybe shouldn’t do drugs)

the story is reported here but has yet to be corroborated.

it’s unclear if the Iranians really have Garad. so far, only Africa Review is reporting it… “He was like Carlos the Jackal in the crime world,” Andrew Mwangura of the East African Seafarers Association told Africa Review [about Garad].

according to David F. Marley…the “secretive” Garad — it’s not his real name; it means “clan leader” — worked his way up from hijacking small vessels near the Somali fishing port of Eyl to seizing a Japanese bulk carrier, the Stella Maris, for 11 weeks in 2008 before its owners paid him $2 million. not many people have interviewed Garad. during one of his rare interviews, Marley recounts, Garad appeared “ragged” and his eyes were “scratched raw by constant rubbing — a textbook case of khat withdrawal.”

Garad may have been on drugs if he underestimated the Iranian navy.

fun with questions (I)

from a letter home from ghana, in which i tried to explain the basics of how the research i was working on actually…worked. plus, the response from one of my very favorite people on the planet. roughly, the research project tried to convince people purchasing malaria medication to enroll in a text messaging system that would send reminders to complete the full course of their medication. as a qualification: since being back, i have had several conversations with folks about the below – and at least one experience asking for directions in nyc – that have convinced me that people here in the US aren't much better at giving directions.
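for the curious, the reminder side of the system can be pictured as little more than a schedule generator. here's a minimal sketch in python – the function name and regimen parameters (two doses a day for three days) are illustrative assumptions on my part, not the actual study protocol:

```python
from datetime import datetime, timedelta

def reminder_schedule(first_dose, doses_per_day=2, num_days=3):
    """Return the datetimes at which to send SMS reminders.

    The default regimen (2 doses/day for 3 days) is illustrative,
    not the protocol actually used in the study.
    """
    interval = timedelta(hours=24 / doses_per_day)
    total_doses = doses_per_day * num_days
    return [first_dose + i * interval for i in range(total_doses)]

# six reminders, twelve hours apart, starting from the first dose
schedule = reminder_schedule(datetime(2012, 4, 1, 8, 0))
```

the hard part of the study, of course, was not generating the schedule but getting people to opt in – and then finding their houses again.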

i write:

one thing we learned is that people are not spectacular at providing directions to their house or even providing details about their house. For example, we spent quite some time looking for a house with a blue door that, in fact, did not have what could even be generously called a blue-ish door. In addition, when we did locate the house (and it took a while), it was right next to a well-known ‘spot’ (bar), though the only landmark provided by the respondent had been an unnamed provisions shop, of which there were many. This resulted in quite a lot of revision to the questions about how to find our respondents later (more or less ‘do you live near ANYTHING well known, including a church, a water pump, a public toilet, really, anything at all that someone other than you would know about?’)

to be fair, we should acknowledge that we were putting our respondents in a somewhat difficult position and that it is kind of miraculous that we got so many eligible and willing participants. let’s quickly recast our research at your neighborhood drug store.

you are either sick or know someone who is sick (or like to be really, really prepared) because you are out buying cold/flu medicine. it’s entirely possible that you have a sick child with you and also a reasonable chance that you or the child (or both) are feverish. you check out and thank the cashier (you’re so nice!) and are getting ready to leave. either the cashier suggests that you might talk to the well-dressed person in the corner with a notebook or the cashier is forgetful and the well-dressed person has to approach you. either way, the well-dressed person asks if you can answer a few questions. maybe you are obliging; maybe you are more like me, trying to actively dodge sidewalk canvassers (why is there not a ‘dodge Save the Children’ app yet?). if the person gets you to stop, suddenly s/he wants to know about a lot more than just the medicine you bought – all about the kind of water or toilet facilities your household has and the education level of different household members. i would say (soapbox alert!) the questions ‘would you answer that?’ and ‘could you answer that?’ – asked by researchers of themselves and their peers – are among the most underutilized questions in questionnaire design.

anyway, the person also hands you a flyer with a phone number that you can call or text for further health information to be texted to your phone. so far, maybe this is a mild annoyance or, at best, a novel break in routine.

but then, the person wants to see you again. maybe. but can’t give you any specific dates or times or information about this potential next visit. the well-dressed person actually fully intends to see you again – at your house, approximately 72 hours later, to inspect your medicine – but can’t tell you that, lest the impending visit change your (or the intended recipient’s) propensity to take the medicine as directed. so, the person is elusive and asks more generally about when you can be found at home on, say, fridays. and asks for your phone number. and detailed directions to your house. and then asks for someone-else-who-knows-how-to-find-you’s phone number and directions to that person’s house as well. (yes, the people give consent to give us their contact information — but still.)

even asking about ‘where you generally are on fridays’ was a big concession in the original research design, as we were aiming for total surprise (see: Spanish Inquisition) as to the timing and purpose of our visit. but, we slowly had to erode the surprise after a lot of turning up at empty houses for follow-up interviews.

awesome follow-up:

just got around to reading this. it makes me want to write my own questionnaires. 

do you know where your house is? why or why not?
are you the only person who knows where your house is?
would you be able to find your own house?
could anyone else find your own house? describe. 
what are some secrets that may be used to find your house quickly?
what are a few possible ways to get lost when going to your house?
name three colors that could be used to describe your house. 
do you have a toilet?
do you like your toilet? 
would your toilet describe him or herself as clean or dirty?
would you trust your toilet’s judgment to answer that question? why or why not?
do people you entertain at home often leave sick? what types of ailments afflict them?
are goats ever allowed into your home?*
even if they are very loud in the back yard? 
what about during parties?
*please note that we did have pet goats in college and they were loud.

wait, we’re experimenting, right?

many of the descriptions of the ideal next World Bank president – at least the ones with which I agree – have called for a little more humility about how much we actually know about economic & human development and poverty reduction.

so it’s frustrating to see articles like this, which imply a low level of humility about the work we are doing and an unclear commitment to learning what actually does and does not work (regardless of felt commitment to poverty reduction & development).

a large part of the reason that experiments and impact evaluations in development have become popular is that we weren’t getting as far as we needed with theory, intuition or observation alone. money and other resources were being put into programs when we didn’t know whether they were effective (even if things seemed to be changing in the presence of the program), let alone how they compared to other programs in terms of efficacy or cost-efficiency. process and implementation evaluations that could have improved subsequent program interventions were not being conducted and/or shared.

it seems like we need to pause and think about how and why we are experimenting.

  • we experiment because we don’t know what works – or whether something that works in one location will work in another. if we knew what worked, we would potentially be under some ethical obligation to do that thing for all people in all places we thought it would work. when we don’t know what works, or when there is at least genuine disagreement about the best approaches, an experimental design is justified. in short, we need to bring equipoise into social science research. in part, this means that we should be testing our new (experimental) idea against the best known or available intervention with a similar goal, much as new drugs are usually tested against both a placebo and a regularly used treatment.
  • because we are experimenting, we should encourage the publication of null findings and laud these as equally important learning experiences. this requires funders to recognize such reporting as essential to the accountability of studies and program implementations. it also requires changing the strong bias of journal editors and reviewers toward publishing only significant findings. confidence intervals aside, null findings may be just as “significant” for our understanding of what works and doesn’t work in development as statistically significant results.
  • evaluations probably need to start to look more like programs that could be scaled up. there are good experimental reasons for manipulating only one or two key variables at a time and trying to limit all other contamination, but there has to be increasing movement toward learning what works in situ, even if that means there is more than one moving part. and if it is really unclear how the findings from an experiment would be scaled up in a program or policy, then the experiment likely needs to be re-thought.
  • also, we need to think more about the ethics of doing social science experiments in low- and middle-income countries. there are increasing obligations for clinical research by large pharmaceutical or academic institutions, if the drug proves effective, to – at a minimum – make the drug available to – again, at a minimum – the host community. this is because the host community bore some risk in participating in an experimental intervention — but more generally because any intervention alters biological and social patterns that will remain changed after the trial ends and the researchers leave the community to publish their results in scientific journals.
  • experimenting is good in a context in which we aren’t sure what works. NGO- and state-run programs need to be linked with evaluation efforts. there are roughly a bajillion graduate students interested in program evaluation, development economics, and so on and there are a large number of programs that are being run by governments or NGOs without any rigorous evaluation or clear delineation of ‘lessons learned’ – or at least evaluations that get talked about in the technocratic space. none of these programs will offer a perfect experimental design but, hey, that’s where the complex statistics come in. all we need is a yenta to link grad students to programs (and evaluation funding) and we’re set.
  • experiments, programs, policies, etc, need to allow us to learn about the implementation process as well as the outcomes. deviations from initial design and unexpected hurdles along the way should be reported so that everyone can learn from them. yes, the reality of actually running these programs may make it more difficult to make causal inference with certainty – but these aren’t just aberrations in an experimental design, they’re part of the reality into which any scaled-up effort would be plugged. this is similar to the distinction between “efficacy” and “effectiveness” in clinical research. knowing how an intervention performs under ideal experimental conditions (efficacy) may not tell us how the same intervention program performs applied under real world circumstances or scaled up to other communities (effectiveness).
  • replication is central to the natural sciences but still largely under-utilized in the social sciences and development research. we need to recognize the importance of replication in confirming or disconfirming the results of program implementation studies and encourage greater publication of replication studies.
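the equipoise point above – testing a new idea against the best available intervention rather than against nothing – can be pictured as a simple three-arm random assignment. this is only an illustrative sketch: the arm names, the fixed seed, and the bare-bones complete randomization are my assumptions (real trials typically block or stratify), not the design of any study discussed here.

```python
import random

def assign_arms(participant_ids,
                arms=("new_intervention", "best_available", "comparison"),
                seed=42):
    """Shuffle participants, then deal them round-robin into arms,
    giving (nearly) equal arm sizes. Arm names are illustrative."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: arms[i % len(arms)] for i, pid in enumerate(ids)}

# nine hypothetical participants dealt into three equal arms
assignment = assign_arms(range(1, 10))
```

the design point, not the code, is what matters: the “comparison” arm here stands in for whatever the best known intervention is, so that a null result still tells us something useful.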

*see, for example, “Moral standards for research in developing countries: from ‘reasonable availability’ to ‘fair benefits’” or “What makes clinical research in developing countries ethical? The benchmarks of ethical research”

*other inspiration

**big thanks to Mike for helping me sort through a lot of the ideas in this post