fresh fruit, folk flu, confessions, cold season, & conclusions

the main gist here is that it is not fun to have an illness with no name, no clear prognosis, and, insofar as treatment is thought necessary, no clear, commensurate-to-symptoms treatment plan. the need to restate this grew out of recent discussions around the diagnosis and treatment of malaria and the broader range of illnesses that can cause malaria-like symptoms.

the introduction of diagnostics into the SOP for addressing malaria results from the change in WHO policy away from presumptively treating all fevers with malaria medication. using malaria meds as the answer to every fever has led to their over-use, lowering the mojo of their active ingredients. and the likelihood of mistreating a fever as malaria is rising as the underlying malaria epidemiology changes. as such, this seemed like a good moment to reflect on other common understandings of what causes symptoms and how they should be treated, as well as the desire for some treatment that matches the symptoms. i focus on a few US/UK examples, especially as we move into cold season.

the first confession is the nearly obscene number of times i have watched this sketch, in which john cleese teaches his students to defend themselves against fresh fruit, which he sees as the weapon of choice of most criminals as well as the root of most illnesses (or the cure for illnesses, including flu, depending on which version you watch). cleese fears fresh fruit – far more than pointed sticks – and therefore treats it as the source of any nebulous threat, enacting fairly disproportionate countermeasures given the actual danger.

the second confession is how much i didn't care for the article i am about to summarize. at the time – as one of the first reads in my first medical anthropology class in undergrad – it seemed awfully mundane next to a selection of far more romantic and exotic articles about places i had not yet been and diseases i had not yet heard of, let alone had. but i have ended up returning to it many times – more than to some of those other articles. this is mccombie's piece on 'folk flu and viral syndrome.' the article is amusing, interesting, and worth reading in full. mccombie reports that, in the southwestern US at least, people use 'flu' as a catch-all for feeling crappy, up to and including covering a probable hangover. this 'folk flu,' and misinterpretations of 'flu-like symptoms,' can be a real impediment to appropriately treating the actual cause of the symptoms. similarly, a better understanding of 'folk malaria' – and of how to convince people that their symptoms are caused by something else – would be useful (some research has, of course, been done on this, including by mccombie; i'd suggest that additional qualitative work would be quite helpful). the second half of mccombie's article covers the provider-side version of the same catch-all for non-specific, undiagnosed problems: labeling a problem 'a virus' when 'they aren't sure' what is wrong with you. it's better than being told that you are imagining things, of course, but not by much.

the third confession is that i am/was an airborne junkie at the first sign of illness, especially in the face of symptoms with no clear bacterial (or otherwise medicate-able) diagnosis and no treatment plan beyond fluids and rest. this craving extends to echinacea lollipops (found only in the US, i presume?), terrible zinc tablets that make one vomit, etc – almost anything (not mint) that makes me feel like i am being proactive. if ever there were a place where it makes sense to introduce placebos as an actual treatment option, the 'treatment' of non-life-threatening viruses that make you feel generally crappy must be it. it would limit the temptation to mis-treat with powerful drugs whose powers we all benefit from preserving. (see @danariely on the excellent placebo qualities of airborne for guidance on the way forward.)

so, fourth confession, i wish we could introduce placebos as an actual treatment option, for ‘flu’ and ‘malaria.’ if you’ve watched the birdcage (the robin williams & nathan lane version), you’ll know the value of ‘pirin’ tablets. as for the ethics of this and the providers’ ability to keep the secret, that’s for another debate.

but the initial point holds. people want a named diagnosis that makes sense, that comes with some sort of prognosis, and that has a treatment fitting their conception of the problem. so, we may not be able to do placebos, but we also probably don't need to release a tiger to tackle a man wielding a banana or loganberries, threatening though that can seem. once a tiger (antimalarial, antibiotic) is out, it's kinda out. you need to give people a comforting way to feel proactive without calling down the thunder.


wait, we’re experimenting, right?

many of the descriptions of the ideal next World Bank president – at least the ones with which I agree – have called for a little more humility about how much we actually know about economic & human development and poverty reduction.

so it’s frustrating to see articles like this, which imply a low level of humility about the work we are doing and an unclear commitment to learning what actually does and does not work (regardless of felt commitment to poverty reduction & development).

a large part of the reason that experiments and impact evaluations in development have become popular is that we weren't getting as far as we needed with theory, intuition, or observation alone. money and other resources were being put into programs without our knowing whether they were effective (even if things seemed to be changing in the presence of the program), let alone how they compared to other programs in efficacy or cost-efficiency. process and implementation evaluations that could have improved subsequent program interventions were not being conducted and/or shared.

it seems like we need to pause and think about how and why we are experimenting.

  • we experiment because we don't know what works – or whether something that works in one location will work in another. if we knew what worked, we would potentially be under some ethical obligation to do that thing for all people in all places we thought it would work. when we don't know what works, or when there is at least genuine disagreement about the best approaches, an experimental design is justified. in short, we need to bring equipoise into social science research. in part, this means we should be testing our new (experimental) idea against the best known or available intervention with a similar goal, just as new drugs are usually tested against both a placebo and a regularly used treatment.
  • because we are experimenting, we should encourage the publication of null findings and laud these as equally important learning experiences (a toy sketch of why a null result is still informative appears after this list). this requires funders to recognize such reporting as essential to the accountability of studies and program implementations. it also requires changing the strong bias of journal editors and reviewers toward publishing only significant findings. confidence intervals aside, null findings may be just as 'significant' for our understanding of what works and doesn't work in development as statistically significant results.
  • evaluations probably need to start looking more like programs that could be scaled up. there are good experimental reasons for manipulating only one or two key variables at a time and trying to limit all other contamination, but there has to be increasing movement toward learning what works in situ, even if that means there is more than one moving part. and if it is really unclear how the findings from an experiment would be scaled up into a program or policy, then the experiment likely needs to be re-thought.
  • also, we need to think more about the ethics of doing social science experiments in low- and middle-income countries. clinical research run by large pharmaceutical companies or academic institutions faces increasing obligations, if the drug proves effective, to make it available – at a minimum – to the host community.* this is partly because the host community bore some risk in participating in an experimental intervention, but more generally because any intervention alters biological and social patterns that will remain changed after the trial ends and the researchers leave the community to publish their results in scientific journals.
  • experimenting is good in a context in which we aren't sure what works. NGO- and state-run programs need to be linked with evaluation efforts. there are roughly a bajillion graduate students interested in program evaluation, development economics, and so on, and there are a large number of programs being run by governments or NGOs without any rigorous evaluation or clear delineation of 'lessons learned' – or at least without evaluations that get talked about in the technocratic space. none of these programs will offer a perfect experimental design but, hey, that's where the complex statistics come in (see the difference-in-differences sketch after this list for one workhorse example). all we need is a yenta to link grad students to programs (and evaluation funding) and we're set.
  • experiments, programs, policies, etc, need to allow us to learn about the implementation process as well as the outcomes. deviations from the initial design and unexpected hurdles along the way should be reported so that everyone can learn from them. yes, the reality of actually running these programs may make it more difficult to make causal inferences with certainty – but these aren't just aberrations in an experimental design; they're part of the reality into which any scaled-up effort would be plugged. this is similar to the distinction between 'efficacy' and 'effectiveness' in clinical research: knowing how an intervention performs under ideal experimental conditions (efficacy) may not tell us how the same intervention performs when applied under real-world circumstances or scaled up to other communities (effectiveness).
  • replication is central to the natural sciences but still largely under-utilized in the social sciences and in development research. we need to recognize the importance of replication in confirming or disconfirming the results of program implementation studies and encourage greater publication of replication studies.
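
to make the null-findings point concrete, here is a toy python sketch – all numbers invented for illustration, not drawn from any actual study – of a two-arm trial in which the true program effect is zero. the estimate, reported with its 95% confidence interval, is a 'null finding' that still tells us something worth publishing: the program, as implemented, probably doesn't move this outcome much.

```python
# toy simulation (hypothetical numbers): a two-arm trial with a true effect of zero
import numpy as np

rng = np.random.default_rng(seed=42)

n = 500                                             # participants per arm (assumed)
control = rng.normal(loc=50.0, scale=10.0, size=n)  # outcome in the control arm
treated = rng.normal(loc=50.0, scale=10.0, size=n)  # same mean: true effect is zero

# difference in means with a normal-approximation 95% confidence interval
effect = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
ci_low, ci_high = effect - 1.96 * se, effect + 1.96 * se

print(f"estimated effect: {effect:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
# a confidence interval hugging zero is not a failed study – it is
# evidence about what the program does (or does not) do
```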
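and, for the 'complex statistics' point above, a minimal sketch of one common workhorse for imperfect (non-randomized) designs: difference-in-differences. the pre/post means for the program and comparison groups are hypothetical, purely for illustration.

```python
# hypothetical mean outcomes, before and after the program, in communities
# that got the program and comparison communities that did not
program_before, program_after = 40.0, 48.0
comparison_before, comparison_after = 41.0, 44.0

# difference-in-differences: the program group's change minus the comparison
# group's change, netting out the trend the two groups share
did = (program_after - program_before) - (comparison_after - comparison_before)
print(f"difference-in-differences estimate: {did:.1f}")  # (48-40) - (44-41) = 5.0
```

the usual caveat applies: this only recovers the program effect if the two groups would have trended in parallel absent the program – which is exactly the kind of assumption that makes these designs harder to defend than a randomized one.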

*see, for example, “Moral standards for research in developing countries: from ‘reasonable availability’ to ‘fair benefits’” or “What makes clinical research in developing countries ethical? The benchmarks of ethical research”

*other inspiration

**big thanks to Mike for helping me sort through a lot of the ideas in this post