i know – but couldn’t we have asked, too?

things i know & understand:

  • this is cool research from @poverty_action
  • it’s fun and important to show and know that ‘we’ (development types) have poor assumptions about what will work
  • time and budgets are always constrained
  • this comment is super-predictable coming from me. but…

when i read this:

our results showed that although the consulting intervention caused short-term changes in business practices, these impacts dissipated within a year after the consulting ended. on average, we found no long-term benefit from the consulting, and actually lower short-term profits. we believe some business people hoped the advice would work and thus took it. but better bookkeeping and other business practices potentially took time away from the physical act of sewing clothes. once profits took a hit, enterprise owners likely abandoned the practices and reverted to their previous methods. [emphasis added]

i think to myself, ‘heather, why is it that you so rarely read:’

  • “the respondents believed this program did/did not have the intended effect because…” (this would be based on more qualitative research during or at the end of the study)
  • or, something along the lines of, “the respondents felt that future efforts could be improved by…” (more of that open-ended goodness)
  • or, “from our observations during the implementation of the intervention, we believe what happened is…”
  • or, “the implementers feel that x happened, that z presented serious challenges, and y could improve it in the future”
  • or, radically, “the respondents (or implementers) feel that a better way to have approached the same goal would have been…”

i am well aware that absolutely none of these things will provide definitive – or even ‘true’ – answers about what worked and why. but surely opinions besides the researchers’ count and should be collected, despite the extra collection and analysis time? wouldn’t it be fun to have more information about why we are seeing the treatment effect that we see? among other sources, see here.

Published by hlanthorn

ORCID ID: 0000-0002-1899-4790

11 thoughts on “i know – but couldn’t we have asked, too?”

    1. glad to know there are others out there!!!

      your post was great — i think one place to keep hammering is that this is important in ‘experiments’ as well as ‘programs,’ to the extent that those are or should be distinct.

      if anyone hasn’t checked out the ‘how matters’ post, here’s a highlight:

      Maybe funders should start judging organizations not on the “impact” of their projects, but on their ability to create, utilize, and maintain feedback loops with beneficiaries. It’s time for such efforts to no longer be “nice-to-have’s” but a central measure of organizational success. This would mean re-focusing everyone on the demand, rather than the supply side of NGO activities.


    1. this was a great suggestion, rachel! what is nice about this piece is that he clearly respects aspects of the book (BB, at least, if not WGV) while avoiding what he calls the ‘predictable ethnographic veto’ – which makes his critique more powerful. i think he is right that people who have just described a quantitative analysis are somehow given a ‘pass’ to make (unsupported) subjective claims and are not always called out for asserting opinion as a causal mechanism – one only minimally informed by the people the author claims to be ‘representing’ (with intentional reference to e. said’s reference of marx). what i find particularly frustrating is the number of cases in which, given that individual-level data collection was already underway, it would not have been so hard to add a more thoughtful set of questions. this is only an approximation of long-term participant-observation, and yet it would be such a big step toward supporting claims of causality and revealing mechanisms for future exploration.

      i’ll quote at length to save others from the productive procrastination of reading the article, though it is well worth reading: “ethnographic nuance is neither a luxury nor the result of a kind of methodological altruism extended by the soft-hearted. it is, in purely positivist terms, the epistemological due diligence required before one can talk meaningfully about other people’s motivations, intentions, and desires… [otherwise] [these anecdotes] may reflect the author’s own imagination of poor people’s lives more than the realities of those lives… the risk in foregoing [ethnographic nuance] is not simply that one might miss some of the local color… it is one of mis-recognition. analysis based on such mis-recognition may mistake symptoms for causes, or two formally similar situations as being comparable despite their different etiologies…
      by insisting on the credo of ‘just the facts, ma’am,’ [collier and similar authors] introduce many of their key analytical moves on the sly or via anecdote. what an anthropological (ethnographic, qualitative) approach to these same questions would insist on is the attempt to see the dynamics of bottom billion politics and economics through the actors’ points of view. this attempt has the positivist objective of fact-checking both one’s facts and one’s categories of analysis, so as to be sure that there is some semblance of fit between the motives, incentives, and rationales attributed to actors and those they may actually be using.”

      his other critique – that books like BB are ‘often more about us than them’ – is also worth exploring, though i think this aspect of nick kristof’s writing took a reasonable lashing during kony2012.

