experimenting with intention

this post revisits some issues i have touched on before.

first of all, good find by roving bandit. the gist is that an experimental program undertaken in ‘ideal’ (NGO-run) conditions did not show any effect when the same program was run by the government. oops.

i think this raises several possible questions related to carrying out experiments (i am sure there are more than i cover below):

  • before undertaking (getting funding for) an experimental intervention, how clear should we be on who would be sustaining the effort and/or taking it to scale? what kind of agreement would need to be in place? would we have some effect-size threshold that would mean that we would aim for scale and sustainability, below which an idea is scrapped?
  • how do we distinguish between proof-of-concept studies and if-this-works-it’s-going-to-scale studies? how many replications of the former would we want before we did the latter?
  • how involved should the putative implementer be in the design & conduct of the experiment?
  • how much training and capacity building with the future implementer should be built into the experimental process? would we start to consider ethical requirements in this regard (i.e. experimenters have some obligation to train as well, as needed)?
  • if something doesn’t work, what responsibility do we have to help enhance the public sector’s (or other implementer’s) capacity? i.e. is the response to a null finding a scrapping of the idea or a re-tooling of the implementer? or something else?
  • how much more process evaluation & monitoring should be put in place in ‘in situ’ experiments so that we can learn more about precisely what went right and wrong in implementation? how can we encourage the publication and sharing of these results, not just the treatment effect? (i swear i have an ‘in praise of process evaluation’ post coming soon. i have to atone for all the times i have denigrated it.)
  • even when a program doesn’t work, how do we make sure that the public sector (or other) implementer doesn’t get blamed for the effort? how do we reward honesty rather than only exciting results?

Published by hlanthorn

ORCID ID: 0000-0002-1899-4790

6 thoughts on “experimenting with intention”

  1. i think the question goes beyond governments’ capacity – it’s also political, right? it is really the intention that matters!

    as evaluators, have we made sure government officials have the same motivation to make the program work as an NGO? in the contract teacher experiment for example, does the govt have an incentive to show that the intervention has no positive impact? of course, i am not implying that NGOs have no internal politics going on, but there are just so many more layers when it comes to governments.

    So even if everyone agrees with an evaluator’s idea, they may not agree with the method; or they may want to maintain status quo for themselves much more than they are committed to the idea. Lipsky’s street-level bureaucrat, anyone?

    1. you are, of course, completely right. i’ve just been stuck in the mode of thinking about responsible project design and what we owe as researchers to the areas in which we work — so i went with that.

      i’d be interested in hearing more thoughts on how we could better assess and consider the government’s motivation — and even to incentivize it without being coercive.

      i’d also be interested in hearing more about how we might negotiate the goals and the method better between what we have evidence works (from NGO-run, more tightly controlled experiments?) and what the government thinks is practically and politically feasible — including how electoral politics and randomized designs don’t necessarily make perfect bedfellows.

      design is the first signal of human intention (http://www.mcdonoughpartners.com/design_approach/philosophy).

      1. it’s tough to figure this one out, actually. my personal opinion (which is a bit flaky, as usual) is this: going in with what we think is the ‘best’ solution that will lead to a ‘proven’ impact on an ‘indisputably obvious’ problem is often not good enough. it’s hard to come up with good examples (more proof that my opinion should not be taken seriously!) but i sometimes feel that we don’t really understand the problems themselves.

        how can we negotiate better? have them tell us what they want to know and what alternatives they want to try out. for example, in the education study, there is a bunch of interventions being tested – other than government versus NGO implementation, there are differential salaries, variations in hiring and salary payments, and SMC trainings. would the government have wanted to test all of this in the same study? after a point, do they even understand how the different interventions overlap and how each of the hypotheses is being tested? even if researchers claim that the government was fully on board, i am not sure it’s possible to be sure.

      2. do you think it is enough for ‘them’ to tell us what they want to know and/or the intervention options they are interested in exploring? who identifies the problem, and how, to begin with?

        (p.s. nothing in your opinion sounds flaky :))

      3. sure…i meant all the way through, starting with identifying the problem. but we are impatient and want to change the world, right? how long can we wait?

      4. i’ve been pretty impatient in general lately, so will avoid answering that question for some time! however, others’ opinions welcome.
