causes, explanations, & getting stuff done

@edwardcarr, i also have a confession, which is that i have a small crush on you right now for the post in which you make a confession about causality and try to disentangle causes, mechanisms, and information that can be used to understand, revamp, and scale programs. i'd like to try to tweak the argument, especially in light of the recent excitement about the publication of null results from a bombay-based cluster-randomized trial on health and pregnancy, described here.

the tweak is to separate out two categories of information that are important and may be better gleaned from qualitative inquiry and analysis, though more adaptive/iterative and process-focused quantitative data can also play a role. i think the general mindset about what constitutes (and who holds) useful and usable information is as important as the distinction between quant and qual analysis.

1. how the program actually brought about an effect. this point is the focus of ed's post, as well as levy paluck's nice paper on qualitative methods and field experiments, which distinguishes between causal effects and causal mechanisms. mechanisms can, to some degree, be pursued and examined through ever-proliferating treatment arms in RCTs… but observation and purposive, systematic conversation are very helpful. if i read ed carr's piece correctly, he wants more data collected with an eye towards explanation, in order to gain (a) a deeper understanding – beyond 'story time' – of what moderates and mediates the (potentially causal) relationship between X and Y within the study context, and (b) a deeper understanding of the external validity of the findings, which could inform adaptation and replication. both are important goals for studies with any intention of scaling.

2. how the program was experienced by a range of stakeholders and how it could have been done better. this part doesn't feature in ed's post or levy paluck's piece, but it is important. sometimes i feel like when i talk about process, everyone breaks out in log-frame hives. take a deep breath. i don't just mean process checklists and indicators. i mean recording how things went, documenting deviations from the study design, and seeking feedback from study participants, study facilitators, study staff, and other study stakeholders. in the bombay experiment referenced above, the team had a process evaluation officer, who consistently surveyed staff and documented meetings with the participants. these data allowed the researchers to know, among other things, that the participating urban women "balked" at collective action but were happy to share information one-on-one — a fairly useful finding for anyone else designing a program with similar goals or in a similar population. i think the researchers could have gone slightly further in asking participants about possible explanations for the similarity of outcomes in the treatment and comparison groups — but the centrality of process evaluation is clear nevertheless. in a similar vein, campos et al draw lessons from experiments that didn't happen and propose that researchers need to "work more on delivery and better incentivize project staff." this, too, suggests a need to better collect (and use) information on the delivery process and on staff perceptions of projects, which means making time (and setting aside money) to solicit this information and finding ways to incorporate it into study findings. it also means taking program design as seriously as experimental design.

in sum, explanation matters. collect data that allow for better explanation of the mechanisms underlying causal effects, as well as of the process by which those mechanisms were put in place. in the meantime, everyone needs to do more work on figuring out how to present these types of data in a way that is easily accessible to and valued by a variety of researchers and practitioners.

asking people what they think (I)

this is only a small splinter off my much larger soapbox of "why don't we ask a multitude of stakeholders at all levels what they think and why they think x program did or did not work?" but… check it out.

instead of channeling the world's poorest citizens through malfunctioning microphones, i am proposing a radical, yet extremely simple, approach. let's just ask them. not through some kind of "inclusive" process where a handful of token representatives have a chance to speak their minds. no, that is hardly better than what's currently on the table. we should ask the masses directly.

organizations like Afrobarometer already do this in sub-Saharan Africa. their surveys ask households across the demographic spectrum to state their most pressing concerns, and some of the responses might surprise you. for example, poor infrastructure (e.g. roads and power) is the biggest concern for roughly one in five households in sub-Saharan Africa. just one in 20 say that health is their biggest concern, while education ranks even lower. who would have known? we need to build on these existing efforts with a more targeted survey specifically for the MDGs 2.0 debate – one that would be standardized across the developing world.

sign me up.