two nice posts came out yesterday that relate directly or tangentially to building theories of change. david evans wrote about, inter alia, what lies behind a null finding, here. marcus jenal wrote here about how complexity shouldn’t stop us from building theories of change up front, so long as we stand ready to adapt (parts*) of them. these two posts sort of collided in my head as ways of thinking about how tocs link to study design (as well as program planning) — thanks for inspiring me to write on a saturday morning!
plenty has been written on the relevance of tocs and a good place to start if you’re catching up is with craig’s work, such as here. he, marcus, and others highlight the centrality of assumptions about why things may not work to a good theory of change. one reason to spend time on carefully hashing out the assumptions is that they help strengthen both research design and program planning, hence the blog title (the received wisdom is that when you assume, you do indeed make an ass out of u + me).
what excites me about building theories of change (or conceptual models, as i originally learned about them) is getting to draw simultaneously on more formal theory, on practical lessons from the empirical literature (especially when empirical papers offer basic information and lessons about mundane (read: fascinating and useful) implementation details), and on a healthy dose of common sense, where ‘common’ includes drawing on a variety of perspectives.
getting a good start on a toc (even if planning to adapt and iterate throughout) is important if you, like me (with vegard, as we try to lay out in our work-in-progress here), see almost every other aspect of program planning and evaluation design as flowing from it: the content of questionnaires, the timing of data collection, which methods are most appropriate for answering which types of questions (links in the toc), what monitoring plans are needed, the enabling factors on which program design can draw and enhance, the contextual constraints a program can try to loosen, and even the way that the final report may look and the story it will try to tell about what met expectations, what didn’t, and why.
marcus’s post has some useful ideas about building tocs, including the (new-to-me) cynefin framework and accommodating competing hypotheses about how change might occur in early toc iterations. i have also written a bit about ways to try to enhance the toc-building process, here (and also some lessons from putting this into practice, here). these and other exercises will (perhaps implicitly) shed light on the ways that a program may not work as expected.
another set of useful ideas, especially in light of david’s post (and the paper that inspired it), can be drawn from cartwright and hardie. the book is worth checking out, as i skip over many of their ideas related to toc-building (not what they call it) to focus on one they call the ‘pre-mortem’.
less morbidly and more relevantly, we might call it a pre-null exercise. this type of thought experiment is useful because well-considered theories of change incorporate, through assumptions, hypotheses/theories of no change — that is, all the reasons why the expected pathways to change may get blocked or washed out or never be walked at all, culminating in a null result. the existence and tractability of such roadblocks and breakdowns are important lessons to learn from any research project. this is why thinking, early on, through the assumptions component of a toc is so critical, so that research designs can build in ways to catch potential reasons for no change.
the basic pre-null thought exercise is, at the beginning of project and study design, to imagine yourself at the end of an analysis, facing a lack of significant results. think through and list the possible reasons for this (beyond a true null) and then incorporate them into both program and study design (especially as these two seem to be (again?) moving closer together, see here and also samii on deep engagement).
david’s post gives some nice broad categories to consider: (1) lack of implementation fidelity or other implementation snafus (making it particularly important for a toc to include implementer incentives as well as considering the beneficiary viewpoint, as advocated by pritchett et al, among others), (2) altered behavior among ‘beneficiaries’ over time, (3) general equilibrium (ge) effects, or (4) that the intervention may work differently for various sub-groups (yes, sub-group assumptions should appear in a toc!).
trying to anticipate these different ways we might end up with null results means they can be better represented as toc assumptions and, accordingly, incorporated into study and questionnaire design — and we all end up wiser as a result.
i think it is fair to say, broadly, that this and many other thought exercises go undone during the study design phase of rigorous evaluation and research (i like, for example, this posner et al paper for its effort to do some of this work ex post but of course wish it had — and think much of it could have — happened before the study). these efforts certainly go unreported and perhaps even untracked by researchers themselves, not just in the academic literature but, perhaps more upsettingly, in study reports that face fewer word-count restrictions.
i am hoping that exercises like a pre-null thought experiment will be useful to researchers planning studies. what i am struggling to figure out is why such exercises aren’t happening much now.
here are some of my working hypotheses:
- lack of time during program and/or study planning stages.
- lack of clarity about toc-building (or conceptual modelling or whatever term you fancy) as being a key goal of formative work and deep stakeholder engagement (or more general lack of formative work and meaningful stakeholder engagement).
- lack of funding for this kind of toc-building work and engagement.
- lack of clarity about what constitutes a good theory of change and how it links to broader study and program design.
- lack of (sociological) imagination or a sense of not needing to employ this during study design.
- limited discussion of implementation lessons-learned (including during the pilot phase) in the empirical literature and little value (or actual disincentives) placed on sharing implementation details — good, bad, and ugly — that can inform future tocs.
- under-valuing of theory-of-change-building (along with needs assessment and diagnostics?) as part of formal research education (these are things that can be taught, you don’t need to only learn them during your first research project, though certainly some of the lessons may only hit home then).
the follow-up question is, of course, how we can start to try to do better, such that inexplicable nulls become a bit more endangered.
*i note ‘parts’ because while, from a learning perspective, we might want to be able to scrap all our initial conceptions, from an accountability (and, actually, learning) perspective, we probably want to hold some things, such as goals, fixed.