thanks to craig valters, i was recently pointed towards a new case study in the asia foundation’s working politically in practice series, focused on a ‘new’ approach called strategy testing. overall, i am sympathetic to much of the approach, though since i believe it has much in common with prototyping, product design and refinement, reasonable service delivery, etc., i am not sure it is a wildly innovative new way of doing what i think many people would already see as good practice (as is also acknowledged on p. 14 of the paper). it is, nevertheless, on its way to being practical.
.
the approach and what i like
as i understand it, the approach has three key features:
- a commitment to treating a theory of change as what it truly is: a set of hypotheses or best guesses at a strategy, and therefore a living product. embedded in this is a greater commitment to humility.
- better individual tracking (daily? weekly?) of external events, challenges faced, information received, and decisions taken.
- regular (quarterly) meetings of ‘program staff’ to review the theory of change and the program approach, and to refine both as needed.
.
my sense is that the authors feel the third point is the most radical of the suggestions they put forward. i disagree. i think it is point 2: having people take time out of their daily (“good”) work to document and reflect would represent a much bigger and more helpful change in the way development is practiced, and it will probably require more intensive skill development. future work that documents this more subtle but fundamental shift and makes suggestions to improve practice would be very useful. it shouldn’t be ignored just because it is more mundane than the quarterly meetings at which an overhaul might happen.
.
overall, the approach represents an important commitment to continual learning, as well as to accountability for doing work that gets better and closer to success over time. it also makes a theory of change approach much more central to practice, taking it down off the dusty shelf. the approach also raises important questions about funding cycles and the power of the program team to make adjustments (see p. 14, though this should be explored more). one of the most difficult things about adaptive programming, which i do not take up in this post, will be how to make adaptive budgeting available.
.
what needs refinement
- no matter how flexible-iterative-adaptive-dynamic-intractable-complex-unpredictable-otherbuzzwords the problem, the program, and the management approach may be, there seems to be nothing in this paper to suggest that, say, these strategy testing meetings could not happen on a regular, (gasp) planned basis. let’s push the anti-planning reaction only as far as it needs to go (more on this below).
.
- be clear about what is flexible; not everything is or should be. with an approach like strategy testing, it will be important not to make it too easy to redefine successful results (referred to as ‘ultimate outcomes’ in the paper). this matters not just from an accountability perspective (achieving what you said you were going to achieve, even if by a different route or on a different timeline) but also because, presumably, there was some real conviction and merit behind the goals in the first place vis-a-vis development and world-a-better-place-ness (if there wasn’t, then we are dealing with an entirely different type of problem).
.
this is a key concern i have with the ‘adaptation’ movement in general: indicators, pathways, strategies, understandings of the problem, and the goals themselves are often problematized and discounted in one breath, which glosses over too much. if all goalposts are movable, it will be quite difficult to deem any program or strategy simply unworthy of a large resource outlay and let it go extinct.
.
in different parts of the paper, the authors say that “it is not possible to identify the outcomes and indicators at the outset of the program,” that “programs start with a broad articulation of the ultimate outcome,” and that “a precise plan of activities that will achieve results cannot be defined from the beginning.” i am more sympathetic to the framing of the second and third of these statements. the first seems to confuse humility with tabula rasa ignorance, which i don’t think helps move forward the conversation about how to do program planning better while also putting (structured) adaptation into practice.
.
- define “program teams.” this term is used throughout the paper, but it is hard to figure out who it includes, which shapes how i feel about the approach because it determines whose evidence and insight is deemed important. does it include front-line workers? office-based staff in the capital? if only the latter, the approach does not currently suggest how roadblocks, experiences, suggestions, and feedback will be collected from the street level. yet surely this is critical to a holistic picture of events, roadblocks, and accomplishments, and therefore to choosing the path forward. the absence of semi-systematic feedback from front-line implementers, intended beneficiaries, and other stakeholders is problematic (which is distinct from saying that all these people need to be physically in the room during strategy testing meetings).
.
- the timeline and the ‘new information,’ ‘external changes,’ and ‘accomplishments and roadblocks’ seem out of sync. if the timeline is to be the key tool for daily or weekly reflection, it needs to move far beyond the sample provided in table 2 (acknowledging the potential for burdening program staff), which focuses on big-P political and high-level events. one question is who will be put in charge of documenting such changes, and how: through more regular interaction with stakeholders or through more careful monitoring of the news as part of a monitoring strategy. a second and possibly more important question is how a timeline-type tool can be better aligned with the theory of change, requiring staff to engage with the assumptions therein on a more regular basis. can some of the burden on program staff be relieved if m&e (or mel or merl or whatever) teams do regular debriefing interviews with staff? drilling down on these practical, small details of how one might put strategy testing into practice would be hugely useful.
.
- at times, ‘traditional monitoring’ (which itself could be better defined, so it is even clearer what strategy testing is being contrasted with or appended onto) is painted as anachronistic; yet it must still be used in a strategy testing approach. for example, on page 11, the authors note that “by taking multiple small bets and continuously monitoring results, program teams are able to adjust and refine” (emphasis added). this suggests to me that a core set of indicators measuring progress/results towards some ultimate outcome (traditional monitoring?) is likely in place for much of the project, a reality that sometimes gets lost in the thrust to position strategy testing as an alternative approach to monitoring. it seems that response-to-monitoring, rather than monitoring itself, is the bigger contribution of strategy testing; again, this sometimes gets lost in the paper and the buzzword barrage.
- a key challenge raised on page 11 is not adequately addressed; the authors note: “whether a program strategy is worthy of continued investment may not be easy to decide.” more in-depth, ex ante discussion is needed of just such decision points (see my series of blogs with suvojit, starting here) and of what information will be required to take such decisions. these would need to be built into any monitoring plan, as part of the information needs for successful strategy testing. as is acknowledged in the paper, “it may be difficult for a team to accept that their strategy is not working and move on to something new, especially when they have invested heavily in that strategy.” this makes it all the more important to have up-front discussions about how to determine when something is not working (which relates to having clear, somewhat steady definitions of success).
.
i take away from this paper that being flexible requires planning and commitments, even though the paper at times paints these as negative or out of sync with adaptive work. flexibility requires more managerial planning and commitment: to finding the time, tools, and skills for reflection; to agreeing early on how strategic decisions will be made on the basis of the evidence gathered and who will weigh in on them; and to agreeing how success will be defined, even if different strategic approaches to achieving it are adopted. this is acknowledged at the end of the paper, in discussing the need for structure and discipline within (and to promote) flexibility. but it should be made much more central to marketing, refining, and disseminating the approach.
.
more generally, in the movement towards adaptive and flexible development work, we need to be careful to note where the changes really need to happen (e.g., in monitoring itself, in better tailoring monitoring to fit decision-making needs, or in allowing the time and scope to respond to monitoring findings) and where structure and planning are needed, making flexibility and structure/planning complementary rather than contrasting ideas.