disclaimer: i always get quite frustrated when people seem to be reinventing the wheel, especially when at least the contours of the wheel could be found with a reasonable, somewhat cross-disciplinary literature review (i am pretty sure this is still a reasonable expectation… perhaps part of the problem is that the literature is insufficiently open-access?)
.
i’ll be blunt: everyone should read just a little more before they speak, realize that they are not necessarily entering uncharted territory (including the realms of program planning, product design, & evaluation), accept that they are not great pioneers until they have checked that claim against the existing literature, and cite their sources.
.
program planning
from a lot of different corners, it seems that people involved in evaluation are suddenly ‘discovering’ that they may have a role to play in program planning and design, whether facilitating it or doing it more directly. this ranges from frequent topics of conversation at the recent #evalcon in kathmandu to smart policy design by ben olken and others.
.
it is a natural enough ‘discovery’, when involved in an evaluation, that it would have been helpful if the evaluation team had been involved earlier, say, before the program was designed. that makes sense: folks doing an evaluation tend to get hung up on details that turn out to matter, like operationalizing key concepts and goalposts, clarifying who will do what, what that will look like, how long it will take, and so on. a lot of these details would show up in a well-done theory of change.
.
not only do people planning an evaluation ask these types of questions, they also fill a useful role as outsiders, clarifying language and ideas that insiders may take for granted (which raises interesting questions about the promises and pitfalls of internal evaluators, even well-trained ones, especially those taking on a learning as well as an accountability function).
.
it’s just that this link and role to planning is not a new discovery. i’ll give the example of the precede-proceed model, because i am familiar with it, but there are assuredly lots of models linking planning and evaluation in useful ways. i admittedly like some of the older illustrations of the precede-proceed model but respect that larry green has updated his figures and that i should move on (but if you’re curious, you can see the old ones if you search for images of ‘green precede proceed’).
.
precede-proceed starts as too few programs and evaluations do: with a needs assessment, based on objective indicators (wealth, disease, etc) as well as subjective indicators and interests. this helps both to form a statement of the problem and to set targets for the evaluation to assess. this is an excellent time for those interested in participatory methods to actually employ them (rather than just employing the term ‘participatory’ whenever it makes you feel good), because this, along with program design itself, is when it really counts: getting the focus right.
.
from here, a series of diagnostics can be carried out to look for the factors (facilitating and blocking) that perpetuate the current, unsatisfactory state of the world but also allow for positive deviance. this can involve asking ‘why’ five times or using other tools to look for the points at which a program or policy might intervene.
.
this can then be followed by a process of assessing the landscape of extant programs and policies and designing a new one, taking cues from product design, including the use of personae.
.
the evaluation may be broader than tracing these points backwards (the elements of the program or policy, the points of intervention, the different types of need identified), but these are effectively the building blocks of a well-aligned monitoring and evaluation strategy.
.
two points before moving on from the basic observation that merging planning, design, and evaluation is charted territory:
- all of this suggests that people wanting to do good evaluation need to be better trained in the kinds of facilitating, mediating, needs-assessing, and creative tasks implied above.
- recognizing that design, implementation, & evaluation can all be part of the same process is not somehow the same as saying that it is magically/conveniently unimportant to report on implementation details in an evaluation. if anyone outside the core implementation team of a project (a government agency, say, or an NGO) assists in planning, training, facilitating, framing, or any other component of implementation, this needs to be reported for the sake of transparency, proper interpretation, and potential reproducibility.
.
.
questions about independence
one of the major points echoed in the #evalcon session that i covered in my last post is that the independence and unbiasedness of evaluations are hugely important in enhancing an evaluative effort’s credibility among policy makers. a key challenge for anyone involved in the shifts considered in the first bit of this blog (evaluative folks thinking about getting involved early on in program design) is going to be how to instill and project the integrity and trustworthiness of an evaluation while letting go a bit of strict independence, in the sense of remaining at arm’s length from the evaluation subject. to the extent that decision-makers and other stakeholders are a key audience, evaluators will be well served by taking the time to understand what they see as credible and convincing evidence.