Thoughts from #evalcon on evidence uptake, capacity building

I attended a great panel today, hosted by the Think Tank Initiative (thinktankinitiative.org) and IDRC and featuring representatives from three of TTI’s cohort of think tanks. The panel was part of the broader global evaluation week (#evalcon) happening in Kathmandu, focused on building bridges: the use of evaluation for decision-making and policy influence. The notes on evidence uptake largely come from the session, while the notes on capacity building are my own musings inspired by the event.

.

One point made early on was the contrast between evidence-informed decision-making and opinion-informed decision-making. I’ve usually heard the contrast drawn against faith-based decision-making, and I think the opinion framing is useful. It also comes in handy for one of the key takeaways from the session: maybe the point (and the feasible goal) isn’t to do away with opinion-based decision-making but rather to make sure that opinions are increasingly shaped by rigorous evaluative evidence. Or, to be more Bayesian about it, we want decision-makers to continuously update their priors about different issues, drawing on evidence.
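
As a rough sketch of what ‘updating priors’ means formally (the numbers here are purely hypothetical, not from the session): under Bayes’ rule, a decision-maker’s belief in a claim H after seeing evaluation evidence E becomes P(H|E) = P(E|H)P(H)/P(E). If the prior that a program works is 0.3, and the observed findings are three times as likely if the program works as if it does not, the posterior is (0.3 × 3)/(0.3 × 3 + 0.7 × 1) ≈ 0.56. The evidence doesn’t settle the question; it shifts the opinion, which is exactly the modest goal above.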

.

This leads to a second point. In aiming for policy influence, we may become too focused on influencing very specific decision-makers for very specific decisions. This can lead us to lose sight of the broader goal of (re-)shaping the opinions of a wide variety of stakeholders and decision-makers, even when they are not linked to the immediate policy or program under evaluation. So, again, the frame of shaping opinions and of aiming for decision-maker/power-center rather than policy-specific influence may lead to altered approaches, goals, and benchmarks.

.

A third point that echoed throughout the panel is that policy influence takes time. New ideas need time to sink in and percolate before opinions are re-shaped. Secretary Suman Prasad Sharma of Nepal noted that, from a decision-maker’s point of view, evaluations are better and more digestible when they aim to build bit by bit. Participants invoked a building-blocks metaphor several times and contrasted it with “big bang” results. A related and familiar point about the time and timing required for evaluation to change opinions and shape decisions is that planning for the next phase of the program cycle generally begins midway through current programming. If evaluation is to inform this next stage of planning, it requires the communication of interim results, or a more thoughtful shift of the program planning cycle relative to monitoring and evaluation funding cycles in general.

.

A general point that came up repeatedly was what constitutes a good versus a bad evaluation. This leads to a key capacity-building point: we need more “capacity-building” to help decision-makers recognize credible, rigorous evidence and to mediate between conflicting findings. Far too often, in my view, capacity-building ends up being about how particular methods are carried out rather than about the central task of identifying credible methodologies and weighting the findings accordingly (or about broader principles of causal inference). That is, capacity-building among decision-makers needs to (a) start by understanding how they currently assess credibility (on the radical premise that capacity-building exercises might generate capacity on both sides) and (b) help them become better consumers, not producers, of evidence.

.

A point that surfaced repeatedly about how decision-makers assess evidence concerned objectivity and neutrality. ‘Bad evaluations’ are biased and opinionated; ‘good evaluations’ are objective. There is probably a much larger conversation to be had about parsing objectivity from independence and engagement, as well as further assessment of how decision-makers judge neutrality and how evaluators might establish and signal their objectivity. As a musing: a particular method doesn’t guarantee neutrality, which can also be violated in shaping the questions, selecting the site and sample, and so on.

.

Other characteristics of ‘good evaluation’ that came out: it does not confuse being critical with being only negative; findings about what is working are also appreciated. ‘Bad evaluation’ assigns blame and accountability to particular stakeholders without taking a nuanced view of the context and of events (internal and external) that unfolded during the evaluation. ‘Good evaluation’ sets evaluation objectives up front. ‘Good evaluation’ also places the findings in the context of other evidence on the same topic; this literature/evidence-review work is very helpful to a decision-making audience, especially when it does not focus on a single methodology or discipline (and, yes, I am particularly alluding to RCT authors who tend to cite only other RCTs, at the expense of sectoral evidence and of other methodologies altogether), as is helping to make sense of conflicting findings.

.

A final set of issues related to timing and transaction costs. A clear refrain throughout the panel was the importance of the timing of sharing findings. This means paying attention to the budget-making cycle and sharing results at just the right moment. It means spotting windows of receptivity to evidence on particular topics, reframing the evidence accordingly, and sharing it with decision-makers and the media. It probably means learning a lot more from effective lobbyists. Staying in tune with policy and media cycles in a given evaluation context is hugely time-consuming, and the point was made, and is well taken, that the transaction costs of this kind of staying-in-tune for policy influence are quite high for researchers. Perhaps goals for influence by the immediate researchers and evaluators should be more modest, at least when shaping a specific decision was not the explicit purpose of the evaluation.

.

Two less costly, more indirect routes to influence suggest themselves. One is to communicate the findings clearly to, and to do the necessary capacity-building with, naturally sympathetic decision-makers (say, parliamentarians or bureaucrats with an expressed interest in x issue) so that they become champions who keep the discussion going within decision-making bodies. To reiterate, my view is that a priority for capacity-building efforts should be helping decision-makers become evidence champions and good communicators of specific evaluation and research findings. This is an indirect road to influence but an important one, leveraging the credibility decision-makers have with one another. The second, also indirect, is to communicate the findings clearly to, and to do the necessary capacity-building with, the types of (advocacy? think tank?) organizations whose job it is to track the timing of budget meetings and the shifting political priorities and local events to which the evidence can be brought to bear.

.

The happy closing point was that a little bit of passion in evaluation, even while trying to remain neutral and objective, does not hurt.

Published by hlanthorn

ORCID ID: 0000-0002-1899-4790

5 thoughts on “Thoughts from #evalcon on evidence uptake, capacity building”

This is quite interesting, if not upsetting. And really not at all the model that people talking about evidence-informed decision-making in low- and middle-income countries seem to be imagining.


Compliments for sharing this info. My proposal for overcoming lack of independence is to always opt for external evaluators, who should be used for only a single evaluation each, to prevent the altering of evaluation questions, the selection of site and sample, and the like. Otherwise, well-planned, well-executed, evidence-based, gender-focused (participatory), and effectively communicated evaluations lead to credible results that beg to be used.


    That is a potential solution, but I believe part of the tension is whether a slightly less external evaluator, who may better understand the program, can actually do a better evaluation. Thus, the question is whether integrity and independence/externality can be separated. I would also be cautious about conflating gender-focused and participatory: plenty of methods exist for studying power and gender without necessarily being participatory, and including a participatory methodology does not guarantee a focus on gender and power.


Share your thoughts, please! The more minds, the merrier