Addressing courtesy bias

I wrote this post for IDinsight’s internal blog but decided to share it here to encourage contributions. It would be great if people would weigh in with tactics they have tried!

Please share in the comments your experiences with courtesy bias and the tactics you have used (seemingly successful and unsuccessful) — including the ones here, which are not fool-proof.

Courtesy bias is one of the response biases we may encounter when we ask (quantitatively or qualitatively) what participants and other stakeholders think about a program. I point out that this can arise quantitatively or qualitatively because we can ask people for program feedback and satisfaction in many ways, even though courtesy bias is often pegged as a particular challenge for qualitative work, such as here.


Courtesy bias is the tendency to understate dissatisfaction or challenges with a program, often driven by not wanting to offend or drive away the organization (that is, the tendency to portray the ‘benefactor’ organization in a positive light); it may also stem from not having permission to give, or practice at giving, constructive criticism. Note that courtesy bias is different from social desirability bias, which is about portraying the self in a positive, norm-abiding light.


Courtesy bias can be a real challenge (for example, here and here). As independent researchers, we want to give evidence-consumers accurate information. We also want that information to guide their program decisions and lead to desired outcomes. Unfortunately, not all of the tools developed for addressing social desirability bias automatically apply to courtesy bias: list randomization strategies (overviewed in this thesis), randomized response techniques (including here), and non-verbal response cards to limit the effect, or the Marlowe-Crowne scale and the Balanced Inventory of Desirable Responding to calibrate it. However, this could be an area for future research (for example, Feedback Labs suggests a Net Promoter Analysis that could be explored and may end up overlapping with social desirability tools).
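Since several quantitative tools are name-checked above, here is a minimal, hypothetical Python sketch of how three of them turn responses into estimates: list randomization, a forced-response randomized response design, and a Net Promoter-style score. All data, numbers, and variable names below are invented for illustration; real applications need proper sampling designs and design-specific variance estimates.

```python
# An illustrative sketch of three quantitative tools mentioned above.
# All data below are invented; nothing comes from an actual study.

from statistics import mean

# --- List randomization (item count technique) ---
# Respondents report only HOW MANY items on a list apply to them, never which
# ones. The control group sees J neutral items; the treatment group sees the
# same J items plus one sensitive item (e.g., "I am dissatisfied with the
# program"), so no individual answer reveals the sensitive response.
control_counts = [2, 1, 3, 2, 0, 2, 1, 3]    # counts out of 3 neutral items
treatment_counts = [3, 2, 3, 2, 1, 3, 2, 3]  # counts out of 3 + 1 sensitive item

# Under random assignment, the difference in mean counts estimates the share
# of respondents for whom the sensitive item is true.
list_estimate = mean(treatment_counts) - mean(control_counts)
print(f"List experiment: estimated share dissatisfied = {list_estimate:.2f}")

# --- Randomized response (forced-response variant) ---
# Each respondent privately flips a fair coin: heads -> answer the sensitive
# question truthfully; tails -> say "yes" regardless. The interviewer never
# sees the coin, so an individual "yes" is uninformative, but in aggregate
# P(yes) = 0.5 * true_rate + 0.5, hence true_rate = 2 * P(yes) - 1.
observed_yes_rate = 0.62                      # invented share answering "yes"
print(f"Randomized response: estimated true rate = {2 * observed_yes_rate - 1:.2f}")

# --- Net Promoter-style analysis ---
# "How likely are you to recommend the program?" on a 0-10 scale.
# NPS = % promoters (9-10) minus % detractors (0-6); passives (7-8) drop out.
ratings = [10, 9, 8, 6, 10, 7, 4, 9, 10, 5]
promoters = sum(r >= 9 for r in ratings) / len(ratings)
detractors = sum(r <= 6 for r in ratings) / len(ratings)
print(f"Net Promoter Score = {100 * (promoters - detractors):.0f}")
```

The first two designs buy honesty by guaranteeing that no single answer reveals the sensitive response; whether that protection also softens courtesy bias (rather than only social desirability bias) is, as noted above, an open question.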


In the meantime, there are a few tactics you can try before, during, and after interviews / survey administration.

During interviews

  • Try to address concerns and power imbalances: In keeping with proper practice during the consenting process, we should stress our independence, the anonymity of responses and protection of identities, and the respondent’s right to forgo any question. We may want to repeat this information at various points during the interview or administration of the questionnaire, rather than only saying it at the beginning. We may also want to highlight that we’d rather they ask to skip a question than provide an untruthful or overly rosy answer. In addition, we can portray our client as eager to learn (if this is true!) and stress that the responses will not result in the removal of the program (if this is also true!) or other negative repercussions. Finally, we may want to point out that the client views constructive feedback as an important part of its partnership with participants (yet again, if this is true!).


  • Indirect questioning: Sometimes it can help to ask about the views, experiences, and actions of others, rather than starting with the respondent. This opens another door for expressing how one feels, potentially building rapport and trust so that the respondent will eventually come around to talking about their own views (though we still learn something even if they do not). When I did my very first fieldwork in Nepal, it was amazing how everyone reported that everyone else in the village, but not themselves, used a traditional healer; then sometimes we’d move on to the fact that they actually did go but didn’t believe in it; then sometimes it would turn out that they believed in it in certain cases… We can ask questions that start, “It is sometimes the case that teachers find it [challenging to participate in the program]. Have you heard anything about that?” Of course, we need to be very transparent about the way we framed the question when we report the results (including the full question or questionnaire item in a footnote should be standard practice).


  • ‘Ball rolling’ / social norming questions (note that I made up these terms): Sometimes it can be helpful to show that it is okay to say something negative in order to get a conversation going. This could be hypothetical. For instance, “I wonder how I would find time to do those exercises or chat with those teachers — do you find this challenging?” Or it can be, “Some teachers have been telling us that it is difficult or uncomfortable to do x… have you ever experienced that?” Depending on how far along you are in data collection, these ball-rolling questions can reflect real data or they may, slightly deceptively, reflect expected data. Each team will have to decide what is comfortable for them. Note that while this may seem leading, it is not necessarily trying to elicit or incentivize a particular direction of response, just to give the respondent permission to tell their truth. As with indirect questioning, of course, we must be absolutely transparent about the question(s) we used to elicit the data we present in our results and recommendations, including the full question in a footnote. (If you are doing qualitative interviews and you update this question over the course of the research, you should report the varieties of ways you asked it.)

As a side note, it may also be that you need to do repeated rounds of interviewing with the same person during a data collection wave to build up the necessary rapport to get to the truth. 


  • Reinforce that we value honesty: Remind the respondent not just that we value their time, but also that we chose them for their insights and honesty and really value these; that we consider them the experts on the experience of the program; and that we have no other way of learning what could be better than them telling us.


Before & after interviews

  • Introduction from the client. This is a tricky suggestion because it seems at odds with presenting ourselves as independent from our clients; your team should consider carefully whether it makes sense in your context. Still, if our client is trusted by the participants and signals to our intended respondents a desire for us to hear the truth, this could be helpful.
  • Consider the setting. Are there places to do interviews that will help break the psychological link between the research team and the ‘benefactor’? And that physically limit the chance of NGO or other staff overhearing the interview?
  • Do your interview preparation. As much as possible, we should try to learn what we can about a respondent before meeting with them (or after, to contextualize their responses and decide if a follow-up interview would be helpful). Have they attended all the meetings this academic year? Do we know if they have made use of new information presented in an agricultural training? Do program staff have anything to say about the respondent’s level of engagement or about the morale of other program staff? Gathering what information we can from administrative data, program staff, and other possible stakeholders can help us more effectively probe about what may not be working that well for our respondent, so that we can try to push beyond “the program is good” in our interviews (if, of course, the respondent has more to say — we aren’t trying to push them to make up complaints that don’t exist!).


