This is a post that I recently penned for IDinsight’s internal blog, but I thought it was worth pushing out into the interwebs, in hopes that people will weigh in with tactics they have tried!
Please share in the comments your experiences with courtesy bias and the tactics you have used (seemingly successful and unsuccessful) — including the ones here, which are not foolproof.
While I miss being in ‘the field’ and conducting interviews myself, one way that I remain connected to the research is by conducting frequent debriefs during the data collection (and early analysis) phase. Debriefing isn’t just self-serving; it can improve the quality of the qualitative data collected and the subsequent analysis.
After a particularly insightful debrief this morning, I thought I’d quickly put together some thoughts about the process and why it is so important if you are conducting qualitative work.
There are a few hallmarks of a qualitative approach to research that make it important to check in regularly during the data collection process (interviews, observations, focus groups) in order to strengthen the information being produced and make it more credible.
- The data collector and data analyst are often the same person or small set of people (which generally implies a higher skill level for a qualitative data collector than for their quantitative counterpart).
- A qualitative approach to research is an iterative endeavor — this is a key distinguishing feature from a lot of large-n, more quantitative work. In quantitative work, we usually take as fixed all the items we will include in a questionnaire, which serves to enhance rigor and decrease bias. In qualitative work, we have a bit more flexibility and pursue rigor a little differently. This allows us to incorporate learnings from one day’s interview work into the next day’s interviews (or observations, or focus groups), as we continue to update our understanding. For example, we can incorporate into the interview guide a new probe that worked particularly well at eliciting a rich response; or, a new line of thinking might emerge that we want to explore with future respondents; or, we may notice a potential pattern emerging that we want to pressure-test the next day.
There are some important implications of these features.
- First, it is very easy to get stuck in the weeds if you are the data collector — focused on scheduling the next interview and generally tired after so much human interaction — rather than taking time to reflect on what you have learned. It is not good to be mired in executing without reflecting, hypothesizing, and adapting.
- Second, you need to have time built into your schedule and budget to reflect and update your approach. This time can come between interviews during the day, as well as between days of interviewing. This protected time can be used for transcription and expanding field notes, for reviewing these notes and transcripts, and for beginning early analysis (such as memoing and early coding). This helps to prevent many cases of “we should have asked that!” once we get back from ‘the field.’ By doing this as we go, it is easier to return for follow-up interviewing, or at least to ask the rest of our participants some additional questions. It also helps us to learn more from each interview.
To facilitate this reflection without getting stuck in your own head, it can be extremely valuable to have a debrief with someone else about what you have been learning and experiencing in the field, to start to make sense and meaning of what you have been hearing. These debriefs — sometimes called peer debriefs* — can take the form of written answers shared with the larger team, or can take place on phone calls or face-to-face with another team member (who may or may not have participated in the interview or observation). Technology and time zones will determine which mode is most sensible; it may change over the course of a project. (Note that debriefs are also a great way to make sure that team members not in ‘the field’ still feel involved in the project.)
These debriefs are not substitutes for early analysis, nor for another member of the team reviewing transcripts and field notes to provide feedback on: points that have been over- or under-emphasized, points where responses seem vague and more probing may have been required, or points where the interviewer seems to have made unnecessary or unhelpful assumptions.
Below is the set of questions I am currently asking for reflection:
- What was an interesting point that arose in the interviews today? Does this suggest an additional set of questions to ask in future interviews?
- What surprised you most about the interviews today?
- Did anything particularly challenging or difficult or unpleasant come up during the interviews today? Should we think of ways to mitigate these challenges in the future?
- Are there any questions that don’t seem to be yielding useful information? Should we update or remove these questions?
- Do any patterns stand out to you in responses (early coding and categorization)? Does this suggest any hypotheses that we may want to pressure test in future interviews?
*Some people argue that peer debriefs should not be conducted by a member of the research team, in pursuit of impartiality. Ideally, both someone from the research team and a neutral person outside it can review the data as they come in, asking questions that strengthen the research process.