Reporting qualitative work (with skeptical readers in mind)

Inspired by a referee report I recently completed, I decided to share my thoughts on reporting qualitative results. These aren’t codified norms, but they reflect my understanding of qualitative (largely interview and observation) data collection and analysis, and personal (but, I think, reasonable) preferences. By commenting on report writing, I am obliquely commenting on collection and analysis processes too. And I welcome you to contribute.

The caveats: I largely speak about qual in the context of public health and global development. I make no claims about the approach of historians and other scholars who work primarily with texts (though I welcome input from all scholars). I also focus mostly on interview data. So, let’s begin.

A key goal of any report, quant or qual, is to give readers the tools to understand your data and analysis, and then to consider your interpretations in light of the strengths and gaps in data collection and analysis. My suggestions are to this end.

  • Help your reader situate your study in time. You can do this with a timeline figure showing your data collection, the policy or phenomenon of interest, and other relevant contextual events. I shouldn’t have to doodle a timeline on scrap paper to make sense of your data and what they capture. Create an easy way for your reader to understand your timeline.

  • Make the basis of your research clear. Make your research questions clear. If you have hypotheses or priors, even if you aren’t formally testing or updating them, make them clear. If you were guided by a theory or framework, make that clear. If you went in tabula rasa, fully exploratory, make that clear.
  • Choose the right kind of data. Qualitative data come in four main flavors: text (including interview notes and transcripts), speech, observations, and images. State the type(s) you have and why that was the appropriate type or combination given your research questions.

  • State the rationale behind your site selection and sampling clearly. Site selection and sampling are important parts of credible qualitative work: you need people to believe that you went to the right places, talked to the right people, read the right things, and observed the right things to make claims about them.
  • Explain and justify how you chose your study sites. It is insufficient to say they were chosen ‘purposively’. What was the purpose? And how did you apply it? How do the selected sites fit into the universe of possible sites? Why should we think that your sites are sufficient, and what might have been missed? There is a certain brand of researcher who equates purposive sampling with convenience sampling. Don’t be that researcher.
  • Clarify how your sites relate to one another. Are they similar or different, and how and why? How does this help us understand your research?

  • Use the above points also to explain your sample selection. Yes, theoretical or purposive sampling may be the most appropriate strategy for your sample, but you need to say why and to justify the sufficiency of your sample composition and size, ideally with reference to your research questions and hypotheses.
  • Describe what heterogeneous views or experiences were necessary to understand your phenomenon of interest. How well did you capture them? If there is a relatively small number of relevant stakeholders, did you speak with or observe them all? If not, how did you fill in the gaps?
    • If the sample frame is much wider (e.g., not a small number of stakeholders involved in a particular decision), situate your sample in this wider context. Are they meant to be typical? Exceptional? (See: @evlieb’s nested analysis.)
  • State the consent you received and the compensation you provided for all primary data collected.
  • Explain your sample size. Not all qualitative researchers think thematic saturation is an appropriate goal, but some reference to saturation, redundancy, or another way of showing that you have captured the relevant breadth and depth of views and experiences is important. Sometimes your sample will be partially determined by the budget. Be honest about this, but still do the work of laying out what you think can and cannot be learned from the sample you do have.
  • Explain the intention behind design choices. If you do something with intention, explain it. For example, if you segregate focus groups by men and women, explain why this division was important, given the context and/or topic, for getting high-quality data. Sometimes (gasp!) biological sex will not be the most important or relevant division to explore.

  • Discuss positionality. Especially for interview data, you need to discuss the positionality of those present vis-à-vis the respondent. Who was in the room (interviewer? notetaker? more?)? What benefits and drawbacks do the interviewers’ characteristics create in terms of helping people open up and be honest, or in inducing one or more forms of bias? These characteristics could be inherent (sex, accent) or intentional (clothing and transportation choices, for example).
    • If positionality could have induced bias, what steps were taken to mitigate this?
  • State how the interviews were recorded. Audio? Video? Was there a notetaker present? How were observations of context and non-verbal cues incorporated into the interview notes or transcripts for analysis?
  • For interviews, clarify how un-, semi-, or fully structured your guides were, and why. Why was this approach right given the skill of the interviewer, the number of interviewers, and the research set-up? If you don’t know the answer, I suggest you get your hands on Bernard’s Anthro Methods.

  • State the interviewers’ opinions on respondent truthfulness. How is this accounted for in the analysis?
  • Discuss the role of iteration between collection and analysis in your work. Were you able to debrief and reflect after each interview and make any implied adjustments to your guides or approach? Could you follow up with respondents?
  • Clarify the relation between data collectors and analysts. Is there new positionality to consider given who is doing the analysis (making meaning and deciding what is important)?

  • Describe your analysis process. Uploading data into qualitative software is not analysis! Deploying quotes-in-colorful-boxes (or in block text) is not analysis. Coding is not complete analysis: it means you have tagged and categorized your data to begin to make sense of it. Keep going.
    • You do not need to use quotes in full, offset as big blocks, to show that you did qualitative work. A quote is a data point. If you want to show one to illustrate a larger or unique point, make it clear that that is what you are doing.
    • You can also use smaller segments of quotes, integrated into the text, so that you use the respondents’ words without disrupting the flow of your results narrative. A parade of quotes will rarely stick with the reader.
    • Help the reader understand what you are trying to say. Is a code or a quote reflective of what many people said or did? Just one? What analytic value is it bringing?
  • How do you build faith in your analysis? For example, did you use independent coders, or discussions and consensus-building? Did you engage in any member checking or validation, such as taking preliminary analysis back to the respondents to see if they thought you were on the right track?