Reporting on qualitative work

Below, I am compiling some thoughts on writing up qualitative results, sparked by a recently completed referee report. I don’t believe there are many codified norms, so some of these reflect my understanding of solid qualitative (here, largely interview and observation) data collection and analysis, as well as some strong personal (but, I think, reasonable) preferences. Of course, I welcome others to contribute and to refine my thoughts. By commenting on what should be in a write-up, I am obliquely commenting on collection and analysis processes as well.

I am largely speaking about qual in the context of public health and global development and am making no claims about the approach of historians and other scholars who work primarily with texts. I focus mostly on interview data, with some wider points.

Remember that the goal of your write-up is to give your reader the tools they need to understand and believe your data and analysis, then to consider your interpretations in light of strengths and gaps in data collection and analysis.

This point is wider than qual (though qual should be especially strong on bringing in context), but: *please* help your reader situate your study in time with a timeline figure showing your data collection, the policy or phenomenon of interest, and other relevant contextual events.
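One way to draft such a timeline is with a small plotting script. A minimal sketch follows; all dates, event labels, and the two-row layout (contextual events vs. data-collection activities) are hypothetical placeholders, not from any particular study.

```python
# Minimal sketch of a study timeline figure.
# All dates and events below are hypothetical placeholders.
import matplotlib
matplotlib.use("Agg")  # render off-screen (no display needed)
import matplotlib.pyplot as plt
from datetime import date

# (date, label, row): row separates contextual events from data collection.
events = [
    (date(2019, 3, 1),  "Policy announced",   "Context"),
    (date(2019, 9, 15), "Round 1 interviews", "Data collection"),
    (date(2020, 1, 10), "Policy implemented", "Context"),
    (date(2020, 6, 1),  "Round 2 interviews", "Data collection"),
]

rows = {"Context": 1, "Data collection": 0}
fig, ax = plt.subplots(figsize=(8, 2.5))
for d, label, row in events:
    y = rows[row]
    ax.plot(d, y, "o")  # one marker per event
    ax.annotate(label, (d, y), textcoords="offset points",
                xytext=(0, 8), ha="center", fontsize=8)
ax.set_yticks(list(rows.values()), list(rows.keys()))
ax.set_ylim(-0.5, 1.5)
ax.set_title("Study timeline (hypothetical)")
fig.tight_layout()
fig.savefig("timeline.png")
```

Even a rough figure like this makes it immediately clear whether your interviews preceded or followed the events you are asking about.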

Next, make your research questions clear. If you have specific hypotheses or priors, even if you aren’t formally testing or updating, make them clear. If you were guided by a theory or framework, make that clear. If you went in tabula rasa and totally exploratory, make that clear.

Qual data comes in four key flavors: text (including interview notes & transcripts), talk/speech, observations, and images. Make clear what type(s) you have and why that was the appropriate type or combo given your research questions.

Site and sample selection are important parts of credible qualitative work – you need people to believe that you went to the right places, talked to the right people, read the right things, and observed the right things to make claims about them.

Explain and justify how you chose the sites you are investigating. It is insufficient to say they were chosen ‘purposively’ and leave it at that. What was the purpose? How do the selected sites fit into the universe of possible sites? Why should we think that your sites are sufficient, and what might have been missed?

How do the ways in which your sites are similar to or different from one another help us build the understanding we want?

Similar points apply to sample selection once you have your sites. Yes, theoretical and/or purposive sampling may be the most apt strategy to build your sample, but you need to say why, and you need to justify the sufficiency of your sample composition and size, ideally with reference to your research questions and hypotheses.

What heterogeneous views or experiences were necessary to understand your phenomenon of interest? How well did you capture them? If there is a relatively small number of relevant stakeholders, did you speak with or observe all of them? If not, how did you fill in the gaps?

If the sample frame is much wider (e.g., not a small number of stakeholders involved in a particular decision), situate your sample in this wider context. Are they meant to be typical? Exceptional? Are they? (See: @evlieb’s nested analysis.)

For all primary data collection, explain what consent you received and what compensation you administered.

Not all qualitative researchers think thematic saturation is an appropriate goal, but some reference to saturation, redundancy, or other ways of showing that you have captured the relevant breadth and depth of views and experiences is important.
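One rough, illustrative way to support a saturation or redundancy claim (a sketch, not a prescribed method) is to track how many previously unseen codes each successive interview contributes, so that the curve flattening toward zero becomes visible. The code labels below are hypothetical.

```python
# Illustrative sketch: how many *new* codes does each successive
# interview contribute? A flattening count is one rough signal of
# thematic saturation. Code labels are hypothetical placeholders.
def new_codes_per_interview(codes_by_interview):
    """Return the count of previously unseen codes per interview, in order."""
    seen, counts = set(), []
    for codes in codes_by_interview:
        fresh = set(codes) - seen
        counts.append(len(fresh))
        seen |= fresh
    return counts

# Hypothetical coded interviews, in collection order.
interviews = [
    {"access", "cost", "trust"},
    {"cost", "stigma"},
    {"trust", "stigma", "distance"},
    {"cost", "access"},
]
print(new_codes_per_interview(interviews))  # → [3, 1, 1, 0]
```

A table or plot of these counts is far more persuasive than a bare assertion that "saturation was reached."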

If you do something with intention, explain it. For example, if you segregate focus groups by men and women, explain why this was important given the context and/or topic in order to get high-quality data.

Especially for interview data, you need to discuss positionality vis-à-vis the respondent. Who was in the room (interviewer and notetaker)? What kinds of benefits and drawbacks exist given interviewer characteristics, in terms of helping people open up and be honest or inducing one or more forms of bias? These characteristics could be inherent (sex, accent) or intentional (clothing and transportation choices, say).

If positionality could have induced bias, what steps were taken to mitigate this?

How were interviews recorded? Audio? Video? Was there a notetaker there? How were observations of context and non-verbal cues incorporated into the interview notes or transcripts for analysis?

For interviews and observations, how un-, semi-, or fully structured were your guides, and why? Why was that approach right given the skill of the interviewer, the number of interviewers, and the research set-up (e.g., one interview vs. repeated interviews)?

What were the interviewers’ opinions on the truthfulness of respondents? How is this accounted for in the analysis?

Discuss the role of iteration between collection and analysis in your work. Were you able to debrief and reflect after each interview and make any implied adjustments to your guides? Could you follow up with respondents?

What is the relation between those who did the data collection and those who did the analysis? Is there new positionality to consider given who is doing the analysis (making meaning and deciding what is important)?

Speaking of analysis… uploading data into qualitative software is not analysis. Deploying quotes is not analysis. Coding is not complete analysis – it means you have tagged and categorized your data to begin to make sense of it. Keep going.

You do not need to use quotes in full, offset as big blocks, to show that you did qualitative work. A quote is a data point. If you want to show us one to illustrate a larger or unique point, make it clear that is what you are doing.

You can also use smaller segments of quotes, integrated into the text, so that you use the respondents’ words without disrupting the flow of your results narrative. A parade of quotes will rarely stick with the reader.

Help the reader understand what you are trying to say. Is a code or a quote reflective of what many people said or did? Just one? What analytic value is it bringing?

How do you build faith in your analysis? For example, independent coders, discussions and consensus-building? Did you engage in any member checking or validation, such as taking preliminary analysis back to the respondents to see if they thought you were on the right track?
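When independent coders are part of your credibility argument, some readers will also expect a chance-corrected agreement statistic. Below is an illustrative sketch (not something the post prescribes) of Cohen's kappa for two coders applying the same codebook to the same excerpts; the code labels are hypothetical.

```python
# Illustrative sketch: Cohen's kappa for two independent coders who
# each assigned one code to the same set of excerpts.
# Labels below are hypothetical placeholders.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(coder_a) == len(coder_b), "raters must code the same items"
    n = len(coder_a)
    # Observed agreement: share of items where the coders match.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

a = ["cost", "trust", "cost", "access", "trust", "cost"]
b = ["cost", "trust", "access", "access", "trust", "cost"]
print(round(cohens_kappa(a, b), 3))  # → 0.75
```

Report the statistic alongside, not instead of, a description of how disagreements were discussed and resolved; the number alone says little about how meaning was negotiated.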


Share your thoughts, please! The more minds, the merrier.
