This is an effort to develop a core set of questions to help someone new to impact evaluation begin learning it from the empirical literature. Specifically, this is for someone with limited exposure to these methods.
Working list:
What did this paper tell you about the context of the study? What else do you wish you knew?
What did this paper tell you about the intervention under study? What is (or is not) convincing to you about the ‘theory of change?’ (They might not use that precise term but you can look up what it means!) What else do you wish you knew? I find it can be useful to draw a diagram.
What was the key outcome(s) explored in the study? Do you think the authors overlooked anything important?
What is the key research question(s) under investigation? Specifically, what is the causal claim the authors hope to make?
What is the key assumption(s) the authors made to allow this analysis to answer the research question posed?
What did you understand about the methods that they used to collect data? What questions do you have?
What did you understand about the methods they used to analyze and make a causal claim from the data? What questions do you have?
(From what you understand) Do you believe the results? What are all the ways this paper could be wrong (and what do the authors do to reassure you it is not)?
How did the authors visualize the results (if at all)? What would you have done differently to make it clearer?
What do you think about the policy conclusions or recommendations they make? Do they seem like a tight fit with the results? What did they write that went ‘too far?’ What, if anything, did they miss?
You might also see if any blog posts or other items have been written about this paper to provide additional detail.
Well, just as I was getting ready to write this post, building on my previous one, Keren Landman of Vox came out with a fabulous piece that covers some, but not all, of what I had hoped to say. I will reference her post along with what I planned to write!
It does set the tone, so I will start out by quoting it:
“Last month, the National Institutes of Health announced publication of a study it funded on menstrual changes associated with Covid-19 vaccination, the first peer-reviewed study of its kind. Earlier availability of this data might have enabled “prebunking” of vaccine-related fertility myths, dampening their spread and impact, but by the time the NIH noted in a press release that the results were reassuring, the damage had already been done.”
It may be hard to tell, because I was angry when writing this post, but overall I am pretty sure it has three points:
As in the previous companion post, when a clinician–in the waiting room or on TV or wherever–dismisses someone’s experiences, it opens up the door for that person to seek validation and other information elsewhere, which is often not optimal.
A particular set of vaccine-related concerns could have probably been foreseen and built into data collection and communication-related plans from the outset–but wasn’t. These have a ‘curious’ organ-specific flavor to them (people who menstruate & people at risk for breast cancer).
We *still* aren’t doing the research on how to communicate information to patients before, during, and after getting a COVID shot. What the hell are we doing?
*In this post, I use PWM for People Who Menstruate, which is more inclusive and accurate than ‘women.’
A brief, and woefully incomplete, history of vaccine concerns related to fertility
My working (though as yet under-researched) hypothesis is that concerns about fertility are one of the oldest, as well as most globally pervasive, rumors about vaccines: some of the original misinformation. Certainly, these concerns have popped up regularly since the 1950s. My guess, though I am still on the hunt for evidence, is that fertility- or sterilization-related concerns cropped up nearly from the beginning of vaccination efforts, whether among women, and/or minorities, and/or the poor. It may be that, in part, because we lack subaltern histories, we can’t trace the full set of concerns people had when they encountered vaccines.
To begin to tease this apart, I reached out to fabulous–and, as it turns out, kind!–historian Andrew Wehrman, whose book (The Contagion of Liberty) is available for pre-order. In Andrew’s research, he did not come across fertility concerns (yet… next book, Andrew?). Nor did he find that these concerns were at the root of anti-vaccination protests.
But he did come across a lot of concern about the effect of vaccines on pregnant and lactating women, rooted in general common knowledge that smallpox itself could be particularly pernicious for pregnant people (that disease awareness was much higher among people then may be an interesting variable for future consideration). Physicians in the second half of the 18th century gave heterogeneous advice about whether pregnant people should be inoculated. Andrew pointed me towards an early medical experiment on inoculation, carried out by Dr. John Quier on enslaved people in Jamaica (as reported by Londa Schiebinger in a series of books I am now also adding to my reading list). He reported in 1770 that physicians could freely inoculate pregnant women without harm to mother or child; the guidance provided by individual physicians continued to vary.
Andrew speculates that all of this may have been the root of present-day concerns about links between vaccines and sterilization. I think this is possible but I do wonder if there are other roots (Malthus was writing in the 1790s for example). I realize that I don’t know enough about how concerns about population size and composition influenced public policy between, say, 1798 and the 1920s, such that various subalterns may have had reasons to worry about the types of power the state was exerting through vaccination efforts (nor do I know enough about how aggressively vaccination was targeted at poor people during these times). More research to do, more tea to drink…
In the meantime, though, Heidi Larson, in Stuck, has pulled together at least some of the times and places that concerns about population control and vaccines have cropped up during post-WWII decolonization, in what are now mostly low- and middle-income countries. Heidi seems to draw heavily on the work of sociologist Amy Kaler, who examines the 1950s onward. Heidi writes:
“These rumors across multiple countries were attributed to the polio, smallpox, tetanus, and measles vaccine specifically and sometimes childhood vaccines broadly; all were feared to cause infertility… These rumors reflect a more generalized anxiety about survival and a deep distrust about the motives of the system. To those who spread and believed the rumors, it was not a matter of whether they were fact or fiction, but whether they seemed plausible and explained phenomena that were unfamiliar, not ‘logical’ in their culture or social setting or experience, or that somehow confirmed already brewing suspicions. As Kaler reflects, ‘even the most superficially bizarre rumors can be credible.’ It is, after all, about their believability, not the facts.”
These rumors trace from Nigeria to India, spreading around the globe. Heidi further notes:
“The rumors around the tetanus vaccine traveled the world, sparked by a misinterpreted research paper on a contraceptive injection and suspicions around a meeting convened by the WHO about expanding contraceptive options and focusing on ‘fertility-regulating vaccines.’ These rumors fall onto fertile ground in the context of growing attention to population control… In May 1995, a news wire went out in Washington DC: ‘Fr Matthew Habiger, PhD OBS, president of Human Life International today called for a congressional investigation of reports that millions of women in Mexico and the Philippines have unknowingly received anti-fertility vaccinations under the guise of being inoculated against tetanus.’ The vaccine sterilization rumors were circulated to 60 countries through Pro-Life chapters in the Catholic network.”
“In early-2017, WhatsApp and Facebook posts spread across the southern states of India instilling anxiety and doubt about a measles-rubella vaccination campaign… [The campaign] was disrupted by a social media storm fueled by a mix of conspiracy theories, safety concerns, and rumors that the vaccination was intended to sterilize the Muslim minority population… A year later, the same rumors sparked emotions through another WhatsApp campaign in Uttar Pradesh (UP), causing several Muslim schools to refuse the vaccines… In some cases, the madrassas did not allow health officials to enter the school while others encouraged students to stay home on vaccination days due to rumored fears that the vaccines caused infertility… These rumors echoed polio vaccine sterilization rumors that had circulated in UP in early 2002. The fears then were around the motives of Western countries supporting the global polio eradication effort but who were also engaged in a post-9/11 war on terror, sometimes interpreted as a war on Muslims.”
I will note that I was given other promising leads, which I have not yet had time to explore, including our better understanding of DNA, the HPV vaccine, and the beginnings of parent advocacy about vaccination as places to further explore the links between vaccine mistrust and fertility concerns.
The importance of all of this is not that history is neat–though it is!–but that when something is an incredibly poignant touchstone for hesitancy and fears, you’d think it would be top-of-mind to get out ahead of it from the outset. There is more work to do here, though it may also be true that some subaltern voices are simply lost to history. Even if incomplete, the information social scientists and historians had about what we might expect during a vaccine rollout was important. That makes the next bit all the more infuriating.
My experience(s)–first- and second-hand
I usually get a one-day reaction to the flu vaccine, which I have gotten every year since college (and sometimes twice a year, as there is a different vaccine for the northern and southern hemispheres!). As a brief aside, I asked clinicians *for years* why I got this reaction when it was not a live virus vaccine, and so many told me they didn’t know before one took the–literally, used correctly–one minute to remind me that aches, headache, and chills are part of a pretty standard immune response, which is what I was experiencing. And that the strength of my response actually meant my immune system was really working, not that it was bad.
Knowing this about myself, I assumed that I was highly likely to have a reaction to the COVID vaccine. In preparation, I pre-registered a sick day as soon as my vaccine appointment was made, on the assumption I’d feel gross the day after the shot. I then also did what absolutely no one should do and pulled an all-nighter before my vaccine appointment to get some work out in anticipation of being offline.
Which is to say that my body was definitely already in rough shape for vaccine day and that I felt generally grungy and achy, especially after the shot. I also got my period a few hours after getting my shot–heavy and mid-cycle. No matter what caused it–Stress? Eating chocolate late at night? The shot?–this is a factual thing that happened.
That this was A Thing I Experienced was completely dismissed when I went in for my next doctor’s appointment (female, if you are curious). Not with any reference to my charts, mind, even though I enter menstrual cycle information every time I go to the doctor. I was accused of just not really knowing my cycle (I am an avid tracker). The doctor just acted like, well, oopsy daisy, periods are soooooo weird.
I have talked to several people-who-menstruate (PWM) with similar experiences, both of some cycle weirdness (though not exactly like mine) and of dismissal. Not that it matters but most of these are also people-with-health-related-PhDs, so credentials aren’t making this situation better, as we all appear equally stupid in paper gowns, apparently.
Cait Lamberton is so much more eloquent than me! She wrote me: “If we fail to address a sacred and personal fear, we invite those who will gladly exploit it to address it in ways that serve their purposes.” [cry emoji]
As people with periods, we have talked, on organizational Slack channels and Zoom calls and beyond, about how it gets harder to talk to friends and loved ones who have fertility-linked vaccine concerns–as do some of my cousins–when you’ve had this experience and have no way of speaking intelligently about how this can happen but it does not have long-term consequences. I haven’t gone deep online but I am sure I could find many more menstrual mates with questions there if I looked (such as here, as a start).
And here’s the thing: the uterus is part of the reproductive system. So PWM who experience this and raise these questions are not crazy or stupid for wondering if it affects the reproductive system and fertility more broadly (it doesn’t, as I outline below). It may not be factually correct but it is not implausible or illogical to have questions about what is going on or what it means. And when clinicians dismiss them rather than engage with them, this is deeply problematic. It disaffirms that PWM know something about their own bodies and it belittles a question that is logical.
Both of these are precisely what might drive someone to look elsewhere for answers, where they might find validation for their experience and their questions, which could lead to openness to receiving advice on what to do about it (avoid the vaccine, disengage from the medical system, take various herbs, and so on).
Just to provide a bit more of an example, I reproduce below a Slack thread from Jan 2022 in which women (in this case, all the PWM who were conversing were women) are figuring it out–but we’re just guessing and, frankly, doing our own research when we shouldn’t have had to. This workplace conversation happened after I had WhatsApp conversations with each about our own period-related reactions to the shot.
Amidst all of this, I want to call out one other teachable moment that was squandered. Remember Nicki Minaj’s Tweet about her cousin’s friend’s testicles and all the ‘hilarity’ that ensued? Fauci came out and said that “there’s no evidence that it happens, nor is there any mechanistic reason to imagine that it would happen,” where ‘it’ is the “COVID vaccine causing any reproductive issues in men or women.” I like Fauci and want us to have strong institutional spokespeople who can be solid explainers-in-chief. But this was a dismissive comment that, so far as I can tell (going to try to put together a proper timeline), pre-dated any systematic data collection about the experience of specific male or female reproductive system reactions to the vaccine, no matter how minor or temporary. I have been diligently reporting my post-vaccine experiences to the CDC’s v-safe system (where do those data go???) and have yet to be asked about symptoms relevant to people who menstruate or any other biological-sex-specific experiences. If those entrusted to explain things to us speak too far beyond the evidence (recognizing we never have full information in evolving situations), this opens up space for distrust.
Moving forward
There are a few things that haven’t happened vis-a-vis this vaccine:
Behavior change, program planning, and historical knowledge do not seem to have been at the table for any kind of vaccine & communication rollout brainstorming. I do not know who was there or if there was such a meeting, but it is very clear now what did not happen.
Data weren’t collected on gendered health issues from the outset, as Keren covers well in her article.
There is no (widely known? incentivized?) mechanism for individual clinicians to report up to “the system” the symptoms and experiences that they hear from their patients, so that information could be collected systematically–even if not in a controlled trial–to figure out what was fully going on.
Gynecologists and other clinicians are not systematically equipped with the kinds of specific, up-to-the-minute information they could use to talk to PWM who raise these concerns, nor necessarily with the soft skills to acknowledge a patient’s experience and dignity before explaining how one thing can be affected without affecting the other.
Gynecologists and other clinicians may not be certain about whether and how to discuss these symptoms with their patients before they make their vaccination decision, nor is this part of the (rare) education that happens when you actually go to get your shot (for the love, claim back those 15-minute post-shot observation periods to do some serious vaccine literacy work and turn the recently-vaccinated into more informed people who are better equipped to become vaccine ambassadors!!!).
There is a real empirical question here that should be researched: what do patients want to know? For example, I have been having an interesting conversation with an awesome practicing- and research-doing gynecologist, who writes, “Educating women that it could happen would have helped a lot! And yet, as a gynecologist, I don’t want any of my patients to think I’m telling them that it will cause infertility or basically give them any reason to think I’m questioning that it is absolutely the right thing to do right now due to a novel virus that might kill us [and itself cause fertility issues] rather than maybe throw off fertility.”
These are reasonable concerns and it may be that for different patients, different up-front information will have different effects. What else could we be asking or doing in waiting rooms to better understand this and tailor the patient experience accordingly? What soft skills, pieces of information, and teaching and learning materials do clinicians wish they had?
In all the many COVID mis- and disinformation studies out there, I have yet to see solid ideas with solid research designs about how clinicians and patients can better communicate about this kind of tough stuff, facilitating decisions consistent with personal and public health now–but also deeper trust going forward. I have not seen fertility-related facts at the forefront of any conversations; they are overshadowed by more sensational misinformation. Fertility concerns (and concerns about government curtailing biological autonomy more generally) are old but that doesn’t mean they aren’t more worrisome than the sexy, hyper-new concerns about microchips.
On a different note… inflammation and mammograms
Holy hell! Did you know this is not the only sex-specific experience that is being largely ignored? Did you know that women who have had lymph node swelling under their armpits post-vax have been worried about breast cancer? Did you know that inflammation post-vax has led to false positives on mammograms but that not all physicians have been equipped to advise their patients about not timing mammograms with the vaccine? The CDC does know this and tucks it away in a paragraph, here. I don’t have energy to get into this now but, again, it seems like opportunities to warn, prebunk, and generally teach people about their bodies have been squandered.
Once again, as I am almost certain Tressie said, we are having a DIY (do-it-yourself) pandemic where clinicians, patients, and everyone else are having to guess, confer with friends, and generally figure things out for ourselves even though the potential for systematic scanning and solutions-generation has been there all along.
As some of you know, I have been going deep into reading on trust lately, including Ben Ho’s book and the collection of Onora O’Neill‘s lectures on the topic. Trust is needed in times of uncertainty and imperfect information–by definition, we don’t need trust in contexts of certainty and perfect information.
The reason for my deep, if plodding, dive is two-fold.
First, trust is increasingly implicated in the disappointing handling of the present pandemic as well as past and future public health emergencies. Heidi Larson, in Stuck, writes about the trust deficit that underlies much vaccine hesitancy: “Among the driving sentiments behind the current waves of vaccine questioning and dissent are a sense of lost dignity and distrust… those who feel that they are herded like sheep, treated as if they are expected to follow without questioning, that they have no voice.” Dignity/respect and trust work side-by-side.
Meanwhile, Ezra Klein just highlighted the role of trust-in-government in effective pandemic and emergency response, including quoting Bollyky on trust between citizens and their government as well as among citizens themselves. Alan Levinovitz just riffed on unmet needs in health-care settings: for empowerment, for having our inherent dignity acknowledged, for feeling someone has our unique case and interests at heart. In effect, in a clinical or other health-care encounter, we need to be heard (that is, have our dignity affirmed) and build trust. Trust between health care providers (broadly defined) and public health authorities, between clinicians and patients, and in multiple other health-related relationships matters.
Not all of these authors are saying precisely the same things (I intend to write more on each and these readings in conversation with one another in future posts) but they are circling around similar basic ideas and ties that matter in securing and improving public health. Ties that, in many cases, need to be shored up before the next emergency as well as consciously and carefully maintained throughout. The needs for trust-building communication efforts at all levels and for trusted and trustworthy intermediaries and interpreters are pretty stark. In this post, I will focus, in particular, on respect/dignity and trust in the clinical encounter, which echoes some of what Alan Levinovitz, and certainly many before him, have said.
Second, (lack of) trust and (dis)respect were at the root of the early personal experiences that drove my professional interests and career, into medical anthropology, then health behavior, then health systems. It still gets me fired up today, as I think part 2 of this post will make clear.
I am pretty excited about circling back to my early interests and passions in two roles: (1) my role with the Mercury Project as well as (2) my affiliation with the Dignity Project. With the former, in particular, I hope that we will dig deeper into how we might–through better communication–re/build the trust relationships required to get folks to tune into trustworthy information, tune out of misinformation, and ultimately behave in ways that are consistent with the public’s and their own health. Getting shots-in-arms, masks-on-faces, and tests-in-noses now is important because way, way too many people are dead and dying. But simultaneously laying the groundwork for future responsiveness to public health guidance is also essential. Routine needs are, of course, ongoing, and the next wave and/or the next illness is certainly coming. The risks right now seem high that we will see ‘backsliding’ on things like routine vaccination on the basis of concerns raised about the COVID vaccine–but, equally, conversations about the COVID vaccine could be used to shore up vaccine literacy and broader public (health) trust. This could have been happening all along but hopefully there is still time and will and skill to do things differently.
The key, if unoriginal, thesis of this post is that if we are worried about misinformation and/or about people acting in ways consistent with their own and the public’s health, we should be worried about clinical trust and dignity-affirmation. When we don’t feel heard, when we don’t feel we have agency, and our bodily experience is rejected by an ‘expert’ in the intimate context of the clinic, it can tear the fabric of trust. When trust is damaged or lost in a clinical encounter, it may lead us to seek open ears and shared experiences elsewhere. These sources may seem more trustworthy, authentic, and empathic–even if the information they offer, intentionally or not, is inaccurate. This mismatch is challenging and plays out over and over (see Hammer & Das on competence versus effort in clinical encounters in Dilli for one of many examples). This echoes the comment above from Alan, on our actual needs in the clinical encounter and the consequences of those needs not being met.
If we are worried (and I am) about people seeking information from inaccurate–if empathetic–sources, then considering how to build and maintain trust in the clinical encounter is crucial as a bulwark against this and future public health emergencies (coupled with, of course, the urgency of making sure that people have true access to a medical home and can build these relationships to begin with).
Cards on the table: I’m not always wild about MD and other clinician behavior, and my starting premise is that education / a white coat does not confer trust but that, rather, clinicians must work to earn and maintain trust in each encounter, even though they may have superior physiological and pharmacological knowledge. Acting like trust has already been earned is likely counterproductive.
To this end, it may be useful–for me and I hope for others–to lay out some of my past and current health care experiences that have felt disrespectful and, as a result, frayed the trust I have in any one clinical encounter, even while broadly maintaining trust in the medical and public health professions (and to be clear, both my masters and doctoral degrees are from schools of public health).
Based on my personal experiences, I know that if I were a slightly different kind of person–with a slightly different education, a slightly different set of parents, a slightly different valuation of how I spend my time–I might seek legitimacy, confirmation, and answers in the kinds of online or offline fora where so much misinformation (even well-intentioned) finds fertile soil and grows. Like many, sitting stripped down in a hospital gown, I have not felt heard and not felt respected as a reasonable and reasonably intelligent person who, while certainly not expert in physiology and pharmacology, is pretty tuned in to my own body.
My experiences are surely not as severe, persistent, insidious, or structural as what many people have experienced in the clinical context. I come into this conversation with a fair amount of white, expensively-and-interminably educated privilege. Nevertheless, they have been pivotal to me personally and professionally. I have wanted to seek information, confirmation, and healing outside the clinical setting and to understand why others do so. Personally, I have sought out some of the more mainstream ‘alternative’ treatments (acupuncture, supplements) to fill gaps in what I saw as insufficient listening and incomplete advice in the clinical setting. I could have tipped farther or another way and so have some sympathy and resonance with those who take the first step into the rabbit hole. It also gives me an intuitive sense, in addition to all the empirical evidence, that being heard and respected could go a long way in repairing trust that is so important at personal, public, and policy levels for health.
An aside: some communication theory from the waiting room
I only took one actual communications class in college (public speaking, which in my opinion should be mandatory) but had several friends who were comms majors. One of these suitemates took me to the emergency room one night when I had stomach issues, while also studying for a test the next day (yes, she was awesome). Through this mechanism, I ended up getting a dose of comms in the waiting room. In particular that day, she was studying the ‘onion theory‘ of communication, aka social penetration theory. The gist is that we often build (or dissolve) interpersonal relationships one layer at a time: I peel a layer and then you peel a layer, getting deeper and more intimate at a similar pace as we move toward ‘core personality.’ When the balance or speed of the back-and-forth is off, it can cause trouble.
As we discussed that night, and as I have often thought about in the doctor’s office in the years since, the clinical encounter violates this theory in many ways: you have peeled off your clothes and then you are asked extremely personal questions while, at best, knowing the doctor’s or the nurse’s name and maybe something about their personality communicated through the flair of their scrubs or nametag bobbles. You aren’t necessarily stripped down to the core of your personality but you are extremely exposed on the surface and discussing the workings of your innards and your less-desirable behaviors. It is a very weird, vulnerable, and uneven peeling experience that makes efforts to build trust all the more important.
[The very smart Leah Perkinson pointed out that the idea of ‘unearned familiarity’ might also be relevant here, such as discussed here. Will aim to explore this more in the future!]
“Scoliosis doesn’t cause pain”
When I was in 7th grade, I was diagnosed with scoliosis. It was first noticed by my ballet teacher. Yuri had a thick accent and it took a while to understand what he meant when he said I had an ‘ump,’ noticed during a forward fold. ‘Like a camel,’ he clarified, and I began to understand. Perhaps it is this initial experience–growing up with (appropriately) hands-on teachers who knew a lot about my body–that still makes me deeply trusting of my Pilates instructors today and their insights about my body. These are also stripped-down experiences but so much more satisfying than most clinical encounters.
In any case, the observation came before the pain started. Muscle spasms, occasional loss of feeling in one arm. For 8th grade, I was fitted for and wore a torso brace. It was molded plastic with big velcro straps and various custom-fitted foam bumps inside to nudge my spine closer to where it was supposed to be. Maybe it would help me grow straighter, the orthopod said, if I wore it all the time. It was uncomfortable to sit in, digging into the tops of my thighs or riding up into my armpits. It was hot and horrible to sleep in. I named it Spongella.
The issue here, though, isn’t that I went through this, nor that I effectively ended my dancing career by having my full thoracic spine fused after 9th grade, freezing me at 5′ 5.5″ and a very low arabesque forever. It was the clinical encounters and how they made me feel. I was told surgery was the answer–but the full set of problems I was experiencing was denied. Yes, I blatantly had scoliosis and the spinal angle was worsening each year as I went through puberty and a growth spurt. These were solid reasons to pursue surgery. But I also had pain, spasms, and muscle locks, and it wasn’t clear whether the surgery would do anything about those.
The orthopod would make incredibly unhelpful statements: “Scoliosis doesn’t cause pain,” he said. Mmmmk. But then why was I in pain and, in turn, what was I going to do about it? The question was never answered. The symptoms and the primary diagnosis didn’t fit and that was enough for him to dismiss the symptoms.
And so, it was through the owner of the dance studio that I was introduced to pain-management options like acupuncture and massage and, through this, began taking Chinese herbal medicine. All of this helped; not enough to make structural change and stave off surgery but it helped to manage the pain and to turn me into a convert of acupuncture and massage. In addition to the actual healing effects of this work, I was treated like a human (to the extent that teenagers are, in fact, human) and my pain was acknowledged as real. This is an excellent way to convert people.
My ‘alternative’ explorations only went this far. But at a different point in the internet’s ubiquity, in my own development, or under a longer wait-and-see period before deciding to have surgery, all of this could have been different. And so began my visceral understanding of the importance of being heard and having one’s symptoms acknowledged and addressed in building trust.
Fear and diabetes in Winston-Salem
After I dispelled–through two undergrad summer digs–the idea that I was going into archeology, I settled more comfortably into medical anthropology and health economics. I was lucky in that service-learning was en vogue and, for class, I did both work (filing) and research at a local free clinic that catered to a poor and often undocumented Hispanic population. This was a clinic staffed by a pretty cool group of clinicians dedicated to the population.
I was steeped in early Kleinman–Illness Narratives–at the time, appreciating the structure in approaching the unfamiliar, the parts you could classify, lay out, and make sense of. That we’re rational once you dig in but that you need to ask the right questions to dig in. Both Kleinman and I have moved beyond this, into deeper narratives since then, but the orderly structure is still compelling and a useful starting point.
A key component of Kleinman’s earlier framework was the importance of etiology, that by understanding how a person understood how a disease began, from where it originated, we might learn something about the treatments and illness management they saw fit, that, indeed, fit with the narrative.
It’s a simple observation and yet I was struck by how much it might matter. My question was how patients thought their diabetes came about. When I saw my son fall out of a tree, one said. When I saw a gun pulled on my friend, said another. Fright is a commonly understood cause of susto, a culture-bound iteration of diabetes (simplified). How odd it might be (quite separate from the difficulties of complying) to be prescribed diet and exercise as a cure for a severe fright.
The well-meaning clinicians doing the good work of practicing at a free clinic didn’t ask patients how their illnesses came about. They just didn’t think it mattered to their diagnosis and the disease-management plan they would recommend.
Honestly, I don’t know if it would have been trust-building to ask about the origins of one’s illness and to tailor the (explanation of) treatment accordingly and, in turn, whether this would have been impactful enough to show up in higher compliance and lower HbA1c. What I did learn is that the questions not asked in a clinical encounter are many and they might matter for patient understanding and follow-through on clinical advice, as well as propensity to seek information elsewhere.
***
In the next post, I will dive further into dismissal and its consequences, specifically in the case of menstrual and fertility concerns around the COVID vaccine. Because I have both a lot of rant and a lot of receipts for this argument, I decided to separate it out. So stay tuned for “the vaccine doesn’t have anything to do with your reproductive system.”
The phrase “unintended consequences” gets tossed around a lot in development and social sector work–often as a glib euphemism for “bad stuff we didn’t expect” or a “better call qual” moment.
However, many “unintended consequences” are indeed anticipate-able, especially with grounding in theory (+theory of change), the literature, and the context. It is the job of the program design and research teams to consider ways in which a program could lead to undesirable outcomes (those not consistent with social impact, progress) and to mitigate them (in the program) and capture them (in the research). It is not necessarily helpful to give all ‘bad’ outcomes the same label, especially an ‘oopsie-daisy, we didn’t mean for that to happen!’ label!
To walk quickly through my post-it:
NW quadrant: these are things that are ‘good’ and often the targets of the program or foreseeable good knock-on effects. These are often measured, celebrated.
NE quadrant: these may tie in with positive deviance. It can be useful to capture these outcomes and so, even in the context of quant/closed-ended research, researchers may want to include an open-ended question that asks whether an experience or intervention brought about any other things that the respondent deems ‘good.’ Too often, these outcomes are captured and celebrated idiosyncratically and anecdotally when they could have been captured systematically with a little thought.
SW quadrant: too often things that get the ‘unintended consequences’ label fall in this box, even though they are often quite foreseeable. They should be programmatically measured and captured as part of the monitoring, measurement, decision-making plan.
SE quadrant: Of course, some ‘bad’ things will happen that are not foreseeable even with a reasonable amount of thought; hindsight will always be 20/20 while foresight is not. Similar to the NE quadrant, it is easy enough to include one open-ended question about things that were challenging or bad. It may be the work of deeper, richer qualitative work to investigate further if issues are surfaced, but one open-ended question on the matter will go a long way to more systematically understanding problems and harms.
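To spell out the axes of the post-it: one axis is whether an outcome is foreseeable, the other is whether it is ‘good’ or ‘bad.’ In brief:
Foreseeable & good (NW): program targets and expected knock-on benefits; measure and celebrate.
Unforeseen & good (NE): positive deviance; catch with an open-ended ‘anything else good?’ question.
Foreseeable & bad (SW): predictable harms; build into the monitoring and decision-making plan.
Unforeseen & bad (SE): genuine surprises; catch with an open-ended question, then richer qualitative follow-up.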
I will be speaking more about research ethics in the coming month (and beyond!). In light of this, I am documenting some formative ethical experiences I have had and that, to some extent, still puzzle me. Perhaps befitting someone initially trained in anthropology and grounded theory, these examples stem from individual interactions I had ‘on the ground’–moments when research participants or research staff directly critiqued me.*
I am glad they did.
Of course, these are only the ones who spoke up. (What if we asked more consistently?)
1. Ghandruk, Nepal
My first research experience was spending ~3 months in Nepal, including an extended stay in Ghandruk in the Annapurna range, not long after its ACAP office had been blown up by Maoists. There were many ‘firsts’ for me on that trip during undergrad: first time in an LMIC, first homestay, first research project, first primary data collection, first fieldnotes, first altitude sickness, first time working with an interpreter, first goat sacrifice… All around an influential time for me.
An unexpected breakfast, Ghandruk, Nepal
As with many first research projects (and we can definitely discuss some additional ethics around this!), the biggest impact was probably solely on me, not the local community in which I was working and not the world. It will likely come as no surprise that I did not solve the question of where, when, and why people seek care for what would be diagnosed as mental illness, and specifically depression, according to the DSM. Nor did I unravel how illness narratives might shape responsiveness to pharmaceutical treatment.
Nevertheless, I did learn some things! Two snapshots, in particular, have stayed with me from that time (besides the fact that my host mom said I should only come back and visit once I was married…).
1a. This is self-answering.
About three interviews into my project, the interpreter with whom I was working turned to me and suggested I was not making good use of people’s time because I was simply asking the same questions over and over (my first prepared interview guide!). Indeed, he thought I was quite daft and not ‘getting it.’
Some questions he deemed ‘self-answering’ and not worth asking at all (for example, whether it was hard to pay for health care in this village). Others, he felt that once we had heard from one or two people in the village, we had more or less settled the matter and there was no need to take anyone else’s time. While I ultimately prevailed in arguing that there was merit in asking the same questions over and over–and that, at a minimum, it was what my professor expected of me–the point still sticks with me: how others might view not just the time researchers ask of individuals but also the collective time taken from a community, and how they understand researchers’ strange ways.
1b. What will you do for me?
Probably the ‘epiphanic moment’ for me to want to pursue applied public health rather than medical anthropology** happened after interviewing a woman who entered my field notes as Old Woman #3 and later received a pseudonym. My questions–some closed-ended, some open-ended–were about illness etiology and health-seeking behavior. I asked her to list her ailments, to describe how she got them / they got her, what she had done and what care she had sought, and whether it was challenging to pay for it.
As we closed the interview, and I believe I provided a university-logoed pen, she asked, now that she had listed all her problems, how I would help her with them. I, of course, had no answer.
This gets at a point raised in several places including forthcoming work from Alex Avdeenko: It’s weird for strangers to come and ask you about all your problems and, then, really weird for them to not offer to do something about it after you’ve provided the list. Or, in a bizarre non-sequitur, to give you a pen. It is no wonder that people may often believe that such question-asking is (or should be) a needs assessment that will ultimately be attached to succor.
2. Tamale, Ghana
I spent a year in Ghana as part of my thesis research, both working with IPA and doing my own work on the side.
2a. Ask me at the end and I will tell you if I agree.
While piloting for work on malaria medication completion in Ghana (and funneling my salary back into my thesis research, follow-up interviews***, and fabric), I accompanied data collectors. Plenty of times, the informed consent and other processes don’t quite go as written on paper–you said interviews would be conducted in private but actually the respondent wanted someone else there with them. Challenges in keeping to the letter and/or the spirit of the Belmont principles have started getting more ink in the past decade (such as, though very much not limited to, here, here, here)–a trend that should continue, and not only for situations of randomization.
During one particular interview, an older gentleman said something that struck me. We went through the informed consent process and then he said, ‘but I don’t really know what you are asking. First ask me all of your questions and then I will tell you if I consent.’ From an IRB perspective, this is, of course, backwards, as you have to get consent before you get to your questions, on the assumption that the upfront information you provide is sufficient.
But from the perspective of someone less familiar with research in general, and certainly with the particulars of your research, it may not be. No matter how well-crafted your consent statement. Consent could be reimagined, potentially, to start by getting us to the point of beginning the interview, to have more scripted reminders throughout that people can skip or stop, and to have an ultimate ‘consent,’ at which point they could request to have their data trashed now that they know everything we planned to ask. If we started to take the view that research participants should learn something about research in the process, rather than just taking CYA measures, it might lead us to reconsider how the informed consent process (possibly reimagined) can be truly informative (and ultimately understood). A corollary is that we might wish to investigate what is of interest to potential research participants, so that we not only say what we are obligated to say more clearly (again, stay tuned for work from Alex!) but also say what will be interesting and useful to those deciding to join.
2b. Honestly, imagine.
Sometimes the lessons on what is locally ‘ok’ don’t come from research participants but rather from research staff. This one has more to do with social norms and relationships, but it was an element of positionality and potential offensiveness that I had not considered before. Specifically, our questionnaire included items on working memory, in which participants were given increasingly long strings of numbers to remember and say back. I know it was a little nerve-wracking, as my thesis advisor/PI Prof Dr. Günther decided to give me a quiz!
Nevertheless, I did not expect it to be a major issue during data collection. I was wrong. When we did a feedback survey among the enumeration team at the end of data collection about what went well and what did not, it turned out that asking this question was mortifying to them. While it had not trickled up to me during data collection (my fault), the sheer horror and impropriety of asking someone older than you to remember and recite numbers came through very clearly in the open-ended responses. Imagine, they said, asking this of an uncle. Honestly, imagine.
This may not strictly be a ‘research ethics’ issue but it does hint at the fact that ‘we’ may not always understand what will be sensitive, uncomfortable, or even unacceptable in specific contexts. ‘Sensitive’ (and to whom) is one of the terms in research ethics (like coercion, undue burden, vulnerability, and a few others) that are important enough to be used a lot but so vague as to let you more or less define them as you will–that is, not actually terribly useful beyond bureaucratic CYA.
‘We’ do not have consensus on what constitutes appropriate compensation for research participation–compensation that veers away from coercion but also provides fair benefits, accounting for the risks imposed by the research. There has been a more robust conversation in medical and pharmaceutical research, such as the dialogue on ‘fair benefits’ by Emanuel, Wendler et al. across several papers and essays. Norms across different types of research and situations vary. And research participants may not interpret non-monetary offerings the way they were intended.
Into this space, and on a very low-budget project, we decided we would offer education/information about malaria as compensation for people’s time and also to make sure they better understood our research. This included seeding information about how antimalarial resistance develops when you don’t finish your course of medicine (as part of a communication experiment that fell out of my thesis), including using these fabulous illustrations from Liz Venable, which we narrated out loud.
In addition, at the very end of the whole study, we planned (and ultimately did) send an SMS urging people to make use of bednets as well as to use an ACT and complete the course when ill with malaria. Again, the idea was that providing information might not only be useful but reflect some degree of compensation for time spent in our study.
The data collection team was NOT into it. First, they pointed out, almost certainly accurately, that people already knew about bednets. This contrasted with the reason you need to finish all your malaria medication, which might be genuinely new info. More interesting ethically, however, was how upset they were that we would tell people to do something but not help them do it. In their view, it was very uncool to say you should use a bednet but not actually provide one.
3. Kiryandongo, Uganda
I have been lucky enough to make two month-plus trips to Uganda in the context of IDinsight’s work with GiveDirectly in Kiryandongo refugee settlement, with both quant-experimental and qualitative components. The two stints coincided with training and early data collection for the quantitative and qualitative work, respectively.
Training out of the elements at Max Hotel.
3a. I can talk to you.
Our longitudinal, semi-structured work has involved, give or take, monthly conversations with a small number of respondents, who tell us about how they are doing and challenges they are facing. Each month we asked a set of similar questions (how they are doing, general sentiment in the settlement and the market, etc.) as well as a focused set of questions on a topic (education one month, health another, detailed transfer-spending plans in a third, say). We have two qualitative interviewers on this project (temporarily paused) and they have done a fabulous job building rapport with our respondents.
While data collection was active, Rico was diligent about reviewing incoming notes and transcripts in more-or-less real-time, which allowed him to give regular feedback on interview technique and additional probe ideas to our team (blog post forthcoming on this remote management of qualitative data collection by less-experienced interviewers). He was also doing the regular management check-ins with our interviewers. He was the first to note, then, that Stephen and Christian were reporting that sometimes the respondents were calling them at unscheduled times just to chat. In a few cases, the fieldnotes and transcripts also revealed that the respondents felt they had no one else to talk to, that they had suicidal ideation, and so on.
At base, this reveals the importance of having a resource-referral list (for which you have checked that the resources are truly operational) as part of standard practice. Working with a population that has recently and collectively experienced trauma makes this all the more important, even if you are not asking ‘sensitive’ questions. This can be really challenging when, for example, you are working in a country that does not host a national suicide hotline–and certainly not one in all the languages relevant to refugees. Nevertheless, it is critical.
This experience, born of extended engagement, also points at challenges, well-known in anthropology and increasingly recognized in other fields (with/out acknowledgement), when the line between ‘data collector’ and ‘confidant’ becomes blurred.
3b. You’ve got a man in Africa.
This final point raises a question about who ‘confidentiality’ is for and who wants it. In some cases, of course, it is absolutely essential to maintain strict confidentiality, with literal life/death implications. But that is not always how it may be seen on the ground. Take Martin, pictured below. I sat in on an interview with him, led by the wonderful Stephen, who had just started as a qualitative interviewer on this project after serving as an enumerator at baseline. I know Stephen went through the process of providing study information carefully and sought a clear statement of consent. I know how much time walking around the settlement Rico and I spent reviewing the informed consent script with our team, making sure it came across as conversational and not a monologue.
First interview with Martin, Kiryandongo, Uganda.
Following consent, the first interview guide for these opening talks in a series of longitudinal data collection focused on some history of arriving in the settlement, whether people knew or got along with their neighbors, and what people thought of the GiveDirectly lottery to determine the order of transfer receipt. Following all of this, Martin had Stephen translate a request to me. Namely, that he wanted people to know his story and that I should spread it far and wide. That I should go back to the US and let people know that “I had a man in Africa” who needed help. He insisted on me taking the picture below, to share along with his story. Which, now I have.
Image taken by strong request from “my man in Africa,” Kiryandongo, Uganda
*This might be contrasted with, say, ex-post getting interested in ethics after your study is critiqued.
**Though another would come later when a mentor let me know that the medical anthropologist with whom I was considering pursuing a PhD would try to sleep with me and would make things quite unpleasant for me if I did not. That, too, was a decisive moment–and a different kind of ethics.
This post represents joint work with the fabulous Mike Frick. It has been a long time coming–to my horror, I see that we first drafted this at end-December of 2010. It’s gone through a lot of iterations, trying to fit this journal (for example, this working/submitted version here) or that, rather than just getting our ideas out in the world. It’s never been a ‘perfect fit.’
However, I find that I return to some of these ideas time and again, so I want something out there. There are places in the manuscript that still had comments, in which Mike and I both acknowledge we wanted to return to the text and see if we could push a little deeper, be a little clearer. Nevertheless, sharing out.
Key reasons for results-sharing and shared sense-making
Key takeaways & recommendations for action
Introduction
In this paper, we argue for recognizing and including research participants as important stakeholders interpreting and learning from evidence generated through social science research. Nigerian social commentator Onye Nkuzi asked, “If you are doing ‘ground-breaking social science research on Africa’ that Africans cannot access, what, ultimately, is the point?” (Nkuzi, 2011). We agree. “The point” matters.
Researchers often do not share research outputs and results with surveyed and studied communities and individuals, limiting access and engagement. (While we focus on study participants in this paper, many of our points apply equally to a range of involved stakeholders, including research field staff, street-level implementers, local (LMIC) ethics boards, and local media.)
This oversight does not always reflect purposeful disregard of participants. There are several constraints on sharing results and co-interpreting with participants, some of which may bind in ‘our’ (global health and development research) current operating set-up. Budgets and time may bind. Limited recognition of what kinds of participant-engagement activities can fit an ongoing or longitudinal research program may constrain sharing. Creativity about meaningful modes of sharing may be limited, given the scant precedent on which to draw (itself a product of the current lack of budget for such activities). And consideration of the benefits and drawbacks of sharing collected information with those from/with whom it was collected may be cursory.
Whatever the reasons, we contend that these constraints have negative implications for both study participants and researchers. They limit access to and engagement with research. They elide the important role research participants can (and should) play in interpreting and learning from evidence generated through social science research.
We discuss the ethical and practical reasons for sharing results with study participants in social science and development research post-data collection. We propose that this practice has the potential to: (1) improve the credibility and validity of research findings; (2) provide fair benefits through knowledge enhancement and empowerment; and (3) promote long-term relationships, trust and collaboration between researchers and participants.
To develop the evidence on this, we call for more trials, errors, and research into the modalities and effects of sharing results with participant-communities to articulate the possibilities, benefits, and challenges. We close by examining the challenges of results sharing and discuss the roles different stakeholders—from investigators to funders to ethics boards to communities themselves—can play in this activity.
*We acknowledge that in the years since our first draft, there have been a few additional calls for similar action, though not necessarily embedded in the framework we propose and not all pushing beyond ‘results sharing’ into shared sense-making. To give credit where it is due (and you can let us know if we overlook anything major): a call for sharing results is made in this F2P2 post, ODI provides an example of results sharing (though we are not convinced of the usefulness of that particular mode), and the call for structured ethics appendices has a placeholder for results-sharing.
An ethical framework for results-sharing
We find it useful to develop an understanding of why results-sharing and shared sense-making are important, rather than just stating that they are. Although ethics in the social sciences merit a broader outlook than the biomedical gaze, clinical trials still provide a good place to start for social scientists thinking on ethical engagement (Alderman, Das, and Rao 2013). A particularly active area of debate centers on clinical trials that lead to the development of a tangible and useful product, such as a novel drug or vaccine (Bhutta 2002).
The Council for International Organizations of Medical Sciences (CIOMS) guidelines require sponsoring agencies to “agree in advance…that any product developed through such research will be made reasonably available to the inhabitants of the host community” (CIOMS 2002). These and similar guidelines apply to tangible products of research. They may not perfectly map to social science research, which may not generate a new product or service but rather knowledge to inform future policy, programming, and research (Bhutta 2002). While social science research often does not impose the same kinds of risks on participants as testing vaccines or drug therapies, this does not attenuate ethical obligations to share research results. Relative to biomedical research, the opportunities for shared sensemaking, about social constructs and processes, are heightened in social science work.
Even in biomedical research, the final output of research may be knowledge—individually and/or collectively meaningful—rather than a marketable product or technology. For example, in genomic research, where the research product often is knowledge (i.e., the results of genetic testing), a key concern is sharing this information with specific participants (Dressler et al. 2012; Kristman and Kreiger 2008; Simon et al. 2012). Since the field’s start, its researchers have considered rapid and open release of genomic data as critical. For example, in the U.S., the National Human Genome Research Institute requires its grantees to release genetic sequences into open-access databases maintained by the National Institutes of Health (NIH) and other institutes (Lowrance and Collins 2007). While this system of data sharing enables scientific research, it does not connect individual study participants to relevant and medically actionable results. A review of consent forms from genomic studies found that participants are usually told genetic analysis will occur “without any explanation of with whom the resulting data will be shared” (McGuire and Gibbs 2006). An exception is research conducted among indigenous and aboriginal communities, who often have the right to access their genomic data and exercise some determination over its future use (Jacobs et al. 2010; Schuklenk and Kleinsmidt 2006; Taniguchi, Taualii, and Maddock 2012).
Against this backdrop, several thought-leaders in biomedical ethics—looking beyond products such as vaccines but still referring to clinical trials—have called for researchers to (1) meaningfully present research purposes and findings to study participants and (2) involve these participants in further results interpretation and manuscript preparation (Weijer and Emanuel 2000). These components are two in a larger ethical framework proposed by Ezekiel Emanuel and colleagues to govern clinical research in lower- and middle-income countries (LMICs). To promote the social value of such research, the architects of this framework suggest that research should be structured as a collaborative partnership between researchers and communities (Emanuel et al. 2004); sharing results is a component of this. This framework holds that consent forms, de-identification of data, locked files (physical and virtual), and other tools of confidentiality form the cornerstone of ethical research—protecting individuals—but remain only the starting point for research that produces fair benefits and social value (Benatar 2002)—that is, research that is ethical.
A recent review of the literature indicates that participants in biomedical and clinical research want “aggregate and clinically significant individual study results communicated to them,” or at least want the opportunity to choose to access these results (Shalowitz and Miller 2008). In a famous example, Richard Doll and Bradford Hill presented their findings on smoking and lung cancer to the physicians participating in their study. In a report on their findings, Doll and Hill write: “We thought it necessary, in view of the nature of our results, to lay these preliminary data before the survivors of the 40,000 men and women who made them possible” (Doll and Hill 1954). Doll and Hill informed participants of the results through the British Medical Journal, an established information source for physicians such as those taking part in the study. The presentation of results to study participants in LMICs requires further creativity and effort, given the information channels and barriers in these settings.
At present, Institutional Review Boards / Ethical Review Committees and some grant-making agencies that support research in LMICs require researchers to delineate how they will share their results. Yet these bodies give limited guidance on, or mandate for, what information should be shared with whom, when, through what means, and with what aims. Scant research systematically assesses the desire of LMIC study participants to learn more about the research in which they take part, including the results. From our anecdotal experience, participant-communities and individuals would like to know the results of research and to share in benefits that may stem from them. This is an area for further, more systematic exploration—particularly in relation to two stages of the research process: (1) data validation and analysis; and (2) results dissemination and communication. Such exploration is beyond the scope of this paper; here, we make the case for why it is needed.
Sharing results during data validation and analysis: reciprocity, transparency, and validity
Dialogue and action in the social sciences may further pave the way for discussions about including research participants in validating and analyzing results prior to dissemination. These include conversations about limiting extractive research (spearheaded by Robert Chambers), about increasing research transparency, and about enhancing research validity.
Often, research is ‘extractive.’ That is, outsiders gather raw information to take away for processing and construction into valuable and meaningful (knowledge) products (Chambers 2007). Chambers has helped shape an appreciation for a participatory approach to development and research as the antithesis of extractive approaches, a dichotomy echoed by Leslie Groves (Groves 2015). But much of the current work on participation focuses on involving stakeholders in the generation and collection of data (such as Chambers’s work on mapping geographic space and power relationships as understood locally, as well as his newer focus on participatory numbers and statistics). This may be empowering in some measure, but it does not necessarily achieve the true opposite of extraction, which is to insert or reciprocally create something of value. In our estimation, a more productive alternative to “extractive” research (Groves 2015 – paper, not blog) — which could still occur even if individuals or communities under study actively participate — is research that is reciprocal.
We encourage researchers to take this step beyond participatory data collection by calling for those providing raw data to play a role in the construction of knowledge products – that is, in an interpretation of results that considers multiple perspectives and that builds partnerships with study participants. This, we believe, goes some way toward limiting the extractive nature of research by attending to fair benefits and social value, and by producing interpretations and recommendations that accord with the experiences of participant-communities. This notion fits within the ‘feedback typology’ provided by Groves, who points out that beneficiary feedback can be elicited during four stages of program evaluation (an activity related to but distinct from research): (1) evaluation design; (2) data collection; (3) validation (sensemaking) and analysis; and (4) dissemination and communication (Groves 2015). Our interest in this paper is particularly in the latter two elements as a key frontier for ethical research.
Creating reciprocal exchange around research results will require increasing the transparency of research outcomes. Research transparency has recently received increasing attention in the social sciences, particularly with regard to research design and data analysis (Miguel et al. 2014; Ioannidis et al. 2014). Popular measures to increase transparency include pre-analysis plans, which guard against later data-mining and specification-searching, and open data and journal access, which allow new or replicated analyses (for one example, see King and Alvarez 2014). These efforts are usually directed toward transparency vis-à-vis other researchers, as the assumed end-users of research. They only tangentially address transparency from the perspective of research participants themselves. Even broader conversations about transparency in research for the benefit of the public at large do not always acknowledge that a subset of the public may have a special claim to receive research results, given their role as research participants.
In addition to increasing transparency, researchers may strengthen the quality and internal validity of their work by engaging study participants during data analysis and interpretation (Lincoln and Guba 1986; Schwandt, Lincoln, and Guba 2007). Often, researchers gain interpretive insight into their results through a chance participant comment (e.g., Thornton 2008). Member-checking (also called shared sensemaking) offers a systematic means of verifying or co-interpreting results with the participant-community, allowing its members to also serve as analysts. Member-checking refers to a practice crystallized by Lincoln and Guba to enhance research ‘credibility’ (internal validity) by providing validity and reality checks (Lincoln and Guba 1986; Schwandt, Lincoln, and Guba 2007). Throughout the research process—as is currently more common in a qualitative research paradigm—researchers may gain insight from informal tests of initial interpretations among the participant (or a comparable) community. Researchers may reject some interpretations and consider new ones.
Lincoln and Guba also recommend “terminal, formal testing of the final case report with a representative sample of stakeholders” (Lincoln and Guba 1986). Terminal testing offers an opportunity to share research data, processes, results, and interpretations with participant-communities (Carlson 2010). Given possible concerns about presenting raw data for auditing and clarification, we advocate sharing aggregated results and interpretations. The array of information to be shared need not always be determined ex ante but can emerge over the course of the research—a flexibility that is easier to achieve in the social sciences, but also possible within biomedical research.
Taking the time (and resources) to return to participants and involve them in member-checking allows researchers to assess the internal validity of their work and ensures that participant-communities do indeed receive results. It also acknowledges that insights arise from interactions and relationships between researchers and participants. Member-checking gives voice to participants in a way that merely collecting data does not (Carlson 2010).
As an extension, participants may also contribute to manuscript preparation. What a research team might normally publish represents a single interpretation of the data collected. Consensus need not be reached on the interpretation of results; competing interpretations can be published, even within the same paper (Weijer and Emanuel 2000). Of course, study participants will not be involved in each manuscript decision. However, these discussions can pave the way for long-term research efforts and even the development of new intervention strategies (Emanuel et al. 2004).
Involving participants and other community stakeholders in the dissemination of research findings creates further avenues to build partnerships with participants that extend beyond data collection. The opposite of extractive research is research that is, variously, participatory, inclusive, authentic, and/or empowering, with empowerment seen as a continuum rather than a threshold (Chambers 2007; Lincoln and Guba 1986). Lincoln and Guba suggest that research authenticity stems, in part, from the extent to which it raises consciousness of a problem among participants, fosters appreciation of others’ views of an issue and its potential solutions, and, most challengingly, stimulates not only action by participants but successful action (Lincoln and Guba 1986). Such consciousness-raising accords with the Hegelian or Freirean sense of enabling people to see the structures underlying the problems facing them.
Sharing initial and final results, shared sensemaking, and communication
Participants stand to benefit most when shared information is clear and contextualized. Researchers need pertinent, creative ways to present information to participants. To do so, they may need to return to communities after data collection or during analysis. To package information, researchers should consider the literal translation of scientific terms into local languages and anticipate the challenges participants may experience in understanding the nature and implications of formal research (Appadurai 2006, building on ideas from Freire). The degree to which sharing the research purpose, design, and findings is empowering will depend on what is shared, through what modes, with whom, and with what degree of acceptance.
In the biomedical research field, many clinical trials networks have dedicated community engagement staff or community advisory boards (CABs) that play a primary role in disseminating results and communicating study findings to a broad constituency of involved stakeholders. One example is the Community Partners program, which guides community engagement work across the research networks supported by the Division of AIDS of the U.S. National Institutes of Health. Community Partners produces lay summaries of trial protocols, relays information to community stakeholders during trial conduct, and advises study teams on selecting appropriate and targeted dissemination strategies for releasing study findings (Kagan et al. 2012).
As a starting point for selecting modes of dissemination, social scientists should consider how laypersons and non-experts interpret scientific data, including their use of heuristics (Grotzer, Miller, and Lincoln 2012; Nickerson, Perkins, and Smith 1985; Sunstein 2002). In one example of results-sharing, at the end of a study on sexual networks and the spread of sexually transmissible infections (STIs) in East Africa, researchers held an educational session with participating women. Though the full analysis remained underway, the researchers shared de-identified sexual network images and engaged participants in a hands-on (quite literally, hand-holding) activity on the differential risk of STI transmission in serial versus concurrent sexual partnerships. This made the risks immediately interpretable to the women involved (M. Morris, personal communication, April 2009), although to our knowledge, the researchers did not conduct further research on whether this, in turn, influenced behavior within networks.
Such proactive results-sharing is the exception, even in biomedical research. The Ethics Advisory Group of the International Union Against Tuberculosis and Lung Disease (the Union) reviewed eight years’ worth of study protocols—clinical trials, observational studies, and record reviews—that the Union had supported. It found that “few researchers described how their results would be presented to study participants and their communities,” although all proposals mentioned dissemination in scientific journals (Edington et al. 2013). Members of the Union’s Ethics Advisory Group argue with force that sharing knowledge about health problems and potential solutions with communities creates opportunities for participants to “exercise their rights regarding health and other services” (Edington et al. 2013). In this way, results dissemination might be understood as an enabler of empowerment processes.
In an optimistic view of empowerment, people may be able to use research results to “gain or improve a service, assert their rights or obtain a better deal” (Chambers 2007). Participants may be able to use information shared with them to affect—directly or through collective action and advocacy—local behavior or improvements in well-being. This is, for example, part of the idea behind Vijayendra Rao’s work with the Social Observatory — and Rao notes that linking findings with action remains a challenge (World Bank 2016; Rao and Aiyer 2016).
In theory, if the information reveals that conditions or problems are different or more severe than participant-communities had understood, then study information may lead to reflection on the present situation and a new articulation of problems, which may in turn lead to a new framing of possible solutions. By hearing about the research process and results, respondents may think in a more critical and evidence-informed way about their communities, rather than relying on stereotypes or other assumptions about their context. Understanding the research in which they took part may be enjoyable and have intrinsic value for participants and, further, lead respondents to garner a sense of control and even skill in considering their own problems in the future (Appadurai 2006; Chambers 2007; Zimmerman 1990).
However, from years of information-sharing campaigns in public health and beyond, we know that much lies between the provision of information and the taking of (collective) action (for a recent example outside public health, see Lieberman, Posner, and Tsai 2014). People must be ready, willing, and able to translate information into action (Lesthaeghe and Vanderhoeft 2001). Moreover, as in participatory approaches to development, much depends on elite capture and on the capacity and incentives for action within a community as a whole, which may itself be moderated by the degree of heterogeneity (Mansuri and Rao 2012). Whether, when, and how these processes benefit participant-communities remains an empirical question. Evaluating the impact of sharing research results with participant-communities will enhance our understanding of the potential of the approach for which we advocate here.
Despite the limited evidence base, the development field has produced several examples of long-term research initiatives in communities that have included efforts to affect social change through knowledge-sharing and empowerment among community stakeholders. One example is the Townsend Thai Project, which has produced panel data on the spending of Thai households for over 150 consecutive months. Robert Townsend and his study team are now beginning to give research results to a randomly selected subset of households to see if sharing this information changes their financial decision-making. This approach suggests that results sharing itself can adopt an experimental, investigative form (Townsend, Sakunthasathien, and Jordan 2013).
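To make the experimental flavor of this concrete, here is a minimal sketch of how a study team might randomize which panel households receive their own results. Everything in it (the household IDs, the 50% share, the arm labels) is hypothetical and illustrative, not a detail of the Townsend Thai Project itself.

```python
import random

# Hypothetical sketch: randomly assign half of a panel's households to
# receive their own study results. IDs, shares, and labels are invented,
# not drawn from the Townsend Thai Project.
random.seed(2013)  # fixed seed so the assignment is reproducible and auditable

household_ids = [f"HH{i:04d}" for i in range(1, 501)]  # 500 placeholder households
treated = set(random.sample(household_ids, len(household_ids) // 2))

assignment = {
    hh: "share_results" if hh in treated else "no_share"
    for hh in household_ids
}

# Later survey rounds can then compare financial decision-making across arms.
print(sum(arm == "share_results" for arm in assignment.values()))  # -> 250
```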
Beyond fostering empowerment and inspiring collective action, including communities in results dissemination may also promote long-term ties with research teams and improved understandings of the place of research in social life. Emanuel et al. warn that “very little can create more resentment, mistrust and a sense of exploitation than unfair distribution of the benefits of collaboration” (Emanuel et al. 2004). Particularly for researchers planning to work in a single study area with the same or similar participants over a long period, such resentment and mistrust must be avoided.
Mistrust can result when study participants feel disconnected from the research process or the conclusions it generates. Even before results are ready, information can be shared on the research process, including the importance of particular questions to the generation of results. Sharing this kind of information might respect a community’s desire to do more than offer its data to researchers while still protecting later study rounds from contamination. This may also help limit research fatigue among community members in the face of repeated research.
One of the authors (MWF) saw such fatigue among a community of intravenous drug users in an East Asian country who have participated in repeated public health surveys. These individuals have come to see research interviews as a rote exercise with little benefit to themselves, beyond the immediate, small financial incentive for participation. One participant asked, “Will this kind of research actually help people in our position? Will people in power actually listen to us?” Limiting research fatigue, and the linked tendencies to not cooperate or provide accurate or complete information, is a key benefit of continuing dialogue with participant-communities after—or between rounds of—data collection.
Challenges to results sharing
There are several significant challenges to results sharing in the social sciences. First, the effort may be costly in time or resources, especially as studies grow in geographic or temporal scale (Alderman, Das, and Rao 2013). Often, a long period elapses between a specific research study, the release of its results, and any changes in programs or policies aimed at improving well-being or development in the participant-community (Emanuel et al. 2004). However, experiences from clinical and social science randomized controlled trials show that results can be shared with large numbers of participants without undue cost. After concluding the first phase IIb efficacy trial of a new TB vaccine in over forty years, the South African TB Vaccine Initiative (SATVI) went door-to-door to inform the nearly 2,800 mothers with enrolled infants that the vaccine was not effective. Investigators at SATVI point out that this effort allowed them to “take the community’s pulse” after a negative result. Many mothers reportedly reacted to the disappointing news by asking when they could enroll their children in the next trial. They further expressed pride in their participation in the search for a new TB vaccine (M. Tameris, personal communication, October 2013).
While SATVI successfully disseminated results by going door-to-door, the method of results dissemination need not match the method of data collection. Researchers may beneficially use technology or media to share results and solicit feedback. For example, one of the authors (HEL) used text messaging to convey basic process and outcome information to those participating in a malaria medications study in Ghana that used text messaging as a key component of the intervention (Raifman et al. 2014). Creative approaches that consider results dissemination beyond village meetings or door-knocking may unlock feasible options to share research results on a wider scale without great financial burden.
Second, researchers may worry that sharing results might bias or “contaminate” studies by adding an intervention component (or “treatment”) that influences the behavior or responses of participants over the course of the study. This can make it more difficult to make causal claims about the effect of the original intervention on the outcomes observed. For example, if researchers are evaluating an intervention that distributes health product X, feeding back mid-term results to study communities about the impact of X on behavior or health outcomes may itself change behavior. In such cases, sharing results may best wait until the last round of data collection is complete, or any results shared mid-study would need to be carefully selected as unlikely to influence outcomes.
Third, dissemination might pose special risks for vulnerable or minority groups within participant-communities, and researchers will need to consider this possibility when designing subject protections. The process of preparing to share results can itself encourage researchers to engage thoughtfully with participant-communities throughout the research process. Here, researchers should solicit community input about what information is appropriate to share, how information could be disseminated, and who should have access. Social scientists must balance the community’s desire for certain data against the need to maintain confidentiality and anonymity. The ethical dissemination of study results will benefit greatly from meaningful engagement with community representatives early in the research process (Bhutta 2004).
In clinical trials, such decisions are often aided by community advisory boards (CABs), which facilitate effective communication between investigators and the host community on important ethical issues (Boulanger et al. 2013). Investigators studying HIV and TB have also received guidance on results dissemination in the Good Participatory Practice Guidelines for Biomedical HIV Prevention Trials (GPP-HIV) and the Good Participatory Practice Guidelines for TB Drug Trials (GPP-TB) – two guidance documents developed in conjunction with affected communities, trial sponsors, civil society, and researchers (Critical Path to TB Drug Regimens 2012; UNAIDS 2011).
For all of the activities discussed in this paper, it may be difficult to determine who constitutes “the community” and its leaders, although these questions arise at the beginning of most studies. Indeed, approaching research with the expectation of continued engagement may help researchers think about issues of community more clearly during the early stages of research design. Early engagement can allow researchers to better understand what information the participant-community feels would be “too sensitive” to collect or to share, which may then require either a careful articulation of the benefits of this information to the participants or a reconsideration of aspects of the research design (e.g. (Boga et al. 2011)).
Conclusions and considerations
Each of these challenges can be overcome, often in ways that may strengthen research processes and outcomes. To realize this potential, however, results sharing must occupy a more central position in the planning of research projects. The following activities may help social scientists and research institutions resolve some of the above challenges and integrate results dissemination into their work:
Create community advisory structures. Development organizations that conduct social science research, especially long-term projects in a given location, could consider establishing community advisory boards (CABs) that can maintain a dialogue with communities throughout the research process and guide results dissemination and co-interpretation. Here, social scientists can receive practical guidance from their colleagues involved in HIV and TB clinical trials, where CABs have played an increasingly important role in communicating study findings and maintaining reciprocal, sustained relationships between researchers and participant communities.
Build results sharing into experimental designs. Just as many clinical trials include small studies evaluating trial operations (for example, sub-studies exploring the quality of informed consent administration or laboratory conduct), development organizations and social science research institutions should build formal examinations of community engagement and results sharing into the larger plan and timeline of research projects. These process studies, which can often be nested into larger studies at modest additional cost, may help to generate comparative data on effective modes of results dissemination and post-trial engagement with communities (see the sketch after this list for one way such a nested comparison might be set up). Building results sharing into protocols may also open the door for IRBs and ethics boards to gain experience and familiarity in evaluating these types of activities.
Plan for results sharing from the outset. Planning for results dissemination starts with protocol development. Including a commitment to share results with participant communities in early drafts of protocols will encourage researchers to anticipate and plan for results dissemination in a more forthright, participatory manner. One recent example comes from the Tuberculosis Trials Consortium (TBTC), a research consortium housed at the U.S. Centers for Disease Control and Prevention that conducts trials evaluating shorter-course TB drug regimens. One of the authors (MWF) coordinates the TBTC’s community advisory board (called the Community Research Advisors Group), which successfully advocated for the principal investigators planning a phase III study to include language in the protocol that “overall (aggregate) study results will be shared with study participants through mechanisms and materials reviewed and approved by the TBTC Community Research Advisors Group” (TBTC Study 31 protocol team, personal communication, December 2013). This statement ensures that community representatives will play a role in results dissemination without locking the investigators into overly prescriptive statements about what specific results will be shared or how this will occur. A similar approach could be used to mainstream results dissemination into protocols for social and behavioral science studies, whether descriptive surveys or experimental investigations.
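As a minimal sketch of the second recommendation above (building results sharing into experimental designs), nesting a process study can be as simple as assigning study communities at random to different dissemination modes and then tracking reach or acceptability in each. All names here (the communities, the three modes) are invented for illustration; this is one possible set-up under those assumptions, not a prescribed design.

```python
import random

# Hypothetical sketch: assign each study community (cluster) one of several
# results-dissemination modes so a nested process study can compare them.
# Community names and mode labels are invented for illustration.
random.seed(31)

communities = [f"community_{i:02d}" for i in range(1, 25)]  # 24 placeholder clusters
modes = ["village_meeting", "door_to_door", "sms_summary"]

random.shuffle(communities)  # randomize order, then rotate modes for balance
assignment = {c: modes[i % len(modes)] for i, c in enumerate(communities)}

for community, mode in sorted(assignment.items()):
    print(community, mode)  # eight communities per mode
```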
Building greater participant-community engagement through results dissemination honors the ethical obligations to provide study participants with “fair benefits” in the face of potential harms and further acknowledges the expressed desire of communities to do more than give data. Whether engagement takes the form of “member-checking” to validate data or wider dissemination of research information, sharing results offers social scientists an avenue for building the kind of collaborative partnerships that make sustained research possible. Moreover, it allows researchers to refine and reconsider their interpretation of the results and the implications of research. To achieve this, researchers will need to think beyond the minimum requirement of extant IRB protocols and explicitly address how they will share results with study participants, what kind of information will be shared, and when. Our contention is that results sharing can generate research that is more transparent, more fair, and possibly more trustworthy and more useful. It does greater justice to allowing participants access to the research in which they participate. More empirical research is required on the means and effects of sharing results with participant-communities to fully understand the benefits of this approach for participants, researchers, and the partnership between them. But, as the clinical trials literature suggests, the ethical obligation to engage communities in the results of research is already clear.
Acknowledgement and notes
This paper began as a shared interest between the authors in a research methods course co-taught by Dr. Daniel Halperin (now at the University of North Carolina at Chapel Hill Gillings School of Global Public Health) and Dr. Heather Lanthorn. Both Mike and Heather were strongly influenced not only by what they had seen during data collection but also by personal experiences of participating in clinical trials and required Psych-101 studies and never hearing anything back about the results. We thus have deep empathy for communities that want to hear about findings, as well as an intellectual and justice-based commitment to shared sensemaking.
Dr. Halperin encouraged our thinking and encouraged us to pursue a paper (though without specific guidance on how to get a conceptual paper out in the world). Both authors have had important interactions while working on research projects, including with Abubakari Bukari and Salifu Amadu at Innovations for Poverty Action – Ghana and with investigators at the Tuberculosis Trials Consortium. Drs. Richard Cash (Harvard T.H. Chan School of Public Health and the Public Health Foundation of India), Howard White (formerly of the International Initiative for Impact Evaluation), and Marc Zimmerman (University of Michigan School of Public Health) have offered important critiques on drafts of this paper. Dr. David McKenzie (World Bank), Dr. Biju Rao (World Bank), Dr. Robert Townsend (Massachusetts Institute of Technology), and Raag Bhatia (International Initiative for Impact Evaluation) have offered encouragement and insight at different stages of the process. Of course, all mistakes are our own.
The journey of this paper has been frustrating. When we submitted to journals like Social Science & Medicine, we were told our point was obvious and well-known to anthropologists. Meanwhile, economics-led development journals thought the ideas could be interesting but not broadly relevant, and offered little guidance on where a conceptual paper might live; posting here at least allows us to get a timestamp on our ideas. In any case, these ideas certainly seem to be becoming more relevant now than ever.
Amon, Joseph J, Stefan D Baral, Chris Beyrer, and Nancy Kass. 2012. “Human Rights Research and Ethics Review: Protecting Individuals or Protecting the State?” PLoS Medicine 9 (10): e1001325.
Angell, M. 1997. “The Ethics of Clinical Research in the Third World.” The New England Journal of Medicine 337 (12): 847–49. doi:10.1056/NEJM199709183371209.
Appadurai, Arjun. 2006. “The Right to Research.” Globalisation, Societies & Education 4 (2): 167–77. doi:10.1080/14767720600750696.
Benatar, Solomon R. 2002. “Reflections and Recommendations on Research Ethics in Developing Countries.” Social Science & Medicine 54 (7): 1131–41.
Beskow, L M, W Burke, J F Merz, P A Barr, S Terry, V B Penchaszadeh, L O Gostin, M Gwinn, and M J Khoury. 2001. “Informed Consent for Population-Based Research Involving Genetics.” JAMA 286 (18): 2315–21.
Bhutta, Z A. 2002. “Ethics in International Health Research: A Perspective from the Developing World.” Bulletin of the World Health Organization 80 (2): 114–20.
———. 2004. “Beyond Informed Consent.” Bulletin of the World Health Organization 82 (10): 771–77. doi:10.1590/S0042-96862004001000013.
Boga, M, A Davies, D Kamuya, S M Kinyanjui, E Kivaya, F Kombe, T Lang, et al. 2011. “Strengthening the Informed Consent Process in International Health Research through Community Engagement: The KEMRI-Wellcome Trust Research Programme Experience.” PLoS Medicine 8 (9): e1001089. doi:10.1371/journal.pmed.1001089.
Boulanger, Renaud F, Stephanie Seidel, Erica Lessem, Lee Pyne-Mercier, Sharon D Williams, Laia Ruiz Mingote, Cherise Scott, Alicia Y Chou, and James V Lavery. 2013. “Engaging Communities in Tuberculosis Research.” The Lancet Infectious Diseases.
Carlson, J. 2010. “Avoiding Traps in Member Checking.” The Qualitative Report 15 (5): 1102–13.
Chambers, Robert. 2007. “Who Counts? The Quiet Revolution of Participation and Numbers.” Institute of Development Studies (UK).
CIOMS. 2002. “International Ethical Guidelines for Biomedical Research Involving Human Subjects.” Geneva, Switzerland: Council for International Organizations of Medical Sciences.
Cooper, M. 2008. “Sharing Data and Results with Study Participants: Report on a Survey of Cultural Anthropologists.” Journal of Empirical Research on Human Research Ethics 3 (4): 19–34. doi:10.1525/jer.2008.3.4.19.
Critical Path to TB Drug Regimens, Stakeholder and Community Engagement Working Group. 2012. Good Participatory Practice Guidelines for TB Drug Trials 2012. Critical Path to TB Drug Regimens.
Doll, R, and A B Hill. 1954. “The Mortality of Doctors in Relation to Their Smoking Habits; a Preliminary Report.” British Medical Journal 1 (4877): 1451–55.
Dressler, L G, S Smolek, R Ponsaran, J M Markey, H Starks, N Gerson, S Lewis, et al. 2012. “IRB Perspectives on the Return of Individual Results from Genomic Research.” Genetics in Medicine 14 (2): 215–22. doi:10.1038/gim.2011.10.
Edejer, T T. 1999. “North-South Research Partnerships: The Ethics of Carrying out Research in Developing Countries.” BMJ 319 (7207): 438–41.
Edington, M, T Ornstein, J Denholm, A Sony, S.J. Kim, A Narain, and R O’Brien. 2013. “No Title.” Public Health Action 3 (4): 346–50.
Emanuel, E J, D Wendler, J Killen, and C Grady. 2004. “What Makes Clinical Research in Developing Countries Ethical? The Benchmarks of Ethical Research.” The Journal of Infectious Diseases 189 (5): 930–37. doi:10.1086/381709.
Grotzer, Tina A., Rebecca B. Miller, and Rebecca A. Lincoln. 2012. “Perceptual, Attentional, and Cognitive Heuristics That Interact with the Nature of Science to Complicate Public Understanding of Science.” In Advances in Nature of Science Research: Concepts and Methodologies, edited by M Khine, 27–49. Springer. doi:10.1007/978-94-007-2457-0_2.
Hamermesh, Daniel. 2012. “Six Decades of Top Economics Publishing: Who and How?” NBER Working Paper Series.
Ioannidis, John P A, Sander Greenland, Mark A Hlatky, Muin J Khoury, Malcolm R Macleod, David Moher, Kenneth F Schulz, and Robert Tibshirani. 2014. “Increasing Value and Reducing Waste in Research Design, Conduct, and Analysis.” The Lancet 383 (9912): 166–75. doi:10.1016/S0140-6736(13)62227-8.
Jacobs, B, J Roffenbender, J Collmann, K Cherry, L L Bitsoi, K Bassett, and C H Evans Jr. 2010. “Bridging the Divide between Genomic Science and Indigenous Peoples.” The Journal of Law, Medicine & Ethics 38 (3): 684–96. doi:10.1111/j.1748-720X.2010.00521.x.
Kristman, V L, and N Kreiger. 2008. “Information Disclosure in Population-Based Research Involving Genetics: A Framework for the Practice of Ethics in Epidemiology.” Annals of Epidemiology 18 (4): 335–41. doi:10.1016/j.annepidem.2007.10.003.
Lesthaeghe, Ron, and Camille Vanderhoeft. 2001. “Ready, Willing, and Able: A Conceptualization of Transitions to New Behavioral Forms.” In Diffusion Processes and Fertility Transition, edited by John B. Casterline. Washington, D.C.: National Research Council.
Lieberman, Evan S., Daniel N. Posner, and Lily L. Tsai. 2014. “Does Information Lead to More Active Citizenship? Evidence from an Education Intervention in Rural Kenya.” World Development 60 (August): 69–83. doi:10.1016/j.worlddev.2014.03.014.
Lincoln, Yvonna S, and Egon G Guba. 1986. “But Is It Rigorous? Trustworthiness and Authenticity in Naturalistic Evaluation.” New Directions for Program Evaluation 1986 (30): 73–84. doi:10.1002/ev.1427.
Lowrance, W W, and F S Collins. 2007. “Identifiability in Genomic Research.” Science 317 (5838): 600–602. doi:10.1126/science.1147699.
Mansuri, Ghazala, and Vijayendra Rao. 2012. Localizing Development: Does Participation Work? Washington, DC: World Bank Publications.
McGuire, A L, and R A Gibbs. 2006. “No Longer De-Identified.” Science 312 (5772): 370–71. doi:10.1126/science.1125339.
McKenzie, D. 2011. “Should We Share Results with Respondents?”
Miguel, E., C. Camerer, K. Casey, J. Cohen, K. M. Esterling, A. Gerber, R. Glennerster, et al. 2014. “Promoting Transparency in Social Science Research.” Science 343 (6166): 30–31. doi:10.1126/science.1245317.
Nickerson, R, D Perkins, and E Smith. 1985. “Chapter 5: Errors and Biases in Reasoning.” In The Teaching of Thinking, 111–42. Hillsdale, NJ: LEA.
Raifman, Julia R G, Heather E Lanthorn, Slawa Rokicki, and Günther Fink. 2014. “The Impact of Text Message Reminders on Adherence to Antimalarial Treatment in Northern Ghana: A Randomized Trial.” PloS One 9 (10): e109032. doi:10.1371/journal.pone.0109032.
Rao, Vijayendra, and Yamini Aiyer. 2016. “The Social Observatory: Building Adaptive Capacity in Large-Scale Projects.” Presented at the 3ie Delhi Seminar Series, Delhi, February 1.
Schuklenk, U, and A Kleinsmidt. 2006. “North-South Benefit Sharing Arrangements in Bioprospecting and Genetic Research: A Critical Ethical and Legal Analysis.” Developing World Bioethics 6 (3): 122–34. doi:10.1111/j.1471-8847.2006.00149.x.
Schwandt, Thomas A, Yvonna S Lincoln, and Egon G Guba. 2007. “Judging Interpretations: But Is It Rigorous? Trustworthiness and Authenticity in Naturalistic Evaluation.” New Directions for Evaluation 2007 (114): 11–25. doi:10.1002/ev.223.
Shalowitz, D I, and F G Miller. 2008. “Communicating the Results of Clinical Research to Participants: Attitudes, Practices, and Future Directions.” PLoS Medicine 5 (5): e91. doi:10.1371/journal.pmed.0050091.
Simon, C, L A Shinkunas, D Brandt, and J K Williams. 2012. “Individual Genetic and Genomic Research Results and the Tradition of Informed Consent: Exploring US Review Board Guidance.” Journal of Medical Ethics 38 (7): 417–22. doi:10.1136/medethics-2011-100273.
Sunstein, C. 2002. “Chapter 2: Thinking about Risks.” In Risk and Reason: Safety, Law and the Environment, 28–52. Cambridge, UK: Cambridge University Press.
Taniguchi, N, M Taualii, and J Maddock. 2012. “A Comparative Analysis of Indigenous Research Guidelines to Inform Genomic Research in Indigenous Communities.” The International Indigenous Policy Journal 3 (1).
Thornton, R L. 2008. “The Demand For, and Impact Of, Learning HIV Status.” The American Economic Review 98 (5): 1829–63. doi:10.1257/aer.98.5.1829.
Townsend, Robert M., Sombat Sakunthasathien, and Rob Jordan. 2013. Chronicles from the Field: The Townsend Thai Project. Cambridge, MA: MIT Press.
UNAIDS and AVAC. 2011. Good Participatory Practice Guidelines for Biomedical HIV Prevention Trials 2011. 2nd ed. Geneva, Switzerland: Joint United Nations Programme on HIV/AIDS.
Varmus, H, and D Satcher. 1997. “Ethical Complexities of Conducting Research in Developing Countries.” The New England Journal of Medicine 337 (14): 1003–5. doi:10.1056/NEJM199710023371411.
Weijer, C, and E J Emanuel. 2000. “Protecting Communities in Biomedical Research.” Science 289 (5482): 1142–44.
Wilson, D. 1999. “North-South Research in Developing Countries Must Respond to Community’s Priorities.” BMJ 319 (7223): 1496–97.
Zimmerman, Marc A. 1990. “Taking Aim on Empowerment Research: On the Distinction between Individual and Psychological Conceptions.” American Journal of Community Psychology 18 (1): 169–77. doi:10.1007/BF00922695.
In this post, I share a teaching plan, as I am returning after 20 (omg) years to facilitate one session of an undergrad Intro to Ethnography seminar taught by the same prof who taught me. All of my first interviewing experiences, which took place in undergrad, came directly or indirectly through this professor: a summer in Nepal (Annapurna side, in Ghandruk) during which I tried to research interpretations of and care-seeking for mental illness; service-learning for a medical anthropology course, helping with office work and learning about patient experiences interacting with doctors about diabetes at a local free clinic; and interviewing farmworkers and food bank employees about food insecurity among the migrant farmworker community of North Carolina. These experiences were formative and I am incredibly grateful for them, even if I don’t think I was an excellent researcher in all cases. I suspect far more practice and feedback were warranted before I undertook any of those activities. Hopefully I have gotten better in the years that have followed.
Interviewing with Kiran Sawant (of PUKAR) in Kaula Bandar, Bombay, India. (Photo shared with permission.)
What I am sharing now
In this post, I share my draft student guide and slides; I will then revisit in a follow-up post with the final class prep guide and pre-work for students, slides, lecture notes, and reflections on what worked and what did not. I will be happy for anyone to pick up and use the materials at that point.
I most certainly welcome critique and ideas in advance of lecturing/facilitating!!!
DRAFT Class session prep sheet and pre-work for students (pdf)
Objectives
By the end of your pre-work and our session together, you will be able to:
Objective 1: Articulate a definition of positionality and some key dimensions in your own words.
Objective 2*: Articulate a working statement of your own positionality, including how it may vary in different contexts and how you will approach it in your research project.
Objective 3: Articulate a definition of epistemic justice in your own words.
Objective 4: Highlight steps Cramer took to mitigate such injustice and state two more steps you think she could have taken.
Objective 5*: Articulate a practical working definition of active listening and a list of three things you can do to support and demonstrate this given the modality in which you will conduct your own interviews.
Objective 6*: List interview techniques, including question wording and probes, that you can use to get at the information you want during an interview—without leading.
*s indicate key goals for our seminar session
Before class
(Re)read Bernard Chapter 9, Interviewing: unstructured and semi-structured
Read Cramer Chapter 2, A method of listening
Read through this learning guide and answer the key questions, including sending in the three requested definitions. This will help you be prepared in case of cold/warm calling in seminar, help you engage with the material more deeply, and allow us to move rapidly beyond definitions to invigorating seminar discussion. That is, completing this guide will allow you to learn more and avoid embarrassment.
Prep guide and pre-work
Positionality
Positionality: a definition
Some working definitions of positionality:
“Positionality refers to the stance or positioning of the researcher in relation to the social and political context of the study—the community, the organization or the participant group. The position adopted by a researcher affects every phase of the research process.” (Coughlin and Brydon-Miller)
“The fact that a researcher’s social, cultural, and subject positions (and other psychological processes) affect: the questions they ask, how they frame them…; their relations with those they research in ‘the field’ or through interviews; interpretations they place on empirical evidence; access to data; institutions and outlets for research dissemination; and the likelihood they will be listened to and heard.” (Gregory et al.)
“Positionality is dynamic. Our lives are in flux and, as a result, so are our subject positions.” (Correia 2012)
Prepare and send: Drawing on these definitions and your discussions with Dr. Folmar so far, please craft a 1-2 sentence definition of positionality that works for you, such as how you would describe it to your roommate, parents/guardians, or other, older family members. Now that autumn is upon us, you might consider how you would introduce this idea of what you are learning around the Thanksgiving (or other holiday) table. <Submit your definition to Folmar by…>
Positionality: In A method of listening
Prof Folmar has noted that one’s own positionality is nearly impossible to describe in full but that, for data collection, there are different ways to ‘lean into’ our own positionality to create more authentic interview and observation experiences. There are also ways to modify aspects of our appearance, behavior, and shared biography to mitigate aspects of our positionality that might otherwise diminish the quality of data we collect.
Despite the challenges in fully articulating our positionality, it is important for us to reflect deeply on it when deciding what to research and how to prepare for and conduct data collection in particular contexts. Positionality matters for the quality of data we collect. For this reason, there is value in clearly stating those dimensions we think are likely to influence our work, as Cramer does, so that readers of our research can take these factors into account when reading our work and determining whether our results seem credible to them. That is, why should they think you got close to ‘the truth’? We should not strive for “a gaze from nowhere,” but, instead, to be honest about our own gaze. Some dimensions of positionality map onto Bourdieu’s discussion of capital, including cultural capital; this can be useful additional reading and may provide an organizational principle that resonates with you.
Prepare: As you read through Cramer’s ‘A method of listening’ from her book The politics of resentment, please consider and note down your answers to the following questions.
In what ways does Cramer identify herself as having an emic perspective on the people and places she investigates? (Be sure to refresh yourself on ‘emic’ and ‘etic’ if you don’t remember.)
In what ways does Cramer identify herself as having an etic perspective on the people and places she investigates?
Overall, do you think that Cramer was well-suited to conduct research in/on/with the participants and topics she did? Why/not?
Throughout the chapter, Cramer lists multiple dimensions of her positionality and how it might influence her work. Note down each dimension or characteristic she suggests and prepare to speak about one that was particularly interesting to you and why.
Throughout the chapter, Cramer suggests steps she took to adjust her positionality or how she would be perceived. Note down each adjustment she makes. Prepare to discuss either: (1) one alteration that resonated with you, which you might consider as you prepare for your own data collection, or (2) if none of her alterations resonated with you, or you don’t think alterations should be made, be prepared to explain why. If Bernard’s examples in ‘presentation of self’ resonate more with you, you can also speak about those.
Provide an example from popular media and culture (such as a podcast, another interview format, a movie, or a TV show) in which you think the questioner or presenter does a good job of being honest about his/her/their positionality and how it might influence what follows.
One thing that becomes clear from reading Cramer’s chapter is that the interview—including rapport-building and positionality-defining work—begins well before you ask your first “formal” question: it includes how you arrive, how you introduce yourself, and how you set things up.
Prepare: Revisit the lists you made of Cramer’s dimensions of positionality and steps she took to play up or play down some of these dimensions.
Mark or sort the lists you made above, distinguishing between those that are particularly relevant before interviews, those that are particularly relevant during, and those that matter as you close and leave the interview.
With these lists in mind—and with reference to your intended research participants and setting you have in mind for your own research project—what is one thing you will plan to do (1) before your interviews, one thing you will plan to do (2) during your interviews, and one thing you will do as you (3) close your interviews to support more honest, authentic, and rich exchanges?
With reference to your particular research topic and intended participants, what are three key dimensions of your positionality you need to own, modify, or otherwise address?
Epistemic justice and shared sense-making
Epistemic justice: A definition
The theory of ‘epistemic justice’ has been articulated by Fricker (2007), and indicates that “we can be unfairly discriminated against in our capacity as a knower based on prejudices about the speaker, such as gender, social background, ethnicity, race, sexuality, tone of voice, accent, and so on.” We can also discriminate against others, such as our research participants, interpreters, and so on, in their capacity as ‘knowers.’ We usually seek people out to interview because they are expert in something (including their own lives and jobs), and yet we may undervalue their role as ‘knowers’ as we analyze, interpret, and report findings. Issues of epistemic injustice have recently been raised in anthropology and related applied fields, such as global health (such as in Tsosie 2017 and in Bhakuni & Abímbọ́lá 2021).
There are two dimensions of such injustice (drawing on Bhakuni & Abímbọ́lá):
Testimonial injustice: occurs when a hearer (such as an interviewer or analyst) prejudicially ascribes lower credibility to a speaker’s (such as a respondent’s) word. This can include silencing, undervaluing, or distorting the knower’s contributions, marking them with a ‘credibility deficit.’
Hermeneutical/interpretive injustice: occurs, in part, when ‘knowers’ are not included in interpretive processes (they are interpretively marginalized), such that their efforts to make sense of and share their experience of the world are not heard.
A call for epistemic justice might be summed up as, “nothing about us without us.”
Prepare and send: Drawing on these definitions, please craft a 1-2 sentence definition of epistemic justice that works for you, again with the Thanksgiving/family/friend table in mind. <Submit your definition to Folmar by…>
Epistemic justice, sense-making, and member-checking: In A method of listening
In a 1986 paper, But is it rigorous?, Lincoln and Guba articulate the idea of ‘member-checking’ as one way of co-interpreting findings with research participants. They define this as “the process of continuous, informal testing of information by soliciting the reactions of respondents to the investigator’s reconstruction of what she/he/they have been told or otherwise found out, and the constructions [or sense-making] offered by other respondents/participants or sources. It also includes a terminal, formal testing of the final report with a representative sample of stakeholders.”
Prepare:
In her chapter, in what ways, if any, does Cramer pursue member-checking or, more broadly, epistemic justice in her research? What is one additional step you think she could have taken (and which you may wish to pursue in your own research)?
In his chapter, Bernard also raises examples of positionality, valuing what the ‘knower’ has to say, and co-creating answers. Reviewing the example of Bernard and Savas, what do we learn both about positionality and joint sense-making?
Getting what you came for with interviews
Active listening: a definition
“Assure your respondents that their participation is crucial and that you are truly interested in what they have to say (and you’d better mean it or you’ll never pull it off)” (Bernard).
Both Bernard and Cramer provide examples of ‘active listening,’ although they never provide a precise definition for the term. It is useful to have a working definition as well as some tactics to pursue it in the modality in which you plan to conduct your interviews.
Prepare and send: Using what Cramer and Bernard apply, as well as the internet[1], develop a 1-2 sentence Thanksgiving-table definition of ‘active listening.’ <Submit your definition to Folmar by…>
Active listening: In Cramer and Bernard
Like Molière’s protagonist, Monsieur Jourdain, as you read more about actively listening to others, you may find that you and those around you have been doing it all along. You may also see how other professionals–medical, journalistic, and others–use (or fail to use) solid active-listening techniques. In data collection, it is important to intentionally cultivate context-appropriate ways of demonstrating that you are really attending to what a participant is saying and feeling. You may have to adapt these for the modality you plan to use (for example, face-to-face, Zoom-to-Zoom, or over the phone).
Prepare:
Drawing examples from Cramer and Bernard, make a list of techniques you think you can use to demonstrate that you are actively listening and attending to a research participant in your planned interview context/setting and modality.
Send: To practice active listening, please record a short 3-10 minute video of yourself demonstrating how you actively listen when Folmar is lecturing. It can be a selfie video or you can ask someone to film you. Nothing fancy! <Submit your video to Folmar by…>
Probing and encouraging deeper reflection: In A method of listening
As Bernard explains, effective probing is key to successful interviews: it is how we pursue high-quality, rich data that isn’t just a reflection of ourselves. He provides seven probe types (separate from his tactics for handling especially verbal and especially nonverbal respondents).
Prepare:
Make a list of the seven types of probes Bernard suggests that you can use.
Do you think there are any probe types that Bernard overlooked? If so, please add them to your list.
For each, include an example from Cramer, Bernard, or another class reading that shows the type in action. If you cannot find an example, you can note down how you think you might be able to use that probe type in your own research.
Posting, with a few amendments, something on which my pop has been working, since I am not sure where and when else he will get it posted!
From the van Gogh immersive experience (Dallas; September 2021)
In England, evidently, hereditary surnames (as distinct from patronymics and other names that changed with each generation) were widespread by the mid-1400s, after a slow spread over the previous four centuries. Their adoption appears to have happened through ‘folk’ processes rather than by decree or any specific documented event, with people taking up place and occupation terms as names (though perhaps new tax and census processes were coming into play, which warrants more investigation than pop has done so far). A common example is Smith for those working as blacksmiths.
The Oxford English Dictionary credits the first published use of the term lanthorn(e) to John Heywood’s “Proverbs and Epigrams,” published in 1562, specifically, “Of a lanthorne and light” (p. 205). Closely following its appearance in Heywood’s work, lanthorn[e] appeared in Fleming’s second edition (1587) of Holinshed’s Chronicles of England, Scotland, and Ireland, in a description of the location and ordinances for an almshouse for poor widows. The almshouse was built in 1584 by David Smith (Smyth), master embroiderer to Queen Elizabeth I, in the ward of Baynard’s Castle in London.
At about the same time as this description of an almshouse containing a lanthorn[e], the earliest known use of the surname Lanthorn occurred. Robert Lanthorn was baptized on 14 June 1585, as recorded in the Hackney parish register for the Church of St. Augustine (now St. Paul’s Cathedral), in Greater London. The close proximity in time and place of the term’s and the surname’s appearances could reasonably suggest a direct relation between the two.
A potential hiccup in placing the origin of the term lanthorn in the 1500s is the Lanthorn Tower in the Tower of London, constructed in the 1220s and 1230s. As generally described now, the Lanthorn Tower had a lanthorn on its top to aid navigation along the River Thames. As originally built, however, the tower was the private quarters of King Henry III (r. 1216-1272). One might immediately object to the idea of the tower being topped with a lantern in the 1200s on the grounds of the King’s protection: putting a bright light on top of where the King slept would greatly aid enemies and assassins.
Tower of London curators took up my question as to whether this tower was originally called the Lanthorn Tower. They found that “most of the reliable go-to sources (the King’s Works, Buildings and Institutions of the Tower, etc.) state that the Tower was not always known by Lanthorn[e] – but none illuminate what it was previously known by.” The tower is not named in the 1532 record of James Nedeham’s alterations to the Tower of London, but it is so named in the 1597 Haiward and Gascoyne Plan (Fig. 1). It is likely the “Lanthorn Tower” was wisely left unnamed until it no longer served as a royal residence and until after the lanthorn was added as a navigation aid. This timing is also consistent with the term lanthorn[e] first appearing in print in the mid-1500s.