This is a work-in-progress. You can view it here for now.
The phrase “unintended consequences” gets tossed around a lot in development and social sector work, often as a glib euphemism for “bad stuff we didn’t expect” or a “better call qual” moment.
However, many “unintended consequences” are indeed anticipate-able, especially with grounding in theory (including a theory of change), the literature, and the context. It is the job of the program design and research teams to consider ways in which a program could lead to undesirable outcomes (those not consistent with social impact and progress), to mitigate them (in the program), and to capture them (in the research). It is not necessarily helpful to give all ‘bad’ outcomes the same label, especially an ‘oopsie-daisy, we didn’t mean for that to happen!’ label.
To walk quickly through my post-it:
I will be speaking more about research ethics in the coming month (and beyond!). In light of this, I am documenting some formative ethical experiences I have had and that, to some extent, still puzzle me. Perhaps befitting someone initially trained in anthropology and grounded theory, these examples stem from individual interactions I had ‘on the ground’–moments when research participants or research staff directly critiqued me.*
I am glad they did.
Of course, these are only the ones who spoke up. (What if we asked more consistently?)
My first research experience was spending ~3 months in Nepal, including an extended stay in Ghandruk in the Annapurna range, not long after its ACAP office had been blown up by Maoists. There were many ‘firsts’ for me on that trip during undergrad: first time in an LMIC, first homestay, first research project, first primary data collection, first fieldnotes, first altitude sickness, first time working with an interpreter, first goat sacrifice… All around, an influential time for me.
An unexpected breakfast, Ghandruk, Nepal
As with many first research projects (and we can definitely discuss some additional ethics around this!), the biggest impact was probably on me alone, not on the local community in which I was working and not on the world. It will likely come as no surprise that I did not solve the question of where, when, and why people seek care for what would be diagnosed as mental illness, and specifically depression, according to the DSM. Nor did I unravel how illness narratives might shape responsiveness to pharmaceutical treatment.
Nevertheless, I did learn some things! Two snapshots, in particular, have stayed with me from that time (besides the fact that my host mom said I should only come back and visit once I was married…).
About three interviews into my project, the interpreter with whom I was working turned to me and suggested I was not making good use of people’s time because I was simply asking the same questions over and over (my first prepared interview guide!). Indeed, he thought I was quite daft and not ‘getting it.’
Some questions he deemed ‘self-answering’ and not worth asking at all (for example, whether it was hard to pay for health care in this village). For others, he felt that once we had heard from one or two people in the village, we had more or less settled the matter and there was no need to take anyone else’s time. While I ultimately prevailed, arguing that there was merit in asking the same questions over and over (and that, at a minimum, it was what my professor expected of me), the point still sticks with me: how others might view not just the time researchers ask of individuals but also the collective time taken from a community, and how they make sense of researchers’ strange ways.
Probably the ‘epiphanic moment’ that made me want to pursue applied public health rather than medical anthropology** came after interviewing a woman who entered my field notes as Old Woman #3 and later received a pseudonym. My questions–some closed-ended, some open-ended–were about illness etiology and health-seeking behavior. I asked her to list her ailments, to describe how she got them (or they got her), what she had done and what care she had sought, and whether it was challenging to pay for it.
As we closed the interview (and I believe I provided a university-logoed pen), she asked, now that she had listed all her problems, how I would help her with them. I, of course, had no answer.
This gets at a point raised in several places including forthcoming work from Alex Avdeenko: It’s weird for strangers to come and ask you about all your problems and, then, really weird for them to not offer to do something about it after you’ve provided the list. Or, in a bizarre non-sequitur, to give you a pen. It is no wonder that people may often believe that such question-asking is (or should be) a needs assessment that will ultimately be attached to succor.
I spent a year in Ghana as part of my thesis research, both working with IPA and doing my own work on the side.
While piloting for work on malaria medication completion in Ghana (and funneling my salary back into my thesis research, follow-up interviews***, and fabric), I accompanied data collectors. Plenty of times, the informed consent and other processes don’t quite go as written on paper: you said interviews would be conducted in private, but actually the respondent wanted someone else there with them. Challenges in keeping to the letter and/or the spirit of the Belmont principles have started getting more ink in the past decade (such as, though very much not limited to, here, here, here)–a trend that should continue, and not only for situations of randomization.
During one particular interview, an older gentleman said something that struck me. We went through the informed consent process and then he said, ‘but I don’t really know what you are asking. First ask me all of your questions and then I will tell you if I consent.’ From an IRB perspective, this is, of course, backwards, as you have to get consent before you get to your questions, on the assumption that the upfront information you provide is sufficient.
But from the perspective of someone less familiar with research in general, and certainly with the particulars of your research, it may not be–no matter how well-crafted your consent statement. Consent could be reimagined, potentially: start with just enough information to get to the point of beginning the interview, include more scripted reminders throughout that people can skip questions or stop, and end with an ultimate ‘consent,’ at which point participants could request to have their data trashed now that they know everything we planned to ask. If we took the view that research participants should learn something about research in the process, rather than treating consent as a CYA measure, it might lead us to reconsider how the informed consent process can be truly informative (and ultimately understood). A corollary is that we might wish to investigate what is of interest to potential research participants, so that we not only say what we are obligated to say more clearly (again, stay tuned for work from Alex!) but also say what will be interesting and useful to those deciding to join.
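To make that reimagining concrete, here is a minimal sketch of what such a layered consent flow could look like as interview logic. To be clear, this is my illustration rather than any existing instrument; the module names and checkpoint wording are hypothetical.

```python
from typing import Optional

# A minimal sketch of a 'layered' consent flow: an opening agreement that is
# just enough to begin, a scripted skip/stop checkpoint before each module,
# and an ultimate consent at the end, after which a respondent who now knows
# everything we asked can still have their data discarded. The module names
# and wording are hypothetical.

MODULES = ["household roster", "health-seeking", "paying for care"]

def yes(prompt: str) -> bool:
    """Pose a yes/no checkpoint question on the interviewer's screen."""
    return input(f"{prompt} (y/n): ").strip().lower().startswith("y")

def run_interview() -> Optional[dict]:
    if not yes("We have described the study. OK to begin?"):
        return None
    responses = {}
    for module in MODULES:
        # Scripted reminder: the respondent may skip this module or stop entirely.
        choice = input(f"Next: questions on {module}. [c]ontinue / [s]kip / [q]uit: ").strip().lower()
        if choice == "q":
            break
        if choice == "s":
            continue
        responses[module] = "..."  # placeholder for the real survey items
    # Ultimate consent, granted with full knowledge of what was asked.
    if not yes("That is everything we planned to ask. May we keep your answers?"):
        return None  # trash the data at the respondent's request
    return responses
```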
Sometimes the lessons on what is locally ‘ok’ don’t come from research participants but rather from research staff. This one has more to do with social norms and relationships, but it involved an element of positionality and potential offensiveness that I had not considered before. Specifically, our questionnaire included items on working memory, in which participants were given increasingly long strings of numbers to remember and say back. I know it was a little nerve-wracking, as my thesis advisor/PI Prof Dr. Günther decided to give me a quiz!
Nevertheless, I did not expect it to be a major issue during data collection. I was wrong. When we did a feedback survey among the enumeration team at the end of data collection about what went well and what did not, it turned out that asking this question had been mortifying for them. While it had not trickled up to me during data collection (my fault), the sheer horror and impropriety of asking someone older than you to remember and recite numbers came through very clearly in the open-ended responses. Imagine, they said, asking this to an uncle. Honestly, imagine.
This may not strictly be a ‘research ethics’ issue, but it does hint at the fact that ‘we’ may not always understand what will be sensitive, uncomfortable, or even unacceptable in specific contexts. ‘Sensitive’ (and to whom) is one of those terms in research ethics (like coercion, undue burden, vulnerability, and a few others) that are important enough to be used a lot but vague enough to let you more or less define them as you will–that is, not actually terribly useful beyond bureaucratic CYA.
‘We’ do not have consensus on what constitutes appropriate compensation for research participation: veering away from coercion while still providing fair benefits and accounting for the risks imposed by the research. There has been a more robust conversation in medical and pharmaceutical research, such as the dialogue on ‘fair benefits’ by Emanuel, Wendler, and colleagues across several papers and essays. Norms vary across different types of research and situations. And research participants may not interpret non-monetary offerings as intended.
Into this space, and on a very low-budget project, we decided we would offer education/information about malaria as compensation for people’s time and also to make sure they better understood our research. This included seeding information about how antimalarial resistance develops when you don’t finish your course of medicine (as part of a communication experiment that fell out of my thesis), including using these fabulous illustrations from Liz Venable, which we narrated out loud.
In addition, at the very end of the whole study, we planned to (and ultimately did) send an SMS urging people to make use of bednets as well as to use an ACT and complete the course when ill with malaria. Again, the idea was that providing information might not only be useful but also reflect some degree of compensation for time spent in our study.
The data collection team was NOT into it. First, they pointed out, almost certainly accurately, that people already knew about bednets. This contrasted with the reason you need to finish all your malaria medication, which might be genuinely new information. More interesting ethically, however, was how upset they were that we would tell people to do something but not help them do it. In their view, it was very uncool to say you should use a bednet but not actually provide one.
I have been lucky enough to make two month-plus trips to Uganda in the context of IDinsight’s work with GiveDirectly in Kiryandongo refugee settlement, work with both quant-experimental and qualitative components. Both stints coincided with training and early data collection for the quantitative and qualitative work, respectively.
Our longitudinal, semi-structured work has involved, give or take, monthly conversations with a small number of respondents, who tell us about how they are doing and challenges they are facing. Each month we asked a set of similar questions (how they are doing, general sentiment in the settlement and the market, etc.) as well as a focused set of questions on a topic (education one month, health another, detailed transfer-spending plans in a third, say). We have two qualitative interviewers on this project (temporarily paused) and they have done a fabulous job building rapport with our respondents.
While data collection was active, Rico was diligent about reviewing incoming notes and transcripts in more-or-less real time, which allowed him to give regular feedback on interview technique and additional probe ideas to our team (blog post forthcoming on this remote management of qualitative data collection by less-experienced interviewers). He was also doing the regular management check-ins with our interviewers. He was the first to note, then, that Stephen and Christian were reporting that respondents sometimes called them at unscheduled times just to chat. In a few cases, the fieldnotes and transcripts also revealed that respondents felt they had no one else to talk to, that they had suicidal ideation, and so on.
At base, this reveals the importance of having a resource-referral list (for which you have checked that the resources are truly operational) as part of standard practice. Working with a population that has recently and collectively experienced trauma makes this all the more important, even if you are not asking ‘sensitive’ questions. This can be really challenging when, for example, you are working in a country that does not host a national suicide hotline–and certainly not one in all the languages relevant to refugees. Nevertheless, it is critical.
This experience, born of extended engagement, also points at challenges, well-known in anthropology and increasingly recognized in other fields (with/out acknowledgement), that arise when the line between ‘data collector’ and ‘confidant’ becomes blurred.
This final point raises a question about who ‘confidentiality’ is for and who wants it. In some cases, of course, it is absolutely essential to maintain strict confidentiality, with literal life/death implications. But that is not always how it may be seen on the ground. Take Martin, pictured below. I sat in on an interview with him, led by the wonderful Stephen, who had just started as a qualitative interviewer on this project after serving as an enumerator at baseline. I know Stephen went through the process of providing study information carefully and sought a clear statement of consent. I know how much time Rico and I spent walking around the settlement reviewing the informed consent script with our team, making sure it came across as conversational and not a monologue.
Following consent, the first interview guide in this longitudinal series focused on people’s history of arriving in the settlement, whether they knew or got along with their neighbors, and what they thought of the GiveDirectly lottery used to determine the order of transfer receipt. After all of this, Martin had Stephen translate a request to me: he wanted people to know his story, and I should spread it far and wide. I should go back to the US and let people know that “I had a man in Africa” who needed help. He insisted on my taking the picture below, to share along with his story. Which, now, I have.
*This might be contrasted with, say, getting interested in ethics ex post, after your study is critiqued.
**Though another would come later when a mentor let me know that the medical anthropologist with whom I was considering pursuing a PhD would try to sleep with me and would make things quite unpleasant for me if I did not. That, too, was a decisive moment–and a different kind of ethics.
***This is how I met Jeff!
This post represents joint work with the fabulous Mike Frick. It has been a long time coming–to my horror, I see that we first drafted this at end-December of 2010. It has gone through a lot of iterations, trying to fit in this journal or that (for example, this working/submitted version here) rather than just getting our ideas out into the world. It has never been a ‘perfect fit.’
However, I find that I return to some of these ideas time and again, so I want something out there. There are places in the manuscript that still had comments, in which Mike and I both acknowledge we wanted to return to the text and see if we could push a little deeper, be a little clearer. Nevertheless, sharing out.
In this paper, we argue for recognizing and including research participants as important stakeholders interpreting and learning from evidence generated through social science research. Nigerian social commentator Onye Nkuzi asked, “If you are doing ‘ground-breaking social science research on Africa’ that Africans cannot access, what, ultimately, is the point?” (Nkuzi, 2011). We agree. “The point” matters.
Researchers often do not share research outputs and results with surveyed and studied communities and individuals, limiting access and engagement. (While we focus on study participants in this paper, many of our points apply equally to a range of involved stakeholders, including research field staff, street-level implementers, local (LMIC) ethics boards, and local media.)
This oversight does not always reflect purposeful disregard of participants. There are several constraints to sharing results and co-interpreting with participants, and some of these may bind in ‘our’ (global health and development research) current operating set-up. Budgets and time may bind. Recognition of what kinds of participant-engagement activities can fit an on-going or longitudinal research program may be limited. Creativity about meaningful modes of sharing may be in short supply, given the limited precedent on which to draw (itself a product of the current lack of budget for such activities). And the benefits and drawbacks of sharing collected information with those from/with whom it was collected may receive little consideration.
Whatever the reasons, we contend that these constraints have negative implications for both study participants and researchers. They limit access to and engagement in research. They elide the important role research participants can (and should) play in interpreting and learning from evidence generated through social science research.
We discuss the ethical and practical reasons for sharing results with study participants in social science and development research after data collection. We propose that this practice has the potential to: (1) improve the credibility and validity of research findings; (2) provide fair benefits through knowledge enhancement and empowerment; and (3) promote long-term relationships, trust, and collaboration between researchers and participants.
To develop the evidence on this, we call for more trials, errors, and research into the modalities and effects of sharing results with participant-communities to articulate the possibilities, benefits, and challenges. We close by examining the challenges of results sharing and discuss the roles different stakeholders—from investigators to funders to ethics boards to communities themselves—can play in this activity.
*We acknowledge that in the years since our first draft, there have been a few additional calls for similar action, though not necessarily embedded in the framework we propose and not all pushing beyond ‘results sharing’ into shared sense-making. To give credit where it is due (and you can let us know if we overlook anything major): a call for sharing results is made in this F2P2 post, ODI provides an example of results sharing (though we are not convinced of the usefulness of that particular mode), and the call for structured ethics appendices has a placeholder for results-sharing.
We find it useful to develop an understanding of why results-sharing and shared sensemaking are important, rather than just stating that it is so. Although ethics in the social sciences merit a broader outlook than the biomedical gaze, clinical trials still provide a good starting place for social scientists thinking about ethical engagement (Alderman, Das, and Rao 2013). A particularly active area of debate centers on clinical trials that lead to the development of a tangible and useful product, such as a novel drug or vaccine (Bhutta 2002).
The Council for International Organizations of Medical Sciences (CIOMS) guidelines require sponsoring agencies to “agree in advance…that any product developed through such research will be made reasonably available to the inhabitants of the host community” (CIOMS 2002). These and similar guidelines apply to tangible products of research. They may not perfectly map to social science research, which may not generate a new product or service but rather knowledge to inform future policy, programming, and research (Bhutta 2002). While social science research often does not impose the same kinds of risks on participants as testing vaccines or drug therapies, this does not attenuate ethical obligations to share research results. Relative to biomedical research, the opportunities for shared sensemaking, about social constructs and processes, are heightened in social science work.
Even in biomedical research, the final output of research may be knowledge—individually and/or collectively meaningful—rather than a marketable product or technology. For example, in genomic research, where the research product often is knowledge (i.e., the results of genetic testing), a key concern is sharing this information with specific participants (Dressler et al. 2012; Kristman and Kreiger 2008; Simon et al. 2012). Since the field’s start, its researchers have considered rapid and open release of genomic data as critical. For example, in the U.S., the National Human Genome Research Institute requires its grantees to release genetic sequences into open-access databases maintained by the National Institutes of Health (NIH) and other institutes (Lowrance and Collins 2007). While this system of data sharing enables scientific research, it does not connect individual study participants to relevant and medically actionable results. A review of consent forms from genomic studies found that participants are usually told genetic analysis will occur “without any explanation of with whom the resulting data will be shared” (McGuire and Gibbs 2006). An exception is research conducted among indigenous and aboriginal communities, who often have the right to access their genomic data and exercise some determination over its future use (Jacobs et al. 2010; Schuklenk and Kleinsmidt 2006; Taniguchi, Taualii, and Maddock 2012).
Against this backdrop, several thought-leaders in biomedical ethics—looking beyond products such as vaccines but still referring to clinical trials—have called for researchers to (1) meaningfully present research purposes and findings to study participants and (2) involve these participants in further results interpretation and manuscript preparation (Weijer and Emanuel 2000). These components are two in a larger ethical framework proposed by Ezekiel Emanuel and colleagues to govern clinical research in lower- and middle-income countries (LMICs). In order to promote the social value of such research, the architects of this framework suggest that research should be structured as a collaborative partnership between researchers and communities (Emanuel et al. 2004); sharing results is a component of this. This framework holds that consent forms, de-identification of data, locked files (physical and virtual), and other tools of confidentiality build the cornerstone of ethical research—protecting individuals—but remain only the starting point for research that produces fair benefits and social value (Benatar 2002). That is, research that is ethical.
A recent review of the literature indicates that participants in biomedical and clinical research want “aggregate and clinically significant individual study results communicated to them,” or at least want the opportunity to choose to access these results (Shalowitz and Miller 2008). In a famous example, Richard Doll and Bradford Hill presented their findings on smoking and lung cancer to the participating doctors in their study of British physicians. In a report on their findings, Doll and Hill write: “We thought it necessary, in view of the nature of our results, to lay these preliminary data before the survivors of the 40,000 men and women who made them possible” (Doll and Hill 1954). Doll and Hill informed participants of the results through the British Medical Journal, an established information source for physicians such as those taking part in the study. The presentation of results to study participants in LMICs requires further creativity and effort, given the information channels and barriers in these settings.
At present, Institutional Review Boards / Ethical Review Committees and some grant-making agencies that support research in LMICs make researchers delineate how they will share their results. Yet these bodies give limited guidance or mandate on which information should be shared with whom, when, through what means, and with what aims. Scant research systematically assesses the desire of LMIC study participants to learn more about the research in which they take part, including the results. From our anecdotal experience, participant-communities and individuals would like to know the results of research and to share in benefits that may stem from them. This is an area for further, more systematic exploration–particularly in relation to two stages of the research process: (1) data validation and analysis; and (2) results dissemination and communication. Such exploration is beyond the scope of this paper; here, we make the case for why it is needed.
Dialogue and action in the social sciences may further pave the way for discussions about including research participants in validating and analyzing results prior to dissemination. These include conversations about limiting extractive research (spearheaded by Robert Chambers), about increasing research transparency, and about enhancing research validity.
Often, research is ‘extractive’: outsiders gather raw information to take away for processing and construction into valuable and meaningful (knowledge) products (Chambers 2007). Chambers has helped shape an appreciation for participatory approaches to development and research as the antithesis of extractive approaches, a dichotomy echoed by Leslie Groves (Groves 2015). But much of the current work on participation focuses on involving stakeholders in the generation and collection of data (such as mapping geographic space and power relationships as understood locally, as well as Chambers’s newer focus on participatory numbers and statistics). This may be empowering in some measure, but it does not necessarily achieve the true opposite of extraction, which is to insert or reciprocally create something of value. In our estimation, a more productive alternative to “extractive” research (Groves 2015 – paper, not blog), which could still occur even if individuals or communities under study actively participate, is research that is reciprocal.
We encourage researchers to take this step beyond participatory data collection by calling for those providing raw data to play a role in the construction of knowledge products–that is, in the interpretation of results, in ways that consider multiple perspectives and build partnerships with study participants. This, we believe, goes some way toward limiting the extractive nature of research by attending to fair benefits and social value, and by producing interpretations and recommendations that accord with the experiences of participant-communities. This notion fits within the ‘feedback typology’ provided by Groves, who points out that beneficiary feedback can be elicited during four stages of program evaluation (an activity related to but distinct from research): (1) evaluation design; (2) data collection; (3) validation (sensemaking) and analysis; and (4) dissemination and communication (Groves 2015). Our interest in this paper is particularly in the latter two elements, as a key frontier for ethical research.
Creating reciprocal exchange around research results will require increasing the transparency of research outcomes. Research transparency has recently received increasing attention in the social sciences, particularly with regard to research design and data analysis (Miguel et al. 2014; Ioannidis et al. 2014). Popular measures to increase transparency include pre-analysis plans, to prevent later data-mining and specification-searching, and open data and journal access, to allow new or replicated analyses (for one example, see King and Alvarez 2014). These efforts are usually directed toward transparency vis-à-vis other researchers, as the assumed end-users of research. They only tangentially address transparency from the perspective of research participants themselves. Even broader conversations about transparency in research for the benefit of the public at large do not always acknowledge that a subset of the public may have a special claim to receive research results, given their role as research participants.
In addition to increasing transparency, researchers may strengthen the quality and internal validity of their work by engaging study participants during data analysis and interpretation (Lincoln and Guba 1986; Schwandt, Lincoln, and Guba 2007). Often, researchers gain interpretive insight into their results through a chance participant comment (e.g., Thornton 2008). Member-checking (also called shared sensemaking) offers a systematic means of verifying or co-interpreting results with the participant-community, allowing participants to serve as analysts as well. Member-checking refers to a practice crystallized by Lincoln and Guba to enhance research ‘credibility’ (internal validity) by providing validity and reality checks (Lincoln and Guba 1986; Schwandt, Lincoln, and Guba 2007). Throughout the research process–as is currently more common in a qualitative research paradigm–researchers may gain insight from informal tests of initial interpretations among the participant (or a comparable) community. Researchers may reject some interpretations and consider new ones.
Lincoln and Guba also recommend “terminal, formal testing of the final case report with a representative sample of stakeholders” (Lincoln and Guba 1986). Terminal testing offers an opportunity to share research data, processes, results, and interpretations with participant-communities (Carlson 2010). Given possible concerns about presenting the raw data for auditing and clarification, we advocate sharing aggregated results and interpretations. Clear decisions on the array of information to be shared need not always be determined ex ante but can emerge over the course of the research—a flexibility that is easier to achieve in the social sciences, but also possible within biomedical research.
Taking time (and resources) to return to participants to involve them in member-checking results allows researchers to assess the internal validity of their work and ensures that participant-communities do indeed receive these results. It also acknowledges that insights arise from interactions and relationships between researchers and participants. Member-checking gives voice to participants in a way merely collecting data does not afford (Carlson 2010).
As an extension, participants may also contribute to manuscript preparation. What a research team might normally publish represents a single interpretation of the data collected. Consensus need not be reached on the interpretation of results; competing interpretations can be published, even within the same paper (Weijer and Emanuel 2000). Of course, study participants will not be involved in each manuscript decision. However, these discussions can pave the way for long-term research efforts and even the development of new intervention strategies (Emanuel et al. 2004).
Involving participants and other community stakeholders in the dissemination of research findings creates further avenues to build partnerships with participants that extend beyond data collection. The opposite of extractive research is research that is, variously, participatory, inclusive, authentic, and/or empowering, with empowerment seen as a continuum rather than a threshold (Chambers 2007; Lincoln and Guba 1986). Lincoln and Guba suggest that research authenticity stems, in part, from the extent to which it raises consciousness of a problem among participants, fosters appreciation of others’ views of an issue and its potential solutions, and, most challengingly, stimulates not only action by participants but successful action (Lincoln and Guba 1986). Such consciousness-raising accords with the Hegelian or Freireian sense of enabling people to see the structures underlying the problems facing them.
Participants stand to benefit most when shared information is clear and contextualized. Researchers need pertinent, creative ways to present information to participants. To do so, they may need to return to communities after data collection or during analysis. To package information, researchers should consider the literal translation of scientific terms into local languages and anticipate the challenges participants may experience in understanding the nature and implications of formal research (Appadurai 2006, building on ideas from Freire). The degree to which sharing the research purpose, design, and findings is empowering will depend on what is shared, through what modes, with whom, and with what degree of acceptance.
In the biomedical research field, many clinical trials networks have dedicated community engagement staff or community advisory boards (CABs) that play a primary role in disseminating results and communicating study findings to a broad constituency of involved stakeholders. One example is the Community Partners program, which guides community engagement work across the research networks supported by the Division of AIDS of the U.S. National Institutes of Health. Community Partners produces lay summaries of trial protocols, relays information to community stakeholders during trial conduct, and advises study teams on selecting appropriate and targeted dissemination strategies for releasing study findings (Kagan et al. 2012).
As a starting point for selecting modes of dissemination, social scientists should consider how laypersons and non-experts interpret scientific data, including the use of heuristics (Grotzer, Miller, and Lincoln 2012; Nickerson, Perkins, and Smith 1985; Sunstein 2002). In one example of results-sharing, at the end of a study on sexual networks and the spread of sexually transmissible infections (STIs) in East Africa, researchers held an educational session with participating women. Though the full analysis remained underway, the researchers shared de-identified sexual network images and engaged participants in a hands-on (quite literally, hand-holding) activity on the differential risk of STI transmission in serial versus concurrent sexual partnerships. This made the risks immediately interpretable to the women involved (M. Morris, personal communication, April 2009), although to our knowledge, the researchers did not conduct further research on whether this, in turn, influenced behavior within the networks.
Such proactive results-sharing is the exception, even in biomedical research. The Ethics Advisory Group of the International Union Against Tuberculosis and Lung Disease (the Union) reviewed eight years’ worth of study protocols it had supported, spanning clinical trials, observational studies, and record reviews. The Union found that “few researchers described how their results would be presented to study participants and their communities,” although all proposals mentioned dissemination in scientific journals (Edington et al. 2013). Members of the Union’s Ethics Advisory Group argue with force that sharing knowledge about health problems and potential solutions with communities creates opportunities for participants to “exercise their rights regarding health and other services” (Edington et al. 2013). In this way, results dissemination might be understood as an enabler of empowerment processes.
In an optimistic view of empowerment, people may be able to use research results to “gain or improve a service, assert their rights or obtain a better deal” (Chambers 2007). Participants may be able to use information shared with them to affect–directly or through collective action and advocacy–local behavior or improvements in well-being. This is, for example, part of the idea behind Vijayendra Rao’s work with the Social Observatory, and Rao notes that linking findings with action remains a challenge (World Bank 2016; Rao and Aiyer 2016).
In theory, if the information reveals that conditions or problems are different or more severe than participant-communities had understood, then study information may lead to reflection on the present situation and a new articulation of problems, which may in turn lead to a new framing of possible solutions. By hearing about the research process and results, respondents may think in a more critical and evidence-informed way about their communities, rather than relying on stereotype or other assumptions about their context. Understanding the research in which they took part may be enjoyable and have intrinsic value for participants and, further, lead respondents to garner a sense of control and even skill in considering their own problems in the future (Appadurai 2006; Chambers 2007; Zimmerman 1990).
However, from years of information-sharing campaigns in public health and beyond, we know that much lies between the provision of information and the taking of (collective) action (for a recent example outside of public health, see Lieberman, Posner, and Tsai 2014). People must be ready, willing, and able to translate information into action (Lesthaeghe and Vanderhoeft 2001). Moreover, as in participatory approaches to development, much depends on elite capture within a community and on the capacity and incentives for action within the community as a whole, which itself may be moderated by its degree of heterogeneity (Mansuri and Rao 2012). Whether, when, and how these processes benefit participant-communities remains an empirical question. Evaluating the impact of sharing research results with participant-communities will enhance our understanding of the potential of the approach for which we advocate here.
Despite the limited evidence base, the development field has produced several examples of long-term research initiatives in communities that have included efforts to affect social change through knowledge-sharing and empowerment among community stakeholders. One example is the Townsend Thai Project, which has produced panel data on the spending of Thai households for over 150 consecutive months. Robert Townsend and his study team are now beginning to give research results to a randomly selected subset of households to see if sharing this information changes their financial decision-making. This approach suggests that results sharing itself can adopt an experimental, investigative form (Townsend, Sakunthasathien, and Jordan 2013).
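As a toy illustration of that experimental form, the sketch below randomly assigns half of a set of study households to receive a results summary, so that the effect of sharing can itself be estimated in later rounds. The household IDs and arm labels are hypothetical and not drawn from the Townsend Thai Project.

```python
import random

# Hypothetical sketch: treat results sharing as an intervention by randomly
# assigning half of the surveyed households to receive a results summary.
random.seed(42)  # fix the seed so the assignment is reproducible

household_ids = [f"HH-{i:04d}" for i in range(1, 201)]  # made-up IDs
share_arm = set(random.sample(household_ids, k=len(household_ids) // 2))

assignments = {
    hh: "receive results summary" if hh in share_arm else "comparison"
    for hh in household_ids
}

# Later survey rounds can then compare outcomes (e.g., financial decisions)
# across the two arms to estimate the effect of sharing results.
```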
Beyond fostering empowerment and inspiring collective action, including communities in results dissemination may also promote long-term ties with research teams and improved understandings of the place of research in social life. Emanuel et al. warn that “very little can create more resentment, mistrust and a sense of exploitation than unfair distribution of the benefits of collaboration” (Emanuel et al. 2004). Particularly for researchers planning to work in a single study area with the same or similar participants over a long period, such resentment and mistrust must be avoided.
Mistrust can result when study participants feel disconnected from the research process or the conclusions it generates. Even before results are ready, information can be shared on the research process, including the importance of particular questions to the generation of results. Sharing this kind of information might respect a community’s desire to do more than offer its data to researchers while still protecting later study rounds from contamination. This may also help limit research fatigue among community members in the face of repeated research.
One of the authors (MWF) saw such fatigue among a community of intravenous drug users in an East Asian country who have participated in repeated public health surveys. These individuals have come to see research interviews as a rote exercise with little benefit to themselves, beyond the immediate, small financial incentive for participation. One participant asked, “Will this kind of research actually help people in our position? Will people in power actually listen to us?” Limiting research fatigue, and the linked tendencies to not cooperate or provide accurate or complete information, is a key benefit of continuing dialogue with participant-communities after—or between rounds of—data collection.
There are several significant challenges to results sharing in the social sciences. First, the effort may be costly in time or resources, especially as studies grow in geographic scale or duration (Alderman, Das, and Rao 2013). Often a long period elapses between a specific research study, the release of the results, and possible changes in programs or policies aimed at improving well-being or development in the participant-community (Emanuel et al. 2004). However, experiences from clinical and social science randomized controlled trials show that results can be shared with large numbers of participants without adding undue costs. After concluding the first phase IIb efficacy trial of a new TB vaccine in over forty years, the South African TB Vaccine Initiative (SATVI) went door-to-door to inform the nearly 2,800 mothers with enrolled infants that the vaccine was not effective. Investigators at SATVI point out that this effort allowed them to “take the community’s pulse” after a negative result. Many mothers reportedly reacted to the disappointing news by asking when they could enroll their children in the next trial. They further expressed pride in their participation in the search for a new TB vaccine (M. Tameris, personal communication, October 2013).
While SATVI successfully disseminated results by going door-to-door, the method of results dissemination need not match the method of data collection. Researchers may beneficially use technology or media to share results and solicit feedback. For example, one of the authors (HEL) used text messaging to convey basic process and outcome information to those participating in a malaria medications study in Ghana that used text messaging as a key component of the intervention (Raifman et al. 2014). Creative approaches that consider results dissemination beyond village meetings or door-knocking may unlock feasible options to share research results on a wider scale without great financial burden.
Second, researchers may worry that sharing results might bias or “contaminate” studies by adding an additional intervention component (or “treatment”) that will influence the behavior or responses of participants over the course of the study. This can make it more difficult to make causal claims about the effect of the original intervention on the outcomes observed. For example, if researchers are studying or evaluating an intervention that distributes X health product, feeding back mid-term results to study communities about the impact of X on behavior or health outcomes may itself change behavior. This is a case in which sharing results may be best held until after the last round of data collection is complete or where data shared over the course of the study would need to be carefully selected as unlikely to influence outcomes.
Third, dissemination might pose special risks for vulnerable or minority groups within participant-communities and researchers will need to consider this possibility when designing subject protections. The process of preparing to share results can itself encourage researchers to thoughtfully engage with participant-communities throughout the research process. In this process, researchers will be guided by soliciting community input about what information is appropriate to share, how information could be disseminated, and who should have access. Social scientists must balance the community’s desire for certain data against the need to maintain confidentiality and anonymity. The ethical dissemination of study results will benefit greatly from meaningful engagement with community representatives early in the research process (Bhutta 2004).
In clinical trials, such decisions are often aided by community advisory boards (CABs), which facilitate effective communication between investigators and the host community on important ethical issues (Boulanger et al. 2013). Investigators studying HIV and TB have also received guidance on results dissemination from the Good Participatory Practice Guidelines for Biomedical HIV Prevention Trials (GPP-HIV) and the Good Participatory Practice Guidelines for TB Drug Trials (GPP-TB), two guidance documents developed in conjunction with affected communities, trial sponsors, civil society, and researchers (Critical Path to TB Drug Regimens 2012; UNAIDS 2011).
For all of the activities discussed in this paper, it may be difficult to determine who constitutes “the community” and its leaders, although these questions arise at the beginning of most studies. Indeed, approaching research with the expectation of continued engagement may help researchers think about issues of community more clearly during the early stages of research design. Early engagement can allow researchers to better understand what information the participant-community feels would be “too sensitive” to collect or to share, which may then require either a careful articulation of the benefits of this information to the participants or a reconsideration of aspects of the research design (e.g. (Boga et al. 2011)).
Each of these challenges can be overcome, often in ways that may strengthen research processes and outcomes. To realize this potential, however, results sharing must occupy a more central position in the planning of research projects. The following activities may help social scientists and research institutions resolve some of the above challenges and integrate results dissemination into their work:
Building greater participant-community engagement through results dissemination honors the ethical obligations to provide study participants with “fair benefits” in the face of potential harms and further acknowledges the expressed desire of communities to do more than give data. Whether engagement takes the form of “member-checking” to validate data or wider dissemination of research information, sharing results offers social scientists an avenue for building the kind of collaborative partnerships that make sustained research possible. Moreover, it allows researchers to refine and reconsider their interpretation of the results and the implications of research. To achieve this, researchers will need to think beyond the minimum requirement of extant IRB protocols and explicitly address how they will share results with study participants, what kind of information will be shared, and when. Our contention is that results sharing can generate research that is more transparent, more fair, and possibly more trustworthy and more useful. It does greater justice to allowing participants access to the research in which they participate. More empirical research is required on the means and effects of sharing results with participant-communities to fully understand the benefits of this approach for participants, researchers, and the partnership between them. But, as the clinical trials literature suggests, the ethical obligation to engage communities in the results of research is already clear.
This paper began as a shared interest between the authors in a research methods course co-taught by Dr. Daniel Halperin (now at the University of North Carolina at Chapel Hill Gillings School of Global Public Health) and Dr. Heather Lanthorn. Both Mike and Heather were strongly influenced not only by what they had seen during data collection but also by personal experiences of participating in clinical trials and Psych-101-required studies and never hearing anything back about results. We thus have deep empathy for communities that want to hear about findings, as well as an intellectual/justice commitment to shared sensemaking.
Dr. Halperin encouraged our thinking and encouraged us to pursue a paper (though without specific guidance on how to get a conceptual paper out in the world). Both authors have had important interactions while working on research projects, including with Abubakari Bukari and Salifu Amadu at Innovations for Poverty Action – Ghana and investigators at the Tuberculosis Trials Consortium. Drs. Richard Cash (Harvard T.H. Chan School of Public Health and the Public Health Foundation of India), Howard White (formerly of the International Initiative for Impact Evaluation), and Marc Zimmerman (University of Michigan School of Public Health) have offered important critiques on drafts of this paper. Dr. David McKenzie (World Bank), Dr. Biju Rao (World Bank), Dr. Robert Townsend (Massachusetts Institute of Technology), and Raag Bhatia (International Initiative for Impact Evaluation) have offered encouragement and insight at different stages of the process. Of course, all mistakes are our own.
The journey of this paper has been frustrating. When we submitted to journals like Social Science & Medicine, we were told our point was obvious and well-known to anthropologists. Meanwhile, economics-led development journals thought the ideas could be interesting but not broadly relevant, and offered little guidance on where a conceptual paper might live; posting here at least allows us to get a timestamp on our ideas. In any case, these ideas certainly seem more relevant now than ever.
Alderman, Harold, Jishnu Das, and Vijayendra Rao. 2013. “Conducting Ethical Economic Research.”
Amon, Joseph J., Stefan D. Baral, Chris Beyrer, and Nancy Kass. 2012. “Human Rights Research and Ethics Review: Protecting Individuals or Protecting the State?” PLoS Medicine 9 (10): e1001325.
Angell, M. 1997. “The Ethics of Clinical Research in the Third World.” The New England Journal of Medicine 337 (12): 847–49. doi:10.1056/NEJM199709183371209.
Appadurai, Arjun. 2006. “The Right to Research.” Globalisation, Societies & Education 4 (2): 167–77. doi:10.1080/14767720600750696.
Benatar, Solomon R. 2002. “Reflections and Recommendations on Research Ethics in Developing Countries.” Social Science & Medicine 54 (7): 1131–41.
Beskow, L. M., W. Burke, J. F. Merz, P. A. Barr, S. Terry, V. B. Penchaszadeh, L. O. Gostin, M. Gwinn, and M. J. Khoury. 2001. “Informed Consent for Population-Based Research Involving Genetics.” JAMA: The Journal of the American Medical Association 286 (18): 2315–21.
Bhutta, Z. A. 2002. “Ethics in International Health Research: A Perspective from the Developing World.” Bulletin of the World Health Organization 80 (2): 114–20.
———. 2004. “Beyond Informed Consent.” Bulletin of the World Health Organization 82 (10): 771–77. doi:10.1590/S0042-96862004001000013.
Boga, M., A. Davies, D. Kamuya, S. M. Kinyanjui, E. Kivaya, F. Kombe, T. Lang, et al. 2011. “Strengthening the Informed Consent Process in International Health Research through Community Engagement: The KEMRI-Wellcome Trust Research Programme Experience.” PLoS Medicine 8 (9): e1001089. doi:10.1371/journal.pmed.1001089.
Boulanger, Renaud F., Stephanie Seidel, Erica Lessem, Lee Pyne-Mercier, Sharon D. Williams, Laia Ruiz Mingote, Cherise Scott, Alicia Y. Chou, and James V. Lavery. 2013. “Engaging Communities in Tuberculosis Research.” The Lancet Infectious Diseases.
Carlson, J. 2010. “Avoiding Traps in Member Checking.” The Qualitative Report 15 (5): 1102–13.
Chambers, Robert. 2007. “Who Counts? The Quiet Revolution of Participation and Numbers.” Institute of Development Studies (UK).
CIOMS. 2002. “International Ethical Guidelines for Biomedical Research Involving Human Subjects.” Geneva, Switzerland: Council for International Organizations of Medical Sciences.
Cooper, M. 2008. “Sharing Data and Results with Study Participants: Report on a Survey of Cultural Anthropologists.” Journal of Empirical Research on Human Research Ethics 3 (4): 19–34. doi:10.1525/jer.2008.3.4.19.
Doll, R., and A. B. Hill. 1954. “The Mortality of Doctors in Relation to Their Smoking Habits: A Preliminary Report.” British Medical Journal 1 (4877): 1451–55.
Dressler, L. G., S. Smolek, R. Ponsaran, J. M. Markey, H. Starks, N. Gerson, S. Lewis, et al. 2012. “IRB Perspectives on the Return of Individual Results from Genomic Research.” Genetics in Medicine 14 (2): 215–22. doi:10.1038/gim.2011.10.
Edejer, T. T. 1999. “North-South Research Partnerships: The Ethics of Carrying out Research in Developing Countries.” BMJ 319 (7207): 438–41.
Edington, M., T. Ornstein, J. Denholm, A. Sony, S. J. Kim, A. Narain, and R. O’Brien. 2013. Public Health Action 3 (4): 346–50.
Emanuel, E. J., D. Wendler, J. Killen, and C. Grady. 2004. “What Makes Clinical Research in Developing Countries Ethical? The Benchmarks of Ethical Research.” The Journal of Infectious Diseases 189 (5): 930–37. doi:10.1086/381709.
Goldstein, Markus. 2014. “Ethical Validity Response #3: Would a Graph Help?” Blog. Development Impact. March 20. http://blogs.worldbank.org/impactevaluations/ethical-validity-response-3-would-graph-help.
Grotzer, Tina A., Rebecca B. Miller, and Rebecca A. Lincoln. 2012. “Perceptual, Attentional, and Cognitive Heuristics That Interact with the Nature of Science to Complicate Public Understanding of Science.” In Advances in Nature of Science Research: Concepts and Methodologies, edited by M. Khine, 27–49. Springer. doi:10.1007/978-94-007-2457-0_2.
Groves, Leslie. 2015. “Breaking down ‘Feedback’ A Typology.” Blog. Beneficiary Feedback in Evaluation. January 6. https://beneficiaryfeedbackinevaluationandresearch.wordpress.com/2015/01/06/breaking-down-feedback-a-typology/.
Hamermesh, Daniel. 2012. “Six Decades of Top Economics Publishing: Who and How?” NBER Working Paper Series.
Ioannidis, John P A, Sander Greenland, Mark A Hlatky, Muin J Khoury, Malcolm R Macleod, David Moher, Kenneth F Schulz, and Robert Tibshirani. 2014. “Increasing Value and Reducing Waste in Research Design, Conduct, and Analysis.” The Lancet 383 (9912): 166–75. doi:10.1016/S0140-6736(13)62227-8.
Jacobs, B., J. Roffenbender, J. Collmann, K. Cherry, L. L. Bitsoi, K. Bassett, and C. H. Evans Jr. 2010. “Bridging the Divide between Genomic Science and Indigenous Peoples.” The Journal of Law, Medicine & Ethics 38 (3): 684–96. doi:10.1111/j.1748-720X.2010.00521.x.
King, Gary, and R. Michael Alvarez. 2014. “Gary King: An Update on Dataverse.” OUPblog. December 7. http://blog.oup.com/2014/12/gary-king-update-dataverse-journal-database/.
Kristman, V. L., and N. Kreiger. 2008. “Information Disclosure in Population-Based Research Involving Genetics: A Framework for the Practice of Ethics in Epidemiology.” Annals of Epidemiology 18 (4): 335–41. doi:10.1016/j.annepidem.2007.10.003.
Lesthaeghe, Ron, and Camille Vanderhoeft. 2001. “Ready, Willing, and Able: A Conceptualization of Transitions to New Behavioral Forms.” In Diffusion Processes and Fertility Transition, edited by John B. Casterline. Washington, D.C.: National Research Council.
Lieberman, Evan S., Daniel N. Posner, and Lily L. Tsai. 2014. “Does Information Lead to More Active Citizenship? Evidence from an Education Intervention in Rural Kenya.” World Development 60 (August): 69–83. doi:10.1016/j.worlddev.2014.03.014.
Lincoln, Yvonna S., and Egon G. Guba. 1986. “But Is It Rigorous? Trustworthiness and Authenticity in Naturalistic Evaluation.” New Directions for Program Evaluation 1986 (30): 73–84. doi:10.1002/ev.1427.
Lowrance, W. W., and F. S. Collins. 2007. “Ethics. Identifiability in Genomic Research.” Science 317 (5838): 600–602. doi:10.1126/science.1147699.
Mansuri, Ghazala, and Vijayendra Rao. 2012. Localizing Development: Does Participation Work? Vol. 1. World Bank Publications.
McGuire, A L, and R A Gibbs. 2006. “No Longer De-Identified.” Science 312 (5772): 370–71. doi:10.1126/science.1125339.
McKenzie, David. 2011. “Should We Share Results with Respondents?”
McKenzie, David. 2014. “Ethical Validity Response #2: Is Random Assignment Really That Unacceptable or Uncommon?” Blog. Development Impact. March 19. http://blogs.worldbank.org/impactevaluations/ethical-validity-response-2-random-assignment-really-unacceptable-or-uncommon.
Miguel, E., C. Camerer, K. Casey, J. Cohen, K. M. Esterling, A. Gerber, R. Glennerster, et al. 2014. “Promoting Transparency in Social Science Research.” Science 343 (6166): 30–31. doi:10.1126/science.1245317.
Nickerson, R, D Perkins, and E Smith. 1985. “Chapter 5: Errors and Biases in Reasoning.” In The Teaching of Thinking, 111–42. Hillsdale, NJ: LEA.
Ozler, Berk. 2014. “Taking Ethics Seriously: Response #1.” Blog. Development Impact. March 18. http://blogs.worldbank.org/impactevaluations/taking-ethics-seriously-response-1.
Raifman, Julia R G, Heather E Lanthorn, Slawa Rokicki, and Günther Fink. 2014. “The Impact of Text Message Reminders on Adherence to Antimalarial Treatment in Northern Ghana: A Randomized Trial.” PloS One 9 (10): e109032. doi:10.1371/journal.pone.0109032.
Rao, Vijayendra, and Yamini Aiyer. 2016. “The Social Observatory: Building Adaptive Capacity in Large-Scale Projects.” Presented at the 3ie Delhi Seminar Series, Delhi, February 1.
Ravallion, Martin. 2014. “Taking Ethical Validity Seriously.” Blog. Development Impact. March 17. http://blogs.worldbank.org/impactevaluations/taking-ethical-validity-seriously.
Critical Path to TB Drug Regimens Stakeholder and Community Engagement Working Group. 2012. Good Participatory Practice Guidelines for TB Drug Trials 2012. Critical Path to TB Drug Regimens.
Schuklenk, U, and A Kleinsmidt. 2006. “North-South Benefit Sharing Arrangements in Bioprospecting and Genetic Research: A Critical Ethical and Legal Analysis.” Developing World Bioethics 6 (3): 122–34. doi:10.1111/j.1471-8847.2006.00149.x.
Schwandt, Thomas A, Yvonna S Lincoln, and Egon G Guba. 2007. “Judging Interpretations: But Is It Rigorous? Trustworthiness and Authenticity in Naturalistic Evaluation.” New Directions for Evaluation 2007 (114): 11–25. doi:10.1002/ev.223.
Shalowitz, D I, and F G Miller. 2008. “Communicating the Results of Clinical Research to Participants: Attitudes, Practices, and Future Directions.” PLoS Medicine 5 (5): e91. doi:10.1371/journal.pmed.0050091.
Simon, C, L A Shinkunas, D Brandt, and J K Williams. 2012. “Individual Genetic and Genomic Research Results and the Tradition of Informed Consent: Exploring US Review Board Guidance.” Journal of Medical Ethics 38 (7): 417–22. doi:10.1136/medethics-2011-100273.
Sunstein, C. 2002. “Chapter 2: Thinking about Risks.” In Risk and Reason: Safety, Law and the Environment, 28–52. Cambridge, UK: Cambridge University Press.
Taniguchi, N, M Taualii, and J Maddock. 2012. “A Comparative Analysis of Indigenous Research Guidelines to Inform Genomic Research in Indigenous Communities.” The International Indigenous Policy Journal 3 (1).
Thornton, R L. 2008. “The Demand For, and Impact Of, Learning HIV Status.” The American Economic Review 98 (5): 1829–63. doi:10.1257/aer.98.5.1829.
Townsend, Robert M., Sombat Sakunthasathien, and Rob Jordan. 2013. Chronicles from the Field: The Townsend Thai Project. MIT Press.
UNAIDS and AVAC. 2011. Good Participatory Practice Guidelines for Biomedical HIV Prevention Trials 2011. 2nd ed. Geneva, Switzerland: Joint United Nations Programme on HIV/AIDS.
Varmus, H, and D Satcher. 1997. “Ethical Complexities of Conducting Research in Developing Countries.” The New England Journal of Medicine 337 (14): 1003–5. doi:10.1056/NEJM199710023371411.
Weijer, C, and E J Emanuel. 2000. “Protecting Communities in Biomedical Research.” Science 289 (5482): 1142–44.
Wilson, D. 1999. “North-South Research in Developing Countries Must Respond to Community’s Priorities.” BMJ 319 (7223): 1496–97.
World Bank. 2014. “Taking a First Step toward Establishing World Bank Ethical Guidelines for Research.” February 19. http://www.worldbank.org/en/news/feature/2014/02/19/taking-a-first-step-toward-establishing-world-bank-ethical-guidelines-for-research.
———. 2016. “Social Observatory.” Accessed February 6. http://www.worldbank.org/en/programs/social-observatory.
Zimmerman, Marc A. 1990. “Taking Aim on Empowerment Research: On the Distinction between Individual and Psychological Conceptions.” American Journal of Community Psychology, no. 1 (February): 169–77. doi:10.1007/BF00922695.
In this post, I share a teaching plan as I am returning after 20 (omg) years to facilitate one session of an undergrad Intro to Ethnography seminar course taught by the same prof who taught me. All of my first interviewing experiences, which took place in undergrad, came directly or indirectly through this professor: a summer in Nepal (Annapurna side, in Ghandruk), during which I tried to research interpretations of and care-seeking for mental illness; service-learning for a medical anthropology course, helping with office work and learning about patients’ experiences interacting with doctors about diabetes at a local free clinic; and interviewing farmworkers and food bank employees about food insecurity in the migrant farmworker community of North Carolina. These experiences were formative and I am incredibly grateful for them, even if I don’t think I was an excellent researcher in all cases. I suspect far more practice and feedback were warranted before I undertook any of those activities. Hopefully I have gotten better in the years that have followed.
Here, I share my draft student guide and slides; in a follow-up post, I will share the final class prep guide and pre-work for students, slides, lecture notes, and reflections on what worked and what did not. Anyone will be welcome to pick up and use the materials at that point.
I draw on chapters from two books, to which I link below as an experiment to see whether that turns out OK. The first is Chapter 9 (“Interviewing: unstructured and semi-structured”) from Bernard’s Research methods in anthropology. The second is Chapter 2 (“A method of listening”) from Cramer’s The politics of resentment.
I most certainly welcome critique and ideas in advance of lecturing/facilitating!!!
By the end of your pre-work and our session together, you will be able to:
*s indicate key goals for our seminar session
Some working definitions of positionality:
Prepare and send: Drawing on these and your discussions with Dr. Folmar so far, please craft a 1-2 sentence definition of positionality that works for you, such as how you would describe it to your roommate, parents/guardians, or other older family members. Now that autumn is upon us, you might consider how you would introduce this idea, and what you are learning, around the Thanksgiving or other holiday table. <Submit your definition to Folmar by…>
Prof Folmar has noted that one’s own positionality is nearly impossible to describe in full but that, for data collection, there are different ways to ‘lean into’ our own positionality to create more authentic interview and observation experiences. There are also ways to modify aspects of our appearance, behavior, and shared biography to mitigate aspects of our positionality that might otherwise diminish the quality of data we collect.
Despite the challenges in fully articulating our positionality, it is important to reflect deeply on it when deciding what to research and how to prepare for and conduct data collection in particular contexts. Positionality matters for the quality of data we collect. For this reason, there is value in clearly stating those dimensions we think are likely to influence our work, as Cramer does, so that readers of our research can take these factors into account and determine whether our results seem credible to them. That is, why should they believe you got close to ‘the truth’? We should not strive for “a gaze from nowhere” but, instead, to be honest about our own gaze. Some dimensions of positionality map onto Bourdieu’s discussion of capital, including cultural capital; this can be useful additional reading and may provide an organizing principle that resonates with you.
Prepare: As you read Cramer’s ‘A method of listening’ from her book The politics of resentment, please consider and note down your answers to the following questions.
One thing that becomes clear from reading Cramer’s chapter is that the interview—including rapport-building and positionality-defining work—begins well before you ask your first “formal” question: it includes how you arrive, how you introduce yourself, and how you set things up.
Prepare: Revisit the lists you made of Cramer’s dimensions of positionality and steps she took to play up or play down some of these dimensions.
The theory of ‘epistemic injustice,’ articulated by Fricker (2007), holds that “we can be unfairly discriminated against in our capacity as a knower based on prejudices about the speaker, such as gender, social background, ethnicity, race, sexuality, tone of voice, accent, and so on.” We can also discriminate against others, such as our research participants, interpreters, and so on, in their capacity as ‘knowers.’ We usually seek people out to interview because they are experts in something (including their own lives and jobs), and yet we may undervalue their role as ‘knowers’ as we analyze, interpret, and report findings. Issues of epistemic injustice have recently been raised in anthropology and related applied fields, such as global health (such as in Tsosie 2017 and in Bhakuni & Abímbọ́lá 2021).
There are two dimensions of such injustice (drawing on Bhakuni & Abímbọ́lá): testimonial injustice, in which someone’s account is given less credibility because of prejudice about them as a speaker, and hermeneutical injustice, in which someone’s experience is obscured or misunderstood because the prevailing concepts and frameworks do not accommodate it.
A call for epistemic justice might be summed up as, “nothing about us without us.”
Prepare and send: Drawing on these definitions, please craft a 1-2 sentence definition of epistemic justice that works for you, again with the Thanksgiving/family/friend table in mind. <Submit your definition to Folmar by…>
In a 1986 paper, “But is it rigorous?”, Lincoln and Guba articulate the idea of ‘member-checking’ as one way of co-interpreting findings with research participants. They define this as “the process of continuous, informal testing of information by soliciting the reactions of respondents to the investigator’s reconstruction of what she/he/they have been told or otherwise found out, and the constructions [or sense-making] offered by other respondents/participants or sources. It also includes a terminal, formal testing of the final report with a representative sample of stakeholders.”
Prepare:
“Assure your respondents that their participation is crucial and that you are truly interested in what they have to say (and you’d better mean it or you’ll never pull it off)” (Bernard).
Both Bernard and Cramer provide examples of ‘active listening,’ although neither provides a precise definition of the term. It is useful to have a working definition, as well as some tactics for pursuing it in the modality in which you plan to conduct your interviews.
Prepare and send: Using what Cramer and Bernard apply, as well as the internet[1], develop a 1-2 sentence Thanksgiving-table definition of ‘active listening.’ <Submit your definition to Folmar by…>
Like Molière’s protagonist, Monsieur Jourdain, who discovers he has been speaking prose all his life, you may find as you read more about active listening that you and those around you have been doing it all along. You may also see how other professionals–medical, journalistic, and others–use (or fail to use) solid active listening techniques. In data collection, it is important to intentionally cultivate context-appropriate ways of demonstrating that you are really attending to what a participant is saying and feeling. You may have to adapt these for the modality you plan to use (for example, face-to-face, Zoom-to-Zoom, or over the phone).
Prepare:
As Bernard explains, effective probing is key to successful interviews: it is how we pursue high-quality, rich data that isn’t just a reflection of ourselves. He provides seven probe types (separate from tactics for managing particularly verbal and nonverbal respondents).
Prepare:
[Draft slides to follow…]
Posting, with a few amendments, something on which my pop has been working, since I am not sure where and when else he will get it posted!
In England, evidently, hereditary surnames (as distinct from patronymics and other names that changed with each generation) were widespread by the mid-1400s, after a slow spread over the previous four centuries. This adoption appears to have happened through ‘folk’ processes rather than by decree or any specific documented event, with people taking place and occupation terms as names (though perhaps new tax and census processes were coming into play, which warrants more investigation than pop has done so far). A common example is Smith for those working as blacksmiths.
We can at least partially track the origin of ‘lanthorn(e)’ to the mid-1500s. Moreover, the surname Lanthorn seems to appear in tandem with the term’s first documentation. A lanthorn is a lantern whose enclosure is made from thin sheets of horn. Lanterns made of horn have existed since at least the time of the Roman Empire, including as pictured in buildings from Pompeii and Herculaneum. However, these horn-enclosed lanterns were always called just that, ‘lanterns made of horn.’ The contracted form, lanthorn, did not appear until much later.
The Oxford English Dictionary credits the first published use of the term lanthorn(e) to John Heywood’s “Proverbs and Epigrams,” published in 1562, specifically “Of a lanthorne and light” (p. 205). Closely following its appearance in Heywood’s work, lanthorn[e] was published in Fleming’s second edition (1587) of Holinshed’s Chronicles (History of England, Scotland, & Ireland), in a description of the location and ordinances of an almshouse for poor widows, built in 1584 by David Smith (Smyth), master embroiderer to Queen Elizabeth I, in the ward of Baynard’s Castle in London.
At about the same time as this description of an almshouse containing a lanthorn[e], the earliest known use of the surname Lanthorn occurred: Robert Lanthorn was baptized on 14 June 1585, as recorded in the Hackney parish register for the Church of St. Augustine (now St. Paul’s Cathedral), in Greater London. The close proximity in time and place of the term and the surname could reasonably suggest a direct relation between the two.
A potential hiccup in placing the origin of the term lanthorn in the 1500s is the Lanthorn Tower in the Tower of London, constructed in the 1220s and 1230s. As generally described now, the Lanthorn Tower had a lanthorn on its top to aid navigation along the River Thames. As originally built, however, this tower served as the private quarters of King Henry III (1216-1272). One might immediately object to the idea of the tower being topped with a lantern in the 1200s on the grounds of protecting the King: putting a bright light on top of where the King was sleeping would greatly aid enemies and assassins.
Tower of London curators took up my question as to whether this tower was originally called the Lanthorn Tower. They found that “most of the reliable go-to sources (the King’s Works, Buildings and Institutions of the Tower, etc.) state that the Tower was not always known by Lanthorn[e] – but none illuminate what it was previously known by.” The tower is not named in the 1532 record of James Nedeham’s alterations to the Tower, but it is so named in the 1597 Haiward and Gascoyne plan (Fig. 1). It is likely that the “Lanthorn Tower” was wisely left unnamed until it no longer served as a royal residence and until the lanthorn was added as a navigation aid. This timing is also consistent with the term lanthorn[e] appearing in the mid-1500s.