This post represents joint work with the fabulous Mike Frick. It has been a long time coming: to my horror, I see that we first drafted it at the end of December 2010. It has gone through a lot of iterations, trying to fit this journal or that (for example, this working/submitted version here) rather than just getting our ideas out into the world. It has never been a ‘perfect fit.’
However, I find that I return to some of these ideas time and again, so I want something out there. There are places in the manuscript that still have comments, in which Mike and I both acknowledge we wanted to return to the text and see if we could push a little deeper, be a little clearer. Nevertheless, sharing out.
Key reasons for results-sharing and shared sense-making
Key takeaways & recommendations for action
In this paper, we argue for recognizing and including research participants as important stakeholders interpreting and learning from evidence generated through social science research. Nigerian social commentator Onye Nkuzi asked, “If you are doing ‘ground-breaking social science research on Africa’ that Africans cannot access, what, ultimately, is the point?” (Nkuzi 2011). We agree. “The point” matters.
Researchers often do not share research outputs and results with surveyed and studied communities and individuals, limiting access and engagement. (While we focus on study participants in this paper, many of our points apply equally to a range of involved stakeholders, including research field staff, street-level implementers, local (LMIC) ethics boards, and local media.)
This oversight does not always reflect purposeful disregard of participants. Several constraints can limit sharing results and co-interpreting them with participants, and some of these may bind in ‘our’ (global health and development research) current operating set-up. Budgets and time may bind. Limited recognition of what kinds of participant-engagement activities can fit an ongoing or longitudinal research program may constrain sharing. Creativity about meaningful modes of sharing may bind, given the limited precedent on which to draw (itself a product of the current lack of budget for such activities). And careful consideration of the benefits and drawbacks of sharing collected information with those from (and with) whom it was collected may be in short supply.
Whatever the reasons, we contend that these constraints have negative implications for both study participants and researchers. They limit access to and engagement in research. They elide the important role research participants can (and should) play in interpreting and learning from evidence generated through social science research.
We discuss the ethical and practical reasons for sharing results with study participants in social science and development research post-data collection. We propose that this practice has the potential to: (1) improve the credibility and validity of research findings; (2) provide fair benefits through knowledge enhancement and empowerment; and (3) promote long-term relationships, trust and collaboration between researchers and participants.
To develop the evidence on this, we call for more trials, errors, and research into the modalities and effects of sharing results with participant-communities to articulate the possibilities, benefits, and challenges. We close by examining the challenges of results sharing and discuss the roles different stakeholders—from investigators to funders to ethics boards to communities themselves—can play in this activity.
*We acknowledge that in the years since our first draft, there have been a few additional calls for similar action, though not necessarily embedded in the framework we propose and not all pushing beyond ‘results sharing’ into shared sense-making. To give credit where it is due (and you can let us know if we overlook anything major): a call for sharing results is made in this FP2P post, ODI provides an example of results sharing (though we are not convinced of the usefulness of that particular mode), and the call for structured ethics appendices has a placeholder for results-sharing.
An ethical framework for results-sharing
We find it useful to develop an understanding of why results-sharing and shared sensemaking are important, rather than simply asserting that they are. Although ethics in the social sciences merits a broader outlook than the biomedical gaze, clinical trials still provide a good place for social scientists to start thinking about ethical engagement (Alderman, Das, and Rao 2013). A particularly active area of debate centers on clinical trials that lead to the development of a tangible and useful product, such as a novel drug or vaccine (Bhutta 2002).
The Council for International Organizations of Medical Sciences (CIOMS) guidelines require sponsoring agencies to “agree in advance…that any product developed through such research will be made reasonably available to the inhabitants of the host community” (CIOMS 2002). These and similar guidelines apply to tangible products of research. They may not perfectly map to social science research, which may generate not a new product or service but rather knowledge to inform future policy, programming, and research (Bhutta 2002). While social science research often does not impose the same kinds of risks on participants as testing vaccines or drug therapies, this does not attenuate ethical obligations to share research results. Relative to biomedical research, the opportunities for shared sensemaking about social constructs and processes are heightened in social science work.
Even in biomedical research, the final output of research may be knowledge—individually and/or collectively meaningful—rather than a marketable product or technology. For example, in genomic research, where the research product often is knowledge (i.e., the results of genetic testing), a key concern is sharing this information with specific participants (Dressler et al. 2012; Kristman and Kreiger 2008; Simon et al. 2012). Since the field’s start, its researchers have considered rapid and open release of genomic data as critical. For example, in the U.S., the National Human Genome Research Institute requires its grantees to release genetic sequences into open-access databases maintained by the National Institutes of Health (NIH) and other institutes (Lowrance and Collins 2007). While this system of data sharing enables scientific research, it does not connect individual study participants to relevant and medically actionable results. A review of consent forms from genomic studies found that participants are usually told genetic analysis will occur “without any explanation of with whom the resulting data will be shared” (McGuire and Gibbs 2006). An exception is research conducted among indigenous and aboriginal communities, who often have the right to access their genomic data and exercise some determination over its future use (Jacobs et al. 2010; Schuklenk and Kleinsmidt 2006; Taniguchi, Taualii, and Maddock 2012).
Against this backdrop, several thought-leaders in biomedical ethics—looking beyond products such as vaccines but still referring to clinical trials—have called for researchers to (1) meaningfully present research purposes and findings to study participants and (2) involve these participants in further results interpretation and manuscript preparation (Weijer and Emanuel 2000). These components are two in a larger ethical framework proposed by Ezekiel Emanuel and colleagues to govern clinical research in lower- and middle-income countries (LMICs). In order to promote the social value of such research, the architects of this framework suggest that research should be structured as a collaborative partnership between researchers and communities (Emanuel et al. 2004); sharing results is a component of this. This framework holds that consent forms, de-identification of data, locked files (physical and virtual), and other tools of confidentiality build the cornerstone of ethical research—protecting individuals—but remain only the starting point for research that produces fair benefits and social value (Benatar 2002): that is, for research that is ethical.
A recent review of the literature indicates that participants in biomedical and clinical research want “aggregate and clinically significant individual study results communicated to them,” or at least want the opportunity to choose to access these results (Shalowitz and Miller 2008). In a famous example, Richard Doll and Bradford Hill presented their findings on smoking and lung cancer to participants in the British Physicians Health Study. In a report on their findings, Doll and Hill write: “We thought it necessary, in view of the nature of our results, to lay these preliminary data before the survivors of the 40,000 men and women who made them possible” (Doll and Hill 1954). Doll and Hill informed participants of the results through the British Medical Journal, an established information source for physicians such as those taking part in the study. The presentation of results to study participants in LMICs requires further creativity and effort given information channels and barriers in these settings.
At present, Institutional Review Boards / Ethical Review Committees and some grant-making agencies that support research in LMICs require researchers to delineate how they will share their results. Yet these bodies give limited guidance or mandate on what information will be shared with whom, when, through what means, and with what aims. Scant research systematically assesses the desire of LMIC study participants to learn more about the research in which they take part, including its results. From our anecdotal experience, participant-communities and individuals would like to know the results of research and to share in benefits that may stem from them. This is an area for further, more systematic exploration—particularly in relation to two stages of the research process: (1) data validation and analysis; and (2) results dissemination and communication. Such exploration is beyond the scope of this paper; here, we make the case for why it is needed.
Sharing results during data validation and analysis: reciprocity, transparency, and validity
Dialogue and action in the social sciences may further pave the way for discussions about including research participants in validating and analyzing results prior to dissemination. These include conversations about limiting extractive research (spearheaded by Robert Chambers), about increasing research transparency, and about enhancing research validity.
Often, research is ‘extractive.’ That is, outsiders gather raw information to take away for processing and construction into valuable and meaningful (knowledge) products (Chambers 2007). Chambers has helped shape an appreciation for a participatory approach to development and research as the antithesis of extractive approaches, a dichotomy echoed by Leslie Groves (Groves 2015). But much of the current work on participation focuses on involving stakeholders in the generation and collection of data (such as mapping geographic space and power relationships as understood locally, as well as Chambers’s newer focus on participatory numbers and statistics). This may be empowering in some measure, but it does not necessarily achieve the true opposite of extraction, which is to insert or reciprocally create something of value. In our estimation, a more productive alternative to “extractive” (Groves 2015 – paper, not blog) research — which could still occur even if individuals or communities under study actively participate — is research that is reciprocal.
We encourage researchers to take this step beyond participatory data collection by calling for those providing raw data to play a role in the construction of knowledge products – that is, in the interpretation of results that considers multiple perspectives and that builds partnerships with study participants. This, we believe, goes some way to limit the extractive nature of research by attending to fair benefits and social value, and by producing interpretations and recommendations that accord with the experiences of participant-communities. This notion fits within the ‘feedback typology’ provided by Groves, who points out that beneficiary feedback can be elicited during four stages of program evaluation (an activity related to but distinct from research): (1) evaluation design; (2) data collection; (3) validation (sensemaking) and analysis; and (4) dissemination and communication (Groves 2015). Our interest in this paper is particularly on the latter two elements as a key frontier for ethical research.
Creating reciprocal exchange around research results will require increasing the transparency of research outcomes. Recently, research transparency has received increasing focus in the social sciences, particularly with regard to research design and data analysis (Miguel et al. 2014; Ioannidis et al. 2014). Popular measures to increase the transparency of research include pre-analysis plans to prevent later data-mining and specification-searching and open data and journal access to allow new or replicated analyses (for one example, see (King and Alvarez 2014)). These efforts are usually directed toward transparency vis-à-vis other researchers, as the assumed end-users of research. They only tangentially address transparency from the perspective of research participants themselves. Even broader conversations about transparency in research for the benefit of the public at large do not always acknowledge that a subset of the public may have special claim to receive research results, given their role as research participants.
In addition to increasing transparency, researchers may strengthen the quality and internal validity of their work by engaging study participants during data analysis and interpretation (Lincoln and Guba 1986; Schwandt, Lincoln, and Guba 2007). Often, researchers gain interpretive insight into their results through a chance participant comment (e.g., Thornton 2008). Member-checking (also called shared sensemaking) offers a systematic means of verifying or co-interpreting the results with the participant-community, allowing them to also serve as analysts. Member-checking refers to a practice crystallized by Lincoln and Guba to enhance research ‘credibility’ (internal validity) by providing validity and reality checks (Lincoln and Guba 1986; Schwandt, Lincoln, and Guba 2007). Throughout the research process—as is currently more common in a qualitative research paradigm—researchers may gain insight from informal tests of initial interpretations among the participant (or a comparable) community. Researchers may reject some interpretations and consider new ones.
Lincoln and Guba also recommend “terminal, formal testing of the final case report with a representative sample of stakeholders” (Lincoln and Guba 1986). Terminal testing offers an opportunity to share research data, processes, results, and interpretations with participant-communities (Carlson 2010). Given possible concerns about presenting the raw data for auditing and clarification, we advocate sharing aggregated results and interpretations. Clear decisions on the array of information to be shared need not always be determined ex ante but can emerge over the course of the research—a flexibility that is easier to achieve in the social sciences, but also possible within biomedical research.
Taking time (and resources) to return to participants to involve them in member-checking results allows researchers to assess the internal validity of their work and ensures that participant-communities do indeed receive these results. It also acknowledges that insights arise from interactions and relationships between researchers and participants. Member-checking gives voice to participants in a way merely collecting data does not afford (Carlson 2010).
As an extension, participants may also contribute to manuscript preparation. What a research team might normally publish represents a single interpretation of the data collected. Consensus need not be reached on the interpretation of results; competing interpretations can be published, even within the same paper (Weijer and Emanuel 2000). Of course, study participants will not be involved in each manuscript decision. However, these discussions can pave the way for long-term research efforts and even the development of new intervention strategies (Emanuel et al. 2004).
Involving participants and other community stakeholders in the dissemination of research findings creates further avenues to build partnerships with participants that extend beyond data collection. The opposite of extractive research is research that is, variously, participatory, inclusive, authentic, and/or empowering, with empowerment seen as a continuum rather than a threshold (Chambers 2007; Lincoln and Guba 1986). Lincoln and Guba suggest that research authenticity stems, in part, from the extent to which it raises consciousness of a problem among participants, fosters appreciation of others’ views of an issue and its potential solutions, and, most challengingly, stimulates not only action by participants, but successful action (Lincoln and Guba 1986). Such consciousness-raising accords with the Hegelian or Freireian sense of enabling people to see the structures underlying the problems facing them.
Sharing initial and final results, shared sensemaking, and communication
Participants stand to benefit most when shared information is clear and contextualized. Researchers need pertinent, creative ways to present information to participants. To do so, they may need to return to communities after data collection or during analysis. To package information, researchers should consider the literal translation of scientific terms into local languages and anticipate the challenges participants may experience in understanding the nature and implications of formal research (Appadurai 2006, building on ideas from Freire). The degree to which sharing the research purpose, design, and findings is empowering will depend on what is shared, through what modes, with whom, and with what degree of acceptance.
In the biomedical research field, many clinical trials networks have dedicated community engagement staff or community advisory boards (CABs) that play a primary role in disseminating results and communicating study findings to a broad constituency of involved stakeholders. One example is the Community Partners program, which guides community engagement work across the research networks supported by the Division of AIDS of the U.S. National Institutes of Health. Community Partners produces lay summaries of trial protocols, relays information to community stakeholders during trial conduct, and advises study teams on selecting appropriate and targeted dissemination strategies for releasing study findings (Kagan et al. 2012).
As a starting point for selecting modes of dissemination, social scientists should consider how laypersons and non-experts interpret scientific data, including their use of heuristics (Grotzer, Miller, and Lincoln 2012; Nickerson, Perkins, and Smith 1985; Sunstein 2002). In one example of results-sharing, at the end of a study on sexual networks and the spread of sexually transmissible infections (STIs) in East Africa, researchers held an educational session with participating women. Though the full analysis remained underway, the researchers shared de-identified sexual network images and engaged participants in a hands-on (quite literally, hand-holding) activity on the differential risk of STI transmission in serial versus concurrent sexual partnerships. This made the risks immediately interpretable to the women involved (M. Morris, personal communication, April 2009), although to our knowledge, the researchers did not conduct further research on whether this, in turn, influenced behavior within networks.
Such proactive results-sharing is the exception, even in biomedical research. The Ethics Advisory Group of the International Union Against Tuberculosis and Lung Disease (the Union) reviewed content from eight years’ worth of study protocols—clinical trials, observational studies, and record reviews—that it supported. The Union found that “few researchers described how their results would be presented to study participants and their communities,” although all proposals mentioned dissemination in scientific journals (Edington et al. 2013). Members of the Union’s Ethics Advisory Board argue forcefully that sharing knowledge about health problems and potential solutions with communities creates opportunities for participants to “exercise their rights regarding health and other services” (Edington et al. 2013). In this way, results dissemination might be understood as an enabler of empowerment processes.
In an optimistic view of empowerment, people may be able to use research results to “gain or improve a service, assert their rights or obtain a better deal” (Chambers 2007). Participants may be able to use information shared with them to affect—directly or through collective action and advocacy—local behavior or improvements in well-being. This is, for example, part of the idea behind Vijayendra Rao’s work with the Social Observatory — and Rao notes that linking findings with action remains a challenge (World Bank 2016; Rao and Aiyer 2016).
In theory, if the information reveals that conditions or problems are different or more severe than participant-communities had understood, then study information may lead to reflection on the present situation and a new articulation of problems, which may in turn lead to a new framing of possible solutions. By hearing about the research process and results, respondents may think in a more critical and evidence-informed way about their communities, rather than relying on stereotypes or other assumptions about their context. Understanding the research in which they took part may be enjoyable and have intrinsic value for participants and, further, lead respondents to garner a sense of control and even skill in considering their own problems in the future (Appadurai 2006; Chambers 2007; Zimmerman A. 1990).
However, from years of information-sharing campaigns in public health and beyond, we know that much lies between the provision of information and the taking of (collective) action (for a recent example outside of public health, see Lieberman, Posner, and Tsai 2014). People must be ready, willing, and able to translate information into action (Lesthaeghe and Vanderhoeft 2001). Moreover, as in participatory approaches to development, much depends on elite capture within a community and the capacity and incentives for action within a community as a whole, which itself may be moderated by degree of heterogeneity (Mansuri and Rao 2012). Whether, when, and how these processes benefit participant-communities remains an empirical question. Evaluating the impact of sharing research results with participant-communities will enhance our understanding of the potential of the approach for which we advocate here.
Despite the limited evidence base, the development field has produced several examples of long-term research initiatives in communities that have included efforts to affect social change through knowledge-sharing and empowerment among community stakeholders. One example is the Townsend Thai Project, which has produced panel data on the spending of Thai households for over 150 consecutive months. Robert Townsend and his study team are now beginning to give research results to a randomly selected subset of households to see if sharing this information changes their financial decision-making. This approach suggests that results sharing itself can adopt an experimental, investigative form (Townsend, Sakunthasathien, and Jordan 2013).
Beyond fostering empowerment and inspiring collective action, including communities in results dissemination may also promote long-term ties with research teams and improved understandings of the place of research in social life. Emanuel et al warn that “very little can create more resentment, mistrust and a sense of exploitation than unfair distribution of the benefits of collaboration” (Emanuel et al. 2004). Particularly for researchers planning to work in a single study area with the same or similar participants over a long period, such resentment and mistrust must be avoided.
Mistrust can result when study participants feel disconnected from the research process or the conclusions it generates. Even before results are ready, information can be shared on the research process, including the importance of particular questions to the generation of results. Sharing this kind of information might respect a community’s desire to do more than offer its data to researchers while still protecting later study rounds from contamination. This may also help limit research fatigue among community members in the face of repeated research.
One of the authors (MWF) saw such fatigue among a community of intravenous drug users in an East Asian country who have participated in repeated public health surveys. These individuals have come to see research interviews as a rote exercise with little benefit to themselves, beyond the immediate, small financial incentive for participation. One participant asked, “Will this kind of research actually help people in our position? Will people in power actually listen to us?” Limiting research fatigue, and the linked tendencies to withhold cooperation or to provide inaccurate or incomplete information, is a key benefit of continuing dialogue with participant-communities after—or between rounds of—data collection.
Challenges to results sharing
There are several significant challenges to results sharing in the social sciences. First, this effort may be costly in time or resources, especially as studies grow in geographic scope or duration (Alderman, Das, and Rao 2013). Often a long period elapses between a specific research study, the release of the results, and the possible changes in programs or policies aimed at improving well-being or development in the participant-community (Emanuel et al. 2004). However, experiences from clinical and social science randomized controlled trials show that results can be shared with large numbers of participants without adding undue costs. After concluding the first phase IIb efficacy trial of a new TB vaccine in over forty years, the South African TB Vaccine Initiative (SATVI) went door-to-door to inform the nearly 2,800 mothers with enrolled infants that the vaccine was not effective. Investigators at SATVI point out that this effort allowed them to “take the community’s pulse” after a negative result. Many mothers reportedly reacted to the disappointing news by asking when they could enroll their children in the next trial. They further expressed pride in their participation in the search for a new TB vaccine (M. Tameris, personal communication, October 2013).
While SATVI successfully disseminated results by going door-to-door, the method of results dissemination need not match the method of data collection. Researchers may beneficially use technology or media to share results and solicit feedback. For example, one of the authors (HEL) used text messaging to convey basic process and outcome information to those participating in a malaria medications study in Ghana that used text messaging as a key component of the intervention (Raifman et al. 2014). Creative approaches that consider results dissemination beyond village meetings or door-knocking may unlock feasible options to share research results on a wider scale without great financial burden.
Second, researchers may worry that sharing results might bias or “contaminate” studies by adding an additional intervention component (or “treatment”) that will influence the behavior or responses of participants over the course of the study. This can make it more difficult to make causal claims about the effect of the original intervention on the outcomes observed. For example, if researchers are studying or evaluating an intervention that distributes X health product, feeding back mid-term results to study communities about the impact of X on behavior or health outcomes may itself change behavior. In such cases, sharing results may best be deferred until after the last round of data collection is complete, or data shared over the course of the study would need to be carefully selected as unlikely to influence outcomes.
Third, dissemination might pose special risks for vulnerable or minority groups within participant-communities, and researchers will need to consider this possibility when designing subject protections. The process of preparing to share results can itself encourage researchers to thoughtfully engage with participant-communities throughout the research process. In this process, researchers can be guided by community input about what information is appropriate to share, how information could be disseminated, and who should have access. Social scientists must balance the community’s desire for certain data against the need to maintain confidentiality and anonymity. The ethical dissemination of study results will benefit greatly from meaningful engagement with community representatives early in the research process (Bhutta 2004).
In clinical trials, such decisions are often aided by community advisory boards (CABs), which facilitate effective communication between investigators and the host community on important ethical issues (Boulanger et al. 2013). Investigators studying HIV and TB have also received guidance on results dissemination in the Good Participatory Practice Guidelines for Biomedical HIV Prevention Trials (GPP-HIV) and the Good Participatory Practice Guidelines for TB Drug Trials (GPP-TB) – two guidance documents developed in conjunction with affected communities, trial sponsors, civil society, and researchers (Regimens 2012; UNAIDS 2011).
For all of the activities discussed in this paper, it may be difficult to determine who constitutes “the community” and its leaders, although these questions arise at the beginning of most studies. Indeed, approaching research with the expectation of continued engagement may help researchers think about issues of community more clearly during the early stages of research design. Early engagement can allow researchers to better understand what information the participant-community feels would be “too sensitive” to collect or to share, which may then require either a careful articulation of the benefits of this information to the participants or a reconsideration of aspects of the research design (e.g. (Boga et al. 2011)).
Conclusions and considerations
Each of these challenges can be overcome, often in ways that may strengthen research processes and outcomes. To realize this potential, however, results sharing must occupy a more central position in the planning of research projects. The following activities may help social scientists and research institutions resolve some of the above challenges and integrate results dissemination into their work:
- Create community advisory structures. Development organizations that conduct social science research, especially long-term projects in a given location, could consider establishing community advisory boards (CABs) that can maintain a dialogue with communities throughout the research process and guide results dissemination and co-interpretation. Here, social scientists can receive practical guidance from their colleagues involved in HIV and TB clinical trials, where CABs have played an increasingly important role in communicating study findings and maintaining reciprocal, sustained relationships between researchers and participant communities.
- Build results sharing into experimental designs. Just as many clinical trials include small studies evaluating trial operations (for example, sub-studies exploring the quality of informed consent administration or laboratory conduct), development organizations and social science research institutions should build formal examinations of community engagement and results sharing into the larger plan and timeline of research projects. These process studies, which can often be nested into larger studies at modest additional cost, may help to generate comparative data on effective modes of results dissemination and post-trial engagement with communities. Building results sharing into protocols may also open the door for IRBs and ethics boards to gain experience and familiarity in evaluating these types of activities.
- Plan for results sharing from the outset. Planning for results dissemination starts with protocol development. Including a commitment to share results with participant communities in early drafts of protocols will encourage researchers to anticipate and plan for results dissemination in a more forthright, participatory manner. One recent example comes from the Tuberculosis Trials Consortium (TBTC), a research consortium housed at the U.S. Centers for Disease Control and Prevention that conducts trials evaluating shorter-course TB drug regimens. One of the authors (MWF) coordinates the TBTC’s community advisory board (called the Community Research Advisors Group), which successfully advocated for the principal investigators planning a phase III study to include language in the protocol that “overall (aggregate) study results will be shared with study participants through mechanisms and materials reviewed and approved by the TBTC Community Research Advisors Group” (TBTC Study 31 protocol team, personal communication, December 2013). This statement ensures that community representatives will play a role in results dissemination without locking the investigators into overly prescriptive statements about what specific results will be shared or how this will occur. A similar approach could be used to mainstream results dissemination into protocols for social and behavioral science studies, whether descriptive surveys or experimental investigations.
Building greater participant-community engagement through results dissemination honors the ethical obligation to provide study participants with “fair benefits” in the face of potential harms and further acknowledges the expressed desire of communities to do more than give data. Whether engagement takes the form of “member-checking” to validate data or wider dissemination of research information, sharing results offers social scientists an avenue for building the kind of collaborative partnerships that make sustained research possible. Moreover, it allows researchers to refine and reconsider their interpretation of the results and the implications of the research. To achieve this, researchers will need to think beyond the minimum requirements of extant IRB protocols and explicitly address how they will share results with study participants, what kind of information will be shared, and when. Our contention is that results sharing can generate research that is more transparent, fairer, and possibly more trustworthy and more useful. It also does greater justice to participants by giving them access to the research in which they take part. More empirical research is required on the means and effects of sharing results with participant-communities to fully understand the benefits of this approach for participants, researchers, and the partnership between them. But, as the clinical trials literature suggests, the ethical obligation to engage communities in the results of research is already clear.
Acknowledgements and notes
This paper began as a shared interest between the authors in a research methods course co-taught by Dr. Daniel Halperin (now at the University of North Carolina at Chapel Hill Gillings School of Global Public Health) and Dr. Heather Lanthorn. Both Mike and Heather were strongly influenced not only by what they had seen during data collection but also by personal experiences of participating in clinical trials and required Psych-101 studies and never hearing anything back about the results. We thus have deep empathy for communities that want to hear about findings, as well as an intellectual and justice-based commitment to shared sense-making.
Dr. Halperin encouraged our thinking and urged us to pursue a paper (though without specific guidance on how to get a conceptual paper out in the world). Both authors have had important interactions while working on research projects, including with Abubakari Bukari and Salifu Amadu at Innovations for Poverty Action – Ghana and investigators at the Tuberculosis Trials Consortium. Drs. Richard Cash (Harvard T.H. Chan School of Public Health and the Public Health Foundation of India), Howard White (formerly of the International Initiative for Impact Evaluation), and Marc Zimmerman (University of Michigan School of Public Health) offered important critiques of drafts of this paper. Dr. David McKenzie (World Bank), Dr. Biju Rao (World Bank), Dr. Robert Townsend (Massachusetts Institute of Technology), and Raag Bhatia (International Initiative for Impact Evaluation) offered encouragement and insight at different stages of the process. Of course, all mistakes are our own.
The journey of this paper has been frustrating. When we submitted to journals like Social Science & Medicine, we were told our point was obvious and well-known to anthropologists. Meanwhile, economics-led development journals thought the ideas could be interesting but not broadly relevant, and offered little guidance on where a conceptual paper might live. Posting it here at least puts a timestamp on our ideas. In any case, these ideas certainly seem more relevant now than ever.
References
Alderman, Harold, Jishnu Das, and Vijayendra Rao. 2013. “Conducting Ethical Economic Research.”
Amon, Joseph J, Stefan D Baral, Chris Beyrer, and Nancy Kass. 2012. “Human Rights Research and Ethics Review: Protecting Individuals or Protecting the State?” PLoS Medicine 9 (10): e1001325.
Angell, M. 1997. “The Ethics of Clinical Research in the Third World.” The New England Journal of Medicine 337 (12): 847–49. doi:10.1056/NEJM199709183371209.
Appadurai, Arjun. 2006. “The Right to Research.” Globalisation, Societies & Education 4 (2): 167–77. doi:10.1080/14767720600750696.
Benatar, Solomon R. 2002. “Reflections and Recommendations on Research Ethics in Developing Countries.” Social Science & Medicine 54 (7): 1131–41.
Beskow, L M, W Burke, J F Merz, P A Barr, S Terry, V B Penchaszadeh, L O Gostin, M Gwinn, and M J Khoury. 2001. “Informed Consent for Population-Based Research Involving Genetics.” JAMA: The Journal of the American Medical Association 286 (18): 2315–21.
Bhutta, Z A. 2002. “Ethics in International Health Research: A Perspective from the Developing World.” Bulletin of the World Health Organization 80 (2): 114–20.
———. 2004. “Beyond Informed Consent.” Bulletin of the World Health Organization 82 (10). World Health Organization: 771–77. doi:10.1590/S0042-96862004001000013.
Boga, M, A Davies, D Kamuya, S M Kinyanjui, E Kivaya, F Kombe, T Lang, et al. 2011. “Strengthening the Informed Consent Process in International Health Research through Community Engagement: The KEMRI-Wellcome Trust Research Programme Experience.” PLoS Medicine 8 (9): e1001089. doi:10.1371/journal.pmed.1001089.
Boulanger, Renaud F, Stephanie Seidel, Erica Lessem, Lee Pyne-Mercier, Sharon D Williams, Laia Ruiz Mingote, Cherise Scott, Alicia Y Chou, and James V Lavery. 2013. “Engaging Communities in Tuberculosis Research.” The Lancet Infectious Diseases.
Carlson, J. 2010. “Avoiding Traps in Member Checking.” The Qualitative Report 15 (5): 1102–13.
Chambers, Robert. 2007. “Who Counts? The Quiet Revolution of Participation and Numbers.” Institute of Development Studies (UK).
CIOMS. 2002. “International Ethical Guidelines for Biomedical Research Involving Human Subjects.” Geneva, Switzerland: Council for International Organizations of Medical Sciences.
Cooper, M. 2008. “Sharing Data and Results with Study Participants: Report on a Survey of Cultural Anthropologists.” Journal of Empirical Research on Human Research Ethics: JERHRE 3 (4): 19–34. doi:10.1525/jer.2008.3.4.19.
Doll, R, and A B Hill. 1954. “The Mortality of Doctors in Relation to Their Smoking Habits; a Preliminary Report.” British Medical Journal 1 (4877): 1451–55.
Dressler, L G, S Smolek, R Ponsaran, J M Markey, H Starks, N Gerson, S Lewis, et al. 2012. “IRB Perspectives on the Return of Individual Results from Genomic Research.” Genetics in Medicine: Official Journal of the American College of Medical Genetics 14 (2): 215–22. doi:10.1038/gim.2011.10.
Edejer, T T. 1999. “North-South Research Partnerships: The Ethics of Carrying out Research in Developing Countries.” BMJ (Clinical Research Ed.) 319 (7207): 438–41.
Edington, M, T Ornstein, J Denholm, A Sony, S.J. Kim, A Narain, and R O’Brien. 2013. “No Title.” Public Health Action 3 (4): 346–50.
Emanuel, E J, D Wendler, J Killen, and C Grady. 2004. “What Makes Clinical Research in Developing Countries Ethical? The Benchmarks of Ethical Research.” The Journal of Infectious Diseases 189 (5): 930–37. doi:10.1086/381709.
Goldstein, Markus. 2014. “Ethical Validity Response #3: Would a Graph Help?” Blog. Development Impact. March 20. http://blogs.worldbank.org/impactevaluations/ethical-validity-response-3-would-graph-help.
Grotzer, Tina A., Rebecca B. Miller, and Rebecca A. Lincoln. 2012. “Perceptual, Attentional, and Cognitive Heuristics That Interact with the Nature of Science to Complicate Public Understanding of Science.” In Advances in Nature of Science Research: Concepts and Methodologies, edited by M Khine, 27–49. Springer. doi:10.1007/978-94-007-2457-0_2.
Groves, Leslie. 2015. “Breaking Down ‘Feedback’: A Typology.” Blog. Beneficiary Feedback in Evaluation. January 6. https://beneficiaryfeedbackinevaluationandresearch.wordpress.com/2015/01/06/breaking-down-feedback-a-typology/.
Hamermesh, Daniel. 2012. “Six Decades of Top Economics Publishing: Who and How?” NBER Working Paper Series.
Ioannidis, John P A, Sander Greenland, Mark A Hlatky, Muin J Khoury, Malcolm R Macleod, David Moher, Kenneth F Schulz, and Robert Tibshirani. 2014. “Increasing Value and Reducing Waste in Research Design, Conduct, and Analysis.” The Lancet 383 (9912): 166–75. doi:10.1016/S0140-6736(13)62227-8.
Jacobs, B, J Roffenbender, J Collmann, K Cherry, L L Bitsoi, K Bassett, and C H Evans Jr. 2010. “Bridging the Divide between Genomic Science and Indigenous Peoples.” The Journal of Law, Medicine & Ethics: A Journal of the American Society of Law, Medicine & Ethics 38 (3): 684–96. doi:10.1111/j.1748-720X.2010.00521.x.
King, Gary, and R. Michael Alvarez. 2014. “Gary King: An Update on Dataverse.” OUPblog. December 7. http://blog.oup.com/2014/12/gary-king-update-dataverse-journal-database/.
Kristman, V L, and N Kreiger. 2008. “Information Disclosure in Population-Based Research Involving Genetics: A Framework for the Practice of Ethics in Epidemiology.” Annals of Epidemiology 18 (4): 335–41. doi:10.1016/j.annepidem.2007.10.003.
Lesthaeghe, Ron, and Camille Vanderhoef. 2001. “Ready, Willing, and Able: A Conceptualization of Transitions to New Behavioral Forms.” In Diffusion Processes and Fertility Transition, edited by John B. Casterline. Washington, D.C.: National Research Council.
Lieberman, Evan S., Daniel N. Posner, and Lily L. Tsai. 2014. “Does Information Lead to More Active Citizenship? Evidence from an Education Intervention in Rural Kenya.” World Development 60 (August): 69–83. doi:10.1016/j.worlddev.2014.03.014.
Lincoln, Yvonna S, and Egon G Guba. 1986. “But Is It Rigorous? Trustworthiness and Authenticity in Naturalistic Evaluation.” New Directions for Program Evaluation 1986 (30): 73–84. doi:10.1002/ev.1427.
Lowrance, W W, and F S Collins. 2007. “Ethics. Identifiability in Genomic Research.” Science 317 (5838): 600–602. doi:10.1126/science.1147699.
Mansuri, Ghazala, and Vijayendra Rao. 2012. Localizing Development: Does Participation Work? Vol. 1. World Bank Publications.
McGuire, A L, and R A Gibbs. 2006. “Genetics. No Longer de-Identified.” Science 312 (5772): 370–71. doi:10.1126/science.1125339.
McKenzie, D. 2011. “Should We Share Results with Respondents?”
McKenzie, David. 2014. “Ethical Validity Response #2: Is Random Assignment Really That Unacceptable or Uncommon?” Blog. Development Impact. March 19. http://blogs.worldbank.org/impactevaluations/ethical-validity-response-2-random-assignment-really-unacceptable-or-uncommon.
Miguel, E., C. Camerer, K. Casey, J. Cohen, K. M. Esterling, A. Gerber, R. Glennerster, et al. 2014. “Promoting Transparency in Social Science Research.” Science 343 (6166): 30–31. doi:10.1126/science.1245317.
Nickerson, R, D Perkins, and E Smith. 1985. “Chapter 5: Errors and Biases in Reasoning.” In The Teaching of Thinking, 111–42. Hillsdale, NJ: LEA.
Ozler, Berk. 2014. “Taking Ethics Seriously: Response #1.” Blog. Development Impact. March 18. http://blogs.worldbank.org/impactevaluations/taking-ethics-seriously-response-1.
Raifman, Julia R G, Heather E Lanthorn, Slawa Rokicki, and Günther Fink. 2014. “The Impact of Text Message Reminders on Adherence to Antimalarial Treatment in Northern Ghana: A Randomized Trial.” PloS One 9 (10): e109032. doi:10.1371/journal.pone.0109032.
Rao, Vijayendra, and Yamini Aiyer. 2016. “The Social Observatory: Building Adaptive Capacity in Large-Scale Projects.” presented at the 3ie Delhi Seminar Series, Delhi, February 1.
Ravallion, Martin. 2014. “Taking Ethical Validity Seriously.” Blog. Development Impact. March 17. http://blogs.worldbank.org/impactevaluations/taking-ethical-validity-seriously.
Critical Path to TB Drug Regimens Stakeholder and Community Engagement Working Group. 2012. Good Participatory Practice Guidelines for TB Drug Trials 2012. Critical Path to TB Drug Regimens.
Schuklenk, U, and A Kleinsmidt. 2006. “North-South Benefit Sharing Arrangements in Bioprospecting and Genetic Research: A Critical Ethical and Legal Analysis.” Developing World Bioethics 6 (3): 122–34. doi:10.1111/j.1471-8847.2006.00149.x.
Schwandt, Thomas A, Yvonna S Lincoln, and Egon G Guba. 2007. “Judging Interpretations: But Is It Rigorous? Trustworthiness and Authenticity in Naturalistic Evaluation.” New Directions for Evaluation 2007 (114): 11–25. doi:10.1002/ev.223.
Shalowitz, D I, and F G Miller. 2008. “Communicating the Results of Clinical Research to Participants: Attitudes, Practices, and Future Directions.” PLoS Medicine 5 (5): e91. doi:10.1371/journal.pmed.0050091.
Simon, C, L A Shinkunas, D Brandt, and J K Williams. 2012. “Individual Genetic and Genomic Research Results and the Tradition of Informed Consent: Exploring US Review Board Guidance.” Journal of Medical Ethics 38 (7): 417–22. doi:10.1136/medethics-2011-100273.
Sunstein, C. 2002. “Chapter 2: Thinking about Risks.” In Risk and Reason: Safety, Law and the Environment, 28–52. Cambridge, UK: Cambridge University Press.
Taniguchi, N, M Taualii, and J Maddock. 2012. “A Comparative Analysis of Indigenous Research Guidelines to Inform Genomic Research in Indigenous Communities.” The International Indigenous Policy Journal 3 (1).
Thornton, R L. 2008. “The Demand For, and Impact Of, Learning HIV Status.” The American Economic Review 98 (5): 1829–63. doi:10.1257/aer.98.5.1829.
Townsend, Robert M., Sombat Sakunthasathien, and Rob Jordan. 2013. Chronicles from the Field: The Townsend Thai Project. MIT Press.
UNAIDS and AVAC. 2011. Good Participatory Practice Guidelines for Biomedical HIV Prevention Trials 2011. Second edition. Geneva, Switzerland: Joint United Nations Programme on HIV/AIDS.
Varmus, H, and D Satcher. 1997. “Ethical Complexities of Conducting Research in Developing Countries.” The New England Journal of Medicine 337 (14): 1003–5. doi:10.1056/NEJM199710023371411.
Weijer, C, and E J Emanuel. 2000. “Ethics. Protecting Communities in Biomedical Research.” Science 289 (5482): 1142–44.
Wilson, D. 1999. “North-South Research in Developing Countries Must Respond to Community’s Priorities.” BMJ (Clinical Research Ed.) 319 (7223): 1496–97.
World Bank. 2014. “Taking a First Step toward Establishing World Bank Ethical Guidelines for Research.” February 19. http://www.worldbank.org/en/news/feature/2014/02/19/taking-a-first-step-toward-establishing-world-bank-ethical-guidelines-for-research.
———. 2016. “Social Observatory.” Accessed February 6. http://www.worldbank.org/en/programs/social-observatory.
Zimmerman, Marc A. 1990. “Taking Aim on Empowerment Research: On the Distinction between Individual and Psychological Conceptions.” American Journal of Community Psychology, no. 1 (February): 169–77. doi:10.1007/BF00922695.