just a small collection of not-really compliments:
1. you talk fast but it seems to work for you -steve folmar
2. you use small words it seems to work for you -derrick matthews
3. you use run-on sentences but it seems to work for you -pop
this blog reflects conversations and on-going work with mike frick (@mwfrick), shagun sabarwal (@shagunsabarwal), and urmy shukla (@urmy_shukla) — they should receive no blame if this blog is wacky and plenty of credit if it is not.
a recent post by monkey cage contributors at the washington post, later summarized by BITSS, suggested that “exploratory, qualitative, historical, and case-based research is much harder to present in a results-free manner, and perhaps impossible to pre-register.”
this was just a brief point in their larger argument, which i agree with. but it seems worth pausing to consider whether it is likely to be true. below, i discuss research design for qualitative work, sampling considerations, and analysis itself. throughout, i take a ‘pre-analysis plan’ to be a commitment about what the research will analyze and report on, not a constraint on doing analyses that are ‘off plan.’ rather, the researcher just needs to be explicit about which analyses were ‘on plan’ and which were not, and commits to report everything that was ‘on plan’ – or to explain why such reporting is infeasible.
my conclusion: a conversation on pre-analysis plans needs to distinguish whether planning is possible from whether planning is currently done. in general, i feel planning for analysis is possible when you plan to do analysis.
disclaimer: in this post, my reference to ‘qualitative research’ is to the slice of social science research that has to do with talking to and/or observing and/or participating with living people. i claim no knowledge on the analyses of historical manuscripts and a wide array of other qualitative research. by extension, i am mostly talking about planning for the collection and analysis of data from in-depth interviews, focus group discussion, and forms of (participant-) observation.
designing research: working with and asking questions of living people implies an ethics-review process, for which the researcher will have to lay out at least the ‘domains’ (aka themes, topics, categories) of information s/he hopes to observe and ask people about. usually, one does not get away with saying “i am just going to head over here, hang out, and see what i find.” this requires planning.
like a pre-analysis plan, the domains for qualitative work can set up some bounds for the minimum of what will be collected and reported (“we will collect and analyze data on the following topics: x, y, z“), even if the final report is that a particular domain ended up being a flop because no one wanted to talk about it or it proved uninteresting for some reason.
some of the most famous ethnographies (say, nisa and tuhami) focus on a single person, often to try to give perspective on a larger culture — which may not be what the ethnographer initially set out to study. but the ethnographer can still tell you that (“i went to look at x, and here’s what i found — but i also found this really interesting person and that’s what the rest of the book is about”). so this does not seem inconsistent with the underlying logic of a plan, with the understanding that such a plan does not dictate everything that follows but does mandate that one reports why things changed.
which brings us to the nature of qualitative work: it is often iterative and the researcher often conducts data collection, entry, and analysis in parallel. analysis from an early set of interviews informs questions that are asked and observations attended to later on. this is one of the exciting (to me) elements of qualitative research, that you get to keep incorporating new learnings as you go along.
this approach need not be inconsistent with having a set of domains that you intend to explore. within each, maybe the questions get sharper, deeper, or more elaborate over time. or maybe one planned domain turns out to be way too sensitive or way off-base. again, the researcher can report, relative to the initial plan, that this is what happened between designing the research and actually doing it.
sampling: certain aspects of qualitative research can be planned in advance. usually the aim is to be in some way representative. one way to aim for representation is to consider sub-groups of interest. in large-n analysis, the researcher may be able to hope that sufficient numbers of sub-groups will appear in the sample by default. in smaller-n analysis, more purposive sampling plans may be needed to be sure that different sub-groups are engaged in conversation. but, specifying sub-groups of interest can be done in advance — hence, plannable.
but, at least some branches of qualitative research suggest that representativeness is about outputs rather than inputs — that what the researcher is seeking is saturation (i am thinking of lincoln and guba here), which has implications for planning. ‘saturation’ relates to whether the researcher is starting to hear the same answers over and over.
in some cases, inputs are the determinant of representation — similar to the approach that can be taken in large-n work. let’s say that you want to get the input of the members of an elite government entity — a particular committee with 20 people on it. fine, plan to talk to all of them. representativeness is here achieved by talking to all of the relevant people (the whole population of interest) – and then finding some way of summarizing and analyzing the viewpoints of all of them, even if it’s 20 different viewpoints. there’s your sampling plan. (this may or may not be part of a pre-analysis plan, depending how that is defined and at what stage of the research process it is required. i take these to be open questions.)
for less clearly bounded groups that nevertheless have clear characteristics and may be expected to think or behave differently — let’s say men versus women, older versus younger people, different economic quintiles, different ethnic groups, whatever — then planning for saturation may look more like: plan to talk to men until we start getting repeated answers on key questions of interest, or conduct focus groups that are internally homogenous with respect to ethnicity until we start to hear similar answers within each ethnicity (because it may be different numbers within each).
that is, if representativeness is focused on output, then it is insufficient to plan at the beginning “we will do two focus groups in each village in which we collect data.” the researcher can specify the sub-groups of interest but probably not the number of interviews, focus groups, or hours of observation required.
i make this point for two reasons. first, a pre-analysis plan for qualitative work should plan for iteration between considering what has been collected and whether more questions are necessary to make sense of the phenomena of interest. this makes it different in practice than a quantitative plan but the underlying principle holds.
second, a pre-analysis plan, if it covers sampling, probably cannot plan for specific numbers of inputs unless the population is clearly bounded (like the committee members). rather, the plan is to aim for saturation within each sub-group of interest.
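a saturation-based stopping rule can itself be made explicit in a plan, even though the final n cannot. here is a minimal sketch (the themes, the function name, and the threshold of three quiet interviews are all hypothetical, chosen only for illustration): keep interviewing within one sub-group until several consecutive interviews surface no new themes.

```python
def interview_until_saturated(interviews, stop_after=3):
    """Interview within one sub-group until `stop_after` consecutive
    interviews contribute no new themes (a saturation stopping rule).
    `interviews` is any iterable yielding a set of coded themes."""
    seen, quiet_streak, n_done = set(), 0, 0
    for themes in interviews:
        n_done += 1
        new_themes = themes - seen
        seen |= themes
        # reset the streak whenever an interview adds something new
        quiet_streak = 0 if new_themes else quiet_streak + 1
        if quiet_streak >= stop_after:
            break
    return seen, n_done

# toy run: new themes surface early, later interviews only repeat them
toy = [{"cost"}, {"cost", "distance"}, {"stigma"},
       {"cost"}, {"distance"}, {"stigma", "cost"}, {"cost"}]
themes, n = interview_until_saturated(toy)
```

the point of the sketch is that the stopping rule (three consecutive interviews with nothing new) is plannable and reportable in advance, while the number of interviews it takes to trigger it is not.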
analysis: finally! actual analysis. in general, i feel more emphasis in a moving-towards-mixed-methods world needs to be put on analysis of qualitative inputs (and incorporation of those inputs into the larger whole).
my hunch is that part of why people think planning for analysis of qualitative work may be difficult is because, often, people don’t plan to ‘analyze’ qualitative data. instead, perhaps, the extent of the plan is to collect data. and then they plan to find a good quote or story (“anec-data”) — which may raise some questions about whether social science research is being done.
not planning for analysis can limit one’s ability to set out a plan for analysis. this is different than saying that planning is not possible — there are plenty of books on qualitative data analysis (e.g. here, here, here, and many others). here are some things that can be planned in advance:

maybe you set out with a theoretical framework in mind, in which case you can plan to closed-code your data against a pre-specified set of themes.

or, maybe the work is far more exploratory and you set out to learn what people think and do, in general, about a particular topic. it’s more likely that you’ll be open-coding your data — looking for patterns that emerge (ideas that are repeatedly raised). and it’s likely you’ll have some idea in advance that that is what you intend to do.
even if you start out closed-coding, it may turn out that a whole lot of your data end up falling outside the initially planned theoretical framework. fine. that doesn’t mean that you can’t report on what did fit in the framework (=plan) and then analyze all that interesting stuff that happened outside it as well. which, i think, is why we are talking about pre-analysis plans rather than pre-analysis straitjackets.
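the on-plan/off-plan bookkeeping described above can be made concrete. a minimal sketch (the codebook, codes, and quotes are all invented for illustration): partition coded interview segments into those matching the pre-registered codebook and those that emerged outside it, so both can be reported.

```python
def split_on_plan(segments, codebook):
    """Partition coded interview segments into 'on plan' (codes in the
    pre-registered codebook) and 'off plan' (emergent codes, to be
    open-coded and reported separately)."""
    on_plan = {code: [] for code in codebook}
    off_plan = []
    for code, text in segments:
        if code in on_plan:
            on_plan[code].append(text)
        else:
            off_plan.append((code, text))
    return on_plan, off_plan

codebook = {"cost", "distance"}  # hypothetical pre-analysis-plan domains
segments = [("cost", "fees are too high"),
            ("ritual", "we visit the healer first"),   # emergent, off plan
            ("distance", "clinic is two hours away")]
on_plan, off_plan = split_on_plan(segments, codebook)
```

nothing here stops you from analyzing the off-plan material; it just forces you to label it as such, which is the whole point of a plan rather than a straitjacket.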
to close, in discussing whether pre-analysis plans for qualitative research — in the sense of talking to and watching living people, perhaps as part of a mixed-methods research agenda — are feasible, i hope the conversation is guided by whether planning is indeed possible in the social sciences as opposed to whether such planning currently takes place.
Originally posted on Suvojit Chattopadhyay:
While it is always interesting to hear about novel approaches to data collection and analysis (a lot of this feeds into thoughts/reactions I had while attending, by complete happenstance, the first day of the Doing Development Differently workshop at the Kennedy School), in this blog I wanted to quickly highlight something that caught my eye in Duncan’s blog.
It’s this bit in particular:
“What we did next was not to go to the program sites, but spend some more time in the headquarters. We were interested in the perspectives of a variety of people involved with implementation, from the managers to those involved in everyday activities on the ground. And what we sought to understand was not only what the project had achieved…
there is currently a good deal of attention on transparency of social science research – as there should be. much of this is focused on keeping the analysis honest, including pre-analysis plans (e.g.) and opening up data for re-analysis (internal replication, e.g. here and here). some of this will hopefully receive good discussion at an upcoming conference on research transparency, among other fora.
but, it seems at least two points are missing from this discussion, both focused on the generation of the analyzed data itself.
intervention description and external replication
first: academic papers in “development” rarely provide a clear description of the contents of an intervention / experiment, such that it could plausibly be reproduced. growing up with a neuroscientist / physiological psychologist (that’s my pop), i had the idea that bench scientists had this part down. everyone (simultaneously researchers and implementers) has lab notebooks and they take copious notes. i know because i was particularly bad at that part when interning at the lab.*
then, the researchers report on those notes: for example, on the precise dimensions of a water maze they built (to study rodent behavior in stressful situations) and gave you a nice diagram so that you could, with a bit of skill, build your own version of the maze and follow their directions to replicate the experiment.
pop tells me i am overly optimistic on the bench guys getting this totally right. he agrees that methods sections are meant to be exact prescriptions for someone else to reproduce your study and its results: they are very detailed on exactly how you ran the experiment, the apparatus used, where reagents (drugs) were purchased, etc. he also notes that one thing that makes this easier in bench science is that “most experimental equipment is purchased from a manufacturer which means others can buy exactly the same equipment. gone are the dark days when we each made our own mazes and such. reagents are from specific suppliers who keep detailed records on the quality of each batch…”
then he notes: “even with all this, we have found reproducibility to be sketchy, often because the investigators are running a test for the first time. a reader has to accept that whatever methodological details were missed (your grad student only came in between 1 and 3AM when the air-conditioning was off) were not critical to the results.” or maybe this shouldn’t go unreported and accepted.
the basic idea holds in and out of the lab: process reporting on the intervention/treatment needs to get more detailed and more honest. without it, the reader doesn’t really understand what the ‘beta’ in any regression analysis means – and with any ‘real world’ intervention, there’s a chance that beta contains a good deal of messiness, mistakes, and iterative learning resulting in tweaks over time.
as pop says: “an investigator cannot expect others to accept their results until they are reproduced by other researchers.” and the idea that one can reproduce the intervention in a new setting (externally replicate) is a joke unless detailed notes are kept about what happens on a daily or weekly basis with implementation and, moreover, these notes are made available. if ‘beta’ contained some things at one time in a study and a slightly different mix at a different time, shouldn’t this be reported? if research assistants don’t / can’t mention to their PIs when things get a bit messy in ‘the field’, and PIs in turn don’t report glitches and changes to their readers or other audiences, then there’s a problem.
coding and internal replication
as was raised not-so-long-ago by the nice folks over at political violence at a glance, the cleaning and coding of data for analysis is critical to interpretation – and therefore critical to transparency. there is not enough conversation happening about this – with “this,” in large part, being about construct validity. there are procedures for coding, usually involving independent coders working with the same codebook, followed by a check for inter-rater reliability and reporting of the resultant kappa or other relevant statistic. the reader really shouldn’t be expected to believe the data otherwise, on the whole “shit in, shit out” principle.
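for readers who haven’t computed it, the kappa mentioned above (cohen’s kappa for two coders) corrects observed agreement for the agreement expected by chance given each coder’s label frequencies. a minimal sketch in python (the codes and segments are invented for illustration):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders
    who labeled the same items with the same codebook."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # observed proportion of items where the two coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # expected agreement if coders assigned labels independently,
    # using each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["stigma", "cost", "stigma", "access", "cost", "stigma"]
b = ["stigma", "cost", "access", "access", "cost", "stigma"]
kappa = cohens_kappa(a, b)  # 0.75 for this toy data
```

kappa of 1 is perfect agreement, 0 is chance-level; reporting it (or a similar statistic) is what lets a reader judge whether the coding scheme was applied consistently.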
in general, checks on data that i have seen relate to double-entry of data. this is important but hardly sufficient to assure the reader that the findings reported are reasonable reflections of the data collected and the process that generated them. the interpretation of the data prior to the analysis – that is, coding and cleaning – is critical, as pointed out by political violence at a glance, for both quantitative and qualitative research. and, if we are going to talk about open data for reanalysis, it should be the raw data, so that it can be re-coded as well as re-analyzed.
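the double-entry check mentioned above is mechanically simple: two people key the same forms independently, and any field where the two passes disagree is flagged for review against the original paper form. a minimal sketch (field names and records are hypothetical, and it assumes both passes cover the same record ids):

```python
def double_entry_diffs(pass1, pass2, key="id"):
    """Compare two independent data-entry passes (lists of dicts sharing
    a record-id field) and return (id, field, value1, value2) tuples for
    every field-level mismatch, to be checked against the paper forms."""
    by_id = {row[key]: row for row in pass2}
    diffs = []
    for row in pass1:
        other = by_id[row[key]]  # assumes matching record ids in both passes
        for field, value in row.items():
            if other.get(field) != value:
                diffs.append((row[key], field, value, other.get(field)))
    return diffs

pass1 = [{"id": 1, "age": 34}, {"id": 2, "age": 51}]
pass2 = [{"id": 1, "age": 34}, {"id": 2, "age": 15}]  # transposed digits
diffs = double_entry_diffs(pass1, pass2)
```

useful, but note what it checks: keystroke fidelity, not whether the coding and cleaning decisions that produced the fields were sound. that is the gap the paragraph above is pointing at.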
in short, there’s more to transparency in research than allowing for internal replication of a clean dataset. i hope the conversation moves in that direction — the academic, published conversation as well as the over-beers conversation.
*i credit my background in anthropology, rather than neuroscience, with getting better with note-taking. sorry, pop.
Originally posted on Berkeley Initiative for Transparency in the Social Sciences:
Closely echoing the mission of BITSS, Nyhan identifies the potential of research transparency to improve the rigor and ultimately the benefits of federally funded scientific research, writing:
The problem is that the research conducted using federal funds is driven — and distorted — by the academic publishing model. The intense competition for space in top journals creates strong pressures for novel, statistically significant effects. As a result, studies that do not turn out as planned or find no evidence of effects claimed in previous research often go unpublished, even though their findings can be important and informative.
Originally posted on Berkeley Initiative for Transparency in the Social Sciences:
A close partner of BITSS, the Center for Open Science (COS) has launched a free consulting service to anyone seeking help with “statistical and methodological questions related to reproducible practices, research design, data analysis, and data management.”
The Center is dedicated to increasing the “openness, integrity, and reproducibility of scientific research” and is looking to advance its mission through a more hands-on approach. Those with methodological questions can email email@example.com for free assistance from computer and data scientists trained in reproducibility and advanced research methods. If a question is too complicated to be answered via email, researchers can schedule a Google Hangout with a COS consultant to have their questions answered in real time. Visit the COS Google Calendar for availability.
The Center also offers online and on-site workshops for those seeking to gain a greater understanding of open research topics and tools. For more information on the details of COS’s services…
when i need to decompress, zone out, and fall asleep, i have a tendency to watch familiar, i-can-recite-by-heart movies or TV shows to help. just enough distraction to close down my other thoughts, not gripping enough to keep me awake.
the ‘sherlock‘ TV series has joined these ranks. the more i watch ‘a study in pink‘ (if i stay awake to the near-end), the more i have to ask whether i, during the scene of selecting the good or bad bottle, could have resisted making the cab driver a sicilian (or making another princess bride reference).
props to the screenwriters on avoiding the temptation.