First, I am starting a small campaign to push towards calling ‘pre-analysis plans’ something else before the train gets too far from the station. Something like ‘commitment to analysis plans’ or ‘commitment to analysis and reporting plans.’ I have two reasons for this.
- PAP just isn’t a super acronym; it’s kind of already taken.
- I think the name change moves the concept a step back from indicating that the researcher needs to pre-specify the entirety of the analysis plan and, rather, towards indicating the core intended data cleaning and coding procedures and the central analysis, and committing to completing and reporting those results, whether significant or not. This shift, towards a commitment rather than a straitjacket, seems like it would go some way towards addressing concerns expressed by Olken and others that the task of pre-specifying all possible analyses ex ante is both herculean and blinkered, in the sense of not incorporating learnings from the field to guide parts of the analysis. The commitment, it seems to me, should be partly about making clear to the reader of a study which analyses were ‘on plan’ and which came later, rather than claiming perfect foresight.
Second, speaking of those learnings from the field that may be incorporated into analysis… I had a moment today to think a bit about the possible views from the field that come from surveyors (as I am working on some of my dissertation analysis and already starting to form a list of questions to write back to the survey team with which I worked!). Among the decisions laid out by folks like Humphreys and McKenzie in their lists of what should be specified in a commitment-to-analysis plan (doesn’t a ‘CAP’ sound nice?) about data cleaning, surveyors play very little role.
Yet a survey of (or discussion among) survey staff about their experience with the questionnaire can yield information on whether there were any questions they systematically felt uncomfortable with or uncertain about, or that respondents rarely seemed to understand. Yes, many of these kinks should be worked out during piloting but, no, they aren’t always. Sometimes surveyors don’t work up the gumption to tell you a question is terrible until the research is underway, and sometimes they themselves don’t realize it.
For example, in one field experiment with which I was involved, surveyors only admitted at the end (in an end-of-survey questionnaire we conducted among them) how uncomfortable they were with a short-term memory test module (which involved asking respondents to repeat strings of numbers) and how embarrassing it was to ask these questions of their elders, to the point that some of them breezed through these questions pretty quickly during interviews and considered some of the answers they reported suspect. Some wrote fairly agonizing short essays to me in the end-of-survey questionnaire (it’s a good thing to make them anonymous!), asking me to “Imagine that you have to ask this question to an elder…” and proceeding to explain the extreme horror of this.* As the short-term memory module was not part of the central research question or main outcomes of interest, it was not subjected to any of the audit, back-check, or other standard data-quality procedures in place, and so the problem was not caught earlier.
I can imagine a commitment-to-analysis plan that committed to collecting and incorporating surveyor feedback. For example, a CAP that stated that if >90% of surveyors reported being uncertain about the data generated by a specific question, those data would be discarded or treated with extreme caution (and that caution passed on to the consumers of the research). Maybe this could be one important step to valuing, in some systematic way, the experience and insights of a survey team.
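The kind of pre-committed rule described above could be written down concretely in the CAP itself. Here is a minimal sketch of what such a rule might look like in code; the function name, the data layout, and the 90% threshold are all illustrative assumptions, not anything from an actual plan:

```python
# Hypothetical sketch of a CAP rule: flag questionnaire items for cautious
# treatment (or exclusion) when a pre-specified share of surveyors reported
# being uncertain about the data that item generated.
# The threshold and all names here are illustrative assumptions.

UNCERTAINTY_THRESHOLD = 0.90  # share of surveyors reporting uncertainty

def flag_suspect_items(feedback):
    """feedback maps an item id to a list of booleans, one per surveyor:
    True if that surveyor reported being uncertain about the item."""
    flags = {}
    for item, reports in feedback.items():
        share_uncertain = sum(reports) / len(reports)
        flags[item] = share_uncertain > UNCERTAINTY_THRESHOLD
    return flags

# Toy debrief data from 20 surveyors for two items:
feedback = {
    "memory_module_q1": [True] * 19 + [False],  # 95% reported uncertainty
    "income_q3": [True] * 5 + [False] * 15,     # 25% reported uncertainty
}
print(flag_suspect_items(feedback))
# {'memory_module_q1': True, 'income_q3': False}
```

The point of committing to something like this ex ante is that the flagging rule, and what is done with flagged items, is fixed before the researcher sees which results the rule would affect.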
*For the record, I can somewhat imagine this, having once worked in a call center conducting interviews with older women following up on their pelvic floor disorder surgery and whether they were experiencing any urinary symptoms. In that case, however, most of the discomfort was on my side, as they were well versed in, and fairly keen on, talking about their health issues and experiences! Note to self: aim not to have pelvic floor disorder.
4 thoughts on “Brief Thought on Commitment-To-Analysis Plans”
Is there a contradiction between signing up for CAP & your calls to be flexible with phase-in RCT designs?
I think not, but I would be interested to hear how you think they might. My sense is that, regardless of changes made in design, the research should still be clear about the original intent and hypotheses and report on these. Doesn’t the flexibility in phase-in designs have to do with whether the phase-in should continue at all?
Isn’t the flexibility in phase-in also about tweaks that incorporate learning – to an extent that’s a compromise between a clean (on-paper) study design and being responsive to real-world changes? In any case, my focus is on ‘flexibility’: might a CAP discourage flexibility if there is no room to review/revise as we get further into a study? (I understand that it need not.)
Yes, it would also incorporate tweaks. My sense is that a CAP doesn’t necessarily discourage such flexibility. If the CAP lays out the primary guiding hypotheses and outcome variables and then the main data cleaning and analytic procedures that will be done, it doesn’t seem like this will directly conflict with implementation tweaks during the course of an intervention. More importantly, I feel that it still sets the point of departure from which the author has to report; there is still room to discuss such tweaks, or even to say, “We set out to look at how X relates to Y, but it turns out that doesn’t happen. Instead, we looked at the relationship between Z and Y; here’s why, and here’s what we did…” That said, my guess is that some of the issues around data cleaning, when to prune variables from the dataset, etc., still would not change.