Pipeline Designs and Equipoise: How Can They Go Together?

I am writing about phase-in / pipeline designs. Again. I’ve already done it here, and more here. But here we are once more.

The premise of a pipeline or phase-in design is that groups will be randomized or otherwise experimentally allocated to receive a given intervention earlier or later. The ‘later’ group can then serve as the comparison for the ‘early’ group, allowing for a causal claim about impact to be made. I am specifically talking about phase-in designs premised on the idea that the ‘later’ group is planned (and has perhaps been promised) to receive the intervention later. I take this to be a ‘standard’ approach to phase-in designs.
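To make the mechanics concrete, here is a minimal sketch in Python of how a phase-in design yields a comparison group (the number of villages, the noise distribution, and the ‘true’ effect are all invented for illustration): during the first period, units randomized to the ‘later’ arm have not yet been treated, so the early-vs-later difference in means estimates the impact.

```python
import random

random.seed(42)

# Hypothetical phase-in: 40 villages, half randomized to receive the
# intervention in period 1 ("early"), half in period 2 ("later").
# During period 1, the "later" group serves as the comparison.
villages = list(range(40))
random.shuffle(villages)
early, later = villages[:20], villages[20:]

TRUE_EFFECT = 0.5  # assumed for this simulation only


def outcome(treated):
    # baseline outcome with noise, plus the effect if treated
    return random.gauss(0, 1) + (TRUE_EFFECT if treated else 0)


# Period-1 measurement: early group treated, later group not yet.
y_early = [outcome(True) for _ in early]
y_later = [outcome(False) for _ in later]

estimate = sum(y_early) / len(y_early) - sum(y_later) / len(y_later)
print(f"estimated impact in period 1: {estimate:.2f}")
```

Note that once period 2 arrives and the ‘later’ group is treated as planned, the comparison group is gone — which is exactly why the window for learning (and acting on what is learned) is built into the roll-out schedule.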

I’d like to revisit the issue of phase-in designs from the angle of equipoise, which implies some sense of uncertainty about the causal impact of a given intervention. This uncertainty provides the justification for making use of an ex ante impact evaluation. Equipoise literally translates to equal weight / force / interest. Here, the force in question is the force of argument about the impact of an intervention and which direction it will go (or whether there will be one at all).

There have already been some great conversations, if not decisive answers, as to whether, in social science research, the justification for using experimental allocation of an intervention needs to meet the standards of clinical equipoise or policy equipoise.* The key difference is the contrast between ‘a good impact’ (clinical equipoise) and ‘the best impact achievable with the available resources’ (policy equipoise). In either case, it is clear that some variant of equipoise is considered a necessary justification. For theoretical and/or empirical reasons, it just isn’t clear whether an intervention is (a) good (investment).

Whichever definition of equipoise you pursue, the underlying premise is one of a genuine uncertainty and an operational knowledge gap about how well a certain intervention will work in a certain setting at a certain point in time and at what degree of relative resource efficiency. This uncertainty is what lends credibility to an ex ante impact evaluation (IE) and the ethical justification for a leave-out (‘business as usual’ or perhaps ‘minimal/basic package’) comparison group. Hence, no RCTs on parachutes.

Uncertainty implies that the impact results could plausibly, if not with fully equal likelihood, come back positive, negative, null or mixed. At least some of those outcomes imply that a program is not a good use of resources, if not actually generating adverse effects. Such a program, we might assume, should be stopped or swapped for some alternative intervention (see Berk’s comments here).

To move forward from the idea of uncertainty, the following two statements simply do not go together despite often being implicitly paired:

  1. We are uncertain about the impact our intervention will bring about / cause, so we are doing an (any type of ex ante) IE.
  2. We plan to scale this intervention for everyone (implicitly, at least, because we believe it works – that is, the impacts are largely in the desired direction). Because of resource constraints, we will have to phase it in over time to the population.

Yes, the second point could be, and often is, extended to say, ‘this offers a good opportunity to have a clean identification strategy and therefore to do IE.’ But this doesn’t actually square the circle between the two statements. It still requires the type of sleight of hand around the issue of uncertainty that I raised here about policy champions.

Unless there are some built-in plans to modify (or even cancel) the program along the phase-in process, the ethics of statement 2 rests solely on the resource constraint (relative to actual or planned demand), not on any variant of equipoise. This is an important point when justifying the ethics of ex ante IE. And it is worth noting how few development programs have been halted because of IE results. It would be a helpful global public good if someone would start compiling a list of interventions that have been stopped, plausibly, because of IE outcomes, perhaps making note of the specific research design used. Please and thank you.

Moreover, unless there is some built-in planning about improving, tweaking or even scrapping the program along the way, it is not clear that the ex ante IE based on a phase-in design can fully claim to be policy relevant. This is a point I plan to elaborate in a future post but, for now, suffice it to say that I am increasingly skeptical that being about a policy (being ‘policy adjacent’ by situating a study in a policy) is the same as informing decisions about that policy (being ‘decision relevant’).

To me, the latter has stronger claims on being truly policy relevant and helping make wise and informed decisions about the use of scarce resources – which I think is the crux of this whole IE game anyway. IEs of phase-in designs without clear potential for mid-course corrections (i.e. genuine decision points) seem destined for policy adjacency, at best. Again, the underlying premise of a phase-in design is that it is a resource constraint, not an evidence constraint, which is dictating the roll-out of the program. But the intention to make a decision at least partly based on the evidence generated by an IE again rests on the premise of ex ante uncertainty about the potential for (the most cost-efficient) impact.

To come back to the issue of equipoise and phase-in designs: if the ethics of much of the work we do rests on a commitment to equipoise, then more needs to be done to clarify how we assess it and whether IRB/ethics review committees take it seriously when considering research designs. What information does a review board need to make that assessment?

Moreover, it requires giving a good think to what types of research designs align with the agreed concept of equipoise (whichever that may be). My sense is that phase-in designs can only be commensurate with the idea of equipoise if they are well-conceived, with well-conceived indicating that uncertainty about impact is indeed recognized and contingencies planned for in a meaningful way – that is, that the intervention can be stopped or altered during the phase-in process.

* I don’t propose to settle this debate between clinical and policy equipoise here, though I am sympathetic to the policy equipoise argument (and would be more so if more ex ante IEs tended towards explicitly testing two variants of an intervention against one another to see which proves the better use of resources moving forward – because forward is the general direction people intend to move in development).

Aside

On Science, from Eula Biss’s On Immunity

A nice reminder from Eula Biss (via On Immunity: An Inoculation) that science is a series of building blocks, with small tests and then bigger ones to see if each brick helps us reach higher and see farther.

Science is, as scientists like to say, “self-correcting,” meaning that errors in preliminary studies are, ideally, revealed in subsequent studies. One of the primary principles of the scientific method is that the results of a study must be reproducible. Until the results of a small study are duplicated in a larger study, they are little more than a suggestion for further research. Most studies are not incredibly meaningful on their own, but gain or lose meaning from the work that has been done around them… This doesn’t mean that published research should be disregarded but that, as John Ioannidis concludes, “what matters is the totality of the evidence” (p. 133)…

Thinking of our knowledge as a body suggests the harm that can be done when one part of that body is torn from its context. Quite a bit of this sort of dismemberment goes on in discussions about vaccination, when individual studies are often used to support positions or ideas that are not supported by the body as a whole… When one is investigating scientific evidence, one must consider the full body of information (p. 135).

that may not mean quite what you think it means: john henry and americana edition

occasionally on this site, i try to provide some background on phrases and cliches in social science and global health (such as here and here). it is a small public service to help folks not be sicilians yelling “inconceivable!” (and to keep them from starting land wars in asia, if at all possible).

today, the john henry effect.

.

the john henry effect is a reactive effect we could find in the comparison group of an experiment (or any non-intervention group) when the comparison group is aware it is not receiving treatment. with this knowledge, they might react by working harder to compensate for not having the intervention. the effect, apparently, also includes the reaction amongst the ‘non-treated’ of becoming discouraged at not having received the intervention and working less hard, though i am less familiar with this usage. in any case, we could just call them ‘reactive effects’ and, given all the other cultural roles and meanings of john henry, i wonder if we just should.
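to see why such reactive effects matter for estimation, here is a minimal simulation (the true effect and the size of the compensatory boost are invented for illustration): if an aware comparison group works harder, the difference in means understates the true impact.

```python
import random

random.seed(0)

TRUE_EFFECT = 1.0       # assumed true impact of the intervention
JOHN_HENRY_BOOST = 0.6  # assumed extra effort by an aware comparison group


def outcomes(n, treated, comparison_aware):
    out = []
    for _ in range(n):
        y = random.gauss(0, 1)          # baseline outcome with noise
        if treated:
            y += TRUE_EFFECT
        elif comparison_aware:
            y += JOHN_HENRY_BOOST       # compensatory effort in the controls
        out.append(y)
    return out


def diff_in_means(t, c):
    return sum(t) / len(t) - sum(c) / len(c)


n = 5000
treat = outcomes(n, treated=True, comparison_aware=False)
blind = outcomes(n, treated=False, comparison_aware=False)
aware = outcomes(n, treated=False, comparison_aware=True)

print(f"estimate vs. unaware controls: {diff_in_means(treat, blind):.2f}")  # ~ 1.0
print(f"estimate vs. aware controls:   {diff_in_means(treat, aware):.2f}")  # ~ 0.4
```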

the point of this post is not about the john henry effect but about john henry. however, a small point. david mckenzie‘s post on the john henry effect (and that we shouldn’t be too worried about it) concludes “often our best approach may be to try and reduce the likelihood of such effects in the first place – while it can be hard (or impossible) to hide from the treatment group the fact they are getting a treatment, in many cases the control group need not know they are controls.”

this seems at odds with mckenzie’s support in other places for public randomization (example here) – in which case, the comparison group would very well know that they were not receiving the treatment. (the problem, in part, is that we have limited scope in the way of placebos in social science work. ethics aside, we simply don’t know how to give you a malaria-bednet-that-isn’t-really-protective in the way that i can give you a lookalike pill that has no active pharmaceutical ingredients. which is, perhaps, another argument for testing treatment variants against each other rather than treatment against just ‘business as usual’/nothing new.)

.

in any case, the real point of this post is about john henry the man/myth. from a recent conversation with a colleague, it was clear that, for him/her, the john henry effect could have just as easily been named for the researcher that discovered the effect or the site at which it was first noted (as in the hawthorne experiments).

which is fair enough. john henry is an element of americana folklore (though there may well be counterpart or antecedent stories in different cultures and i would be delighted to hear about them), so why should anyone else be clued in?

however, i had to sing a song about john henry in a 5th grade choir performance about american tall tales (quite possibly the last time i was permitted to sing on stage), so i am fully qualified to provide some background on john henry.

.

it seems (mostly according to here and here) that john henry was likely a real man — definitely black, possibly born a slave. he worked for the railroads following the civil war (in the late 1860s and 1870s). he was well-suited to this work as a “steel driving man”, being, by existing accounts, both quite tall and muscular. most accounts say he worked for the C&O Railroad (chesapeake & ohio) and many accounts put his work as drilling through the big bend mountain in west virginia, where it was decided it was more expedient to make a tunnel rather than go around the mountain (alternatively, he worked on the nearby lewis tunnel under similar circumstances).

“as the story goes, john henry was the strongest, fastest, most powerful man working on the rails. he used a 14-pound hammer to drill 10 to 20 feet in a 12-hour day – the best of any man on the rails. one day, a salesman came to camp, boasting that his steam-powered machine could outdrill any man. a race was set: man against machine. john henry won, the legend says, driving 14 feet to the drill’s nine. he died shortly after, some say from exhaustion, some say from a stroke.”

another account, allegedly from an eyewitness, collected by sociologist guy johnson in the 1920s:

“when the agent for the steam drill company brought the drill here, john henry wanted to drive against it. he took a lot of pride in his work and he hated to see a machine take the work of men like him. well, they decided to hold a test to get an idea of how practical the steam drill was. the test went on all day and part of the next day. john henry won. he wouldn’t rest enough, and he overdid. he took sick and died soon after that.”

john henry became the subject of ballads and work/hammer songs (e.g. here and here) and an important touchstone for american labor movements and civil rights movements. he is a lot more than a possible effect in social experiments!

.

as a closing thought: when we discuss john henry effects, we mostly think about his working hard in compensation for not having the treatment (a machine) — or even proving that the treatment was unnecessary because of pride in the status quo. we think less about the fact that he died from it. given this part of the story, should we find john henry effects, we may want to consider not just that they might mess up our effect estimation — but that harms could be coming to groups not receiving interventions if they are over-compensating in this way (more akin to how john henryism and sojourner truthism are used in sociology and health psychology (e.g. here and here) to describe the african-american experience and weathering).

Refereeing an academic paper

The list below is 100% taken from the following sources; my only contribution is to mix them up into a three-page document.

Nevertheless, it may prove useful. Additions, of course, welcome.

  • Assume that no referee reports are truly anonymous.  It is fine to be critical but always be polite.
  • Skim the paper within a couple of days of receiving the request – my metro rides are good for this – you can quickly tell whether a paper is well below the bar for some obvious reason and can be rejected as quickly as possible.
    • Unless it is immediate junk, read the paper once and return to it a week later with deeper thoughts and a fresh mind.
    • Referee within one month.
  • Remember you are the referee, not a co-author. I hear a lot that young referees in particular write very long reports, which try to do way more than is needed to help make a paper clear, believable and correct. I think 2 pages or less is enough for most reports.
  • Your report should not assume that the editor has a working knowledge of the paper.
    • The first paragraph should summarize the contribution. Reviewers should provide a concise summary of the paper they review at the start of their report and then provide a critical but polite evaluation of the paper.
    • Explain why you recommend that the paper be accepted, rejected, or revised.
      • If you would like the editor to accept the paper, your recommendation must be strong. The more likely you think the paper is to merit a revision, the more detailed your comments should be.
      • The referee report itself should not include an explicit editorial recommendation. That recommendation should be in a separate letter to the editor.
      • If you consistently recommend rejection, then the editor recognizes you are a stingy, overly critical person. Do not assume that the editor will not reveal your identity to the authors. In the long run, there are no secrets.
      • If you recommend acceptance of all papers, then the editor knows you are not a discriminating referee.

Possible considerations:

  • Research question and hypothesis:
    • Is the researcher focused on well‐defined questions?
    • Is the question interesting and important?
    • Are the propositions falsifiable?
    • Has the alternative hypothesis been clearly stated?
    • Is the approach inductive, deductive, or an exercise in data mining? Is this the right structure?
  • Research design:
    • Is the author attempting to identify a causal impact?
    • Is the “cause” clear? Is there a cause/treatment/program/first stage?
    • Is the relevant counterfactual clearly defined? Is it compelling?
    • Does the research design identify a very narrow or a very general source of variation?
    • Could the question be addressed with another approach?
    • Useful trick: ask yourself, “What experiment would someone run to answer this question?”
  • Theory/Model:
    • Is the theory/model clear, insightful, and appropriate?
    • Could the theory benefit from being more explicit, developed, or formal?
    • Are there clear predictions that can be falsified? Are these predictions “risky” enough?
      • Does the theory generate any prohibitions that can be tested?
      • Would an alternative theory/model be more appropriate?
        • Could there be alternative models that produce similar predictions—that is, does evidence on the predictions necessarily weigh on the model or explanation?
      • Is the theory a theory, or a list of predictions?
      • Is the estimating equation clearly related to or derived from the model?
  • Data:
    • Are the data clearly described?
    • Is the choice of data well‐suited to the question and test?
    • Are there any worrying sources of measurement error or missing data?
    • Are there sample size or power issues?
    • How were data collected? Is recruitment and attrition clear?
      • Is it clear who collected the data?
      • If data are self-reported, is this clear?
      • Could the data sources or collection method be biased?
      • Are there better sources of data that you would recommend?
      • Are there types of data that should have been reported, or would have been useful or essential in the empirical analysis?
      • Is attrition correlated with treatment assignment or with baseline characteristics in any treatment arm?
  • Empirical analysis:
    • Are the statistical techniques well suited to the problem at hand?
    • What are the endogenous and exogenous variables?
    • Has the paper adequately dealt with concerns about measurement error, simultaneity, omitted variables, selection, and other forms of bias and identification problems?
    • Is there selection not just in who receives the “treatment”, but in who we observe, or who we measure?
    • Is the empirical strategy convincing?
    • Could differencing, or the use of fixed effects, exacerbate any measurement error?
    • Are there assumptions for identification (e.g. of distributions, exogeneity)?
      • Were these assumptions tested and, if not, how would you test them?
      • Are the results demonstrated to be robust to alternative assumptions?
      • Does the disturbance term have an interpretation, or is it just tacked on?
      • Are the observations i.i.d., and if not, have corrections to the standard errors been made?
      • What additional tests of the empirical strategy would you suggest for robustness and confidence in the research strategy?
      • Are there any dangers in the empirical strategy (e.g. sensitivity to identification assumptions)?
      • Is there potential for Hawthorne effects or John Henry-type biases?
  • Results:
    • Do the results adequately answer the question at hand?
    • Are the conclusions convincing? Are appropriate caveats mentioned?
    • What variation in the data identifies the elements of the model?
    • Are there alternative explanations for the results, and can we test for them?
    • Could the author have taken the analysis further, to look for impact heterogeneity, for causal mechanisms, for effects on other variables, etc?
    • Is absence of evidence confused with evidence of absence?
    • Are there appropriate corrections for multiple comparisons, multiple hypothesis testing?
  • Scope:
    • Can we generalize these results?
    • Has the author specified the scope conditions?
    • Have causal mechanisms been explored?
    • Are there further types of analysis that would illuminate the external validity, or the causal mechanism at work?
    • Are there other data or approaches that would complement the current one?
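On the checklist item about multiple comparisons: when a paper tests many hypotheses, some correction is expected. Here is a minimal sketch of the Benjamini–Hochberg false discovery rate procedure in pure Python (the p-values are invented for illustration).

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return sorted indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears the BH threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    # reject every hypothesis ranked at or below k
    return sorted(order[:k])


pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals))  # -> [0, 1]
```

With these eight illustrative p-values, only the first two survive the correction, even though five would clear an uncorrected 0.05 threshold.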

sunglasses (googles) in bombay

this is from the summer i spent in bombay (2009), when I was living in dharavi and part of a research project on an unregistered slum near reay road station (i’ll call this community RR).

as many of you may know, i am pretty attached to my sunglasses.  not attached in the sense that i have a favorite pair – because a single pair of sunglasses rarely lasts me long enough to form that kind of connection with them.  but, for the most part, they live on my head or in my face and, India being a sunny place, seem a perfectly reasonable part of my attire here. to be honest, i think they are appropriate attire in just about any weather or degree of darkness (along with corey hart, though for different reasons).

imagine my consternation, then, when we arrived in RR and kiran, one of my research partners, suggested that i not wear my sunglasses.  he explained that people would take it as a mark of celebrity and they would be less likely to talk with me.  so, now when kiran and i ride into RR, i obligingly take off my sunglasses just before we turn into the community and slip them into my bag – and take them out and put them back on the second we turn out of the community at the end of the work.

indeed, sunglasses do seem to carry a considerable amount of status – for something that i am pretty sure you can buy cheaply on the street here.  they are a major point of ‘conversation’ at breakfast at the home where i am staying.  a good chunk of breakfast time is spent with different people trying on the sunglasses (generally referred to as ‘googles’), me taking pictures of assorted people wearing the sunglasses, then passing around the camera so everyone can see said pictures and have a good laugh over the precise thing that they had just witnessed.  this, at least so far, seems to provide endless amusement.