survey design and methods matter (and sometimes people in ghana aren’t truthful about loans)

a key part of data quality (and therefore research quality) is the design of questionnaires, interview schedules, and observation plans for generating primary data. it is somewhere between upsetting and galling, then, that these issues feature so minimally in the training of social scientists planning to do (especially quantitative) empirical work. plenty of researchers have devoted their careers to this topic (often with a high-income country focus), resulting in wonderful courses taught on this subject and wonderful books written. two of my favorite recommendations are the offering from groves et al. and bernard's tome covering both qualitative and quantitative methodologies.

.

on a basic shit-in, shit-out principle, it is very odd that so much empirical training goes into the nuances of analysis (important, yes) while so much on questionnaire design, measurement, and measurement error is left to be learned on the job, often under time pressure. thoughtful pre-testing is generally ignored, and piloting a questionnaire and piloting an intervention often get muddled into a compressed whirlwind, followed by ad hoc tweaks rather than considered analysis and refinement (and, gasp, reporting the lessons learned in the research report).

.

blame the research cycle, funding timelines, publication word limits, the challenges of cross-disciplinary work, or whatever else, but if we accept that bad data yield bad research, this neglect happens far too often. moreover, an unrooted and rushed approach to questionnaire and study design leads to inefficiently belated stock-taking and wheel-reinventing with regard to interviewer effects, construct validity, and strategies for limiting respondent errors and misreporting. getting this right is a technical expertise that takes time, practice, and cross-disciplinary training.

.

it is my hope that a normative shift towards pre-analysis plans/commitment-to-analysis plans specifically linked with questionnaire items (and, please, a trend towards reporting the actual wording of the question that yielded each variable analyzed, along with the coding strategies used when turning open-ended answers into quantitative variables) will swing norms towards allotting more time and effort up-front to getting questionnaires and other data collection tools right. the best place to learn about cool new questionnaire techniques is probably the work of survey methodologists, psychometricians, and sector experts. to be slightly more pointed: at least at present, reading and citing only impact evaluations is probably not the best way to learn about new and best survey practices.
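
to make that concrete, here is a purely hypothetical sketch (in python) of what linking an analysis variable to its exact question wording and post-coding scheme could look like; the item name, wording, and categories below are all invented for illustration, not taken from any actual study.

```python
# hypothetical codebook linking an analysis variable to the exact question
# wording that produced it and to the scheme used to post-code open-ended
# answers; every name and category here is invented for illustration.
CODEBOOK = {
    "loan_use_primary": {
        "question_wording": "what did you mainly spend your most recent loan on?",
        "type": "open-ended, post-coded",
        "coding_scheme": {
            "business stock, equipment, or inputs": "productive_investment",
            "food, clothing, or other household needs": "household_consumption",
            "school fees or medical costs": "human_capital",
            "repaying another loan": "debt_servicing",
        },
    },
}

def code_open_ended(raw_answer: str, item: str = "loan_use_primary") -> str:
    """map a verbatim answer to its pre-specified analysis category."""
    scheme = CODEBOOK[item]["coding_scheme"]
    return scheme.get(raw_answer.strip().lower(), "other_uncodeable")

# usage: the mapping from verbatim answer to analysis variable is documented
# and auditable rather than living only in a research assistant's head
print(code_open_ended("Business stock, equipment, or inputs"))
# -> productive_investment
```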

.

there are at least two reasons to think seriously about these issues. one is, of course, to get the most truthful answers possible, (perhaps) especially when we have an eye towards having a given study inform practice, policy, and funding in some way. the second is respect for the people we interview.

.

i’ll start with the latter point: we should be careful in curating our data collection instruments to place the least possible time (and emotional) burden on our respondents while getting the most useful truths out of them. again, the process of carefully constructing an analysis plan (and, for that matter, a theory of change and other dimensions of transparency) may better focus researchers on the variables that matter. there will be trade-offs between asking more questions (or using other data collection techniques, like structured observations) to build in redundancies that triangulate on central variables and discarding items on which we expect respondents to be inaccurate or which are unlikely to enter the analysis. (a good rule of thumb is to ask yourself whether you could or would answer a question; if not, revise or toss.)

.

on the first point, about aiming for accurate measurements, david evans called our twittention to a nice example in a new paper on microcredit in ghana by erikson and lensink, published in jdeff (the journal of development effectiveness). they triangulate on how recipients of microloans spend their loans by using both direct and indirect questions. this information is potentially important to the microfinance institutions themselves but also relates to the broader policy debates (here and elsewhere) over how households in low- and middle-income settings make use of loans and un/conditional cash transfers. a completely non-paternalistic argument about loans and transfers says that how households decide to spend a transfer is entirely up to them, and thus conditions and checks are unneeded and possibly unethical.

.

a different line of argument holds that donors do have (here, valid and generally vice-free) preferences about how loan and transfer recipients spend their money but that, for the most part, recipients behave in these ways of their own accord, so there is no need to spend the time and effort to impose conditions. this line of argument depends on our having a fairly clear picture of how recipients are likely to spend their money, and this is precisely what the present study calls into question. as the authors note, the basic theory of change linking a microloan to a sustained improvement in household or business welfare requires an increase in productive investments; microfinance organizations would therefore like loan recipients to invest in this way.

.

in this study, the authors directly ask respondents how they spent their loan. they also employ a randomized listing technique, a cousin of the randomized response methods long used to ask respondents about sensitive behaviors like sexual practices and drug use (see, for example, meta-analysis of randomized response research: thirty-five years of validation for uses of the approach and cautions). in this case, the potentially ‘sensitive behavior’ relates to how microloan funds are spent, particularly if not on productive investments. the authors are then able to triangulate between the two ways of asking the same question. when asked directly, 0.1% of clients report using the loan primarily on household consumption rather than the purchase of productive assets. this jumps to 41% under the listing exercise.
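
for readers who have not met the technique, here is a minimal hypothetical sketch (in python) of the difference-in-means estimator behind a list experiment; the counts and group sizes below are invented for illustration and are not the paper's data.

```python
import math
import statistics

# control respondents see J innocuous items; treatment respondents see the
# same J items plus the sensitive one ("i spent the loan mainly on household
# consumption"). everyone reports only HOW MANY items apply, never which ones.
control_counts = [1, 2, 2, 3, 1, 2, 0, 2, 3, 1]    # hypothetical responses
treatment_counts = [2, 2, 2, 3, 1, 3, 1, 2, 3, 2]  # hypothetical responses

# the sensitive item is the only difference between the two groups, so the
# estimated prevalence of the sensitive behavior is the difference in means
prevalence = statistics.mean(treatment_counts) - statistics.mean(control_counts)

# conventional two-sample standard error for a difference in means
se = math.sqrt(
    statistics.variance(treatment_counts) / len(treatment_counts)
    + statistics.variance(control_counts) / len(control_counts)
)

print(f"estimated prevalence: {prevalence:.2f} (se {se:.2f})")
```

the design protects individual respondents because no one ever reveals which items apply to them, only a total; the prevalence of the sensitive behavior is identified only at the group level.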

.

to come back to the beginning: techniques like randomized response and list randomization do yield interesting, important, and useful results. they are also not new, even if relatively new in empirical social science focused on low- and middle-income countries. there are many other methodological details and strategies that could benefit research on improving welfare in low- and middle-income countries. we should invest the time and cross-disciplinary effort in finding out, and training on, the best ways to ask questions and get at reality in ways that impose limited burdens on respondents.
