Day 1: Alex and the pilot study manual
Dear Reader, please spare a thought for the researcher who is responsible for piloting a survey questionnaire:
Our researcher, let’s call him Alex, is almost ready to start the pilot process. He has the client-approved questionnaire, the CAPI scripting is done, the computer department’s system is set up for analysis, and the fieldworkers are ready to be briefed.
Alex wants everything to be perfect for this study – he envisages a great future for himself at this research house! He looks for guidance on the i2i website. Great, found it! Alex thinks to himself: a checklist for piloting, under Guidelines for Piloting.
He looks at the checklist… and has a private meltdown: How on earth will I get all this done in such limited time?
He takes a deep breath, settles down and starts on his action plan to make sure that every item on the checklist is ticked. How can we test whether respondents understand the questions if we must simultaneously test the length and flow of the questionnaire, the order of questions, the routing, AND all the technical stuff? It is not possible! Let’s just ask them if there was anything they didn’t understand. The client has already signed off the questionnaire anyway.
Day 6: Alex's maiden flight
Alex is contemplating a successful day over a glass of wine:
So glad it went well – except, of course, for the client wondering afterwards whether an extra question or two could be added to make things clearer. Luckily, the boss was firm: the questionnaire had already been signed off, and extra questions would make it too long.
Two years later: Trouble on the runway
Alex is still working at the same research house, but now as a senior exec – overseeing pilots. He runs pilots like a machine, ticking off every aspect of the process.
But this year he's had an unexpected challenge on one survey: there's a new director at the client’s company and, of course, the new broom sweeps clean. This client has insisted that BEFORE any fieldwork is conducted, some qualitative “cognitive interviewing” must be done to determine whether respondents actually understand what is being asked.
An outside team of qualitative interviewers, linguists and Plain Language practitioners (really?) is asked to do these cognitive interviews.
Respondents answered all the survey questions but, can you believe it, in about half of the interviews, when the researchers asked follow-up questions, it became clear that they did not understand what was being asked! Some had major problems with words and phrases that we’ve been using in questionnaires for ages, such as “household” and “gender”. And every third respondent counted the children in their household between the ages of 6 and 12 differently – how is that possible? What have I been doing wrong all these years?
A gap in the pilot manual
Dear Reader, you might have organised pilots yourself. Or you might have watched a pilot from behind a one-way mirror. Have you ever walked out of a pilot wondering how well respondents actually understood some of the questions?
Alex's story is based on real events.
Respondents understand far less than we think, and the likelihood is high that they interpret questions differently from what the questionnaire developers intended. They are also unlikely to tell interviewers that they do not understand, for fear of looking stupid. A pilot typically tests the complete survey process; it does not explore in detail whether, and how, respondents understand the questions.
Read the full case study here, and consider the implications of the findings for the integrity of survey data.