Common Sense in Market Research

By Andrey Maslov, Partner, Transcend Strategic Consulting

In the spring of 2010, as a student in the Human Centered Design & Engineering department at the University of Washington, I produced a white paper about telephone surveys in the market research field. At the time, I was also managing projects at a market research firm in Seattle and had firsthand knowledge of the subject. While I revered many of the rigorous standards the industry imposed with respect to statistical analysis of data, I was baffled by some of its outdated and flawed methodologies. Often, these systemic flaws undermined the validity of otherwise good research. Among other things, I was troubled by the predominant use of computer-assisted telephone interviewing (CATI), which all of the major firms relied on to collect data.

If you are unfamiliar with the process, CATI is a telephone surveying technique in which the interviewer follows a script provided by a software application. The interviewer sits in front of a computer screen while the software dials the telephone number to be called. When contact is made, the interviewer reads the questions displayed on the screen and records the respondent's answers directly into the computer. After a predetermined number of respondents have been interviewed, the aggregated raw data undergoes statistical analysis.

Although the overall CATI process itself is problematic, my main concern was with how telephone survey sampling pools were generated and processed. At the time, a market research firm with CATI capabilities would generate or acquire lists of telephone numbers corresponding to a geographic area of interest to researchers. These lists were compiled from public and private sources and sometimes contained personal information. In practice, they usually consisted mostly of non-working numbers. Of those that were still active, most would route directly to voicemail or screen out telemarketers in some other way. Perhaps only 10% ever reached a human being, and in those cases, that human being usually declined to participate in the survey. By the way, all of these contacts were made over landline telephones.
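To put that funnel in perspective, here is a minimal sketch of the yield from a raw dialing list. The numbers are illustrative only: the 10% human-contact figure is the rough estimate above, while the working-number and cooperation rates are assumptions chosen purely to show the shape of the funnel, not measured industry statistics.

```python
# Illustrative CATI sampling-pool funnel (hypothetical rates).
raw_list_size = 10_000        # telephone numbers compiled or purchased for a region
working_rate = 0.40           # assumed share of numbers still in service ("mostly non-working")
human_contact_rate = 0.10     # rough figure from above: share of working numbers reaching a person
cooperation_rate = 0.15       # assumed share of reached people who agree to a 20-minute survey

reached = raw_list_size * working_rate * human_contact_rate
completed = reached * cooperation_rate

print(f"Numbers dialed:       {raw_list_size}")
print(f"People reached:       {reached:.0f}")
print(f"Completed interviews: {completed:.0f}")
# Under these assumptions, roughly 60 completed interviews come out of 10,000 dialed
# numbers, and every one of them is drawn from the narrow group of people who still
# keep a landline and answer calls from unknown numbers.
```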

So, on any given subject of interest to researchers, 100% of the data would be collected from individuals who still retained a landline telephone number, did not screen incoming calls, and were willing to waste 20 minutes answering questions that they did not always understand. By 2010, even the despicable telemarketing and scamming industries had realized that these methods were no longer effective. Yet, within the market research industry, thought leaders seemed oblivious; to them, it was business as usual. It's no wonder that so many political polling results were far off the mark in recent elections. Of course, many other factors affect research validity, but telephone survey sampling pools are seldom discussed.

After spending the first several years out of college working as a technical training consultant, I returned to find these research methodologies largely unchanged. Sure, there have been some innovations in reaction to the disruptions created by the tech sector, but the legacy market research industry still thinks inside the box. To be clear, I am not implying that traditional approaches to market research are somehow dead or irrelevant, as some tech-heavy newcomers like to claim. Those approaches have lasted for decades precisely because they work, and big data still has its limitations. Rather, I believe there is an opportunity to make use of a whole array of modern tools and capabilities while retaining the tried-and-true methods of past generations.

Technology is a powerful tool that can be used alongside primary research. Applied in a common-sense manner, tech-augmented research yields results that are both robust and intuitive. Why insist on obsolete CATI-based research when so many other options are available? It's just common sense.