2020 has been something else, hasn’t it? It’s been discouraging trying to run every project from home — even from the Côte d’Azur. I’ve built my business on my ability to engage personally with people and the environments that shape their opinions. Without this interaction, I feel like I’ve lost one of the most important tools I use to design insightful research. Couple that with the challenges presented by a near-complete shutdown in quantitative and qualitative data collection in many countries, and it’s a challenging time to be a pollster.

Everyone knows lots of things aren’t working. But let’s take a look at what is. The transition to CATI interviewing in some places, sped up by COVID, is a genuine bright spot.

CATI Phone Surveys

If you have survey projects in Western Europe or North America, you’re in great shape. Other than slower turnaround times and increased technical demands placed on data collectors, phone interviewing continues using CATI (Computer-Assisted Telephone Interviewing). Strategic questions about whether it’s a good idea to poll in such an unpredictable environment remain, but there are no health concerns or technical barriers to worry about.

It’s more complicated for my clients. I work in low- and middle-income countries where interviewers administer general population surveys face-to-face (F2F). It’s the only way to sample rural, poorer or older populations. COVID ended this type of data collection overnight almost everywhere. Even though the main tool I use to conduct surveys disappeared, my need to understand public opinion did not. Responsible field firms have scrambled to figure out how to stay afloat and design alternative means to get the work done safely.

During the dark depths of France’s total lockdown I spent a lot of time on helpful ESOMAR and WAPOR webinars. Designed to help research professionals figure out how to operate in this new environment, the seminars brought together data collectors from around the world to talk about what was working and what was not.

There was plenty of bad news. Low and poorly-distributed mobile telephone penetration in parts of Africa, Eurasia and Latin America means that nationally representative samples aren’t possible for now in lots of countries.

But some vendors highlighted the progress they’ve made in the transition to CATI in countries where mobile penetration has grown to the point that random sampling is becoming practical. Accordingly, the Big Kids (certain departments of the US government, Pew and Gallup among them) are driving the transition. They have been helping local data collectors build technical capacity, improve quality control and retrain interviewers. This benefits small fish like me. Thanks, guys!
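For readers curious about the mechanics, here is a minimal sketch of how a random mobile sample for CATI can be drawn once penetration is high enough. The operator prefixes and frame details below are invented for illustration; real sampling frames come from regulators and also screen out unassigned number blocks and numbers dialed in prior waves.

```python
import random

def generate_rdd_sample(prefixes, n, suffix_len=7, seed=42):
    """Draw a simple random sample of mobile numbers.

    Each number is a known operator prefix followed by random digits.
    Duplicates are discarded so every number appears at most once.
    """
    rng = random.Random(seed)  # seeded so the draw is reproducible
    sample = set()
    while len(sample) < n:
        prefix = rng.choice(prefixes)
        suffix = "".join(str(rng.randrange(10)) for _ in range(suffix_len))
        sample.add(prefix + suffix)
    return sorted(sample)

# Hypothetical prefixes for illustration only.
numbers = generate_rdd_sample(["+91-98", "+91-99", "+91-70"], n=5)
```

In practice the drawn numbers are then dialed in random order, with strict callback rules, which is where the data collectors’ retrained interviewers come in.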

Case Study: India Makes the CATI Transition

Over the past 10 years I have fielded multiple F2F surveys in India. Because of its geographic and demographic complexity, it had always been one of the most difficult countries to randomly sample with anything but a gold-plated budget. Clients with normal budgets have to accept trade-offs during sample design. With a survey scheduled to field in mid-March, my client and I had already come to terms with those trade-offs. I designed a sample that balanced the goals of the research with the confines of budget and time using F2F. Sadly, COVID lockdowns pushed the survey into data collection purgatory, unlikely to be released unless I found an alternative to F2F.

Since I last fielded an India survey a few years ago, CATI has become a viable data collection option. Thanks to all that Zooming, I learned that mobile penetration in India has reached 90%. The darkness of French lockdown lightened.

With the client’s consent, I prepped the survey to switch modes. Most significantly, we translated it into 11 languages, compared with the four or five I’d used in previous face-to-face surveys, which sampled fewer states. After translation backchecks (all of them!) and field testing, fieldwork started.

CATI Interview distribution in India

What a nationally representative sample looks like in India. Beautiful!

To my delight, fieldwork produced a genuinely national sample that includes Indian states proportionate to their contribution to the population as a whole, at only slightly greater cost. Some weighting was applied to balance demographics, but nothing alarming. So, not only has CATI become a viable option in India, it’s actually preferable to the old way. Hey, thanks COVID!
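For anyone wondering what “some weighting” looks like in practice, here is a minimal sketch of cell weighting: each respondent receives the ratio of their demographic cell’s population share to its achieved sample share. The cells and figures below are invented for illustration; real weighting schemes use more cells and often iterative raking.

```python
from collections import Counter

def cell_weights(respondent_cells, population_shares):
    """Compute simple post-stratification weights.

    respondent_cells: one cell label per respondent.
    population_shares: dict mapping cell label -> population proportion.
    Returns one weight per respondent so the weighted sample matches
    the population distribution across cells.
    """
    n = len(respondent_cells)
    sample_shares = {c: k / n for c, k in Counter(respondent_cells).items()}
    return [population_shares[c] / sample_shares[c] for c in respondent_cells]

# Invented example: the sample over-represents urban respondents.
cells = ["urban"] * 6 + ["rural"] * 4   # 60/40 achieved sample
pop = {"urban": 0.35, "rural": 0.65}    # 35/65 in the population
weights = cell_weights(cells, pop)
# Urban respondents are weighted down, rural respondents up,
# and the weights sum back to the sample size.
```

The “nothing alarming” test is simply that no weight strays far from 1; large weights mean the sample badly missed some group.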

Where Else Is the CATI Transition Happening?

Advances in mobile penetration have made a big difference in sampling complex countries like India, as well as in Indonesia and the South Pacific. In these countries, the distribution of respondents among hundreds of sparsely populated islands makes random sampling using F2F complicated and expensive. I have been doing CATI surveys for internal use with a client’s proprietary software in Bangladesh for the last few years, so I know it’s possible. Professional data collectors are doing so as well. Firms are also conducting promising experiments in Kenya and Nigeria. If you’re curious about other countries, contact me and I can look into options.

Lest my exuberance seem irrational, CATI is off the table in many places in Africa, Eurasia, Latin America and the Middle East. Because they exclude too many important voices, I resist the siren call of online panels, SMS and email surveys. Younger, urban and well-off respondents are easy to sample because they have access to these tools and are comfortable using them. However, such voices are already over-represented in debates over public policy. I won’t let COVID distort the debate even further by opting for samples that by design exclude the rural, the poor, the illiterate and those without adequate bandwidth.

Next week: Online qual: the Bad and the Ugly. Hopefully some good, too.


Election polling in the US has a problem. Or, depending on who you ask, more than one problem.  Since the 2016 election, many big brains have tried to fix these issues with varying degrees of success.

I have problems too, but mine aren’t the same. Pre-COVID, my biggest problems were long fielding times, QC issues and inaccessible sampling points. I don’t have to deal with response rates in the low single digits.

For once this post isn’t about my problems. It is about what I have learned from other folks’ attempts to answer the question: what is the best way to obtain a random, proportionate sample when the tools we’ve always used (for North Americans, telephone surveys) aren’t working very well anymore?

Mixed Mode: Is it the Future?

The answer seems to be some form of “mixed mode” data collection. This means giving respondents several ways to participate in the survey. It could be by telephone, SMS, face-to-face, web, or even snail mail.

I’m not going to get into the pros (there are some) and the cons (there are many) of each mode, particularly in low- and middle-income countries, where internet, smartphone and literacy distribution vary widely. Although the combination and proportionality are different for every country, mixed mode could mitigate some sampling and data collection problems that pre-date and/or have been exacerbated by COVID. Letting people respond on their terms, rather than mine, seems like an obvious step to take.

Meet Respondents Where They Are

While watching this very interesting University of Chicago panel, it occurred to me that even though I haven’t got the same worries as US election pollsters who can’t get people to answer the phone, I could make my surveys better by providing response options that are more convenient for respondents. These tools exist and COVID has forced us to start using them more!

                      Staying on top of things during the storm is hard

For example, if younger participants prefer online surveys, they should have that option. If it is better to reach older, technophobic or less literate respondents through face-to-face interviewing in their homes, then get out the Kish grid. With careful questionnaire design to mitigate design effects and framing that accounts for selection effects, it is possible to do both in the same survey. Creative use of incentives could also help.
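For readers who haven’t met one, the Kish grid mentioned above can be sketched in a few lines. This is a simplified, hypothetical version of the selection table; real grids assign letters to questionnaires in fixed proportions, and the household listing follows a strict protocol (typically all eligible adults, oldest first).

```python
# Simplified Kish selection table: rows are the letter pre-assigned to a
# questionnaire, columns the number of eligible adults in the household.
# Values are the position (1-based) of the adult to interview.
KISH_TABLE = {
    "A": {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1},
    "B": {1: 1, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2},
    "C": {1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3},
    "D": {1: 1, 2: 2, 3: 2, 4: 3, 5: 4, 6: 4},
    "E": {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 5},
    "F": {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6},
}

def select_respondent(adults, grid_letter):
    """Pick one adult from a household using the Kish grid.

    adults: household members listed in a fixed order (e.g. oldest first).
    grid_letter: the letter pre-assigned to this questionnaire.
    """
    n = min(len(adults), 6)  # printed tables usually cap at 6 adults
    return adults[KISH_TABLE[grid_letter][n] - 1]

# Hypothetical household, oldest first; this questionnaire carries "C".
household = ["Asha", "Binod", "Chitra"]
chosen = select_respondent(household, "C")  # selects the second-oldest
```

Spread across many questionnaires, the letters give each adult in a household a roughly equal chance of selection, which is the whole point of the grid.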

Substituting one mode for another is rarely a solution. I’ve been resisting this approach through this whole COVID year. Running an online-only survey and expecting a representative sample in a country where many people lack smartphones, internet access or have difficulty reading is a big mistake. Younger, better-educated urbanites are already overrepresented in policy discussions. And don’t get me started about panels. Most aren’t, ahem, Pew-level quality.

Shorter Surveys Are Good

There’s more! Shifting away from long and complex face-to-face surveys will force clients to accept shorter questionnaires. As I’ve said before, making busy, less-educated, sometimes food insecure respondents slog through an hour-long survey on constitutional reform does not produce good data. Telephone and online surveys forcing simpler, shorter instruments is an unmitigated good.

COVID has forced us to rethink the ways we’ve always collected data. If one mode appeals more to one group and another group is easier to reach using a different mode, why not figure out how to use them both? It’s worth putting in the extra thought. Respondents do us a big favor by answering our questions. It is incumbent on us to make it as easy as possible for them.

Contact QGS and we can talk about ways to make mixed mode work for your issue and country.


Looking Forward to 2021. May It Be Better than 2020

Some platform reminded me I started Quirk Global Strategies in Istanbul 14 years ago this week. Looking back, the process that led to that decision could generously be described as “ad hoc.” As it turns out, I didn’t have “a year-long travel shutdown and projects shelved because of a virus” in the threat assessment I didn’t make. While I am thankful that the year wasn’t as catastrophic as it could have been, I hope to rise out of a defensive crouch in 2021.

Above all else, the disruption of 2020 showed me that the hands-on work I do in the field is the foundational value I provide my clients. All my clients benefit from the context I gain in the field and the problem-solving skills I develop there. Here are a few of the other things I’ve learned.

Polls Still Work

When I hear someone say “polls don’t work,” I look at them with the same degree of respect as I do someone who says “the earth is flat.” Survey research is based on the science of statistics, the principles of which are well-established. Nothing has changed. Furthermore, polls are a tool. You wouldn’t say “hammers don’t work,” unless you’re trying to hammer a screw. In that case you’d be right and you’d be better off trying a screwdriver.

I don’t know if this image belongs to Jason Boxt of 3W Insights, but that’s who I took it from.

For a variety of complicated reasons, probability sampling by telephone has become a less useful tool for some purposes in some places. One purpose that everyone has opinions about is measuring vote intentions of the polarized, diverse US electorate. But that’s not the purpose of most survey research. I have lots of problems with data collection in places I work, but the challenges facing US pollsters are not among them (btw the public pollsters did pretty well in the recent special Senate election in Georgia). Those of us who do strategic research can keep hammering away at nails with reasonable confidence that, as long as we keep a sharp eye on quality control and COVID-related obstacles, our data and the conclusions we draw from them are sound.

Telephone Interviews Continue to Replace Face-to-Face

Interviews once done face-to-face are increasingly being conducted on mobile phones via CATI. In the short run, I still have reservations about the representativeness of phone samples from many Eurasian, African and Middle Eastern countries. However, in the long run, this is a positive development. Data collection will be faster, easier to monitor and, once mobile penetration is evenly distributed, might even be more representative than face-to-face samples in some places. Additionally, telephone interviews require shorter, simpler questionnaires. This will result in better quality data. Hearing poorly educated, busy respondents trying their best to power through a 50-minute face-to-face questionnaire on constitutional reform in pre-tests is shame-inducing. With phone interviews, respondents can give their opinion on a bad questionnaire by hanging up.

Online Qualitative Is Terrible

There, I said it. We can’t just stop doing qualitative research because in-person groups are impossible. Unfortunately, Zoom is the only option we have right now. I’ve learned ways to mitigate its shortcomings. But we shouldn’t for one minute forget what we’re losing when we collect five or six people for a discussion that looks and sounds like Hollywood Squares. So much quality data get left on the table! It actually pains me.

Everyone knows group dynamics can undermine qualitative research. Now that groups work differently, I’ve seen how group dynamics can also be the brightest illuminator. I miss it. Zoom participants tend to respond to the moderator rather than the person sitting across from them. The result is a lot of “going around the table,” which I hate. I miss seeing reactions of discomfort or alarm or enthusiasm to a fellow participant’s comment, particularly on a sensitive topic. Additionally, being able to google the answer distorts participants’ self-presentation, making them appear more engaged or more knowledgeable than they are, especially on politics. Bored online participants have a thousand distractions right in front of them that they don’t have when sitting around a table together. Shorter groups with fewer participants mean less time for depth and breadth, and less opportunity for that lightning bolt to strike when an engaged group generates useful ideas.

I miss out, too, as an analyst. Of course it’s possible to listen to groups via a video or audio link. I’ve done it many times when it’s too dangerous to travel or when observer facilities are improvised. But I miss the dynamic of the “back room” where I can check and crosscheck ideas with my local colleagues and moderators. I miss hearing them react to participants’ thoughts and the context they give. They make me a better analyst and project designer.

In-depth Interviews Work Well Online

Here’s one positive thing I’ve learned. Online platforms like Zoom facilitate near-simultaneous translation. This has made it easier for me to personally conduct in-depth interviews with non-English speakers. I wouldn’t do it for all projects. But for program assessments or KIIs, I can dig more deeply and formulate follow-ups that synthesize what I’ve heard respondents tell me. It’s also fun to interact with new and interesting people while I am isolated at home.

Relationships Matter

I recognize that waiting out a pandemic in the Côte d’Azur does not engender much sympathy.

I received an email from a colleague in Georgia thanking me for a referral. He lamented the current situation and let me know that his firm has managed to hang on. I appreciated hearing the update. Having spent 14 years building these relationships, I miss them. I’ve skipped my usual trips to Washington, New York and London. The blocks of time I would have spent in Ukraine, Bangladesh, Iraq or Turkey I’ve instead spent in my apartment, staring at my monitor. The isolation has been one of the hardest parts of the last year. My clients and partners are my friends and are the best part of my job. Listening, having meals and drinks and sharing long car rides with them ground my insights in real life. Zoom can’t replace this loss.

As soon as I am vaccinated, I’ll get back on the horse. I’m looking forward to seeing as many of you out there as possible. Contact me if you’re looking for help navigating this complicated new world. Wishing everyone a more engaging, more productive 2021.


Six months into socially distant qualitative research and what have we learned? We’ve learned it’s hard. Online focus groups have a lot of flaws, many of which cannot be mitigated, especially in lower- and middle-income countries. But, like a lot of annoying things these days, they aren’t going away. We have to figure out how to collect the best data we can from them.

The default approach cannot be simply to transfer in-person groups to an online platform. Here are some useful questions to ask during the project design phase:

  • How will the quality of the data collected about this topic be affected by the limitations of the platform?
  • Who is being excluded from this project because they lack the technological capacity to participate? How will that affect the strategy that will be based on the research?
  • Can we collect the data we need in the amount of time we have, given shorter and smaller groups?
  • How can we take advantage of the fact that, logistically speaking, geography doesn’t matter anymore?

I’ve learned these lessons in the last few months.

Groups Must Be Shorter

Lengthy Zoom interactions are known to be enervating for everyone. It is much easier for online participants to become distracted, lose the thread of the conversation or leave the group entirely. Disengaged, bored participants – already a problem with many public policy topics — result in poor data. Making things worse, online moderators lack many of the usual tools they use to engage in-person participants.

Groups Must Be Smaller

Think of how you behave on a big Zoom call. Can you keep track of who is saying what every minute? Can you listen deeply without distraction so you can probe meaningfully on what a person is trying to communicate without the benefit of body language? Do you tune out and check Twitter? Limiting groups to five to seven participants makes life easier for moderators and less boring for participants who have to wait their turns to speak.

Know Who is Excluded

Online groups require participants to have a broadband or 4G internet connection, probably a tablet or monitor and the savvy to know how to use them. Many people in lots of countries don’t have these. Know who your platform excludes, then decide if that’s acceptable according to the goals of the project. Younger, wealthier, urban and tech savvy voters already tend to be overrepresented in public policy discussions. Platforms that by design exclude their older, rural and less connected counterparts exacerbate this dynamic. This is less of a concern in high- and middle-income countries. It is a serious concern in low-income countries or those where digital access is unevenly distributed.

A Quality Recruit Is More Important than Ever

With a smaller group, each voice carries greater weight. That’s why everyone has to be qualified. Spend more time on screener design and prescreening to ensure every participant is the exact person you need.

Prepare for Technical Problems

Resolving technical issues takes up valuable time, distracts participants and the moderator and disrupts the flow of the discussion. Be prepared to redo groups plagued by technical problems. 

The Platform Exacerbates Moderator Flaws

Moderators who are poor at fostering a group dynamic or controlling dominators in person will struggle to do so online. If you rely on the same moderators, investing in a course for them to update their skills could benefit them and you.

Topics that Are Illuminated by Group Dynamics Are Less Well Suited to Online

Idea generation, group dynamics and evolution of opinion as respondents learn from each other are what make in-person discussions so valuable for developing insightful strategies. Online participants are more likely to act as disconnected individuals responding to moderator-presented stimuli rather than as a group that responds to the ideas and thoughts of other participants. For a researcher who works on sensitive political and social topics, this is a huge loss. The most revealing groups are the ones where participants struggle with a complex topic and engage actively with prompts from the moderator and the views of their fellow participants.

The “Back Room” Doesn’t Exist

It’s more difficult to guide the reactions of inexperienced observers. It’s also harder for staff to seek clarifications, answer immediate questions or make alterations on the fly. When watching online groups, I miss hearing nationals’ reactions to the discussion and the context they provide. Their insights improve my analyses.

Time and Space Don’t Matter

If your moderators, field team and observers don’t have to travel, you have more flexibility on logistics, scheduling and geography. You also don’t have to coordinate and travel to remote regions to hear rural perspectives. Even groups that must be held in a specific area can be more geographically diverse when travel times, public transport and traffic considerations don’t matter. Homebound respondents might have more flexibility, allowing for a broader range of times for scheduling groups.

Have concerns about conducting online groups? Wonder if they are appropriate for your project and country? Contact QGS. We can help.

We can't just move in-person groups online



People all over the world are offering social media takes on the US Presidential election that range from gullibly wrong to dangerously misinformed. Because the stakes are so high, spreading misinformation, even accidentally, could have a serious impact on public perceptions of the legitimacy of the process. Understanding basic facts about the US system and the dynamics shaping the race will help you avoid contributing to an information stream already filled with burning garbage.

  • Polling American Elections is Hard and Most Pollsters Try to Get it Right: It’s lazy to fall back on “but the 2016 polls were wrong!” When every adult in a country is both eligible and likely to vote, random selection for election surveys is easy. Everyone qualifies for the survey. In the US, about 70% of the electorate is registered to vote and only 55%-65% of those cast a vote. Pollsters have to apply art and science to predicting the profile of likely voters. Sometimes their assumptions are wrong.

    Actually Not That Funny

    Most pollsters operate in good faith though and are transparent about their selection methodology. Find out which pollsters are hacks and learn how to interpret the good ones’ methods. Amplifying garbage polls undermines public confidence in the science as well as the outcome of the election.

 

  • It’s Not 2016, or 1968. Many of us can’t forget 2016. We should. It is not instructive for 2020. The most important difference is that Trump is a very weak incumbent. Incumbent presidents enjoy such huge advantages that they are typically very hard to beat (Obama, GWB, B. Clinton). At his current numbers, however, Trump’s re-election would be ahistorical. In defiance of common sense, he is “playing the law and order card.” It’s NOTHING like when Nixon tried it in 1968. Nixon was running for an open seat against an opponent (Hubert Humphrey) who was perceived as weak on law and order! Trump IS the incumbent and also, significantly, a career criminal. Law and order is HIS responsibility, and voters think he’s not been doing a great job at it. As a challenger, former VP Biden is reasonably well-liked and very well-known, differentiating him from Democratic challengers in past years (John Kerry in 2004, for example, was less well-known and more easily defined). Finally, in 2016 two very unpopular, well-known candidates competed for an open seat. That’s a completely different campaign dynamic. Stop saying “but 2016!”

 

  • Many Data Files Are Publicly Available for Use (and Abuse): No, really, ANYONE can go to Ohio’s secretary of state website and download the entire state’s voter file, even Russians! Michigan’s requires a simple FOIA request. Russians figured that out, so you can too. Nearly every state’s voter registration records are accessible for free or a small cost. No one needs to “hack” anything to get them. Contribution records at the Federal Election Commission are also publicly available. Most states also provide up-to-date counts of how many mail ballots have been sent out and received, and so much more. Every campaign runs on these data. The flip side is that legally obtained public data can also be used for evil purposes. Get to know the secretaries of state’s websites and learn to tell the difference.

 

  • Look for Vote Suppression Rather Than Fraud: In the US, states and counties administer elections. There is neither a central voter database nor administrative structure. With more than 3000 counties, coordinated, large scale interference in vote counting is extremely hard to do. Voter fraud is extremely rare. Most American election administrators at the state and county levels are non-partisan and professional. However, political power is a hell of a drug. Partisan counties and states have other tools at their disposal, such as voter file purges, closure of polling places in particular neighborhoods, using security forces for intimidation or tactical COVID lockdowns that could contribute to vote suppression. Voting and election fraud is rare, hard to do at scale and easy to detect. Vote suppression is well-documented, much easier to do and harder to detect. Look for it.

 

  • There is No “Election Day” Anymore: Nearly 25% of votes were cast by mail in 2016 and the share will be higher in 2020. Many Americans, like me, will receive ballots by mail in mid-September and will send them back immediately. Elections in Oregon and Washington have been entirely vote by mail (VBM) for years. Voters and administrators are comfortable with the process. Other states, like California, have a hybrid system (65% vote by mail). Still other states make it very hard to vote by mail. COVID has forced those states to allow forms of VBM to diminish the risk of spreading the virus. Because of COVID chaos, it’s hard to know which voters will choose VBM if they have the option. Despite the fact that VBM fraud is extremely rare, that it advantages neither party when done at scale and that voters who have it love it, some forces have chosen to politicize it (while at the same time encouraging their supporters to…vote by mail). Here’s a solid guide to what’s important to understand about VBM. Don’t buy into BS.

    Your Ballot May Vary

 

  • We Won’t Know the Results on “Election Night:” We all love watching the networks “call” races as votes are counted and “precincts report” on election night. That’s not going to happen in 2020. Get over it and go to bed early. States start counting ballots on election day after polls close, but how they process mail ballots differs. It may take days or even weeks (hello, California!) to know the results. Because in-person votes may be demographically different than mail votes, we shouldn’t make any calls until mail votes are counted. Anyone, especially the President, who draws conclusions about the results before mail ballots are counted is willfully undermining the process. Don’t help him. Count the votes first, then announce results. We can wait.

 

With traditional focus groups off the table during physical distancing, qualitative researchers face options that may require too many tradeoffs to make the effort worth the costs. The challenges for qualitative research could be even greater than those facing quant, particularly for social or opinion research projects.

If you’re interested in challenges facing quantitative researchers, check out my last two posts.

What Are My Options?

If your qualitative needs can be met with chatroom-style online focus groups, you’re in good shape. It continues to be a solid technique for ad and message testing with larger audiences and for mixing qual and quant methodologies. Recruiters report that homebound respondents have the time and inclination to participate in a variety of formats. This is good news.

No FGDs for now, whether in a room or under a tree

Focus group discussions present a greater challenge, however. The most valuable part of a focus group is the personal interaction between participants that an engaged moderator can elicit through thoughtful probing and skillful management of group dynamics. If the success of your project depends on getting beyond top-of-mind responses, generating new ideas, and exploring feelings about abstract concepts, remote focus groups on Zoom or other proprietary platforms fall short. I worry that so much illuminating data will be distorted or lost that it’s worth asking “what’s the point?”

Do You Enjoy Zoom Meetings? I Don’t

Think of your last Zoom(s) with eight friends or colleagues. Were you satisfied with the quality of interaction with others on the call? Did you enjoy it? How engaged were you with the topic? After more than two months of physical distancing, Zoom interactions are proving to be, at best, unsatisfying and, at worst, exhausting. There are good reasons for this.

Axios technology editor Scott Rosenberg articulated my misgivings about video-conferencing as a qualitative tool.  Here’s what he writes:

Videoconferencing imposes cognitive and psychological frictions and aggravates social anxieties. As experts in human-computer interaction point out, using Zoom means putting on a show for others without being able to rely on the cues we primates depend on in physical encounters.

  • There’s usually a slight audio lag, as well as mute-button mistakes and “your internet connection is unstable”-style dropouts.
  • We’re also often opening a chunk of our homes for others to view, and that can trigger social worries.
  • By showing us our own image as well as others’, Zoom ensures that we will critique ourselves in real time.
  • On top of standard-grade performance anxiety, the “big face” image that Zoom uses by default in its “speaker view” can trigger a “fight-or-flight” surge of adrenaline, writes Jeremy Bailenson, founding director of Stanford’s Human Computer Interaction Lab.
  • If you switch to the “Hollywood Squares”-style “gallery view,” you’re confronted with a sea of separated faces, which is not how evolution has adapted us to group interactions.
  • As L.M. Sacasas observes, you can’t really achieve true eye contact with anyone: if you look right into someone else’s eyes, you will appear to them as if you aren’t looking right at them — to achieve that, you have to look right at the camera.
  • Nonetheless, the whole experience of a videoconference feels to us like an extended bout of direct staring at other people staring back at us. That’s draining, which is why it’s not what actually happens when we meet in person, where we only occasionally look right at one another.

How will participants react when deprived of the visual and social cues that help them interpret the reactions of others? Will feeling uncertain, frustrated, insecure, or tired influence their opinions, particularly on sensitive or political topics? How will these dynamics shape strategy built on the data? Additionally, Rosenberg’s analysis is helpful for contextualizing the reactions of citizens of high-income countries who are comfortable interacting with technology. How will citizens of low- or middle-income countries, or those with different cultural expectations for social interaction, respond? I don’t know if we know the answers to these questions.

Traditional focus groups have many biases, all of which are well-known and factored into qualitative analysis. We know people respond to questions differently in a group setting than they would alone. Online video-conferencing multiplies these biases and adds others that we’ve barely begun to understand. Not only are there different biases to adapt to, the medium itself diminishes the greatest advantage of a focus group: group/moderator interaction.

So What Can We Do?

Qualitative research can’t grind to a halt because of physical distancing, so we have to find ways to mitigate these problems. Here are some suggestions.

  • Assess whether videoconferencing is the right tool for the job. Does the topic demand strong rapport between moderator and participants to explore controversial social issues? If so, your results may lack depth, mislead, or miss important findings entirely.
  • Write a shorter, simpler moderator’s guide. This medium is not appropriate for a two-hour long discussion on constitutional reforms.
  • Recruit fewer participants. Aim for quality of responses, rather than quantity.
  • Use a moderator who is well trained and experienced in managing the complicated dynamics of this type of discussion. A high-energy, animated moderator may engage remote participants better than one with a more low-key personality.
  • Manage client expectations. Video-conference groups and in-person groups are not interchangeable. They will provide different kinds of data. Clients who expect traditional focus group data are likely to be disappointed.
  • To avoid the impulse to compare data collected in a traditional focus group with data from a video-conference, start fresh with a new project.
  • Be highly cognizant of data privacy protocols. Privacy laws apply more stringently to video that is shared and stored.

Video-conference focus groups are absolutely possible, but it will be hard to convince me they are advisable for all projects. This is particularly true for social and opinion research. All researchers have to adapt to our changing environment. The first question before launching a project should always be: what is my research goal, and what tools can I use to reach it? Then you can decide whether having some data is better than having no data. Contact Quirk Global Strategies and we can help you decide.

 

Everything is up in the air! No one knows what’s going to happen next! At Quirk Global Strategies, we’re used to unpredictability.

We’ve developed strategies to manage it. No matter where in the world we’re working, every project starts with a discussion of the research goals. What questions need to be answered? By whom? How will the research be used? When? Answering these basic questions lays the foundation for the project and makes it easier to answer more complicated design questions down the road. This process helps us adjust our research plans to changing circumstances.

A meter apart, ladies

All survey research is a snapshot in time. Using it to predict future views is always a mistake. In the Coronavirus era, when the long-term social, health and economic impacts have yet to fully hit, trying to use today’s data to predict what people will be thinking in months, or even weeks, is a waste of money.

Views on issues can be harder to shift than you’d think. As a political pollster in the US in September 2001, I assumed the 9/11 attacks were the kind of event that would radically shift perceptions on political topics. Yet after the initial feelings of fear and insecurity wore off, pollsters found that voters’ priorities for elected officials and party preferences had changed little. This is the closest analogy in my political career to what we’re facing now. The economic and social impacts of COVID-19 will likely be far more far-reaching than 9/11’s. But right now, we just don’t know.

Once you decide on the goals of your research, ask yourself these questions:

Do You Need to Know Now?

If you need to understand how the current environment is shaping public opinion on a policy question today, or if you need to know how consumers have adapted their behavior to physical distancing, you should consider moving forward. If you’re planning to launch an advocacy campaign in six months or a year, you should probably wait, especially if you have limited resources to change your strategy or go in the field again.

Do You Have the Resources to Respond to the Findings?

There’s nothing worse than binning a costly research project because the landscape has shifted, rendering your data useless. Can your plan be changed if the data reveal something unexpected or counterintuitive? If not, you should wait. Good research often reveals such findings, particularly in an uncertain environment, and it’s wasted if you can’t incorporate it.

Do You Have the Resources to Poll Again?

For many campaigns, knowing what people are thinking right now, at the beginning of the crisis, is critical information. A snapshot of attitudes in Spring 2020 will provide a baseline for tracking attitudinal shifts later in this year and in the years to come. It might also reflect the “worst case scenario” for your issue, which is useful to know when preparing a campaign. Depending on your timeline and your goal, you will probably need to poll again to update your assumptions. Add that to your budget.

Is Some Data Better Than No Data?

In the difficult environments where QGS typically works, we don’t let the perfect be the enemy of the good. We always operate under budgetary, security and time restrictions that force us to make concessions. But since we fully understand the goals of each project, we know which methodological trade-offs we can live with and which we cannot. We adjust our sampling plan to adapt to realities on the ground, report our methodology transparently and adjust our analysis and strategic recommendations accordingly. If you need actionable data, even if it’s not perfect, you have more flexibility in your data collection options.

Who Do You Need to Talk To?

The answer to this question will help you decide which data collection mode is optimal, given your research goal.

None of this, sorry

If you need a large, proportionate, general population sample in a country where the only way to collect data is via face-to-face interviews, you’ll need to wait. Face-to-face interviews and traditional focus groups are simply off the table right now. Sadly, it’s impossible to predict when interviewers will be able to go back into the field and discussants will be able to sit around a table talking to a live moderator.

The good news is telephone survey research to mobiles and landlines is thriving in Europe, the Gulf and North America. Interviewers stationed safely at home are calling voter file or RDD samples, like always. There are even anecdotal reports of marginal response rate improvements. If your universe has high mobile/landline penetration, phone surveys remain the best way to collect a random sample of a general population universe.

Data collectors in many middle- and low-income countries are on the cusp of being able to field random sample mobile surveys, particularly of urban populations. Before we commit to this approach, however, we need a full understanding of which groups are underrepresented (usually older, rural, lower SES) and which are overrepresented (younger, urban, higher SES) in these samples. We pay particular attention to the gender split: In some places it’s easier to interview women on the phone than in person. In others, men control access to the mobile phone. Then we decide, based on the goals of the research, if we can live with the trade-off.

Non-Probability Options Abound

If you’re interested in non-random views of urban, younger, educated, higher-SES respondents with mobile phones or internet access, there is no shortage of methodologies available. This is particularly true in high- and middle-income countries, but these populations are accessible via panels even in many low-income countries. Methodologies such as online surveys, SMS surveys, IVR and social media analytics can also be combined to give a richer, more contextual view of the landscape. Keep in mind that these modes sacrifice randomness and are not a substitute for a proportional sample. Review the research goal, then decide if that matters.

Quirk Global Strategies Can Help

So should you poll now? Return to the goal of your research. Look at your budget and decide which trade-offs you can tolerate and the ones you cannot. Email us and we can help you think through your options and suggest the best one. We might even suggest waiting.

Curious about how opinion polling can proceed when entire countries are closed down? I was, so I reached out to data collectors around the world to better understand what’s possible now and what kind of projects should wait.

Pollsters who use telephone interviews are in a good position to keep on working. Unfortunately, those who rely on interviewers who need to move through communities and interact with people in their homes are going to have a rougher time. In the short term, researchers will have to put those projects on hold or consider new methodologies.

Telephone Surveys Are Happening

From the early days of COVID-19, phone banks serving high-income countries of North America, Europe, and the Middle East started adapting. Obviously, big rooms filled with interviewers sitting in front of terminals are out of the question. In response, data collectors I’ve spoken to quickly set their interviewers up on secure systems that allow them to call from home. This is good news for pollsters and interviewers.

Given the unrelenting demand from election pollsters, this decision was a no-brainer for US-based firms. Those with global calling capacity and multilingual interviewers can also transfer projects to offices in countries where workforces have been less affected by shutdowns. Response rates might even improve, because people stuck at home could be more willing to respond to surveys. In the US, at least, it would be hard for response rates to get worse.

For those who field in high-income countries, this is all good news. It’s important to keep a couple of things in mind, however. In the US, ordinary election-year demands stretch collectors’ capacity to and often beyond its limits. Limited remote interviewing capacity will exacerbate the problem. Management will also be stretched as quality-control and training demands grow. Patience, oversight and regular communication with data collectors are critical.

If your project in high-income countries in North America, Europe or the Middle East needs to move forward, experienced phone houses with strong management can get it done. If you can hold off, or wait for a lull in your collector’s schedule, the quality of your data will likely be higher. I analyze my data daily as they come in, with an eye for anomalies. It’s also smart to build in extra days for call-backs, or in case things go haywire and you need to replace interviews or do extra QC.

Face-To-Face Surveys Are Problematic

As a specialist in survey research in conflict and post-conflict environments and low/middle-income countries, about 80% of my projects rely on face-to-face (F2F) interviews. Phone interviews are not an option in places where mobile penetration is low or unevenly distributed. In normal times, in-person interviewers face obstacles such as transport difficulties, bad weather and conflict-related threats. Those problems are manageable. But add a highly contagious disease spread by personal contact, and F2F data collection suddenly becomes untenable. It could stay this way for a while.

Whether F2F data collection is viable depends on a country’s infection rate and whether restrictions on movement and social contact have been put in place. This fast-changing situation makes planning almost impossible. For example, I have a survey ready to field in India. Up until a week ago, data were being collected normally in many states, and slowly, with extra precautions, in others. On 25 March, the entire country was locked down to stop the spread of the virus. Thankfully, we had opted to wait until the situation clarified and had not begun fieldwork.

The situation in many African countries is also ambiguous: some have few infections and life continues normally, for the time being, while others are suffering badly from outbreaks. Ukraine has a serious outbreak, and F2F data collection there is on hold. Quant work in the Philippines has also slowed, if not stopped entirely.

Experienced, ethical data collectors are in the best position to offer advice on the local situation. The final judgment, however, always belongs to the researcher. No one, interviewer or respondent, should be put in danger for the sake of survey research.

Data collectors in some F2F countries have begun experimenting with phone surveys and online panels. Error and biased samples remain serious concerns. A complete understanding of how these samples under- or over-represent populations is critical; some populations will simply not be reachable by these modes. Survey work in difficult environments often requires methodological trade-offs. If you can live with the increased and unpredictable error of non-probability samples, or if you can afford to be experimental, I say give it a try.

I Can Field a Survey. Should I?

Of course, pollsters need to consider ethical questions before fielding surveys during a global pandemic. Should interviewers be sent into the field, handhelds ready, to conduct interviews in people’s homes? Absolutely not. Should worried or scared respondents be pestered with questions about topics less important than life, death and economic survival? I hear the same argument against conducting surveys in conflict zones, and often the concern is overstated. Many people, especially in low- and middle-income countries, are never asked their opinion about anything. They’re usually happy to oblige. Respondents can even be more forthcoming in times of insecurity and unpredictability. Data collected during this period will be a real “snapshot of a strange time” and will be fascinating to track going forward. As I do in conflict zones or closed spaces, I let respondents tell me if they are uncomfortable or unwilling to participate, and I look at response rates, drop-offs and interviewer comments before making judgments about respondent willingness.

The need to understand public opinion during and after this crisis is not going to go away. Economic, political and social disruption could shape perceptions in unpredictable ways in low/middle-income and upper-income countries alike. Views that once seemed hardened and unlikely to shift may change radically, or not at all. Whether you’re working in a high-income country of North America, Europe or MENA, or looking at surveys in lower/middle-income countries, Quirk Global Strategies can help you sort through your options. Contact us through the link on this site.