
2020 has been something else, hasn’t it? It’s been discouraging trying to run every project from home — even from the Côte d’Azur. I’ve built my business on my ability to engage personally with people and the environments that shape their opinions. Without this interaction, I feel like I’ve lost one of the most important tools I use to design insightful research. Couple that with the challenges presented by a near-complete shutdown of quantitative and qualitative data collection in many countries, and it’s a challenging time to be a pollster.

Everyone knows lots of things aren’t working. But let’s take a look at what is. The transition to CATI interviewing in some places, sped up by COVID, is a genuine bright spot.

CATI Phone Surveys

If you have survey projects in Western Europe or North America, you’re in great shape. Other than slower turnaround times and increased technical demands placed on data collectors, phone interviewing continues using CATI (Computer-Assisted Telephone Interviewing). Strategic questions about whether it’s a good idea to poll in such an unpredictable environment remain, but there are no health concerns or technical barriers to worry about.

It’s more complicated for my clients. I work in low- and middle-income countries where interviewers administer general population surveys face-to-face (F2F). It’s the only way to sample rural, poorer or older populations. COVID ended this type of data collection overnight almost everywhere. Even though the main tool I use to conduct surveys disappeared, my need to understand public opinion did not. Responsible field firms have scrambled to figure out how to stay afloat and design alternative means to get the work done safely.

During the dark depths of France’s total lockdown I spent a lot of time on helpful ESOMAR and WAPOR webinars. Designed to help research professionals figure out how to operate in this new environment, the seminars brought together data collectors from around the world to talk about what was working and what was not.

There was plenty of bad news. Low and poorly-distributed mobile telephone penetration in parts of Africa, Eurasia and Latin America means that nationally representative samples aren’t possible for now in lots of countries.

But some vendors highlighted the progress they’ve made in the transition to CATI in countries where mobile penetration has been growing to the point that random sampling is becoming practical. Accordingly, the Big Kids (certain departments of the USG, Pew and Gallup) are driving the transition. They have been helping local data collectors build technical capacity, improve quality control and retrain interviewers. This benefits small fish like me. Thanks, guys!

Case Study: India Makes the CATI Transition

Over the past 10 years I have fielded multiple F2F surveys in India. Because of its geographic and demographic complexity, it has always been one of the most difficult countries to randomly sample with anything but a gold-plated budget. Clients with normal budgets have to accept trade-offs during sample design. With a survey scheduled to field in mid-March, my client and I had already come to terms with those trade-offs. I designed a sample that balanced the goals of the research with the confines of budget and time using F2F. Sadly, COVID lockdowns pushed the survey into data collection purgatory, unlikely to be released unless I found an alternative to F2F.

Since I last fielded an India survey a few years ago, CATI has become a viable data collection option. Thanks to all that Zooming, I learned that mobile penetration there has reached 90%. The darkness of French lockdown lightened.

With the client’s consent, I prepped the survey to switch modes. Most significantly, we translated it into 11 languages, compared with the four or five I’d used in previous face-to-face surveys, which sampled fewer states. After translation backchecks (all of them!) and field testing, fieldwork started.

CATI interview distribution in India: what a nationally representative sample looks like. Beautiful!

To my delight, fieldwork produced a genuinely national sample that includes Indian states proportionate to their contribution to the population as a whole, at only slightly greater cost. Some weighting was applied to balance demographics, but nothing alarming. So not only has CATI become a viable option in India, it’s actually preferable to the old way. Hey, thanks COVID!
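
For the curious, here’s what light demographic weighting looks like in practice. This is a minimal post-stratification sketch: the single gender dimension, the benchmark shares and the data are all hypothetical, and a real project would rake across several dimensions (state, age, gender, urbanity) against census figures.

```python
import pandas as pd

# Hypothetical respondent-level data; a real file would have one row
# per completed interview with several demographic columns.
df = pd.DataFrame({"gender": ["F", "F", "M", "M", "M", "M"]})

census_share = {"F": 0.48, "M": 0.52}            # population benchmarks
sample_share = df["gender"].value_counts(normalize=True)

# Post-stratification weight = population share / sample share, per cell.
df["weight"] = df["gender"].map(lambda g: census_share[g] / sample_share[g])

print(df.groupby("gender")["weight"].first())    # women get upweighted
print(round(df["weight"].mean(), 3))             # mean weight stays at 1.0
```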

Where Else Is the CATI Transition Happening?

Advances in mobile penetration have made a big difference in sampling complex countries like India, as well as in Indonesia and the South Pacific. In these countries, the distribution of respondents among hundreds of sparsely populated islands makes random sampling using F2F complicated and expensive. I have been doing CATI surveys for internal use with a client’s proprietary software in Bangladesh for the last few years, so I know it’s possible. Professional data collectors are doing so as well. Firms are also conducting promising experiments in Kenya and Nigeria. If you’re curious about other countries, contact me and I can look into options.

Lest my exuberance seem irrational: CATI remains off the table in many places in Africa, Eurasia, Latin America and the Middle East. Because they exclude too many important voices, I resist the siren call of online panels, SMS and email surveys. Younger, urban and well-off respondents are easy to sample because they have access to these tools and are comfortable using them. However, such voices are already over-represented in debates over public policy. I won’t let COVID distort the debate even further by opting for samples that by design exclude the rural, the poor, the illiterate and those without adequate bandwidth.

Next week: Online qual: the Bad and the Ugly. Hopefully some good, too.


Election polling in the US has a problem. Or, depending on who you ask, more than one problem.  Since the 2016 election, many big brains have tried to fix these issues with varying degrees of success.

I have problems too, but mine aren’t the same. Pre-COVID, my biggest problems were long fielding times, QC issues and inaccessible sampling points. I don’t have to deal with response rates in the low single digits.

For once this post isn’t about my problems. It is about what I have learned from other folks’ attempts to answer the question: what is the best way to obtain a random, proportionate sample when the tools we’ve always used (for North Americans, telephone surveys) aren’t working very well anymore?

Mixed Mode: Is it the Future?

The answer seems to be some form of “mixed mode” data collection. This means giving respondents several ways to participate in the survey. It could be by telephone, SMS, face-to-face, web, or even snail mail.

I’m not going to get into the pros (there are some) and the cons (there are many) of each mode, particularly in low- and middle-income countries. Internet, smartphone and literacy distribution vary widely. Although the combination and proportionality are different for every country, mixed mode could mitigate some sampling and data collection problems that pre-date and/or have been exacerbated by COVID. Letting people respond on their terms, rather than mine, seems like an obvious step to take.
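
To make that concrete, here’s a toy sketch of the kind of back-of-the-envelope coverage check I’d run during design. All the numbers are invented for illustration; in a real project they’d come from telecom statistics, census data and prior surveys.

```python
# Invented per-mode coverage rates (share of each group reachable).
COVERAGE = {
    "urban, younger": {"phone": 0.95, "web": 0.80, "f2f": 0.90},
    "urban, older":   {"phone": 0.85, "web": 0.40, "f2f": 0.90},
    "rural, younger": {"phone": 0.70, "web": 0.25, "f2f": 0.85},
    "rural, older":   {"phone": 0.45, "web": 0.05, "f2f": 0.85},
}

def reachable(modes, threshold=0.60):
    """Flag whether each group is reachable by at least one chosen mode."""
    return {group: max(rates[m] for m in modes) >= threshold
            for group, rates in COVERAGE.items()}

print(reachable(["phone", "web"]))          # rural, older fails the check
print(reachable(["phone", "web", "f2f"]))   # adding F2F covers everyone
```

Even a crude check like this forces the right conversation: which mix of modes covers whom, and at what cost.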

Meet Respondents Where They Are

While watching this very interesting University of Chicago panel, it occurred to me that even though I haven’t got the same worries as US election pollsters who can’t get people to answer the phone, I could make my surveys better by providing response options that are more convenient for respondents. These tools exist and COVID has forced us to start using them more!

                      Staying on top of things during the storm is hard

For example, if younger participants prefer online surveys, they should have that option. If it is better to reach older, technophobic or less literate respondents through face-to-face interviewing in their homes, then get out the Kish grid. With careful questionnaire design to mitigate design effects and framing that accounts for selection effects, it is possible to do both in the same survey. Creative use of incentives could also help.
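
For readers who’ve never seen one: the Kish grid is just a pre-assigned lookup table that tells the interviewer which household member to interview, so selection within the household stays random. Here’s a simplified sketch; the table follows the structure of Kish’s original, but treat the exact cell values, and the uniform assignment of table versions, as illustrative.

```python
import random

# Each questionnaire is pre-assigned a table version before fieldwork
# (the real procedure assigns versions in fixed proportions).
KISH_TABLE = {  # version -> {number of adults: which adult to interview}
    "A":  {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1},
    "B1": {1: 1, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2},
    "B2": {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2},
    "C":  {1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3},
    "D":  {1: 1, 2: 2, 3: 2, 4: 3, 5: 4, 6: 4},
    "E1": {1: 1, 2: 2, 3: 3, 4: 3, 5: 3, 6: 5},
    "E2": {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 5},
    "F":  {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6},
}

def kish_select(adults, version):
    """Pick one adult from a roster listed in a fixed order
    (conventionally males oldest to youngest, then females)."""
    n = min(len(adults), 6)          # households with 6+ adults share a row
    return adults[KISH_TABLE[version][n] - 1]

version = random.choice(list(KISH_TABLE))
print(kish_select(["M, 61", "M, 34", "F, 58"], version))
```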

Substituting one mode for another is rarely a solution. I’ve been resisting this approach through this whole COVID year. Running an online-only survey and expecting a representative sample in a country where many people lack smartphones, internet access or have difficulty reading is a big mistake. Younger, better-educated urbanites are already overrepresented in policy discussions. And don’t get me started about panels. Most aren’t, ahem, Pew-level quality.

Shorter Surveys Are Good

There’s more! Shifting away from long and complex face-to-face surveys will force clients to accept shorter questionnaires. As I’ve said before, making busy, less-educated, sometimes food-insecure respondents slog through an hour-long survey on constitutional reform does not produce good data. That telephone and online surveys force simpler, shorter instruments is an unmitigated good.

COVID has forced us to rethink the ways we’ve always collected data. If one mode appeals more to one group and another group is easier to reach using a different mode, why not figure out how to use them both? It’s worth putting in the extra thought. Respondents do us a big favor by answering our questions. It is incumbent on us to make that as easy as possible.

Contact QGS and we can talk about ways to make mixed mode work for your issue and country.


One of my favorite arguments against political opinion research in autocracies has been that surveys measure little more than the degree to which state-driven propaganda has penetrated. When voters hear no opposing views in the media and political opponents are marginalized to the point of irrelevance, public opinion almost always favors the autocrat. How could it not?

Wait—wasn’t the internet supposed to have levelled the field? Weren’t social media and messaging platforms supposed to have given oppositional movements the tools to circumvent state-driven communications and reach voters directly?

That argument hasn’t been viable for at least a decade, if it ever was. Even the dumbest autocrats have proven better able to exploit the opportunities presented by social media than their less well-organized and -funded opponents. As the tools have become more sophisticated, the autocrats have evolved with them.

The Savvy Autocrat Uses Disinformation

The predictability of the autocratic media space has vanished. By economically strangling independent media and cultivating dependent outlets, autocrats have more tools to influence public opinion at their disposal than they did when they dictated content to newspapers, radio and TV stations directly. The autocrat who can combine a nominally “free” but compliant media with savvy social media operators who understand the perverse incentives of audience engagement gets a head start.

                                                   It’s complicated

The modern autocrat now has extraordinary capacity to shape public opinion, often beneath the radar. Politically-aligned and compliant mass media outlets launder and amplify social media-driven fringe themes, targeted to receptive audiences. When they turn their attention to democratic processes and allies, the results can be devastating. Perhaps you’ve heard of “Stop the Steal.” If not, google it. It’s been in the news. It will serve as a textbook example of how to use disinformation to build public support for solutions to problems that don’t exist.

Unlike in the “old days,” the information environment in autocracies is extremely fluid. Within a country, domestic and foreign players can team up to spread mutually beneficial disinformation. Those players may have aligning, but temporary, political and economic interests; alliances can be transactional. Different messages on different topics can be micro-targeted at diverse audiences with no accountability.

Sometimes a “firehose of BS” is all that’s needed to confuse citizens and encourage them to check out. True, false or contradictory, disinformation exhausts the audience to the point that it concludes “everyone is lying, you can’t believe anything these days.” That’s good enough for those for whom muddied waters count as a victory.

Sink or Swim in Polluted Waters

But let’s say you want to advance a policy, or shore up public support for democratic processes. In a polluted environment, understanding how the public views the issues is important, but it’s only part of the picture. You also need to understand the forces that are shaping those views. This is especially true in backsliding democracies where autocrats benefit from the confused and checked-out population they created.

Here are some key questions to answer and track. What strategic objective is the disinformation designed to achieve, and whose? Which messages are getting traction? Where are they coming from and who are the targets? How are platforms’ punitive actions reshaping the information environment? Governments, businesses and advocacy organizations all need to understand this.

“But my topic isn’t terribly controversial or political. Why should I care?” If you are operating in a politicized environment, and your agenda or company has the potential to challenge powerful economic interests, you need to fully understand not only public opinion, but also the potential forces arrayed against you. Knowing this, you can launch your advocacy campaign, voter education program or build your brand knowing in advance where the risks lie and the best way to respond.

Thinking Beyond Traditional Qual and Quant

Strategic disinformation is making the places QGS works in and the topics we work on increasingly complicated. Because we care about actionable data, we’ve been thinking a lot about how to expand the definition of “measuring public opinion” to meet these new realities. Our work on disinformation projects in Ukraine, a testing ground for techniques that are now common everywhere, has taught us a lot about which tool is best suited to answer which question. Because disinformation dynamics are different in every country, there is no one-size-fits-all approach. It’s hard to figure out!

When it comes to measuring the impact of disinformation, traditional qualitative and quantitative research are not the only, or even the best, tools. It’s a multifaceted problem that requires a multifaceted solution: social media mining, media monitoring and usage studies all help illuminate the sources of disinformation and determine which messages are cutting through and which can be ignored. We can combine these tools to give the fullest picture of your messy information environment.

Contact us and we can help you understand the challenge you’re facing.


Looking Forward to 2021. May It Be Better than 2020

Some platform reminded me I started Quirk Global Strategies in Istanbul 14 years ago this week. Looking back, the process that led to that decision could generously be described as “ad hoc.” As it turns out, I didn’t have “a year-long travel shutdown and projects shelved because of a virus” in the threat assessment I didn’t make. While I am thankful that the year wasn’t as catastrophic as it could have been, I hope to rise out of a defensive crouch in 2021.

Above all else, the disruption of 2020 showed me that the hands-on work I do in the field is the foundational value I provide my clients. All my clients benefit from the context I gain in the field and the problem-solving skills I develop there. Here are a few of the other things I’ve learned.

Polls Still Work

When I hear someone say “polls don’t work,” I look at them with the same degree of respect as I do someone who says “the earth is flat.” Survey research is based on the science of statistics, the principles of which are well-established. Nothing has changed. Furthermore, polls are a tool. You wouldn’t say “hammers don’t work,” unless you’re trying to hammer a screw. In that case you’d be right and you’d be better off trying a screwdriver.

I don’t know if this image belongs to Jason Boxt of 3W Insights, but that’s who I took it from.
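
If a reminder helps, the arithmetic underneath every probability sample is short and well established. Here’s a minimal sketch of the standard margin-of-error calculation; the n=1,000, 95%-confidence poll is just the textbook case.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1000: +/- {margin_of_error(1000):.1%}")   # about 3.1 points
# Clustered F2F designs widen this; a design effect of 2.0 is common.
print(f"with deff=2: +/- {margin_of_error(1000) * math.sqrt(2):.1%}")
```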

For a variety of complicated reasons, probability sampling by telephone has become a less useful tool for some purposes in some places. One purpose that everyone has opinions about is measuring the vote intentions of the polarized, diverse US electorate. But that’s not the purpose of most survey research. I have lots of problems with data collection in the places I work, but the challenges facing US pollsters are not among them (BTW, the public pollsters did pretty well in the recent special Senate election in Georgia). Those of us who do strategic research can keep hammering away at nails with reasonable confidence that, as long as we keep a sharp eye on quality control and COVID-related obstacles, our data and the conclusions we draw from them are sound.

Telephone Interviews Continue to Replace Face-to-Face

Interviews once done face-to-face are increasingly being conducted on mobile phones via CATI. In the short run, I still have reservations about the representativeness of phone samples from many Eurasian, African and Middle Eastern countries. However, in the long run, this is a positive development. Data collection will be faster, easier to monitor and, once mobile penetration is evenly distributed, might even be more representative than face-to-face samples in some places. Additionally, telephone interviews require shorter, simpler questionnaires. This will result in better quality data. Hearing poorly educated, busy respondents trying their best to power through a 50-minute face-to-face questionnaire on constitutional reform in pre-tests is shame-inducing. With phone interviews, respondents can give their opinion on a bad questionnaire by hanging up.

Online Qualitative Is Terrible

There, I said it. We can’t just stop doing qualitative research because in-person groups are impossible. Unfortunately, Zoom is the only option we have right now. I’ve learned ways to mitigate its shortcomings. But we shouldn’t for one minute forget what we’re losing when we collect five or six people for a discussion that looks and sounds like Hollywood Squares. So much quality data get left on the table! It actually pains me.

Everyone knows group dynamics can undermine qualitative research. Now that it works differently, I’ve seen how group dynamics can also be the brightest illuminator. I miss that. Zoom participants tend to respond to the moderator rather than to the person sitting across from them. The result is a lot of “going around the table,” which I hate. I miss seeing reactions of discomfort or alarm or enthusiasm to a fellow participant’s comment, particularly on a sensitive topic. Additionally, being able to google the answer distorts participants’ perceptions, making them appear more engaged or more knowledgeable than they are, especially on politics. Bored online participants have a thousand distractions right in front of them that they don’t have when sitting around a table together. Shorter groups with fewer participants mean less time for depth and breadth, and less opportunity for that lightning bolt to strike when an engaged group generates useful ideas.

I miss out, too, as an analyst. Of course it’s possible to listen to groups via a video or audio link. I’ve done it many times when it’s too dangerous to travel or when observer facilities are improvised. But I miss the dynamic of the “back room” where I can check and crosscheck ideas with my local colleagues and moderators. I miss hearing them react to participants’ thoughts and the context they give. They make me a better analyst and project designer.

In-depth Interviews Work Well Online

Here’s one positive thing I’ve learned. Online platforms like Zoom facilitate near-simultaneous translation. This has made it easier for me to personally conduct in-depth interviews with non-English speakers. I wouldn’t do it for all projects. But for program assessments or KIIs, I can dig more deeply and formulate follow-ups that synthesize what I’ve heard respondents tell me. It’s also fun to interact with new and interesting people while I am isolated at home.

Relationships Matter

I recognize that waiting out a pandemic in the Côte d’Azur does not engender much sympathy

I received an email from a colleague in Georgia thanking me for a referral. He lamented the current situation and let me know that his firm has managed to hang on. I appreciated hearing the update. Having spent 14 years building these relationships, I miss them. I’ve skipped my usual trips to Washington, New York and London. The blocks of time I would have spent in Ukraine, Bangladesh, Iraq or Turkey I’ve instead spent in my apartment, staring at my monitor. The isolation has been one of the hardest parts of the last year. My clients and partners are my friends and are the best part of my job. Listening, having meals and drinks and sharing long car rides with them ground my insights in real life. Zoom can’t replace this loss.

As soon as I am vaccinated, I’ll get back on the horse. I’m looking forward to seeing as many of you out there as possible. Contact me if you’re looking for help navigating this complicated new world. Wishing everyone a more engaging, more productive 2021.


Six months into socially distant qualitative research and what have we learned? We’ve learned it’s hard. Online focus groups have a lot of flaws, many of which cannot be mitigated, especially in lower- and middle-income countries. But, like a lot of annoying things these days, they aren’t going away. We have to figure out how to collect the best data we can from them.

The default approach cannot be to transfer in-person groups to an online platform. Here are some useful questions to ask during the project design phase:

  • How will the quality of the data collected about this topic be affected by the limitations of the platform?
  • Who is being excluded from this project because they lack the technological capacity to participate? How will that affect the strategy that will be based on the research?
  • Can we collect the data we need in the amount of time we have, given shorter and smaller groups?
  • How can we take advantage of the fact that, logistically speaking, geography doesn’t matter anymore?

I’ve learned these lessons in the last few months.

Groups Must Be Shorter

Lengthy Zoom interactions are known to be enervating for everyone. It is much easier for online participants to become distracted, lose the thread of the conversation or leave the group entirely. Disengaged, bored participants – already a problem with many public policy topics — result in poor data. Making things worse, online moderators lack many of the usual tools they use to engage in-person participants.

Groups Must Be Smaller

Think of how you behave on a big Zoom call. Can you keep track of who is saying what every minute? Can you listen deeply without distraction so you can probe meaningfully on what a person is trying to communicate without the benefit of body language? Do you tune out and check Twitter? Limiting groups to five to seven participants makes life easier for moderators and less boring for participants who have to wait their turns to speak.

Know Who is Excluded

Online groups require participants to have a broadband or 4G internet connection, probably a tablet or monitor and the savvy to know how to use them. Many people in lots of countries don’t have these. Know who your platform excludes, then decide if that’s acceptable according to the goals of the project. Younger, wealthier, urban and tech savvy voters already tend to be overrepresented in public policy discussions. Platforms that by design exclude their older, rural and less connected counterparts exacerbate this dynamic. This is less of a concern in high- and middle-income countries. It is a serious concern in low-income countries or those where digital access is unevenly distributed.

A Quality Recruit Is More Important than Ever

With a smaller group, each voice carries greater weight. That’s why everyone has to be qualified. Spend more time on screener design and prescreening to ensure every participant is the exact person you need.

Prepare for Technical Problems

Resolving technical issues takes up valuable time, distracts participants and the moderator and disrupts the flow of the discussion. Be prepared to redo groups plagued by technical problems. 

The Platform Exacerbates Moderator Flaws

Moderators who are poor at fostering a group dynamic or controlling dominators in person will struggle to do so online. If you rely on the same moderators, investing in a course for them to update their skills could benefit them and you.

Topics that Are Illuminated by Group Dynamics Are Less Well Suited to Online

Idea generation, group dynamics and the evolution of opinion as respondents learn from each other are what make in-person discussions so valuable for developing insightful strategies. Online participants are more likely to act as disconnected individuals responding to moderator-presented stimuli than as a group that responds to the ideas and thoughts of other participants. For a researcher who works on sensitive political and social topics, this is a huge loss. The most revealing groups are the ones where participants struggle with a complex topic and engage actively with prompts from the moderator and the views of their fellow participants.

The “Back Room” Doesn’t Exist

It’s more difficult to guide the reactions of inexperienced observers. It’s also harder to seek clarifications, answer immediate questions or make alterations on the fly among staff. And I miss hearing local colleagues’ reactions to the discussion while watching online groups; their insights improve my analyses.

Time and Space Don’t Matter

If your moderators, field team and observers don’t have to travel, you have more flexibility on logistics, scheduling and geography. You also don’t have to coordinate and travel to remote regions to hear rural perspectives. Even groups that must be held in a specific area can be more geographically diverse when travel times, public transport and traffic considerations don’t matter. Homebound respondents might have more flexibility, allowing for a broader range of times for scheduling groups.

Have concerns about conducting online groups? Wonder if they are appropriate for your project and country? Contact QGS. We can help.

                            We can’t just move in-person groups online


With traditional focus groups off the table during physical distancing, qualitative researchers face options that may require too many tradeoffs to make the effort worth the costs. The challenges for qualitative research could be even greater than those facing quant, particularly for social or opinion research projects.

If you’re interested in challenges facing quantitative researchers, check out my last two posts.

What Are My Options?

If your qualitative needs can be met with chatroom-style online focus groups, you’re in good shape. They continue to be a solid technique for ad and message testing with larger audiences and for mixing qual and quant methodologies. Recruiters report that homebound respondents have the time and inclination to participate in a variety of formats. This is good news.

No FGDs for now, whether in a room or under a tree

Focus group discussions present a greater challenge, however. The most valuable part of a focus group is the personal interaction between participants that an engaged moderator can elicit through thoughtful probing and skillful management of group dynamics. If the success of your project depends on getting beyond top-of-mind responses, generating new ideas, and exploring feelings about abstract concepts, in-person focus groups moved onto Zoom or another proprietary platform fall short. I worry that so much illuminating data will be distorted or lost that it’s worth asking “what’s the point?”

Do You Enjoy Zoom Meetings? I Don’t

Think of your last Zoom(s) with eight friends or colleagues. Were you satisfied with the quality of interaction with others on the call? Did you enjoy it? How engaged were you with the topic? After more than two months of physical distancing, Zoom interactions are proving to be, at best, unsatisfying and, at worst, exhausting. There are good reasons for this.

Axios technology editor Scott Rosenberg articulated my misgivings about video-conferencing as a qualitative tool.  Here’s what he writes:

Videoconferencing imposes cognitive and psychological frictions and aggravates social anxieties. As experts in human-computer interaction point out, using Zoom means putting on a show for others without being able to rely on the cues we primates depend on in physical encounters.

  • There’s usually a slight audio lag, as well as mute-button mistakes and “your internet connection is unstable”-style dropouts.
  • We’re also often opening a chunk of our homes for others to view, and that can trigger social worries.
  • By showing us our own image as well as others’, Zoom ensures that we will critique ourselves in real time.
  • On top of standard-grade performance anxiety, the “big face” image that Zoom uses by default in its “speaker view” can trigger a “fight-or-flight” surge of adrenaline, writes Jeremy Bailenson, founding director of Stanford’s Virtual Human Interaction Lab.
  • If you switch to the “Hollywood Squares”-style “gallery view,” you’re confronted with a sea of separated faces, which is not how evolution has adapted us to group interactions.
  • As L. M. Sacasas observes, you can’t really achieve true eye contact with anyone: If you look right into someone else’s eyes, you will appear to them as if you aren’t looking right at them — to achieve that, you have to look right at the camera.
  • Nonetheless, the whole experience of a videoconference feels like an extended bout of direct staring at other people staring back at us. That’s draining, and it’s not what actually happens when we meet in person, where we only occasionally look right at one another.

How will participants react when deprived of the visual and social cues that help them interpret the reactions of others? Will feeling uncertain, frustrated, insecure or tired influence their opinions, particularly on sensitive or political topics? How will these dynamics shape strategy built on the data? Additionally, Rosenberg’s analysis is helpful for contextualizing the reactions of citizens of high-income countries who are comfortable interacting with technology. How will citizens of low- or middle-income countries, or those with different cultural expectations for social interaction, respond? I don’t know if we know the answers to these questions.

Traditional focus groups have many biases, all of which are well known and factored into qualitative analysis. We know people respond to questions differently in a group setting than they would alone. Online video-conferencing multiplies these biases and adds others that we’ve barely begun to understand. Not only are there different biases to adapt to, the medium itself diminishes the greatest advantage of a focus group: group/moderator interaction.

So What Can We Do?

Qualitative research can’t grind to a halt because of physical distancing, so we have to come up with ways to mitigate these problems. Here are some suggestions.

  • Assess whether videoconferencing is the right tool for the job. Does the topic demand strong rapport between moderator and participants to explore controversial social topics? If so, your findings may lack depth, mislead or miss important insights entirely.
  • Write a shorter, simpler moderator’s guide. This medium is not appropriate for a two-hour discussion on constitutional reforms.
  • Recruit fewer participants. Aim for quality of responses, rather than quantity.
  • Use a moderator well-trained and experienced in managing the complicated dynamics of this type of discussion. A high-energy, animated moderator might be better able to engage remote participants than one with a more low-key personality.
  • Manage client expectations. Video-conference groups and in-person groups are not interchangeable. They will provide different kinds of data. Clients who expect traditional focus group data are likely to be disappointed.
  • To avoid the impulse to compare data collected in a traditional focus group with data from a video-conference, start fresh with a new project.
  • Be highly cognizant of data privacy protocols. Privacy laws apply more stringently to shared and stored video.

It’s going to be hard to convince me that video-conference focus groups, despite being absolutely possible, are advisable for all projects. This is particularly true for social and opinion research projects. All researchers have to adapt to our changing environment. The first question before launching a project should always be “what is my research goal and what tools can I use to reach it?” Then you can decide if having some data is better than having no data. Contact Quirk Global Strategies and we can help you decide.


Everything is up in the air! No one knows what’s going to happen next! At Quirk Global Strategies, we’re used to unpredictability.

We’ve developed strategies to manage it. No matter where in the world we’re working, every project starts with a discussion of the research goals. What questions need to be answered? By whom? How will the research be used? When? Answering these basic questions lays the foundation for the project and makes it easier to answer more complicated design questions down the road. This process helps us adjust our research plans to changing circumstances.

A meter apart, ladies

All survey research is a snapshot in time. Using it to predict views in the future is always a mistake. In the Coronavirus era, when the long term social, health and economic impacts have yet to fully hit, trying to use today’s data to predict what people will be thinking in months, or even weeks, is a waste of money.

Views on issues can be harder to shift than you’d think. As a political pollster in the US in September 2001, I assumed that the 9/11 attack was the kind of event that would radically shift perceptions on political topics. After the initial feelings of fear and insecurity wore off, pollsters found that voters’ priorities for elected officials and party preferences had changed little. This is the closest analogy in my political career to what we’re facing now. The economic and social impacts of COVID-19 will likely be much more far-reaching than 9/11’s. But right now, we just don’t know.

Once you decide on the goals of your research, ask yourself these questions:

Do You Need to Know Now?

If you need to understand how the current environment is shaping public opinion on a policy question today, or if you need to know how consumers have adapted their behavior to physical distancing, you should consider moving forward. If you’re planning to launch an advocacy campaign in six months or a year, you should probably wait, especially if you have limited resources to change your strategy or go in the field again.

Do You Have the Resources to Respond to the Findings?

There’s nothing worse than binning a costly research project because the landscape has shifted, rendering your data useless. Can your plan be changed if the data reveal something unexpected or counterintuitive? If the answer is no, you should wait. Good research often reveals such findings, particularly in an uncertain environment. It’s wasted if you can’t incorporate it.

Do You Have the Resources to Poll Again?

For many campaigns, knowing what people are thinking right now, at the beginning of the crisis, is critical information. A snapshot of attitudes in Spring 2020 will provide a baseline for tracking attitudinal shifts later in this year and in the years to come. It might also reflect the “worst case scenario” for your issue, which is useful to know when preparing a campaign. Depending on your timeline and your goal, you will probably need to poll again to update your assumptions. Add that to your budget.

Is Some Data Better Than No Data?

In the difficult environments where QGS typically works, we don’t let the perfect be the enemy of the good. We always operate under budgetary, security and time restrictions that force us to make concessions. But since we fully understand the goals of each project, we know which methodological trade-offs we can live with and which we cannot.  We adjust our sampling plan to adapt to realities on the ground, report our methodology transparently and adjust our analysis and strategic recommendations accordingly. If you need actionable data, even if it’s not perfect, you have more flexibility in your data collection options.

Who Do You Need to Talk to?

The answer to this question will help you decide which data collection mode is optimal, given your research goal.

None of this, sorry

If you need a large, proportionate, general population sample in a country where the only way to collect data is via face-to-face interviews, you’ll need to wait. Face-to-face interviews and traditional focus groups are simply off the table right now. Sadly, it’s impossible to predict when interviewers will be able to go out in the field and discussants can sit around a table talking to a live moderator.

The good news is telephone survey research to mobiles and landlines is thriving in Europe, the Gulf and North America. Interviewers stationed safely at home are calling voter file or RDD samples, like always. There are even anecdotal reports of marginal response rate improvements. If your universe has high mobile/landline penetration, phone surveys remain the best way to collect a random sample of a general population universe.
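
For anyone who hasn’t seen the guts of it, RDD just means generating numbers at random so unlisted phones have the same chance of selection as listed ones. Here’s a toy sketch, assuming a North American-style numbering plan; the area code/exchange pairs are invented, and real vendors draw from known working blocks and then screen the dialed numbers.

```python
import random

# Invented area code / exchange pairs standing in for known working blocks.
WORKING_BLOCKS = [("212", "555"), ("305", "416"), ("503", "228")]

def rdd_sample(k, seed=42):
    """Generate k phone numbers by appending random last-four digits."""
    rng = random.Random(seed)
    return ["({}) {}-{:04d}".format(*rng.choice(WORKING_BLOCKS),
                                    rng.randrange(10000))
            for _ in range(k)]

for number in rdd_sample(5):
    print(number)
```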

Data collectors in many middle- and low-income countries are on the cusp of being able to field random sample mobile surveys, particularly of urban populations. Before we commit to this approach, however, we need a full understanding of which groups are underrepresented (usually older, rural, lower SES) and which are overrepresented (younger, urban, higher SES) in these samples. We pay particular attention to the gender split: In some places it’s easier to interview women on the phone than in person. In others, men control access to the mobile phone. Then we decide, based on the goals of the research, if we can live with the trade-off.

Non-Probability Options Abound

If you’re interested in non-random views of urban, younger, educated, higher-SES respondents with mobile phones or internet access, there is no shortage of methodologies available. This is particularly true in high- and middle-income countries, but these populations are accessible via panels even in many low-income countries. Methodologies such as online surveys, SMS surveys, IVR and social media analytics can also be combined to give a richer, more contextual view of the landscape. Keep in mind, these modes sacrifice randomness and are not a substitute for a proportional sample. Review the research goal, then decide if it matters.

Quirk Global Strategies Can Help

So should you poll now? Return to the goal of your research. Look at your budget and decide which trade-offs you can tolerate and the ones you cannot. Email us and we can help you think through your options and suggest the best one. We might even suggest waiting.