
As we continue to shift towards audience-centric marketing, columnist Christi Olson of Bing notes that we can create more effective remarketing campaigns by asking the right questions about our searchers.

Search as we know it is changing, with keywords and match types giving way to a more audience-powered approach. It’s a transition that has been slowly coming, but now that remarketing and remarketing lists for search ads (RLSA) are available on Bing and Google, search marketers can no longer afford to ignore audience-based buying.

In the new search world order, searching for searchers will increasingly be a part of every successful marketer’s integrated search strategy.

Welcome to the new world of search

In the early days of search, keywords and match types were the main levers search advertisers used to find customers. Keywords allowed us to reach the consumers who were searching for our products and services, while match types allowed the query-to-keyword relationship to be more or less relevant, a kind of volume and relevance throttle.

Today, audiences enable advertisers to target the right message to the right person — at potentially the right time — in a way that keywords cannot. Keywords can give you intent and interest levels, but search is now on the cusp of something greater: the ability to create campaigns to specifically meet customers, wherever they are.

Just as exciting, we can use audiences to help us stop wasting digital marketing spend… and those audiences don’t have to be limited to users who have engaged with us from a search standpoint.

Could all search campaigns be remarketing campaigns?

I’ve been noodling on the idea for a while that all campaigns are remarketing campaigns. You might disagree with me, especially since Bing only allows a -90-percent bid modifier. But… a -90-percent bid modifier is still fairly close to creating an exclusion or a negative campaign.

Why is this important? It gives you the ability to segment your customers, adjust your bid strategy to reduce acquisition costs and adjust your messaging based on the audience segment.
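
To make that concrete, here is a minimal sketch (plain Python for illustration, not any ad platform's API, and the bid amounts are made up) of how an audience bid modifier changes the effective bid, and why a -90-percent modifier behaves almost like an exclusion:

# Minimal illustration: how an audience bid modifier changes the effective bid.
# The base bid and modifiers are made-up numbers, not real campaign settings.
def effective_bid(base_bid, modifier_pct):
    # Apply an audience bid modifier (e.g. -90 or +50 percent) to a base keyword bid.
    return base_bid * (1 + modifier_pct / 100.0)

base = 2.50  # hypothetical base max CPC, in dollars
print(round(effective_bid(base, -90), 2))  # 0.25 -- close to not bidding on this audience at all
print(round(effective_bid(base, 50), 2))   # 3.75 -- boosted bid for a high-value audience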

Consider this scenario:

In the paid search brand campaigns I managed, I noticed that over time, my CPAs were steadily increasing. Using analytics to investigate, I found that there were a lot of return visitors on our brand keywords. I was paying to re-engage existing customers who, instead of navigating through organic links or going directly to the website, were clicking on my paid search ads to reach the site or grab a specific offer/deal.

This, in conjunction with more competition bidding on my brand keywords, was causing my CPCs and my CPA to increase. My goal was to decrease my CPA and CPC and target net new customers to increase our overall awareness.

I decided to segment the brand campaign into two groups:

Engaged Visitors. Site visitors from the last 30 days who didn’t bounce right away, purchasers, visitors who touched other high-cost channels.

Net-new or Low Engagement Visitors. Visitors who haven’t been to the site in more than 30 days, visitors who bounced within x seconds in the last 30 days and people who haven’t been to my site.

Each group had different bid strategies and messaging.

With the Engaged Visitor segment, I reduced my bids, allowing my ads to fall into a lower position, knowing that I ranked well organically. I also adjusted the messaging shown to existing customers so that it no longer promoted discounts/sales.

For the Net-new and Low Engagement Visitors, I did the inverse, increasing bids to make sure I was in prominent positioning with value-based customer messaging.

Making these adjustments, I was able to decrease my CPA for existing customers. And by focusing less on discount or promotional messaging to existing customers, I wasn’t paying to reacquire them every time they wanted to make a transaction. Instead, I could focus on building a new customer base that had a higher lifetime value to my client’s business.
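
As a rough illustration of that setup, the sketch below models the two segments as simple configuration. The segment rules, bid modifiers and messaging notes are hypothetical stand-ins for the real campaign settings, not an export of them or any ad platform's API:

# Illustrative sketch of the two-segment brand campaign described above. The
# segment rules, bid modifiers and messaging notes are hypothetical stand-ins,
# not the actual campaign settings or any ad platform's API.
SEGMENTS = {
    "engaged_visitors": {
        "rule": "visited in last 30 days, purchased, or touched a high-cost channel",
        "bid_modifier_pct": -60,   # bid down; the brand already ranks well organically
        "messaging": "product/service value, no discount or sale copy",
    },
    "net_new_or_low_engagement": {
        "rule": "no visit in 30+ days, bounced quickly, or never visited",
        "bid_modifier_pct": +30,   # bid up for prominent positioning
        "messaging": "value-based acquisition copy",
    },
}

def plan_for(segment, base_bid):
    cfg = SEGMENTS[segment]
    return {"effective_bid": round(base_bid * (1 + cfg["bid_modifier_pct"] / 100), 2),
            "messaging": cfg["messaging"]}

print(plan_for("engaged_visitors", 2.00))           # lower bid, no promo copy
print(plan_for("net_new_or_low_engagement", 2.00))  # higher bid, value messaging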

Asking the right questions

I was able to use remarketing because I started to think more strategically about how I was targeting different customer segments.

Think about what other questions you can ask to segment out consumers and what you might do differently in terms of bidding, targeted keywords (head vs. tail), overall messaging (ad copy, ad extensions) and user experience. Learn to ask the right questions so you can develop remarketing strategies that align with your business goals.

Ask questions like:

  1.     Would you create a different user experience for new vs. existing customers?
  2.     Has a customer been to your website previously?
  3.     Have they engaged through other high-cost channels?
  4.     Have they engaged multiple times across multiple marketing channels?

If you are strategic and smart about the questions you ask, you might change your perspective about how you use audiences and RLSA to make your search campaigns more effective.

Be customer-obsessed

There are a million ways to segment your search campaigns based on audiences, and they all lead to better experiences for your customers. By using audiences to segment users and create custom messaging and experiences for each segment, you can also dramatically increase the scale of your search marketing campaigns.

Of course, there is a cost associated with managing this; but in most cases, changing your bid strategies or re-attracting and engaging with consumers who are more likely to convert will lead to both campaign spend savings and higher-value relationships with your customers.

Mind blown? It’s because you’re searching for an audience that is using keywords, not just keywords themselves. The new world of search means putting the customer (audience) first and trying to create a great user experience specifically for them.

Source:  http://searchengineland.com/searching-for-searchers-audiences-are-the-new-keywords-247757


A Recently Published Responsive Management Journal Article Outlines Why Online Surveys Continue to Yield Inaccurate, Unreliable, and Biased Data

 

INTERNET OR ONLINE SURVEYS have become a popular and attractive way to measure opinions and attitudes of the general population and more specific groups within the general population. Although online surveys may seem to be more economical and easier to administer than traditional survey research methods, they pose several problems to obtaining scientifically valid and accurate results. A peer-reviewed article by Responsive Management staff published in the January-February 2010 issue of Human Dimensions of Wildlife details the specific issues surrounding the use of online surveys in human dimensions research. Reprints of the article can be ordered here. Responsive Management would like to thank Jerry Vaske of Colorado State University for his assistance with the Human Dimensions article and for granting us permission to distribute this popularized version of the article.

Mark Damian Duda
Executive Director

 

Background

 

Natural resource and outdoor recreation professionals have found that gathering information through public opinion and attitude survey research gives them a precise and useful picture of what their organization's constituents think, need, and expect of them. Armed with this valuable information, they have been able to meet the future with organizational planning that is based on insight and knowledge obtained through scientifically valid, unbiased research methods.

It's a fact that conducting such research costs money. And in the current financial climate, with budgets being cut and uncertainty regarding what the future holds, it makes sense for natural resource and outdoor recreation organizations to look for new ways to save money.

 

Online surveys are increasingly popular as an information-gathering tool. More and more online marketing companies offer online surveys at seemingly reasonable rates. Online surveys appear to be a great idea at first blush: they can be set up and administered in-house or contracted out, save time and money, and provide immediate results. But are online surveys a good idea? With few exceptions -- the main one being employee surveys where every single employee has access to the Internet -- for purposes of collecting scientifically valid, accurate, and legally defensible data, the answer at this time is no. Recent research conducted by Responsive Management and published in the peer-reviewed journal Human Dimensions of Wildlife shows that online surveys can produce inaccurate, unreliable, and biased data. There are four main reasons for this: sample validity, non-response bias, stakeholder bias, and unverified respondents.


Sample Validity

 

For a study to be unbiased, every member of the population under study must have an equal chance of participating. When all members of the population under study have an equal likelihood of participating, probability sampling comes into play, and a relatively small sample size can yield results that accurately represent the entire population being studied.
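
To make "equal chance of participating" concrete, here is a minimal sketch of simple random sampling, the most basic form of probability sampling. The population and sample size are made-up numbers used only for illustration:

import random

# Minimal sketch of simple random sampling: every member of the sampling frame
# has an equal probability of selection (here, 400 / 100,000 = 0.4%).
# The frame and sample size are made-up numbers for illustration.
population = [f"member_{i}" for i in range(100_000)]   # a complete sampling frame
sample = random.sample(population, k=400)              # equal-probability draw

print(len(sample), "respondents, each selected with probability",
      400 / len(population))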

 

For the most part, Internet surveys at this time cannot accomplish this, because there is no such thing as a representative sample of email addresses for various populations, including the general population and its subpopulations, such as registered voters, park visitors, or hunters and anglers. No "master list" of email addresses for any of these groups exists -- not all people within these populations have an email address or access to the Internet. One exception is an online survey of a closed population in which every member of that population has a verified email address and Internet access. An internal survey of an organization in which all potential respondents are known and have guaranteed Internet access, usually through their workplace, is an example of this. Responsive Management has conducted this type of study (mainly employee surveys) for natural resource agencies in the past and has obtained results with scientifically valid sampling methodologies to back up study findings.

 

When online surveys are accessible to anyone who visits a website, the researcher has no control over sample selection. These self-selected opinion polls result in a sample of people who decide to take the survey -- not a sample of scientifically selected respondents who represent the larger population. In this situation online survey results are biased because people who just happen to visit the website, people who are persuaded with a monetary or other incentive to sign up for the survey, people who have a vested interest in the survey results and want to influence them in a certain way, and people who are driven to the site by others are included in the sample. This results in a double bias, because this distortion is in addition to the basic sample already having excluded people who do not have Internet access.

Having access to a valid sample is the foundation for collecting data that truly represent the population being studied. Without a valid sample, every bit of data obtained thereafter is called into question.

 

Non-Response Bias

 

Non-response bias in online surveys is complicated by the most egregious form of self-selection. People who respond to a request to complete an online survey are likely to be more interested in or enthusiastic about the topic and therefore more willing to complete the survey, which biases the results. In fact, the very nature of the Internet, as an information-seeking tool, contributes to this form of bias. For example, if someone who is interested in the subject matter of a survey uses a search engine, such as Google, to seek out information on the subject, that person is more likely to find an online survey on that topic. In this way, more people with a heightened interest in the topic are driven to the online survey.

With a telephone survey, people are contacted who are not necessarily interested in the topic, and if they are not enthusiastic about completing the survey, a trained interviewer can encourage them to do so despite their disinterest, leading to results that represent the whole population being studied, not just those with an interest in the subject.

Another contributor to non-response bias in online surveys is spam and unsolicited mail filters. Users can set the degree of message filtering, and if the tolerance is set strictly enough, they may not even see a request to participate in an online survey because the filter will automatically "trash" the email request when it is delivered. This removes these individuals from the possibility of receiving an invitation to participate in an online survey.

Potential respondents to an email request to participate in an online survey may also have more than one email address. It is impossible to know which address is the primary one for an individual, or even whether the person checks the account regularly for incoming mail.

 

Stakeholder Bias

 

Unless specific technical steps are taken with the survey to prevent it, people who have a vested interest in survey results can complete an online survey multiple times and urge others to complete the survey in order to influence the results. This is a common occurrence, especially regarding issues that elicit high levels of concern, such as, in the fish and wildlife context, when an agency wants to measure opinions on proposed regulation changes. Some Internet-savvy individuals have even written automated programs that repeatedly cast votes to influence a poll's results.

Even when safeguards against multiple responses are implemented, there are ways to work around them. If there is a protocol in place that limits survey completions to one per email address, it's easy to go online and open a new email account with a new address and then complete another survey through that email address. If access is limited to one survey completion per computer, completing another survey can be done on a separate computer, at a friend's home, in the workplace, or in a public library, for example. And in the case of online surveys where individuals have to sign up in order to participate, they can sign up under multiple names and email addresses and participate multiple times through each of those email addresses.

 

Unverified Respondents

 


Because of the inability to control who has access to online surveys, there is no way to verify who responds to them -- who they are, their demographic background, their location, and so on. As stated earlier, even when safeguards are implemented to control access to online surveys, there are multiple ways to circumvent those safeguards.

 

A complicating issue is when an organization offers incentives for completing online surveys. Whether it's a chance to win a prize, discounts on purchases, a gift certificate, or some other benefit, offering an incentive without having close control over the sample simply encourages multiple responses from a single person. If someone has a strong desire to win the item, he or she can find ways around any safeguards against multiple responses and complete several surveys, thereby increasing his or her chances of winning the item.

Examples

Three recent collaborative projects with state fish and wildlife agencies gave Responsive Management an opportunity to compare the results of online versus scientific telephone surveys within the same study topics.

 

North Carolina Sunday Hunting Study

 

Sunday hunting has been a controversial issue in North Carolina, with strong feelings among both supporters and opponents. To better understand the issue, the North Carolina Wildlife Resources Commission (NCWRC), Virginia Tech, and Responsive Management collaborated on a study to assess public opinion on Sunday hunting. The study consisted of an online opinion poll, a telephone survey, and an economic analysis.

The online poll was placed on the NCWRC website to elicit feedback on support for or opposition to Sunday hunting; it was developed primarily as an outlet for people who wanted to be heard. At the same time, a scientific telephone survey was conducted by Responsive Management, Virginia Tech, and the NCWRC.

The results of the two surveys were markedly different. The online poll showed that 55% of respondents supported Sunday hunting, whereas 43% opposed it, and 2% had no clear opinion. The telephone survey showed that 25% of respondents supported Sunday hunting, whereas 65% opposed, and 10% had no clear opinion. These differences are well outside of any acceptable margin of error for a valid study. 

 

The telephone survey, because it used a randomly generated sample of North Carolina residents, accurately reflected the opinions of the population as a whole. Because more than 1,000 individuals were interviewed, the sampling error was at most plus or minus 2.815 percentage points.
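
As a rough check of that figure (assuming a 95-percent confidence level and the conservative p = 0.5, neither of which is stated explicitly in the article), the quoted sampling error is consistent with roughly 1,200 completed interviews:

import math

# Rough check of the quoted sampling error, assuming a 95% confidence level and
# the conservative p = 0.5 (neither assumption is stated in the article).
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1_000), 3))  # ~3.10 points for n = 1,000
print(round(100 * margin_of_error(1_212), 3))  # ~2.815 points, matching the text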

 

Far more people in the telephone survey of North Carolina residents opposed Sunday hunting than in the online poll. Only 25% of the telephone respondents supported Sunday hunting, whereas 55% of those who responded to the online poll supported it. The telephone survey also found five times as many people with no clear opinion on Sunday hunting as the online poll did. This indicates that far more people with a vested interest in the results completed the online poll; when the general population was scientifically surveyed, a truer picture of how many North Carolinians had no clear opinion emerged. In short, had the NCWRC gone with the online poll results, it would have gotten an inaccurate read on what the public was thinking regarding Sunday hunting in the state.

"While I was not surprised that there were differences between the online interface and telephone survey results, given that the telephone survey used probability sampling and anyone who chose to could give their opinion online, I was somewhat surprised at the size of these differences," said Dain Palmer, Human Dimensions Biologist at the NCWRC.

 

Arizona Big Game Hunt Permit Tag Draw Study

 

In 2006 the Arizona Game and Fish Department (AZGFD) conducted an online survey to assess hunter attitudes toward the Arizona Big Game Hunt Permit Tag Draw, a topic with a high degree of interest to Arizona hunters. When the data collection for the online survey was completed, the AZGFD had doubts about its accuracy and worked with Responsive Management to conduct a non-response bias analysis. A telephone survey of the online survey non-respondents was conducted to assess non-response bias. In other words, those who were contacted by email but who did not respond were contacted by telephone and interviewed.

 

For the online survey, a link to the survey site was distributed by email to individuals who had provided an email address when applying for the 2006 Fall Big Game Draw. Duplicate and invalid email addresses were removed, and the survey was sent to a total of almost 60,000 Fall Big Game Draw applicants.

 

The online survey included a unique website address for each email address, which "closed" the survey to that respondent once he or she completed it. This ensured that multiple responses from a single email address did not occur and that a response from a specific email address could be tracked if necessary. For the telephone survey, people who did not respond to the email request were contacted and interviewed.
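
The mechanism is conceptually simple. The sketch below illustrates that kind of one-completion-per-address control; it is not the survey software actually used, and the URL and function names are hypothetical:

import uuid

# Illustrative sketch (not the actual survey software): issue one unique survey
# URL per email address, and mark it closed after the first completion so a
# second submission from the same address is rejected. The base URL is made up.
tokens = {}        # token -> email address
completed = set()  # tokens that have already been used

def issue_link(email, base_url="https://survey.example.org/s/"):
    token = uuid.uuid4().hex
    tokens[token] = email
    return base_url + token

def submit(token):
    if token not in tokens or token in completed:
        return False          # unknown link or already completed: reject
    completed.add(token)      # close the survey for this respondent
    return True

link = issue_link("applicant@example.com")
token = link.rsplit("/", 1)[-1]
print(submit(token))  # True  -- first completion accepted
print(submit(token))  # False -- duplicate attempt rejected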

 

Responsive Management analyzed those who responded to the survey and those who did not and identified several statistically significant differences between the groups. Of the 766 variables analyzed in the study, differences for 312 variables -- 41% of the variables analyzed -- were statistically significant. This means that, on roughly four out of every ten variables where those who responded to the online survey were compared to those who did not respond, there was a meaningful difference between how they answered the same question.
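
The underlying comparison is a standard test of whether respondents and non-respondents answer the same question differently. A minimal sketch of one such comparison, a two-proportion z-test run on made-up counts rather than the study's actual data, looks like this:

import math

# Minimal sketch of the kind of comparison behind the 312 significant variables:
# a two-proportion z-test on one binary question, with made-up counts
# (these are NOT the study's data).
def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: 46% of online respondents vs. 38% of telephoned
# non-respondents agree with some statement.
z = two_proportion_z(460, 1000, 228, 600)
print(round(z, 2), "-> significant at the 0.05 level" if abs(z) > 1.96 else "")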

 


 

Why are these differences a problem? Simply because they exist. If both of these surveys were representative of the population group under study -- Arizona hunters who applied for the 2006 Fall Big Game Draw and provided an email address -- there would be no statistically significant differences between how the people who responded to the email request answered the questions and how those who did not respond to the request answered the questions. (This bias is in addition to the basic bias of omitting people who did not provide an email address when applying, as described in more detail in the South Carolina study discussed below.)

 

"Our initial reaction to the big game hunt permit study was that it validated what we had been hearing anecdotally for a long time from the general hunting community," said Ty Gray, an Assistant Director with the AZGFD. "Specifically, getting to go hunting or getting a permit tag were very important factors which both groups (Web and phone respondents) agreed on. However, as we started to look closer at some of the other variables, we saw that there were differences that indicated some bias with the online survey -- among those was who was more likely to respond to it."

 

Again, if this were a valid sample to begin with, there would be no statistically significant differences between these two groups. In short, there were major differences in responses, with the online survey providing biased and inaccurate data.

"Game and Fish commissioners regularly have to make important decisions under extreme pressure from special interest groups," said Bob Hernbrode, former chairman of the Arizona State Fish and Game Commission. "Valid social science such as this Arizona study often suggests significantly different outcomes than special interest input would suggest. We need to understand the potential of poorly designed studies and such things as non-response bias."

 

South Carolina Saltwater Fishing and Shellfishing Study

 

In 2009, Responsive Management and the South Carolina Department of Natural Resources (SCDNR) collaborated on a survey to assess participation in and opinions on saltwater fishing and shellfishing in South Carolina and to better understand the accuracy and potential of online surveys. Two different methodologies were used: a scientific survey conducted by telephone and a survey conducted via the Internet. This study is a best-case scenario regarding online surveys because it involved a closed population -- people who obtained a South Carolina Saltwater Recreational Fisheries License. If online surveys could produce accurate data, this would be the study that would prove it.

 

The researchers were able to test this because they had a base sample -- the entire database of Saltwater Recreational Fisheries License holders, including demographic and geographic information for each license holder -- that could be compared to both the telephone and online survey results. When the two methodologies were compared, the telephone survey yielded results that accurately reflected the entire population, whereas the online survey did not. This is because the telephone survey included a greater proportion of the population under study than the online survey did.

The telephone survey sample was randomly drawn from the entire population of people who held a Saltwater Recreational Fisheries License; for license holders who did not provide a telephone number, their telephone number was identified by reverse lookup. Therefore, every license holder had an equal chance of being contacted by telephone to take part in the survey. The online survey used a sample consisting of people who held Saltwater Recreational Fisheries Licenses who provided an email address when they purchased their licenses. This systematically excluded license holders who did not have computer access and license holders who chose not to provide an email address. While one might think this is not important, the results showed otherwise. Because of the systematic exclusion of these license holders, the results of the online survey were inaccurate from the outset.

 


 

The information from the database indicated that, out of a total population of 103,000 license holders, the online survey had an original sample of approximately 16,100 license holders with email addresses, which produced 12,405 license holders in the sample after email addresses that were undeliverable were removed. Therefore, even before any contacts were made, the online survey had eliminated approximately 88% of the possible sample, and did so in a systematic way, which is the very definition of bias. In addition, there was a notable non-response bias: of the 12,405 license holders contacted by email, only 2,548, or 20.5%, responded to the online survey. These problems led to a double bias: first, the exclusion of people with no email address, and second, the exclusion of those who did not respond to the online survey.
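
The arithmetic behind those percentages can be laid out directly from the figures quoted above:

# Arithmetic behind the figures quoted above (all numbers from the article).
population  = 103_000   # total Saltwater Recreational Fisheries License holders
emails      = 16_100    # license holders who provided an email address
deliverable = 12_405    # after undeliverable addresses were removed
respondents = 2_548     # completed the online survey

excluded_share = 1 - deliverable / population
response_rate  = respondents / deliverable
print(f"{excluded_share:.0%} of license holders excluded before any contact")  # ~88%
print(f"{response_rate:.1%} response rate among those emailed")                # ~20.5%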

 

With a scientifically selected sample, reducing the sample size to this degree would not be a problem, because the smaller sample would be representative of the population as a whole -- the methodology used to select the sample from the total population being studied would ensure that this would be the case, within a demonstrable sampling error. But in the case of a sample that is not scientifically generated, reducing the sample size in this way simply would bias the results even more -- the more the sample is reduced, the more biased it becomes.

 

Because they had access to the database of all license holders, which included demographic and geographic information, Responsive Management statisticians were able to determine that, from the outset, the respondents who provided email addresses were different from the sample as a whole. If the online sample had been valid, there would have been no statistically significant differences between the two -- each sample would have been consistent with and representative of the population as a whole: the 103,000 license holders being studied.

 

When the online survey was completed and the data were analyzed, the online survey respondents were found to be, in general, a more educated and affluent group, and were also disproportionately male. Of particular note, 5.7% of the online survey sample was female, whereas 19.9% of the telephone sample was female; in reality, 18.5% of the license holder database -- the actual population of license holders -- was female. The telephone results were therefore much closer to the truth than the online results. In fact, the online results were so far off the mark that they would have led to highly inaccurate findings, because females were not represented in the proportion that they should have been.
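
The size of that gap can be quantified directly from the percentages reported above:

# Quantifying the representation gap for female license holders,
# using the percentages reported above.
actual    = 18.5   # % female in the full license-holder database
telephone = 19.9   # % female in the telephone sample
online    = 5.7    # % female in the online sample

print(f"telephone error: {abs(telephone - actual):.1f} points")  # 1.4 points
print(f"online error:    {abs(online - actual):.1f} points")     # 12.8 points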

 

"When we initially saw the differences between the online and telephone surveys, we were not too surprised that the results differed, simply due to the fact that only a small portion of license holders provide their email address on their saltwater fishing license application," said Julia Byrd of the SCDNR's Office of Fisheries Management. "Due to this, we thought the results of the online survey might be biased because certain demographic groups would be over- or underrepresented. This was shown in the results."

 

The Result

 

As a result of these problems, obtaining representative, unbiased, scientifically valid results from online surveys is not possible at this time, except in the case of the closed population surveys, such as with employee surveys, described earlier. This is because, from the outset, there is no such thing as a complete and valid sample -- some people are systematically excluded, which is the very definition of bias. In addition, there is no control over who completes the survey or how many times they complete the survey. These biases increase in a stepwise manner, starting out with the basic issue of excluding those without Internet access, then non-response bias, then stakeholder bias, then unverified respondents. As each of these becomes an issue, the data become farther and farther removed from being representative of the population as a whole.

 

For a more detailed look at these examples and more information on the drawbacks of online surveys in the context of human dimensions research, see Duda, M.D., & Nobile, J.L. (2010), "The Fallacy of Online Surveys: No Data Are Better Than Bad Data," Human Dimensions of Wildlife 15(1): 55-64. Reprints of the article can be ordered here. A printable version of this email newsletter can be downloaded here (877KB PDF).

Written By: Mark Damian Duda

Source: http://www.responsivemanagement.com/news_from/2010-05-04.htm 


Jenny Marlar of The Gallup Organization moderated this AAPOR Annual Conference session today on sampling and data quality concerns for online surveys.

The Performance of Different Calibration Models in Non-Probability Online Surveys

Julia Clark and Neale El-Dash of Ipsos Public Affairs looked at the calibration performance of online surveys conducted for the 2012 U.S. Presidential election. Julia opened by noting that average field time has declined from 5-6 days in the 1960s to less than a full day in 2012. The number of polls per month, from 1960 to 2013, has dramatically increased as well.

For the 2012 election cycle, Reuters asked Ipsos for interactive visuals, speed, within-day polling and very narrow granularity (Presidential approval among Iraq veterans, for instance). Ipsos did 163,000 online interviews over 12 months with a constantly changing questionnaire, with weekly tracking by demographics. Ipsos used its Ampario river sample, gathered from 300 non-panel partners with 22 million unique hits per year, to make the online survey work less like a panel.

Ipsos used Bayesian credibility intervals as an alternative to margin of error. Ipsos was among the top pollsters for accuracy, with its final poll giving Obama a 2-point lead. In fact, 2012 was a good cycle for online polling.

Post-election, Ipsos applied different calibration methods to see how accuracy could have been improved now that the final results were known. Neale discussed the original methodology, which used raking weights on demographic variables for voters and nonvoters based on CPS November 2009 results, weighting for age, gender, census division and more. Their Bayesian estimator combined their prior market average with the current sample estimate to obtain the final published estimate, with the actual weight assigned to the prior changing over time but, on average, worth 20%.
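
In its simplest form, that blending step is just a weighted average of the prior (the market average) and the current sample estimate. The actual Ipsos estimator is more elaborate, but the sketch below, using the roughly 20-percent average prior weight mentioned and made-up poll numbers, shows the basic idea:

# Minimal sketch of the blending step described above: a weighted average of the
# prior (market average) and the current sample estimate. The 0.2 prior weight is
# the average figure mentioned; the poll numbers themselves are made up.
def blended_estimate(prior, sample, prior_weight=0.2):
    return prior_weight * prior + (1 - prior_weight) * sample

print(blended_estimate(prior=48.0, sample=50.5))  # 50.0 -- pulled slightly toward the prior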

Ipsos looked at calibration models including no weights, demographics, demographics without race, demographics within states, demographics and party ID, demographics without race but with party ID, demographics with sample source optimized, and the Bayesian estimate. For gauging performance, they used the market average by week and then the final vote. Looking at the Democratic lead, the only weighting method that performed worse than no weights at all was weighting by demographics within states. The best performance came from weighting by demographics and party ID, with the Bayesian estimator in the top three models. The model underperformed for Hispanic respondents and for households with incomes under $30,000.

The Bayesian estimate performs well. It does a good job of optimizing on variance and bias, but a simpler weighting scheme might ultimately perform better. Online overall is still misrepresenting minorities and the less affluent; in a blended online sample, the sample source matters but the optimal mix has yet to be determined.

How Do Different Sampling Techniques Perform in a Web-Only Survey?

Ipek Bilgen of NORC at the University of Chicago looked at results from a comparison of a random-sample email blast to an address-based sampling (ABS) approach. With 71% of U.S. households using the Internet, new web-based sampling approaches are emerging; however, the online population is still younger, more educated, and of higher socioeconomic status (SES). NORC examined different sampling strategies to address this skew and to identify how response rates varied by sampling and incentive strategy. Survey results were benchmarked against the General Social Survey (GSS).

The study used four sampling methods: ABS, email blasts, Facebook and Google. The incentives were $2, $5 and $10 Amazon gift cards. For the email blast, the sample frame was an InfoUSA email address list in 12 strata (3 age groups by 4 regions); the invitation was followed by two reminders. The ABS sample frame used the USPS DSF (Delivery Sequence File) in 4 strata (geographic region); the invitation was followed by two follow-up letters, and respondents received a thank-you postcard.

The 21-question web survey used GSS questions for comparability and included demographic variables and substantive variables on computer and internet use. Results were calibrated by raking weights to the ACS on region, age, sex, etc.
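
Raking (iterative proportional fitting) adjusts each respondent's weight until the weighted sample matches known population margins one variable at a time. The sketch below runs the procedure on two made-up margins; it is illustrative only, not NORC's actual calibration to the ACS:

# Minimal sketch of raking (iterative proportional fitting) on two made-up
# margins (sex and age group). Illustrative only, not NORC's ACS calibration.
respondents = [
    {"sex": "F", "age": "18-44"}, {"sex": "F", "age": "45+"},
    {"sex": "M", "age": "18-44"}, {"sex": "M", "age": "18-44"},
    {"sex": "M", "age": "45+"},
]
targets = {"sex": {"F": 0.52, "M": 0.48}, "age": {"18-44": 0.45, "45+": 0.55}}
weights = [1.0] * len(respondents)

for _ in range(50):  # iterate until the weighted margins stabilize
    for var, target in targets.items():
        totals = {}
        for person, w in zip(respondents, weights):
            totals[person[var]] = totals.get(person[var], 0.0) + w
        grand_total = sum(totals.values())
        for i, person in enumerate(respondents):
            weights[i] *= target[person[var]] / (totals[person[var]] / grand_total)

print([round(w, 2) for w in weights])  # calibrated weights; they still sum to the sample size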

Of 100,000 emails, only 197 recipients took the survey, most likely because the invitation was flagged as spam at the cloud level, resulting in low delivery rates. The ABS approach yielded 750 responses from 10,000 mailings, and its response rate did increase with the higher incentives. The Internet sample underrepresented lower-education and lower-income groups and, unsurprisingly, overestimated Internet use at home and on mobile devices. Respondents were also more likely to get their news from the Internet than from TV, compared with the GSS benchmark.

The ABS approach is reaching a different web population than the email blast, so different sampling strategies might provide access to different segments of the online population.

Can We Effectively Sample From Social Media Sites?

Michael Stern of NORC then shared results from sampling via ads on Facebook and Google. He began by pointing out that mobile surveys might fit especially well with social media sampling, given the prevalence of social media use on smartphones. Most social media research is passive (scraping the sites), but this work was active, focusing on recruiting respondents from social media.

Two prominent examples of Facebook recruiting are Bhutta (2012) and Ramo and Prochaska (2012), both targeting low-incidence groups. The Google flu-prediction model overpredicted flu occurrence this year from analyzing searches related to flu.

When advertising on Google and Facebook, advertisers bid on clicks. To prevent people from taking the survey again and again, NORC used PINs and email addresses, but the system could still be spammed by people interested simply in earning the incentive. The click-through rate across 2 million people was 0.018%, which Facebook said was great for a survey. Among the different ad images tested, the paper-and-pencil survey image did better than more technological imagery. On Facebook, NORC was able to say that the incentive was an Amazon gift card, but Google prevented "Amazon" from being used.

Facebook ads were shown to the general network, but Google ads were targeted by keywords; NORC chose a selection of unrelated keywords. The majority of people who clicked through to the survey were young, but their answers were much more widely distributed. The results from the Facebook ad were fairly random when compared to the GSS, whereas there was a closer correspondence between the Google results and the GSS. The results were not weighted.

Google outperformed the Facebook ads on results, speed of response and cost: $12 per complete vs. $29 per complete. The Google ad was touted on Slickdeals.com, which led to spam responses.

How Far Have We Come?

J. Michael Dennis of GfK Knowledge Networks discussed the lingering digital divide and its impact on the representativeness of Internet surveys. Internet adoption has slowed. How do the characteristics of the offline population differ from those of the online population?

According to Pew, Internet access is 97% for households earning $75,000 and up and 94% for college graduates, but much lower for those with only a high school education. Has Internet penetration reached a point where the online population can adequately represent the U.S. general population? Given the persistence of the digital divide, should survey researchers still be concerned about sampling coverage issues?

GfK Knowledge Networks equips non-Internet households recruited through Address Based Sampling with netbooks and ISP payment. The survey compared a 3,000-person online-only general population sample with a 3,000-person sample that combined online and "offline" households; the data were weighted. Estimates for 5 of 15 demographic variables not used for weighting were statistically different between the two samples, with differences reaching 3.8 points. For 7 of 25 public affairs questions, estimates differed by an average of 1.4 percentage points (points, not percentage change). On health, 9 of 25 variables showed significant differences, ranging from a 4.5-point difference on being uninsured to a 4.3-point difference on wine consumption, and 11 of 15 technology variables were statistically different. The inclusion of non-Internet households impacts the relationships between variables as well.

This 2013 study updates a 2008 study, and the significant differences have persisted over time on behaviors such as recycling newspapers, recycling plastics and active participation, though gun ownership is now similar between the online and offline populations.

Despite growth in Internet penetration over the years, excluding non-Internet households can still lead to over- or under-estimations for individual variables and change the magnitude of the correlations between variables.

Respondent Validation Phase II

A United Sample (uSamp) presentation examined the validation of respondent identity in online panels. How do we know panelists are representing their identities truthfully online? When someone signs up for an online panel, how can you tell they are who they say they are?

Validation means using procedures to verify that people responding online are "real" people: collect Personally Identifiable Information (PII), compare it to national third-party databases, and categorize respondents accordingly.

uSamp conducted 7,200 surveys during the first two weeks of January 2011: 6,000 respondents provided name, address and birthdate, and 1,200 were unwilling to. Those who did not validate came from the hardest-to-reach demographic groups (good news) but were 50% more likely to provide conflicting data (bad news). Validation databases do a poorer job of tracking hard-to-reach demographic groups.

The 2012 survey, of similar size, had a higher PII refusal rate: 19% vs. 17% in 2011 (25% if the prompt for an email address is included). Respondents who failed verification were twice as likely to fail at least one quality check in the survey.

In conclusion, you can weed out bad actors from a volunteer panel using validation, but these same people are easily caught by quality checks. It is "fair to ask whether the expected benefits of validation are sufficiently great to balance the loss of as much as a quarter of the sample for a study -- and a higher percentage among certain demographic groups?"

Source:
http://www.researchscape.com/blog/sampling-and-data-quality-of-internet-surveys

 

 

 
