Mozilla rolled out a major update to its Firefox web browser on Tuesday with a bevy of new features, and one old frenemy: Google.

In a blog post, Mozilla said Firefox’s default search engine will be Google in the U.S., Canada, Hong Kong and Taiwan. The agreement recalls a similar, older deal that was scuttled when Firefox and Google’s Chrome web browser became bitter rivals. Three years ago, Mozilla switched from Google to Yahoo as the default Firefox search provider in the U.S. after Yahoo agreed to pay more than $300 million a year over five years — more than Google was willing to pay.

The new Firefox deal could boost Google’s already massive share of the web-search market. When people use Firefox, Google’s search box will be on the launch page, prompting users to type in valuable queries that Google can sell ads against. But the agreement also adds another payment that Alphabet’s Google must make to partners that send online traffic to its search engine, a worrisome cost for shareholders.

It’s unclear how much Google paid to reclaim this prized digital spot. A Google spokeswoman confirmed the deal but declined to comment further, and Mozilla didn’t disclose financial details.

As Google’s ad sales keep rising, so too has the amount it must dole out to browsers, mobile device makers and other distribution channels to ensure that Google’s search, video service and digital ads are seen. Those sums, called Traffic Acquisition Costs or TAC, rose to $5.5 billion during the third quarter, or 23 percent of ad revenue.

Last quarter, the increase in TAC was primarily due to “changes in partner agreements,” Google Chief Financial Officer Ruth Porat said on the earnings call. She declined to disclose specific partners. A lot of these payments go to Apple, which runs Google search as the default on its Safari browser. In September, Apple added Google search as the default provider for questions people ask Apple’s voice-based assistant Siri, replacing Microsoft’s Bing. In the third quarter, the TAC Google paid to distribution partners, like Apple, jumped 54 percent to $2.4 billion.

Google is likely paying Mozilla less than Apple for search rights. In 2014, Yahoo’s then-Chief Executive Officer, Marissa Mayer, lobbied heavily for the Firefox deal by agreeing to pay $375 million a year, according to regulatory filings. Google paid $1 billion to Apple in 2014 to keep its search bar on iPhones, according to court records.

Firefox once commanded roughly a fourth of the web browser market, but its share has slid in recent years. It now controls 6 percent of the global market, according to research firm Statcounter. Apple’s Safari holds 15 percent followed by Alibaba’s UC Browser with 8 percent. Google’s Chrome browser has 55 percent of the market.

Source: This article was published on siliconvalley.com by Mark Bergen

Tuesday, 28 November 2017 14:53

The Problems With Searching the Deep Web

For more than 20 years, researchers have worked to conceptualize methods for making web searching more comprehensive—going beyond the surface sites that are easily accessed by today’s search engines to truly create a system of universal access to the world’s knowledge. The task is proving to be far more complicated than computer scientists had thought. “The existing approaches,” notes one recent analysis, “lack [the ability] to efficiently locate the deep web which is hidden behind the surface web.”

Today, it is estimated that more than 65% of all internet searches in the U.S. are done using Google. Both Bing and Yahoo continue to be major players as well.

Avoiding the Dark Side

We all want searching to be more comprehensive, targeting the exact information that we need with the least amount of effort and frustration. However, nestled near the abyss of the information ocean is the dark web, a space where hackers and criminals create fake sites and conduct their commerce. The dark web continues to frustrate efforts to control illegal activity, including credit scams, drug sales, and the exploitation of international relations. Clearly, this isn’t what we are looking for in information retrieval.

By analyzing available data, Smart Insights says that more than 6.5 billion web searches are made each day around the globe. Current hacking scandals are making it clear that safe searching is about more than just protecting children from predators. There are a variety of search options that have been designed with privacy in mind:

  • DuckDuckGo, which bills itself as “the search engine that doesn’t track you”
  • Gibiru, which offers “Uncensored Anonymous Search”
  • Swisscows, a Switzerland-based option that calls itself “the efficient alternative for anyone who attaches great importance to data integrity and the protection of privacy”
  • Lukol, which serves as an anonymous proxy for Google results and removes traceable entities
  • MetaGer, a German search engine that removes any traces of your electronic footprints and also allows for anonymous linking
  • Oscobo, a British product that does not track you and provides a robust option of search types, including images, videos, and maps

And there are others as well, demonstrating that concern for privacy over profits is creating reliable solutions for searchers across the globe.

Going Deeper

Google and other standard web search engines can be infuriating when you’re trying to do intensive background research, because they don’t search deeply into the content of the databases and websites they retrieve. Given the amount of information on the web, this isn’t surprising, but we need better performance if we are to truly rely on web searching as a legitimate option for research. Information professionals are used to the structured searching of verifiable information. What is missing is that deep web content—the “meat” of information that searchers need and expect.

Researchers Andrea Cali and Umberto Straccia noted in a 2017 article, “the Deep Web (a.k.a. the Hidden Web) is the set of data that are accessible on the Internet, usually through HTML forms, but are not indexable by search engines, as they are returned only in dynamically-generated pages.” This distinction has made reaching the content in these databases very difficult. The most successful sites, to date, have been targeting specific types of hidden data.

Working largely from public data (“whether researching arrest records, phone numbers, addresses, demographic data, census data, or a wide variety of other information”), Instant Checkmate is a fee-based service that retrieves data from public databases containing arrest reports, court records, government license information, social media profiles, and more. By doing so, it claims to help “thousands of Americans find what they’re looking for each and every day.” Searches seem to take forever, which, given the size of the databases it is searching, isn’t unreasonable. The data is encrypted to protect the searcher’s identity. Reports are far more detailed than anything we might otherwise be able to find in a more timely manner. Similar services include MyLife, Pipl, and Yippy.

Information professionals are perhaps most familiar with the Internet Archive’s Wayback Machine, the front-end search engine to more than 308 billion archived webpages and link addresses to even more. The Internet Archive itself takes up 30-plus petabytes of server space. For comparison, a single petabyte of data would fill 745 million floppy disks or 1.5 million CD-ROMs.

And that’s just the size of the information that can be searched. Google Scholar and Google Books are two search engines that are working to dig deeper into the content of websites for scholarly information. Searchers can do their own searching by using the “site:” command; however, this is a tedious and hit-or-miss process, since these search engines are only able to scan the indexed pages linked to some domain homepages.
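For example, pairing the command with a domain restricts results to pages Google has already indexed under that domain (the domains and phrases below are purely illustrative):

    site:archive.org "oral history"
    site:harvard.edu deep web research

Because the operator only surfaces pages that have already been crawled, it still cannot reach the dynamically generated pages that make up the deep web.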

Deep Web Search Engines

A variety of search engines are working to provide improved access to key information otherwise hidden inside of websites or behind paywalls. Methods to get to this deep web are currently still under development—and are not regulated to protect users from unethical practices. Deep web search engines can uncover more links and return an estimated 500% more information than traditional search engines.

A number of search engines have been designed to reach these deep web sites, but none of them is an exceptional resource that solves information professionals’ problems of deep searching. These websites pop up and get taken down very frequently, and others appear in their place. None of these systems necessarily has staying power.

To thoroughly access deep web information, you’ll need to install and use a Tor browser, which also provides the basis for access to the dark web. The real issue facing researchers is how to control the search process in these huge, individually structured databases.

Creating a Stable Deep Web Search Tool Is Harder Than You Might Think

In August 2017, a deep web search engine was being touted as bringing better quality deep searching while promising to protect the privacy of users. DeepSearch from TSignal was to be the focus of this NewsBreak; however, it recently disappeared from the web—perhaps it was acquired by another company or taken down for more development and testing. This has happened before and probably will happen again. As researchers noted in a 2013 article, “While crawling the deep-web can be immensely useful for a variety of tasks including web indexing and data integration, crawling the deep-web content is known to be hard.”

Earlier this year, two Indian researchers reported on their goal of creating a dynamic, focused web crawler that would work in two stages: first, to collect relevant sites, and second, for in-site exploring. They noted that the deep web itself remains a major stumbling block because its databases “change continuously and so cannot be easily indexed by a search engine.”

The deep web’s complications are many: query design, requirements for some level of user registration, variant extraction protocols and procedures, and more, to say nothing of the linguistic complications as global searching confronts meanings and connections of terminology across disciplines and languages. Today’s open web search is so ubiquitous that we rarely think about the potential complications; however, the deep web is another animal, and some researchers question whether it would be possible to bridge this divide without doing much work to modify the “present architecture of [the] web.”

Information professionals can easily see the need for better search techniques to handle the complex, evolving nature of the web—and increasingly, so can other professionals. Psychiatrists studying addiction have initiated their own efforts to better access and study the deep web and dark web due to their role in the “marketing or sale or distribution” of drugs and developing an “easily renewable and anarchic online drug-market [which] is gradually transforming indeed the drug market itself, from a ‘street’ to a ‘virtual’ one.”

What can we do as we wait for a better solution to web search? Reliable scholarly databases can easily be supplemented with existing search sites and mega-search engines. Information professionals have always been aware of the complex nature of search, and today, computer scientists and web designers are confronting these realities as well. There is no ultimate solution—which, if nothing else, guarantees the future of our field.

Source: This article was published on newsbreaks.infotoday by Nancy K. Herther

Google is the search engine that we all know and love, but most of us are barely scratching the surface of what this amazing tool can really accomplish. In this article, we're going to look at eleven little-known Google search tricks that will save you time, energy, and maybe even a little bit of cash. Some of these are just for fun (like making Google do a barrel roll), others can help you make better purchasing decisions, take major shortcuts, or dig up information on your favorite band, author, or even favorite foods.

Don't buy it until you Google it

When you're looking to purchase something from your favorite e-commerce store on the Web, don't click on that final checkout button until you've searched for the name of the store plus the word "coupon". These promo codes can help you get free shipping, a percentage off your purchase, or entitle you to future savings. It's always worth a look!

Find works from your favorite authors and artists

Find all the books your favorite author has ever written simply by typing in "books by", then your author's name. You can do this with albums ("albums by") as well. This is a great way to find past works (or future works) that you might not be aware of.

Find the origins of common words

Find out the origins - or etymology - of a specific word by typing in the word plus "etymology". For example, if you type in "flour etymology" you'll see that it is "Middle English: a specific use of flower in the sense ‘the best part,’ used originally to mean ‘the finest quality of ground wheat’.... The spelling flower remained in use alongside flour until the early 19th cent."

Compare the nutritional value of one food with another

Not sure if that piece of pizza is going to be better for you than say a cup of broccoli? Ask Google to compare the nutritional value by typing in "pizza vs. broccoli", or anything else you'd like to compare. Google will come back with all pertinent nutritional and caloric information - it's up to you what you choose to do with that information, of course.

Listen to songs by your favorite artist

If you want to listen to a particular song by your favorite artist, or maybe even explore their discography, just type in "artist" and "songs", i.e., "Carole King songs". You'll get a complete list of songs, plus videos and biographical information. You can also listen to the songs right there within your Web browser; note that this feature isn't always available for all artists.

Find what those symptoms are similar to

Type in something you're experiencing health-wise, and Google will list out similar diagnoses based on what you're experiencing. For example, a search for "headache with eye pain" brings back "migraine", "cluster headache", "tension headache", etc. NOTE: This information is not meant to substitute for that of a licensed medical provider.

Use Google as a timer

Need to keep those cookies from burning while you're browsing your favorite sites? Simply type "set timer for" plus the number of minutes you want to track, and Google will run a timer in the background. If you attempt to close the window or tab that is running the timer, you'll get a popup alert asking if you really want to do that.

Make Google do tricks

There are a multitude of fun tricks that you can make Google do with just a couple simple instructions:

  • Type in "do a barrel roll" or "Z or R twice", and Google will rotate the results page a full 360 degrees. 
  • Type in "tilt" or "askew", and your page 'leans' to the right a bit. Searching for anything else via Google puts it back to where it was.
  • Type in  "zerg rush" and your search page returns with 'O's eating the search results. Clicking each 'O' three times stops this.

Find the roster of any sports team

Get a detailed roster breakdown of your favorite sports team simply by typing in "team roster" (substituting the name of your team for the word "team"). You'll see a full-page color roster, with player information.

Find a quote

Use quotation marks to search for an exact quote and its origin. For example, if you knew the partial lyrics to a song, but weren’t sure of the singer or the songwriter, you could simply frame the snippet that you did know in quotation marks and plug it into Google. More often than not, you’ll receive the full song lyrics as well as author, when it was first released, and other identifying information.

Find related sites

Using Google, you can use a little-known command that will bring up sites related to a specified site. This comes in handy especially if you really enjoy a particular site and you’d like to see if there are others that are similar. Use “related:” to find sites that are similar; for example, “related:nytimes.com”.

Source: This article was published on lifewire.com by Wendy Boswell

Email hacking is prolific, and the results can be severe. Email account attacks often result in password theft, identity theft, account theft, and credit card fraud. Now, Google has published the results of a study that reveals the most common methods hackers use to penetrate Gmail accounts. The tech giant hopes that the research results will help to educate consumers about how to protect their accounts.

The most common method hackers use, according to Google, is phishing. This technique is very common, and can be carried out in many different ways. The most intricate phishing attacks are personalized and targeted (using social engineering).

Socially engineered phishing comes in the form of newsletters about agriculture for farmers, links to articles about cryptocurrencies for investors, or emails with links to professional resources relating to whichever particular career the target has.

On other occasions, a spoof PayPal email that confirms a purchase on Amazon or eBay will link to a fake login page for the service. These kinds of phishing emails rely on the victim’s confusion and concern (because they don’t remember making the purchase) to trick them into entering their details. Sadly, as soon as the target enters their credentials into the fake login page, the cybercriminal gains full access to that account.

Various Methods

Google explains that hackers are using a whole host of methods to penetrate email accounts. Its security blog post, titled "New research: Understanding the root cause of account takeover," shares useful information that could help prevent future attacks.

It reveals that 15% of surveyed users believe they suffered a social media or email account hack between March 2016 and March 2017. In addition, Google has disclosed that around 250,000 web logins are “phished” each week.

In total, the researchers identified 788,000 potential victims of key-logging and 12.4 million potential victims of phishing. Google also revealed that around 3.3 billion accounts were endangered by third-party breaches.

The Results

Working alongside researchers at the University of California, Berkeley, Google analyzed various deep web black markets. By searching for stolen credentials, the researchers were able to ascertain a number of important things.

The researchers concluded that many attacks were the result of a ‘hit and miss’ method involving passwords gathered from previous cyberattacks. This is important because it means that consumers could spare themselves the headache of having multiple accounts penetrated simply by not reusing credentials.

Often, when hackers manage to get the login credentials for one account, they will sell those login credentials on the dark web. Other hackers buy those credentials en masse, then use them to try to break into other websites.

If consumers used different passwords for each account, or two-factor authentication, then this technique wouldn’t work. Sadly, more often than not, people use the same email address and password for their Facebook, Twitter, Instagram, Gmail, Slack, Skype, and any other accounts they have. This means that once hackers have breached one account, the rest are vulnerable.  

Sophisticated Techniques

Although phishing and purchasing credentials online are two of the most common methods for gaining entry to email accounts, there are more complex methods. During the course of the year-long study, researchers at Berkeley analyzed 25,000 hacking tools. The researchers found that attack vectors using key-loggers and trojans, which collect data about users, are becoming much more common.

According to the findings, software that ascertains people’s IP addresses is often delivered via phishing techniques. Then, in a secondary attack, the hacker delivers the key-logging malware or – worse – a trojan that communicates with a Command and Control (CnC) server.

These types of trojans give cybercriminals easy access to people’s machines, allowing them to search the entire system, and even to turn on microphones and webcams. With this kind of malware on a victim’s machine, it’s only a matter of time before credentials are entered and passwords or credit card details are siphoned off.

Simple Solutions Go a Long Way

The first thing that consumers must start doing is using unique passwords for all their accounts. A unique password ensures that credentials stolen from one site and sold by dark web vendors can’t be used to access your other accounts. A secure password needs to be long and difficult (not a pet’s name!). This kind of secure password is too tough to actually remember. For this reason, it is going to be necessary to either have a little black book that you keep your passwords in (which isn’t that secure, because you could lose it) or to use a password manager.

A password manager like KeePass will allow you to remember just one difficult password in order to access a whole database of strong passwords for all your accounts. This takes the pressure off and allows you to have super strong, unique passwords.
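For the technically inclined, here is a minimal sketch of what generating such a password might look like in Python, using the standard library's secrets module; the 20-character length and the character set are illustrative choices, not a standard.

    import secrets
    import string

    def make_password(length: int = 20) -> str:
        """Build a random password from letters, digits, and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        # secrets draws from the OS's cryptographically secure random source,
        # unlike the random module, whose output is predictable.
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(make_password())  # a different password on every run

A password manager can then store the output, so nothing needs to be memorized.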

Antivirus Protection

As far as malware and trojans go, a good antivirus and firewall go a long way. What’s more, there are plenty of free antivirus and anti-malware programs on the market, so you have no excuse for not having one. Yes, you can pay up to $100 per year for an antivirus. However, the reality is that you don’t actually get better malware protection by paying more: you just get more tools (that you don’t really need).

When it comes to a firewall, Windows has had an excellent one built in since way back in Windows XP. Using the Windows firewall in combination with an up-to-date anti-malware tool like Malwarebytes is essential for security.

In addition, it’s important to always take software updates when they become available. Flash updates, web browser updates, and other software updates – such as operating system security patches – all ensure that your system is protected against the latest threats. Zero-day vulnerabilities are discovered all the time, and they can lead to very severe threats.

Two-factor Authentication

According to recent studies, most Americans are not using two-factor authentication. This is a real shame because it is the easiest way to protect accounts. If you haven’t already, please do set up two-factor authentication on your email account (and other accounts).
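To see why this helps, it's worth knowing how the six-digit codes in most authenticator apps are produced. They typically follow the TOTP algorithm of RFC 6238, sketched below in Python using only the standard library (the Base32 secret is a made-up example):

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Derive the current time-based one-time code from a shared secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period  # 30-second time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a fresh code every 30 seconds

Because each code depends on both the shared secret and the current time, a password phished today is useless to an attacker who doesn't also hold a valid code from the current half-minute window.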

Virtual Private Networks

People should also strongly consider using a Virtual Private Network (VPN). A VPN is one of the most advanced forms of internet protection. It works by securely encrypting all of the data coming and going from a connected device. This ensures that even if someone does ‘sniff’ your traffic (using the newly discovered KRACK vulnerability, for example), they can’t actually steal your credentials.

In addition, when you connect to a VPN, your real IP address is concealed and replaced with the IP address of a VPN server. By hiding your true IP address, VPNs make it harder for hackers to deliver trojans and other malware to your devices.

Finally, internet users should always be wary when opening links that look official in emails. Phishing emails are very convincing, but if you look at the actual address in the browser’s address bar, it’s usually possible to tell if you’re on the real site.

The best thing to do is not click on links in emails. Instead, navigate to the website in question manually by entering the address into your browser. If you’re on the real site, the address should start with HTTPS and have a little green lock on the left that shows you the connection is secure. When in doubt, check the web address bar in your browser. 
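For readers curious about what that lock icon actually verifies, here is a minimal Python sketch that performs the same certificate-chain and hostname validation a browser does; the hostname is just an example.

    import socket
    import ssl

    def check_https(hostname: str) -> None:
        """Open a TLS connection, validating the certificate chain and hostname."""
        context = ssl.create_default_context()  # loads trusted CA roots, enables hostname checking
        with socket.create_connection((hostname, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                print(hostname, "certificate issued by:", dict(cert["issuer"][-1]))

    check_https("www.example.com")  # a failed validation raises ssl.SSLError instead of printing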

Google Tightening Security

The good news is that Google has used the information to add security to its service.

Last month, the firm launched a number of tools designed to help people protect their accounts. These include a personalized account security checkup, new phishing warnings, and an Advanced Protection Program for at-risk users.

In addition, Google has tightened up the location radius for accounts, meaning that people will more often be asked to confirm that an unusual login is really them. Google believes that it has already used the findings from its study to prevent hackers from penetrating a staggering 67 million Google accounts.

Source: This article was published on bestvpn.com by Ray Walsh

No matter how popular and easy to use an email platform like Gmail may be, having to actually go ahead and manage email on a day-to-day basis can be a daunting, dreadful task. Using extra email management tools that work with Gmail may not make you fall in love with email, but it will certainly help take some of the headache out of it by giving you back some of your precious time and energy.

Whether you use Gmail for personal or professional reasons, on the web or from a mobile device, all of the following tools may be of great benefit to you. Take a look to see which ones catch your eye.

Inbox by Gmail

Inbox by Gmail is basically a must-have if you regularly check your messages from your mobile device. Google took everything it knew about how its users were using Gmail and came up with a brand new, super intuitive, highly visual email platform that simplifies and speeds up email.

Group incoming email messages in bundles for better organization, see highlights at a glance with card-like visuals, set reminders for tasks that need to be done later and "snooze" email messages so you can take care of them tomorrow, next week, or whenever you want.

Boomerang for Gmail

Ever wish you could write an email now, but send it later? Instead of doing exactly that – leaving it as a draft and then trying to remember to send it at a specific time – just use Boomerang. Free users can schedule up to 10 emails per month (and more if you post about Boomerang on social media).

When you write a new email in Gmail with Boomerang installed, you can press the new "Send Later" button that appears next to the regular "Send" button, which allows you to quickly pick a time to send (tomorrow morning, tomorrow afternoon, etc.) or the opportunity to set an exact date and time to send it.

Unroll.me

Subscribe to too many email newsletters? Unroll.me not only allows you to unsubscribe from them in bulk, but also lets you create your own "rollup" of email newsletters, which brings you a daily digest of all the newsletter subscriptions you actually want to keep.

Unroll.me also has a nifty iOS app you can use to manage all your email subscriptions while you're on the go. If there's a particular subscription you want to keep in your inbox, just send it to your "Keep" section so Unroll.me doesn't touch it.

Rapportive

Do you communicate with a lot of new people via Gmail? If you do, sometimes it can feel eerily robotic when you don't know who's on the other end of the screen. Rapportive is one tool that offers a solution by connecting to LinkedIn so it can automatically match profiles based on the email address you're communicating with.

So when you send or receive a new message, you'll see a short LinkedIn profile summary on the right-hand side of Gmail featuring their profile photo, location, current employer and more — but only if they have filled out that information on LinkedIn and have their account associated with that email address. It's potentially a nice way to put a face to an email message.

SaneBox

Similar to Unroll.me, SaneBox is another Gmail tool that can help automate your organization of incoming messages. Instead of creating filters and folders yourself, SaneBox will analyze all of your messages and activity to understand which emails are important to you before moving all of the unimportant emails to a new folder called "SaneLater."

You can also move unimportant messages that still show up in your inbox to your SaneLater folder, and if something that gets filed into your SaneLater folder becomes important again, you can move it out of there. Even though SaneBox takes the manual work out of organization, you still have full control over those messages you need to specifically put somewhere.

LeadCooker

When it comes to online marketing, there's no question that email is still massively important. Many email marketers send messages all at once to hundreds or thousands of email addresses with the click of a button using third-party email marketing platforms like MailChimp or Aweber. The downside to this is that it's not very personal and can easily end up as spam.

LeadCooker can help you strike a balance between emailing lots of people and keeping it more personal. You still get a lot of the features of traditional email marketing platforms like automated follow-ups and tracking, but recipients won't see an unsubscribe link and your messages come straight from your Gmail address. Plans start at $1 per 100 emails with LeadCooker.

Sortd for Gmail

Sortd is an amazing tool that completely transforms the look of your Gmail account into something that looks and functions much more like a to-do list. With a UI that's as simple and as intuitive to use as Gmail itself, the aim of Sortd is to offer people who struggle to stay on top of email a better way to stay organized.

Sortd is the first "smart skin" for Gmail that divides your inbox into four main columns, with options to customize things the way you want. There are also apps available for both iOS and Android. Since it's currently in beta, the tool is totally free for now, so check it out while you can before pricing is put in place!

Giphy for Gmail

Giphy is a popular search engine for GIFs. While you can certainly go straight to Giphy.com to search for a GIF to embed in a new Gmail message, a much easier and more convenient way to do it is by installing the Giphy for Gmail Chrome extension.

If you love using GIFs in Gmail, this is a must-have to help you save more time and compose your messages more efficiently. The reviews of this extension are pretty good overall, although some reviewers have expressed concern about bugs. The Giphy team seems to update the extension every so often, so if it doesn't work for you straight away, consider trying it again when a new version is available.

Ugly Email

More email senders are now using tracking tools so they can get to know more about you without you even knowing it. They can typically see when you open their emails, if you clicked on any links inside, where you're opening/clicking from, and what device you're using. If you really value your privacy, you may want to consider taking advantage of Ugly Email to help you easily identify which Gmail messages that you receive are being tracked.

Ugly Email, which is a Chrome Extension, simply puts a little "evil eye" icon in front of the subject field of every tracked email. When you see that little evil eye, you can decide whether you want to open it, trash it, or maybe create a filter for future emails from that sender.

SignEasy for Gmail

Receiving documents as attachments in Gmail that need to be filled out and signed can be a real pain to work with. SignEasy simplifies the whole process by allowing you to easily fill out forms and sign documents without ever leaving your Gmail account.

A SignEasy option appears when you click to view the attachment in your browser. Once you've filled out the fields that need completion, the updated document is attached in the same email thread.

Source: This article was published on lifewire.com by Elise Moreau

In this introduction to the basic steps of market research, the reader can find help with framing the research question, figuring out which approach to data collection to use, how best to analyze the data, and how to structure the market research findings and share them with clients.

The market research process consists of six discrete stages or steps.

The third step of market research - Collect the Data or Information - entails several important decisions. One of the first things to consider at this stage is how the research participants are going to be contacted. There was a time when survey questionnaires were sent to prospective respondents via the postal system. As you might imagine, the response rate was quite low for mailed surveys, and the initiative was costly.

Telephone surveys were also once very common, but people today let their answering machines take calls or they have caller ID, which enables them to ignore calls they don't want to receive. Surprisingly, the Pew Research Center still conducts an amazingly large number of telephone surveys, many of which are part of longitudinal or long-term research studies. Large-scale telephone studies are commonly conducted by the Pew researchers, and the caliber of their research is top-notch.

Some companies have issued pre-paid phone cards to consumers who are asked to take a quick survey before they use the free time on the calling card. If they participate in the brief survey, the number of free minutes on their calling card is increased. Companies that have used this method of telephone surveying include Coca-Cola, NBC, and Amoco.

Methods of Interviewing

In-depth interviews are one of the most flexible ways to gather data from research participants. Another advantage of interviewing research participants in person is that their non-verbal language can be observed, as well as other attributes about them that might contribute to a consumer profile. Interviews can take two basic forms: Arranged interviews and intercept interviews.

Arranged interviews are time-consuming, require logistical considerations of planning and scheduling, and tend to be quite expensive to conduct. Exacting sampling procedures can be used in arranged interviews, which contributes to the usefulness of the interview data set. On the other hand, the face-to-face aspect of in-depth interviewing can introduce interviewer bias, so training of interviewers necessarily becomes a component of an in-depth interviewing project.

Intercept interviews take place in shopping malls, on street corners, and even at the threshold of people's homes. With intercept interviews, the sampling is non-probabilistic. For obvious reasons, intercept interviews must be brief, to the point, and not ask questions that are off-putting; otherwise, the interviewer risks seeing the interviewee walk away. One version of an intercept interview occurs when people respond to a survey that is related to a purchase they just made. Instructions for participating in the survey are printed on their store receipt and, generally, the reward for participating is a free item or a chance to be entered in a sweepstakes.

Online data collection is rapidly replacing other methods of accessing consumer information. Brief surveys and polls are everywhere on the Web. Forums and chat rooms may be sponsored by companies that wish to learn more from consumers who volunteer their participation. Cookies and clickstream data send information about consumer choices right to the computers of market researchers. Focus groups can be held online and in anonymous blackboard settings.

Market research has become embedded in advertising on digital platforms.

There are still many people who do not regularly have access to the Internet. Providing internet access for people who do not have connections at home or are intimidated by computing or networking can be fruitful. Often, the novelty of encountering an online market research survey or poll that looks and acts like a game is incentive enough to convert reticent Internet users.

Characteristics of Data Collection

Data collection strategies are closely tied to the type of research that is being conducted, as the traditions are quite strong and have resilient philosophical foundations. In the rapidly changing field of market research, these traditions are being eroded as technology makes new methods available. The shift to more electronic means of surveying consumers is beneficial in a number of ways. Once the infrastructure is in place, digital data collection is rapid, relatively error-free, and often fun for consumers. Where data collection is still centralized, market researchers can eliminate the headache of coding data by having respondents enter their answers directly into computers or touch screens. The coding is instantaneous and the data analysis is rapid.

Regardless of how data is collected, the human element is always important. It may be that the expert knowledge of market researchers shifts to different places in the market research stream. For example, the expert knowledge of a market researcher is critically important in the sophisticated realm of Bayesian network simulation and structural equation modeling -- two techniques that are conducted through computer modeling. Intelligently designed market research requires planning regardless of the platform. The old adage still holds true: Garbage in, garbage out.

Now you are ready to take a look at Step 4 of the market research process: Analyze the Data.

Sources

Kotler, P. (2003). Marketing Management (11th ed.). Upper Saddle River, NJ: Pearson Education, Inc., Prentice Hall.

Lehmann, D. R., Gupta, S., and Steckel, J. (1997). Marketing Research. Reading, MA: Addison-Wesley.

Rather than becoming ubiquitous in homes as expected, the Internet of Things (IoT) has become the butt of jokes, in part because of major security and privacy issues. UK mobile chip designer ARM -- which created the architecture used by Qualcomm, Samsung and others -- has a lot to lose if the IoT doesn't take off. As such, it has unveiled a new security framework called Platform Security Architecture (PSA) that will help designers build security directly into device firmware.

ARM notes that "many of the biggest names in the industry" have signed on to support PSA (sorry ARM, that's a bad acronym). That includes Google Cloud Platform, Sprint, SoftBank (which owns ARM) and Cisco.

The main component is an open-source reference firmware, "Firmware-M," which the company will release for Armv8-M systems in early 2018. ARM said that PSA also gives hardware, software and cloud platform designers IoT threat models, security analyses, and hardware and firmware architecture specifications, based on a "best practice approach" for consumer devices.

Despite Intel's best efforts, ARM is far and away the most prevalent architecture used in connected homes for security devices, light bulbs, appliances and more. ARM says that over 100 billion IoT devices using its designs have shipped, and expects another 100 billion by 2021. Improving the notoriously bad security of such devices is a good start, but it also behooves manufacturers to create compelling devices, not pointless ones.

Source: This article was published on engadget.com by Steve Dent

Friday, 27 October 2017 12:17

Bing more than search engine

Queries provide data mine for Microsoft's AI developments

Microsoft's Bing search engine has long been a punch line in the tech industry, an also-ran that has never come close to challenging Google's dominant position.

But Microsoft could still have the last laugh, since its service has helped lay the groundwork for its burgeoning artificial intelligence effort, which is helping keep the company competitive as it builds out its post-PC future.

Bing probably never stood a chance at surpassing Google, but its second-place spot is worth far more than the advertising dollars it pulls in with every click. Billions of searches over time have given Microsoft a massive repository of everyday questions people ask about their health, the weather, store hours or directions.

“The way machines learn is by looking for patterns in data,” said former Microsoft CEO Steve Ballmer, when asked earlier this year about the relationship between Microsoft's AI efforts and Bing, which he helped launch nearly a decade ago. “It takes large data sets to make that happen.”

Microsoft has spent decades investing in various forms of artificial intelligence research, the fruits of which include its voice assistant Cortana, email-sorting features and the machine-learning algorithms used by businesses that pay for its cloud platform Azure.

It's been stepping up its overt efforts recently, such as with this year's acquisition of Montreal-based Maluuba, which aims to create “literate machines” that can process and communicate information more like humans do.

Some see Bing as the overlooked foundation to those efforts.

“They're getting a huge amount of data across a lot of different contexts – mobile devices, image searches,” said Larry Cornett, a former executive for Yahoo's search engine. “Whether it was intentional or not, having hundreds of millions of queries a day is exactly what you need to power huge artificial intelligence systems.”

Bing started in 2009, a rebranding of earlier Microsoft search engines. Yahoo and Microsoft signed a deal for Bing to power Yahoo's search engine, giving Microsoft access to Yahoo's greater search share, said Cornett, who worked for Yahoo at the time. Similar deals have infused Bing into the search features for Amazon tablets and, until recently, Apple's Siri.

All of this has helped Microsoft better understand language, images and text at a large scale, said Steve Clayton, who as Microsoft's chief storyteller helps communicate the company's AI strategy.

“It's so much more than a search engine for Microsoft,” he said. “It's fuel that helps build other things.”

Bing serves dual purposes, he said, as a source of data to train artificial intelligence and a vehicle to be able to deliver smarter services.

While Google also has the advantage of a powerful search engine, other companies making big investments in the AI race – such as IBM or Amazon – do not.

“Amazon has access to a ton of e-commerce queries, but they don't have all the other queries where people are asking everyday things,” Cornett said.

Neither Bing nor Microsoft's AI efforts have yet made major contributions to the company's overall earnings, though the company repeatedly points out “we are infusing AI into all our products,” including the workplace applications it sells to corporate customers.

The company on Thursday reported fiscal first-quarter profit of $6.6 billion, up 16 percent from a year earlier, on revenue of $24.5 billion, up 12 percent. Meanwhile, Bing-driven search advertising revenue increased by $210 million, or 15 percent, to $1.6 billion – or roughly 7 percent of Microsoft's overall business.

That's OK by current Microsoft CEO Satya Nadella, who nearly a decade ago was the executive tapped by Ballmer to head Bing's engineering efforts.

In his recent autobiography, Nadella describes the search engine as a “great training ground for building the hyper-scale, cloud-first services” that have allowed the company to pivot to new technologies as its old PC-software business wanes.

Source: This article was published on journalgazette.net by Matt O'Brien

How has Google's local search changed throughout the years? Columnist Brian Smith shares a timeline of events and their impact on brick-and-mortar businesses.

Deciphering the Google algorithm can sometimes feel like an exercise in futility. The search engine giant has made many changes over the years, keeping digital marketers on their toes and continually moving the goalposts on SEO best practices.

Google’s continuous updating can hit local businesses as hard as anyone. Every tweak and modification to its algorithm could adversely impact their search ranking or even prevent them from appearing on the first page of search results for targeted queries. What makes things really tricky is the fact that Google sometimes does not telegraph the changes it makes or how they’ll impact organizations. It’s up to savvy observers to deduce what has been altered and what it means for SEO and digital marketing strategies.

What’s been the evolution of local search, and how did we get here? Let’s take a look at the history of Google’s local algorithm and its effect on brick-and-mortar locations.

2005: Google Maps and Local Business Center become one

After releasing Local Business Center in March 2005, Google took the next logical step and merged it with Maps, creating a one-stop shop for local business info. For users, this move condensed relevant search results into a single location, including driving directions, store hours and contact information.

This was a significant moment in SEO evolution, increasing the importance of up-to-date location information across store sites, business listings and online directories.

2007: Universal Search & blended results

Universal Search signified another landmark moment in local search history, blending traditional search results with listings from Google's own vertical search engines. Instead of working solely through the more general, horizontal SERPs, Universal Search combined results from Google's vertical-focused search properties like Images, News and Video.

Google’s OneBox started to appear within organic search results, bringing a whole new level of exposure that was not there before. The ramifications for local traffic were profound, as store listings were better positioned to catch the eye of Google users.

2010: Local Business Center becomes Google Places

In 2010, Google rebranded and repurposed Local Business Center, launching Google Places. This was more than a mere name change, as a number of important updates were included, like new image features, local advertising options and the availability of geo-specific tags for certain markets. More importantly, Google attempted to align Places pages with localized search results, where previously localized results had drawn their information from Google Maps.

The emergence of Places further cemented Google’s commitment to bringing local search to the forefront. To keep up with these rapidly changing developments, brick-and-mortar businesses needed to make local search a priority in their SEO strategies.

2012: Google goes local with Venice

Prior to Venice, Google’s organic search results defaulted to more general nationwide sites. Only Google Maps would showcase local options. With the Venice update, Google’s algorithm could take into account a user’s stated location and return organic results reflecting that city or state. This was big, because it allowed users to search anchor terms without using local modifiers.

The opportunity for companies operating in multiple territories was incredible. By setting up local page listings, businesses could effectively rank higher on more top-level queries just by virtue of being in the same geographic area as the user. A better ranking with less effort — it was almost too good to be true.

2013: Hummingbird spreads its wings

Hummingbird brought about significant changes to Google’s semantic search capabilities. Most notably, it helped the search engine better understand long-tail queries, allowing it to more closely tie results to specific user questions — a big development in the eyes of many search practitioners.

Hummingbird forced businesses to change their SEO strategies to adapt and survive. Simple one- or two-word phrases would no longer be the lone focal point of a healthy SEO plan, and successful businesses would soon learn to target long-tail keywords and queries — or else see their digital marketing efforts drop like a stone.

2014: Pigeon takes flight

Two years after Venice brought local search to center stage, the Pigeon update further defined how businesses ranked on Google’s localized SERPs. The goal of Pigeon was to refine local search results by aligning them more directly with Google’s traditional SEO ranking signals, resulting in more accurate returns on user queries.

Pigeon tied local search results more closely with deep-rooted ranking signals like content quality and site architecture. Business listings and store pages needed to account for these criteria to continue ranking well on local searches.

2015: RankBrain adds a robotic touch

In another major breakthrough for Google’s semantic capabilities, the RankBrain update injected artificial intelligence into the search engine. Using RankBrain’s machine learning software, Google’s search engine was able to essentially teach itself how to more effectively process queries and results and more accurately rank web pages.

RankBrain’s ability to more intelligently process page information and discern meaning from complex sentences and phrases further drove the need for quality content. No more gaming the system. If you wanted your business appearing on the first SERP, your site had better have the relevant content to back it up.

2015: Google cuts back on snack packs

In a relatively small but important 2015 update, Google scaled back its “snack pack” of local search results from seven listings to a mere three. While this change didn’t affect the mechanics of SEO much, it limited visibility on page one of search results and further increased the importance of ranking high in local results.

2016: Possum shakes things up

The Possum update was an attempt to level the playing field when it came to businesses in adjoining communities. During the pre-Possum years, local search results were often limited to businesses in a specific geographical area. This meant that a store in a nearby area just outside the city limits of Chicago, for instance, would have difficulty ranking and appearing for queries that explicitly included the word “Chicago.”

Instead of relying solely on search terms, Possum leveraged the user’s location to more accurately determine what businesses were both relevant to their query and nearby.

This shift to user location is understandable given the increasing importance of mobile devices. Letting a particular search phrase dictate which listings are returned doesn’t make much sense when the user’s mobile device provides their precise location.

2017 and beyond

Predicting when the next major change in local search will occur and how it will impact ranking and SEO practices can be pretty difficult, not least because Google rarely announces or fully explains its updates anymore.

That being said, here are some evergreen local SEO tips that never go out of fashion (at least not yet):

  • Manage your local listings for NAP (name, address, phone number) accuracy and reviews.
  • Be sure to adhere to organic search best practices and cultivate localized content and acquire local links for each store location.
  • Mark up your locations with structured data, particularly Location and Hours, and go beyond if you are able to (a minimal sketch follows this list).
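As an illustration of that last point, here is a minimal Python sketch that emits a schema.org LocalBusiness block as JSON-LD; the business details are invented, and the property set shown is a starting point rather than a complete markup recommendation.

    import json

    # Hypothetical store details -- swap in a real location's data.
    local_business = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": "Example Coffee Shop",
        "telephone": "+1-312-555-0100",
        "address": {
            "@type": "PostalAddress",
            "streetAddress": "123 Main St",
            "addressLocality": "Chicago",
            "addressRegion": "IL",
            "postalCode": "60601",
        },
        "openingHours": ["Mo-Fr 07:00-19:00", "Sa-Su 08:00-17:00"],
    }

    # Paste the output into a <script type="application/ld+json"> tag on the location page.
    print(json.dumps(local_business, indent=2))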

When in doubt, look at what your successful competitors are doing, and follow their lead. If it works, it works — that is, until Google makes another ground-shaking algorithm change.

Source: This article was published on searchengineland.com by Brian Smith

Monday, 16 October 2017 13:25

The future of the Internet of Things

Life online has been rough lately — for the billions of people who use the Web every day, and also for the tech giants behind much of the world’s hardware and software.

Meanwhile, amid hacks and misinformation, the Internet is entering a new frontier. Connected devices, or the Internet of Things, are introducing the Internet to even more private aspects of our lives.

First, the latest news:

A massive cybersecurity breach at Equifax exposed millions of Americans’ most sensitive data, from Social Security numbers to home addresses. The aftermath yielded even more digital drama: erroneous tweets, fake websites and phishing scams.

At Facebook, Mark Zuckerberg has admitted politically motivated Russian accounts used the social network during the recent US presidential election. “I don’t want anyone to use our tools to undermine democracy,” Zuckerberg had to aver.

Across the Atlantic, Google is facing a staggering $2.7 billion antitrust fine from the European Union. Officials say the search giant is depriving Internet users of choice and depriving its competitors of a fair shake. (Google disagrees and says it’s actually improving user choice and competition.)

And on the African continent, the government of Togo knocked the Internet offline amid growing protests. Social media sites, online banking and mobile text messaging were blocked — a blow to freedom of expression and other democratic ideals.

Headlines such as these are reminders that online life is deeply entwined with offline life. What happens on the Internet affects our pocketbooks and even our democracies.

All of this is happening as the Internet of Things grows exponentially. In the ’90s, the Internet was tethered to our desktops. Last decade, it leapt onto our phones and into our pockets. Now, the Internet is becoming pervasive: It’s entering our cities, our cars, our thermostats and even our salt shakers.

As a result, the connection between online life and all other aspects of life is deepening. Five or 10 years down the line, the implications of another Equifax hack, or another Internet shutdown, will be far greater.

What do we do? Right now, the Internet of Things is at an inflection point. It’s pervasive but also still in its infancy. Rules have yet to be written, and social mores yet to be established. There are many possible futures — some darker than others.

If we continue forward with the Internet’s current design patterns — controlled by a handful of Silicon Valley giants, with personal data as currency — those darker futures will likely prevail. Internet-connected bedrooms, cars, pacemakers and dialysis machines would be beholden to companies, not individual users. Personal data — captured on an even more granular level — would remain currency, and threats such as hacking would extend to even more intimate areas of our lives.

There are already sobering examples. Consider My Friend Cayla, the Internet-enabled toy that the German government labeled an “illegal espionage apparatus.” Cayla is a seemingly innocuous, Barbie-like doll. But Cayla records conversations, hawks products to impressionable youngsters, and is vulnerable to hackers.

Even good news raises murky questions. Tesla recently gave its customers affected by Hurricane Irma a battery boost — a noble gesture. But some Floridians and journalists questioned the implications: What happens if someone other than Tesla gains access to a fleet of vehicles? This is a concern about connected cars that long predates Hurricane Irma.

Alternatively, we — Internet users and consumers — can demand new, better design patterns. The Internet of Things can adopt an ethos akin to the early Internet: decentralized, open source and harmonized with privacy.

Here, too, there are examples. A growing number of technologists ask not only what’s possible but also what’s responsible. At the recent ThingsCon event in Berlin, creations from this movement were on display.

We encountered the concept of Internet of Things trust marks — third-party labels that signal whether a device sufficiently respects privacy. We were introduced to Simply Secure’s Knowledge Base, a tool kit instructing developers and designers how to make privacy-respecting, secure products. And we saw SolarPak, a backpack created in Senegal that’s equipped with a solar panel. It collects energy during the day to power a small LED lamp at night, allowing students to study.

These are ideas and devices that solve problems, as many technology products do. But they also put responsibility first; data collection and planned obsolescence aren’t part of the equation.

This isn’t a simple issue. Big tech platforms do have an important role to play in making the Internet of Things more responsible and ethical. But the dynamic of so few controlling so much of our lives is simply too risky. As we welcome the Internet into more intimate parts of our lives, individual consumers and users must remain in control. And loud consumer demand alone isn’t enough: Regulators and industry leaders need to take steps, too. Together, we must ensure new hardware and software put responsibility ahead of flashiness and profit.

Source: This article was published on wtkr.com by CNN Wire
