Source: This article was published econsultancy.com By Rebecca Sentance - Contributed by Member: William A. Woods

What does the future hold for voice search? If you search the web for these words – or a version of them – you’ll encounter no shortage of grand predictions.

“By 2020, 30% of web browsing sessions will be done without a screen.” Or, “By 2020, 50% of all searches will be conducted via voice.” (I’ll come back to that one in a second). Or, “2017 will be the year of voice search.” Oops, looks like we might have missed the boat on that last one.

The great thing about the future is that no-one can know exactly what’s going to happen, but you can have fun throwing out wild predictions, which most people will have forgotten about by the time we actually get there.

That’s why you get so many sweeping, ambitious, and often contradictory forecasts doing the rounds – especially with a sexy, futuristic technology like voice. It doesn’t do anyone any real harm unless for some reason your company has decided to stake its entire marketing budget on optimizing for the 50% of the populace who are predicted to be using voice search by 2020.

However, in this state of voice search series, I’ve set out to take a realistic look at voice search in 2018, beyond the hype, to determine what opportunities it really presents for marketers. But when it comes to predicting the future, things get a little murkier.

I've made some cautious predictions along these lines: if smart speaker ownership increases over the coming years, voice search volume will also likely increase; and mobile voice search may be dropping away as smart speaker voice search catches on.

In this article, though, I'll be looking at where voice search as a whole could be going: not just on mobile, or on smart speakers, but of any kind. What is the likelihood that voice search will go "mainstream" to the point that it makes up as substantial a portion of overall search volume as is predicted? What are the obstacles to that? And what does this mean for the future of voice optimisation?

Will half of all searches by 2020 really be voice searches?

I'm going to start by looking at one of the most popular predictions that is cited in relation to voice search: "By 2020, 50% of all searches will be carried out via voice."

This statistic is popularly attributed to comScore, but as is often the case with stats, things have become a little distorted in the retelling. The original prediction behind this stat actually came from Andrew Ng, then Chief Scientist at Baidu. In an exclusive interview with Fast Company in September 2014, he stated that "In five years' time, at least 50% of all searches are going to be either through images or speech."

The quote was then popularised by Mary Meeker, who included it on a timeline of voice search in her Internet Trends 2016 Report, with "2020" as the year by which this prediction was slated to come true.

So, not just voice search, but voice and visual search. This makes things a little trickier to benchmark, not least because we don't have any statistics yet on how many searches are carried out through images. (I'm assuming this would include the likes of Google Lens and Pinterest Lens, as well as Google reverse image search).

Let's assume for the sake of argument that 35% of Ng's predicted 50% of searches will be voice search, since voice technology is somewhat more widespread and better supported, while visual search is still largely in its infancy. How far along are we towards reaching that benchmark?

I'm going to be generous here and count voice queries of every kind in my calculations, even though as I indicated in Part 1, only around 20% of these searches can actually be ranked for. Around 60% of Google searches are carried out on mobile (per Hitwise), so if we use Google's most recent stat that 1 in every 5 mobile searches is carried out via voice, that means about 12% of all Google searches (420 million searches) are mobile voice queries.

In Part 2 I estimated that another 26.4 million queries are carried out via smart speakers, which is an additional 0.75% - so in total that makes 12.75% of searches, or if we're rounding up, 13% of Google searches that are voice queries.

This means that the number of voice queries on Google would need to increase by another 22 percentage points over the next year and a half for Ng's prediction to come true. To reach 50% - the stat most often cited by voice enthusiasts as to why voice is so crucial to optimise for - we would need to find an additional 1.3 billion voice searches per day from somewhere.

That's nearly ten times the number of smart speakers predicted to ship to the US over the next three years. Even if you believe that smart speakers will single-handedly bring voice search into the mainstream, it's a tall order.
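As a sanity check, the back-of-envelope arithmetic above can be reproduced in a few lines. The total of roughly 3.5 billion Google searches per day is an assumption, implied by the 420 million / 12% mobile voice figures cited earlier:

```python
# Reconstruction of the voice search share estimates above.
# Assumption: ~3.5 billion Google searches per day, which is what the
# 420 million / 12% mobile voice figures imply.
TOTAL_DAILY_SEARCHES = 3.5e9

mobile_share = 0.60            # Hitwise: ~60% of Google searches are mobile
voice_share_of_mobile = 0.20   # Google: 1 in 5 mobile searches is spoken

mobile_voice_share = mobile_share * voice_share_of_mobile          # 12%
mobile_voice_daily = mobile_voice_share * TOTAL_DAILY_SEARCHES     # ~420M

smart_speaker_daily = 26.4e6                                       # from Part 2
smart_speaker_share = smart_speaker_daily / TOTAL_DAILY_SEARCHES   # ~0.75%

total_voice_share = mobile_voice_share + smart_speaker_share       # ~12.75%

gap_to_35_pct = 0.35 - total_voice_share                           # ~22 points
extra_for_50_pct = (0.50 - total_voice_share) * TOTAL_DAILY_SEARCHES

print(f"Mobile voice: {mobile_voice_daily / 1e6:.0f}M/day")
print(f"Total voice share: {total_voice_share:.2%}")
print(f"Extra daily searches needed for 50%: {extra_for_50_pct / 1e9:.1f}B")
```

Change the assumed daily total and the shares shift accordingly, but the gap stays stubbornly wide under any plausible input.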

So okay, we've established that voice enthusiasts might need to cool their jets a bit when it comes to the adoption of voice search. But if we return to (our interpretation of) Andrew Ng's prediction that 35% of searches by 2020 will be voice, what is going to make the volume of voice search leap up those remaining 22 percentage points in less than two years?

Is it sheer volume of voice device ownership? Is it the increasing normalisation of speaking aloud to a device in public? Or is it something else?

Ng made another prediction, via Twitter this time, in December 2016 which gives us a clue as to his thinking in this regard. He wrote, "As speech-recognition accuracy goes from 95% to 99%, we'll go from barely using it to using all the time!"

So, Andrew Ng believes that sheer accuracy of recognition is what will take voice search into the mainstream. 95% word recognition is actually the same threshold of accuracy as human speech (Google officially reached this threshold last year, to great excitement), so Ng is holding machines to a higher standard than humans – which is fair enough, since we tend to approach new technology and machine interfaces with a higher degree of scepticism, and are less forgiving of errors. In order to win us over, they have to really wow us.

But is voice recognition accuracy the only barrier to voice search going mainstream? Let's consider the user experience of voice search.

The UX problems with voice

As I mentioned in the last installment, on natural language and conversational search, when using voice interfaces we tend to hold the same expectations that we have for a conversation with a human being.

We expect machines to respond in a human way, seamlessly and intuitively carrying on the exchange; when they don't, bringing us up short with an "I'm sorry, I don't understand the question," we're thrown off and turned off.

This explains why voice recognition is weighted so highly as a measure of success for voice interfaces, but it's not the only important factor. Often, understanding you still isn't enough to produce the right response; many voice commands depend on specific phrasing to activate, meaning that you can still be brought up short if you don't know exactly what to utter to achieve the result you want.

The internet is full of examples of what happens when our voice assistants don't quite understand the question.

Or what about if you misspeak – the verbal equivalent of a typo? When typing, you can just delete and retype your query before you submit, but when speaking, there's no way to take back the last word or phrase you uttered. Instead, you have to wait for the device to respond, give you an error, and then start again.

If this happens multiple times, it can prompt the user to give up in exasperation. Writing for Gizmodo, Chris Thomson paints a vivid picture of the frustration experienced by users with speech impediments when trying to use voice-activated smart speakers.

One of the major reasons that voice interfaces are heralded as the future of technology is because speaking your query or command aloud is supposed to be so much faster and more frictionless than typing it. At the moment, though, that's far from being the case.

However, while they might be preventing the uptake of voice interfaces (which is intrinsically linked to the adoption of voice search) at the moment, these are all issues that could reasonably be solved in the future as the technology advances. None of them are deal-breakers.

For me, the real deal-breaker when it comes to voice search, and the reason why I believe it will never see widespread adoption in its present state, is this: it doesn't do what it's supposed to.

One result to rule them all?

Think back for a moment to what web search is designed to do. Though we take it for granted nowadays, before search engines came along, there was no systematic way to find web pages and navigate the world wide web. You had to know the web address of a site already in order to visit it, and the early "weblogs" (blogs) often contained lists of interesting sites that web users had found on their travels.

Web search changed all that by doing the hard work for users – pulling in information about what websites were out there, and presenting it to users so that they could navigate the web more easily. This last part is the issue that I'm getting at, in a sidelong sort of way: so that they could navigate the web.

Contrast that with what voice search currently does: it responds to a query from the user with a single, definitive result. It might be possible to follow up that query with subsequent searches, or to carry out an action (e.g. ordering pizza, hearing a recipe, receiving directions), but otherwise, the voice journey stops there. You can't browse the web using your Amazon Echo. You can use your smartphone, but for all intents and purposes, that's just mobile search. Nothing about that experience is unique to voice search.

This is the reason why voice search is only ever used for general knowledge queries or retrieving specific pieces of information: it's inherently hampered by an inability to explore the web.

It's why voice search in its present state is mostly a novelty: not just because voice devices themselves are a novelty, but because it's difficult to really search with it.

Even when voice devices like smart speakers do catch on and become part of people's daily lives, it's because of their other capabilities, not because of search. Search is always incidental.

This is also why Google, Amazon and other makers of smart speakers are more interested in expanding the commands that their devices respond to and the places they can respond to them. For them, that is the future of voice.

What does this mean for voice search?

What true voice search could sound like

I see two possible future scenarios for voice search.

One, voice search remains as a "single search result" tool which is mostly useful for fact-finding exercises and questions that have a definitive answer, in which case there will always be a limit to how big voice search can get, and voice will only ever be a minor channel in the grand scheme of search and SEO. Marketers should recognise the role that it plays in their overall search strategy (if any), think about the use cases realistically, and optimise for those – or not – if it makes sense to.

Or two, voice search develops into a genuine tool for searching the web. This might involve a user being initially read the top result for their search, and then being presented with the option to hear more search results – perhaps three or four, to keep things concise.

If they then want to hear content from one of the results, they can instruct the voice assistant to navigate to that webpage, and then proceed to listen to an audio version of the news article, blog post, Wikipedia page, or other website that they've chosen.

Duane Forrester, VP Insights at Yext, envisages just such an eventuality during a wide-ranging video discussion on the future of voice search with Stone Temple Consulting's Eric Enge and PeakActivity's Brent Csutoras. The whole discussion is excellent and well, well worth a watch or a read (the transcript is available beneath the video).

Duane Forrester: We may see a resurgence in [long-form content] a couple of years from now if our voice assistants are now reading these things out loud.

Brent Csutoras: Sure. Like an Audible.

Duane: Exactly, like a built-in native Audible, like “I’m on this page, do you want me to read it?” “Yes, read it out loud to me.” There we go.

Brent: Yes because in that sense, I’m going to want to hear more. I’m driving down the street and want to hear about what’s happening and I want to hear follow up pieces.

Duane: It immediately converts every single website, every page of content, every blog, it immediately converts all of those into on-demand podcasts. That’s a cool idea, it’s a cool adaptation. I’m not sure if we’ll get there. We will when we get to the point of having a digital agent. But that’s still years in the future.

At first, I was sceptical of the idea that people would ever want to consume web content primarily via audio. Surely it would be slower and less convenient than visually scanning the same information?

Then I thought about the fast-growing popularity of podcasts and audiobooks and realized that the audio web could fit into our lives in many of the same ways that other types of audio have – especially if voice devices become as omnipresent as many tech and marketing pundits are predicting they will.

Is this a distant future? Perhaps. But this is how I imagine voice search truly entering the mainstream, the same way that web search did: as a means of exploring the web.

The future of voice search might not be Google

What surprises me is that for all the hype surrounding voice search and its possibilities, hardly anyone has pointed out the obvious drawback of the single search result or considered what it could mean for voice adoption.

An article by Marieke van de Rakt of Yoast highlights it as an obstacle but believes that screen connectivity is the answer. This is a possibility, especially as Google and Amazon are now equipping their smart speakers with screens - but I think that requiring a screen removes some of the convenience of voice as a user interface, one that can be interacted with while doing other things (like driving) without pulling the user's attention away.

For the most part, however, it seems to me that marketers and SEOs have been too content to just follow Google's lead (and Bing's, because realistically, where Google goes, Bing will follow) when it comes to things like voice search. Is Google presenting the user with a single search result? Everyone optimize for single search results; the future of search will be one answer!

Why? What about that makes for a good user experience? Is this what search was meant to do?

I understand letting Google set the agenda when it comes to SEO more broadly, because realistically it's so dominant that any SEO strategy has to mainly cater to Google. However, I don't think we should assume that Google will remain the leader of search in every new, emerging area like voice or visual search.

Oh, Google is doing its best to stay on top, and there's no denying that it's taken an early lead; its speech recognition and conversational search capabilities are currently second to none. But Google isn't the hot young start-up that it was when it came along and challenged the web search status quo. It's much bigger now, and has investors to answer to.

Google makes a huge amount of revenue from its search and advertising empire; its primary interest is in maintaining that. One search result suits Google just fine, if it means that users won't leave its walled garden.

Marketers and SEOs should remember that Google wasn't always the king of web search; other web search engines entered the game first, and were very popular – but Google changed the game because its way of doing search was so much better, and users loved it. Eventually, the other search engines couldn't compete.

The same thing could easily happen with voice search.

The logos of some of the early search engines that Google out-competed in its quest for web search dominance.

The future of voice optimisation

So where does that leave the future of voice optimisation?

Many of these eventualities seem like far-off possibilities at best, and there’s no way of being certain how they will pan out. How should marketers go about optimising for voice now and in the near future?

Though I’ve taken a fairly sceptical stance throughout this series, I do believe that voice is worth optimising for. However, the opportunity around voice search specifically is limited, and so I believe that brands should consider all the options for being present on voice as a whole – whether that’s on mobile, as a mobile voice search result, or on smart speakers, as an Alexa Skill or Google Home Action – and pursue whatever strategy makes most sense for their brand.

I’m interested in seeing us move away from thinking about voice and voice devices as a search channel, and more as a general marketing channel that it’s possible to be present on in various different ways – like social media.

It’s still extremely early days for this technology, and while the potential is huge, there are still many things we don’t know about what the future of voice will look like, so it’s important not to jump the gun.

Brent Csutoras sums things up extremely well in the future of voice search discussion:

This is an important technology I really think you should pay attention to. What I worry about is that people start feeling like they have to be involved, right? It’s like, “Oh crap, I don’t want to be left behind.”

What I would say is that in this space, it’s like the example of Instagram. Everybody wanted to have an Instagram account and they had nothing visual to show, so they just started creating crap to show it. If you have something that fits for voice search right now, then you should absolutely take the steps that you can to participate with it. If you don’t, then definitely just pay attention to it.

This space is going to open up, it is going to provide an opportunity for just about everyone, so stay abreast of what’s happening in this space, what’s the technology, and start envisioning your company in that space, and then wait until you have that opportunity to make that a reality. But don’t overstress yourself and feel like you’re failing because you’re not in the space right now.

Categorized in Search Engine

Source: This article was published martechadvisor.com By Brett Tabano - Contributed by Member: Logan Hochstetler

During the holidays and other peak season times, most consumers will shop around for the best deal before booking travel, which is why comparison search ads (an element of performance marketing, sometimes referred to as vertical search ads) are critical. In this conversation, part of our Search and CRO special for July, we explore the concept of vertical or comparison search, and how you, the marketer, can apply it practically for better business outcomes. Brett Tabano, Senior Vice President of Marketing at MediaAlpha, walks us through 5 key aspects of vertical search that are sure to get you thinking.

1. Search is more than Google: Vertical Search Engines are different from regular search engines.

Brett: When marketers think of search advertising, they often think of running ads on traditional search engines like Google, Yahoo, and Bing. While this type of search advertising is important, it is also critical to consider vertical search or native comparison search, which entails running ads within the native search results from a publisher or platform.

A vertical search engine differs from a traditional search engine because it is specific to one particular product or service category versus the broad results you get from a traditional search engine. For example, KAYAK is a vertical search engine for travel, Zillow for the real estate sector, Progressive for car insurance, and Bankrate for mortgages.

Moreover, vertical search engines often require the user to input a number of structured data fields to get the results they are seeking, versus a typical keyword search from a traditional search engine.

This information is particularly useful because the user is voluntarily inputting it based on the service or product they are seeking a price or quote for, versus the advertiser having to infer data about the user.
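The difference in input shape is easy to picture. A sketch, with illustrative (hypothetical) field names for a travel vertical:

```python
# Traditional keyword search: one free-text string; the engine must infer intent.
keyword_query = "cheap flights boston to denver late december"

# Vertical (comparison) search: the user fills in structured fields, so intent
# arrives explicit and machine-readable. Field names here are illustrative.
vertical_query = {
    "origin": "BOS",
    "destination": "DEN",
    "depart_date": "2018-12-20",
    "return_date": "2018-12-27",
    "passengers": 2,
    "cabin": "economy",
}

# Structured input means no guesswork: every constraint is a concrete field.
print(sorted(vertical_query.keys()))
```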

2. Vertical search + programmatic is a powerful combination

Vertical search is particularly important for any brand or product in high-consideration service categories where the consumer is likely to compare multiple options before converting to a paying customer. Think auto insurance, life insurance, mortgage rates, credit cards, travel, home services, etc. These are the products and services that consumers typically research and compare quotes/prices before they purchase.

3 ways to search

  • Traditional ‘horizontal’ search: you have a broad idea of what you are looking for
  • Native ‘vertical’ (comparison) search: you know exactly what you are looking for
  • Discovery search: you don’t know what you are looking for but want to stumble upon content

Most ad networks require advertisers to pay an average price for their media regardless of the consumer segments they are trying to reach. Under an average pricing model, advertisers pay the same price for everyone, even though each user or impression has a different value to each advertiser. Through a programmatic platform, advertisers can instead price each source, user, and placement as granularly as possible, ensuring they only acquire traffic with a high likelihood of converting. Being able to optimize bids for dozens (or hundreds) of different consumer segments in real time is a benefit only a programmatic platform can offer, but it requires a change in mindset away from the simplicity of an average pricing model.
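To make the "right price per segment" point concrete, here is a minimal sketch; the segment names, conversion rates, and customer values are all hypothetical. A granular bidder caps each click at its expected return, while an average-price model charges one blended CPC for everyone:

```python
# Hypothetical segments: conversion rate and average customer value per segment.
segments = {
    "urban_renters_25_34": {"conv_rate": 0.08, "customer_value": 120.0},
    "suburban_homeowners": {"conv_rate": 0.03, "customer_value": 200.0},
    "comparison_shoppers": {"conv_rate": 0.01, "customer_value": 150.0},
}

def max_cpc(conv_rate: float, customer_value: float, target_margin: float = 0.5) -> float:
    """Highest CPC that still leaves the target margin on the expected value of a click."""
    return conv_rate * customer_value * (1 - target_margin)

granular_bids = {
    name: max_cpc(s["conv_rate"], s["customer_value"]) for name, s in segments.items()
}

# An average-price model blends these into a single CPC, overpaying for
# low-value segments and underbidding on high-value ones.
average_bid = sum(granular_bids.values()) / len(granular_bids)

for name, bid in granular_bids.items():
    print(f"{name}: granular ${bid:.2f} vs blended ${average_bid:.2f}")
```

With these inputs the granular bids range from $0.75 to $4.80, while the blended price sits in the middle, wrong for every segment.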

3. Vertical search, competitor ads could make you money!

A key consideration for implementing a vertical search or native comparison search strategy is centered around your traditional search strategy. If you are actively buying search keywords to drive users to your site to purchase your product/service, how do you monetize the users that do not convert to recoup your marketing costs? Through vertical search or native comparison search, you can not only recoup these costs but more importantly, you can generate a new profitable ad revenue source.

Users' search and purchase patterns are changing: they are looking to obtain the best price or quote before making a purchase decision, as quickly as possible. To match these new patterns and enhance the user experience, implementing vertical or native comparison search is crucial, as it allows the publisher to surface additional and competitive offers beyond their own.

It may sound counter-intuitive to showcase your competitors, but there are a number of use cases where you can test the waters to prove this model works.

For example, you are a hotel site and the consumer’s desired dates are sold out. Instead of forcing the user back to perform another search, you can surface additional hotel offers and generate revenue when the consumer clicks through. Or, you’re an airline, and the flight is sold out - the same concept can apply. Or you’re an insurance provider not offering coverage in the user’s state. These are all scenarios in which you should leverage native comparison search to monetize.

4. Vertical search is a performance marketing tactic

One of the benefits of performance marketing is that you are graded on how successful you are at driving sales, or at least, driving high-intent users. Vertical or native comparison search media is typically sold on a CPC (cost-per-click) model then backed into a CPA (cost-per-acquisition). This allows advertisers to quickly measure the success of the campaign and optimize accordingly.
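Backing a CPC buy into a CPA is simple arithmetic; a sketch with hypothetical campaign numbers:

```python
def effective_cpa(cpc: float, clicks: int, acquisitions: int) -> float:
    """Back a CPC campaign into its effective cost per acquisition."""
    spend = cpc * clicks
    return spend / acquisitions

# Hypothetical campaign: $1.50 per click, 2,000 clicks, 60 acquisitions.
cpa = effective_cpa(cpc=1.50, clicks=2000, acquisitions=60)
print(f"Effective CPA: ${cpa:.2f}")  # $3,000 spend over 60 acquisitions
```

Because both inputs are observable within days, the campaign's CPA can be measured and optimized continuously rather than estimated up front.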

5. Vertical search can directly impact your CRO (converting existing website traffic into revenue)

One of the best ways to determine when and where to display native comparison search ads is through the use of predictive analytics and machine learning. We are seeing many partners implement this strategy as it allows them to better understand the user’s intention. Perhaps the user is early in the decision process and not ready to buy at this time. Or perhaps this user is predicted to have a low CLV (customer lifetime value). This is when you would want to display additional offers in the form of native comparison search in order to generate ad revenue. Since this is a new revenue source, we are seeing many of our partners use this new revenue to attract new customers, typically through traditional search. By monetizing users that won't convert, they are now able to generate revenue which can then be used to attract more customers, that hopefully will. This creates a virtuous cycle for the publisher.
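A minimal sketch of that decision rule; the thresholds and inputs are hypothetical, and a real implementation would feed in scores from a trained predictive model:

```python
def should_show_comparison_ads(predicted_clv: float, purchase_intent: float,
                               clv_floor: float = 75.0, intent_floor: float = 0.4) -> bool:
    """Show competitor/comparison offers when the visitor is unlikely to convert
    profitably on our own product: low predicted lifetime value or weak purchase
    intent. Otherwise keep the page focused on our own offer."""
    return predicted_clv < clv_floor or purchase_intent < intent_floor

# Low predicted CLV: monetize the visit with comparison ads instead.
print(should_show_comparison_ads(predicted_clv=40.0, purchase_intent=0.8))
# High CLV, strong intent: keep them on our own funnel.
print(should_show_comparison_ads(predicted_clv=200.0, purchase_intent=0.9))
```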

Our take: while traditional search isn’t going anywhere, one cannot deny that user search behavior – and expectations – are changing (however subtly). At the bottom of the funnel, people will tend to turn to a vertical search engine over a generic one to get more specific, tailored, and actionable results. In more B2C environments, they want the information they need to make a decision, irrespective of where it comes from. It is certainly an opportunity worth exploring for advertisers. A great way to start would be to test a small budget over the next buying cycle in your industry to see how it pays off!

Source: This article was published searchengineland.com By Barry Schwartz - Contributed by Member: Deborah Tannen

Forget that blue Google search results interface for the local panel – here is a fresh new look

Google is now rolling out a new look for the local panel in the mobile search results. The new look goes from the blue interface with text buttons to a white interface with rounded buttons. Here is the new look you might be able to see now when you search for a local business on your smartphone:


Here is what this looked like the other day, in the blue interface:


Google has been testing this new interface on and off since January of this year.

If you do not see the new interface now, it might require a bit more time for it to fully roll out.

We have emailed Google for a comment and will update this story when we receive one.

Source: This article was published techcrunch.com By Frederic Lardinois - Contributed by Member: Carol R. Venuti

One of Google’s first hardware products was its search appliance, a custom-built server that allowed businesses to bring Google’s search tools to the data behind their firewalls. That appliance is no more, but Google today announced the spiritual successor to it with an update to Cloud Search. Until today, Cloud Search only indexed G Suite data. Now, it can pull in data from a variety of third-party services that can run on-premise or in the cloud, making the tool far more useful for large businesses that want to make all of their data searchable by their employees.

“We are essentially taking all of Google’s expertise in search and applying it to your enterprise content,” Google said.

One of the launch customers for this new service is Whirlpool, which built its own search portal and indexed more than 12 million documents from more than a dozen services using this new service.

“This is about giving employees access to all the information from across the enterprise, even if it’s traditionally siloed data, whether that’s in a database or a legacy productivity tool and make all of that available in a single index,” Google explained.

To enable this functionality, Google is making a number of software adapters available that will bridge the gap between these third-party services and Cloud Search. Over time, Google wants to add support for more services and bring this cloud-based technology on par with what its search appliance was once capable of.

The service is now rolling out to a select number of users. Over time, it’ll become available to both G Suite users and as a standalone version.

Source: This article was published searchenginejournal.com By Matt Southern - Contributed by Member: Corey Parker

Google My Business has started sending notifications when new listings go live on the web.

A representative from Google announced this update, noting that users will only receive these notifications if their accounts have fewer than 100 listings.

Bulk verification accounts will not receive these notifications.

Accounts will also not receive these notifications if their language preference is set to anything other than US English.

Another way to confirm a listing’s real-time status is to click the direct links on the “Your business is on Google” card.

The “Your business is on Google” card is displayed in the GMB dashboard on the right-hand side, as seen in the example below:

Those who would prefer not to receive notifications can unsubscribe at any time from the settings menu.

Source: This article was published wired.com By LILY HAY NEWMAN - Contributed by Member: David J. Redcliff

IN THE BEGINNING, there was phone phreaking and worms. Then came spam and pop-ups. And none of it was good. But in the nascent decades of the internet, digital networks were detached and isolated enough that the average user could mostly avoid the nastiest stuff. By the early 2000s, though, those walls started coming down, and digital crime boomed.

Google, which will turn 20 in September, grew up during this transition. And as its search platform spawned interconnected products like ad distribution and email hosting, the company realized its users and everyone on the web faced an escalation of online scams and abuse. So in 2005, a small team within Google started a project aimed at flagging possible social engineering attacks—warning users when a webpage might be trying to trick them into doing something detrimental.

A year later, the group expanded its scope, working to flag links and sites that might be distributing malware. Google began incorporating these anti-abuse tools into its own products, but also made them available to outside developers. By 2007, the service had a name: Safe Browsing. And what began as a shot in the dark would go on to fundamentally change security on the internet.

You've been protected by Safe Browsing even if you haven't realized it. When you load a page in most popular browsers or choose an app from the Google Play Store, Safe Browsing is working behind the scenes to check for malicious behavior and notify you of anything that might be amiss. But setting up such a massive vetting system at the scale of the web isn't easy. And Safe Browsing has always grappled with a core security challenge—how to flag and block bad things without mislabeling legitimate activity or letting anything malicious slip through. While that problem isn’t completely solved, Safe Browsing has become a stalwart of the web. It underlies user security in all of Google’s major platforms—including Chrome, Android, AdSense, and Gmail—and runs on more than 3 billion devices worldwide.

In the words of nine Google engineers who have worked on Safe Browsing, from original team members to recent additions, here’s the story of how the product was built, and how it became such a ubiquitous protective force online.

Niels Provos, a distinguished engineer at Google and one of the founding members of Safe Browsing: I first started working on denial of service defense for Google in 2003, and then late in 2005 there was this other engineer at Google called Fritz Schneider who was actually one of the very first people on the security team. He was saying, ‘Hey Niels, this phishing is really becoming a problem, we should be doing something about it.’ He had started to get one or two engineers interested part-time, and we figured out that the first problem that we should be solving was not actually trying to figure out what is a phishing page, but rather how do we present this to the user in a way that makes sense to them? So that started the very early phishing team.

One of the trends that we had observed was the bad guys figured out that just compromising other web servers actually doesn’t really give you all that much. What they were getting was essentially bandwidth, but not a lot of interesting data. So then they turned to their compromised web servers that got lots and lots of visitors, and it was like, ‘How about we compromise those people with downloads?’ So there was a change in malicious behavior.

We were already working on phishing, and I thought, you know, the malware thing was maybe an even larger problem. And we’re sort of uniquely positioned because with the Google search crawler we have all this visibility into the web. So then we started with phishing and malware, and Safe Browsing came together that way.

Panos Mavrommatis, Engineering Director of Safe Browsing: Safe Browsing started as an anti-phishing plugin for Mozilla Firefox since this was 2005 and Google didn’t have its own browser then. When I joined in 2006, the team lead at the time was Niels, and he wanted us to expand and protect users not just from phishing but also from malware. So that was my initial project—which I haven’t finished yet.


The goal was to crawl the web and protect users of Google’s main product, which was Search, from links that could point them to sites that could harm their computer. So that was the second product of Safe Browsing after the anti-phishing plugin, and the user would see labels on malicious search results. Then if you did click on it you would get an additional warning from the search experience that would tell you that this site might harm your computer.

One interesting thing that happened was related to how we communicated with webmasters who were affected by Safe Browsing alerts. Because very quickly when we started looking into the problem of how users might be exposed to malware on the web, we realized that a lot of it came from websites that were actually benign, but were compromised and started delivering malware via exploits. The site owners or administrators typically did not realize that this was happening.

In our first interactions with webmasters, they would often be surprised. So we started building tools dedicated to webmasters, now called Search Console. The basic feature was that we would try to guide the webmaster to the reason that their website was infected, or if we didn’t know the exact reason we would at least tell them which pages on their server were distributing malware, or we would show them a snippet of code that was injected into their site.

Provos: We got a lot of skepticism, like ‘Niels, you can’t tell me that you’re just doing this for the benefit of web users, right? There must be an angle for Google as well.’ Then we articulated this narrative that if the web is safer for our users, then that will benefit Google because people will use our products more often.

But we did not really conceive that 10 years later we would be on 3 billion devices. That’s actually a little bit scary. There’s a sense of huge responsibility that billions of people rely on the service we provide, and if we don’t do a good job at detection then they get exposed to malicious content.

Mavrommatis: Around 2008 we started building an engine that ran every page Google already fetched, to evaluate how the page behaved. This was only possible because of Google’s internal cloud infrastructure. That was part of why Google was able to do a lot of innovation at the time, we had this extremely open infrastructure internally where you could use any unused resources, and do things like run a malicious detection engine on the full web.

Moheeb Abu Rajab, Principal Engineer at Safe Browsing: Coming from graduate school, I had been trying to build this type of system on a couple of machines, so I was spending lots of time trying to set that up. And it’s just the minimum effort at Google to run on a huge scale.

Mavrommatis: The other thing we developed at the same time was a slower but deeper scanner that loaded web pages in a real browser, which is more resource-intensive than the other work we had been doing that just tested each component of a site. And having those two systems allowed us to build our first machine learning classifier. The deeper crawling service would provide training data for the lightweight engine, so it could learn to identify which sites are the most likely to be malicious and need a deep scan. Because even at Google-scale we could not crawl the whole search index with a real browser.
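The two-tier design Mavrommatis describes — a cheap classifier that triages the whole index, feeding only suspicious pages to an expensive browser-based scanner — can be sketched roughly as follows. This is a hypothetical illustration, not Google's actual code; the heuristics and field names are invented.

```python
# Hypothetical sketch of two-tier triage: a lightweight scorer over already-
# fetched page content decides which URLs merit a full deep scan.

def lightweight_score(page):
    """Cheap static checks over fetched page content; returns a risk score."""
    score = 0.0
    if "<iframe" in page["html"].lower():
        score += 0.3  # hidden iframes were a common drive-by vector
    if "eval(unescape(" in page["html"]:
        score += 0.5  # classic obfuscated-JavaScript pattern
    if page["num_redirects"] > 3:
        score += 0.2  # long redirect chains are suspicious
    return score

def triage(pages, threshold=0.4):
    """Send only high-scoring pages to the expensive deep scanner."""
    return [p["url"] for p in pages if lightweight_score(p) >= threshold]

pages = [
    {"url": "http://benign.example", "html": "<p>hello</p>", "num_redirects": 0},
    {"url": "http://shady.example",
     "html": "<iframe src=x></iframe><script>eval(unescape('...'))</script>",
     "num_redirects": 4},
]
print(triage(pages))  # ['http://shady.example']
```

In the real system, the deep scanner's verdicts would become training labels that continually retune the lightweight model's features and threshold.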

Noé Lutz, Google AI engineer, formerly Safe Browsing: Around the same time, in 2009, we worked on machine learning for phishing as well. And this was a pretty scary moment for the team because up until then we used machine learning as a filtering function, to figure out where to focus this heavyweight computing resource, but this was the first time we actually decided something was phishing or malicious or harmful or not harmful in a fully automated way.

I remember the day we flipped the switch it was like, now the machine is responsible. That was a big day. And nothing bad happened. But what I do remember is it took extremely long for us to turn that tool on. I think we all expected that it would take a couple of weeks, but it took actually several months to make sure that we were very confident in what we were doing. We were very conscious from the get-go how disruptive it can be if we make a mistake.

Provos: The moments that stand out do tend to be the more traumatic ones. There was a large production issue we had in 2009, it was a Saturday morning. We had a number of bugs that came together and we ended up doing a bad configuration push. We labeled every single Google search result as malicious.

Even in 2009, Google was already a prevalent search engine, so this had a fairly major impact on the world. Fortunately, our site reliability engineering teams are super on top of these things and the problem got resolved within 15 minutes. But that caused a lot of soul searching and a lot of extra guards and defenses to be put in place, so nothing like that would happen again. But luckily by then, we were already at a point where people within Google had realized that Safe Browsing was actually a really important service, which is why we had integrated it into Search in the first place.

Nav Jagpal, Google Software Engineer: In 2008 we integrated Safe Browsing into Chrome, and Chrome represented a big shift because before with browsers like Internet Explorer, you could easily be on an old version. And there were drive-by downloads exploiting that, where you could go to a website, not click on anything, and walk away with an infection on your computer. But then over time, everyone got better at building software. The weakest link was the browser; now it’s the user. Now to get code running on people’s machines, you just ask them. So that’s why Safe Browsing is so crucial.

Mavrommatis: Around 2011 and 2012 we started building even deeper integrations for Google’s platforms, particularly Android and Chrome Extensions and Google Play. And we created unique, distinct teams to go focus on each product integration and work together with the main teams that provided the platforms.

Allison Miller, former Safe Browsing product manager, now at Bank of America (interviewed by WIRED in 2017): Safe Browsing is really behind the scenes. We build infrastructure. We take that information and we push it out to all the products across Google that have any place where there is the potential for the user to stumble across something malicious. People don’t necessarily see that that goes on. We’re a little too quiet about it sometimes.

Fabrice Jaubert, software development manager of Safe Browsing: There were challenges in branching out outside of the web, but there were advantages, too, because we had a little bit more control over the ecosystem, so we could guide it toward safer practices. You can’t dictate what people do with their web pages, but we could say what we thought was acceptable or not in Chrome extensions or in Android apps.

Lutz: There were also some non-technical challenges. Google is a big company, and it can be challenging to collaborate effectively across teams. It’s sometimes hard to realize from the outside, but Chrome is written in a language that is different from a lot of other parts of Google, and they have release processes that are very different. And the same is true for Android, they have a different process of releasing software. So getting everybody aligned and understanding each other, I perceived it as a big hurdle to overcome.


Stephan Somogyi, Google AI product manager, formerly Safe Browsing: This is a very hackneyed cliché so please don’t use it against me, but the whole 'rising tide lifts all boats' thing actually really holds true for Safe Browsing. There wasn’t ever any debate that we wanted to expand its reach onto mobile, but we had a profound dilemma because the amount of data that Safe Browsing used for desktop was an intractable amount for mobile. And we knew that everything that we push down to the mobile device costs the user money because they're paying for their data plans. So we wanted to use compression to take the data we already had and make it smaller. And we didn’t want the users to get hosed by five apps each having their own Safe Browsing implementation and all downloading the same data five times. So we said let’s bake it into Android and take the heavy lifting onto ourselves all in one place. It’s been a system service since the fall of 2015.

So we built a dead simple API so developers can just say, ‘Hey Android Local System Service, is this URL good or bad?’ We also wanted to write this thing so it wouldn’t unnecessarily spin up the cell modem and eat battery life because that’s just not nice. So if the network isn’t up anyway, don’t call it up. We just spent an awful lot of effort on implementation for Android. It turned out to be a lot more subtle and nuanced than we first anticipated.
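Beyond the Android system service, Google also exposes this same "is this URL good or bad?" check to outside developers through the public Safe Browsing Lookup API (v4). The sketch below only builds the request payload — actually sending it requires a real API key, and the key and client ID here are placeholders.

```python
# Minimal sketch of a Safe Browsing Lookup API (v4) request payload.
# API_KEY is a placeholder; this builds the JSON body without sending it.
import json

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

def build_lookup_payload(urls):
    return {
        "client": {"clientId": "example-app", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

payload = build_lookup_payload(["http://example.com/suspicious"])
print(json.dumps(payload, indent=2))
# An empty JSON object in the response means no match;
# a "matches" array lists the threats found.
```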

Mavrommatis: The other big effort that our team was involved in around 2013 and 2014 was what we call “unwanted software.” It’s primarily for desktop users, and it’s sort of an adaptation from actors who may have in the past been using just malware techniques, but now they would find that it’s possible to hide malware within software that seems focused on a legitimate function. It was unclear how antivirus companies should label this, and how big companies and browsers should deal with this. But what we focused on was what is the impact on the user?

Around 2014, our data showed that over 40 percent of the complaints that Chrome users reported were related to some sort of software that was running on their device that would impact their browsing experience. It might inject more ads or come bundled with other software they didn't need, but it was a potentially unwanted program. These practices were causing a lot of problems and we would see a lot of Chrome users downloading these kinds of apps. So we refined our download protection service and also found ways to start warning users about potentially unwanted downloads.

Jagpal: It’s a large responsibility, but it also feels very abstract. You get a warning or alert and you think, ‘Wait a minute, am I protecting myself here?’ But it’s so abstract that if we write code for something concrete, like turning on a light switch at home, it’s like, ‘Whoa, that is so cool. I can see that.’

Jaubert: My 14-year-old definitely takes Safe Browsing for granted. He got a phishing message as an SMS text, so it didn’t go through our systems, and he was shocked. He asked me, ‘Why aren’t you protecting me? I thought this couldn’t happen!’ So I think people are starting to take it for granted in a good way.

Emily Schechter, Chrome Security product manager (former Safe Browsing program manager): You can tell people that they’re secure when they’re on a secure site, but what really matters is that you tell them when they’re not secure when they’re on a site that is actively doing something wrong.

People should expect that the web is safe and easy to use by default. You shouldn’t have to be a security expert to browse the web, you shouldn’t have to know what phishing is, you shouldn’t have to know what malware is. You should just expect that software is going to tell you when something has gone wrong. That’s what Safe Browsing is trying to do.

Categorized in Search Engine

Source: This article was published searchenginejournal.com By Matt Southern - Contributed by Member: Anna K. Sasaki

Google is now rolling out new features, announced last month, which make it easier for users to find local restaurants and bars that match their tastes.

The majority of these new features exist in the redesigned “Explore” tab.

New “Explore” Tab

When viewing a location in Google Maps, users can tap on the “Explore” tab to get recommendations for restaurants, bars, and cafes within the area.

Top Hot Spots

A new section, called “The Foodie List,” will rank the top spots in a city based on trending lists from local experts as well as Google’s own algorithms.

“Your Match” Scores

When viewing the listing for a restaurant or bar, a new feature called “Your Match” will provide a numeric rating that tells a user how likely they are to enjoy a place based on their own preferences. This is determined based on previous reviews and browsing history.

In addition, users can tell Google Maps about their food and drink preferences so the app can surface better recommendations. This can be done from the “Settings” tab, where users can select the types of cuisines and restaurants they like.
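Google has not published how "Your Match" is actually computed, but the idea of scoring a venue against a user's stated preferences can be illustrated with a toy overlap metric. Everything below — the tags, the formula — is invented for illustration.

```python
# Hypothetical illustration of a preference-based match score;
# Google's actual "Your Match" algorithm is not public.

def match_score(user_prefs, venue_tags):
    """Percentage of a venue's tags that overlap the user's stated preferences."""
    if not venue_tags:
        return 0
    overlap = len(set(user_prefs) & set(venue_tags))
    return round(100 * overlap / len(venue_tags))

user_prefs = {"ramen", "sushi", "casual", "vegetarian-friendly"}
venue_tags = {"sushi", "casual", "upscale"}
print(match_score(user_prefs, venue_tags))  # 67
```

The real feature would also weigh signals like past reviews and browsing history, per the article, rather than explicit preferences alone.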


Personalized Recommendation Hub

A brand new “For You” tab will keep users informed about everything happening in areas they care about. This could include areas near their home, work, or a city they visit frequently.

Users can also follow a particular neighborhood to instantly see if there’s a hot new restaurant in the area, a new cafe that’s a perfect match, or if a favorite dining spot is in the news.

Android Exclusive Features

A feature exclusive to Android will let users automatically keep track of how many of the top-ranked spots they’ve visited.

Also exclusive to Android is a feature that will surface the top events and activities happening within a particular area. Users can see photos, descriptions, and filter by categories like “good for kids,” “cheap” or “indoor or outdoor.”

To start using these new features, just update the Google Maps app from the App Store or Play Store.


Source: This article was published searchenginejournal.com By Matt Southern - Contributed by Member: Corey Parker

Google’s John Mueller revealed that the search engine’s algorithms do not punish keyword stuffing too harshly.

In fact, keyword stuffing may be ignored altogether if the content is found to otherwise have value to searchers.

This information was provided on Twitter in response to users inquiring about keyword stuffing. More specifically, a user was concerned about a page ranking well in search results despite obvious signs of keyword repetition.

Prefacing his statement with the suggestion to focus on one’s own content rather than someone else’s, Mueller goes on to say that there are over 200 factors used to rank pages and “the nice part is that you don’t have to get them all perfect.”

When the excessive keyword repetition was further criticized by another user, Mueller said this practice shouldn’t result in a page being removed from search results, and “boring keyword stuffing” may be ignored altogether.


“Yeah, but if we can ignore boring keyword stuffing (this was popular in the 90’s; search engines have a lot of practice here), there’s sometimes still enough value to be found elsewhere. I don’t know the page, but IMO keyword stuffing shouldn’t result in removal from the index.”

There are several takeaways from this exchange:

  • An SEO’s time is better spent improving their own content, rather than trying to figure out why other content is ranking higher.
  • Excessive keyword stuffing will not result in a page being removed from indexing.
  • Google may overlook keyword stuffing if the content has value otherwise.
  • Use of keywords is only one of over 200 ranking factors.

Overall, it’s probably not a good idea to overuse keywords, because it arguably makes the content less enjoyable to read. But according to Mueller, keyword repetition alone will not keep a piece of content from ranking in search results.


 Source: This article was published forbes.com By Jayson DeMers - Contributed by Member: William A. Woods

Some search optimizers like to complain that “Google is always changing things.” In reality, that’s only a half-truth; Google is always coming out with new updates to improve its search results, but the fundamentals of SEO have remained the same for more than 15 years. Only some of those updates have truly “changed the game,” and for the most part, those updates are positive (even though they cause some major short-term headaches for optimizers).

Today, I’ll turn my attention to semantic search, a search engine improvement that came along in 2013 in the form of the Hummingbird update. At the time, it sent the SERPs into a somewhat chaotic frenzy of changes but introduced semantic search, which transformed SEO for the better—both for users and for marketers.

What Is Semantic Search?

I’ll start with a brief primer on what semantic search actually is, in case you aren’t familiar. The so-called Hummingbird update came out back in 2013 and introduced a new way for Google to consider user-submitted queries. Up until that point, the search engine was built heavily on keyword interpretation; Google would look at specific sequences of words in a user’s query, then find matches for those keyword sequences in pages on the internet.

Search optimizers built their strategies around this tendency by targeting specific keyword sequences, and using them, verbatim, on as many pages as possible (while trying to seem relevant in accordance with Panda’s content requirements).

Hummingbird changed this. Now, instead of finding exact matches for keywords, Google looks at the language used by a searcher and analyzes the searcher’s intent. It then uses that intent to find the most relevant search results. It’s a subtle distinction, but one that demanded a new approach to SEO; rather than focusing on specific, exact-match keywords, you had to start creating content that addressed a user’s needs, using more semantic phrases and synonyms for your primary targets.
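The shift from exact-match to intent-based retrieval can be shown with a toy contrast: a query whose literal keywords never appear in a document can still match once synonyms are considered. The synonym table here is invented purely for illustration — real semantic search uses far richer language models.

```python
# Toy contrast between exact keyword matching and a synonym-aware
# ("semantic") match; the synonym table is invented for illustration.

SYNONYMS = {
    "cheap": {"cheap", "inexpensive", "affordable", "budget"},
    "laptop": {"laptop", "notebook"},
}

def expand(term):
    """Return a term plus its known synonyms."""
    return SYNONYMS.get(term, {term})

def exact_match(query_terms, doc_terms):
    return all(t in doc_terms for t in query_terms)

def semantic_match(query_terms, doc_terms):
    return all(expand(t) & set(doc_terms) for t in query_terms)

doc = {"affordable", "notebook", "computers", "review"}
query = ["cheap", "laptop"]
print(exact_match(query, doc))     # False: no verbatim keywords present
print(semantic_match(query, doc))  # True: synonyms satisfy the intent
```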

Voice Search and Ongoing Improvements

Of course, since then, there’s been an explosion in voice search—driven by Google’s improved ability to recognize spoken words, its improved search results, and the convenience of voice input on mobile devices. That, in turn, has fueled even more advances in semantic search sophistication.

One of the biggest advancements, an update called RankBrain, utilizes an artificial intelligence (AI) algorithm to better understand the complex queries that everyday searchers use, and provide more helpful search results.

Why It's Better for Searchers

So why is this approach better for searchers?

  • Intuitiveness. Most of us have already taken for granted how intuitive searching is these days; if you ask a question, Google will have an answer for you—and probably an accurate one, even if your question doesn’t use the right terminology, isn’t spelled correctly, or dances around the main thing you’re trying to ask. A decade ago, effective search required you to carefully calculate which search terms to use, and even then, you might not find what you were looking for.
  • High-quality results. SERPs are now loaded with high-quality content related to your original query—and oftentimes, a direct answer to your question. Rich answers are growing in frequency, in part due to the rising sophistication of semantic search, and they give users faster, more relevant answers (which encourages even more search use on a daily basis).
  • Content encouragement. The nature of semantic search forces search optimizers and webmasters to spend more time researching topics to write about and developing high-quality content that’s going to serve search users’ needs. That means there’s a bigger pool of content developers than ever before, and they’re working harder to churn out readable, practical, and in-demand content for public consumption.

Why It's Better for Optimizers

The benefits aren’t just for searchers, though—I’d argue there are just as many benefits for those of us in the SEO community (even if it was an annoying update to adjust to at first):

  • Less pressure on keywords. Keyword research has been one of the most important parts of the SEO process since search first became popular, and it’s still important to gauge the popularity of various search queries—but it isn’t as make-or-break as it used to be. You no longer have to ensure you have exact-match keywords at exactly the right ratio in exactly the right number of pages (an outdated concept known as keyword density); in many cases, merely writing about the general topic is incidentally enough to make your page relevant for your target.
  • Value Optimization. Search optimizers now get to spend more time optimizing their content for user value, rather than keyword targeting. Semantic search makes it harder to accurately predict and track how keywords are specifically searched for (and ranked for), so we can, instead, spend that effort on making things better for our core users.
  • Wiggle room. Semantic search considers synonyms and alternative wordings just as much as it considers exact match text, which means we have far more flexibility in our content. We might even end up optimizing for long-tail phrases we hadn’t considered before.
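The "keyword density" metric the first bullet calls outdated is simply a keyword's share of the total word count — a quick sketch:

```python
# Keyword density: the keyword's share of total words, as a percentage.
import re

def keyword_density(text, keyword):
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100 * hits / len(words)

text = "Buy cheap widgets. Our cheap widgets are the cheapest widgets."
print(round(keyword_density(text, "widgets"), 1))  # 30.0
```

Under semantic search, chasing a "right" value for this number is wasted effort; covering the topic naturally matters more than hitting a ratio.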

The SEO community is better off focusing on semantic search optimization, rather than keyword-specific optimization. It’s forcing content producers to produce better, more user-serving content, and relieving some of the pressure of keyword research (which at times is downright annoying).

Take this time to revisit your keyword selection and content strategies, and see if you can’t capitalize on these contextual queries even further within your content marketing strategy.


Source: This article was published searchengineland.com By Barry Schwartz - Contributed by Member: Bridget Miller

After killing off prayer time results several years ago, Google has brought the feature back for some regions.

The prayer times can be triggered for some queries that seem to be asking for that information and also include geographic designators, such as [prayer times mecca], where Islamic prayer times are relevant. It’s possible that queries without a specific location term, but conducted from one of those locations, would also trigger the prayer times, but we weren’t able to test that functionality.

A Google spokesperson told Search Engine Land “coinciding with Ramadan, we launched this feature in a number of predominantly Islamic countries to make it easier to find prayer times for locally popular queries.”

“We continue to explore ways we can help people around the world find information about their preferred religious rituals and celebrations,” Google added.

Here is a screenshot of prayer times done on desktop search:

Google gives you the ability to customize the calculation method used to figure out when the prayer times are in that region. Depending on your religious observance, you may prefer one method over another. Here are the available Islamic prayer time calculation methods that Google offers:
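For context on why the method matters: the calculation methods differ chiefly in the solar depression angles used for Fajr (dawn) and Isha (dusk). The values below are commonly cited conventions for a few well-known methods — this is a sketch for illustration, not Google's own list.

```python
# Commonly cited Fajr/Isha solar depression angles (degrees below the horizon)
# for a few well-known calculation conventions; shown for illustration only.

METHODS = {
    "Muslim World League":        {"fajr_angle": 18.0, "isha_angle": 17.0},
    "ISNA (North America)":       {"fajr_angle": 15.0, "isha_angle": 15.0},
    "Egyptian General Authority": {"fajr_angle": 19.5, "isha_angle": 17.5},
    "Univ. of Karachi":           {"fajr_angle": 18.0, "isha_angle": 18.0},
}

# The same sun position yields different Fajr times under different methods:
for name, m in METHODS.items():
    print(f"{name}: Fajr when the sun is {m['fajr_angle']}° below the horizon")
```

A difference of a few degrees in the angle can shift the computed time by tens of minutes, which is why Google lets users pick the convention they follow.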

Not all queries return this response, and some may return featured snippets as opposed to this specific prayer times box. So please do not be confused when you see a featured snippet versus a prayer-time one-box.

This is what a featured snippet looks like in comparison to the image above:

The most noticeable way to tell this isn’t a real prayer-times box is that you cannot change the calculation method in the featured snippet. In my opinion, it would make sense for Google to remove the featured snippets for prayer times so searchers aren’t confused. Since featured snippets may be delayed, they probably aren’t trustworthy responses for those who rely on these prayer times. Smart answers are immediate and are calculated by Google directly.

Back in 2011, Google launched prayer times rich snippets, but about a year later, Google killed off the feature. Now, Google has deployed this new approach without using markup or schema; instead, Google does the calculation internally without depending on third-party resources or websites.

