[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Edna Thomas]

Google is giving site owners the ability to customize how their content is previewed in search results.

By default, Google has always generated search snippets according to the user’s query and the type of device they’re using.

However, there was previously no room for customization – it was only possible to allow a textual snippet or not allow one.

Now, Google is introducing multiple methods that allow for more fine-grained configuration of the preview content shown for web pages.

These methods include using robots meta tags as well as a brand-new HTML attribute. Here’s more information about each of these methods.

Configuring Search Snippets With Robots Meta Tags

The content shown in search snippet previews can now be configured using robots meta tags.

The following robots meta tags can be added to an HTML page’s <head>, or specified via the x-robots-tag HTTP header:

  • “nosnippet” – This is an existing option to specify that you don’t want any textual snippet shown for a page.
  • “max-snippet:[number]” (NEW) – Specify a maximum text-length, in characters, of a snippet for your page.
  • “max-video-preview:[number]” (NEW) – Specify a maximum duration in seconds of an animated video preview.
  • “max-image-preview:[setting]” (NEW) – Specify a maximum size of image preview to be shown for images on this page, using either “none”, “standard”, or “large”.

The above robots meta tags can also be combined, for example:
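A page that wanted, say, a snippet of at most 50 characters, large image previews, and video previews capped at 30 seconds might combine the directives like this (the numbers here are illustrative, not values Google recommends):

  <meta name="robots" content="max-snippet:50, max-image-preview:large, max-video-preview:30">

For non-HTML resources, the same combined directives can be sent in the HTTP response header instead:

  X-Robots-Tag: max-snippet:50, max-image-preview:large, max-video-preview:30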

New data-nosnippet HTML attribute

Google is introducing an all-new way to limit which part of a page can be shown as a preview in search results.

The new “data-nosnippet” HTML attribute on span, div, and section elements can prevent specific parts of an HTML page from being shown within the textual snippet in search results.

In other words – if you want to prevent Google from giving away too much of your content in search results, this is the method you want to use.

Here’s an example:

Harry Houdini is undoubtedly the most famous magician ever to live.
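In markup, that sentence might be tagged roughly as follows (a sketch only; as noted above, the attribute can be applied to span, div, and section elements, here wrapping the part you don’t want shown):

  <p><span data-nosnippet>Harry Houdini</span> is undoubtedly the most famous magician ever to live.</p>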

In this example, if someone were searching for a query like “most famous magician,” the HTML attribute would prevent Google from giving away the answer (Harry Houdini) in search results.

What SEOs and Site Owners Need to Know

Here’s a rundown of need-to-know information regarding these changes.

No changes to search rankings
This update will only affect how snippets are displayed in search results. Google confirms these settings will have no impact on search rankings.

Depending on how a site owner chooses to configure these settings there may be an impact on CTR, which could then impact traffic. But that is not related to search rankings.

When do these changes come into effect?
Preview settings for robots meta tags will become effective in mid-to-late October 2019. It may take a week for the global rollout to be completed once it starts.

The data-nosnippet HTML attribute will be effective later this year. No specific timeframe was provided for that particular setting.

Will these new changes affect how rich results are displayed?
Content in structured data that is eligible for display as a rich result will not be affected by any of these new settings.

Site owners already have control over the content displayed in rich results by what they choose to include in the structured data itself.

How will these changes affect featured snippets?
Featured snippets depend on the availability of preview content. So if you limit the preview content too heavily it may no longer be eligible to be displayed as a featured snippet, although it could still be displayed as a regular snippet.

The minimum number of characters required for a featured snippet varies by language, which is why Google cannot provide an exact max-snippet length that guarantees eligibility.

Can site owners experiment with snippet length?
Site owners can absolutely adjust these settings at any time. For example – if you specify a max-snippet length and later decide you’d rather display a longer snippet in search results, you can simply change the meta tag.

Google notes that these new snippet settings are applied globally, the same as any other search result. If the settings are changed, your new preferences will be reflected in search results the next time Google recrawls the page.

Google will 100% follow these settings
These new settings will not be treated as hints or suggestions. Google will fully abide by the site owner’s preferences as specified in the robots meta tags and/or HTML attribute.

No difference between desktop and mobile settings
Preview preferences will be applied to both mobile and desktop search results. If a site has separate mobile and desktop versions then the same markup should be used on both.

Some last notes

These options are available to site owners now, but the changes will not be reflected in search results until mid-to-late October at the earliest.

For more information, see Google’s developer documentation on meta tags.


[Source: This article was published in gritdaily.com By Faisal Quyyumi - Uploaded by the Association Member: Jason bourne]

A recent study conducted by Yext and Forbes shows consumers only believe 50 percent of their search results when looking up information about brands.

Yext is a New York City technology company focusing on online brand management and Forbes, of course, is a business magazine. Over 500 consumers in the United States were surveyed for the study.

FINDINGS

57 percent of those in the study avoid search engines and prefer to visit the brand’s official website because they believe it is more accurate.

50 percent of those surveyed use third-party sites and applications to learn more about brands. 48 percent believe a brand’s website is their most reliable source.

20 percent of “current and new customers trust social media sites to deliver brand information,” according to Search Engine Journal. 28 percent of buyers avoid buying from a certain brand after they have received inaccurate information.

WHY DON’T THEY BUY?

A few reasons consumers do not buy from a brand include unsatisfactory customer service, excessive requests for information, and a company website that is not easy to navigate.

Marc Ferrentino, Chief Strategy Officer of Yext, said: “Our research shows that regardless of where they search for information, people expect the answers they find to be consistent and accurate – and they hold brands responsible to ensure this is the case.”

The study says customers look at a brand’s website and search engine results for information. This information includes customer service numbers, hours, events, and a brand’s products.

A BETTER WAY TO MARKET ONLINE

The three best practices brands can use to give customers a seamless experience are to maintain, guarantee, and monitor.

The company should maintain up-to-date, accurate information on its website along with an easy-to-use search function. The study also tells brands to “guarantee searches return high-quality results by ensuring that tools like Google My Business and other directories have updated and correct information”. Lastly, a brand needs to be active and respond to questions and posts on social media, corporate websites, and review sites.

Companies are doing their best to keep up with consumer expectations for an authentic experience.

Many people use third-party sites such as Google, Bing or Yelp because they are able to compare and categorize numerous products at once.

CONSUMERS HESITATE

New users and consumers are often hesitant and require time to build trust with a company, whereas current customers have confidence in the brand and help by writing positive reviews. 45 percent of customers “say they are usually looking for customer reviews of brands of products when they visit a third-party site” (Forbes).

Reviews determine whether consumers will avoid buying a product or if they want to continue interacting with the vendor.

True Value Company, an American wholesaler, is changing its marketing strategy to adapt to a more Internet-based audience. “We’ve made significant technology investments – including re-platforming our website – to back that up and support our brick and mortar stores for the online/offline world in which consumers live,” said David Elliot, the company’s senior vice-president of marketing.

Despite branding on social media becoming more popular, it does not fall in the top 50 percent of most-trusted sources for brand information.

A 2008 study done by Forrester Research, an American based market research company, shows how much consumers trust different information sources. The sources range from personal emails to Yellow Pages to message board posts.

The most trusted is emails “from people you know” at 77 percent; followed by consumer product ratings/reviews at 60 percent and portal/search engines at 50 percent. The least trusted information source is a company blog at only 16 percent.

Ironically, corporate blogs are the information source consumers depend on least, even though they should be the most reliable way for companies to express and share information with their audience.

The study shows the significance of a brand’s online marketing strategy. It is vital for companies to make sure their website looks like a trustworthy source.

Companies don’t need to stop blogging — but instead, have to do it in a trustworthy and engaging manner.



[Source: This article was published in enca.com - Uploaded by the Association Member: Rene Meyer]


SAN FRANCISCO - Original reporting will be highlighted in Google’s search results, the company said as it announced changes to its algorithm.

The world’s largest search engine has come under increasing criticism from media outlets, mainly because of its algorithms - a set of instructions followed by computers - which newspapers have often blamed for plummeting online traffic and the industry’s decline.

Explaining some of the changes in a blog post, Google's vice president of news Richard Gingras said stories that were critically important and labor intensive -- requiring experienced investigative skills, for example -- would be promoted.

Articles that demonstrated “original, in-depth and investigative reporting,” would be given the highest possible rating by reviewers, he wrote on Thursday.

These reviewers - roughly 10,000 people whose feedback contributes to Google’s algorithm - will also determine the publisher’s overall reputation for original reporting, promoting outlets that have been awarded Pulitzer Prizes, for example.

It remains to be seen how such changes will affect news outlets, especially smaller online sites and local newspapers, which have borne the brunt of the changing media landscape.

And as noted by the technology website TechCrunch, it is hard to define exactly what original reporting is: many online outlets build on ‘scoops’ or exclusives with their own original information, a complexity an algorithm may have a hard time picking through.

The Verge - another technology publication - wrote that the emphasis on originality could exacerbate an already frenetic online news cycle by making it lucrative to get breaking news online even faster and without proper verification.

The change comes as Google continues to face criticism for its impact on the news media.

Many publishers say the tech giant’s algorithms - which remain a source of mysterious frustration for anyone outside Google -- reward clickbait and allow investigative and original stories to disappear online.


[Source: This article was published in flipweb.org By Abhishek - Uploaded by the Association Member: Jay Harris]

One of the first questions anyone getting into SEO asks is how exactly Google ranks the websites you see in Google Search. Ranking a website means assigning it a position: the first URL you see in Google Search is ranked number 1, and so on. There are various factors involved in ranking websites on Google Search, and a site’s position is not fixed once it has been decided. That raises the question of how Google determines which URL of a website should come first and which should appear lower.

For this reason, Google’s John Mueller has now addressed this question, explaining in a video how Google picks a website’s URL for Search. John explains that there are site preference signals involved in determining which URL is shown. The most important signals, however, are the preference of the site and the preference of the user accessing it.

Here are the Site preference signals:

  • Link rel=canonical annotations
  • Redirects
  • Internal linking
  • URL in the sitemap file
  • HTTPS preference
  • Nicer looking URLs
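As a rough illustration of the first signal, a page can declare which URL it prefers by adding a canonical annotation to its <head> (the URL below is a made-up example):

  <link rel="canonical" href="https://www.example.com/preferred-page/">

The other signals work the same way in spirit: redirects, internal links, and the URLs listed in the sitemap should all consistently point at that same preferred version.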

One of the keys, as John Mueller has previously mentioned, is to remain consistent. While John did not explain exactly what he means by being consistent, it should mean keeping up whatever you are doing. One of the best examples of being consistent is posting to your website regularly in order to rank higher in search results. If you are not consistent, your website’s ranking might slip and you will have to start all over again. Apart from that, you also have to be consistent when it comes to performing SEO. If you stop, your website will suffer in the long run.


[Source: This article was Published in theverge.com BY James Vincent - Uploaded by the Association Member: Jennifer Levin] 

A ‘tsunami’ of cheap AI content could cause problems for search engines

Over the past year, AI systems have made huge strides in their ability to generate convincing text, churning out everything from song lyrics to short stories. Experts have warned that these tools could be used to spread political disinformation, but there’s another target that’s equally plausible and potentially more lucrative: gaming Google.

Instead of being used to create fake news, AI could churn out infinite blogs, websites, and marketing spam. The content would be cheap to produce and stuffed full of relevant keywords. But like most AI-generated text, it would only have surface meaning, with little correspondence to the real world. It would be the information equivalent of empty calories, but still potentially difficult for a search engine to distinguish from the real thing.

Just take a look at this blog post answering the question: “What Photo Filters are Best for Instagram Marketing?” At first glance, it seems legitimate, with a bland introduction followed by quotes from various marketing types. But read a little more closely and you realize it references magazines, people, and — crucially — Instagram filters that don’t exist:

You might not think that a mumford brush would be a good filter for an Insta story. Not so, said Amy Freeborn, the director of communications at National Recording Technician magazine. Freeborn’s picks include Finder (a blue stripe that makes her account look like an older block of pixels), Plus and Cartwheel (which she says makes your picture look like a topographical map of a town.

The rest of the site is full of similar posts, covering topics like “How to Write Clickbait Headlines” and “Why is Content Strategy Important?” But every post is AI-generated, right down to the authors’ profile pictures. It’s all the creation of content marketing agency Fractl, who says it’s a demonstration of the “massive implications” AI text generation has for the business of search engine optimization, or SEO.

“Because [AI systems] enable content creation at essentially unlimited scale, and content that humans and search engines alike will have difficulty discerning [...] we feel it is an incredibly important topic with far too little discussion currently,” Fractl partner Kristin Tynski tells The Verge.

To write the blog posts, Fractl used an open source tool named Grover, made by the Allen Institute for Artificial Intelligence. Tynski says the company is not using AI to generate posts for clients, but that this doesn’t mean others won’t. “I think we will see what we have always seen,” she says. “Blackhats will use subversive tactics to gain a competitive advantage.”

The history of SEO certainly supports this prediction. It’s always been a cat and mouse game, with unscrupulous players trying whatever methods they can to attract as many eyeballs as possible while gatekeepers like Google sort the wheat from the chaff.

As Tynski explains in a blog post of her own, past examples of this dynamic include the “article spinning” trend, which started 10 to 15 years ago. Article spinners use automated tools to rewrite existing content; finding and replacing words so that the reconstituted matter looked original. Google and other search engines responded with new filters and metrics to weed out these mad-lib blogs, but it was hardly an overnight fix.

AI text generation will make the article spinning “look like child’s play,” writes Tynski, allowing for “a massive tsunami of computer-generated content across every niche imaginable.”

Mike Blumenthal, an SEO consultant and expert, says these tools will certainly attract spammers, especially considering their ability to generate text on a massive scale. “The problem that AI-written content presents, at least for web search, is that it can potentially drive the cost of this content production way down,” Blumenthal tells The Verge.

And if the spammers’ aim is simply to generate traffic, then fake news articles could be perfect for this, too. Although we often worry about the political motivations of fake news merchants, in most interviews the people who create and share this content say they do it for the ad revenue. That doesn’t stop it being politically damaging.

The key question, then, is: can we reliably detect AI-generated text? Rowan Zellers of the Allen Institute for AI says the answer is a firm “yes,” at least for now. Zellers and his colleagues were responsible for creating Grover, the tool Fractl used for its fake blog posts, and were able to also engineer a system that can spot Grover-generated text with 92 percent accuracy.

“We’re a pretty long way away from AI being able to generate whole news articles that are undetectable,” Zellers tells The Verge. “So right now, in my mind, is the perfect opportunity for researchers to study this problem, because it’s not totally dangerous.”

Spotting fake AI text isn’t too hard, says Zellers, because it has a number of linguistic and grammatical tells. He gives the example of AI’s tendency to re-use certain phrases and nouns. “They repeat things ... because it’s safer to do that rather than inventing a new entity,” says Zellers. It’s like a child learning to speak; trotting out the same words and phrases over and over, without considering the diminishing returns.

However, as we’ve seen with visual deep fakes, just because we can build technology that spots this content, that doesn’t mean it’s not a danger. Integrating detectors into the infrastructure of the internet is a huge task, and the scale of the online world means that even detectors with high accuracy levels will make a sizable number of mistakes.

Google did not respond to queries on this topic, including the question of whether or not it’s working on systems that can spot AI-generated text. (It’s a good bet that it is, though, considering Google engineers are at the cutting-edge of this field.) Instead, the company sent a boilerplate reply saying that it’s been fighting spam for decades, and always keeps up with the latest tactics.

SEO expert Blumenthal agrees, and says Google has long proved it can react to “a changing technical landscape.” But, he also says a shift in how we find information online might also make AI spam less of a problem.

More and more web searches are made via proxies like Siri and Alexa, says Blumenthal, meaning gatekeepers like Google only have to generate “one (or two or three) great answers” rather than dozens of relevant links. Of course, this emphasis on the “one true answer” has its own problems, but it certainly minimizes the risk from high-volume spam.

The end-game of all this could be even more interesting though. AI-text generation is advancing in quality extremely quickly, and experts in the field think it could lead to some incredible breakthroughs. After all, if we can create a program that can read and generate text with human-level accuracy, it could gorge itself on the internet and become the ultimate AI assistant.

“It may be the case that in the next few years this tech gets so amazingly good, that AI-generated content actually provides near-human or even human-level value,” says Tynski. In which case, she says, referencing an Xkcd comic, it would be “problem solved.” Because if you’ve created an AI that can generate factually-correct text that’s indistinguishable from content written by humans, why bother with the humans at all?


 [Source: This article was Published in zdnet.com By Catalin Cimpanu - Uploaded by the Association Member: Deborah Tannen]

Extension developer says he sold the extension weeks before; not responsible for the shady behavior.

Google removed a Chrome extension from the official Web Store yesterday for secretly hijacking search engine queries and redirecting users to ad-infested search results.

The extension's name was "YouTube Queue," and at the time it was removed from the Web Store, it had been installed by nearly 7,000 users.

The extension allowed users to queue multiple YouTube videos in the order they wanted for later viewing.

EXTENSION TURNED INTO ADWARE IN EARLY JUNE

But under the hood, it also intercepted search engine queries, redirected the query through the Croowila URL, and then redirected users to a custom search engine named Information Vine, which listed the same Google search results but heavily infused with ads and affiliate links.

Users started noticing the extension's shady behavior almost two weeks ago, when the first reports surfaced on Reddit, followed by two more a few days later.

The extension was removed from the Web Store yesterday after Microsoft Edge engineer (and former Google Chrome developer) Eric Lawrence pointed out the extension's search engine hijacking capabilities on Twitter.


Lawrence said the extension's shady code was only found in the version listed on the Chrome Web Store, but not in the extension's GitHub repository.

In an interview with The Register, the extension's developer claimed he had no involvement and that he previously sold the extension to an entity going by Softools, the name of a well-known web application platform.

In a follow-up inquiry from The Register, Softools denied having any involvement with the extension's development, let alone the malicious code.

The practice of a malicious entity offering to buy a Chrome extension and then adding malicious code to the source is not a new one.

Such incidents were seen as early as 2014, and as recently as 2017, when an unknown party bought three legitimate extensions (Particle for YouTube, Typewriter Sounds, and Twitch Mini Player) and repurposed them to inject ads on popular sites.

In a 2017 tweet, Konrad Dzwinel, a DuckDuckGo software engineer and the author of the SnappySnippet, Redmine Issues Checker, DOMListener, and CSS-Diff Chrome extensions, said he usually receives inquiries for selling his extensions every week.


In a February 2019 blog post, antivirus maker Kaspersky warned users to "do a bit of research to ensure the extension hasn't been hijacked or sold" before installing it in their browser.

Developers quietly selling their extensions without notifying users, along with developers falling for spear-phishing campaigns aimed at their Chrome Web Store accounts, are currently the two main methods through which malware gangs take over legitimate Chrome extensions to plant malicious code in users' browsers.

COMING AROUND TO THE AD BLOCKER DEBATE

Furthermore, Lawrence points out that the case of the YouTube Queue extension going rogue is the perfect example showing malicious threat actors abusing the Web Request API to do bad things.

This is the same API that most ad blockers are using, and the one that Google is trying to replace with a more stunted one named the Declarative Net Request API.


This change is what triggered the recent public discussions about "Google killing ad blockers."

However, Google said last week that 42% of all the malicious extensions the company has detected on its Chrome Web Store since January 2018 were abusing the Web Request API in one way or another -- and the YouTube Queue extension is an example of that.

In a separate Twitter thread, Chrome security engineer Justin Schuh again pointed out that Google's main intent in replacing the old Web Request API was privacy and security-driven, and not anything else like performance or ad blockers, something the company also officially stated in a blog post last week.



 [Source: This article was Published in searchenginejournal.com By Dave Davies - Uploaded by the Association Member: Clara Johnson]

Let’s begin by answering the obvious question:

What Is Universal Search?

There are a few definitions for universal search on the web, but I prefer hearing it from the horse’s mouth on things like this.

While Google hasn’t given a strict definition that I know of as to what universal search is from an SEO standpoint, they have used the following definition in their Search Appliance documentation:

“Universal search is the ability to search all content in an enterprise through a single search box. Although content sources might reside in different locations, such as on a corporate network, on a desktop, or on the World Wide Web, they appear in a single, integrated set of search results.”

Adapted for SEO and traditional search, we could easily turn it into:

“Universal search is the ability to search all content across multiple databases through a single search box. Although content sources might reside in different locations, such as a different index for specific types or formats of content, they appear in a single, integrated set of search results.”

What other databases are we talking about? Basically:

[Image: the additional databases that feed Universal Search]

On top of this, there are additional databases that information is drawn from (hotels, sports scores, calculators, weather, etc.) and additional databases with user-generated information to consider.

These range from reviews to related searches to traffic patterns to previous queries and device preferences.

Why Universal Search?

I remember a time, many years ago, when there were 10 blue links…

[Image: a traditional results page of 10 blue links]

It was a crazy time of discovery. Discovering all the sites that didn’t meet your intent or your desired format, that is.

And then came Universal Search. It was announced in May of 2007 (by Marissa Mayer, if that gives it context) and rolled out just a couple months after they expanded on the personalization of results.

The two were connected and not just by being announced by the same person. They were connected in illustrating their continued push towards Google’s mission statement:

“Our mission is to organize the world’s information and make it universally accessible and useful.”

Think about those 10 blue links and what they offered. Certainly, they offered scope of information not accessible at any point in time prior, but they also offered a problematic depth of uncertainty.

Black hats aside (and there were a lot more of them then), you clicked a link in hopes that you understood what was on the other side of that click and we wrote titles and descriptions that hopefully fully described what we had to offer.

A search was a painful process; we just didn’t know it because it was better than anything we’d had prior.

Enter Universal Search

Then there was Universal Search. Suddenly the guesswork was reduced.

Before we continue, let’s watch a few minutes of a video put out by Google shortly after Universal Search launched.

The video starts at the point where they’re describing what they were seeing in the eye tracking of search results and illustrates what universal search looked like at the time.

 

OK – notwithstanding that this was a core Google video, discussing a major new Google feature and it has (at the time of writing) 4,277 views and two peculiar comments – this is an excellent look at the “why” of Universal Search as well as an understanding of what it was at the time, and how much and how little it’s changed.

How Does It Present Itself?

We saw a lot of examples of Universal Search in my article on How Search Engines Display Search Results.

Where then we focused on the layout itself and where each section comes from, here we’re discussing more the why and how of it.

At a root level and as we’ve all seen, Universal Search presents itself as sections on a webpage that stand apart from the 10 blue links. They are often, but not always, organically generated (though I suspect they are always organically driven).

This is to say, whether a content block exists would be handled on the organic search side, whereas what’s contained in that content block may-or-may-not include ads.

So, let’s compare then versus now, ignoring cosmetic changes and just looking at what the same result would look like with and without Universal Search by today’s SERP standards.

[Image: the SERP for “what was the civil war,” shown side by side with and without Universal Search]

This answers two questions in a single image.

It answers the key question of this section, “How does Universal Search present itself?”

This image also does a great job of answering the question, “Why?”

Imagine the various motivations I might have to enter the query [what was the civil war]. I may be:

  • A high school student doing an essay.
  • Someone who simply is not familiar with the historic event.
  • Looking for information on the war itself or my query may be part of a larger dive into civil wars across nations or wars in general.
  • Someone who prefers articles.
  • Someone who prefers videos.
  • Just writing an unrelated SEO article and need a good example.

The possibilities are virtually endless.

If you look at the version on the right, which link would you click?

How about if you prefer video results?

Making that decision will take you longer than it likely does when Universal Search options are present.

And that’s the point.

The Universal Search structure makes decision making faster across a variety of intents, while still leaving the blue links (though not always 10 anymore) available for those looking for pages on the subject.

In fact, even if what you’re looking for exists in an article, the simple presence of Universal Search results helps filter out the results you don’t want, and leaves SEO pros and website owners free to focus articles on ranking in the traditional results while other types and formats rank in their appropriate sections.

How Does Google Pick the Sections?

Let me begin this section by stating very clearly – this is my best guess.

As we’re all aware, Google’s systems are incredibly complicated. There may be more pieces than I am aware of, obviously.

There are two core areas I can think of that they would use for these adjustments.

Users

Now, before you say, “But Google says they don’t use user metrics to adjust search results!” let’s consider the specific wording that Google’s John Mueller used when responding to a question on user signals:

“… that’s something we look at across millions of different queries, and millions of different pages, and kind of see in general is this algorithm going the right way or is this algorithm going in the right way.

But for individual pages, I don’t think that’s something worth focusing on at all.”

So, they do use the data. They use it on their end, but not to rank individual pages.

What you can take from this, as it relates to Universal Search, is that Google will test different blocks of data for different types of queries to determine how users interact with them. It is very likely that Bing does something similar.

Most certainly they pick locations for possible placements, set limits on the number of different result types/databases, and determine starting points (think: templates for specific query types) for their processes, and then simply let machine learning take over, running slight variances or testing layouts on pages generated for unknown queries, or queries where new associations may be attained.

For example, a spike in a query that ties to a sudden rise in new stories related to the query could trigger the news carousel being inserted into the search results, provided that past similar instances produced a positive engagement signal and it would remain as long as user engagement indicated it.

Query Data

It is virtually a given that a search engine would use their own query data to determine which sections to insert into the SERPs.

If a query like [pizza] has suggested queries like:

[Image: suggested searches for the query [pizza]]

Implying that most such searchers are looking for restaurants, it makes sense that in a Universal Search structure the first organic result would not be a blue link but:

[Image: the local pack shown for the query [pizza]]

It is very much worth remembering that the goal of a search engine is to provide a single location where a user can access everything they are looking for.

At times this puts them in direct competition with themselves in some ways. Not that I think they mind losing traffic to another of their own properties.

Let’s take YouTube for example. Google’s systems will understand not just which YouTube videos are most popular but also which are watched through, when people eject, skip or close out, etc.

They can use this not just to understand which videos are likely to resonate on Google.com but also understand more deeply what supplemental content people are interested in when they search for more general queries.

I may search for [civil war], but that doesn’t mean I’m not also interested in the Battle at Antietam specifically.

So, I would suggest that these other databases do not simply shape the layouts we see in Universal Search; they can be, and likely are being, used to connect topics and information together, thus impacting the core search rankings themselves.

Takeaway

So, what does this all mean for you?

For one, you can use the machine learning systems of the search engines to assist in your content development strategies.

Sections you see appearing in Universal Search tell us a lot about the types and formats of content that users expect or engage with.

Also important is that devices and technology are changing rapidly. I suspect the idea of Universal Search is about to go through a dramatic transformation.

This is due in part to voice search, but I suspect it will have more to do with the push by Google to provide a solution rather than options.

A few well-placed filters could provide refinement that produces only a single result and many of these filters could be automatically applied based on known user preferences.

I’m not sure we’ll get to a single result in the next two to three years but I do suspect that we will see it for some queries and where the device lends itself to it.

If I query “weather” why would the results page not look like:

[Image: a results page consisting of a single weather answer]

In my eyes, this is the future of Universal Search.

Or, as I like to call it, search.


[Source: This article was Published in techcrunch.com By Catherine Shu - Uploaded by the Association Member: Jay Harris]

Earlier this week, music lyrics repository Genius accused Google of lifting lyrics and posting them on its search platform. Genius told the Wall Street Journal that this caused its site traffic to drop. Google, which initially denied wrongdoing but later said it was investigating the issue, addressed the controversy in a blog post today. The company said it will start including attribution to its third-party partners that provide lyrics in its information boxes.

When Google was first approached by the Wall Street Journal, it told the newspaper that the lyrics it displays are licensed by partners and not created by Google. But some of the lyrics (which are displayed in information boxes or cards called “Knowledge Panels” at the top of search results for songs) included Genius’ Morse code-based watermarking system. Genius said that over the past two years it repeatedly contacted Google about the issue. In one letter, sent in April, Genius told Google it was not only breaking the site’s terms of service but also violating antitrust law—a serious allegation at a time when Google and other big tech companies are facing antitrust investigations by government regulators.

After the WSJ article was first published, Google released a statement that said it was investigating the problem and would stop working with lyric providers who are “not upholding good practices.”

In today’s blog post, Satyajeet Salgar, a group product manager at Google Search, wrote that the company pays “music publishers for the right to display lyrics since they manage the rights to these lyrics on behalf of songwriters.” Because many music publishers license lyrics text from third-party lyric content providers, Google works with those companies.

“We do not crawl or scrape websites to source these lyrics. The lyrics you see in information boxes on Search come directly from lyrics content providers, and they are updated automatically as we receive new lyrics and corrections on a regular basis,” Salgar added.

These partners include LyricFind,  which Google has had an agreement with since 2016. LyricFind’s chief executive told the WSJ that it does not source lyrics from Genius.

While Salgar’s post did not name any companies, he addressed the controversy by writing “news reports this week suggested that one of our lyrics content providers is in a dispute with a lyrics site about where their written lyrics come from. We’ve asked our partner to investigate the issue to ensure that they’re following industry best practices in their approach.”

In the future, Google will start including attribution to the company that provided the lyrics in its search results. “We will continue to take an approach that respects and compensates rights-holders, and ensures that music publishers and songwriters are paid for their work,” Salgar wrote.

Genius, which launched as Rap Genius in 2009, has been at loggerheads with Google before. In 2013, an SEO trick Rap Genius used to place itself higher in search results ran afoul of Google’s web spam team. Google retaliated by burying Rap Genius links under pages of other search results. The conflict was resolved after less than two weeks, but during that time Rap Genius’ traffic plummeted dramatically.


[This article is originally published in searchenginejournal.com written by Matt Southern - Uploaded by AIRS Member: Jeremy Frink]

Google published a 30-page white paper with details about how the company fights disinformation in Search, News, and YouTube.

Here is a summary of key takeaways from the white paper.

What is Disinformation?

Everyone has different perspectives on what is considered disinformation, or “fake news.”

Google says it becomes objectively problematic to users when people make deliberate, malicious attempts to deceive others.

“We refer to these deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web as ‘disinformation.’”

So that’s what the white paper means when it uses the term “disinformation.”

How Does Google Fight Disinformation?

Google admits it’s challenging to fight disinformation because it’s near-impossible to determine the intent behind a piece of content.

The company has designed a framework for tackling this challenge, which comprises the following three strategies.

1. Make content count

Information is organized by ranking algorithms, which are geared toward surfacing useful content and not fostering ideological viewpoints.

2. Counteract malicious actors

Algorithms alone cannot verify the accuracy of a piece of content. So Google has invested in systems that can reduce spammy behaviors at scale. It also relies on human reviews.

3. Give users more context

Google provides more context to users through mechanisms such as:

  • Knowledge panels
  • Fact-check labels
  • “Full Coverage” function in Google News
  • “Breaking News” panels on YouTube
  • “Why this ad” labels on Google Ads
  • Feedback buttons in search, YouTube, and advertising products

Fighting Disinformation in Google Search & Google News

As SEOs, we know Google uses ranking algorithms and human evaluators to organize search results.

Google’s white paper explains this in detail for those who may not be familiar with how search works.

Google notes that Search and News share the same defenses against spam, but they do not employ the same ranking systems and content policies.

For example, Google Search does not remove content except in very limited circumstances. Whereas Google News is more restrictive.

Contrary to popular belief, Google says, there is very little personalization in search results based on users’ interests or search history.

Fighting Disinformation in Google Ads

Google looks for and takes action against attempts to circumvent its advertising policies.

Policies to tackle disinformation on Google’s advertising platforms are focused on the following types of behavior:

  • Scraped or unoriginal content: Google does not allow ads for pages with insufficient original content, or pages that offer little to no value.
  • Misrepresentation: Google does not allow ads that intend to deceive users by excluding relevant information or giving misleading information.
  • Inappropriate content: Ads are not allowed for shocking, dangerous, derogatory, or violent content.
  • Certain types of political content: Ads for foreign influence operations are removed and the advertisers’ accounts are terminated.
  • Election integrity: Additional verification is required for anyone who wants to purchase an election ad on Google in the US.

Fighting Disinformation on YouTube

Google’s policy is to keep content up on YouTube unless it is in direct violation of its community guidelines.

The company is more selective of content when it comes to YouTube’s recommendation system.

Google aims to recommend quality content on YouTube while less frequently recommending content that may come close to, but not quite, violating the community guidelines.

Content that could misinform users in harmful ways, or low-quality content that may result in a poor experience for users (like clickbait), is also recommended less frequently.

More Information

For more information about how Google fights disinformation across its properties, download the full PDF here.


[This article is originally published in searchenginejournal.com written by Dave Davies - Uploaded by AIRS Member: Anna K. Sasaki]

Let me begin this with a full disclaimer. I begin each day by ransacking the news to make sure I know what’s going on in the search world around me. Follow me on Twitter and at some point in the morning you’ll find a flurry of tweets – that’s when.

For a slide deck I had put together recently, I decided to include each change in the SERP (search engine results page) layouts for the month prior. There were 18 slides in that section. And that was just for February 2019.

I want to stress this point, a point we will come back to later. It’s important.

But for now, all we need to keep in mind is that there is a good chance that between the second this piece is published and the time you are reading it, there may well have been changes.

Actually, it’s very likely that between the time I finish writing this, it gets edited, and it publishes, there will already have been changes.

Yes, the pace of change in the SERPs is that fast.

They may not be huge… but they’re there, and at more than a dozen per month, over a year even changes that small create dramatically different experiences.

So, what we will focus on here are the main blocks and some of the elements in them. That is to say, the main areas, where the data that produces them is gathered, and what that means for you.

Generic SERP Layout

Let’s start by looking at a pretty generic SERP layout:

[Image: a generic SERP layout with sections labeled A through G]

This isn’t the only layout as we’ll see below but it’s likely pretty familiar to you.

So, what are these sections?

A: Featured Snippet / Answer Box

This is the section above the organic results that attempts to answer a user’s complete intent.

As we can see in the example above, if the only intent is a simple answer, this is where it’ll likely (though not exclusively) be.

Importantly, structuring your content in a way that produces the answer box often results in the answer for Google voice search as well. But not always… as with the example above. More on that below.
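One common pattern SEOs use when targeting this box (a rule of thumb rather than anything Google has documented) is a question-style heading followed by a short, direct answer paragraph, roughly like:

  <h2>What is a featured snippet?</h2>
  <p>A featured snippet is a short answer Google displays above the organic results, pulled from a page it considers a strong match for the question.</p>

There is no guarantee Google will pick it up, but giving the answer a clean, self-contained block of text makes it easier to extract.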

B: Knowledge Panel / Graph

For business or known human entity queries, this generally contains a summary of the information Google views as core to their identity. That is, key information a searcher would likely be interested in knowing.

For more general queries, however (like the civil war), we find key facts and images, generally with links to other relevant events or entities.

I noted above that voice search results don’t exclusively come from the answer box.

If there is a knowledge panel, the voice result will generally come from here. In fact, I’ve yet to find an exception, though it may be a truncated version.

C: People Also Ask

Exactly as the name suggests, this section contains a list of questions that relate to the initial query.

This section is generally triggered when the initial query implies that the user is seeking information on a topic.

The list of questions relates more to the query itself than search volumes. That is to say, these are not necessarily the top queries around an entity but those questions that relate to the initial question.

When a result is expanded, an answer for the query is given with a link to the site the answer was drawn from as well as a search result for the query with additional details.

Interestingly: The answer is given on the initial results page:

[Image: a “People Also Ask” question expanded on the results page]

It differs from the Answer Box result on the results page you get if the question is clicked and searched directly:

[Image: the Answer Box shown when the same question is searched directly]

Likely they are assuming that the user’s intent differs when the query is searched directly vs. tacked on to the previous one.

D & D2: Organic Results

Technically everything on the page above is an organic result.

As everyone reading this article is most certainly aware, these are produced based on a combination of very sophisticated algorithms over at the Googleplex(es) and are ordered based on those algorithms – designed to produce the top pages to satisfy a user’s likely intent(s).

I’m not going to attempt to dive into what signals are used right now as that’s not the purpose of this article.

When there are popular videos that attempt to answer a query, they are often displayed in a carousel.

Alternatively, if the query inspires Google to believe that the user intent would be met with the addition of images we’ll find:

E: Video Results (Alternate: News or Images)

[Image: a SERP with a video results carousel]

Or if the query triggers the likely intent that the user may be looking for news:

[Image: a SERP with news results]

F: Related Entities

In section F above we find a row of related entities based on a core characteristic.

In the query used as an example, we were seeking information on a major military conflict. Google has determined that “military conflict” is the entity association most relevant to the searcher and thus listed others.

There can be more than one such row of results at the bottom of the page though I’ve yet to see more than three.

G: Searches Related to…

At the bottom, we find the related searches.

They differ from the “People Also Ask” in that they don’t have to be questions (though they can be). As such, there can be a bit of overlap, but not necessarily.

Generally, these are generated from the searches that people who searched the present query have also performed.

Local SERP Layout

Oh, wait… Google hasn’t monetized yet and there are some SERP features that are missing.

OK, let’s try again.

As it’s almost lunch as I write this, let’s look up pizza near me. We get:

[Image: the local SERP layout for the query [pizza near me]]

H: Snack Pack / Map Pack / Local Pack

For anyone familiar with local in any way or anyone who’s ever done any type of query with local intent, you’ll be familiar with the map pack/snack pack / local pack. Wow, that’s a lot of names.

Terminology Lesson: For folks newer to SEO, until August of 2015 there were 7 results in the map pack. On August 7, Google reduced that number to 3.

As everyone was familiar with 7 being the map pack and this was a far lower number, it became referred to as the snack pack.

If you run a local business and want in the map results, here’s a guide on Local SEO.

I: Discover More Places

This section of the SERPs can be a bit confusing until you really think about it.

  • I ran a query for pizza.
  • I looked through a variety of results.
  • I hit the bottom of the page.
  • They’re showing me things related to the high-level category but not necessarily related to pizza.

At the bottom of the page, Google has added a section to help me either refine my search, focus it more on sub-categories like delivery, or change gears altogether.

If I hit the bottom of the page, they’re assuming I might not have been specific in my desires or even known them and so they’re providing new options.

Talk about making page 2 irrelevant.


SERP with Google Ads

Right… all this and we still haven’t seen much in the way of ads. So, let’s kill two birds with one stone and look at the SERP:

[Image: a SERP with Google Ads and shopping results]

I & I2: Ads

I don’t think any of us really need any insight into what this section is for.

It’s what pays for all that Google is and lets them do things like buy Burning Man.

J: Shopping Results

Sometimes they’re tucked away at the right, sometimes they’re placed in a carousel within the results themselves, but at their core, the shopping ad units are simply Google Ads powered by product-specific data.

If you sell products, have them in a database, invest in Google Ads, and don’t have a shopping feed set up to power shopping ads, it’s definitely something to look into.

K & K2: Related Searches

Once again, we see Google dropping a couple of rows of images to distract us from page 2.

These lists are based on entity association on a topical level.

All of the books in the first list relate to the topic of the civil war and share the status of being nonfiction. The second list also relates to the topic of the civil war, but with the status of fiction.

What’s interesting is that Google doesn’t assume from a click in this zone that you’ve actually found what you wanted in the first place, but rather is inviting you down a different path.

If I click “The Civil War: A Narrative” I am taken to the page:

[Image: the results page for “The Civil War: A Narrative”]

A carousel at the top displays an expanded version of the list from the previous page. Of course, they take the time to toss in another ad in case I’d like to purchase it.

There’s a knowledge panel as this is a specifically defined entity and then there are organic results.

Additional SERP Layouts & Features

While I will publish this knowing full well that I’m going to miss some due to the sheer volume of different permutations, layouts, and sections, here are a few of the more interesting layouts that occupy the zones listed above:

Events


Google has added events into the featured snippet area we discussed above as Section A. This just happened last February though it was on mobile prior to that.

So … get your event schema up-to-date.
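As a rough illustration, a minimal piece of Event markup in schema.org’s JSON-LD format dropped into a page might look like this (the festival name, dates, and venue below are placeholders, and Google’s documentation lists further recommended properties):

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Cherry Blossom Festival",
    "startDate": "2019-03-25",
    "endDate": "2019-04-07",
    "location": {
      "@type": "Place",
      "name": "Ueno Park",
      "address": "Taito City, Tokyo, Japan"
    }
  }
  </script>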

And if we’re going to the Cherry Blossom Festival in Tokyo we probably need a place to stay.

Travel

If you run hotels or are just looking for a place to stay, a quick query on Google will show this in the layout:

[Image: the hotel search layout with a map, carousel, and filters]

A carousel and map lend the familiar options and you’re guided down the path towards a conversion.

While this is similar to the traditional map layout, the volume of filters and options make it a massive threat to those in the travel sector.

The way into this section is paid via Google Hotel Ads.

Twitter

For topics that are trending we see:

[Image: tweets displayed in the SERP for a trending topic]

Here Google is pulling tweets from fairly strong Twitter accounts right into the search results.

And More…

As noted above, I know I’m likely missing many.

In future pieces, I’ll be diving into some specifics on news, maps, images, and video but if you can think of any content blocks or zones I left out… please don’t wait until then.

We’d love to see them posted on our Facebook post on just this subject, which we’ve set up here.

Why Does This Matter?

You may be wondering why it matters. You’re focused on the top 10 organic links or maybe the featured snippets so why does any of the rest concern you?

The first and most obvious answer is that knowing the various zones and elements on the page informs you as to the opportunities there. In fact, for the first query I entered above there are many opportunities buried in there.

Think about the query and the layout and question always whether there are elements on the page that would steer the users to subsets.

I asked, “what is the civil war”. Might I be sidetracked by a “People also ask”?

Could I get pulled into YouTube? What suggested searches might I click as Google tries to keep me from journeying to page 2?

In these are hidden opportunities.

But there’s more than that.

Within many of these sections, you’re being told specifically how Google is connecting the dots on your topic.

For broad topics think of what the “Searches related to” (G) section is telling you. Think about what the Related Entities (F) mean and how they relate to the content you should be including on your site.

For narrower topics think about what the “People also ask” (C) and Knowledge Panels (B) are signaling.

If people are “also asking” questions that Google has deemed relevant to the query you’re targeting, should you not be answering them too?

Do the “Related Searches” (K) not tell you what entities Google considers related? Heck, they say so right in the naming of the section.

And of course, look to the formats. If Google wants to provide results in specific formats for specific queries, it’s likely because searchers are responding to them. That means they’ll respond to you if you produce them.

Looking at the SERPs can tell you a LOT about how Google is connecting entities together, and if they are connecting them, then doing the same can’t help but send a strong signal of relevancy.

When thinking about your content strategy… look to the SERPs.

Not to Mention Mobile SERPs

I’ve used a lot of examples here and they’ve all been taken from desktop. What can I say, I had to choose one and it was easier to get screenshots.

The same basic elements exist on mobile, but you will often find them arranged in a different order.

Pay attention to this of course as it tells you how relevant each zone is on different devices. If you’re ranking highly in organic on mobile, you may be buried beneath more videos and carousels than on desktop.

Knowing this will help you understand your traffic and where to put your efforts based on where your market conducts their queries.

What it tells you about your subject remains constant, however; it may simply advise you on how that content is formatted.
