
[Source: This article was published in flipweb.org by Abhishek - Uploaded by the Association Member: Jay Harris]

One of the first questions anyone getting into SEO asks is how exactly Google ranks the websites you see in Google Search. Ranking a website means assigning it a position: the first URL you see in Google Search is ranked number 1, and so on. There are various factors involved in ranking websites on Google Search, and a website's rank is not fixed once it has been decided; you can still rank higher later. So you might wonder how Google determines which URL of a website should come first and which should come lower.

 

For this reason, Google's John Mueller has addressed this question, explaining in a video how Google picks a website's URL for its Search results. John explains that there are site preference signals involved in determining the rank of a website; the most important, however, are the preference of the site and the preference of the user accessing the site.

Here are the site preference signals (a short audit sketch in Python follows the list):

  • Link rel="canonical" annotations
  • Redirects
  • Internal linking
  • URL in the sitemap file
  • HTTPS preference
  • Nicer-looking URLs
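
All of these signals should agree on a single preferred URL. As a minimal audit sketch (assuming Python with the requests and beautifulsoup4 libraries, and a placeholder URL), you could check three of them like this:

# Minimal audit sketch for three of the signals above: redirects,
# rel="canonical", and HTTPS. The URL is a placeholder.
import requests
from bs4 import BeautifulSoup

def preferred_url_signals(url):
    resp = requests.get(url, allow_redirects=True)
    soup = BeautifulSoup(resp.text, "html.parser")
    canonical = soup.find("link", rel="canonical")
    return {
        "final_url_after_redirects": resp.url,
        "rel_canonical": canonical["href"] if canonical else None,
        "is_https": resp.url.startswith("https://"),
    }

# If these three values disagree, Google has to guess which version
# of the page you prefer.
print(preferred_url_signals("https://example.com/page"))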

One of the keys, as John Mueller has previously mentioned, is to remain consistent. While John did not explain exactly what he means by being consistent, it presumably means sticking with whatever you are already doing. One of the best examples of being consistent is posting to your website every day in order to rank higher in search results. If you are not consistent, your website's ranking may slip and you will have to start all over again. Apart from that, you have to be consistent in performing SEO as well; if you stop, your website will suffer in the long run.


[Source: This article was published in theverge.com by James Vincent - Uploaded by the Association Member: Jennifer Levin]

A ‘tsunami’ of cheap AI content could cause problems for search engines

Over the past year, AI systems have made huge strides in their ability to generate convincing text, churning out everything from song lyrics to short stories. Experts have warned that these tools could be used to spread political disinformation, but there’s another target that’s equally plausible and potentially more lucrative: gaming Google.

Instead of being used to create fake news, AI could churn out infinite blogs, websites, and marketing spam. The content would be cheap to produce and stuffed full of relevant keywords. But like most AI-generated text, it would only have surface meaning, with little correspondence to the real world. It would be the information equivalent of empty calories, but still potentially difficult for a search engine to distinguish from the real thing.

 

Just take a look at this blog post answering the question: “What Photo Filters are Best for Instagram Marketing?” At first glance, it seems legitimate, with a bland introduction followed by quotes from various marketing types. But read a little more closely and you realize it references magazines, people, and — crucially — Instagram filters that don’t exist:

You might not think that a mumford brush would be a good filter for an Insta story. Not so, said Amy Freeborn, the director of communications at National Recording Technician magazine. Freeborn’s picks include Finder (a blue stripe that makes her account look like an older block of pixels), Plus and Cartwheel (which she says makes your picture look like a topographical map of a town.

The rest of the site is full of similar posts, covering topics like “How to Write Clickbait Headlines” and “Why is Content Strategy Important?” But every post is AI-generated, right down to the authors’ profile pictures. It’s all the creation of content marketing agency Fractl, who says it’s a demonstration of the “massive implications” AI text generation has for the business of search engine optimization, or SEO.

“Because [AI systems] enable content creation at essentially unlimited scale, and content that humans and search engines alike will have difficulty discerning [...] we feel it is an incredibly important topic with far too little discussion currently,” Fractl partner Kristin Tynski tells The Verge.

To write the blog posts, Fractl used an open source tool named Grover, made by the Allen Institute for Artificial Intelligence. Tynski says the company is not using AI to generate posts for clients, but that this doesn’t mean others won’t. “I think we will see what we have always seen,” she says. “Blackhats will use subversive tactics to gain a competitive advantage.”

The history of SEO certainly supports this prediction. It’s always been a cat and mouse game, with unscrupulous players trying whatever methods they can to attract as many eyeballs as possible while gatekeepers like Google sort the wheat from the chaff.

As Tynski explains in a blog post of her own, past examples of this dynamic include the “article spinning” trend, which started 10 to 15 years ago. Article spinners use automated tools to rewrite existing content, finding and replacing words so that the reconstituted material looks original. Google and other search engines responded with new filters and metrics to weed out these mad-lib blogs, but it was hardly an overnight fix.

AI text generation will make the article spinning “look like child’s play,” writes Tynski, allowing for “a massive tsunami of computer-generated content across every niche imaginable.”

 

Mike Blumenthal, an SEO consultant and expert, says these tools will certainly attract spammers, especially considering their ability to generate text on a massive scale. “The problem that AI-written content presents, at least for web search, is that it can potentially drive the cost of this content production way down,” Blumenthal tells The Verge.

And if the spammers’ aim is simply to generate traffic, then fake news articles could be perfect for this, too. Although we often worry about the political motivations of fake news merchants, most interviews with the people who create and share this content claim they do it for the ad revenue. That doesn’t stop it being politically damaging.

The key question, then, is: can we reliably detect AI-generated text? Rowan Zellers of the Allen Institute for AI says the answer is a firm “yes,” at least for now. Zellers and his colleagues were responsible for creating Grover, the tool Fractl used for its fake blog posts, and were also able to engineer a system that can spot Grover-generated text with 92 percent accuracy.

“We’re a pretty long way away from AI being able to generate whole news articles that are undetectable,” Zellers tells The Verge. “So right now, in my mind, is the perfect opportunity for researchers to study this problem, because it’s not totally dangerous.”

Spotting fake AI text isn’t too hard, says Zellers, because it has a number of linguistic and grammatical tells. He gives the example of AI’s tendency to re-use certain phrases and nouns. “They repeat things ... because it’s safer to do that rather than inventing a new entity,” says Zellers. It’s like a child learning to speak; trotting out the same words and phrases over and over, without considering the diminishing returns.
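
To make that repetition tell concrete, here is a toy statistic in Python. This is not Zellers' detector, just an illustration of the idea: text that re-uses the same phrases over and over contains fewer distinct word triples than human prose of the same length.

# Toy repetition statistic, illustrating one "tell" of generated text.
# This is NOT the Grover detector, just a crude proxy.
def distinct_trigram_ratio(text):
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 1.0
    return len(set(trigrams)) / len(trigrams)

# Human prose tends to score close to 1.0; text that trots out the same
# words and phrases over and over scores noticeably lower.
print(distinct_trigram_ratio(open("suspect_post.txt").read()))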

However, as we’ve seen with visual deep fakes, just because we can build technology that spots this content, that doesn’t mean it’s not a danger. Integrating detectors into the infrastructure of the internet is a huge task, and the scale of the online world means that even detectors with high accuracy levels will make a sizable number of mistakes.

 

Google did not respond to queries on this topic, including the question of whether or not it’s working on systems that can spot AI-generated text. (It’s a good bet that it is, though, considering Google engineers are at the cutting-edge of this field.) Instead, the company sent a boilerplate reply saying that it’s been fighting spam for decades, and always keeps up with the latest tactics.

SEO expert Blumenthal agrees, and says Google has long proved it can react to “a changing technical landscape.” But, he also says a shift in how we find information online might also make AI spam less of a problem.

More and more web searches are made via proxies like Siri and Alexa, says Blumenthal, meaning gatekeepers like Google only have to generate “one (or two or three) great answers” rather than dozens of relevant links. Of course, this emphasis on the “one true answer” has its own problems, but it certainly minimizes the risk from high-volume spam.

The end-game of all this could be even more interesting though. AI-text generation is advancing in quality extremely quickly, and experts in the field think it could lead to some incredible breakthroughs. After all, if we can create a program that can read and generate text with human-level accuracy, it could gorge itself on the internet and become the ultimate AI assistant.

“It may be the case that in the next few years this tech gets so amazingly good, that AI-generated content actually provides near-human or even human-level value,” says Tynski. In which case, she says, referencing an xkcd comic, it would be “problem solved.” Because if you’ve created an AI that can generate factually correct text that’s indistinguishable from content written by humans, why bother with the humans at all?


[Source: This article was published in zdnet.com by Catalin Cimpanu - Uploaded by the Association Member: Deborah Tannen]

Extension developer says he sold the extension weeks before; not responsible for the shady behavior.

Google removed a Chrome extension from the official Web Store yesterday for secretly hijacking search engine queries and redirecting users to ad-infested search results.

The extension's name was "YouTube Queue," and at the time it was removed from the Web Store, it had been installed by nearly 7,000 users.

The extension allowed users to queue multiple YouTube videos in the order they wanted for later viewing.

EXTENSION TURNED INTO ADWARE IN EARLY JUNE

But under the hood, it also intercepted search engine queries, redirected the query through the Croowila URL, and then redirected users to a custom search engine named Information Vine, which listed the same Google search results but heavily infused with ads and affiliate links.

 

Users started noticing the extension's shady behavior almost two weeks ago, when the first reports surfaced on Reddit, followed by two more a few days later.

The extension was removed from the Web Store yesterday after Microsoft Edge engineer (and former Google Chrome developer) Eric Lawrence pointed out the extension's search engine hijacking capabilities on Twitter.

[Screenshot: Eric Lawrence's tweet]

Lawrence said the extension's shady code was only found in the version listed on the Chrome Web Store, but not in the extension's GitHub repository.

In an interview with The Register, the extension's developer claimed he had no involvement and that he had previously sold the extension to an entity going by Softools, the name of a well-known web application platform.

In a follow-up inquiry from The Register, Softools denied having any involvement with the extension's development, let alone the malicious code.

The practice of a malicious entity offering to buy a Chrome extension and then adding malicious code to the source is not a new one.

Such incidents were first seen as early as 2014, and as recently as 2017, when an unknown party bought three legitimate extensions (Particle for YouTube, Typewriter Sounds, and Twitch Mini Player) and repurposed them to inject ads on popular sites.

In a 2017 tweet, Konrad Dzwinel, a DuckDuckGo software engineer and the author of the SnappySnippet, Redmine Issues Checker, DOMListener, and CSS-Diff Chrome extensions, said he receives inquiries about selling his extensions almost every week.

[Screenshot: Konrad Dzwinel's tweet]

In a February 2019 blog post, antivirus maker Kaspersky warned users to "do a bit of research to ensure the extension hasn't been hijacked or sold" before installing it in their browser.

Developers quietly selling their extensions without notifying users, along with developers falling for spear-phishing campaigns aimed at their Chrome Web Store accounts, are currently the two main methods through which malware gangs take over legitimate Chrome extensions to plant malicious code in users' browsers.

 

COMING AROUND TO THE AD BLOCKER DEBATE

Furthermore, Lawrence points out that the case of the YouTube Queue extension going rogue is a perfect example of a malicious threat actor abusing the Web Request API to do bad things.

This is the same API that most ad blockers are using, and the one that Google is trying to replace with a more stunted one named the Declarative Net Request API.

[Screenshot: Eric Lawrence's tweet on the Web Request API]

This change is what triggered the recent public discussions about "Google killing ad blockers."

However, Google said last week that 42% of all the malicious extensions the company has detected on its Chrome Web Store since January 2018 were abusing the Web Request API in one way or another -- and the YouTube Queue extension is an example of that.

In a separate Twitter thread, Chrome security engineer Justin Schuh again pointed out that Google's main intent in replacing the old Web Request API was privacy- and security-driven, and not anything else like performance or ad blockers, something the company also officially stated in a blog post last week.

[Screenshots: Justin Schuh's tweets]

 


[Source: This article was published in searchenginejournal.com by Dave Davies - Uploaded by the Association Member: Clara Johnson]

Let’s begin by answering the obvious question:

What Is Universal Search?

There are a few definitions for universal search on the web, but I prefer hearing it from the horse’s mouth on things like this.

While Google hasn’t given a strict definition that I know of as to what universal search is from an SEO standpoint, they have used the following definition in their Search Appliance documentation:

“Universal search is the ability to search all content in an enterprise through a single search box. Although content sources might reside in different locations, such as on a corporate network, on a desktop, or on the World Wide Web, they appear in a single, integrated set of search results.”

Adapted for SEO and traditional search, we could easily turn it into:

 

“Universal search is the ability to search all content across multiple databases through a single search box. Although content sources might reside in different locations, such as a different index for specific types or formats of content, they appear in a single, integrated set of search results.”

What other databases are we talking about? Basically:

[Image: the databases that feed Universal Search]

On top of this, there are additional databases that information is drawn from (hotels, sports scores, calculators, weather, etc.) and additional databases with user-generated information to consider.

These range from reviews to related searches to traffic patterns to previous queries and device preferences.
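
Mechanically, this makes universal search a form of federated retrieval: one query fans out to several vertical indexes, and any vertical with something relevant contributes a block to the single results page. A toy Python sketch of that idea, with invented verticals and scores:

# Toy federated-search merge: one query, several vertical indexes,
# one interleaved results page. Verticals and scores are invented.
def universal_search(query, verticals):
    blocks = []
    for name, search_fn in verticals.items():
        results = search_fn(query)
        if results:  # a vertical only gets a block if it has something relevant
            blocks.append((name, results))
    # Order blocks by the strength of their best result.
    blocks.sort(key=lambda b: max(r["score"] for r in b[1]), reverse=True)
    return blocks

verticals = {
    "web":    lambda q: [{"title": "Civil War - overview", "score": 0.92}],
    "images": lambda q: [{"title": "Civil War photos", "score": 0.78}],
    "news":   lambda q: [],  # nothing newsworthy today, so no news block
}
for name, results in universal_search("civil war", verticals):
    print(name, "->", [r["title"] for r in results])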

Why Universal Search?

I remember a time, many years ago, when there were 10 blue links…

[Image: a traditional page of 10 blue links]

It was a crazy time of discovery. Discovering all the sites that didn’t meet your intent or your desired format, that is.

And then came Universal Search. It was announced in May of 2007 (by Marissa Mayer, if that gives it context) and rolled out just a couple of months after Google expanded on the personalization of results.

The two were connected, and not just by being announced by the same person. They were connected in illustrating Google's continued push towards its mission statement:

“Our mission is to organize the world’s information and make it universally accessible and useful.”

Think about those 10 blue links and what they offered. Certainly, they offered a scope of information not accessible at any point in time prior, but they also offered a problematic depth of uncertainty.

 

Black hats aside (and there were a lot more of them then), you clicked a link in the hope that you understood what was on the other side of that click, and we wrote titles and descriptions that hopefully described what we had to offer.

Search was a painful process; we just didn't know it because it was better than anything we'd had before.

Enter Universal Search

Then there was Universal Search. Suddenly the guesswork was reduced.

Before we continue, let’s watch a few minutes of a video put out by Google shortly after Universal Search launched.

The video starts at the point where they’re describing what they were seeing in the eye tracking of search results and illustrates what universal search looked like at the time.

 

OK – notwithstanding that this was a core Google video, discussing a major new Google feature and it has (at the time of writing) 4,277 views and two peculiar comments – this is an excellent look at the “why” of Universal Search as well as an understanding of what it was at the time, and how much and how little it’s changed.

How Does It Present Itself?

We saw a lot of examples of Universal Search in my article on How Search Engines Display Search Results.

Whereas there we focused on the layout itself and where each section comes from, here we're discussing more of the why and how.

At a root level and as we’ve all seen, Universal Search presents itself as sections on a webpage that stand apart from the 10 blue links. They are often, but not always, organically generated (though I suspect they are always organically driven).

This is to say, whether a content block exists would be handled on the organic search side, whereas what's contained in that content block may or may not include ads.

So, let’s compare then versus now, ignoring cosmetic changes and just looking at what the same result would look like with and without Universal Search by today’s SERP standards.

[Image: the same civil war SERP shown with and without Universal Search elements]

This answers two questions in a single image.

It answers the key question of this section, “How does Universal Search present itself?”

This image also does a great job of answering the question, “Why?”

Imagine the various motivations I might have to enter the query [what was the civil war]. I may be:

  • A high school student doing an essay.
  • Someone who simply is not familiar with the historic event.
  • Looking for information on the war itself or my query may be part of a larger dive into civil wars across nations or wars in general.
  • Someone who prefers articles.
  • Someone who prefers videos.
  • Just writing an unrelated SEO article and in need of a good example.

The possibilities are virtually endless.

 

If you look at the version on the right, which link would you click?

How about if you prefer video results?

The decision you make will take you longer than it likely would with Universal Search options present.

And that’s the point.

The Universal Search structure makes decision making faster across a variety of intents, while still leaving the blue links (though not always 10 anymore) available for those looking for pages on the subject.

In fact, even if what you're looking for exists in an article, the simple presence of Universal Search results helps filter out the results you don't want, leaving SEO pros and website owners free to focus articles on ranking in the traditional search results, and other content types and formats on their appropriate sections.

How Does Google Pick the Sections?

Let me begin this section by stating very clearly – this is a best guess.

As we’re all aware, Google’s systems are incredibly complicated. There may be more pieces than I am aware of, obviously.

There are two core areas I can think of that they would use for these adjustments.

Users

Now, before you say, “But Google says they don’t use user metrics to adjust search results!” let’s consider the specific wording that Google’s John Mueller used when responding to a question on user signals:

“… that’s something we look at across millions of different queries, and millions of different pages, and kind of see in general is this algorithm going the right way or is this algorithm going in the right way.

But for individual pages, I don’t think that’s something worth focusing on at all.”

So, they do use the data. They use it on their end, but not to rank individual pages.

What you can take from this, as it relates to Universal Search, is that Google will test different blocks of data for different types of queries to determine how users interact with them. It is very likely that Bing does something similar.

Most certainly they pick locations for possible placements, set limitations on the number of different result types/databases, and determine starting points (think: templates for specific query types) for their processes, then simply let machine learning take over, running slight variances or testing layouts on pages generated for unknown queries, or queries where new associations may be attained.

For example, a spike in a query that ties to a sudden rise in news stories related to that query could trigger the news carousel being inserted into the search results, provided that past similar instances produced a positive engagement signal; the carousel would then remain as long as user engagement justified it.
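
In sketch form, that trigger logic might look like the Python below. Every threshold here is invented, purely to make the reasoning concrete:

# Invented-threshold sketch of the carousel trigger described above:
# a query spike plus fresh news coverage inserts the block, and past
# engagement decides whether it is worth keeping.
def should_show_news_carousel(query_volume_today, query_volume_baseline,
                              fresh_story_count, past_engagement_rate):
    spiking = query_volume_today > 3 * query_volume_baseline
    newsworthy = fresh_story_count >= 5
    engaged = past_engagement_rate > 0.10  # 10%+ of users interact with it
    return spiking and newsworthy and engaged

print(should_show_news_carousel(120_000, 20_000, 12, 0.18))  # True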

Query Data

It is virtually a given that a search engine would use their own query data to determine which sections to insert into the SERPs.

If a query like [pizza] has suggested queries like:

[Image: suggested searches for "pizza"]

If these imply that most such searchers are looking for restaurants, it makes sense that, in a Universal Search structure, the first organic result would not be a blue link but:

[Image: local results for "pizza"]

It is very much worth remembering that the goal of a search engine is to provide a single location where a user can access everything they are looking for.

At times this puts them in direct competition with themselves in some ways. Not that I think they mind losing traffic to another of their own properties.

Let's take YouTube, for example. Google's systems will understand not just which YouTube videos are most popular but also which are watched all the way through, when people stop watching, skip, or close out, etc.

 

They can use this not just to understand which videos are likely to resonate on Google.com but also to understand more deeply what supplemental content people are interested in when they search for more general queries.

I may search for [civil war], but that doesn't mean I'm not also interested in the Battle of Antietam specifically.

So, I would suggest that the influence of these other databases is not limited to the layouts illustrated in Universal Search; the databases themselves can be, and likely are, used to connect topics and information together, thus impacting the core search rankings themselves.

Takeaway

So, what does this all mean for you?

For one, you can use the machine learning systems of the search engines to assist in your content development strategies.

Sections you see appearing in Universal Search tell us a lot about the types and formats of content that users expect or engage with.
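
One practical way to act on that: tally which blocks appear for the queries you target. A minimal Python sketch, where the input rows are a stand-in for whatever your rank-tracking tool exports:

# Count which SERP features appear across your target queries.
# The rows below are invented placeholders for a rank-tracker export.
from collections import Counter

serp_data = [
    {"query": "what was the civil war", "features": ["video", "people_also_ask"]},
    {"query": "civil war timeline",     "features": ["featured_snippet"]},
    {"query": "civil war battles",      "features": ["video", "images"]},
]

feature_counts = Counter(f for row in serp_data for f in row["features"])
# If "video" dominates, users engage with video for this topic --
# a strong hint about which format to produce next.
print(feature_counts.most_common())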

Also important is that devices and technology are changing rapidly. I suspect the idea of Universal Search is about to go through a dramatic transformation.

This is due in part to voice search, but I suspect it will have more to do with the push by Google to provide a solution rather than options.

A few well-placed filters could provide refinement that produces only a single result and many of these filters could be automatically applied based on known user preferences.

I’m not sure we’ll get to a single result in the next two to three years but I do suspect that we will see it for some queries and where the device lends itself to it.

If I query "weather", why would the results page not look like this:

[Image: a results page that is simply the weather answer]

In my eyes, this is the future of Universal Search.

Or, as I like to call it, search.


[Source: This article was published in techcrunch.com by Catherine Shu - Uploaded by the Association Member: Jay Harris]

Earlier this week, music lyrics repository Genius accused Google of lifting lyrics and posting them on its search platform. Genius told the Wall Street Journal that this caused its site traffic to drop. Google, which initially denied wrongdoing but later said it was investigating the issue, addressed the controversy in a blog post today. The company said it will start including attribution to its third-party partners that provide lyrics in its information boxes.

 

When Google was first approached by the Wall Street Journal, it told the newspaper that the lyrics it displays are licensed by partners and not created by Google. But some of the lyrics (which are displayed in information boxes or cards called “Knowledge Panels” at the top of search results for songs) included Genius’ Morse code-based watermarking system. Genius said that over the past two years it repeatedly contacted Google about the issue. In one letter, sent in April, Genius told Google it was not only breaking the site’s terms of service but also violating antitrust law—a serious allegation at a time when Google and other big tech companies are facing antitrust investigations by government regulators.
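
The watermark, as reported at the time, alternated two visually similar apostrophe characters in the lyrics; read as Morse code dots and dashes, the sequence spelled out "red handed." Here is a Python sketch of the decoding idea. Which mark meant dot and which meant dash was not published, so the mapping below is an assumption:

# Sketch of the reported watermark decode: extract the straight (')
# and curly (’) apostrophes and read them as Morse code marks.
# The dot/dash assignment is an assumption, not the published scheme.
def apostrophes_to_morse(lyrics):
    marks = [c for c in lyrics if c in ("'", "\u2019")]
    return "".join("." if c == "'" else "-" for c in marks)

sample = "Don\u2019t stop believin' now"  # placeholder lyric text
print(apostrophes_to_morse(sample))       # prints "-."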

After the WSJ article was first published, Google released a statement that said it was investigating the problem and would stop working with lyric providers who are “not upholding good practices.”

In today’s blog post, Satyajeet Salgar, a group product manager at Google Search, wrote that the company pays “music publishers for the right to display lyrics since they manage the rights to these lyrics on behalf of songwriters.” Because many music publishers license lyrics text from third-party lyric content providers, Google works with those companies.

 

“We do not crawl or scrape websites to source these lyrics. The lyrics you see in information boxes on Search come directly from lyrics content providers, and they are updated automatically as we receive new lyrics and corrections on a regular basis,” Salgar added.

These partners include LyricFind, which Google has had an agreement with since 2016. LyricFind's chief executive told the WSJ that it does not source lyrics from Genius.

While Salgar’s post did not name any companies, he addressed the controversy by writing “news reports this week suggested that one of our lyrics content providers is in a dispute with a lyrics site about where their written lyrics come from. We’ve asked our partner to investigate the issue to ensure that they’re following industry best practices in their approach.”

In the future, Google will start including attribution to the company that provided the lyrics in its search results. “We will continue to take an approach that respects and compensates rights-holders, and ensures that music publishers and songwriters are paid for their work,” Salgar wrote.

Genius, which launched as Rap Genius in 2009, has been at loggerheads with Google before. In 2013, an SEO trick Rap Genius used to place itself higher in search results ran afoul of Google's web spam team. Google retaliated by burying Rap Genius links under pages of other search results. The conflict was resolved in less than two weeks, but during that time Rap Genius' traffic plummeted.

 


[This article was originally published in searchenginejournal.com, written by Matt Southern - Uploaded by AIRS Member: Jeremy Frink]

Google published a 30-page white paper with details about how the company fights disinformation in Search, News, and YouTube.

Here is a summary of key takeaways from the white paper.

What is Disinformation?

Everyone has different perspectives on what is considered disinformation, or “fake news.”

Google says it becomes objectively problematic to users when people make deliberate, malicious attempts to deceive others.

“We refer to these deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web as ‘disinformation.’”

So that's what the white paper is referring to with the term "disinformation."

 

How Does Google Fight Disinformation?

Google admits it’s challenging to fight disinformation because it’s near-impossible to determine the intent behind a piece of content.

The company has designed a framework for tackling this challenge, which comprises the following three strategies.

1. Make content count

Information is organized by ranking algorithms, which are geared toward surfacing useful content and not fostering ideological viewpoints.

2. Counteract malicious actors

Algorithms alone cannot verify the accuracy of a piece of content. So Google has invested in systems that can reduce spammy behaviors at scale. It also relies on human reviews.

3. Give users more context

Google provides more context to users through mechanisms such as:

  • Knowledge panels
  • Fact-check labels
  • “Full Coverage” function in Google News
  • “Breaking News” panels on YouTube
  • “Why this ad” labels on Google Ads
  • Feedback buttons in search, YouTube, and advertising products

Fighting Disinformation in Google Search & Google News

As SEOs, we know Google uses ranking algorithms and human evaluators to organize search results.

Google’s white paper explains this in detail for those who may not be familiar with how search works.

Google notes that Search and News share the same defenses against spam, but they do not employ the same ranking systems and content policies.

For example, Google Search does not remove content except in very limited circumstances, whereas Google News is more restrictive.

Contrary to popular belief, Google says, there is very little personalization in search results based on users’ interests or search history.

 

Fighting Disinformation in Google Ads

Google looks for and takes action against attempts to circumvent its advertising policies.

Policies to tackle disinformation on Google’s advertising platforms are focused on the following types of behavior:

  • Scraped or unoriginal content: Google does not allow ads for pages with insufficient original content, or pages that offer little to no value.
  • Misrepresentation: Google does not allow ads that intend to deceive users by excluding relevant information or giving misleading information.
  • Inappropriate content: Ads are not allowed for shocking, dangerous, derogatory, or violent content.
  • Certain types of political content: Ads for foreign influence operations are removed and the advertisers’ accounts are terminated.
  • Election integrity: Additional verification is required for anyone who wants to purchase an election ad on Google in the US.

Fighting Disinformation on YouTube

Google has strict policies, and it keeps content on YouTube unless it is in direct violation of its community guidelines.

The company is more selective of content when it comes to YouTube’s recommendation system.

Google aims to recommend quality content on YouTube, while less frequently recommending content that comes close to violating the community guidelines without quite doing so.

Content that could misinform users in harmful ways, or low-quality content that may result in a poor experience for users (like clickbait), is also recommended less frequently.

More Information

For more information about how Google fights disinformation across its properties, download the full PDF here.


[This article was originally published in searchenginejournal.com, written by Dave Davies - Uploaded by AIRS Member: Anna K. Sasaki]

Let me begin this with a full disclaimer. I begin each day by ransacking the news to make sure I know what's going on in the search world around me. Follow me on Twitter and at some point in the morning you'll find a flurry of tweets – that's when.

For a slide deck I had put together recently, I decided to publish each change in the SERP (search engine results pages) layouts for the month prior. There were 18 slides in that section. And that was just for February 2019.

I want to stress this point, a point we will come back to later. It’s important.

But for now, all we need to keep in mind is that there is a good chance that between the second this piece is published and the time you are reading it, there will have been changes.

Actually, it's very likely that between the time I finish writing it, the time it gets edited, and the time it publishes, there will already have been changes.

Yes, the pace of change in the SERPs is that fast.

They may not be huge… but they're there, and at more than a dozen per month, over a year even those small ones create dramatically different experiences.

So, what we will focus on here are the main blocks and some of the elements in them. That is to say, the main areas, where the data is gathered to produce them, and what that means for you.

 

Generic SERP Layout

Let’s start by looking at a pretty generic SERP layout:

[Image: a generic SERP layout with sections labeled A through G]

This isn't the only layout, as we'll see below, but it's likely pretty familiar to you.

So, what are these sections?

A: Featured Snippet / Answer Box

This is the section above the organic results that attempts to answer a user’s complete intent.

As we can see in the example above, if the only intent is a simple answer, this is where it’ll likely (though not exclusively) be.

Importantly, structuring your content in a way that produces the answer box often results in the answer for Google voice search as well. But not always… as with the example above. More on that below.

B: Knowledge Panel / Graph

For business or known human entity queries, this generally contains a summary of the information Google views as core to their identity. That is, key information a searcher would likely be interested in knowing.

For more general queries, however (like the civil war), we find key facts and images, generally with links to other relevant events or entities.

I noted above that voice search results don’t exclusively come from the answer box.

If there is a knowledge panel the voice result will generally come from here. In fact, I’ve yet to find an exception though it may be a truncated version.

C: People Also Ask

Exactly as the name suggests, this section contains a list of questions that relate to the initial query.

This section is generally triggered when the initial query implies that the user is seeking information on a topic.

The list of questions relates more to the query itself than search volumes. That is to say, these are not necessarily the top queries around an entity but those questions that relate to the initial question.

When a result is expanded, an answer for the query is given with a link to the site the answer was drawn from as well as a search result for the query with additional details.

Interestingly, the answer given on the initial results page:

[Image: the answer shown inline within People Also Ask]

…differs from the Answer Box result on the results page you reach by clicking:

[Image: the Answer Box shown for the same query]

Likely they are assuming that the user's intent differs when the query is searched directly vs. tacked onto the previous one.

D & D2: Organic Results

Technically everything on the page above is an organic result.

As everyone reading this article is most certainly aware, these are produced based on a combination of very sophisticated algorithms over at the Googleplex(es) and are ordered based on those algorithms – designed to produce the top pages to satisfy a user’s likely intent(s).

I’m not going to attempt to dive into what signals are used right now as that’s not the purpose of this article.

When there are popular videos that attempt to answer a query, they are often displayed in a carousel.

Alternatively, if the query inspires Google to believe that the user intent would be met with the addition of images, we'll find:

E: Video Results (Alternate: News or Images)

[Image: SERP with a video results carousel]

Or if the query triggers the likely intent that the user may be looking for news:

[Image: SERP with a news results block]

F: Related Entities

In section F above we find a row of related entities based on a core characteristic.

In the query used as an example, we were seeking information on a major military conflict. Google has determined that “military conflict” is the entity association most relevant to the searcher and thus listed others.

 

There can be more than one such row of results at the bottom of the page, though I've yet to see more than three.

G: Searches Related to…

At the bottom, we find the related searches.

They differ from the "People Also Ask" results in that they don't have to be questions (though they can be). As such, there can be a bit of overlap, but not necessarily.

Generally, these are generated from searches that people who searched the present query have also run.
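
A toy version of that in Python: treat each search session as a list of queries, and rank the queries that co-occur with the current one. The session logs here are invented for illustration:

# Toy "related searches": queries that co-occur with the current query
# in the same sessions, ranked by frequency. Sessions are invented.
from collections import Counter

sessions = [
    ["civil war", "civil war timeline", "gettysburg"],
    ["civil war", "abraham lincoln"],
    ["civil war", "civil war timeline"],
]

def related_searches(query, sessions, top_n=5):
    co_occurring = Counter()
    for session in sessions:
        if query in session:
            co_occurring.update(q for q in session if q != query)
    return [q for q, _ in co_occurring.most_common(top_n)]

print(related_searches("civil war", sessions))
# ['civil war timeline', 'gettysburg', 'abraham lincoln']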

Local SERP Layout

Oh, wait… Google hasn’t monetized yet and there are some SERP features that are missing.

OK, let’s try again.

As it’s almost lunch as I write this, let’s look up pizza near me. We get:

[Image: local SERP for a "pizza near me" query]

H: Snack Pack / Map Pack / Local Pack

For anyone familiar with local in any way, or anyone who's ever run any type of query with local intent, you'll be familiar with the map pack / snack pack / local pack. Wow, that's a lot of names.

Terminology Lesson: For folks newer to SEO, until August of 2015 there were 7 results in the map pack. On August 7, Google reduced that number to 3.

Because everyone was familiar with the map pack having 7 results and this was a far lower number, it came to be referred to as the snack pack.

If you run a local business and want in the map results, here’s a guide on Local SEO.

I: Discover More Places

This section of the SERPs can be a bit confusing until you really think about it.

  • I ran a query for pizza.
  • I looked through a variety of results.
  • I hit the bottom of the page.
  • They’re showing me things related to the high-level category but not necessarily related to pizza.

At the bottom of the page, Google has added a section to help me either refine my search, focus it more on sub-categories like delivery, or change gears altogether.

If I hit the bottom of the page, they’re assuming I might not have been specific in my desires or even known them and so they’re providing new options.

Talk about making page 2 irrelevant.

 


SERP with Google Ads

Right… all this and we still haven’t seen much in the way of ads. So, let’s kill two birds with one stone and look at the SERP:

[Image: SERP with Google Ads and shopping results]

I & I2: Ads

I don’t think any of us really need any insight into what this section is for.

It's what pays for all that Google is and lets them do things like buy Burning Man.

J: Shopping Results

Sometimes they're tucked away at the right, sometimes they're placed in a carousel within the results themselves, but at its core, the shopping ad unit is simply a Google Ad powered by product-specific data.

If you sell products, have them in a database, invest in Google Ads, and don't have a shopping feed set up to power shopping ads, it's definitely something to look into.

K & K2: Related Searches

Once again, we see Google dropping a couple of rows of images to distract us from page 2.

These lists are based on entity association on a topical level.

All of the books in the first list relate to the topic of the civil war and are nonfiction. The second list is also related to the topic of the civil war, but the books are fiction.

What's interesting is that Google doesn't assume from a click in this zone that you've actually found what you wanted in the first place, but rather is inviting you down a different path.

If I click “The Civil War: A Narrative” I am taken to the page:

[Image: SERP for "The Civil War: A Narrative"]

A carousel at the top displays an expanded version of the list from the previous page. Of course, they take the time to toss in another ad in case I’d like to purchase it.

There’s a knowledge panel as this is a specifically defined entity and then there are organic results.

Additional SERP Layouts & Features

While I will publish this knowing full well that I'm going to miss some due to the sheer volume of different permutations, layouts, and sections, here are a few of the more interesting layouts that occupy the zones listed above:

Events

[Image: events block in the search results]

Google has added events into the featured snippet area we discussed above as Section A. This happened just last February, though it was on mobile prior to that.

So … get your event schema up-to-date.
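
"Event schema" here means schema.org/Event structured data, usually embedded in the page as JSON-LD. A minimal Python sketch of generating the markup; every event detail below is a placeholder, and schema.org/Event documents the full vocabulary:

# Minimal schema.org/Event JSON-LD, built with Python's json module.
# All event details are placeholders.
import json

event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Cherry Blossom Festival",
    "startDate": "2019-03-25",
    "location": {
        "@type": "Place",
        "name": "Ueno Park",
        "address": "Tokyo, Japan",
    },
}

# Embed the output inside a <script type="application/ld+json"> tag
# on the event page.
print(json.dumps(event, indent=2))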

And if we’re going to the Cherry Blossom Festival in Tokyo we probably need a place to stay.

 

Travel

If you run a hotel or are just looking for a place to stay, a quick query on Google will show you this layout:

[Image: hotel results with map, carousel, and filters]

A carousel and map offer the familiar options and you're guided down the path towards a conversion.

While this is similar to the traditional map layout, the volume of filters and options make it a massive threat to those in the travel sector.

The way into this section is paid via Google Hotel Ads.

Twitter

For topics that are trending we see:

[Image: Twitter carousel in the search results]

Here Google is pulling tweets from fairly strong Twitter accounts right into the search results.

And More…

As noted above, I know I’m likely missing many.

In future pieces, I’ll be diving into some specifics on news, maps, images, and video but if you can think of any content blocks or zones I left out… please don’t wait until then.

We’d love to see them posted on our Facebook post on just this subject, which we’ve set up here.

Why Does This Matter?

You may be wondering why it matters. You’re focused on the top 10 organic links or maybe the featured snippets so why does any of the rest concern you?

The first and most obvious answer is that knowing the various zones and elements on the page informs you as to the opportunities there. In fact, for the first query I entered above there are many opportunities buried in there.

Think about the query and the layout, and always question whether there are elements on the page that would steer users to subsets.

I asked, “what is the civil war”. Might I be sidetracked by a “People also ask”?

Could I get pulled into YouTube? What suggested searches might I click as Google tries to keep me from journeying to page 2?

In these are hidden opportunities.

But there’s more than that.

Within many of these sections, you’re being told specifically how Google is connecting the dots on your topic.

For broad topics think of what the “Searches related to” (G) section is telling you. Think about what the Related Entities (F) mean and how they relate to the content you should be including on your site.

For narrower topics think about what the “People also ask” (C) and Knowledge Panels (B) are signaling.

If people are "also asking" questions that Google has deemed relevant to the questions you ask, should you not be answering them too?

Do the “Related Searches” (K) not tell you what entities Google considers related? Heck, they say so right in the naming of the section.

And of course, look to the formats. If Google wants to provide results in specific formats for specific queries, it's likely that searchers are responding to them. That means they'll respond to you if you produce them.

Looking at the SERPs can tell you a LOT about how Google is connecting entities together and if they are, then doing the same can’t help but send a strong signal of relevancy.

When thinking about your content strategy… look to the SERPs.

Not to Mention Mobile SERPs

I've used a lot of examples here and they've all been on desktop. What can I say, I had to choose one and it was easier to get screenshots.

The same basic elements exist on mobile, but you will often find them arranged in a different order.

Pay attention to this of course as it tells you how relevant each zone is on different devices. If you’re ranking highly in organic on mobile, you may be buried beneath more videos and carousels than on desktop.

Knowing this will help you understand your traffic and where to put your efforts based on where your market conducts their queries.

What it tells you about your subject remains constant; however, it may advise you on how that content should be formatted.


[This article was originally published in thenextweb.com, written by Abhimanyu Ghoshal - Uploaded by AIRS Member: Carol R. Venuti]

The European Union is inching closer to enacting sweeping copyright legislation that would require platforms like Google and Facebook to pay publishers for the privilege of displaying their content to users, as well as to monitor copyright infringement by users on the sites and services they manage.

That's set to open a Pandora's box of problems that could completely derail your internet experience, because it'd essentially disallow platforms from displaying content from other sources. In a screenshot shared with Search Engine Land, Google illustrated how this might play out in its search results for news articles:

[Image: an example of what Google's search results for news might look like if the EU goes ahead with its copyright directive]

As you can see, the page looks empty, because it’s been stripped of all copyrighted content – headlines, summaries and images from articles from various publishers.

 

Google almost certainly won’t display unusable results like these, but it will probably only feature content from publishers it’s cut deals with (and it’s safe to assume that’s easier for larger companies than small ones).

That would reduce the number of sources of information you’ll be able to discover through the search engine, and it’ll likely lead to a drop in traffic for media outlets. It’s a lose-lose situation, and it’s baffling that EU lawmakers don’t see this as a problem – possibly because they’re fixated on how this ‘solution’ could theoretically benefit content creators and copyright holders by ruling that they must be paid for their output.

It isn’t yet clear when the new copyright directive will come into play – there are numerous processes involved that could take until 2021 before it’s implemented in EU countries’ national laws. Hopefully, the union’s legislators will see sense well before that and put a stop to this madness.

Update: We’ve clarified in our headline that this is Google’s opinion of how its search service will be affected by the upcoming EU copyright directive; it isn’t yet clear how it will eventually be implemented.


[This article was originally published in blogs.scientificamerican.com, written by Daniel M. Russell and Mario Callegaro - Uploaded by AIRS Member: Rene Meyer]

Researchers who study how we use search engines share common mistakes, misperceptions, and advice

In a cheery, sunshine-filled fourth-grade classroom in California, the teacher explained the assignment: write a short report about the history of the Belgian Congo at the end of the 19th century, when Belgium colonized this region of Africa. One of us (Russell) was there to help the students with their online research methods.

I watched in dismay as a young student slowly typed her query into a smartphone. This was not going to end well. She was trying to find out which city was the capital of the Belgian Congo during this time period. She reasonably searched [ capital Belgian Congo ] and in less than a second, she discovered that the capital of the Democratic Republic of Congo is Kinshasa, a port town on the Congo River. She happily copied the answer into her worksheet.

 

But the student did not realize that the Democratic Republic of Congo is a completely different country than the Belgian Congo, which used to occupy the same area. The capital of that former country was Boma until 1926 when it was moved to Léopoldville (which was later renamed Kinshasa). Knowing which city was the capital during which time period is complicated in the Congo, so I was not terribly surprised by the girl’s mistake.

The deep problem here is that she blindly accepted the answer offered by the search engine as correct. She did not realize that there is a deeper history here.

We Google researchers know this is what many students do—they enter the first query that pops into their heads and run with the answer. Double-checking and going deeper are skills that come only with a great deal of practice—and perhaps a bunch of answers marked wrong on important exams. Students often do not have a great deal of background knowledge to flag a result as potentially incorrect, so they are especially susceptible to misguided search results like this.

In fact, a 2016 report by Stanford University education researchers showed that most students are woefully unprepared to assess content they find on the web. For instance, the scientists found that 80 percent of students at U.S. universities are not able to determine if a given web site contains credible information. And it is not just students; many adults share these difficulties.

If she had clicked through to the linked page, the girl probably would have started reading about the history of the Belgian Congo, and found out that it has had a few hundred years of wars, corruption, changes in rulers and shifts in governance. The name of the country changed at least six times in a century, but she never realized that because she only read the answer presented on the search engine results page.

Asking a question of a search engine is something people do several billion times each day. It is the way we find the phone number of the local pharmacy, check on sports scores, read the latest scholarly papers, look for news articles, find pieces of code, and shop. And although searchers look for true answers to their questions, the search engine returns results that are attuned to the query, rather than some external sense of what is true or not. So a search for proof of wrongdoing by a political candidate can return sites that purport to have this information, whether or not the sites or the information are credible. You really do get what you search for.

In many ways, search engines make our metacognitive skills come to the foreground. It is easy to do a search that plays into your confirmation bias—your tendency to think new information supports views you already hold. So good searchers actively seek out information that may conflict with their preconceived notions. They look for secondary sources of support, doing a second or third query to gain other perspectives on their topic. They are constantly aware of what their cognitive biases are, and greet whatever responses they receive from a search engine with healthy skepticism.

For the vast majority of us, most searches are successful. Search engines are powerful tools that can be incredibly helpful, but they also require a bit of understanding to find the information you are actually seeking. Small changes in how you search can go a long way toward finding better answers.

The Limits of Search

It is not surprising or uncommon that a short query may not accurately reflect what a searcher really wants to know. What is actually remarkable is how often a simple, brief query like [ nets ] or [ giants ] will give the right results. After all, both of those words have multiple meanings, and a search engine might conclude that searchers were looking for information on tools to catch butterflies, in the first case, or larger-than-life people in the second. Yet most users who type those words are seeking basketball- and football-related sites, and the first search results for those terms provide just that. Even the difference between a query like [ the who ] versus [ a who ] is striking. The first set of results is about a classic English rock band, whereas the second query returns references to a popular Dr. Seuss book.

 

But search engines sometimes seem to give the illusion that you can ask anything about anything and get the right answer. Just like the student in that example, however, most searchers overestimate the accuracy of search engines and their own searching skills. In fact, when Americans were asked to self-rate their searching ability by the Pew Research Center in 2012, 56 percent rated themselves as very confident in their ability to use a search engine to answer a question.

Not surprisingly, the highest confidence scores were for searchers with some college degrees (64 percent were “very confident”—by contrast, 45 percent of those who did not have a college degree described themselves that way). Age affects this judgment as well, with 64 percent of those under 50 describing themselves as “very confident,” as opposed to only 40 percent older than 50. When talking about how successful they are in their searches, 29 percent reported that they can always find what they are looking for, and 62 percent said they are able to find an answer to their questions most of the time. In surveys, most people tell us that everything they want is online, and conversely, if they cannot find something via a quick search, then it must not exist, it might be out of date, or it might not be of much value.

These are the most recent published results, but we have seen in surveys done at Google in 2018 that these insights from Pew are still true and transcend the years. What was true in 2012 is still exactly the same now: people have great confidence in their ability to search. The only significant change is in their success rates, which have crept up: 35 percent say they can "always find" what they're looking for, while 73 percent say they can find what they seek "most of the time." This increase is largely due to improvements in the search engines, which improve their data coverage and algorithms every year.

What Good Searchers Do

As long as information needs are easy, simple searches work reasonably well. Most people actually do less than one search per day, and most of those searches are short and commonplace. The average query length on Google during 2016 was 2.3 words. Queries are often brief descriptions like: [ quiche recipe ] or [ calories in chocolate ] or [ parking Tulsa ].

And somewhat surprisingly, most searches have been done before. In an average day, less than 12 percent of all searches are completely novel—that is, most queries have already been entered by another searcher in the past day. By design, search engines have learned to associate short queries with the targets of those searches by tracking pages that are visited as a result of the query, making the results returned both faster and more accurate than they otherwise would have been.
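
In sketch form, that association is just a counter over (query, clicked page) pairs, so a repeated query can be answered from what earlier searchers chose. A toy Python example with invented log rows:

# Learn query-to-page associations from click logs (rows invented).
from collections import defaultdict, Counter

click_log = [
    ("quiche recipe", "allrecipes.com/quiche"),
    ("quiche recipe", "allrecipes.com/quiche"),
    ("quiche recipe", "bbc.co.uk/food/quiche"),
]

clicks_by_query = defaultdict(Counter)
for query, page in click_log:
    clicks_by_query[query][page] += 1

# For a repeated query, the historically most-clicked pages are strong
# candidates to return first.
print(clicks_by_query["quiche recipe"].most_common())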

A large fraction of queries are searches for another website (called navigational queries, which make up as much as 25 percent of all queries), or for a short factual piece of information (called informational queries, which are around 40 percent of all queries). However, complex search tasks often need more than a single query to find a satisfactory answer. So how can you do better searches? 

First, you can modify your query by changing a term in your search phrase, generally to make it more precise or by adding additional terms to reduce the number of off-topic results. Very experienced searchers often open multiple browser tabs or windows to pursue different avenues of research, usually investigating slightly different variations of the original query in parallel.

You can see good searchers rapidly trying different search queries in a row, rather than just being satisfied with what they get with the first search. This is especially true for searches that involve very ambiguous terms—a query like [animal food] has many possible interpretations. Good searchers modify the query to get to what they need quickly, such as [pet food] or [animal nutrition], depending on the underlying goal.

Choosing the best way to phrase your query means adding terms that:

  • are central to the topic (avoid peripheral terms that are off-topic)
  • you know the definition of (do not guess at a term if you are not certain)
  • leave common terms together in order ([ chow pet ] is very different from [ pet chow ])
  • keep the query fairly short (you usually do not need more than two to five terms)

You can make your query more precise by limiting the scope of a search with special operators. The most powerful operators are things such as double-quote marks (as in the query [ "exponential growth occurs when" ], which finds only documents containing that phrase in that specific order). Two other commonly used search operators are site: and filetype:. These let you search within only one website (such as [ site:ScientificAmerican.com ]) or for a particular file type, such as a PDF (example: [ filetype:pdf coral bleaching ]).

 

Second, try to understand the range of possible search options. Recently, search engines added the capability of searching for images that are similar to the given photo that you can upload. A searcher who knows this can find photos online that have features that resemble those in the original. By clicking through the similar images, a searcher can often find information about the object (or place) in the image. Searching for matches of my favorite fish photo can tell me not just what kind of fish it is, but then provide links to other fishing locations and ichthyological descriptions of this fish species.        

Overall, expert searchers use all of the resources of the search engine and their browsers to search both deeply (by making query variations) and broadly (by having multiple tabs or windows open). Effective searchers also know how to limit a search to a particular website or to a particular kind of document, find a phrase (by using quote marks to delimit the phrase), and find text on a page (by using a text-find tool).

Third, learn some cool tricks. One is the find-text-on-page skill (that is, Command-F on Mac, Control-F on PC), which is unfamiliar to around 90 percent of the English-speaking, Internet-using population in the US. In our surveys of thousands of web users, the large majority have to do a slow (and error-prone) visual scan for a string of text on a web site. Knowing how to use text-finding commands speeds up your overall search time by about 12 percent (and is a skill that transfers to almost every other computer application).

Fourth, use your critical-thinking skills.  In one case study, we found that searchers looking for the number of teachers in New York state would often do a query for [number of teachers New York ], and then take the first result as their answer—never realizing that they were reading about the teacher population of New York City, not New York State. In another study, we asked searchers to find the maximum weight a particular model of baby stroller could hold. How big could that baby be?

The answers we got back varied from two pounds to 250 pounds. At both ends of the spectrum, the answers make no sense (few babies in strollers weigh less than five pounds or more than 60 pounds), but inexperienced searchers just assumed that whatever numbers they found correctly answered their search questions. They did not read the context of the results with much care.  

Search engines are amazingly powerful tools that have transformed the way we think of research, but they can hurt more than help when we lack the skills to use them appropriately and evaluate what they tell us. Skilled searchers know that the ranking of results from a search engine is not a statement about objective truth, but about the best matching of the search query, term frequency, and the connectedness of web pages. Whether or not those results answer the searchers’ questions is still up for them to determine.


[This article was originally published in 9to5google.com, written by Abner Li - Uploaded by AIRS Member: Dorothy Allen]

Since the European Union Copyright Directive was introduced last year, Google and YouTube have been lobbying against it by enlisting creators and users. Ahead of finalized language for Article 11 and 13 this month, Google Search is testing possible responses to the “link tax.”

Article 11 requires search engines and online news aggregators — like Google Search and News, respectively — to pay licensing fees when displaying article snippets or summaries. The end goal is for online tech giants to sign commercial licenses to help publishers adapt online and provide a source of revenue.

 

Google discussed possible ramifications in December if Article 11 was not altered: Google News could be shut down in Europe, while fewer news articles would appear in Search results. This could be a detriment to news sites, especially smaller ones, that rely on Search to get traffic.

The company is already testing the impact of Article 11 on Search. Screenshots from Search Engine Land show a “latest news” query completely devoid of context. The Top Stories carousel would not feature images or headlines, while the 10 blue links would not include any summary or description when linking to news sites. What’s left is the name of the domain and the URL for users to click on.

 

This A/B test is possibly already live for users in continental Europe. Most of the stories in the top carousel lack cover images, while others just use generic graphics. Additionally, links from European publications lack any description, just the full, un-abbreviated page title, and domain.

Google told Search Engine Land that it is currently conducting experiments “to understand what the impact of the proposed EU Copyright Directive would be to our users and publisher partners.” This particular outcome might occur if Google does not sign any licensing agreements with publishers.

 

Meanwhile, if licenses are signed, Google would be “in the position of picking winners and losers” by having to select what deals it wants to make. Presumably, the company would select the most popular at the expense of smaller sites. In December, the company’s head of news pointed out that “it’s unlikely any business will be able to license every single news publisher.”

Effectively, companies like Google will be put in the position of picking winners and losers. Online services, some of which generate no revenue (for instance, Google News) would have to make choices about which publishers they’d do deals with. Presently, more than 80,000 news publishers around the world can show up in Google News, but Article 11 would sharply reduce that number. And this is not just about Google, it’s unlikely any business will be able to license every single news publisher in the European Union, especially given the very broad definition being proposed.

Google will make a decision on its products and approach after the final language of the Copyright Directive is released.

Dylan contributed to this article
