Google is the search engine that most of us know and use, so much so that the word Google has become synonymous with search. As of September 2019, the search giant had captured 92.96% of the market. That’s why it is of the utmost importance for businesses to rank higher in Google search results if they want to be noticed, and that’s where SERP, or “Search Engine Results Page,” scraping can come in handy. Whenever a user searches for something on Google, they get a SERP that consists of paid Google Ads results, featured snippets, organic results, videos, product listings, and the like. Tracking these SERP results using a service like Serpstack is essential for businesses that either want to rank their own products or help other businesses do the same.

Manually tracking SERP results is next to impossible because they vary widely depending on the search query, the origin of the query, and a plethora of other factors. The number of listings returned for a single search query is also so high that manual tracking makes no sense at all. Serpstack, on the other hand, is an automated Google Search Results API that scrapes real-time, accurate SERP data and presents it in an easy-to-consume format. In this article, we take a brief look at Serpstack to see what it brings to the table and how it can help you track SERP data for the keywords and queries that matter to your business.

Serpstack REST API for SERP Data: What It Brings

Serpstack’s JSON REST API for SERP data is fast and reliable, and it always returns real-time, accurate search results data. The service is trusted by some of the largest brands in the world. The best part about Serpstack, apart from its reliable data, is that it can scrape Google search results at scale: whether you need one thousand results or one million, Serpstack handles it with ease. It also brings built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs, so you as a user don’t have to worry about any of that.


If you decide to give Serpstack REST API a chance, here are the main features that you can expect from this service:

  • Serpstack is scalable and queueless thanks to its powerful cloud infrastructure, which can withstand high-volume API requests without the need for a queue.
  • The search queries are highly customizable. You can tailor your queries based on a series of options including location, language, device, and more, so you get the data that you need.
  • Built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs.
  • Integration is simple. You can start scraping SERP pages at scale within minutes of logging into the service.
  • Serpstack features bank-grade 256-bit SSL encryption for all its data streams. That means your data is always protected.
  • An easy-to-use REST API responding in JSON or CSV, compatible with any programming language (see the sketch after this list).
  • With Serpstack, you are getting super-fast scraping speeds. All the API requests sent to Serpstack are processed in a matter of milliseconds.
  • Clear Serpstack API documentation which shows you exactly how you can use this service. It makes the service beginner-friendly and you can get started even if you have never used a SERP scraping service before.
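
To give a sense of how simple that integration is, here is a minimal sketch of a Serpstack request in Python. The endpoint, parameter names, and response fields below follow the pattern shown in Serpstack’s documentation at the time of writing, but treat them as assumptions and confirm them against the current API docs before relying on them.

```python
import requests

SERPSTACK_ENDPOINT = "http://api.serpstack.com/search"  # free tier is served over HTTP; HTTPS requires a paid plan
ACCESS_KEY = "YOUR_ACCESS_KEY"  # placeholder; use the key from your Serpstack dashboard

params = {
    "access_key": ACCESS_KEY,
    "query": "coffee shops in seattle",   # the search term to scrape
    "location": "Seattle, Washington",    # optional: tailor results by location
    "device": "desktop",                  # optional: desktop, mobile, or tablet
}

response = requests.get(SERPSTACK_ENDPOINT, params=params, timeout=30)
data = response.json()

# Print the position, title, and URL of each organic result.
# Field names are taken from sample responses and may differ by plan or API version.
for result in data.get("organic_results", []):
    print(result.get("position"), result.get("title"), result.get("url"))
```

According to the feature list, the same request can also return CSV instead of JSON; the exact output parameter for that is described in Serpstack’s documentation.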

Looking at the feature list above, I hope you can see why Serpstack is one of the best, if not the best, SERP scraping services on the market. I am especially impressed by its scalability, its speed, and its built-in privacy and security protocols. However, there’s one more thing we haven’t discussed yet that pushes it to the top spot for me, and that is its pricing. That’s what we are going to cover in the next section.

Pricing and Availability

Serpstack’s pricing is what makes it accessible to individuals as well as small and large businesses. It offers a capable free plan that should serve the needs of most individuals and even smaller businesses. If you are operating a larger business that requires more, there are various pricing plans to choose from depending on your requirements. Talking about the free plan first, the best part is that it’s free forever and there are no hidden charges. The free plan gets you 100 searches/month with access to global locations, proxy networks, and all the main features. The only big missing feature is HTTPS encryption.


Once you are ready to pay, you can start with the Basic plan, which costs $29.99/month ($23.99/month if billed annually). In this plan, you get 5,000 searches/month along with all the features missing from the free plan. I think this plan should be enough for most small to medium-sized businesses. However, if you require more, there’s a Business plan at $99.99/month ($79.99/month if billed annually) which gets you 20,000 searches, and a Business Pro plan at $199.99/month ($159.99/month if billed annually) which gets you 50,000 searches per month. There’s also a custom pricing option for companies that require a tailored pricing structure.

Serpstack Makes Google Search Results Scraping Accessible

SERP scraping is important if you want to compete in today’s world. Seeing which queries fetch which results is an important step in determining the list of your competitors. Once you know them, you can devise an action plan to compete with them. Without SERP data, your business will be at a big disadvantage online. So use Serpstack to scrape SERP data and build a successful online business.

[Source: This article was published in beebom.com By Partner Content - Uploaded by the Association Member: Dorothy Allen]

Categorized in Search Engine

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Edna Thomas]

Google is giving site owners the ability to customize how their content is previewed in search results.

By default, Google has always generated search snippets according to the users’ queries and what types of devices they’re using.

However, there was previously no room for customization – it was only possible to allow a textual snippet or not allow one.

Now, Google is introducing multiple methods that allow for more fine-grained configuration of the preview content shown for web pages.

These methods include using robots meta tags as well as a brand new type of HTML attribute. Here’s more information about each of these methods.

Configuring Search Snippets With Robots Meta Tags

The content shown in search snippet previews can now be configured using robots meta tags.

The following robots meta tags can be added to an HTML page’s head section, or specified via the x-robots-tag HTTP header:

  • “nosnippet” – This is an existing option to specify that you don’t want any textual snippet shown for a page.
  • “max-snippet:[number]” (NEW) – Specify a maximum text-length, in characters, of a snippet for your page.
  • “max-video-preview:[number]” (NEW) – Specify a maximum duration in seconds of an animated video preview.
  • “max-image-preview:[setting]” (NEW) – Specify a maximum size of image preview to be shown for images on this page, using either “none”, “standard”, or “large”.

The above robots meta tags can also be combined, for example:
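
A combined tag placed in the page’s head might look like the following; the directive values here are only illustrative, not recommendations:

```html
<!-- Cap the text snippet at 50 characters, allow large image previews,
     and place no limit (-1) on video preview length -->
<meta name="robots" content="max-snippet:50, max-image-preview:large, max-video-preview:-1">
```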

New data-nosnippet HTML attribute

Google is introducing an all-new way to limit which part of a page can be shown as a preview in search results.

The new “data-nosnippet” HTML attribute on span, div, and section elements can prevent specific parts of an HTML page from being shown within the textual snippet in search results.

In other words – if you want to prevent Google from giving away too much of your content in search results, this is the method you want to use.

Here’s an example:

Harry Houdini is undoubtedly the most famous magician ever to live.
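
Behind the scenes, the markup for that sentence would look roughly like this; this is a reconstruction, and the exact wrapping used in Google’s original example may differ:

```html
<p>
  <!-- The name inside the span is excluded from the search snippet -->
  <span data-nosnippet>Harry Houdini</span>
  is undoubtedly the most famous magician ever to live.
</p>
```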

In this example, if someone were searching for a query like “most famous magician,” the HTML attribute would prevent Google from giving away the answer (Harry Houdini) in search results.

What SEOs and Site Owners Need to Know

Here’s a rundown of need-to-know information regarding these changes.

No changes to search rankings
This update will only affect how snippets are displayed in search results. Google confirms these settings will have no impact on search rankings.

Depending on how a site owner chooses to configure these settings there may be an impact on CTR, which could then impact traffic. But that is not related to search rankings.

When do these changes come into effect?
Preview settings for robots meta tags will become effective in mid-to-late October 2019. It may take a week for the global rollout to be completed once it starts.

The data-nosnippet HTML attribute will be effective later this year. No specific timeframe was provided for that particular setting.

Will these new changes affect how rich results are displayed?
Content in structured data that is eligible for display as a rich result will not be affected by any of these new settings.

Site owners already have control over the content displayed in rich results by what they choose to include in the structured data itself.

How will these changes affect featured snippets?
Featured snippets depend on the availability of preview content. So if you limit the preview content too heavily it may no longer be eligible to be displayed as a featured snippet, although it could still be displayed as a regular snippet.

The minimum number of characters required for a featured snippet varies by language, which is why Google cannot provide an exact max-snippet length that would ensure eligibility.

Can site owners experiment with snippet length?
Site owners can absolutely adjust these settings at any time. For example – if you specify a max-snippet length and later decide you’d rather display a longer snippet in search results, you can simply change the HTML attribute.

Google notes that these new methods of configuring search snippet previews will operate the same as other results displayed globally. If the settings are changed, then your new preferences will be displayed in search results the next time Google recrawls the page.

Google will 100% follow these settings
These new settings will not be treated as hints or suggestions. Google will fully abide by the site owner’s preferences as specified in the robots meta tags and/or HTML attribute.

No difference between desktop and mobile settings
Preview preferences will be applied to both mobile and desktop search results. If a site has separate mobile and desktop versions then the same markup should be used on both.

Some last notes

These options are available to site owners now, but the changes will not be reflected in search results until mid-to-late October at the earliest.

For more information, see Google’s developer documentation on meta tags.

Categorized in Search Engine

[Source: This article was published in gritdaily.com By Faisal Quyyumi - Uploaded by the Association Member: Jason bourne]

A recent study conducted by Yext and Forbes shows consumers only believe 50 percent of their search results when looking up information about brands.

Yext is a New York City technology company focusing on online brand management and Forbes, of course, is a business magazine. Over 500 consumers in the United States were surveyed for the study.

FINDINGS

57 percent of those in the study avoid search engines and prefer to visit the brand’s official website because they believe it is more accurate.

50 percent of those surveyed use third-party sites and applications to learn more about brands. 48 percent believe a brand’s website is their most reliable source.

20 percent of “current and new customers trust social media sites to deliver brand information,” according to Search Engine Journal. 28 percent of buyers avoid buying from a certain brand after they have received inaccurate information.

WHY DON’T THEY BUY?

A few reasons why consumers do not buy from a brand include unsatisfactory customer service, excessive requests for information, and a company website that is not easy to navigate.

Marc Ferrentino, Chief Strategy Officer of Yext, said: “Our research shows that regardless of where they search for information, people expect the answers they find to be consistent and accurate – and they hold brands responsible to ensure this is the case.”

The study says customers look at a brand’s website and search engine results for information. This information includes customer service numbers, hours, events, and a brand’s products.

A BETTER WAY TO MARKET ONLINE

The three best practices brands can follow to give customers a seamless experience are to maintain, guarantee, and monitor.

The company should maintain up-to-date, accurate information on its website along with an easy-to-use search function. The study also tells brands to “guarantee searches return high-quality results by ensuring that tools like Google My Business and other directories have updated and correct information.” Lastly, a brand needs to be active and respond to questions and posts on social media, corporate websites, and review sites.

Companies are doing their best to keep up with consumer expectations for an authentic experience.

Many people use third-party sites such as Google, Bing or Yelp because they are able to compare and categorize numerous products at once.

CONSUMERS HESITATE

New users and consumers are often hesitant and require time to build trust with a company, whereas current customers have confidence in the brand and help by writing positive reviews. 45 percent of customers “say they are usually looking for customer reviews of brands of products when they visit a third-party site” (Forbes).

Reviews determine whether consumers will avoid buying a product or if they want to continue interacting with the vendor.

True Value Company, an American wholesaler, is changing their marketing strategy to adapt to a more Internet-based audience. “We’ve made significant technology investments – including re-platforming our website – to back that up and support our brick and mortar stores for the online/offline world in which consumers live,” said David Elliot, the senior vice-president of marketing.

Despite branding on social media becoming more popular, it does not fall in the top 50 percent of most-trusted sources for brand information.

A 2008 study by Forrester Research, an American market research company, shows how much consumers trust different information sources. The sources range from personal emails to the Yellow Pages to message board posts.

The most trusted is emails “from people you know” at 77 percent; followed by consumer product ratings/reviews at 60 percent and portal/search engines at 50 percent. The least trusted information source is a company blog at only 16 percent.

Corporate blogs are the least trusted information source among consumers, even though they should be one of the most reliable ways for companies to express and share information with their audience.

The study shows the significance of a brand’s online marketing strategy. It is vital for companies to make sure their website looks like a trustworthy source.

Companies don’t need to stop blogging — but instead, have to do it in a trustworthy and engaging manner.

Want to read the full report? Click here.

Categorized in Search Engine

[Source: This article was published in enca.com - Uploaded by the Association Member: Rene Meyer]

In this file illustration picture taken on July 10, 2019, the Google logo is seen on a computer in Washington, DC. 

SAN FRANCISCO - Original reporting will be highlighted in Google’s search results, the company said as it announced changes to its algorithm.

The world’s largest search engine has come under increasing criticism from media outlets, mainly because of its algorithms - sets of instructions followed by computers - which newspapers have often blamed for plummeting online traffic and the industry’s decline.

Explaining some of the changes in a blog post, Google's vice president of news Richard Gingras said stories that were critically important and labor intensive -- requiring experienced investigative skills, for example -- would be promoted.

Articles that demonstrated “original, in-depth and investigative reporting,” would be given the highest possible rating by reviewers, he wrote on Thursday.

These reviewers - roughly 10,000 people whose feedback contributes to Google’s algorithm - will also determine the publisher’s overall reputation for original reporting, promoting outlets that have been awarded Pulitzer Prizes, for example.

It remains to be seen how such changes will affect news outlets, especially smaller online sites and local newspapers, who have borne the brunt of the changing media landscape.

And as noted by the technology website TechCrunch, it is hard to define exactly what original reporting is: many online outlets build on ‘scoops’ or exclusives with their own original information, a complexity an algorithm may have a hard time picking through.

The Verge - another technology publication - wrote the emphasis on originality could exacerbate an already frenetic online news cycle by making it lucrative to get breaking news online even faster and without proper verification.

The change comes as Google continues to face criticism for its impact on the news media.

Many publishers say the tech giant’s algorithms - which remain a source of mysterious frustration for anyone outside Google -- reward clickbait and allow investigative and original stories to disappear online.

Categorized in Search Engine

[Source: This article was published in flipweb.org By Abhishek - Uploaded by the Association Member: Jay Harris]

One of the first questions anyone getting into SEO will have is how exactly Google ranks the websites you see in Google Search. Ranking a website means assigning it a position: the first URL you see in Google Search is ranked number 1, and so on. Now, there are various factors involved in ranking websites in Google Search, and it is not the case that you are stuck where you are once your website’s rank has been decided. So you would naturally ask how Google determines which URL of a website should come first and which should come lower.

To address this question, Google’s John Mueller explains in a video how Google picks a website’s URL for Search. John explains that there are site preference signals involved in determining which URL is shown. The most important signals, however, are the preference of the site and the preference of the user accessing the site.

Here are the Site preference signals:

  • Link rel=canonical annotations (see the example after this list)
  • Redirects
  • Internal linking
  • URL in the sitemap file
  • HTTPS preference
  • Nicer looking URLs
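
The first of those signals, for instance, is just a link element in the page’s head that declares which URL you prefer Google to show; the URLs below are placeholders for illustration:

```html
<!-- On https://www.example.com/shoes?sort=price, point Google at the preferred URL -->
<link rel="canonical" href="https://www.example.com/shoes">
```

When this annotation, your redirects, internal links, and sitemap URLs all point at the same address, Google gets a much clearer signal about which URL to pick.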

One of the keys, as John Mueller has previously mentioned, is to remain consistent. While John did not explain exactly what he means by being consistent, it should mean that you keep doing whatever you have been doing. One of the best examples of being consistent is posting on your website every day in order to rank higher in search results. If you are not consistent, your website’s ranking might slip and you will have to start all over again. Apart from that, you have to be consistent when it comes to performing SEO as well; if you stop, your website will suffer in the long run.

Categorized in Search Engine

[Source: This article was Published in theverge.com BY James Vincent - Uploaded by the Association Member: Jennifer Levin] 

A ‘tsunami’ of cheap AI content could cause problems for search engines

Over the past year, AI systems have made huge strides in their ability to generate convincing text, churning out everything from song lyrics to short stories. Experts have warned that these tools could be used to spread political disinformation, but there’s another target that’s equally plausible and potentially more lucrative: gaming Google.

Instead of being used to create fake news, AI could churn out infinite blogs, websites, and marketing spam. The content would be cheap to produce and stuffed full of relevant keywords. But like most AI-generated text, it would only have surface meaning, with little correspondence to the real world. It would be the information equivalent of empty calories, but still potentially difficult for a search engine to distinguish from the real thing.

Just take a look at this blog post answering the question: “What Photo Filters are Best for Instagram Marketing?” At first glance, it seems legitimate, with a bland introduction followed by quotes from various marketing types. But read a little more closely and you realize it references magazines, people, and — crucially — Instagram filters that don’t exist:

You might not think that a mumford brush would be a good filter for an Insta story. Not so, said Amy Freeborn, the director of communications at National Recording Technician magazine. Freeborn’s picks include Finder (a blue stripe that makes her account look like an older block of pixels), Plus and Cartwheel (which she says makes your picture look like a topographical map of a town).

The rest of the site is full of similar posts, covering topics like “How to Write Clickbait Headlines” and “Why is Content Strategy Important?” But every post is AI-generated, right down to the authors’ profile pictures. It’s all the creation of content marketing agency Fractl, who says it’s a demonstration of the “massive implications” AI text generation has for the business of search engine optimization, or SEO.

“Because [AI systems] enable content creation at essentially unlimited scale, and content that humans and search engines alike will have difficulty discerning [...] we feel it is an incredibly important topic with far too little discussion currently,” Fractl partner Kristin Tynski tells The Verge.

To write the blog posts, Fractl used an open source tool named Grover, made by the Allen Institute for Artificial Intelligence. Tynski says the company is not using AI to generate posts for clients, but that this doesn’t mean others won’t. “I think we will see what we have always seen,” she says. “Blackhats will use subversive tactics to gain a competitive advantage.”

The history of SEO certainly supports this prediction. It’s always been a cat and mouse game, with unscrupulous players trying whatever methods they can to attract as many eyeballs as possible while gatekeepers like Google sort the wheat from the chaff.

As Tynski explains in a blog post of her own, past examples of this dynamic include the “article spinning” trend, which started 10 to 15 years ago. Article spinners use automated tools to rewrite existing content, finding and replacing words so that the reconstituted material looks original. Google and other search engines responded with new filters and metrics to weed out these mad-lib blogs, but it was hardly an overnight fix.

AI text generation will make the article spinning “look like child’s play,” writes Tynski, allowing for “a massive tsunami of computer-generated content across every niche imaginable.”

Mike Blumenthal, an SEO consultant and expert, says these tools will certainly attract spammers, especially considering their ability to generate text at massive scale. “The problem that AI-written content presents, at least for web search, is that it can potentially drive the cost of this content production way down,” Blumenthal tells The Verge.

And if the spammers’ aim is simply to generate traffic, then fake news articles could be perfect for this, too. Although we often worry about the political motivations of fake news merchants, most interviews with the people who create and share this content suggest they do it for the ad revenue. That doesn’t stop it from being politically damaging.

The key question, then, is: can we reliably detect AI-generated text? Rowan Zellers of the Allen Institute for AI says the answer is a firm “yes,” at least for now. Zellers and his colleagues were responsible for creating Grover, the tool Fractl used for its fake blog posts, and were able to also engineer a system that can spot Grover-generated text with 92 percent accuracy.

“We’re a pretty long way away from AI being able to generate whole news articles that are undetectable,” Zellers tells The Verge. “So right now, in my mind, is the perfect opportunity for researchers to study this problem, because it’s not totally dangerous.”

Spotting fake AI text isn’t too hard, says Zellers, because it has a number of linguistic and grammatical tells. He gives the example of AI’s tendency to re-use certain phrases and nouns. “They repeat things ... because it’s safer to do that rather than inventing a new entity,” says Zellers. It’s like a child learning to speak; trotting out the same words and phrases over and over, without considering the diminishing returns.

However, as we’ve seen with visual deep fakes, just because we can build technology that spots this content, that doesn’t mean it’s not a danger. Integrating detectors into the infrastructure of the internet is a huge task, and the scale of the online world means that even detectors with high accuracy levels will make a sizable number of mistakes.

Google did not respond to queries on this topic, including the question of whether or not it’s working on systems that can spot AI-generated text. (It’s a good bet that it is, though, considering Google engineers are at the cutting-edge of this field.) Instead, the company sent a boilerplate reply saying that it’s been fighting spam for decades, and always keeps up with the latest tactics.

SEO expert Blumenthal agrees, and says Google has long proved it can react to “a changing technical landscape.” But, he also says a shift in how we find information online might also make AI spam less of a problem.

More and more web searches are made via proxies like Siri and Alexa, says Blumenthal, meaning gatekeepers like Google only have to generate “one (or two or three) great answers” rather than dozens of relevant links. Of course, this emphasis on the “one true answer” has its own problems, but it certainly minimizes the risk from high-volume spam.

The end-game of all this could be even more interesting though. AI-text generation is advancing in quality extremely quickly, and experts in the field think it could lead to some incredible breakthroughs. After all, if we can create a program that can read and generate text with human-level accuracy, it could gorge itself on the internet and become the ultimate AI assistant.

“It may be the case that in the next few years this tech gets so amazingly good, that AI-generated content actually provides near-human or even human-level value,” says Tynski. In which case, she says, referencing an Xkcd comic, it would be “problem solved.” Because if you’ve created an AI that can generate factually-correct text that’s indistinguishable from content written by humans, why bother with the humans at all?

Categorized in Search Engine

 [Source: This article was Published in zdnet.com By Catalin Cimpanu - Uploaded by the Association Member: Deborah Tannen]

Extension developer says he sold the extension weeks before; not responsible for the shady behavior.

Google yesterday removed a Chrome extension from the official Web Store for secretly hijacking search engine queries and redirecting users to ad-infested search results.

The extension's name was "YouTube Queue," and at the time it was removed from the Web Store, it had been installed by nearly 7,000 users.

The extension allowed users to queue multiple YouTube videos in the order they wanted for later viewing.

EXTENSION TURNED INTO ADWARE IN EARLY JUNE

But under the hood, it also intercepted search engine queries, redirected the query through the Croowila URL, and then redirected users to a custom search engine named Information Vine, which listed the same Google search results but heavily infused with ads and affiliate links.

Users started noticing the extension's shady behavior almost two weeks ago, when the first reports surfaced on Reddit, followed by two more a few days later.

The extension was removed from the Web Store yesterday after Microsoft Edge engineer (and former Google Chrome developer) Eric Lawrence pointed out the extension's search engine hijacking capabilities on Twitter.


Lawrence said the extension's shady code was only found in the version listed on the Chrome Web Store, but not in the extension's GitHub repository.

In an interview with The Register, the extension's developer claimed he had no involvement and that he previously sold the extension to an entity going by Softools, the name of a well-known web application platform.

In a follow-up inquiry from The Register, Softools denied having any involvement with the extension's development, let alone the malicious code.

The practice of a malicious entity offering to buy a Chrome extension and then adding malicious code to the source is not a new one.

Such incidents were seen as early as 2014, and as recently as 2017, when an unknown party bought three legitimate extensions (Particle for YouTube, Typewriter Sounds, and Twitch Mini Player) and repurposed them to inject ads on popular sites.

In a 2017 tweet, Konrad Dzwinel, a DuckDuckGo software engineer and the author of the SnappySnippet, Redmine Issues Checker, DOMListener, and CSS-Diff Chrome extensions, said he usually receives inquiries for selling his extensions every week.


In a February 2019 blog post, antivirus maker Kaspersky warned users to "do a bit of research to ensure the extension hasn't been hijacked or sold" before installing it in their browser.

Developers quietly selling their extensions without notifying users, along with developers falling for spear-phishing campaigns aimed at their Chrome Web Store accounts, are currently the two main methods through which malware gangs take over legitimate Chrome extensions to plant malicious code in users' browsers.

COMING AROUND TO THE AD BLOCKER DEBATE

Furthermore, Lawrence points out that the case of the YouTube Queue extension going rogue is the perfect example showing malicious threat actors abusing the Web Request API to do bad things.

This is the same API that most ad blockers are using, and the one that Google is trying to replace with a more stunted one named the Declarative Net Request API.


This change is what triggered the recent public discussions about "Google killing ad blockers."

However, Google said last week that 42% of all the malicious extensions the company detected on its Chrome Web Store since January 2018, were abusing the Web Request API in one way or another -- and the YouTube Queue extension is an example of that.

In a separate Twitter thread, Chrome security engineer Justin Schuh again pointed out that Google's main intent in replacing the old Web Request API was privacy and security-driven, and not anything else like performance or ad blockers, something the company also officially stated in a blog post last week.


Categorized in Internet Privacy

 [Source: This article was Published in searchenginejournal.com By Dave Davies - Uploaded by the Association Member: Clara Johnson]

Let’s begin by answering the obvious question:

What Is Universal Search?

There are a few definitions for universal search on the web, but I prefer hearing it from the horse’s mouth on things like this.

While Google hasn’t given a strict definition that I know of as to what universal search is from an SEO standpoint, they have used the following definition in their Search Appliance documentation:

“Universal search is the ability to search all content in an enterprise through a single search box. Although content sources might reside in different locations, such as on a corporate network, on a desktop, or on the World Wide Web, they appear in a single, integrated set of search results.”

Adapted for SEO and traditional search, we could easily turn it into:

“Universal search is the ability to search all content across multiple databases through a single search box. Although content sources might reside in different locations, such as a different index for specific types or formats of content, they appear in a single, integrated set of search results.”

What other databases are we talking about? Basically:

Universal Search

On top of this, there are additional databases that information is drawn from (hotels, sports scores, calculators, weather, etc.) and additional databases with user-generated information to consider.

These range from reviews to related searches to traffic patterns to previous queries and device preferences.

Why Universal Search?

I remember a time, many years ago, when there were 10 blue links…


It was a crazy time of discovery. Discovering all the sites that didn’t meet your intent or your desired format, that is.

And then came Universal Search. It was announced in May of 2007 (by Marissa Mayer, if that gives it context) and rolled out just a couple months after they expanded on the personalization of results.

The two were connected and not just by being announced by the same person. They were connected in illustrating their continued push towards Google’s mission statement:

“Our mission is to organize the world’s information and make it universally accessible and useful.”

Think about those 10 blue links and what they offered. Certainly, they offered scope of information not accessible at any point in time prior, but they also offered a problematic depth of uncertainty.

Black hats aside (and there were a lot more of them then), you clicked a link in the hope that you understood what was on the other side of that click, and we wrote titles and descriptions that hopefully described what we had to offer.

A search was a painful process; we just didn’t know it, because it was better than anything we’d had prior.

Enter Universal Search

Then there was Universal Search. Suddenly the guesswork was reduced.

Before we continue, let’s watch a few minutes of a video put out by Google shortly after Universal Search launched.

The video starts at the point where they’re describing what they were seeing in the eye tracking of search results and illustrates what universal search looked like at the time.

 

OK – notwithstanding that this was a core Google video, discussing a major new Google feature and it has (at the time of writing) 4,277 views and two peculiar comments – this is an excellent look at the “why” of Universal Search as well as an understanding of what it was at the time, and how much and how little it’s changed.

How Does It Present Itself?

We saw a lot of examples of Universal Search in my article on How Search Engines Display Search Results.

There we focused on the layout itself and where each section comes from; here we’re discussing more of the why and how of it.

At a root level and as we’ve all seen, Universal Search presents itself as sections on a webpage that stand apart from the 10 blue links. They are often, but not always, organically generated (though I suspect they are always organically driven).

This is to say, whether a content block exists would be handled on the organic search side, whereas what’s contained in that content block may-or-may-not include ads.

So, let’s compare then versus now, ignoring cosmetic changes and just looking at what the same result would look like with and without Universal Search by today’s SERP standards.

sej civil war serp

This answers two questions in a single image.

It answers the key question of this section, “How does Universal Search present itself?”

This image also does a great job of answering the question, “Why?”

Imagine the various motivations I might have to enter the query [what was the civil war]. I may be:

  • A high school student doing an essay.
  • Someone who simply is not familiar with the historic event.
  • Looking for information on the war itself or my query may be part of a larger dive into civil wars across nations or wars in general.
  • Someone who prefers articles.
  • Someone who prefers videos.
  • Just writing an unrelated SEO article and need a good example.

The possibilities are virtually endless.

If you look at the version on the right, which link would you click?

How about if you prefer video results?

The decision you make will take you longer than it likely does with Universal Search options.

And that’s the point.

The Universal Search structure makes decision making faster across a variety of intents, while still leaving the blue links (though not always 10 anymore) available for those looking for pages on the subject.

In fact, even if what you’re looking for exists in an article, the simple presence of Universal Search results helps filter out the results you don’t want and leaves SEO pros and website owners free to focus our articles on ranking in the traditional search results, and our other content types and formats on the appropriate sections.

How Does Google Pick the Sections?

Let me begin this section by stating very clearly: this is my best guess.

As we’re all aware, Google’s systems are incredibly complicated. There may be more pieces than I am aware of, obviously.

There are two core areas I can think of that they would use for these adjustments.

Users

Now, before you say, “But Google says they don’t use user metrics to adjust search results!” let’s consider the specific wording that Google’s John Mueller used when responding to a question on user signals:

“… that’s something we look at across millions of different queries, and millions of different pages, and kind of see in general is this algorithm going the right way or is this algorithm going in the right way.

But for individual pages, I don’t think that’s something worth focusing on at all.”

So, they do use the data. They use it on their end, but not to rank individual pages.

What you can take from this, as it relates to Universal Search, is that Google will test different blocks of data for different types of queries to determine how users interact with them. It is very likely that Bing does something similar.

Most certainly they pick locations for possible placements, set limits on the number of different result types/databases, and determine starting points (think: templates for specific query types) for their processes, and then simply let machine learning take over, running slight variances or testing layouts on pages generated for unknown queries, or for queries where new associations may be attained.

For example, a spike in a query that ties to a sudden rise in new stories related to the query could trigger the news carousel being inserted into the search results, provided that past similar instances produced a positive engagement signal and it would remain as long as user engagement indicated it.

Query Data

It is virtually a given that a search engine would use their own query data to determine which sections to insert into the SERPs.

If a query like [pizza] has suggested queries like:

Recommended Pizza Searches

Implying that most such searchers are looking for restaurants, it makes sense that in a Universal Search structure, the first organic result would not be a blue link but:

Pizza SERP

It is very much worth remembering that the goal of a search engine is to provide a single location where a user can access everything they are looking for.

At times this puts them in direct competition with themselves in some ways. Not that I think they mind losing traffic to another of their own properties.

Let’s take YouTube for example. Google’s systems will understand not just which YouTube videos are most popular but also which are watched through, when people eject, skip or close out, etc.

They can use this not just to understand which videos are likely to resonate on Google.com but also understand more deeply what supplemental content people are interested in when they search for more general queries.

I may search for [civil war], but that doesn’t mean I’m not also interested in the Battle at Antietam specifically.

So, I would suggest that these other databases do not simply shape the layouts illustrated in Universal Search; they can be, and likely are, used to connect topics and information together, and thus influence the core search rankings themselves.

Takeaway

So, what does this all mean for you?

For one, you can use the machine learning systems of the search engines to assist in your content development strategies.

Sections you see appearing in Universal Search tell us a lot about the types and formats of content that users expect or engage with.

Also important is that devices and technology are changing rapidly. I suspect the idea of Universal Search is about to go through a dramatic transformation.

This is due in part to voice search, but I suspect it will have more to do with the push by Google to provide a solution rather than options.

A few well-placed filters could provide refinement that produces only a single result and many of these filters could be automatically applied based on known user preferences.

I’m not sure we’ll get to a single result in the next two to three years but I do suspect that we will see it for some queries and where the device lends itself to it.

If I query “weather” why would the results page not look like:

Weather SERP

In my eyes, this is the future of Universal Search.

Or, as I like to call it, search.

Categorized in Search Engine

[Source: This article was Published in techcrunch.com By Catherine Shu - Uploaded by the Association Member: Jay Harris]

Earlier this week, music lyrics repository Genius accused Google of lifting lyrics and posting them on its search platform. Genius told the Wall Street Journal that this caused its site traffic to drop. Google, which initially denied wrongdoing but later said it was investigating the issue, addressed the controversy in a blog post today. The company said it will start including attribution to its third-party partners that provide lyrics in its information boxes.

When Google was first approached by the Wall Street Journal, it told the newspaper that the lyrics it displays are licensed by partners and not created by Google. But some of the lyrics (which are displayed in information boxes or cards called “Knowledge Panels” at the top of search results for songs) included Genius’ Morse code-based watermarking system. Genius said that over the past two years it repeatedly contacted Google about the issue. In one letter, sent in April, Genius told Google it was not only breaking the site’s terms of service but also violating antitrust law—a serious allegation at a time when Google and other big tech companies are facing antitrust investigations by government regulators.

After the WSJ article was first published, Google released a statement that said it was investigating the problem and would stop working with lyric providers who are “not upholding good practices.”

In today’s blog post, Satyajeet Salgar, a group product manager at Google Search, wrote that the company pays “music publishers for the right to display lyrics since they manage the rights to these lyrics on behalf of songwriters.” Because many music publishers license lyrics text from third-party lyric content providers, Google works with those companies.

“We do not crawl or scrape websites to source these lyrics. The lyrics you see in information boxes on Search come directly from lyrics content providers, and they are updated automatically as we receive new lyrics and corrections on a regular basis,” Salgar added.

These partners include LyricFind,  which Google has had an agreement with since 2016. LyricFind’s chief executive told the WSJ that it does not source lyrics from Genius.

While Salgar’s post did not name any companies, he addressed the controversy by writing “news reports this week suggested that one of our lyrics content providers is in a dispute with a lyrics site about where their written lyrics come from. We’ve asked our partner to investigate the issue to ensure that they’re following industry best practices in their approach.”

In the future, Google will start including attribution to the company that provided the lyrics in its search results. “We will continue to take an approach that respects and compensates rights-holders, and ensures that music publishers and songwriters are paid for their work,” Salgar wrote.

Genius, which launched as Rap Genius in 2009, has been at loggerheads with Google before. In 2013, an SEO trick Rap Genius used to place itself higher in search results ran afoul of Google’s web spam team. Google retaliated by burying Rap Genius links under pages of other search results. The conflict was resolved after less than two weeks, but during that time Rap Genius’ traffic plummeted dramatically.

Categorized in Search Engine

[This article is originally published in searchenginejournal.com written by Matt Southern - Uploaded by AIRS Member: Jeremy Frink]

Google published a 30-page white paper with details about how the company fights disinformation in Search, News, and YouTube.

Here is a summary of key takeaways from the white paper.

What is Disinformation?

Everyone has different perspectives on what is considered disinformation, or “fake news.”

Google says it becomes objectively problematic to users when people make deliberate, malicious attempts to deceive others.

“We refer to these deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web as ‘disinformation.’”

So that’s what the white paper means by the term disinformation.

How Does Google Fight Disinformation?

Google admits it’s challenging to fight disinformation because it’s near-impossible to determine the intent behind a piece of content.

The company has designed a framework for tackling this challenge, which is comprised of the following three strategies.

1. Make content count

Information is organized by ranking algorithms, which are geared toward surfacing useful content and not fostering ideological viewpoints.

2. Counteract malicious actors

Algorithms alone cannot verify the accuracy of a piece of content. So Google has invested in systems that can reduce spammy behaviors at scale. It also relies on human reviews.

3. Give users more context

Google provides more context to users through mechanisms such as:

  • Knowledge panels
  • Fact-check labels
  • “Full Coverage” function in Google News
  • “Breaking News” panels on YouTube
  • “Why this ad” labels on Google Ads
  • Feedback buttons in search, YouTube, and advertising products

Fighting Disinformation in Google Search & Google News

As SEOs, we know Google uses ranking algorithms and human evaluators to organize search results.

Google’s white paper explains this in detail for those who may not be familiar with how search works.

Google notes that Search and News share the same defenses against spam, but they do not employ the same ranking systems and content policies.

For example, Google Search does not remove content except in very limited circumstances. Whereas Google News is more restrictive.

Contrary to popular belief, Google says, there is very little personalization in search results based on users’ interests or search history.

Fighting Disinformation in Google Ads

Google looks for and takes action against attempts to circumvent its advertising policies.

Policies to tackle disinformation on Google’s advertising platforms are focused on the following types of behavior:

  • Scraped or unoriginal content: Google does not allow ads for pages with insufficient original content, or pages that offer little to no value.
  • Misrepresentation: Google does not allow ads that intend to deceive users by excluding relevant information or giving misleading information.
  • Inappropriate content: Ads are not allowed for shocking, dangerous, derogatory, or violent content.
  • Certain types of political content: Ads for foreign influence operations are removed and the advertisers’ accounts are terminated.
  • Election integrity: Additional verification is required for anyone who wants to purchase an election ad on Google in the US.

Fighting Disinformation on YouTube

Google has strict policies to keep content on YouTube unless it is in direct violation of its community guidelines.

The company is more selective of content when it comes to YouTube’s recommendation system.

Google aims to recommend quality content on YouTube while less frequently recommending content that may come close to, but not quite, violating the community guidelines.

Content that could misinform users in harmful ways, or low-quality content that may result in a poor experience for users (like clickbait), is also recommended less frequently.

More Information

For more information about how Google fights disinformation across its properties, download the full PDF here.

Categorized in Search Engine