
Google is enhancing its Collections in Search feature, making it easy to revisit groups of similar pages.

Similar to the activity cards in search results, introduced last year, Google’s Collections feature allows users to manually create groups of like pages.

Now, using AI, Google will automatically group together similar pages in a collection. This feature is compatible with content related to activities like cooking, shopping, and hobbies.

 


This upgrade to collections will be useful in the event you want to go back and look at pages that weren’t manually saved. Mona Vajolahi, a Google Search Product Manager, states in an announcement:

“Remember that chicken parmesan recipe you found online last week? Or that rain jacket you discovered when you were researching camping gear? Sometimes when you find something on Search, you’re not quite ready to take the next step, like cooking a meal or making a purchase. And if you’re like me, you might not save every page you want to revisit later.”

These automatically generated collections can be saved to keep forever, or disregarded if not useful. They can be accessed any time from the Collections tab in the Google app, or through the Google.com side menu in a mobile browser.

 

Once a collection is saved, users can tap the “Find More” button to discover even more similar pages. Google is also adding a collaboration feature that allows users to share collections and create them together with other people.

Auto-generated collections will start to appear for US English users this week. The ability to see related content will launch in the coming weeks.

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Logan Hochstetler]

Categorized in Search Engine

It’s not paid inclusion, but it is paid occlusion

Happy Friday to you! I have been reflecting a bit on the controversy du jour: Google’s redesigned search results. Google is trying to foreground sourcing and URLs, but in the process it made its results look more like ads, or vice versa. Bottom line: Google’s ads just look like search results now.

I’m thinking about it because I have to admit that I don’t personally hate the new favicon-plus-URL structure. But I think that might be because I am not a normal consumer of web content. I’ve been on the web since the late ‘90s and I parse information out of URLs kind of without thinking about it. (In fact, the relative decline of valuable information getting encoded into the URL is a thing that makes me sad.)

 

I admit that I am not a normal user. I set up custom Chrome searches and export them to my other browsers. I know what SERP means and the term kind of slips out in regular conversation sometimes. I have opinions about AMP and its URL and caching structure. I’m a weirdo.

To that weirdo, Google’s design makes perfect sense, and it’s possible it might do the same for regular folk. The new layout for search results is ugly at first glance — but then Google was always ugly until relatively recently. I very quickly learned to unconsciously take in the favicon and URL-esque info at the top without it really distracting me.

...Which is basically the problem. Google’s using that same design language to identify its ads instead of much more obvious, visually distinct methods. It’s consistent, I guess, but it also feels deceptive.

Recode’s Peter Kafka recently interviewed Buzzfeed CEO Jonah Peretti, and Peretti said something really insightful: what if Google’s ads really aren’t that good? What if Google is just taking credit for clicks on ads because people would have been searching for that stuff anyway? I’ve been thinking about it all day: what if Google ads actually aren’t that effective, and the only reason they make so much money is that billions of people use Google?

The pressure to make them more effective would be fairly strong, then, wouldn’t it? And it would get increasingly hard to resist that pressure over time.

I am old enough to remember using the search engines before Google. I didn’t know how bad their search technology was compared to what was to come, but I did have to bounce between several of them to find what I wanted. Knowing what was a good search for WebCrawler and what was good for Yahoo was one of my Power User Of The Internet skills.

So when Google hit, I didn’t immediately realize how powerful and good the PageRank technology was. What I noticed right away was that I could trust the search results to be “organic” instead of paid, and that there were no dark patterns tricking me into clicking on an ad.

One of the reasons Google won search in the first place with old people like me was that in addition to its superior technology, it drew a harder line against allowing paid advertisements into its search results than its competitors.

With other search engines, there was the problem of “paid inclusion,” which is the rare business practice that does exactly what the phrase means. You never really knew if what you were seeing was the result of a web-crawling bot or a business deal.

This new ad layout doesn’t cross that line, but it’s definitely problematic and it definitely reduces my trust in Google’s results. It’s not so much paid inclusion as paid occlusion.

Today, I still trust Google to not allow business dealings to affect the rankings of its organic results, but how much does that matter if most people can’t visually tell the difference at first glance? And how much does that matter when certain sections of Google, like hotels and flights, do use paid inclusion? And how much does that matter when business dealings very likely do affect the outcome of what you get when you use the next generation of search, the Google Assistant?

 

And most of all: if Google is willing to visually muddle ads, how long until its users lose trust in the algorithm itself? With this change, Google is becoming what it once sought to overcome: AltaVista.


[Source: This article was published in theverge.com By Barry Schwartz - Uploaded by the Association Member: James Gill]

Categorized in Search Engine

Google has started testing a feature that will display the search query in the Chrome address bar rather than the actual page's URL when performing searches on Google.

This experimental feature is called "Query in Omnibox" and has been available as a flag in Google Chrome since Chrome 71, but is disabled by default.

In a test being conducted by Google, this feature is being enabled for some users and will cause the search keyword to be displayed in the browser's address bar, or Omnibox, instead of the URL that you normally see. 


Query in Omnibox enabled

In BleepingComputer's tests, this feature only affects searches on Google and does not affect any other search engine.

 

When this feature is not enabled, Google will display the URL of the search in the Omnibox as you would expect. This allows you to not only properly identify the site you are on, but also to easily share the search with another user.

experiment-disabled.jpg

Query in Omnibox disabled

For example, to see the above search, you can just copy the https://www.google.com/search?q=test link from the address bar and share it with someone else.
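That works because, when the feature is disabled, the Omnibox shows nothing more than the query urlencoded into Google’s /search endpoint. Here is a minimal Python sketch (standard library only) of that round trip:

from urllib.parse import urlencode, urlparse, parse_qs

# Build the shareable search URL, as shown in the Omnibox when
# "Query in Omnibox" is disabled.
query = "test"
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)  # https://www.google.com/search?q=test

# Recover the query from a URL copied via the 'Show URL' option.
print(parse_qs(urlparse(url).query)["q"][0])  # test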

With the Query in Omnibox feature enabled, though, if you copy the search keyword it will just copy that keyword into the clipboard rather than the site's URL. If you want to access the URL, you need to right-click on the keyword and select 'Show URL'.


Google is eroding the URL

Google has made it clear that they do not think that the URL is very useful to users.

In a Wired interview, Adrienne Porter Felt, Chrome's engineering manager, explained that Google wants to change how URLs are displayed in Chrome because people have a hard time understanding them.

 

"People have a really hard time understanding URLs. They’re hard to read, it’s hard to know which part of them is supposed to be trusted, and in general I don’t think URLs are working as a good way to convey site identity. So we want to move toward a place where web identity is understandable by everyone—they know who they’re talking to when they’re using a website and they can reason about whether they can trust them. But this will mean big changes in how and when Chrome displays URLs. We want to challenge how URLs should be displayed and question it as we’re figuring out the right way to convey identity."

Instead of removing them in one fell swoop, Google is gradually eroding the various elements of a URL until there is nothing left.

We saw the beginning of this transition when Google Chrome 79 was released and it stopped displaying the www subdomain in URLs.


WWW subdomain removed from URL

In this next phase, they are testing the removal of URLs altogether from searches on Google, which, as everyone knows, is by far the most used web search engine.

What is next? The removal of URLs on other search engines or only showing a page title when browsing a web site?

All these questions remain to be answered, but could it be that Google is not wrong about URLs?

I was opposed to the removal of the trivial WWW subdomain from URLs for a variety of reasons, and now I don't even notice it's missing.

BleepingComputer has reached out to Google with questions about this test, but had not heard back as of yet.

 

[Source: This article was originally published in bleepingcomputer.com By Lawrence Abrams - Uploaded by the Association Member: Dana W. Jimenez]

Categorized in Search Engine

Earlier today, Google announced that it would be redesigning the redesign of its search results as a response to withering criticism from politicians, consumers and the press over the way in which search results displays were made to look like ads.

Google makes money when users of its search service click on ads. It doesn’t make money when people click on an unpaid search result. Making ads look like search results makes Google more money.

It’s also a pretty evil (or at least unethical) business decision by a company whose mantra was “Don’t be evil” (although they gave that up in 2018).

 

 

Users began noticing the changes to search results last week, and at least one user flagged the changes earlier this week.

There's something strange about the recent design change to google search results, favicons and extra header text: they all look like ads, which is perhaps the point?

 
Google responded with a bit of doublespeak from its corporate account about how the redesign was intended to achieve the opposite effect of what it was actually doing.

“Last year, our search results on mobile gained a new look. That’s now rolling out to desktop results this week, presenting site domain names and brand icons prominently, along with a bolded ‘Ad’ label for ads,” the company wrote.

Senator Mark Warner (D-VA) took a break from impeachment hearings to talk to The Washington Post about just how bad the new search redesign was.

“We’ve seen multiple instances over the last few years where Google has made paid advertisements ever more indistinguishable from organic search results,” Warner told the Post. “This is yet another example of a platform exploiting its bottleneck power for commercial gain, to the detriment of both consumers and also small businesses.”

Google’s changes to its search results happened despite the fact that the company is already being investigated by every state in the country for antitrust violations.

 

For Google, the rationale is simple. The company’s advertising revenues aren’t growing the way they used to, and the company is looking at a slowdown in its core business. To try and juice the numbers, dark patterns present an attractive way forward.

Indeed, Google’s using the same tricks that it once battled to become the premier search service in the U.S. When the company first launched its search service, ads were clearly demarcated and separated from actual search results returned by Google’s algorithm. Over time, the separation between what was an ad and what wasn’t became increasingly blurred.

 

Color fade: A history of Google ad labeling in search results http://selnd.com/2adRCdU 

 
“Search results were near-instant and they were just a page of links and summaries – perfection with nothing to add or take away,” user experience expert Harry Brignull (and founder of the watchdog website darkpatterns.org) said of the original Google search results in an interview with TechCrunch.

“The back-propagation algorithm they introduced had never been used to index the web before, and it instantly left the competition in the dust. It was proof that engineers could disrupt the rules of the web without needing any suit-wearing executives. Strip out all the crap. Do one thing and do it well.”

“As Google’s ambitions changed, the tinted box started to fade. It’s completely gone now,” Brignull added.

In its latest statement, the company acknowledged that the experiment might have gone too far and noted that it will “experiment further” on how it displays results.

 [Source: This article was published in techcrunch.com By Jonathan Shieber - Uploaded by the Association Member: Joshua Simon]

Categorized in Search Engine

Ever had to search for something on Google, but you’re not exactly sure what it is, so you just use some language that vaguely implies it? Google’s about to make that a whole lot easier.

Google announced today it’s rolling out a new machine learning-based language understanding technique called Bidirectional Encoder Representations from Transformers, or BERT. BERT helps decipher your search queries based on the context of the language used, rather than individual words. According to Google, “when it comes to ranking results, BERT will help Search better understand one in 10 searches in the U.S. in English.”

 

Most of us know that Google usually responds to words, rather than to phrases — and Google’s aware of it, too. In the announcement, Pandu Nayak, Google’s VP of search, called this kind of searching “keyword-ese,” or “typing strings of words that they think we’ll understand, but aren’t actually how they’d naturally ask a question.” It’s amusing to see these kinds of searches — heck, Wired has made a whole cottage industry out of celebrities reacting to these keyword-ese queries in their “Autocomplete” video series — but Nayak’s correct that this is not how most of us would naturally ask a question.

As you might expect, this subtle change might make some pretty big waves for potential searchers. Nayak said this “[represents] the biggest leap forward in the past five years, and one of the biggest leaps forward in the history of Search.” Google offered several examples of this in action, such as “Do estheticians stand a lot at work,” which apparently returned far more accurate search results.

I’m not sure if this is something most of us will notice — heck, I probably wouldn’t have noticed if I hadn’t read Google’s announcement, but it’ll sure make our lives a bit easier. The only reason I can see it not having a huge impact at first is that we’re now so used to keyword-ese, which is in some cases more economical to type. For example, I can search “What movie did William Powell and Jean Harlow star in together?” and get the correct result (Libeled Lady; not sure if that’s BERT’s doing or not), but I can also search “William Powell Jean Harlow movie” and get the exact same result.

 

For now, BERT will only be applied to English-language searches in the US, but Google is apparently hoping to roll it out to more countries soon.

[Source: This article was published in thenextweb.com By RACHEL KASER - Uploaded by the Association Member: Dorothy Allen]

Categorized in Search Engine

Google confirmed an update affecting local search results has now fully rolled out, a process that began in early November.


In what’s been called the November 2019 Local Search Update, Google is now applying neural matching to local search results. To explain neural matching, Google points to a tweet published earlier this year that describes it as a super-synonym system.

That means neural matching allows Google to better understand the meaning behind queries and match them to the most relevant local businesses – even if the keywords in the query are not specifically included in the business name and description.

“The use of neural matching means that Google can do a better job going beyond the exact words in business name or description to understand conceptually how it might be related to the words searchers use and their intents.”

 

In other words, some business listings might now be surfaced for queries they wouldn’t have shown up for prior to this update. Hopefully, that proves to be a good thing.

Google notes that, although the update has finished rolling out, local search results as they are displayed now are not set in stone by any means. Like regular web searches, results can change over time.

Google has not stated to what extent local search results will be impacted by this update, though it was confirmed this is a global launch across all countries and languages.

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Jasper Solander]

Categorized in Search Engine

Google is the search engine that most of us know and use, so much so that the word Google has become synonymous with search. As of September 2019, the search engine giant had captured 92.96% of the market share. That’s why it has become all the more important for businesses to rank well in Google search results if they want to be noticed. That’s where SERP or “Search Engine Results Page” scraping can come in handy. Whenever a user searches for something on Google, they get a SERP result which consists of paid Google Ads results, featured snippets, organic results, videos, product listings, and things like that. Tracking these SERP results using a service like Serpstack is necessary for businesses that either want to rank their products or help other businesses to do the same.

 

Manually tracking SERP results is next to impossible, as they vary widely depending on the search query, the origin of the query, and a plethora of other factors. Also, the number of listings in a single search query is so high that manual tracking makes no sense at all. Serpstack, on the other hand, is an automated Google Search Results API that can scrape real-time, accurate SERP data and present it in an easy-to-consume format. In this article, we are going to take a brief look at Serpstack to see what it brings to the table and how it can help you track SERP results data for keywords and queries that are important for your business.

Serpstack REST API for SERP Data: What Does It Bring?

Serpstack’s JSON REST API for SERP data is fast and reliable and always gives you real-time, accurate search results data. The service is trusted by some of the largest brands in the world. The best part about Serpstack, apart from its reliable data, is that it can scrape Google search results at scale. Whether you need one thousand or one million results, Serpstack can handle it with ease. Not only that, but Serpstack also brings built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs, so you as a user don’t have to worry about anything.
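To give a sense of how the API is used, here is a minimal Python sketch of a request to Serpstack’s search endpoint. The endpoint, the access_key and query parameters, and the organic_results field are assumptions based on Serpstack’s documentation as best I recall it, and the key itself is a placeholder.

import requests

API_KEY = "YOUR_SERPSTACK_ACCESS_KEY"  # placeholder; real keys come from the Serpstack dashboard

params = {
    "access_key": API_KEY,
    "query": "google search results api",  # the search term to scrape
    # Optional parameters such as location, language and device can be
    # added here to tailor the query, per the feature list below.
}

# Note: the free plan does not include HTTPS, so free-tier keys may need
# the plain http:// endpoint instead.
response = requests.get("https://api.serpstack.com/search", params=params)
data = response.json()

# Print the position, title and URL of each organic result returned.
for result in data.get("organic_results", []):
    print(result.get("position"), result.get("title"), result.get("url"))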


If you decide to give the Serpstack REST API a chance, here are the main features that you can expect from this service:

  • Serpstack is scalable and queueless thanks to its powerful cloud infrastructure, which can withstand high-volume API requests without the need for a queue.
  • The search queries are highly customizable. You can tailor your queries based on a series of options including location, language, device, and more, so you get the data that you need.
  • Built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs.
  • Integration is simple. You can start scraping SERP pages at scale within a few minutes of logging into the service.
  • Serpstack features bank-grade 256-bit SSL encryption for all its data streams. That means your data is always protected.
  • An easy-to-use REST API responding in JSON or CSV, compatible with any programming language.
  • With Serpstack, you get super-fast scraping speeds. All API requests sent to Serpstack are processed in a matter of milliseconds.
  • Clear API documentation that shows you exactly how to use the service. It makes the service beginner-friendly, so you can get started even if you have never used a SERP scraping service before.

Looking at the feature list above, I hope you can understand why Serpstack is one of the best, if not the best, SERP scraping services on the market. I am especially impressed by its scalability, incredibly fast speed, and built-in privacy and security protocols. However, there’s one more thing we have not discussed yet that pushes it to the top spot for me, and that is its pricing. That’s what we are going to discuss in the next section.

 

Pricing and Availability

Serpstack’s pricing is what makes it accessible to individuals as well as small and large businesses. It offers a capable free version which should serve the needs of most individuals and even smaller businesses. If you are operating a larger business that requires more, you have various pricing plans to choose from depending on your requirements. Talking about the free plan first, the best part is that it’s free forever and there are no underlying hidden charges. The free version gets you 100 searches/month with access to global locations, proxy networks, and all the main features. The only big missing feature is HTTPS encryption.


Once you are ready to pay, you can start with the Basic plan, which costs $29.99/month ($23.99/month if billed annually). In this plan, you get 5,000 searches/month along with the features missing from the free plan. I think this plan should be enough for most small to medium-sized businesses. However, if you require more, there’s a Business plan at $99.99/month ($79.99/month if billed annually) which gets you 20,000 searches, and a Business Pro plan at $199.99/month ($159.99/month if billed annually) which gets you 50,000 searches per month. There’s also a custom pricing solution for companies that require a tailored pricing structure.

Serpstack Makes Google Search Results Scraping Accessible

SERP scraping is important if you want to compete in today’s world. Seeing which queries fetch which results is an important step in determining the list of your competitors. Once you know them, you can devise an action plan to compete with them. Without SERP data, your business will be at a big disadvantage in the online world. So, use Serpstack to scrape SERP data so you can build a successful online business.

[Source: This article was published in beebom.com By Partner Content - Uploaded by the Association Member: Dorothy Allen]

Categorized in Search Engine

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Edna Thomas]

Google is giving site owners the ability to customize how their content is previewed in search results.

By default, Google has always generated search snippets according to the users’ queries and what types of devices they’re using.

However, there was previously no room for customization – it was only possible to allow a textual snippet or not allow one.

Now, Google is introducing multiple methods that allow for more fine-grained configuration of the preview content shown for web pages.

 

These methods include using robots meta tags as well as a brand new type of HTML attribute. Here’s more information about each of these methods.

Configuring Search Snippets With Robots Meta Tags

The content shown in search snippet previews can now be configured using robots meta tags.

The following robots meta tags can be added to an HTML page’s head section, or specified via the x-robots-tag HTTP header:

  • “nosnippet” – This is an existing option to specify that you don’t want any textual snippet shown for a page.
  • “max-snippet:[number]” (NEW) – Specify a maximum text-length, in characters, of a snippet for your page.
  • “max-video-preview:[number]” (NEW) – Specify a maximum duration in seconds of an animated video preview.
  • “max-image-preview:[setting]” (NEW) – Specify a maximum size of image preview to be shown for images on this page, using either “none”, “standard”, or “large”.

The above robots meta tags can also be combined, for example:
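As a rough sketch, assuming a Flask app (the route, the limits, and the page markup are hypothetical), the same comma-separated directives can be placed in a single robots meta tag in the page’s head or sent via the x-robots-tag HTTP header mentioned above:

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/article")
def article():
    # Combined preview directives: snippets capped at 160 characters,
    # large image previews allowed, video previews capped at 30 seconds.
    directives = "max-snippet:160, max-image-preview:large, max-video-preview:30"
    html = (
        "<html><head>"
        f'<meta name="robots" content="{directives}">'
        "</head><body>...</body></html>"
    )
    resp = make_response(html)
    # The same directives can be delivered as an HTTP header instead of
    # (or in addition to) the meta tag.
    resp.headers["X-Robots-Tag"] = directives
    return resp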

New data-nosnippet HTML attribute

Google is introducing an all-new way to limit which part of a page can be shown as a preview in search results.

The new “data-nosnippet” HTML attribute on span, div, and section elements can prevent specific parts of an HTML page from being shown within the textual snippet in search results.

In other words – if you want to prevent Google from giving away too much of your content in search results, this is the method you want to use.

Here’s an example:

<span data-nosnippet>Harry Houdini</span> is undoubtedly the most famous magician ever to live.

In this example, if someone were searching for a query like “most famous magician,” the HTML attribute would prevent Google from giving away the answer (Harry Houdini) in search results.

What SEOs and Site Owners Need to Know

Here’s a rundown of need-to-know information regarding these changes.

No changes to search rankings
This update will only affect how snippets are displayed in search results. Google confirms these settings will have no impact on search rankings.

Depending on how a site owner chooses to configure these settings there may be an impact on CTR, which could then impact traffic. But that is not related to search rankings.

When do these changes come into effect?
Preview settings for robots meta tags will become effective in mid-to-late October 2019. It may take a week for the global rollout to be completed once it starts.

The data-nosnippet HTML attribute will be effective later this year. No specific timeframe was provided for that particular setting.

 

Will these new changes affect how rich results are displayed?
Content in structured data that is eligible for display as a rich result will not be affected by any of these new settings.

Site owners already have control over the content displayed in rich results by what they choose to include in the structured data itself.

How will these changes affect featured snippets?
Featured snippets depend on the availability of preview content. So if you limit the preview content too heavily it may no longer be eligible to be displayed as a featured snippet, although it could still be displayed as a regular snippet.

The minimum number of characters required for a featured snippet varies by language, which is why Google cannot provide an exact max-snippet length to ensure eligibility.

Can site owners experiment with snippet length?
Site owners can absolutely adjust these settings at any time. For example – if you specify a max-snippet length and later decide you’d rather display a longer snippet in search results, you can simply change the meta tag.

Google notes that these new methods of configuring search snippet previews will operate the same as other results displayed globally. If the settings are changed, then your new preferences will be displayed in search results the next time Google recrawls the page.

Google will 100% follow these settings
These new settings will not be treated as hints or suggestions. Google will fully abide by the site owner’s preferences as specified in the robots meta tags and/or HTML attribute.

No difference between desktop and mobile settings
Preview preferences will be applied to both mobile and desktop search results. If a site has separate mobile and desktop versions then the same markup should be used on both.

Some last notes

These options are available to site owners now, but the changes will not be reflected in search results until mid-to-late October at the earliest.

For more information, see Google’s developer documentation on meta tags.

Categorized in Search Engine

[Source: This article was published in gritdaily.com By Faisal Quyyumi - Uploaded by the Association Member: Jason bourne]

A recent study conducted by Yext and Forbes shows consumers only believe 50 percent of their search results when looking up information about brands.

Yext is a New York City technology company focusing on online brand management and Forbes, of course, is a business magazine. Over 500 consumers in the United States were surveyed for the study.

 

FINDINGS

57 percent of those in the study avoid search engines and prefer to visit the brand’s official website because they believe it is more accurate.

50 percent of those surveyed use third-party sites and applications to learn more about brands. 48 percent believe a brand’s website is their most reliable source.

20 percent of “current and new customers trust social media sites to deliver brand information,” according to Search Engine Journal. 28 percent of buyers avoid buying from a certain brand after they have received inaccurate information.

WHY DON’T THEY BUY?

A few reasons why consumers do not buy from a brand include unsatisfactory customer service, excessive requests for information, and a company website that is not easy to navigate.

Marc Ferrentino, Chief Strategy Officer of Yext, said: “Our research shows that regardless of where they search for information, people expect the answers they find to be consistent and accurate – and they hold brands responsible to ensure this is the case.”

The study says customers look at a brand’s website and search engine results for information. This information includes customer service numbers, hours, events, and a brand’s products.

A BETTER WAY TO MARKET ONLINE

The three best practices that brands can use to give customers a seamless experience are to maintain, guarantee and monitor.

The company should maintain up-to-date, accurate information on its website, along with an easy-to-use search function. The study also tells brands to “guarantee searches return high-quality results by ensuring that tools like Google My Business and other directories have updated and correct information.” Lastly, a brand needs to be active and respond to questions and posts online on social media, corporate websites and review sites.

Companies are doing their best to keep up with consumer expectations for an authentic experience.

Many people use third-party sites such as Google, Bing or Yelp because they are able to compare and categorize numerous products at once.

CONSUMERS HESITATE

New users and consumers are often hesitant and require time to build trust with a company, whereas current customers have confidence in the brand and help by writing positive reviews. 45 percent of customers “say they are usually looking for customer reviews of brands of products when they visit a third-party site” (Forbes).

Reviews determine whether consumers will avoid buying a product or if they want to continue interacting with the vendor.

True Value Company, an American wholesaler, is changing its marketing strategy to adapt to a more Internet-based audience. “We’ve made significant technology investments – including re-platforming our website – to back that up and support our brick and mortar stores for the online/offline world in which consumers live,” said David Elliot, the senior vice-president of marketing.

 

Despite branding on social media becoming more popular, it does not fall in the top 50 percent of most-trusted sources for brand information.

A 2008 study done by Forrester Research, an American market research company, shows how much consumers trust different information sources. The sources range from personal emails to the Yellow Pages to message board posts.

The most trusted is emails “from people you know” at 77 percent; followed by consumer product ratings/reviews at 60 percent and portal/search engines at 50 percent. The least trusted information source is a company blog at only 16 percent.

Corporate blogs are the least trusted information source among consumers, even though they should be the most reliable way for companies to express and share information with their audience.

The study shows the significance of a brand’s online marketing strategy. It is vital for companies to make sure their website looks like a trustworthy source.

Companies don’t need to stop blogging — but instead, have to do it in a trustworthy and engaging manner.


Categorized in Search Engine

[Source: This article was published in enca.com - Uploaded by the Association Member: Rene Meyer]

In this file illustration picture taken on July 10, 2019, the Google logo is seen on a computer in Washington, DC. 

SAN FRANCISCO - Original reporting will be highlighted in Google’s search results, the company said as it announced changes to its algorithm.

The world’s largest search engine has come under increasing criticism from media outlets, mainly because of its algorithms - a set of instructions followed by computers - which newspapers have often blamed for plummeting online traffic and the industry’s decline.

 

Explaining some of the changes in a blog post, Google's vice president of news Richard Gingras said stories that were critically important and labor intensive -- requiring experienced investigative skills, for example -- would be promoted.

Articles that demonstrated “original, in-depth and investigative reporting” would be given the highest possible rating by reviewers, he wrote on Thursday.

These reviewers - roughly 10,000 people whose feedback contributes to Google’s algorithm - will also determine the publisher’s overall reputation for original reporting, promoting outlets that have been awarded Pulitzer Prizes, for example.

It remains to be seen how such changes will affect news outlets, especially smaller online sites and local newspapers, who have borne the brunt of the changing media landscape.

And as noted by the technology website TechCrunch, it is hard to define exactly what original reporting is: many online outlets build on ‘scoops’ or exclusives with their own original information, a complexity an algorithm may have a hard time picking through.

The Verge - another technology publication - wrote the emphasis on originality could exacerbate an already frenetic online news cycle by making it lucrative to get breaking news online even faster and without proper verification.

The change comes as Google continues to face criticism for its impact on the news media.

Many publishers say the tech giant’s algorithms - which remain a source of mysterious frustration for anyone outside Google -- reward clickbait and allow investigative and original stories to disappear online.

Categorized in Search Engine