
These days, you can hardly get anything done without a good, working web browser, but what do you do when Google Chrome starts acting up? Here’s a guide to cleaning up some of Google Chrome’s most common issues, including slow loading, excess notifications, searches going through the wrong search engine, and more.

Note: This guide is only intended for Google Chrome for Windows, Mac, Linux, and Chrome OS. The instructions here do not apply to Chrome on smartphones, but do let us know in the comments or on Twitter if you’d like to see a similar guide for Android or iOS.

Google vs. Google Chrome

Before we dive in, I want to make sure we all understand that there’s a difference between “Google” and “Google Chrome.” “Google Chrome” is a web browser, the tool you use to view sites on the internet, including the one you’re on right now! Google Chrome is used for the same things as Safari, Microsoft Edge, and Firefox.

Meanwhile, “Google” is the company that makes the Chrome browser. Commonly, though, when folks think of “Google,” they’re thinking of the “Google.com” search engine.

More importantly, you can use Google as your search engine in other browsers like Edge and Safari. You do not need Google Chrome to use Google Search.

Similarly, just because you’re using Google Chrome does not mean your searches will always go through Google Search.

How to switch Chrome search engines

Are you looking to remove Yahoo from Google Chrome? What should you do if your searches in Chrome are going to Yahoo or Bing instead of Google? Sometimes people change this on purpose, simply because they prefer another search engine over Google, but other times a program will switch Google Chrome’s search engine to something like Yahoo without you noticing.

No matter how your search engine got changed, there are a few different things you can try to remove Yahoo or any other search engine and switch back to using Google Search in Chrome.

Method 1: Switch to Google

First, click the three dots menu button in the top-right corner of Google Chrome, then click Settings. On this page, scroll down to “Search engine.” Make sure that the setting labeled “Search engine used in the address bar” is set to “Google.”


If all goes well, your searches in the address bar should now default to Google Search.

Method 2: Reset to default settings

If switching the search engine manually doesn’t change things, the next step we recommend is to reset Google Chrome’s settings to default. Open the Settings page again, and on the left-hand side of the page, click “Advanced” then “Reset settings.”

Next, click on “Restore settings to their original defaults” and you’ll be asked to confirm that you really do want to reset your customized settings. Click the “Reset settings” button to confirm.


Once done, Google Chrome should be most of the way back to the way it was on day one. All of your Google Chrome extensions will still be installed, but they will be disabled after the reset. For help re-enabling them, see our guide to removing/disabling Chrome extensions below.

Method 3: Check for malware

If your searches in Chrome are still going to Yahoo or another search engine instead of Google, even after resetting to default, you’ll want to check for malware using a program like Malwarebytes or seek professional tech support.

How to update Google Chrome

There are many, many reasons why Google Chrome can act up or feel slow, but before we dive into more advanced methods, it’s important to cover the easiest thing to check first: whether or not you’re on the newest update. By default, Google Chrome should keep itself updated automatically, but sometimes this doesn’t happen. Here’s how to manually check whether Google Chrome is up to date.

Windows/macOS

First, click the three dots menu button in the top-right corner of Google Chrome, then hover over “Help” and click on “About Google Chrome.” When the next page opens, Google Chrome will immediately begin checking if you need a new update. Simply follow along with what it asks. Or, if you’re already up to date, you’ll see “Google Chrome is up to date.”


Chrome OS

Click the clock in the bottom-right corner of your screen; this will open the notification list and Chrome OS’s quick settings panel. In this panel, click on the gear icon to open the Settings app. On the left-hand side of the page, click on “About Chrome OS.”


At the center of the new page that opens, click the button labeled “Check for updates.” If an update is available, your Chromebook will begin downloading and installing it immediately, then prompt you to restart. If there’s no update available, you’ll see “Your Chromebook is up to date.”


That said, not every update to Chrome OS arrives for every device right away. Sometimes Google’s Pixelbook and Pixel Slate devices will get Chrome OS updates a few days earlier than others. It’s also important to check whether your Chromebook is still eligible for updates.

How to remove/disable Chrome extensions

When used wisely, extensions can be a fantastic way to add new features to your Google Chrome browser. However, some extensions have been shown to drastically slow down Google Chrome or even hijack your searches.

If your Google Chrome is acting weirdly or is being very slow, it’s probably time to look at your installed extensions and remove anything you don’t truly need. First, click the three dots menu button in the top-right corner of Google Chrome, then hover over “More tools,” and click “Extensions.”

In the page that opens, you’ll see a list of every extension you’ve installed for Google Chrome. Next to each extension, you’ll see a handy “Remove” button. After you click the button, a pop-up will appear asking if you’re sure you want to remove it. Click the “Remove” button on the pop-up to confirm.


We strongly recommend that you remove every extension that you don’t recognize, as any one of them could be the culprit behind Google Chrome running slow or causing other issues.

If Google Chrome is still slow or acting strangely, you can try disabling your other extensions one by one. Open the Extensions page, as described above, and at the bottom right of each extension, you’ll see a little switch. Click the switch to turn that extension on or off.


If you’ve removed or disabled all of your extensions and Google Chrome is still loading slowly or behaving strangely, you’ll likely want to check for malware using a program like Malwarebytes or seek professional tech support.

How to stop Chrome pop-ups and notifications

Notifications are without a doubt one of the most controversial additions to web browsers like Google Chrome in the last few years. On the one hand, notifications are necessary for the web to have the app-like experiences that developers have long dreamed of.

Conversely, some websites have abused notifications, making them one of the worst features of Google Chrome today. Luckily, it’s not too hard to turn off notifications for websites in Google Chrome.

First, remember that Google Chrome’s notifications are accepted on a per-site basis, which means you can turn off notifications from a bad website while still keeping notifications from Gmail or Twitter, if you so choose.

Click the three dots menu button in the top-right corner of Google Chrome, then click Settings. On the left-hand side of the page, click “Privacy and security,” then in the center of the page click “Site settings.”

On the page that opens, scroll down to “Permissions” and click on “Notifications.” At the top of this page, you’ll see a switch labeled “Sites can ask to send notifications.” If you turn this switch off, Google Chrome will never again ask if you want to receive notifications from any website.


However, this switch does nothing about the sites you’ve already agreed to receive notifications from. To turn those sites’ notifications off, scroll down to the section labeled “Allow.”

In the Allow section, you’ll find the list of websites that you’ve agreed to receive notifications from. Next to each of these, you’ll see a three dots menu button. To disable Google Chrome’s notifications for a particular site, click that menu button, followed by “Block.”


If you’re still receiving unwanted notifications from Google Chrome after cleaning out this list, your next step would be to try removing any extensions that may be misbehaving.

 [Source: This article was published in 9to5google.com By Kyle Bradshaw - Uploaded by the Association Member: Dana W. Jimenez]

Categorized in Search Engine

“I just want search to work like it does on Amazon and Google.” I can’t tell you how many times I’ve heard that lament from friends, clients and other search folks. Frustration and dissatisfaction are common emotions when it comes to enterprise search — that is, search within the firewall.

Google on the web makes search look easy: you type in a word or two, and you get a list of dozens, if not hundreds of relevant pages. We’d all like search like that for our web and internal repositories too.

But remember that at one point, Google offered an enterprise solution in a box: the Google Search Appliance (GSA). It was a large yellow Google-branded Dell server that would crawl and index internal content, respect security and deliver pretty good results quickly. And the Google logo was available on every page to remind users they were using Google search.

The GSA was marketed to partners and corporations from 2004 through early 2019, when it was removed from the market. The GSA delivered decent results, but they never lived up to user expectations. What went wrong?

Several IT managers have told me users had anticipated the quality of results to be “just like Google” — but the GSA just didn’t live up to their expectations. One search manager told me that simply adding the GSA logo to their existing non-Google search platform reduced user complaints by 40%.

I’m not proposing that you find a ‘Powered by Google’ graphic and simply add it to your search form. First, that’s misleading, and probably a violation of Google’s intellectual property. Second, your users will react to the quality of the results, not the search page logo.

One school of thought was that Google simply decided to focus on its primary business: delivering high-quality search on the web. In fact, the GSA just didn’t have access to the magic that makes Google’s web search so good: metadata.

It turns out that internal enterprise search is hard.

Upgrade Your User Search Experience

Partly because of its size and popularity, Google on the web takes advantage of the context available to it. That means the results you see may include queries used and pages that you have viewed in the past. But what really adds value is that Google will also include post-query behavior of other Google users who performed the same query.

The good news is you can likely improve your internal search results by implementing the same approach Google uses on the public web.

Your internal content brings some challenges of its own. On the web, there are often thousands of pages that are nearly identical: if Google’s web search shows you any one of those near-duplicates, you’ll probably be satisfied. But behind the firewall, people are typically looking for a single page; and if search can’t find it, users complain.

Internal search comes with its own challenges; but it also has metadata that can be used to improve results. 

Almost all of the internal content we’ve seen with clients is secure. While parts of some repositories — think HR — are available across the organization, HR does have secure content such as payroll data, employee reviews, etc. that must not be available to all.

The Solution: Use the Context!

One of the differences between internet and intranet content is security, and it generally operates at two levels: the user and the content. Search should take both into account.

User Level Security

In a lot of enterprise environments, many, if not most, repositories apply user- or content-level security, and the fields involved can double as useful metadata. Fields that are available and make sense to include as user-level metadata may include:

Office location, department, and time zone

Role and title

Direct phone and email

Manager name and contact info

List of active clients and key accounts

Content Level Security

Fields that make sense as content-level metadata may include:

Access level

User behavior around the content, including queries, viewed results pages, and results saved, rejected, or ignored

This is really just a starting point: examine, experiment, and dive in!
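As an illustration, here is a minimal, hypothetical sketch (in Python) of how user-level and content-level metadata might be combined to filter and re-rank raw search hits. All field names and boost values are illustrative assumptions, not taken from any particular search platform:

    # Hypothetical sketch: filter and re-rank search hits using context metadata.

    def rerank(hits, user):
        # Content-level security: drop anything the user is not cleared to see.
        visible = [h for h in hits if user["access_level"] >= h["required_access"]]
        for h in visible:
            score = h["base_score"]
            # User-level metadata: boost content from the searcher's own department.
            if h.get("department") == user["department"]:
                score *= 1.5
            # Boost documents about the user's active clients / key accounts.
            if h.get("client") in user.get("active_clients", []):
                score *= 1.3
            h["score"] = score
        return sorted(visible, key=lambda h: h["score"], reverse=True)

    hits = [
        {"title": "Q3 payroll summary", "base_score": 2.0,
         "required_access": 3, "department": "HR"},
        {"title": "Acme onboarding plan", "base_score": 1.0,
         "required_access": 1, "department": "Sales", "client": "Acme"},
    ]
    user = {"department": "Sales", "access_level": 1, "active_clients": ["Acme"]}

    for hit in rerank(hits, user):
        print(round(hit["score"], 2), hit["title"])

Production engines such as Elasticsearch or Solr express the same idea as query-time filters and boosts rather than post-processing, but the principle is identical: context metadata narrows and re-orders the results.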

[Source: This article was published in cmswire.co By Miles Kehoe - Uploaded by the Association Member: Dana W. Jimenez]

Categorized in Internet Search

An update to Google Images creates a new way for site owners to drive traffic with their photos.

Google is adding more context to photos in image search results, which presents site owners with a new opportunity to earn traffic.

Launching this week, a new feature in Google Images surfaces quick facts about what’s being shown in photos.

Information about people, places or things related to the image is pulled from Google’s Knowledge Graph and displayed underneath photos when they’re clicked on.


More Context = More Clicks?

Google says this update is intended to help searchers explore topics in more detail.

One of the ways searchers can explore topics in more detail is by visiting the web page where the image is featured.

The added context is likely to make images more appealing to click on. It’s almost like Google added meta descriptions to image search results.

However, it’s not quite the same as that, because the images and the facts appearing underneath come from different sources.


Results in Google Images are sourced from sites all over the web, but the corresponding facts for each image are pulled from the Knowledge Graph.

In the examples shared by Google, you can see how the image comes from the website where it’s hosted while the additional info is taken from another source.

On one hand, that gives site owners little control over the information that is displayed under their images in search results.

On the other hand, Google is giving searchers more information about images that could potentially drive more clicks to the image source.

Perhaps the best part of this update is it requires no action on the part of site owners. Google will enhance your image search snippets all on its own.

Another Traffic Opportunity

If you’re fortunate enough to have content included in Google’s Knowledge Graph, then there are now more opportunities to have those links surfaced in search results.

Contrary to how it may seem at times, Wikipedia is not the only source of information in Google’s Knowledge Graph. Google draws from hundreds of sites across the web to compile billions of facts.

After all, there are over 500 billion facts about five billion entities in the Knowledge Graph – they can’t all come from Wikipedia.

An official Google help page states:

“Facts in the Knowledge Graph come from a variety of sources that compile factual information. In addition to public sources, we license data to provide information such as sports scores, stock prices, and weather forecasts.

We also receive factual information directly from content owners in various ways, including from those who suggest changes to knowledge panels they’ve claimed.”

As Google says, site owners can submit information to the Knowledge Graph by claiming a knowledge panel.

That’s not something everyone can do, however, as they either have to be an entity featured in a knowledge panel or represent one.

But this is still worth mentioning as it’s low-hanging fruit for those who have the opportunity to claim a knowledge panel and haven’t yet.

Claiming your business’s knowledge panel is a must-do if you haven’t done so already. Local businesses stand to gain the most from this update.

That’s especially true if yours is the sort of business that would have photos of it published on the web.

Then your Knowledge Graph information, with a link, could potentially be surfaced underneath those images.

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Deborah Tannen]

Categorized in Search Engine

Google has made some substantial new changes to their “How Google Search Works” documentation for website owners. And as always when Google changes important documents with impact on SEO, such as How Search Works and the Quality Rater Guidelines, there are some key insights SEOs can glean from the changes Google has made.

Of particular note: Google detailing how it views a “document” as potentially comprising more than one webpage, what Google considers primary and secondary crawls, as well as an update to the reference to “more than 200 ranking factors” that has been present in this document since 2013.

But here are the changes and what they mean for SEOs.


Crawling

Google has greatly expanded this section.

They made a slight change to wording, with “some pages are known because Google has already crawled them before” changed to “some pages are known because Google has already visited them before.”   This is a fairly minor change, primarily because Google decided to include an expanded section detailing what crawling actually is.

Google removed:

This process of discovery is called crawling.

The removal of the crawling definition was simply because it was redundant.  In Google’s expanded crawling section, they included a much more detailed definition and description of crawling instead.

The added definition:

Once Google discovers a page URL, it visits, or crawls, the page to find out what’s on it. Google renders the page and analyzes both the text and non-text content and overall visual layout to decide where it should appear in Search results. The better that Google can understand your site, the better we can match it to people who are looking for your content.

There is still great debate over how much page layout is taken into account. The page layout algorithm released many years ago penalized content pushed well below the fold in order to increase the odds a visitor might click on an advertisement that appeared above the fold instead. But with more traffic moving to mobile, and the addition of mobile-first indexing, the importance of above- versus below-the-fold layout seemingly diminished.

When it comes to page layout and mobile first, Google says:

Don’t let ads harm your mobile page ranking. Follow the Better Ads Standard when displaying ads on mobile devices. For example, ads at the top of the page can take up too much room on a mobile device, which is a bad user experience.

But in How Google Search Works, Google is specifically calling attention to the “overall visual layout” with “where it should appear in Search results.”

It also brings attention to “non-text” content. While the most obvious reading of this refers to image content, the reference is quite open ended. Could this refer to OCR as well, which we know Google has been dabbling in?

Improving Your Crawling

Under the “to improve your site crawling” section, Google has expanded this section significantly as well.

Google has added this point:

Verify that Google can reach the pages on your site, and that they look correct. Google accesses the web as an anonymous user (a user with no passwords or information). Google should also be able to see all the images and other elements of the page to be able to understand it correctly. You can do a quick check by typing your page URL in the Mobile-Friendly test tool.

This is a good point – so many new site owners end up accidentally blocking Googlebot from crawling, or don’t realize their site is set to be viewable only by logged-in users. It makes clear that site owners should try viewing their site while logged out of it, to see if there are any unexpected accessibility or other issues that aren’t noticeable when logged in as an admin or high-level user.

Also recommending site owners check their site via the Mobile-Friendly testing tool is good, since even seasoned SEOs use the tool to quickly see if there are Googlebot specific issues with how Google is able to see, render and crawl a specific webpage – or a competitor’s page.

Google expanded their specific note about submitting a single page to the index.

If you’ve created or updated a single page, you can submit an individual URL to Google. To tell Google about many new or updated pages at once, use a sitemap.

Previously, it just mentioned submitting changes to a single page using the submit URL tool. This adds clarification for those newer to SEO that they do not need to submit every single new or updated page to Google individually; using a sitemap is the best way to handle many pages at once. There have definitely been new site owners who add each page to Google using that tool because they don’t realize sitemaps exist. Part of this is that WordPress is such a prevalent way to create a new website, yet it does not have native support for sitemaps (yet), so site owners need to either install a dedicated sitemaps plugin or use one of the many SEO tool plugins that offer sitemaps as a feature.

This new change also highlights using the tool for creating pages as well, instead of just the previous reference of “changes to a single page.”

Google has also made a change to the section about “if you ask Google to crawl only one page.” They are now referencing what Google views as a “small site” – according to Google, a smaller site is one with fewer than 1,000 pages.

Google also stresses the importance of a strong navigation structure, even for sites it considers “small.”  It says site owners of small sites can just submit their homepage to Google, “provided that Google can reach all your other pages by following a path of links that start from your homepage.”

With so many sites being on WordPress, it is less likely that there will be random orphaned pages that are not accessible by following links from the homepage. But depending on the specific WordPress theme used, there can sometimes be orphaned pages when pages are added but not manually added to the pages menu… in these cases, if a sitemap is used as well, those pages shouldn’t be missed even if not directly linked from the homepage.
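Since sitemaps come up repeatedly here, a quick reference: a sitemap is just an XML file following the sitemaps.org protocol that Google consumes. A minimal single-page example (the URL and date are placeholders) looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/new-page</loc>
        <lastmod>2020-07-01</lastmod>
      </url>
    </urlset>

Sitemap plugins generate and update a file like this automatically; you submit its URL once in Search Console, and Google re-reads it as it changes.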

In the “get your page linked to by another page” section, Google has added that “links in advertisements, links that you pay for in other sites, links in comments, or other links that don’t follow the Google Webmaster Guidelines won’t be followed by Google.” A small change, but Google is making it clear that it is a Google-specific thing that these links won’t be followed; they might be followed by other search engines.

But perhaps the most telling part of this is at the end of the crawling section, Google adds:

Google doesn’t accept payment to crawl a site more frequently, or rank it higher. If anyone tells you otherwise, they’re wrong.

It has long been an issue with scammy SEO companies guaranteeing first position on Google, promising to increase rankings, or requiring payment to submit a site to Google. And with the ambiguous Google Partner badge for AdWords, many use that badge to imply they are certified by Google for SEO and organic ranking purposes. That said, most of those who are reading How Search Works are probably already aware of this. But it is nice to see Google put this in writing again, for times when SEOs need to prove to clients that there is no “pay to win” option outside of AdWords, or simply to show someone who might be falling for a scammy SEO company’s claims of guaranteed Google rankings.

The Long Version

Google then gets into what they call the “long version” of How Google Search Works, with more details on the above sections, covering more nuances that impact SEO.

Crawling

Google has changed how they refer to the “algorithmic process.” Previously, it stated “Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often and how many pages to fetch from each site.” Curiously, they removed the reference to “computer programs,” which had provoked questions about exactly which computer programs Google was using.

The new updated version simply states:

Googlebot uses an algorithmic process to determine which sites to crawl, how often, and how many pages to fetch from each site.

Google also updated the wording for the crawl process, changing that it is “augmented with sitemap data” to “augmented by sitemap” data.

Google also made a change where it referenced that Googlebot “detects” links and changed it to “finds” links, as well as changing Googlebot visiting “each of these websites” to the much more specific “page.” This second change makes the wording more accurate, since Google visiting a website won’t necessarily mean it crawls all links on all pages. The change to “page” makes it more accurate and specific for webmasters.

Previously it read:

As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl.

Now it reads:

When Googlebot visits a page it finds links on the page and adds them to its list of pages to crawl.

Google has added a new section about using Chrome to crawl:

During the crawl, Google renders the page using a recent version of Chrome. As part of the rendering process, it runs any page scripts it finds. If your site uses dynamically-generated content, be sure that you follow the JavaScript SEO basics.

By referencing a recent version of Chrome, this addition is clarifying the change from last year where Googlebot was finally upgraded to the latest version of Chromium for crawling, an update from Google only crawling with Chrome 41 for years.

Google also notes it runs “any page scripts it finds,” and advises site owners to be aware of possible crawl issues as a result of using dynamically-generated content with the use of JavaScript, specifying that site owners should ensure they follow their JavaScript SEO basics.

Google also details the primary and secondary crawls, something that has garnered much confusion since Google first revealed them, and the details in this How Google Search Works document describe them differently than some SEOs previously interpreted them.

Here is the entire new section for primary and secondary crawls:

Primary crawl / secondary crawl

Google uses two different crawlers for crawling websites: a mobile crawler and a desktop crawler. Each crawler type simulates a user visiting your page with a device of that type.

Google uses one crawler type (mobile or desktop) as the primary crawler for your site. All pages on your site that are crawled by Google are crawled using the primary crawler. The primary crawler for all new websites is the mobile crawler.

In addition, Google recrawls a few pages on your site with the other crawler type (mobile or desktop). This is called the secondary crawl, and is done to see how well your site works with the other device type.

In this section, Google refers to primary and secondary crawls as being specific to their two crawlers – the mobile crawler and the desktop crawler. Many SEOs think of primary and secondary crawling in reference to Googlebot making two passes over a page, where JavaScript is rendered on the secondary crawl. So while Google clarifies their use of desktop and mobile Googlebots, the use of language here does cause confusion for those who use these terms to refer to the primary and secondary crawls for JavaScript purposes. So to be clear, Google’s reference to their primary and secondary crawl has nothing to do with JavaScript rendering, but only with how they use both mobile and desktop Googlebots to crawl and check a page.

What Google is clarifying in this specific reference to primary and secondary crawl is that Google is using two crawlers – both mobile and desktop versions of Googlebot – and will crawl sites using a combination of both.

Google did specifically state that new websites are crawled with the mobile crawler in their “Mobile-First Indexing Best Practices” document, as of July 2019. But this is the first time it has made an appearance in their How Google Search Works document.

Google does go into more detail about how it uses both the desktop and mobile Googlebots, particularly for sites that are currently considered mobile first by Google.  It wasn’t clear just how much Google was checking desktop versions of sites if they were mobile first, and there have been some who have tried to take advantage of this by presenting a spammier version to desktop users, or in some cases completely different content.  But Google is confirming it is still checking the alternate version of the page with their crawlers.

So sites that are mobile first will see some of their pages crawled with the desktop crawler. However, it still isn’t clear how Google handles cases where the two versions are vastly different, especially when done for spam reasons, as there doesn’t seem to be any penalty for doing so, aside from a possible spam manual action if the site is checked or a spam report is submitted. This would have been a perfect opportunity to be clearer about how Google handles pages with vastly different content depending on whether they are viewed on desktop or mobile. Even in the mobile-friendly documents, Google only warns about ranking differences if content is on the desktop version of the page but missing on the mobile version.

How does Google find a page?

Google has removed this section entirely from the new version of the document.

Here is what was included in it:

How does Google find a page?

Google uses many techniques to find a page, including:

  • Following links from other sites or pages
  • Reading sitemaps

It isn’t clear why Google removed this section specifically. It is slightly redundant, but it was also missing the option of submitting a URL.

Improving Your Crawling

Google makes the use of hreflang a bit clearer, especially for those who might just be learning what hreflang is and how it works by providing a bit more detail.

Formerly it said “Use hreflang to point to alternate language pages.”  Now it states “Use hreflang to point to alternate versions of your page in other languages.”

Not a huge change, but a bit clearer.
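For anyone still learning hreflang: the annotations are just link elements in the page head (they can also be supplied as HTTP headers or sitemap entries). A hypothetical English/German pair might look like this, with every version of the page carrying the full set of annotations, including a reference to itself:

    <link rel="alternate" hreflang="en" href="https://example.com/page" />
    <link rel="alternate" hreflang="de" href="https://example.com/de/page" />
    <link rel="alternate" hreflang="x-default" href="https://example.com/page" />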

Google has also added two new points, providing more detail about ensuring Googlebot is able to access everything on the page, not just the textual content.

First, Google added:

Be sure that Google can access the key pages, and also the important resources (images, CSS files, scripts) needed to render the page properly.

So Google is stressing that site owners ensure Google can access all the important content. It is also specifically calling attention to other types of elements on the page that Google wants access to in order to properly crawl the page, including images, CSS, and scripts. Webmasters who went through the whole “mobile-first indexing” launch are fairly familiar with issues surrounding blocked files, especially CSS and scripts, which some CMSes had blocked Googlebot from crawling by default.

But newer site owners might not realize this is possible, or that they might be doing it. It would have been nice to see Google add specific information on how those newer to SEO can check for this, particularly for those who also might not be clear on what exactly “rendering” means.
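One concrete thing worth checking is robots.txt, since that is where CMSes have historically blocked these resources by default. Hypothetical rules like the following would hide the CSS and scripts Google needs for rendering; removing such Disallow lines (or adding explicit Allow rules for the asset paths) fixes the problem:

    # Hypothetical robots.txt rules that hide rendering resources from crawlers
    User-agent: *
    Disallow: /assets/css/
    Disallow: /assets/js/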

Google also added:

Confirm that Google can access and render your page properly by running the URL Inspection tool on the live page.

Here Google does add specific information about using the URL Inspection tool to see what site owners are blocking, or which content is causing issues when Google tries to render it. I think these last two new points could have been combined and made slightly clearer about how site owners can use the tool to check for all these issues.

Indexing

Google has made significant changes to this section as well, starting with major changes to the first paragraph. Here is the original version:

Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, we process information included in key content tags and attributes, such as title tags and alt attributes.

The updated version now reads:

Googlebot processes each page it crawls in order to understand the content of the page. This includes processing the textual content, key content tags and attributes, such as title tags and alt attributes, images, videos, and more.

Google no longer states it processes pages to “compile a massive index of all the words it sees and their location on each page.” This was always a curious way to call attention to the fact that it simply indexes all the words it comes across and their positions on a page, when in reality it is a lot more complex than that. So the new wording definitely clears that up.

They have also added that they are processing “textual content” which is basically calling attention to the fact it indexes the words on the page, something that was assumed by everyone.  But it does differentiate between the new addition later in the paragraph regarding images, videos and more.

Previously, Google simply made reference to attributes such as title and alt tags and attributes.  But now it is getting more granular, specifically referring to “images, videos and more.”  However, this does mean Google is considering images, videos and “more” to understand the content on the page, which could affect rankings.

Improving your Indexing

Google changed “read our SEO guide for more tips” to “Read our basic SEO guide and advanced user guide for more tips.”

What is a document?

Google has added a massive section here called “What is a document?” It talks specifically about how Google determines what a document is, but also includes details about how Google views multiple pages with identical content as a single document, even with different URLs, and how it determines canonicals.

First, here is the first part of this new section:

What is a “document”?

Internally, Google represents the web as an (enormous) set of documents. Each document represents one or more web pages. These pages are either identical or very similar, but are essentially the same content, reachable by different URLs. The different URLs in a document can lead to exactly the same page (for instance, example.com/dresses/summer/1234 and example.com?product=1234 might show the same page), or the same page with small variations intended for users on different devices (for example, example.com/mypage for desktop users and m.example.com/mypage for mobile users).

Google chooses one of the URLs in a document and defines it as the document’s canonical URL. The document’s canonical URL is the one that Google crawls and indexes most often; the other URLs are considered duplicates or alternates, and may occasionally be crawled, or served according to the user request: for instance, if a document’s canonical URL is the mobile URL, Google will still probably serve the desktop (alternate) URL for users searching on desktop.

Most reports in Search Console attribute data to the document’s canonical URL. Some tools (such as the Inspect URL tool) support testing alternate URLs, but inspecting the canonical URL should provide information about the alternate URLs as well.

You can tell Google which URL you prefer to be canonical, but Google may choose a different canonical for various reasons.

So the tl;dr is that Google will view pages with identical or near-identical content as the same document, regardless of how many of them there are. For seasoned SEOs, we know this as internal duplicate content.

Google also states that when it determines these duplicates, they may not be crawled as often. This is important to note for site owners who are working to de-duplicate content that Google is considering duplicate. It makes it all the more important to submit those URLs to be recrawled, or give the newly de-duplicated pages links from the homepage, in order to ensure Google recrawls and indexes the new content and de-dupes them properly.

It also brings up an important note about desktop versus mobile: Google will still likely serve the desktop version of a page instead of the mobile version for desktop users when a site has two different URLs for the same page, where one is designed for mobile users and the other for desktop. While many websites have changed to serving the same URL and content for both using responsive design, some sites still run two completely different sets of URLs for desktop and mobile users.

Google also mentions that you can tell Google which URL you prefer to be the canonical, but states they can choose a different canonical “for various reasons.” While Google doesn’t detail specifics about why it might choose a different canonical than the one the site owner specifies, it is usually due to http vs. https, whether a page is included in a sitemap, page quality, the pages appearing to be completely different and thus not appropriate to canonicalize, or significant incoming links to the non-canonical URL.
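For those newer to canonicals, the usual way to state your preference is a rel=canonical link element in the head of each duplicate page. Reusing the URLs from Google’s own example above, the head of example.com?product=1234 would carry:

    <link rel="canonical" href="https://example.com/dresses/summer/1234" />

As the document notes, this is a hint rather than a directive: it tells Google which URL in the document you prefer, but Google may still pick another.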

Google has also included definitions for many of the terms used by SEOs and in Google Search Console.

Document: A collection of similar pages. Has a canonical URL, and possibly alternate URLs, if your site has duplicate pages. URLs in the document can be from the same or different organization (the root domain, for example “google” in www.google.com). Google chooses the best URL to show in Search results according to the platform (mobile/desktop), user language‡ or location, and many other variables. Google discovers related pages on your site by organic crawling, or by site-implemented features such as redirects or tags. Related pages on other organizations can only be marked as alternates if explicitly coded by your site (through redirects or link tags).

Again, Google is talking about the fact a single document can encompass more than just a single URL, as Google will consider a single document to potentially have many duplicate or near duplicate pages as well as pages assigned via canonical.  Google makes specific mention about “alternates” that appear on other sites, that can only be considered alternates if the site owner specifically codes it.  And that Google will choose the best URL from within the collection of documents to show.

But it fails to mention that Google can consider pages duplicate on other sites and will not show those duplicates, even if they aren’t from the same sites, something that site owners see happen frequently when someone steals content and sometimes sees the stolen version ranking over the original.

There was a notation added for the above, dealing with hreflang.

Pages with the same content in different languages are stored in different documents that reference each other using hreflang tags; this is why it’s important to use hreflang tags for translated content.

Google shows that it doesn’t include identical content under the same “document” when it is simply in a different language, which is interesting. But Google is stressing the importance of using hreflang in these cases.

URL: The URL used to reach a given piece of content on a site. The site might resolve different URLs to the same page.

Pretty self-explanatory, although it does reference the fact that different URLs can resolve to the same page, presumably via redirects or aliases.

Page: A given web page, reached by one or more URLs. There can be different versions of a page, depending on the user’s platform (mobile, desktop, tablet, and so on).

Also pretty self-explanatory, bringing up the specific point that users can be served different versions of the same page, such as when viewing it on a mobile device versus a desktop computer.

Version: One variation of the page, typically categorized as “mobile,” “desktop,” and “AMP” (although AMP can itself have mobile and desktop versions). Each version can have a different URL (example.com vs m.example.com) or the same URL (if your site uses dynamic serving or responsive web design, the same URL can show different versions of the same page) depending on your site configuration. Language variations are not considered different versions, but different documents.

Simply clarifying with greater details the different versions of a page, and how Google typically categorizes them as “mobile,” “desktop,” and “AMP”.

Canonical page or URL: The URL that Google considers as most representative of the document. Google always crawls this URL; duplicate URLs in the document are occasionally crawled as well.

Google states here again that non-canonical pages are not crawled as often as the main canonical that a site owner assigns to a group of pages. Google does not specifically mention here that it sometimes chooses a different page as the canonical, even when one has been explicitly designated.

Alternate/duplicate page or URL: The document URL that Google might occasionally crawl. Google also serves these URLs if they are appropriate to the user and request (for example, an alternate URL for desktop users will be served for desktop requests rather than a canonical mobile URL).

The key takeaway here is that Google “might” occasionally crawl the site’s duplicate or alternative page. And here they stress that Google will serve these alternative URLs “if they are appropriate.” It is unfortunate they don’t go into greater detail on why they might serve these pages instead of the canonical, outside of the mention of desktop versus mobile, as we have seen many cases where Google picks a different page to show other than the canonical for a myriad of reasons.

Google also fails to mention how this impacts duplicate content found on other sites, but we do know Google will crawl those less often as well.

Site: Usually used as a synonym for a website (a conceptually related set of web pages), but sometimes used as a synonym for a Search Console property, although a property can actually be defined as only part of a site. A site can span subdomains (and even domains, for properly linked AMP pages).

It is interesting to note here what they consider a website – a conceptually related set of webpages – and how it relates to the usage of a Google Search Console property, as “a property can actually be defined as only part of a site.”

Google does mention that AMP pages, which technically appear on a different domain, are considered part of the main site.

Serving Results

Google has made a pretty interesting change here regarding their ranking factors. Previously, Google stated:

Relevancy is determined by over 200 factors, and we always work on improving our algorithm.

Google has now updated this “over 200 factors” with a less specific one.

Relevancy is determined by hundreds of factors, and we always work on improving our algorithm.

The “over 200 factors” wording in How Google Search Works dates back to 2013 when the document was launched, although then it also made reference to PageRank (“Relevancy is determined by over 200 factors, one of which is the PageRank for a given page”), which Google removed when they redesigned the document in 2018.

While Google doesn’t go into specifics on the number anymore, it can be assumed that a significant number of ranking factors have been added since 2013 when this was first claimed in this document.  But I am sure some SEOs will be disappointed we don’t get a brand new shiny number like “over 500” ranking factors that SEOs can obsess about.

Final Thoughts

There are some pretty significant changes made to this document that SEOs can get a bit of insight from.

Google’s description of what it considers a document and how it relates to other identical or near-identical pages on a site is interesting, as well as Google’s crawling behavior towards the pages within a document it considers as alternate pages.  While this behavior has often been noted, it is more concrete information on how site owners should handle these duplicate and near-duplicate pages, particularly when they are trying to un-duplicate those pages and see them crawled and indexed as their own document.

They added a lot of useful advice for newer site owners, which is particularly helpful with so many new websites coming online this year due to the global pandemic.  Things such as checking a site without being logged in, how to submit both pages and sites to Google, etc.

The mention of what Google considers a “small site” is interesting because it gives a more concrete reference point for how Google sees large versus small sites. For some, a small site could mean under 30 pages, with the idea of a site with millions of pages being unfathomable. And the reinforcement of strong navigation, even for “small sites,” is useful for showing site owners and clients who might push for navigation that is more aesthetic than practical for both usability and SEO.

The primary and secondary crawl additions will probably cause some confusion for those who think of primary and secondary in terms of how Google processes scripts on a page when it crawls it.  But it is nice to have more concrete information on how and when Google will crawl using the alternate version of Googlebot for sites that are usually crawled with either the mobile Googlebot or the desktop one.

Lastly, the change from the “200 ranking factors” to a less specific, but presumably much higher number of ranking factors will disappoint some SEOs who liked having some kind of specific number of potential ranking factors to work out.

[Source: This article was published in thesempost.com By JENNIFER SLEGG - Uploaded by the Association Member: Barbara larson]

Categorized in Search Techniques

I haven't been the biggest fan of Google Images since it removed direct image links, but the service has been working on a few useful features behind the scenes. Starting this week, contextual information about images will appear when you tap on them, similar to what you would get from regular web searches.

"When you search for an image on mobile in the U.S.," Google wrote in a blog post, "you might see information from the Knowledge Graph related to the result. That information would include people, places or things related to the image from the Knowledge Graph’s database of billions of facts, helping you explore the topic more."


Unlike with web searches, Images can display multiple Knowledge Graph information panels for a single result. Google says the feature combines data from the web page and Google Lens-style deep learning to determine what information to display.

The feature is going live in the Google Android app, as well as the mobile web version of Google Images.

[Source: This article was published in androidpolice.com By Corbin Davenport - Uploaded by the Association Member: Jeremy Frink]

Categorized in Search Engine

Google’s Biggest Privacy Push: Auto-Delete Of Web, App, Location Data, YouTube Search For New Users

Search engine giant Google has disclosed a new development that will let users control their privacy. Google doesn’t have the best reputation when it comes to gathering data about people, so this does come as great news!

Let’s find out what changes Google is planning to make right here!

Google CEO Sundar Pichai Announces New Developments Regarding Privacy

The new developments were announced by Google CEO Sundar Pichai. He said that there will be a lot of privacy improvements to the platform that will enable users to control the data they’re sharing.

Previously, Google had enabled users to delete this data automatically every 3 months or every 18 months. As per the new development, this feature will be enabled by default for any new users.

As we all know, Google registers all the Search history, YouTube history, location history, and voice commands made through Google Assistant on the My Activity page. 

Google CEO, Sundar Pichai said, “As we design our products, we focus on three important principles: keeping your information safe, treating it responsibly, and putting you in control. Today, we are announcing privacy improvements to help do that, including changes to our data retention practices across our core products to keep less data by default.”

How Does Google’s New Feature Work? All You Need To Know

When any Google user turns on their Location History for the first time, the auto delete option will be set to 18 months by default. Previously, this was off by default. Additionally, Web and App Activity auto delete will also default to 18 months for any new users.

In simple words, your activity data will be deleted automatically and continuously after 18 months; previously, it was stored until you chose to delete it.

Remember that you can turn these settings off or also change your auto-delete option whenever you want. 

However, if Location History and Web & App Activity have already been turned on by the user, those settings will not be changed by Google. But the company will remind its users about the new auto-delete controls via notifications and mail.

As per Pichai, when users sign into their Google Account, they will be able to search for “Google Privacy Checkup” and “Is my Google Account secure?” The query will be answered by a box, visible only to the user, that shows their privacy and security settings, which can then be easily reviewed or adjusted.

[Source: This article was published in trak.in By Radhika Kajarekar - Uploaded by the Association Member: Corey Parker]

Categorized in Search Engine

On Wednesday, Google announced broad changes in its default data practices for new users, including a significant expansion in the company’s willingness to automatically delete data.

In a blog post announcing the changes, CEO Sundar Pichai emphasized the company’s commitment to privacy, security, and user choice. “As we design our products, we focus on three important principles: keeping your information safe, treating it responsibly, and putting you in control,” Pichai wrote. “Today, we are announcing privacy improvements to help do that.”

Google’s auto-delete feature applies to search history (on web or in-app), location history, and voice commands collected through the Google Assistant or devices like Google Home. Google logs that data in its My Activity page, where users can see what data points have been collected and manually delete specific items. Historically, Google has retained that information indefinitely, but in 2019, the company rolled out a way to automatically delete data points after three months or 18 months, depending on the chosen setting.

Starting today, those settings will be on by default for new users. Google will set web and app searches to auto-delete after 18 months even if users take no action at all. Google’s location history is off by default, but when users turn it on, it will also default to an 18-month deletion schedule.

The new defaults will only apply to new users, and existing Google accounts won’t see any settings change. However, Google will also be promoting the option on the search page and on YouTube in an effort to drive more users to examine their auto-delete settings. Auto-delete can be turned on from the Activity Controls page.

The system also extends to YouTube history, although the default will be set to three years to ensure the broader data can be used by the platform’s recommendation algorithms.

In some ways, the new settings represent a compromise between the privacy interests of users and Google’s business interests as an ad network. A user’s most recent data is also the most valuable since it can be used to target people who have recently engaged with a particular product. By keeping the last 18 months of activity, Google is able to retain most of that ad value while also deleting most of the data that would otherwise be available.

Alongside the new default settings, Google will also make it easier for users to use Chrome’s Incognito mode, allowing mobile users to switch to Incognito mode with a long-press on their profile picture. The feature launches today on iOS and will soon come to Android and other platforms.

Google announced an expansion of the Password Checkup tool earlier this week.

[Source: This article was published in theverge.com By Russell Brandom - Uploaded by the Association Member: David J. Redcliff] 

Categorized in Search Engine

Google is bringing fact check information to image search results worldwide starting today.

Google is adding “Fact Check” labels to thumbnails in image search results in a continuation of its fact check efforts in Search and News.

“Photos and videos are an incredible way to help people understand what’s going on in the world. But the power of visual media has its pitfalls⁠—especially when there are questions surrounding the origin, authenticity or context of an image.”

This change is being rolled out today to help people navigate issues around determining the authenticity of images, and make more informed decisions about the content they consume.

When you see certain pictures in Google Images, such as a shark swimming down the street in Houston, Google will attach a “Fact Check” label underneath the thumbnail.

Is that image of a shark swimming down a street in Houston real? Google Images now has "Fact Check" labels to help inform you in some cases like this (no, it was not real). Our post today explains more about how & when fact checks appear in Google Images: https://www.blog.google/products/search/bringing-fact-check-information-google-images/ …


After tapping on a fact-checked result to view a larger preview of the image, Google will display a summary of the information contained on the web page where the image is featured.

A “Fact Check” label will only appear on select images that come from independent, authoritative sources on the web. It’s not exactly known what criteria a publisher needs to meet in order to be considered authoritative.

According to a help page, Google uses an algorithm to determine which publishers are trusted sources.

Google also relies on ClaimReview structured data markup that publishers are required to use to indicate fact check content to search engines.

Fact Check labels may appear both for fact check articles about specific images and for fact check articles that include an image in the story.

As mentioned at the beginning of this article, Google already highlights fact checks in regular search results and Google News. YouTube also utilizes ClaimReview to surface fact check information panels in Brazil, India and the U.S.

Google says its fact check labels are surfaced billions of times per year.

While adding ClaimReview markup is encouraged, being eligible to serve a Fact Check label does not affect rankings. This goes for Google Search, Google Images, Google News, and YouTube.

 [Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Olivia Russell]

Categorized in Search Engine

Live captions are an important part of the tech industry, in large part because many of the people using tech products live with disabilities, hearing impairments being among the most common. Hence, a lot of tech companies have been working on live captions, but we haven’t seen anything quite like what Chrome has just done.

You see, the latest version of Chrome is going to feature support for live captions, marking the first time that a web browser has had anything of this nature. Enabling the feature brings up a dedicated captions box, and any media that you play will show captions inside that box. This is useful because not all companies emphasize live captions, or making their technology accessible, as much as they should, and that causes a lot of problems along the way.



If you want to toggle captions on, start by installing the latest version of Chrome Canary. Once you have it, type chrome://flags into the address bar, and when you see the option to search for flags, put in “live captions.” A drop-down menu will come up; select “Enabled,” then restart your browser to start using live captions. Once the browser has restarted, go into the Accessibility section of your settings to switch them on or off, and play any media to see if they are working properly.


[Source: This article was published in digitalinformationworld.com By Zia Muhammad - Uploaded by the Association Member: Corey Parker]

Categorized in Search Engine

Facebook is testing a new feature that aims to keep users inside its platform when they’re looking for factual information they would otherwise turn to Google or Wikipedia to find. The company confirmed to TechCrunch it’s now piloting an updated version of Facebook Search that displays factual information when users search for topics like public figures, places, and interests — like movies and TV shows.

For example, if you type in a movie title in the Facebook search bar, you’ll be shown an information box that gives you all the details about the movie.

The information is gathered from publicly available data, including Wikipedia. But instead of requiring users to click out of Facebook to view the information, it’s displayed in a side panel next to the search results. This is similar to the automatically generated Knowledge Panel format Google uses for these same types of searches.

SocialMediaToday was the first to report the news of the pilot, citing posts from Twitter users like JC Van Zijl, Matt Navarra, and Giulio S.

Facebook confirmed with TechCrunch the feature is a pilot program that’s currently running in English on iOS, desktop, and mobile web. (Users may or may not see the information panels themselves, as this is still a test.)


We’ve found the new feature can be fairly hit or miss, however.

For starters, it doesn’t always recognize a search term as a proper title. A search for “joker,” for instance, displayed a Wikipedia-powered information box for the movie. But a search for “parasite” failed to do so for the Oscar-winning title that in 2020 became the first non-English-language film to win Best Picture.

Meanwhile, a search for “Donald Trump” easily returned an information panel for the U.S. president, but information for many members of his cabinet did not come up when they were searched by name. Information about leading coronavirus expert Dr. Anthony Fauci came up in a side panel when the term “Anthony Fauci” was entered in Facebook’s search box, but not when “dr. Fauci” was used as the search query.


Google’s Knowledge Panel doesn’t experience these same problems, as it’s able to make intuitive leaps about which person, place, or thing the user is likely searching for at the time of their query.

Facebook Search will also direct users toward its own features when doing so is more beneficial, it appears. For instance, a search for “COVID” or “COVID-19” will return Facebook’s own COVID-19 Information Center at the top of the search results, not a data-powered side panel about the disease. Google, by comparison, returns a coronavirus map, case overview, and CDC information in its Knowledge Panel.

And a search for the popular game “Animal Crossing” returns its Facebook Page and the option to add it to the titles you’re tracking on Facebook Gaming, but no information panel.

In other words, don’t expect to see an information panel for all the persons, places, or things you search for on Facebook at this time.

The update follows the closure of Facebook’s previous Graph Search feature. Years ago, Facebook attempted to reinvent its search engine with the launch of Graph Search, which allowed users to find people, places, photos, and interests using Facebook data. The feature was later shut down as Facebook dealt with the backlash from major security lapses, like the Cambridge Analytica scandal. Doing so hampered investigators’ ability to catch criminals and other bad actors, BuzzFeed News noted at the time.

Last year, Facebook also told Vice it was pausing some aspects of Graph Search to focus on improvements to keyword search instead.

Presenting “factual” information in the sidebar could also help Facebook claim it’s addressing concerns around the spread of misinformation on its platform. As a home for active disinformation campaigns, propaganda, and conspiracy theories, Facebook needs a tool that displays fact-checked, factual information. (There was a time when Wikipedia wasn’t considered a valid source of that kind of information, but we’re long past that point now!)

This isn’t the first time Facebook has tapped Wikipedia data to enhance its service. It used Wikipedia information on its community pages over a decade ago, for example.

Facebook didn’t offer additional details regarding how long it plans to test the new search feature or when it expects it to roll out more broadly.

[Source: This article was published in techcrunch.com By Sarah Perez - Uploaded by the Association Member: Daniel K. Henry]

Categorized in Search Engine