
Update: July 15, 2020 at 5:29 PM ET: In an email to Android Authority, Microsoft confirmed that Bing has indeed been added to the list of search engines in certain parts of Android after installing the Outlook app. The company claims this addition has no impact on users' default search engines on their phones.

Original article: July 13, 2020 at 9:40 PM ET: If you use Outlook for your Android phone’s email and calendars, you might see an unexpected sales pitch for Microsoft’s search engine.

Android users have discovered that Outlook slips a “Bing search” option into the long-press menu you see when you select text. Tap it and it will open your default browser with a Bing query for whatever words you had selected. It’s helpful, but likely not what you wanted if you live in a Google-centric world.

 

The menu option doesn't appear for everyone, and some have reported success in getting rid of it by uninstalling Outlook. It might not even be visible if you reinstall the app. It doesn't appear to be available when you install other Microsoft apps beyond Outlook.

We’ve asked Microsoft for comment, although this isn’t a completely novel strategy. The company slipped suggestions for its own apps into Android’s share menu in 2019.


Microsoft is using built-in Android functionality to add the Bing search option. It’s not compromising your device or otherwise going out of bounds, then. However, the practice might not find many fans. The company is promoting Bing to users who didn’t expect it (and frequently didn’t want it) on their devices in any form, let alone system-wide.

There is an incentive for the company to experiment with features like this. Bing had just under 2.8% of search engine usage share in June 2020, according to StatCounter. While that's larger than most of the competition, it pales compared to Google's 91.75% share. Microsoft has a lot of ground to cover if it's going to be more competitive, and suggesting Bing searches to millions of users (there have been over 100 million downloads as we write this) theoretically helps close the gap.

 

[Source: This article was published in androidauthority.com By Jon Fingas - Uploaded by the Association Member: David J. Redcliff]

Categorized in Search Engine

Pinterest aims to display a greater variety of content types in the home feed by utilizing a new ranking model.

Pinterest is introducing a new ranking model to its home feed in an effort to surface certain types of content more often.

Traditionally, Pinterest ranks content in the home feed using a click-through prediction model.

Pins that a user is most likely to click on, as determined by past activity, are prioritized in their home feed.

While that model is effective at maximizing user engagement, it’s not the best model for surfacing a variety of content types.

For example, if a user never clicks on video content then they’ll never be shown pins with video in their home feed.

 

But that doesn’t necessarily mean they wouldn’t engage with video content if it were to be surfaced.

Pinterest found itself with a problem of wanting to boost more content types while still keeping content recommendations relevant.

To solve this problem, Pinterest is introducing a real-time ranking system for its home feed called “controllable distribution.”

Controllable Distribution

Pinterest describes controllable distribution as a “flexible real-time system.”

It’s not a complete algorithm overhaul. Rather, controllable distribution is only applied after the traditional home feed ranking algorithm.

Pinterest will still use its click-through prediction model to find relevant content. Then it will apply controllable distribution to diversify the types of content being displayed.

Controllable distribution makes it possible to specify a target for how many impressions a certain content type should receive.

For example, controllable distribution could be used to specify that 4% of users’ home feeds should contain video content.

This is done through a system that tracks what percentage of the feed was video in the past. Then, the system boosts or demotes content according to how close that percentage is to the specified target.
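To make that mechanism concrete, here is a minimal sketch in Python of how a post-ranking boost/demote pass could work. This is purely illustrative and not Pinterest's actual code: the single "video" content type, the 4% target, and the boost strength are assumptions made for the example.

# Illustrative sketch of a post-ranking "controllable distribution" pass.
# Assumptions (not from Pinterest): one tracked content type ("video"),
# a 4% impression-share target, and a simple multiplicative boost.
def controllable_distribution(ranked_pins, observed_video_share,
                              target_share=0.04, strength=2.0):
    # ranked_pins: list of (pin_id, relevance_score, content_type) tuples
    # from the click-through prediction model, best first.
    # observed_video_share: fraction of recent impressions that were video.
    gap = target_share - observed_video_share  # positive = under-delivered
    rescored = []
    for pin_id, score, content_type in ranked_pins:
        multiplier = 1.0 + strength * gap if content_type == "video" else 1.0
        rescored.append((pin_id, score * multiplier, content_type))
    # Keep the feed ordered by the adjusted score.
    return sorted(rescored, key=lambda pin: pin[1], reverse=True)

# Example: video had only 1% of recent impressions, so video pins get a boost.
feed = [("a", 0.90, "image"), ("b", 0.88, "video"), ("c", 0.85, "image")]
print(controllable_distribution(feed, observed_video_share=0.01))

The real system tracks its targets and adjusts in real time across many content types; the sketch only captures the core idea of nudging the feed toward a target share.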

Pinterest says this can be accomplished while still respecting users’ content preferences.

What Does This Mean for Marketers?

As a real-time system, the controllable distribution model will be continuously adjusted.

On one hand, that means the home feed won’t get stale for users.

On the other hand, it's not exactly possible to optimize for an algorithm that changes in real time.

Perhaps the best piece of advice for Pinterest marketers to take away from this is to follow Pinterest’s lead.

Pinterest is diversifying the types of content in the home feed. If you want more opportunities to show up in people's feeds, then diversify the types of content you publish.

For example, if you only publish photos, then consider adding some videos or GIFs to the mix. Maybe some product pins if you’re an e-commerce retailer.

Pinterest's target for displaying certain types of content will be changing all the time.

Publishing a wide variety of content will help ensure you have the right type of content available at the time Pinterest wants to display it.

Additional Notes

Pinterest’s home feed ranking team used to do manually what controllable distribution is designed to do algorithmically.

Yes, Pinterest’s home feed ranking team actually used to step in and adjust how often certain types of content appeared in users’ home feed.

Yaron Greif of Pinterest’s home feed ranking team describes the old process as “painful for both practical and theoretical reasons.”

“In practice, these hand-tuned boosts quickly became unmanageable and interfered with each other. And worse, they often stop working over time — especially when ranking models are updated. We regularly had to delay very promising new ranking models because they broke business constraints.

In theory, controlling content on a per-request basis is undesirable because it prevents personalization. If we show each user the same number of video Pins we can’t show more videos to people who really like to watch videos or vice versa.”

Pinterest says it’s committed to investing in the post-ranking stage of surfacing content. So it’s possible we may see this model applied elsewhere on the platform in the future.

 

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Edna Thomas]

Categorized in Social

Google has made some substantial new changes to their "How Google Search Works" documentation for website owners. And as always when Google makes changes to important documents with an impact on SEO, such as How Search Works and the Quality Rater Guidelines, there are some key insights SEOs can glean from the new changes Google has made.

Of particular note: Google details how it views a "document" as potentially comprising more than one webpage, explains what it considers primary and secondary crawls, and updates its long-standing reference to "more than 200 ranking factors," which has been present in this document since 2013.

But here are the changes and what they mean for SEOs.

Contents

  • 1 Crawling
    • 1.1 Improving Your Crawling
  • 2 The Long Version
  • 3 Crawling
    • 3.1 How does Google find a page?
    • 3.2 Improving Your Crawling
  • 4 Indexing
    • 4.1 Improving your Indexing
      • 4.1.1 What is a document?
  • 5 Serving Results
  • 6 Final Thoughts

Crawling

Google has greatly expanded this section.

They made a slight change to wording, with “some pages are known because Google has already crawled them before” changed to “some pages are known because Google has already visited them before.”   This is a fairly minor change, primarily because Google decided to include an expanded section detailing what crawling actually is.

Google removed:

This process of discovery is called crawling.

The removal of the crawling definition was simply because it was redundant.  In Google’s expanded crawling section, they included a much more detailed definition and description of crawling instead.

The added definition:

Once Google discovers a page URL, it visits, or crawls, the page to find out what’s on it. Google renders the page and analyzes both the text and non-text content and overall visual layout to decide where it should appear in Search results. The better that Google can understand your site, the better we can match it to people who are looking for your content.

There is still a great debate on how much page layout is taken into account. The page layout algorithm was released many years ago to penalize content that was pushed well below the fold in order to increase the odds a visitor might click on an advertisement that appeared above the fold instead. But with more traffic moving to mobile, and the addition of mobile-first indexing, above-the-fold versus below-the-fold layout seemingly became less important.

When it comes to page layout and mobile first, Google says:

Don’t let ads harm your mobile page ranking. Follow the Better Ads Standard when displaying ads on mobile devices. For example, ads at the top of the page can take up too much room on a mobile device, which is a bad user experience.

But in How Google Search Works, Google is specifically calling attention to the “overall visual layout” with “where it should appear in Search results.”

It also brings attention to "non-text" content. While the most obvious interpretation is image content, the reference is quite open-ended. Could this refer to OCR as well, which we know Google has been dabbling in?

Improving Your Crawling

Under the “to improve your site crawling” section, Google has expanded this section significantly as well.

Google has added this point:

Verify that Google can reach the pages on your site, and that they look correct. Google accesses the web as an anonymous user (a user with no passwords or information). Google should also be able to see all the images and other elements of the page to be able to understand it correctly. You can do a quick check by typing your page URL in the Mobile-Friendly test tool.

This is a good point – many new site owners end up accidentally blocking Googlebot from crawling, or don't realize their site is set to be viewable by logged-in users only. This makes it clear that site owners should try viewing their site without being logged into it, to see if there are any unexpected accessibility or other issues that aren't noticeable when logged in as an admin or high-level user.

Also recommending site owners check their site via the Mobile-Friendly testing tool is good, since even seasoned SEOs use the tool to quickly see if there are Googlebot specific issues with how Google is able to see, render and crawl a specific webpage – or a competitor’s page.
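For anyone who wants a quick self-check along those lines, here is a minimal sketch of fetching a page the way an anonymous, logged-out visitor would and flagging obvious problems. It is not a Google tool: it assumes the requests package is installed, the URL is a placeholder, and the login-redirect test is only a rough heuristic.

# Sketch: fetch a page with no cookies or session, as a first-time visitor would,
# to catch pages that are only reachable when logged in. The URL is an example.
import requests

def check_anonymous_access(url):
    response = requests.get(url, allow_redirects=True, timeout=10)
    # Rough heuristic: a 200 status and no redirect to a login page.
    looks_public = response.status_code == 200 and "login" not in response.url.lower()
    print(f"{url} -> {response.status_code}, final URL: {response.url}, looks public: {looks_public}")
    return looks_public

check_anonymous_access("https://example.com/some-page")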

Google expanded their specific note about submitting a single page to the index.

If you’ve created or updated a single page, you can submit an individual URL to Google. To tell Google about many new or updated pages at once, use a sitemap.

Previously, it just mentioned submitting changes to a single page using the submit URL tool. This clarifies for those newer to SEO that they do not need to submit every single new or updated page to Google individually; using sitemaps is the best way to handle many pages at once. There have definitely been new site owners who add each page to Google using that tool because they don't realize sitemaps exist. Part of this is that WordPress is such a prevalent way to create a new website, yet it does not (yet) have native support for sitemaps, so site owners need to either install a dedicated sitemaps plugin or use one of the many SEO tool plugins that offer sitemaps as a feature.
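For site owners generating a sitemap without a plugin, a sitemap is just an XML file listing URLs. Here is a minimal, hedged sketch using Python's standard library; the URLs are placeholders, and a real site would build the list from its CMS or database.

# Sketch: build a basic XML sitemap with the standard library.
import xml.etree.ElementTree as ET

def build_sitemap(urls, path="sitemap.xml"):
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    # Writes an XML declaration plus one <url><loc>...</loc></url> entry per page.
    ET.ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)

build_sitemap([
    "https://example.com/",
    "https://example.com/about",
    "https://example.com/blog/post-1",
])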

 

This new change also highlights using the tool for newly created pages, instead of just the previous reference to "changes to a single page."

Google has also made a change to the "if you ask Google to crawl only one page" section. They are now referencing what Google views as a "small site" – according to Google, a small site is one with fewer than 1,000 pages.

Google also stresses the importance of a strong navigation structure, even for sites it considers “small.”  It says site owners of small sites can just submit their homepage to Google, “provided that Google can reach all your other pages by following a path of links that start from your homepage.”

With so many sites being on WordPress, it is less likely that there will be random orphaned pages that are not accessible by following links from the homepage. But depending on the specific WordPress theme used, there can sometimes be orphaned pages when pages are created but not manually added to the navigation menu. In those cases, if a sitemap is used as well, those pages shouldn't be missed even if they aren't directly linked from the homepage.

In the "get your page linked to by another page" section, Google has added that links in "advertisements, links that you pay for in other sites, links in comments, or other links that don't follow the Google Webmaster Guidelines won't be followed by Google." A small change, but Google is making it clear that these links not being followed is a Google-specific behavior; they might still be followed by other search engines.

But perhaps the most telling part of this is at the end of the crawling section, Google adds:

Google doesn’t accept payment to crawl a site more frequently, or rank it higher. If anyone tells you otherwise, they’re wrong.

It has long been an issue that scammy SEO companies guarantee first positioning on Google, promise to increase rankings, or require payment to submit a site to Google. And with the ambiguous Google Partner badge for AdWords, many use the Google Partners badge to imply they are certified by Google for SEO and organic ranking purposes. That said, most of those reading How Search Works are probably already aware of this. But it is nice to see Google put this in writing again, for times when SEOs need to prove to clients that there is no "pay to win" option outside of AdWords, or simply to show someone who might be falling for a scammy SEO company's claims about Google rankings.

The Long Version

Google then gets into what they call the “long version” of How Google Search Works, with more details on the above sections, covering more nuances that impact SEO.

Crawling

Google has changed how they refer to the "algorithmic process". Previously, it stated "Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often and how many pages to fetch from each site." Curiously, they removed the reference to "computer programs", which had provoked questions about exactly which computer programs Google was using.

The new updated version simply states:

Googlebot uses an algorithmic process to determine which sites to crawl, how often, and how many pages to fetch from each site.

Google also updated the wording for the crawl process, changing "augmented with sitemap data" to "augmented by sitemap data."

Google also changed the reference to Googlebot "detecting" links to "finding" links, and changed Googlebot visiting "each of these websites" to the much more specific "page". The second change makes the description more accurate, since Google visiting a website won't necessarily mean it crawls all links on all pages, and the word "page" is more specific for webmasters.

Previously it read:

As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl.

Now it reads:

When Googlebot visits a page it finds links on the page and adds them to its list of pages to crawl.

Google has added a new section about using Chrome to crawl:

During the crawl, Google renders the page using a recent version of Chrome. As part of the rendering process, it runs any page scripts it finds. If your site uses dynamically-generated content, be sure that you follow the JavaScript SEO basics.

By referencing a recent version of Chrome, this addition is clarifying the change from last year where Googlebot was finally upgraded to the latest version of Chromium for crawling, an update from Google only crawling with Chrome 41 for years.

Google also notes it runs “any page scripts it finds,” and advises site owners to be aware of possible crawl issues as a result of using dynamically-generated content with the use of JavaScript, specifying that site owners should ensure they follow their JavaScript SEO basics.
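One hedged way to check this yourself is to compare the raw HTML with the DOM after scripts have run; content that only appears in the rendered version depends on JavaScript. The sketch below is an illustration rather than Google's rendering pipeline, and it assumes the requests and playwright packages are installed (pip install requests playwright, then playwright install chromium); the URL and phrase are placeholders.

# Sketch: detect content that only exists after JavaScript runs by comparing
# the raw HTML with the rendered DOM from a headless Chromium.
import requests
from playwright.sync_api import sync_playwright

def rendered_only(url, phrase):
    raw_html = requests.get(url, timeout=10).text
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        rendered_html = page.content()
        browser.close()
    # True means the phrase is only present once scripts have executed.
    return phrase not in raw_html and phrase in rendered_html

print(rendered_only("https://example.com/", "Example Domain"))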

Google also details the primary and secondary crawls, something that has caused much confusion since Google first revealed them, and the details in this How Google Search Works document describe them differently than how some SEOs previously interpreted them.

Here is the entire new section for primary and secondary crawls:

Primary crawl / secondary crawl

Google uses two different crawlers for crawling websites: a mobile crawler and a desktop crawler. Each crawler type simulates a user visiting your page with a device of that type.

Google uses one crawler type (mobile or desktop) as the primary crawler for your site. All pages on your site that are crawled by Google are crawled using the primary crawler. The primary crawler for all new websites is the mobile crawler.

In addition, Google recrawls a few pages on your site with the other crawler type (mobile or desktop). This is called the secondary crawl, and is done to see how well your site works with the other device type.

In this section, Google refers to primary and secondary crawls as being specific to their two crawlers – the mobile crawler and the desktop crawler. Many SEOs think of primary and secondary crawling in reference to Googlebot making two passes over a page, where JavaScript is rendered on the secondary crawl. So while Google clarifies their use of desktop and mobile Googlebots, the use of language here does cause confusion for those who use these terms to refer to the two-pass crawl for JavaScript purposes. To be clear, Google's reference to their primary and secondary crawl has nothing to do with JavaScript rendering, but only with how they use both mobile and desktop Googlebots to crawl and check a page.

 

What Google is clarifying in this specific reference to primary and secondary crawl is that Google is using two crawlers – both mobile and desktop versions of Googlebot – and will crawl sites using a combination of both.

Google did specifically state that new websites are crawled with the mobile crawler in their "Mobile-First Indexing Best Practices" document, as of July 2019. But this is the first time it has made an appearance in their How Google Search Works document.

Google does go into more detail about how it uses both the desktop and mobile Googlebots, particularly for sites that are currently considered mobile first by Google.  It wasn’t clear just how much Google was checking desktop versions of sites if they were mobile first, and there have been some who have tried to take advantage of this by presenting a spammier version to desktop users, or in some cases completely different content.  But Google is confirming it is still checking the alternate version of the page with their crawlers.

So sites that are mobile first will see some of their pages crawled with the desktop crawler.  However, it still isn’t clear how Google handles cases where they are vastly different, especially when done for spam reasons, as there doesn’t seem to be any penalty for doing so, aside from a possible spam manual action if it is checked or a spam report is submitted.  And this would have been a perfect opportunity to be clearer about how Google will handle pages with vastly different content depending on whether it is viewed on desktop or on mobile.  Even in the mobile friendly documents, Google only warns about ranking differences if content is on the desktop version of the page but is missing on the mobile version of the page.

How does Google find a page?

Google has removed this section entirely from the new version of the document.

Here is what was included in it:

How does Google find a page?

Google uses many techniques to find a page, including:

  • Following links from other sites or pages
  • Reading sitemaps

It isn't clear why Google removed this specifically. It is slightly redundant, although it was also missing the option of submitting a URL.

Improving Your Crawling

Google makes the use of hreflang a bit clearer by providing more detail, especially for those who might just be learning what hreflang is and how it works.

Formerly it said “Use hreflang to point to alternate language pages.”  Now it states “Use hreflang to point to alternate versions of your page in other languages.”

Not a huge change, but a bit clearer.
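For readers still learning what hreflang looks like in practice, the sketch below generates the kind of link tags involved; the domain and locale list are illustrative only. Each language version of a page should carry the full set of tags, including one pointing at itself.

# Sketch: build hreflang link tags for translated versions of one page.
def hreflang_tags(path, locales):
    tags = []
    for locale in locales:
        href = f"https://example.com/{locale}{path}"
        tags.append(f'<link rel="alternate" hreflang="{locale}" href="{href}" />')
    return "\n".join(tags)

# These tags would go in the <head> of every language version of /pricing.
print(hreflang_tags("/pricing", ["en", "de", "fr"]))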

Google has also added two new points, providing more detail about ensuring Googlebot is able to access everything on the page, not just the textual content.

First, Google added:

Be sure that Google can access the key pages, and also the important resources (images, CSS files, scripts) needed to render the page properly.

So Google is stressing the importance of ensuring it can access all the important content. It is also specifically calling attention to other types of elements on the page that Google wants access to in order to properly crawl the page, including images, CSS and scripts. Webmasters who went through the "mobile first indexing" launch are fairly familiar with issues surrounding blocked files, especially CSS and scripts, which some CMSes blocked Googlebot from crawling by default.

But for newer site owners, they might not realize this was possible, or that they might be doing it.  It would have been nice to see Google add specific information on how those newer to SEO can check for this, particularly for those who also might not be clear on what exactly “rendering” means.

Google also added:

Confirm that Google can access and render your page properly by running the URL Inspection tool on the live page.

Here Google does add specific information about using the URL Inspection tool to see what site owners are blocking, or which content is causing issues when Google tries to render it. I think these last two new points could have been combined and made slightly clearer about how site owners can use the tool to check for all these issues.
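As one hedged, do-it-yourself complement to the URL Inspection tool, Python's standard robotparser module can flag page resources that robots.txt blocks for Googlebot. The resource URLs below are placeholders; Search Console remains the authoritative check.

# Sketch: list page resources (CSS, scripts, images) that robots.txt blocks for Googlebot.
from urllib import robotparser

def blocked_resources(robots_url, resource_urls, agent="Googlebot"):
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()
    return [url for url in resource_urls if not parser.can_fetch(agent, url)]

resources = [
    "https://example.com/assets/site.css",
    "https://example.com/assets/app.js",
    "https://example.com/images/hero.jpg",
]
print(blocked_resources("https://example.com/robots.txt", resources))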

Indexing

Google has made significant changes to this section as well, and Google starts off by making major changes to the first paragraph. Here is the original version:

Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, we process information included in key content tags and attributes, such as <title> tags and alt attributes.

The updated version now reads:

Googlebot processes each page it crawls in order to understand the content of the page. This includes processing the textual content, key content tags and attributes, such as <title> tags and alt attributes, images, videos, and more.

Google no longer states it processes pages to "compile a massive index of all the words it sees and their location on each page." This was always a curious way for Google to describe it, calling attention to the fact that it simply indexes all the words it comes across and their positions on a page, when in reality it is a lot more complex than that. So the new wording definitely clears that up.

 

They have also added that they are processing "textual content," which is basically calling attention to the fact that Google indexes the words on the page, something everyone assumed. But it does differentiate that from the new addition later in the paragraph regarding images, videos and more.

Previously, Google simply made reference to elements such as title tags and alt attributes. But now it is getting more granular, specifically referring to "images, videos and more." However, this does mean Google is considering images, videos and "more" to understand the content on the page, which could affect rankings.

Improving your Indexing

Google changed “read our SEO guide for more tips” to “Read our basic SEO guide and advanced user guide for more tips.”

What is a document?

Google has added a massive section here called “What is a document?”  It talks specifically about how Google determines what is a document, but also includes details about how Google views multiple pages with identical content as a single document, even with different URLs, and how it determines canonicals.

First, here is the first part of this new section:

What is a “document”?

Internally, Google represents the web as an (enormous) set of documents. Each document represents one or more web pages. These pages are either identical or very similar, but are essentially the same content, reachable by different URLs. The different URLs in a document can lead to exactly the same page (for instance, example.com/dresses/summer/1234 and example.com?product=1234 might show the same page), or the same page with small variations intended for users on different devices (for example, example.com/mypage for desktop users and m.example.com/mypage for mobile users).

Google chooses one of the URLs in a document and defines it as the document’s canonical URL. The document’s canonical URL is the one that Google crawls and indexes most often; the other URLs are considered duplicates or alternates, and may occasionally be crawled, or served according to the user request: for instance, if a document’s canonical URL is the mobile URL, Google will still probably serve the desktop (alternate) URL for users searching on desktop.

Most reports in Search Console attribute data to the document’s canonical URL. Some tools (such as the Inspect URL tool) support testing alternate URLs, but inspecting the canonical URL should provide information about the alternate URLs as well.

You can tell Google which URL you prefer to be canonical, but Google may choose a different canonical for various reasons.

So the tl;dr is that Google will view pages with identical or near-identical content as the same document, regardless of how many of them there are. Seasoned SEOs know this as internal duplicate content.

Google also states that when it determines pages are duplicates, they may not be crawled as often. This is important to note for site owners who are working to de-duplicate content that Google considers duplicate. It makes it more important to submit these URLs to be recrawled, or to give those newly de-duplicated pages links from the homepage, in order to ensure Google recrawls and indexes the new content and de-dupes them properly.

It also brings up an important note about desktop versus mobile: Google will still likely serve the desktop version of a page instead of the mobile version to desktop users when a site has two different URLs for the same page, one designed for mobile users and the other for desktop. While many websites have switched to serving the same URL and content for both using responsive design, some sites still run two completely different sites and URLs for desktop and mobile users.

Google also mentions that you can tell it which URL you prefer to be the canonical, but states it can choose a different URL "for various reasons." While Google doesn't detail why it might choose a different canonical than the one the site owner specifies, it is usually due to http vs https, whether a page is included in a sitemap, page quality, the pages appearing to be completely different and therefore not suitable for canonicalization, or significant incoming links to the non-canonical URL.
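As a purely illustrative sketch of the "document" idea, the snippet below groups URL variants under their declared canonical. It does not replicate how Google actually selects canonicals, which weighs the kinds of signals mentioned above and can override the declaration.

# Illustration only: group URL variants into "documents" keyed by declared canonical.
from collections import defaultdict

def group_into_documents(pages):
    # pages: list of (url, declared_canonical) pairs.
    documents = defaultdict(list)
    for url, canonical in pages:
        documents[canonical].append(url)
    return dict(documents)

pages = [
    ("https://example.com/dresses/summer/1234", "https://example.com/dresses/summer/1234"),
    ("https://example.com/?product=1234", "https://example.com/dresses/summer/1234"),
    ("https://m.example.com/dresses/summer/1234", "https://example.com/dresses/summer/1234"),
]
# All three URLs end up in one "document" whose canonical is the first URL.
print(group_into_documents(pages))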

Google has also included definitions for many of the terms used by SEOs and in Google Search Console.

Document: A collection of similar pages. Has a canonical URL, and possibly alternate URLs, if your site has duplicate pages. URLs in the document can be from the same or different organization (the root domain, for example “google” in www.google.com). Google chooses the best URL to show in Search results according to the platform (mobile/desktop), user language‡ or location, and many other variables. Google discovers related pages on your site by organic crawling, or by site-implemented features such as redirects or tags. Related pages on other organizations can only be marked as alternates if explicitly coded by your site (through redirects or link tags).

Again, Google is talking about the fact a single document can encompass more than just a single URL, as Google will consider a single document to potentially have many duplicate or near duplicate pages as well as pages assigned via canonical.  Google makes specific mention about “alternates” that appear on other sites, that can only be considered alternates if the site owner specifically codes it.  And that Google will choose the best URL from within the collection of documents to show.

 

But it fails to mention that Google can consider pages on other sites to be duplicates and will not show those duplicates, even if they aren't from the same site, something site owners see happen frequently when someone steals content and the stolen version sometimes ranks over the original.

There was a notation added for the above, dealing with hreflang.

Pages with the same content in different languages are stored in different documents that reference each other using hreflang tags; this is why it’s important to use hreflang tags for translated content.

Google shows that it doesn't include identical content under the same "document" when it is simply in a different language, which is interesting. But Google is stressing the importance of using hreflang in these cases.

URL: The URL used to reach a given piece of content on a site. The site might resolve different URLs to the same page.

Pretty self-explanatory, although it does reference the fact that different URLs can resolve to the same page, presumably via redirects or aliases.

Page: A given web page, reached by one or more URLs. There can be different versions of a page, depending on the user’s platform (mobile, desktop, tablet, and so on).

Also pretty self-explanatory, noting that users can be served different versions of the same page, such as when viewing it on a mobile device versus a desktop computer.

Version: One variation of the page, typically categorized as “mobile,” “desktop,” and “AMP” (although AMP can itself have mobile and desktop versions). Each version can have a different URL (example.com vs m.example.com) or the same URL (if your site uses dynamic serving or responsive web design, the same URL can show different versions of the same page) depending on your site configuration. Language variations are not considered different versions, but different documents.

Simply clarifying with greater details the different versions of a page, and how Google typically categorizes them as “mobile,” “desktop,” and “AMP”.

Canonical page or URL: The URL that Google considers as most representative of the document. Google always crawls this URL; duplicate URLs in the document are occasionally crawled as well.

Google states here again that non-canonical pages are not crawled as often as the canonical URL that represents a group of pages. Google does not specifically mention here that it sometimes chooses a different page as the canonical, even when a specific page has been designated as such.

Alternate/duplicate page or URL: The document URL that Google might occasionally crawl. Google also serves these URLs if they are appropriate to the user and request (for example, an alternate URL for desktop users will be served for desktop requests rather than a canonical mobile URL).

The key takeaway here is that Google "might" occasionally crawl the site's duplicate or alternative page. And here they stress that Google will serve these alternative URLs "if they are appropriate." It is unfortunate they don't go into greater detail on why they might serve these pages instead of the canonical, outside of the mention of desktop versus mobile, as we have seen many cases where Google picks a different page to show other than the canonical for a myriad of reasons.

Google also fails to mention how this impacts duplicate content found on other sites, but we do know Google will crawl those less often as well.

Site: Usually used as a synonym for a website (a conceptually related set of web pages), but sometimes used as a synonym for a Search Console property, although a property can actually be defined as only part of a site. A site can span subdomains (and even domains, for properly linked AMP pages).

Interesting to note here what they consider a website – a conceptually related set of webpages – and how it relates to the usage of a Google Search Console property, as "a property can actually be defined as only part of a site."

Google does mention that AMP pages, which technically appear on a different domain, are considered part of the main site.

Serving Results

Google has made a pretty interesting change here in regard to their ranking factors. Previously, Google stated:

Relevancy is determined by over 200 factors, and we always work on improving our algorithm.

Google has now updated this “over 200 factors” with a less specific one.

Relevancy is determined by hundreds of factors, and we always work on improving our algorithm.

The "over 200 factors" reference in How Google Search Works dates back to 2013, when the document was launched, although at the time it also made reference to PageRank ("Relevancy is determined by over 200 factors, one of which is the PageRank for a given page"), which Google removed when they redesigned the document in 2018.

While Google doesn’t go into specifics on the number anymore, it can be assumed that a significant number of ranking factors have been added since 2013 when this was first claimed in this document.  But I am sure some SEOs will be disappointed we don’t get a brand new shiny number like “over 500” ranking factors that SEOs can obsess about.

Final Thoughts

There are some pretty significant changes made to this document that SEOs can get a bit of insight from.

Google’s description of what it considers a document and how it relates to other identical or near-identical pages on a site is interesting, as well as Google’s crawling behavior towards the pages within a document it considers as alternate pages.  While this behavior has often been noted, it is more concrete information on how site owners should handle these duplicate and near-duplicate pages, particularly when they are trying to un-duplicate those pages and see them crawled and indexed as their own document.

They added a lot of useful advice for newer site owners, which is particularly helpful with so many new websites coming online this year due to the global pandemic.  Things such as checking a site without being logged in, how to submit both pages and sites to Google, etc.

The mention of what Google considers a "small site" is interesting because it gives a more concrete reference point for how Google sees large versus small sites. For some, a small site could mean under 30 pages, with the idea of a site with millions of pages being unfathomable. And the reinforcement of strong navigation, even for "small sites," is useful for showing site owners and clients who might push for navigation that is more aesthetic than practical for both usability and SEO.

The primary and secondary crawl additions will probably cause some confusion for those who think of primary and secondary in terms of how Google processes scripts on a page when it crawls it.  But it is nice to have more concrete information on how and when Google will crawl using the alternate version of Googlebot for sites that are usually crawled with either the mobile Googlebot or the desktop one.

Lastly, the change from the “200 ranking factors” to a less specific, but presumably much higher number of ranking factors will disappoint some SEOs who liked having some kind of specific number of potential ranking factors to work out.

 

[Source: This article was published in thesempost.com By JENNIFER SLEGG - Uploaded by the Association Member: Barbara larson]

Categorized in Search Techniques

How has Google's local search changed throughout the years? Columnist Brian Smith shares a timeline of events and their impact on brick-and-mortar businesses.

Deciphering the Google algorithm can sometimes feel like an exercise in futility. The search engine giant has made many changes over the years, keeping digital marketers on their toes and continually moving the goalposts on SEO best practices.

Google’s continuous updating can hit local businesses as hard as anyone. Every tweak and modification to its algorithm could adversely impact their search ranking or even prevent them from appearing on the first page of search results for targeted queries. What makes things really tricky is the fact that Google sometimes does not telegraph the changes it makes or how they’ll impact organizations. It’s up to savvy observers to deduce what has been altered and what it means for SEO and digital marketing strategies.

What’s been the evolution of local search, and how did we get here? Let’s take a look at the history of Google’s local algorithm and its effect on brick-and-mortar locations.

 

2005: Google Maps and Local Business Center become one

After releasing Local Business Center in March 2005, Google took the next logical step and merged it with Maps, creating a one-stop shop for local business info. For users, this move condensed relevant search results into a single location, including driving directions, store hours and contact information.

This was a significant moment in SEO evolution, increasing the importance of up-to-date location information across store sites, business listings and online directories.

2007: Universal Search & blended results

Universal Search signified another landmark moment in local search history, blending traditional search results with various listings from other search engines. Instead of working solely through the more general, horizontal SERPs, Universal Search combined results from Google’s vertical-focused search queries like Images, News and Video.

Google’s OneBox started to show within organic search results, bringing a whole new level of exposure that was not there before.  The ramifications on local traffic were profound, as store listings were better positioned to catch the eye of Google users.

2010: Local Business Center becomes Google Places

In 2010, Google rebranded/repurposed Local Business Center and launched Google Places. This was more than a mere name change, as a number of important updates were included, like adding new image features, local advertising options and the availability of geo-specific tags for certain markets. But more importantly, Google attempted to align Places pages with localized search results, whereas previously the information in localized results was coming from Google Maps.

The emergence of Places further cemented Google’s commitment to bringing local search to the forefront. To keep up with these rapidly changing developments, brick-and-mortar businesses needed to make local search a priority in their SEO strategies.


2012: Google goes local with Venice

Prior to Venice, Google’s organic search results defaulted to more general nationwide sites. Only Google Maps would showcase local options. With the Venice update, Google’s algorithm could take into account a user’s stated location and return organic results reflecting that city or state. This was big, because it allowed users to search anchor terms without using local modifiers.

The opportunity for companies operating in multiple territories was incredible. By setting up local page listings, businesses could effectively rank higher on more top-level queries just by virtue of being in the same geographic area as the user. A better ranking with less effort — it was almost too good to be true.

2013: Hummingbird spreads its wings

Hummingbird brought about significant changes to Google's semantic search capabilities. Most notably, it helped the search engine better understand long-tail queries, allowing it to more closely tie results to specific user questions — a big development in the eyes of many search practitioners.

Hummingbird forced businesses to change their SEO strategies to adapt and survive. Simple one- or two-word phrases would no longer be the lone focal point of a healthy SEO plan, and successful businesses would soon learn to target long-tail keywords and queries — or else see their digital marketing efforts drop like a stone.

2014: Pigeon takes flight

Two years after Venice brought local search to center stage, the Pigeon update further defined how businesses ranked on Google localized SERPs. The goal of Pigeon was to refine local search results by aligning them more directly with Google’s traditional SEO ranking signals, resulting in more accurate returns on user queries.

Pigeon tied local search results more closely with deep-rooted ranking signals like content quality and site architecture. Business listings and store pages needed to account for these criteria to continue ranking well on local searches.

2015: RankBrain adds a robotic touch

In another major breakthrough for Google’s semantic capabilities, the RankBrain update injected artificial intelligence into the search engine. Using RankBrain’s machine learning software, Google’s search engine was able to essentially teach itself how to more effectively process queries and results and more accurately rank web pages.

RankBrain’s ability to more intelligently process page information and discern meaning from complex sentences and phrases further drove the need for quality content. No more gaming the system. If you wanted your business appearing on the first SERP, your site had better have the relevant content to back it up.

2015: Google cuts back on snack packs

In a relatively small but important update in 2015, Google scaled back its "snack pack" of local search results from seven listings to a mere three. While this change didn't affect the mechanics of SEO much, it limited visibility on page one of search results and further increased the importance of ranking high in local results.

2016: Possum shakes things up

The Possum update was an attempt to level the playing field when it came to businesses in adjoining communities. During the pre-Possum years, local search results were often limited to businesses in a specific geographical area. This meant that a store in a nearby area just outside the city limits of Chicago, for instance, would have difficulty ranking and appearing for queries that explicitly included the word “Chicago.”

Instead of relying solely on search terms, Possum leveraged the user’s location to more accurately determine what businesses were both relevant to their query and nearby.

This shift to user location is understandable given the increasing importance of mobile devices. Letting a particular search phrase dictate which listings are returned doesn’t make much sense when the user’s mobile device provides their precise location.

2017 and beyond

Predicting when the next major change in local search will occur and how it will impact ranking and SEO practices can be pretty difficult, not least because Google rarely announces or fully explains its updates anymore.

That being said, here are some evergreen local SEO tips that never go out of fashion (at least not yet):

  • Manage your local listings for NAP (name, address, phone number) accuracy and reviews.
  • Be sure to adhere to organic search best practices and cultivate localized content and acquire local links for each store location.
  • Mark up your locations with structured data, particularly Location and Hours, and go beyond if you are able to (a minimal example follows this list).
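As a hedged illustration of that last tip, the snippet below emits LocalBusiness structured data (JSON-LD) for a single store location; every business detail in it is a made-up placeholder.

# Sketch: LocalBusiness structured data (JSON-LD) for one store location.
# All details below are placeholders.
import json

location = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Coffee Roasters",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Chicago",
        "addressRegion": "IL",
        "postalCode": "60601",
    },
    "telephone": "+1-312-555-0100",
    "openingHours": ["Mo-Fr 07:00-18:00", "Sa-Su 08:00-16:00"],
}

# The output belongs in a <script type="application/ld+json"> tag on the location page.
print(json.dumps(location, indent=2))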

When in doubt, look at what your successful competitors are doing, and follow their lead. If it works, it works — that is, until Google makes another ground-shaking algorithm change.

[Source: This article was published in searchengineland.com By Brian Smith]

Categorized in Search Engine

November was all about testing for our article page group. We’ve been running A/B tests on a small percentage of the mobile audience; testing new commenting and site socialization features, variations on UX treatments and relevancy matching on ad units, as well as some improvements aimed at streamlining page flow and better surfacing of related content. We’ve also begun discovery on an overhaul of our registration and user account management experience, with an eye towards enhanced consumer identity management and a tighter platform alignment strategy.

As we move towards the end of the year we’ll be continuing and expanding our testing of new commenting and social engagement features, and planning a new and more scalable approach to prototyping and testing in 2017.

Mobile Web App (beta)

This month was an exciting one for our new mobile products team. We surveyed a portion of our users to see how they were liking the experience, and over 80% of users surveyed offered positive feedback.

We have also been refining the ad experience, and we continued to tweak our user experience by adding navigation prompts and nudges to help users navigate and explore the experience. As of November 29, we are also live with production traffic. Our social team is sharing two new mobile links every day, and our site has proven robust enough to handle more than 17,000 users in one day.

 

We are also hard at work preparing new mobile list experiences for the upcoming World’s Most Powerful People and 30 Under 30 lists. Simultaneously, we continue work to streamline the app framework, thus allowing us to launch apps around new list events more expeditiously.

Lists

November was a busy month for lists. It featured the launch of Best States for Business, the FinTech 50, NHL valuations and the Just 100 Companies, a new list ranking America's best corporate citizens within their industries. The product team is now in full swing working on our 2016 ranking of the World's Most Powerful People, and our signature 30 Under 30 list launching in the first week of 2017.

CMS & Data Science

During November we released several improvements to the CMS focused on tagging content and automated channel/section categorization. In the coming weeks, we'll be introducing improvements to how the CMS handles media, providing a simple, cohesive experience when adding media, and offering more granular search options. We'll also be debuting a new Help Center, a centralized knowledge base, FAQs and community for our contributor network.

The data science and CMS teams have been collaborating to bring a suite of features to the CMS that help authors optimize and tag their stories for improved shareability and search engine performance. This week, we released a hashtag suggestion tracker in the CMS to monitor usage and increase accuracy. Next week we'll be debuting new headline optimization and SEO suggestions features.

Lastly, the data science team is nearing completion on a real-time data pipeline that streams live article data into structured tables where it can be analyzed in real-time, allowing us to pinpoint and react to traffic spikes on viral content as they happen.

 

ForbesConnect

In November, ForbesConnect published the Forbes Healthcare app, a designated conference app for the Forbes Healthcare Summit in New York City. In December, we will continue our efforts to expand our business development plan in order to provide a light-weight networking platform for business schools.

Level Up by Forbes

The Level Up team is now publishing its own videos on Facebook. Check out our page to watch a few and don’t forget to like the page while you are there! Additionally, Level Up’s content on Amazon Alexa is now recorded and published in audio form (previously it was only text-to-speech), and we will be live on Google Assistant on Dec 15th.

Page Performance

In recent weeks, we have been exploring new ways to maximize the performance of our core site pages. By ruthlessly shaving off every unnecessary byte and millisecond, we can deliver the core experience as quickly as possible. When a web page loads, you often see many intermediate steps. In those first moments, the page is not an image; it is an animation. By default, the frames of this animation arise in an unintended way from technical factors. We are taking control of these early moments, designing, choreographing and engineering them. We are also working toward comprehensive page performance monitoring of every production page view, as well as automated performance analysis as part of the development process.

Author:  Nina Gould

Source:  http://www.forbes.com/

Categorized in Others
