In a bid to fight fake news and low-quality content, Google is updating its search algorithms. In addition to making improvements to search ranking, the search engine giant wants to offer people easier ways to directly report offensive or misleading content.

In a blog post, Google Vice President of Search Ben Gomes said that Google has improved its evaluation methods and made algorithmic updates to surface more authoritative content.

For the first time, users will be able to directly flag content that appears in Autocomplete and Featured Snippets in Google Search.


Autocomplete helps predict the searches people might be typing, while Featured Snippets appear at the top of search results showing a highlight of the information relevant to what people are looking for.

“Today, in a world where tens of thousands of pages are coming online every minute of every day, there are new ways that people try to game the system. The most high profile of these issues is the phenomenon of “fake news,” where content on the web has contributed to the spread of blatantly misleading, low quality, offensive or downright false information,” Gomes said in the blog.

Google has a team of evaluators – real people – to monitor the quality of Google’s search results. Their ratings will help the company gather data on the quality of its results and identify areas for improvements.


Last month, Google updated its Search Quality Rater Guidelines to provide more detailed examples of low-quality web pages for raters to appropriately flag, which can include misleading information, unexpected offensive results, hoaxes and unsupported conspiracy theories. “These guidelines will begin to help our algorithms in demoting such low-quality content and help us to make additional improvements over time,” Gomes said.

Featured Snippets


Meanwhile, Google recently updated its How Search Works site to provide detailed info to users and website owners about the technology behind Google Search.

This article was published on marketexclusive.com by David Zazoff

 

 
Categorized in Search Engine

Two billion photos find their way onto Facebook’s family of apps every single day and the company is racing to understand them and their moving counterparts with the hope of increasing engagement. And while machine learning is undoubtedly the map to the treasure, Facebook and its competitors are still trying to work out how to deal with the spoils once they find them.

Facebook AI Similarity Search (FAISS), released as an open-source library last month, began as an internal research project to address bottlenecks slowing the process of identifying similar content once a user’s preferences are understood. Under the leadership of Yann LeCun, Facebook’s AI Research (FAIR) lab is making it possible for everyone to more quickly relate needles within a haystack.

On its own, training a machine learning model is already an incredibly intensive computational process. But a funny thing happens when machine learning models comb over videos, pictures and text — new information gets created! FAISS is able to efficiently search across billions of dimensions of data to identify similar content.

In an interview with TechCrunch, Jeff Johnson, one of the three FAIR researchers working on the project, emphasized that FAISS isn’t so much a fundamental AI advancement as it is a fundamental AI-enabling technique.

Imagine you wanted to perform object recognition on a public video that a user shared to understand its contents so you could serve up a relevant ad. First you’d have to train and run that algorithm on the video, coming up with a bunch of new data.

From that, let’s say you discover that your target user is a big fan of trucks, the outdoors and adventure. This is helpful, but it’s still hard to say what advertisement you should display — a rugged tent? An ATV? A Ford F-150?

To figure this out, you would want to create a vector representation of the video you analyzed and compare it to your corpus of advertisements with the intent of finding the most similar video. This process would require a similarity search, whereby vectors are compared in multi-dimensional space.

In this representation of a similarity search, the blue vector is the query. The distance between the “arrows” reflects their relative similarity.

In real life, the property of being an adventurous outdoorsy fan of trucks could constitute hundreds or even thousands of dimensions of information. Multiply this by the number of different videos you’re searching across and you can see why the library you implement for similarity search is important.
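
To make the idea concrete, here is a minimal brute-force sketch (not Facebook’s pipeline) that compares a query vector against a handful of candidate vectors with cosine similarity; the vectors, dimensions and labels are invented for illustration.

```python
import numpy as np

# Invented embedding vectors; in practice these would have hundreds or
# thousands of dimensions and come from a trained model.
query = np.array([0.9, 0.1, 0.8])      # the analyzed "trucks and adventure" video
candidates = np.array([
    [0.8, 0.2, 0.7],                   # rugged tent ad
    [0.1, 0.9, 0.3],                   # unrelated ad
    [0.9, 0.0, 0.9],                   # ATV ad
])

def cosine_similarity(a, b):
    """Dot product of the two vectors, scaled by their lengths."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine_similarity(query, c) for c in candidates]
best = int(np.argmax(scores))
print(f"Most similar candidate: {best} (score {scores[best]:.3f})")
```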

“At Facebook we have massive amounts of computing power and data and the question is how we can best take advantage of that by combining old and new techniques,” posited Johnson.

Facebook reports that implementing k-nearest neighbor across GPUs resulted in an 8.5x improvement in processing time. Within the previously explained vector space, nearest neighbor algorithms let us identify the most closely related vectors.
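
For readers who want to try the library itself, here is a minimal FAISS sketch with random placeholder data; it builds an exact (brute-force) L2 index on the CPU, and the commented lines show how the same index can be moved to a GPU with the faiss-gpu build.

```python
import numpy as np
import faiss  # pip install faiss-cpu (or faiss-gpu for GPU support)

d = 128                                                    # vector dimensionality (placeholder)
corpus = np.random.random((10000, d)).astype("float32")    # e.g. ad embeddings
queries = np.random.random((5, d)).astype("float32")       # e.g. analyzed videos

index = faiss.IndexFlatL2(d)                # exact (brute-force) L2 similarity index
index.add(corpus)                           # index the corpus vectors
distances, ids = index.search(queries, 4)   # 4 nearest neighbors per query
print(ids)                                  # row i holds the corpus ids closest to query i

# With the GPU build, the same index can be moved onto a GPU:
# res = faiss.StandardGpuResources()
# gpu_index = faiss.index_cpu_to_gpu(res, 0, index)
```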

More efficient similarity search opens up possibilities for recommendation engines and personal assistants alike. Facebook M, its own intelligent assistant, relies on having humans in the loop to assist users. Facebook considers “M” to be a test bed to experiment with the relationship between humans and AI. LeCun noted that there are a number of domains within M where FAISS could be useful.

“An intelligent virtual assistant looking for an answer would need to look through a very long list,” LeCun explained to me. “Finding nearest neighbors is a very important functionality.”

Improved similarity search could support memory networks to help keep track of context and basic factual knowledge, LeCun continued. Short-term memory contrasts with learned skills like finding the optimal solution to a puzzle. In the future, a machine might be able to watch a video or read a story and then answer critical follow-up questions about it.

More broadly, FAISS could support more dynamic content on the platform. LeCun noted that news and memes change every day and better methods of searching content could drive better user experiences.

Two billion new photos a day presents Facebook with a billion and a half opportunities to better understand its users. Each and every fleeting chance at boosting engagement is dependent on being able to quickly and accurately sift through content and that means more than just tethering GPUs.

Source : techcrunch.com

Categorized in Social

As Google becomes increasingly sophisticated in its methods for scoring and ranking web pages, it's more difficult for marketers to keep up with SEO best practices. Columnist Jayson DeMers explores what can be done to keep up in a world where machine learning rules the day.

Google’s rollout of artificial intelligence has many in the search engine optimization (SEO) industry dumbfounded. Optimization tactics that have worked for years are quickly becoming obsolete or changing.

Why is that? And is it possible to find a predictable optimization equation like in the old days? Here’s the inside scoop.

The old days of Google

Google’s pre-machine-learning search engine operated monolithically. That is to say, when changes came, they came wholesale. Large and abrupt movements, sometimes tectonic, were commonplace in the past.

What applied to one industry/search engine result applied to all results. This was not to say that every web page was affected by every algorithmic change. Each algorithm affected a specific type of web page. Moz’s algorithm change history page details the long history of Google’s algorithm updates and what types of sites and pages were impacted.

The SEO industry began with people deciphering these algorithm updates and determining which web pages they affected (and how). Businesses rose and fell on the backs of decisions made due to such insights, and those that were able to course-correct fast enough were the winners. Those that couldn’t learned a hard lesson.

These lessons turned into the “rules of the road” for everyone else, since there was always one constant truth: algorithmic penalties were the same for each vertical. If your competitor got killed doing something Google didn’t like, you’d be sure that as long as you didn’t commit the same mistake, you’d be OK. But recent evidence is beginning to show that this SEO idiom no longer holds. Machine learning has made these penalties specific to each keyword environment. SEO professionals no longer have a static set of rules they can play by.

Dr. Pete Meyers, Moz’s Marketing Scientist, recently noted, “Google has come a long way in their journey from a heuristic-based approach to a machine learning approach, but where we’re at in 2016 is still a long way from human language comprehension. To really be effective as SEOs, we still need to understand how this machine thinks, and where it falls short of human behavior. If you want to do truly next-level keyword research, your approach can be more human, but your process should replicate the machine’s understanding as much as possible.”

Moz has put together guides and posts related to understanding Google’s latest artificial intelligence in its search engine as well as launched its newest tool, Keyword Explorer, which addresses these changes.

Google decouples ranking updates

Before I get into explaining how things went off the rails for SEOs, I first have to touch on how technology enabled Google’s search engine to get to its current state.

It has only been recently that Google has possessed the kind of computational power to begin to make “real-time” updates a reality. On June 18, 2010, Google revamped its indexing structure, dubbed “Caffeine,” which allowed Google to push updates to its search index quicker than ever before. Now, a website could publish new or updated content and see the updates almost immediately on Google. But how did this work?


Before the Caffeine update, Google operated like any other search engine. It crawled and indexed its data, then sent that indexed data through a massive web of SPAM filters and algorithms that determined its eventual ordering on Google’s search engine results pages.

After the Caffeine update, however, select fresh content could go through an abbreviated scoring process (temporarily) and go straight to the search results. Minor things, like an update to a page’s title tag or meta description tag, or a published article for an already “vetted” website, would be candidates for this new process.

Sounds great, right? As it turned out, this created a huge barrier to establishing correlation between what you changed on your website and how that change affected your ranking. The detaching of updates to its search results — and the eventual thorough algorithmic scoring process that followed — essentially tricked many SEOs into believing that certain optimizations had worked, when in fact they hadn’t.

This was a precursor to the future Google, which would no longer operate in a serialized fashion. Google’s blog effectively spelled out the new Caffeine paradigm: “[E]very second Caffeine processes hundreds of thousands of pages in parallel.”

From an obfuscation point of view, Caffeine provided broad cover for Google’s core ranking signals. Only a meticulous SEO team, which carefully isolated each and every update, could now decipher which optimizations were responsible for specific ranking changes in this new parallel algorithm environment.

When I reached out to him for comment, Marcus Tober, founder and CTO of Searchmetrics, said, “Google now looks at hundreds of ranking factors. RankBrain uses machine learning to combine many factors into one, which means factors are weighted differently for each query. That means it’s very likely that even Google’s engineers don’t know the exact composition of their highly complex algorithm.”

“With deep learning, it’s developing independently of human intervention. As search evolves, our approach is evolving with Google’s algorithmic changes. We analyze topics, search intention and sales funnel stages because we’re also using deep learning techniques in our platform. We highlight content relevance because Google now prioritizes meeting user intent.”

These isolated testing cycles were now very important in order to determine correlation, because day-to-day changes on Google’s index were not necessarily tied to ranking shifts anymore.

The splitting of the atomic algorithm

As if that weren’t enough, in late 2015, Google released machine learning within its search engine, which continued to decouple ranking changes from its standard ways of doing things in the past.

As industry veteran John Rampton reported in TechCrunch, the core algorithms within Google now operate independently based on what is being searched for. This means that what works for one keyword might not work for another. This splitting of Google’s search rankings has since caused a tremendous amount of grief within the industry as conventional tools, which prescribe optimizations indiscriminately across millions of keywords, could no longer operate on this macro level. Now, searcher intent literally determines which algorithms and ranking factors are more important than others in that specific environment.

This is not to be confused with the recent announcement that there will be a separate index for Mobile vs. Desktop, where a clear distinction of indexes will be present. There are various tools to help SEOs understand their place within separate indexes. But how do SEOs deal with different ranking algorithms within the same index?

The challenge is to categorize and analyze these algorithmic shifts on a keyword basis. One technology that addresses this — and is getting lots of attention — was invented by Carnegie Mellon alumnus Scott Stouffer. After Google repeatedly attempted to hire him, Stouffer decided instead to co-found an AI-powered enterprise SEO platform called Market Brew, based on a number of patents that were awarded in recent years.

Stouffer explains, “Back in 2006, we realized that eventually machine learning would be deployed within Google’s scoring process. Once that happened, we knew that the algorithmic filters would no longer be a static set of SEO rules. The search engine would be smart enough to adjust itself based on machine learning what worked best for users in the past. So we created Market Brew, which essentially serves to ‘machine learn the machine learner.'”

“Our generic search engine model can train itself to output very similar results to the real thing. We then use these predictive models as a sort of ‘Google Sandbox’ to quickly A/B test various changes to a website, instantly projecting new rankings for the brand’s target search engine.”

Because Google’s algorithms work differently between keywords, Stouffer says there are no clear delineations anymore. Combinations of keyword and things like user intent and prior success and failure determine how Google weights its various core algorithms.

Predicting and classifying algorithmic shifts

Is there a way we, as SEOs, can start to quantitatively understand the algorithmic differences/weightings between keywords? As I mentioned earlier, there are ways to aggregate this information using existing tools. There are also some new tools appearing on the market that enable SEO teams to model specific search engine environments and predict how those environments are shifting algorithmically.

A lot of the answers depend on how competitive and broad your keywords are. For instance, a brand that only focuses on one primary keyword, with many variations of subsequent long-tail keyword phrases, will likely not be affected by this new way of processing search results. Once an SEO team figures things out, they’ve got it figured out.

On the flip side, if a brand has to worry about many different keywords that span various competitors in each environment, then investment in these newer technologies may be warranted. SEO teams need to keep in mind that they can’t simply apply what they’ve learned in one keyword environment to another. Some sort of adaptive analysis must be used.

Summary

Technology is quickly adapting to Google’s new search ranking methodology. There are now tools that can track each algorithmic update, determining which industries and types of websites are affected the most. To combat Google’s new emphasis on artificial intelligence, we’re now seeing the addition of new search engine modeling tools that are attempting to predict exactly which algorithms are changing, so SEOs can adjust strategies and tactics on the fly.

We’re entering a golden age of SEO for engineers and data scientists. As Google’s algorithms continue to get more complex and interwoven, the SEO industry has responded with new high-powered tools to help understand this new SEO world we live in.

Author : Jayson DeMers

Source : searchengineland.com

Categorized in Search Engine

Want to understand how Bing ranks and shows data on images within Bing Image search?

Bing shared in detail how Bing Image search provides relevant images for your queries while showing how they now also provide descriptions, captions and actions for images for the new Bing Image search experience.

One interesting tidbit Bing shared was that of all the images it has indexed, 59% have at least one duplicate across the web. Bing said many images have hundreds, if not thousands, of duplicates across the web. A duplicate can be the original image hosted at a different URL on someone else’s server, or a slightly modified copy hosted elsewhere without crediting the original image URL.

Here is a graph Bing shared showing data on image duplication across the web:

[Pie chart of image duplication data]

Author : Barry Schwartz

Source : searchengineland.com

Categorized in Search Engine

We are all now in what’s called the “big data era,” and we’ve been here for quite some time. Once upon a time, we were only just starting to piece together dialogue. Then, when one group of people had learned this dialogue, it was up to them to pass it on to the next group, and so on. However, as more people began to fill the Earth, more information was learned and gathered, making it too difficult to pass on in the form of dialogue alone. Instead, we needed to codify this information to share it all.

Sharing and codifying this learned knowledge into writing would have been quite a technological shift for our species. Another big change came when we moved from what were once simple calculations to the complex mathematics we have today. Coding, however, is still relatively new in comparison and didn’t come into play until 1945, when people like Grace Hopper worked on the Harvard Mark I computer. It emerged through necessity more than anything else. People figured that if they could codify instructions to a machine to tell it what steps to take, manual operations could be eliminated, saving businesses time and money.

Then along came algorithms. Algorithms are very different from code. Code is a set of instructions for the computer: a calculation expressed on a specific platform in a specific programming language. Algorithms, on the other hand, are a series of steps that describe a way of solving a problem, with the requirements that they be correct and that they terminate. Algorithms have been around much longer than coding and were recorded as being used as far back as 820 AD by the Muslim mathematician al-Khwarizmi. They’re a finite number of calculations that will yield a result once carried out. Because coding is a way of getting instructions directly to a computer, it’s well suited to implementing algorithms.
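
To make that distinction concrete, here is a small illustration (Euclid’s greatest-common-divisor method, chosen purely as an example): the algorithm is the language-independent series of steps, and the code is one particular implementation of those steps.

```python
# Algorithm (a finite series of steps, independent of any language):
#   1. Take two positive integers a and b.
#   2. If b is zero, a is the answer.
#   3. Otherwise, replace (a, b) with (b, a mod b) and go back to step 2.

# Code (one concrete implementation of that algorithm):
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```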

How code performs depends on how the underlying algorithm is implemented. Algorithms can generate better performance gains than hardware alone. In 2010, a federal report showed how algorithmic improvements have produced significant performance increases in areas including logistics, natural language processing, and speech recognition. Because we’re now in the “big data” era, we need to think big to cope with the vast amount of data coming in. Instead of writing code that searches our data for a pattern we have already specified, as traditional coding does, with big data we look for the patterns that match the data. But now there is so much data that even the patterns are hard to recognize. So again, programmers have had to take another step back.

Now another step has been added to the equation, one that finds patterns humans can’t see, such as signals at a certain wavelength of light or in data beyond a certain volume; these volumes are what we now call big data. This new algorithmic step not only searches for patterns but also creates the code needed to do it. Pedro Domingos explains this well in his book, “The Master Algorithm,” where he describes how “learner algorithms” are used to create new algorithms that can write the code needed to carry out their desired task. “The Industrial Revolution automated manual work and the Information Revolution did the same for mental work, but machine learning automates automation itself. Without it, programmers become the bottleneck holding up progress. With it the pace of progress picks up,” says Domingos.
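
As a toy illustration of that idea (not Domingos’s own example), the sketch below uses scikit-learn and made-up data to show a learner inducing a decision rule from examples rather than a programmer writing that rule by hand.

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: each row is [hours_online_per_day, items_shared],
# and each label records whether that user later clicked an ad (1) or not (0).
X = [[1, 0], [2, 1], [3, 1], [7, 6], [8, 5], [9, 7]]
y = [0, 0, 0, 1, 1, 1]

# No one hand-writes the if/else rule; the learner derives it from the data.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(model.predict([[6, 4]]))  # predicted label for a previously unseen user
```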

Author : Linda Johnson

Source : trendintech.com

Categorized in Search Engine

Yesterday, Google released a new quality raters guidelines PDF document that was specifically updated to tell the quality raters how to spot and flag offensive, upsetting, inaccurate and hateful web pages in the search results.

Paul Haahr, a lead search engineer at Google who celebrated his 15th year at the company, told us that Google has been working on algorithms to combat web pages that are offensive, upsetting, inaccurate and hateful in their search results. He said it only impacts about 0.1% of the queries but it is an important problem.

With that, they want to make sure their algorithms are doing a good job, which is why they have updated their quality raters guidelines: the ratings test whether the search results reflect what the algorithms are supposed to do. If they don't, that data goes back to the engineers, who can tweak things or build new algorithms or machine learning techniques to weed out even more of the content Google doesn't want in its search results.

Paul Haahr explained that there are times when people specifically want to find hateful or inaccurate information. Maybe on the inaccurate side they like satire sites, or maybe on the hate side they hate people. Google should not prevent people from finding content that they want, Paul said. And the quality raters guidelines explain, with key examples, how raters should rate such pages.

But overall, ever since the elections, Google, Facebook and others have been under fire to do something about facts and hate and more. They released fact checking schema for news stories. They supposedly banned AdSense publishers. They removed certain classes of hate and inaccurate results from the search results. And they tweaked the top stories algorithm to show more accurate and authoritative results.

Google has been working on this and wants to continue working on it. The quality raters will help make sure that what the engineers are doing does translate into proper search results. At the same time, as most of you know, quality raters have no power to remove search results or adjust rankings; they just rate the search results, and that data goes back to the Google engineers to use.

Both Danny Sullivan and Jennifer Slegg dug into the quality raters guidelines changes, so go to those two sites to read their summaries of how Google defines these categories. Overall it is pretty fascinating, because these are not easy solutions or easy judgment calls, so Google has to define them pretty precisely.

It is an important problem, but with only 0.1% of queries impacted, it seems like a lot of effort is being put into this.

Download the updated raters guidelines over here.

Forum discussion at WebmasterWorld.

Author : Barry Schwartz

Source : https://www.seroundtable.com/google-algorithms-targets-offensive-inaccurate-hate-23558.html

Categorized in Search Engine

BARACK Obama is planning a coup, fluoride is dulling my IQ and five US Presidents were members of the Ku Klux Klan — well, that’s if you believe the “facts” that Google delivers.

The search engine giant has joined Facebook as a deliverer of fake news, thanks to its reliance on an algorithm that looks for popular results rather than true ones.

Generally, Google escapes a lot of the bad press that other tech giants, quite fairly, cop.

Twitter is a place where nameless trolls say inexcusable things, while Facebook is the place where ignorant people share their ignorant views in a way that is unreasonably popular. Just ask US President Donald Trump.

But now it’s Google’s turn to cop some flak, and it’s because the search engine, rather than just delivering results, also seeks to return what Danny Sullivan of Search Engine Land calls the “one true answer”.

The reason Google is now a spreader of lies and falsehood comes down to the realisation that we Google things we want an answer to.

Google Inc. headquarters in Mountain View, California. Picture: AP


Want to know when World War II ended? You type it into Google. And rather than just getting a link to dozens of websites, you also get a box at the top of the screen with the dates of World War II.

You have a question and now you have an answer.

This way of delivering a fact is called a “featured snippet”. It’s been a feature that Google has delivered since 2014 and, generally, people have been happy. But they’re not happy now because Google’s one true answer, in some cases, is total rubbish.

The problem is particularly highlighted with the Google Home speaker, the smart speaker that in some cases has been delivering dumb answers.

Several people have shared videos on YouTube and Twitter of asking Google Home the question: Is Obama planning a coup?

The real answer would be something like “naw mate, he’s living the good life and glad to be doing so”. The answer, according to Google, is yep — he’s in league with the Chinese.

Likewise, according to Google Home, there have been five US presidents who were members of the Ku Klux Klan. Nope, according to more reliable sources, there is no evidence that any US presidents were members of the Klan although some were racists. (Eight US presidents, including George Washington, owned slaves.)

You can keep going down this rabbit hole of misinformation that is not all right-wing conspiracies. According to Google snippets, Obama’s birth certificate is forged, Donald Trump is paranoid and mentally ill and “republicans = Nazis”.

Not all of the false answers are political. There is medical misinformation too, including the claim that fluoride will lower your IQ and that it took God six days to create the Earth.

Google has issued a statement blaming the misinformation on the algorithm and says people can click on a feedback button on each boxed fact to report it as incorrect.

The problem Google faces in all of this is the amount of misinformation out there.

The “facts” that it delivers come from the top ten results for each query. Arguably, Google is the messenger and someone else has created the falsehood and spread it.

Sullivan crunched the numbers to work out how Google might fix it.

It could, for instance, assign a person to check each fact.

But given Google processes 5 billion queries a day and about 15 per cent of them have featured snippets, that would require someone to check nearly 1 billion facts a day.

Or it could drop the feature altogether, but the problem in the age of Apple Siri, Amazon’s Alexa and Google Home, is that people are now used to asking a device a question and expecting an answer.

Other solutions would be to more obviously source the fact, so that it’s clear that it comes from something that is an unreliable source. Or only deliver snippets if they come from a list of vetted sites — but even that is problematic.

Here is the one real answer. Don’t believe everything you hear — even if the person talking is a smart speaker with artificial intelligence. They’ll say anything.

Source : http://www.news.com.au/technology/gadgets/google-joins-facebook-in-fake-news-cycle-with-algorithm-delivering-false-facts/news-story/1d65166dc1a2ac947aa3c0d10c806721

Categorized in Search Engine

At the end of 2016, YouTube suddenly changed its algorithm for calculating and presenting videos to viewers, leaving many popular creators to protest the change and label it as damaging to YouTubers everywhere.

YouTube’s new algorithm is just a sign of changing viewing habits and YouTube’s plan to reinvent itself. Contrary to the majority opinion, the algorithm change was necessary and is beneficial to small YouTubers.

This algorithm is responsible for which videos show up in the suggested tab beside the video a user is currently watching and which appear on the trending tab. In other words, it decides which videos get surfaced to a viewer ahead of others.

Many YouTubers, such as PewDiePie and JackSepticEye, said the new algorithm is killing their channels. They’ve claimed their videos are not being viewed as much as they used to be and that people are being randomly unsubscribed from their channels.

The algorithm does have major problems. Watch time is now the primary method for calculating what videos are displayed to viewers. Longer videos now do better on YouTube than shorter videos, but this doesn’t mean short videos don’t get views. Watch time isn’t a good, reliable factor for promoting certain videos.

Another issue is the Trending tab, which appears to be broken following the update. Whereas it previously showcased recent viral videos and up-and-coming videos, it now shows many videos from popular TV shows like NBC’s Today. These videos often have fewer views than new videos that aren’t on the Trending tab.

Despite the hate this new algorithm receives, it’s actually a good tool that smaller, unrecognized YouTubers can use to their advantage.

It all comes down to metadata, the behind-the-scenes information an uploader has to provide YouTube with when they upload their videos. This includes a video’s title, description, tags, thumbnail and playlists.

The platform is known for clickbait. This method works, but the algorithm works differently.

Large, established YouTube channels have fallen into the habit of promoting their new videos with clickbait and flashy thumbnails that don’t really have to do with the majority of that video’s content. They rely on their subscriber base to have notifications turned on or to arrive on their video watch page through a link on social media. That isn’t how it works anymore.

Large channels might be losing subscribers and getting fewer views because they aren’t adapting to the metadata system.

Using relevant tags and titles allows YouTube to learn what a video is about. YouTube can then share the video as a recommendation to those looking for similar videos. Tagging videos with good search terms helps get a video displayed higher in search results, which can lead to more views.
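
For creators who manage their uploads programmatically, tags can also be set through the YouTube Data API. The sketch below is only an illustration, assuming an already-authorized client object (`youtube`, built with google-api-python-client and OAuth credentials you have obtained separately) and a placeholder VIDEO_ID; it does not guarantee better ranking.

```python
# Assumes `youtube` is an authorized googleapiclient.discovery resource and
# VIDEO_ID is the ID of a video you own; both are placeholders here.
VIDEO_ID = "YOUR_VIDEO_ID"

# Fetch the current snippet (title, description, tags, categoryId, ...).
response = youtube.videos().list(part="snippet", id=VIDEO_ID).execute()
snippet = response["items"][0]["snippet"]

# Merge descriptive, search-relevant tags in alongside any existing ones.
snippet["tags"] = sorted(set(snippet.get("tags", []) + [
    "beginner photography", "camera settings tutorial", "shooting in manual mode",
]))

# Write the updated snippet back; the full snippet must be sent on update.
youtube.videos().update(part="snippet", body={"id": VIDEO_ID, "snippet": snippet}).execute()
```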

YouTuber Roberto Blake made a video detailing how creators can use good tags to get more views, even with the new algorithm.

“If you don’t know how to properly tag YouTube videos for search and discovery, then YouTube will have a harder time promoting your videos to new viewers and even to your subscribers based on what else they’ve watched,” Blake said in the video’s description.

YouTube is a search engine. Creators who understand this will have their videos rank higher and get more views if that particular topic is being searched frequently. Making videos about trending topics will get more views than videos about the uploader’s life.

Just because a majority of YouTubers are calling out the new algorithm and complaining about losing views and subscribers doesn’t mean the algorithm is bad. It’s a flawed system that needs to be changed, but when it’s used correctly, success can still be found.

As long as these big YouTubers continue to blame the platform for their channels’ shortcomings, small YouTubers can grow by using good tags and understanding how to use the system.

Author : Chase Charaba

Source : http://www.puyalluppost.com/youtube-algorithm.htm/

Categorized in Social

If you’re reading this right now, it means you’re invested in search engine optimization.

Maybe you’ve been doing it for decades since Google was nascent. Or maybe you’re stumbling into SEJ for the first time as someone who’s brand new to SEO.

Either way, you’re here to learn how to use search engines to drive targeted traffic to your website and convert visitors into new customers, clients, patients, readers, or loyal fans.

One of the hardest aspects of SEO is staying on top of all the updates Google announces, and especially the ones that it doesn’t.


If you didn’t take a good hard look at your content before Panda came out, you might have felt the pain of a Panda slap and a big rankings drop.

If you didn’t clean up your trashy backlink profile before Penguin was released, that Penguin slap and rankings loss probably didn’t do you any favors.

You get it. A huge part of optimizing your website correctly for search engines is staying on top of algorithm changes in real-time.

But with so many resources to choose from in 2017, which bring you the most accurate information? And more importantly, which can you really trust?

Why Does Google Make Changes So Often?

First things first – why does Google make so many changes to its search algorithm?

In 2012 alone, Google launched 665 improvements to search. That number was probably even higher in 2013, 2014, 2015, and 2016.

Google’s mission with search is pretty straightforward: to give users the most valuable solution to their query. It sounds simple, but with almost five billion web pages on the web, Google search is taking on a massive undertaking. The constant changes to the search algorithm are attempting to improve results for 40,000 searches every second and 1.2 trillion every year (not to mention ensuring AdWords is improving their bottom line).

How Do I Stay Updated?

While Google makes hundreds of updates to search every year, knowing them all isn’t necessary for even the most skilled SEO. Remember that it’s much more important to stay up-to-date with the biggest updates and the most well-known changes in search results.

The most important updates are:

  1. Major algorithm updates like Panda, Penguin, Hummingbird, etc.
  2. Major click-biasing changes like knowledge graph, Google answer box, image carousel, etc.
  3. Major user behavior changes like mobile search, load speed expectations, CTR curve changes, etc.

These are the biggest updates that will affect the visibility of your website in search engine results. Here are the resources that will help you track these changes effectively.

1. Google Webmaster Tools Blog

The best place to start when it comes to Google updates is the source. While there are many trusted resources out there that report on Google search changes, it’s best to hear about major updates straight from the horse’s mouth.

The Google Webmaster Central Blog offers official news on crawling and indexing sites for the Google index.

Want to learn about the new mobile-friendly test API? Here’s a good place to start.


The Google Webmasters YouTube Channel is another official Google source. This is a good option if you like to digest information via video and can be a nice change-of-pace from the nonstop onslaught of articles on new Google updates. 

Webmasters record videos from around the world, so the channel has a real international flavor. Subscribe to the channel and you’ll receive video updates via emails every once in a while from YouTube. Set it and forget it!

2. Search Engine Roundtable

Search Engine Roundtable is probably the best source outside of Google to find out about the most recent Google updates. They exclusively cover Google and publish five to ten articles or updates daily. They send out a daily recap so you can get all the news in your inbox without having to go searching.


3. Moz

Moz is one of the companies at the forefront of SEO and inbound marketing, as it has been for years now. Their approach to making SEO accessible for everyone is their hallmark, and it makes them a must-bookmark source for SEO knowledge.

The Moz Blog is a great place not only to learn about Google’s most recent updates but to really dive deep into what they mean for you and your website. Knowing when and how updates affect websites in general is one thing, but learning exactly how they work and what you need to do to continue optimizing correctly and avoid penalties is going to be the difference-maker.

Join the email newsletter as well as the Moz Top 10 Newsletter to get updates right in your inbox.

Mozcast is another helpful Moz product. It’s a weather report showing turbulence in Google’s search algorithm for any given day. The stormier and hotter the weather, the more Google’s rankings changed.


4. Search Engine Journal

That’s right. There’s a reason most posts on SEJ receive hundreds or thousands of reads and shares – you’ve already arrived at a premier destination for SEO.

While most posts here go into real depth about a specific topic related to search engine marketing, Search Engine Journal offers a lot with regard to Google’s algorithm updates.

Matt Southern is the Lead Newswriter here and regularly publishes shorter articles that will give you the rundown on something happening right now.

For example, if you wanted to know what 200 sites were banned from Google search results for promoting fake news recently, Matt’s your guy.

5. Search Engine Land

Search Engine Land is another daily publication that covers all aspects of the search marketing industry. They have a section specifically to help you track Google’s updates, and you can of course subscribe to said updates via email.

6. Twitter

There are a few key figures in the SEO community you should follow. You can even activate mobile notifications so you’re the first one to see their tweets and learn about Google’s most recent updates.

The Real Trick

Now that you have all these great resources to stay up-to-date on Google’s changes, you need to set up a system that helps you keep track. Going to each of these websites every day isn’t sustainable, so let’s get you set up with a better system.

  1. Subscribe to each of the above periodicals via email. That way, all your updates will flow directly into your inbox.
  2. Sign up for unroll.me and get all those updates in one clean email every morning, afternoon or evening.

Now you’ll receive a daily update about Google algorithm changes without leaving your computer. Magic!
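
If you’d rather pull the headlines yourself instead of relying on email, a short RSS script can do the same job. This is a minimal sketch using the feedparser library; the feed URLs shown are assumptions, so verify each one against the site it belongs to.

```python
import feedparser  # pip install feedparser

# Feed URLs are assumptions; check each site for its current RSS address.
FEEDS = {
    "Google Webmaster Central Blog": "https://webmasters.googleblog.com/feeds/posts/default",
    "Search Engine Roundtable": "https://www.seroundtable.com/index.rdf",
    "Moz Blog": "https://moz.com/feeds/blog.rss",
}

for name, url in FEEDS.items():
    feed = feedparser.parse(url)
    print(f"\n{name}")
    for entry in feed.entries[:3]:  # three most recent posts per source
        print(f"  - {entry.title} ({entry.link})")
```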

Knowing about the changes Google makes to its algorithm is essential to keeping your website’s content visible in search results. Staying vigilant, continuing to learn and setting up a system that will automatically deliver the news to you are the keys to maintaining an SEO-friendly website.

Author : Joe Howard

Source : https://www.searchenginejournal.com/communication-overload-keeping-google-searchs-constant-updates/185168/

Categorized in Search Engine

Did your rankings in Google get better or worse over the past week? Many webmasters and SEOs are noticing some significant changes in Google's search rankings algorithm.

Last Tuesday, Feb. 7, there seems to have been a Google algorithm change that adjusted how many sites rank — both for good and bad. I’ve been tracking the update since Feb. 8, and over time, more and more webmasters and SEOs have been taking notice of the ranking changes at Google.

This seems to be unrelated to the unconfirmed link algorithm change from earlier in February. This new update seems to be more related to Panda, based on such things as content and site quality, versus link factors.

Google has not confirmed the update and would not comment on what webmasters and SEOs have been noticing over the past week in the search results. So we cannot confirm if this was a content quality shift, link quality change or something else. But what we can say is that webmasters and SEOs are very busy noticing these ranking changes, through looking at ranking reports or their traffic from Google in their analytics, or using tracking tools that track visibility and other means.

The automated tracking tools from Mozcast, RankRanger, Accuranker and others also all showed evidence of an algorithm update on Feb. 7.

This update seems to have been somewhat significant, which is why we reached out to Google for a comment. If we hear more from Google, we will update you. But for now, this is all based on the conversation and chatter that I track closely within the industry.

Author : Barry Schwartz

Source : http://searchengineland.com/new-unconfirmed-google-algorithm-update-touched-february-7th-269338

Categorized in Search Engine