
Search-engine giant says one in 10 queries (and some advertisements) will see improved results from algorithm change

MOUNTAIN VIEW, Calif.—Google rarely talks about its secretive search algorithm. This week, the tech giant took a stab at transparency, unveiling changes that it says will surface more accurate and intelligent responses to hundreds of millions of queries each day.

Top Google executives, in a media briefing Thursday, said they had harnessed advanced machine learning and mathematical modeling to produce better answers for complex search entries that often confound its current algorithm. They characterized the changes—under a...


[Source: This article was published in wsj.com By Rob Copeland - Uploaded by the Association Member: Jasper Solander] 

 
Categorized in Search Engine

Google said it is making the biggest change to its search algorithm in the past five years, a change that, if successful, users might not even be able to detect.

The search giant on Friday announced a tweak to the software underlying its vaunted search engine that is meant to better interpret queries written in sentence form. Whereas prior versions of the search engine may have overlooked words such as “can” and “to,” the new software can evaluate whether those words change the intent of a search, Google said. Put a bit more simply, it is a way of understanding search terms in relation to one another, looking at them as an entire phrase rather than as just a bucket of words, the company said. Google is calling the new software BERT, after a research paper published last year by Google researchers describing a form of language processing known as Bidirectional Encoder Representations from Transformers.
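
To make the idea concrete, here is a minimal, hedged sketch of how a BERT-style model treats every query word, including a preposition like “to,” as part of the whole phrase. It uses the open-source Hugging Face transformers library and the publicly released bert-base-uncased checkpoint, not Google's production system, and the query is the Brazil-visa example discussed later in this article.

```python
# A minimal sketch (not Google's production system): the pretrained
# "bert-base-uncased" model from the Hugging Face transformers library
# produces a context-dependent vector for every token in the query,
# so function words like "to" are modeled rather than discarded.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

query = "2019 brazil traveler to usa need a visa"
inputs = tokenizer(query, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One vector per token; each vector depends on every other word in the phrase.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, vector in zip(tokens, outputs.last_hidden_state[0]):
    print(token, vector[:3].tolist())
```

In a bag-of-words system the word “to” would typically be dropped as a stopword; in a contextual model its representation changes depending on whether the traveler is going to the United States or coming from it.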

While Google is constantly tweaking its algorithm, BERT could affect as many as 10 percent of English-language searches, said Pandu Nayak, vice president of search, at a media event. Understanding queries correctly, so that Google returns the best result on the first try, is essential to Google's transformation from a list of links into a service that determines the right answer without users even having to click through to another site. The challenge will only grow as queries increasingly move from typed text to voice-controlled technology.

But even big changes aren’t likely to register with the masses, he conceded.

“Most ranking changes the average person does not notice, other than the sneaking feeling that their searches were better,” said Nayak.

“You don’t have the comparison of what didn’t work yesterday and what does work today,” said Ben Gomes, senior vice president of search.

BERT, said Nayak, may be able to determine that a phrase such as “math practice books for adults” likely means the user wants to find math books that adults can use, because of the importance of the word “for.” A prior version of the search engine displayed a book result targeted for “young adults,” according to a demonstration he gave.

Google is rolling out the new algorithm to U.S. users in the coming weeks, the company said. It will later offer it to other countries, though it didn’t offer specifics on timing.

The changes suggest that even after 20 years of data collection, and with Google's roughly 90 percent share of the search market, Web searches may best be thought of as equal parts art and science. Nayak pointed to examples such as searches for how to park a car on a hill with no curb, or whether a Brazilian traveler needs a visa to visit the United States, as yielding less than satisfactory results without the aid of the BERT software.

To test BERT, Google turned to its thousands of contract workers known as “raters,” Nayak said, who compared results from search queries with and without the software. Over time, the software learns when it needs to read entire phrases versus just keywords. About 15 percent of the billions of searches conducted each day are new, Google said.

Google said it also considers other input, such as whether a user tries rephrasing a search term rather than initially clicking on one of the first couple of links.

Nayak and Gomes said they didn’t know whether BERT would be used to improve advertising sales that are related to search terms. Advertising accounts for the vast majority of Google’s revenue.

[Source: This article was published in unionleader.com By Greg Bensinger - Uploaded by the Association Member: Jeremy Frink]

Categorized in Search Engine

Source: This article was published in searchengineland.com By Barry Schwartz - Contributed by Member: Clara Johnson

Some SEOs are seeing more fluctuations with the Google rankings now, but Google has confirmed the August 1 update has been fully rolled out.

Google has just confirmed that the core search algorithm update that began rolling out a week ago has now finished fully rolling out. Google search liaison Danny Sullivan said on Twitter, “It’s done” when I asked him if the rollout was complete.

Danny did add that if we are seeing other changes, “We always have changes that happen, both broad and more specific.” This matters because some of the tracking tools are showing more fluctuations today, and if those fluctuations are unrelated to this update, the question is what they can be attributed to.

Here is Danny’s tweet:

@dannysullivan is the rollout of the core update complete? Seeing fluctuations today.

It's done. That said, we always have changes that happen, both broad and more specific.

Based on our research, the August 1 update was one of the more significant updates we have seen from Google on the organic search side in some time. It continued to roll out over the weekend and has now completed.

Google’s current advice on this update is that webmasters do not need to make any technical changes to their websites. In fact, the company said “no fix” is required and that the update is aimed at promoting sites that were once undervalued. Google has said you should continue to look at ways of making your overall website better and provide even higher-quality content and experiences to your website visitors.

Now that the rollout is complete, you can check to see if your site was impacted. But as Danny Sullivan said above, there are always changes happening in search.

 

Categorized in Search Engine

Source: This article was published in phys.org - Contributed by Member: Logan Hochstetler

As scientific datasets increase in both size and complexity, the ability to label, filter and search this deluge of information has become a laborious, time-consuming and sometimes impossible task, without the help of automated tools.

With this in mind, a team of researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley is developing innovative machine learning tools to pull contextual information from scientific datasets and automatically generate metadata tags for each file. Scientists can then search these files via a web-based search engine for scientific data, called Science Search, that the Berkeley team is building.

As a proof-of-concept, the team is working with staff at the Department of Energy's (DOE) Molecular Foundry, located at Berkeley Lab, to demonstrate the concepts of Science Search on the images captured by the facility's instruments. A beta version of the platform has been made available to Foundry researchers.

"A tool like Science Search has the potential to revolutionize our research," says Colin Ophus, a Molecular Foundry research scientist within the National Center for Electron Microscopy (NCEM) and Science Search Collaborator. "We are a taxpayer-funded National User Facility, and we would like to make all of the data widely available, rather than the small number of images chosen for publication. However, today, most of the data that is collected here only really gets looked at by a handful of people—the data producers, including the PI (principal investigator), their postdocs or graduate students—because there is currently no easy way to sift through and share the data. By making this raw data easily searchable and shareable, via the Internet, Science Search could open this reservoir of 'dark data' to all scientists and maximize our facility's scientific impact."

The Challenges of Searching Science Data

Today, search engines are ubiquitously used to find information on the Internet, but searching scientific data presents a different set of challenges. For example, Google's algorithm relies on more than 200 clues to achieve an effective search. These clues can come in the form of keywords on a webpage, metadata in images or audience feedback from billions of people when they click on the information they are looking for. In contrast, scientific data comes in many forms that are radically different from an average web page, requires context that is specific to the science and often lacks the metadata needed to provide the context required for effective searches.

At National User Facilities like the Molecular Foundry, researchers from all over the world apply for time and then travel to Berkeley to use extremely specialized instruments free of charge. Ophus notes that the current cameras on microscopes at the Foundry can collect up to a terabyte of data in under 10 minutes. Users then need to manually sift through this data to find quality images with "good resolution" and save that information on a secure shared file system, like Dropbox, or on an external hard drive that they eventually take home with them to analyze.

Oftentimes, the researchers that come to the Molecular Foundry only have a couple of days to collect their data. Because it is very tedious and time-consuming to manually add notes to terabytes of scientific data and there is no standard for doing it, most researchers just type shorthand descriptions in the filename. This might make sense to the person saving the file but often doesn't make much sense to anyone else.

"The lack of real metadata labels eventually causes problems when the scientist tries to find the data later or attempts to share it with others," says Lavanya Ramakrishnan, a staff scientist in Berkeley Lab's Computational Research Division (CRD) and co-principal investigator of the Science Search project. "But with machine-learning techniques, we can have computers help with what is laborious for the users, including adding tags to the data. Then we can use those tags to effectively search the data."

To address the metadata issue, the Berkeley Lab team uses machine-learning techniques to mine the "science ecosystem"—including instrument timestamps, facility user logs, scientific proposals, publications and file system structures—for contextual information. The collective information from these sources, including the timestamp of the experiment, notes about the resolution and filter used, and the user's request for time, provides critical contextual information. The Berkeley Lab team has put together an innovative software stack that uses machine-learning techniques, including natural language processing, to pull contextual keywords about the scientific experiment and automatically create metadata tags for the data.
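
As a rough illustration of that tagging step, and not the actual Berkeley Lab software stack, the sketch below uses scikit-learn's TfidfVectorizer to pull high-weight terms out of the free text surrounding a file (a proposal line, a user-log line, a filename) and propose them as candidate metadata tags; the snippets are invented for the example.

```python
# A simplified illustration (not the Berkeley Lab software stack) of pulling
# candidate metadata tags from the "science ecosystem" text around a file.
# The snippets below are invented placeholders; a real pipeline would read
# proposals, user logs and filenames from the facility's systems.
from sklearn.feature_extraction.text import TfidfVectorizer

context_snippets = [
    "Proposal: atomic-resolution imaging of gold nanoparticles on TEAM 1",
    "User log: 300 kV, HAADF detector, 10 Mx magnification",
    "Filename: au_np_haadf_300kv_run42.dm4",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(context_snippets)

# Rank terms by their total TF-IDF weight across the snippets and keep the top
# few as suggested tags for a human expert to confirm or reject.
scores = tfidf.sum(axis=0).A1
terms = vectorizer.get_feature_names_out()
suggested_tags = sorted(zip(terms, scores), key=lambda t: -t[1])[:5]
print(suggested_tags)
```

A production pipeline would of course draw on far more context than three strings, and it would hand the suggested tags to the expert-facing validation interface described later in this article.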

For the proof-of-concept, Ophus shared data from the Molecular Foundry's TEAM 1 electron microscope at NCEM, recently collected by the facility staff, with the Science Search team. He also volunteered to label a few thousand images to give the machine-learning tools some labels from which to start learning. While this is a good start, Science Search co-principal investigator Gunther Weber notes that most successful machine-learning applications typically require significantly more data and feedback to deliver better results. For example, in the case of search engines like Google, Weber notes that training datasets are created and machine-learning techniques are validated when billions of people around the world verify their identity by clicking on all the images with street signs or storefronts after typing in their passwords, or when they tag their friends in an image on Facebook.

Berkeley Lab researchers use machine learning to search science data
This screen capture of the Science Search interface shows how users can easily validate metadata tags that have been generated via machine learning or add information that hasn't already been captured. Credit: Gonzalo Rodrigo, Berkeley Lab

"In the case of science data only a handful of domain experts can create training sets and validate machine-learning techniques, so one of the big ongoing problems we face is an extremely small number of training sets," says Weber, who is also a staff scientist in Berkeley Lab's CRD.

To overcome this challenge, the Berkeley Lab researchers used transfer learning to limit the degrees of freedom, or parameter counts, on their convolutional neural networks (CNNs). Transfer learning is a machine learning method in which a model developed for one task is reused as the starting point for a model on a second task, which allows the user to get more accurate results from a smaller training set. In the case of the TEAM I microscope, the data produced contains information about which operation mode the instrument was in at the time of collection. With that information, Weber was able to train the neural network on that classification so it could generate that mode-of-operation label automatically. He then froze that convolutional layer of the network, which meant he only had to retrain the densely connected layers. This approach effectively reduces the number of parameters on the CNN, allowing the team to get meaningful results from their limited training data.
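
The following is a minimal Keras sketch of that transfer-learning pattern. The layer sizes, image shape and label counts are invented for illustration; this is not the actual Science Search model.

```python
# A minimal transfer-learning sketch: train a small CNN on a label that is
# cheap to obtain in bulk (the microscope's operation mode recorded in file
# metadata), then freeze the convolutional feature extractor and retrain only
# a new dense head on the scarce expert-labeled images.
import tensorflow as tf
from tensorflow.keras import layers, models

# 1) Feature extractor plus a head for the plentiful "operation mode" labels.
base = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(name="features"),
])
mode_head = layers.Dense(4, activation="softmax")  # e.g. 4 operation modes
mode_model = models.Sequential([base, mode_head])
mode_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# mode_model.fit(bulk_images, bulk_mode_labels, epochs=5)   # plentiful labels

# 2) Freeze the convolutional layers and retrain only a new dense head on the
#    small expert-labeled set, cutting the number of trainable parameters.
base.trainable = False
expert_head = layers.Dense(10, activation="softmax")  # e.g. 10 expert tags
tag_model = models.Sequential([base, expert_head])
tag_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# tag_model.fit(expert_images, expert_tag_labels, epochs=20)  # scarce labels
```

The design choice is the one described above: the cheap, plentiful label trains the convolutional feature extractor, so the scarce expert labels only have to fit the much smaller dense head.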

Machine Learning to Mine the Scientific Ecosystem

In addition to generating metadata tags through training datasets, the Berkeley Lab team also developed tools that use machine-learning techniques to mine the science ecosystem for data context. For example, the data ingest module can look at a multitude of information sources from the scientific ecosystem—including instrument timestamps, user logs, proposals, and publications—and identify commonalities. Tools developed at Berkeley Lab that use natural-language-processing methods can then identify and rank words that give context to the data and facilitate meaningful results for users later on. The user will see something similar to the results page of an Internet search, where content with the most text matching the user's search words will appear higher on the page. The system also learns from user queries and the search results they click on.
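
As a toy sketch of that ranking step, again not the project's actual code, the example below scores each file's extracted context against a user's query with TF-IDF cosine similarity and returns the closest matches first; the filenames and tag strings are invented.

```python
# Toy ranking step: score each file's extracted tags/context against a query
# and return the best matches first. Data and filenames are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "run42.dm4": "gold nanoparticles haadf 300 kv atomic resolution",
    "run07.dm4": "graphene defect imaging low dose",
    "run19.dm4": "zeolite catalyst tomography tilt series",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents.values())

def search(query, top_k=3):
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = sorted(zip(documents.keys(), scores), key=lambda t: -t[1])
    return ranked[:top_k]

print(search("gold nanoparticle images"))
```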

Because scientific instruments are generating an ever-growing body of data, all aspects of the Berkeley team's science search engine needed to be scalable to keep pace with the rate and scale of the data volumes being produced. The team achieved this by setting up their system in a Spin instance on the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC). Spin is a Docker-based edge-services technology developed at NERSC that can access the facility's high-performance computing systems and storage on the back end.

"One of the reasons it is possible for us to build a tool like Science Search is our access to resources at NERSC," says Gonzalo Rodrigo, a Berkeley Lab postdoctoral researcher who is working on the natural language processing and infrastructure challenges in Science Search. "We have to store, analyze and retrieve really large datasets, and it is useful to have access to a supercomputing facility to do the heavy lifting for these tasks. NERSC's Spin is a great platform to run our search engine that is a user-facing application that requires access to large datasets and analytical data that can only be stored on large supercomputing storage systems."

An Interface for Validating and Searching Data

When the Berkeley Lab team developed the interface for users to interact with their system, they knew that it would have to accomplish a couple of objectives, including effective search and allowing human input to the machine learning models. Because the system relies on domain experts to help generate the training data and validate the machine-learning model output, the interface needed to facilitate that.

"The tagging interface that we developed displays the original data and metadata available, as well as any machine-generated tags we have so far. Expert users then can browse the data and create new tags and review any machine-generated tags for accuracy," says Matt Henderson, who is a Computer Systems Engineer in CRD and leads the user interface development effort.

To facilitate an effective search for users based on available information, the team's search interface provides a query mechanism for available files, proposals and papers that the Berkeley-developed machine-learning tools have parsed and extracted tags from. Each listed search result item represents a summary of that data, with a more detailed secondary view available, including information on tags that matched this item. The team is currently exploring how to best incorporate user feedback to improve the models and tags.

"Having the ability to explore datasets is important for scientific breakthroughs, and this is the first time that anything like Science Search has been attempted," says Ramakrishnan. "Our ultimate vision is to build the foundation that will eventually support a 'Google' for scientific data, where researchers can even  distributed datasets. Our current work provides the foundation needed to get to that ambitious vision."

"Berkeley Lab is really an ideal place to build a tool like Science Search because we have a number of user facilities, like the Molecular Foundry, that has decades worth of data that would provide even more value to the scientific community if the data could be searched and shared," adds Katie Antypas, who is the principal investigator of Science Search and head of NERSC's Data Department. "Plus we have great access to machine-learning expertise in the Berkeley Lab Computing Sciences Area as well as HPC resources at NERSC in order to build these capabilities."

Categorized in Online Research

Google has confirmed rumors that a search algorithm update took place on Monday. Some sites may have seen their rankings improve, while others may have seen negative or zero change.

Google has posted on Twitter that it released a “broad core algorithm update” this past Monday. Google said it “routinely” does updates “throughout the year” and referenced the communication from the previous core update.

Google explained that core search updates happen “several times per year” and that while “some sites may note drops or gains,” there is nothing specific a site can do to tweak its rankings around these updates. In general, Google says to continue to improve your overall site quality, and the next time Google runs these updates, hopefully, your website will be rewarded.

Google explained that “pages that were previously under-rewarded” would see a benefit from these core updates.

Here is the statement Google previously made about this type of update:

Each day, Google usually releases one or more changes designed to improve our results. Some are focused around specific improvements. Some are broad changes. Last week, we released a broad core algorithm update. We do these routinely several times per year.

As with any update, some sites may note drops or gains. There’s nothing wrong with pages that may now perform less well. Instead, it’s that changes to our systems are benefiting pages that were previously under-rewarded.

There’s no “fix” for pages that may perform less well, other than to remain focused on building great content. Over time, it may be that your content may rise relative to other pages.

Here is Google’s confirmation from today about the update on Monday:

[Screenshot: Google's tweet confirming the broad core algorithm update]

 

Source: This article was published searchengineland.com By Barry Schwartz

Categorized in Search Engine

Still, growing frustration with rude, and even phony, online posting begs for some mechanism to filter out rubbish. So, rather than employ costly humans to monitor online discussion, we try to do it with software.

Software does some things fabulously well, but interpreting language isn’t usually one of them.

I’ve never noticed any dramatic difference in attitudes or civility between the people of Vermont and New Hampshire, yet the latest tech news claims that Vermont is America’s top source of “toxic” online comments, while its next-door neighbor New Hampshire is dead last.

Reports also claim that the humble Chicago suburb of Park Forest is trolls’ paradise.

After decades living in the Chicago Metropolitan area, I say without hesitation that the people of Park Forest don’t stand out from the crowd, for trolling or anything else. I don’t know whether they wish to stand out or not, but it’s my observation that folks from Park Forest just blend in. People may joke about Cicero and Berwyn, but not Park Forest.

So what’s going on? Software.

Perspective, a tool intended to identify “toxic” online comments, is one of the Jigsaw projects, Google experiments aimed at promoting greater safety online. Users feed it comments, and Perspective returns a 0-100 score for the percent of respondents likely to find the comment “toxic,” that is, likely to make them leave the conversation.
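
For readers who want to try it, here is one way to call Perspective from Python through its Comment Analyzer REST endpoint. The endpoint, request fields and response layout shown are based on the publicly documented API and may have changed since this article was written, so treat them as assumptions; a Google Cloud API key is required.

```python
# A hedged sketch of querying the Perspective API with the "requests" library.
# Endpoint, request fields and response layout follow the public Comment
# Analyzer documentation and may have changed; the API key is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_percent(text):
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body).json()
    # The API returns a probability between 0 and 1; this article quotes it
    # as a 0-100 "percent of respondents likely to find the comment toxic".
    return 100 * response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_percent("You are a woman"))
```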

It was released months ago, but has drawn a blast of new publicity in the past few days since Wired used it for development of “Trolls Across America,” an article featuring an online map highlighting supposed trolling hotspots across the country.

Interpreting language is one of the most complex and subtle things that people do. The meaning of human communication is based in much more than the dictionary meaning of words. Tone of voice, situation, personal history and many other layers of context have roles to play.

The same remark may hold different significance for each person who hears it. Even one person may view a statement differently at different moments. Human language just does not lend itself to the kinds of strict rules of interpretation that are used by computers.

As soon as Perspective (which is clearly labeled as a research project) was announced, prospective users were warned about its limitations. Automated moderation was not recommended, for example. One suggested use was helping human moderators decide what to review.

David Auerbach, writing for MIT’s Technology Review, soon pointed out that “It’s Easy to Slip Toxic Language Past Alphabet’s Toxic-Comment Detector. Machine-learning algorithms are no match for the creativity of human insults.” He tested an assortment of phrases, getting results like these:

  • “‘Trump sucks’ scored a colossal 96 percent, yet neo-Nazi codeword ‘14/88’ only scored 5 percent.” [I also tested “14/88” and got no results at all. In fact, I tested all of the phrases mentioned by Auerbach and got somewhat different results, though the patterns were all similar.]
  • “Jews are human,” 72. “Jews are not human,” 64.
  • “The Holocaust never happened,” 21.

Twitter’s all atwitter with additional test results from machine learning researchers and other curious people. Here is a sample of the phrases that were mentioned, in increasing order of toxicity scores from Perspective:

  1. I love the Führer, 8
  2. I am a man, 20
  3. I am a woman, 41
  4. You are a man, 52
  5. Algorithms are likely to reproduce human gender and racial biases, 56
  6. I am a Jew, 74
  7. You are a woman, 79

Linguistically speaking, most of these statements are just facts. If I’m a woman, I’m a woman. If you’re a man, you’re a man. If we interpret such statements as something more than neutral facts, we may be reading too much into them. “I love the Führer” is something else entirely. Looking at these scores, though, you’d get a very different impression.

The problem is, the scoring mechanism can’t be any better than the rules behind it.

Nobody at Google set out to make a rule that assigned a low toxicity score to “I love the Führer” or a high score to “I am a Jew.” The rules were created in large part through automation, presenting a crowd of people with sample comments and collecting opinions on those comments, then assigning scores to new comments based on similarity to the example comments and corresponding ratings.

This approach has limitations. The crowd of people are not without biases, and those will be reflected in the scores. And terminology not included in the sample data will create gaps in results.
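
A toy example can make that concrete. The sketch below, which is not Perspective's actual model, trains a simple TF-IDF plus logistic-regression classifier on a handful of invented crowd-rated comments; an explicit insult it has seen scores high, while a coded term absent from the training data, such as the “14/88” mentioned above, contributes almost no signal and falls back toward the model's prior.

```python
# Toy illustration (not Perspective's model) of why scores inherit the biases
# and gaps of the labeled examples: a classifier trained on crowd ratings can
# only score new comments by similarity to what it has already seen.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented, tiny training set standing in for crowd-rated comments (1 = toxic).
comments = ["you idiot", "have a nice day", "shut up loser",
            "thanks for sharing", "you people are awful",
            "great point, well argued"]
labels = [1, 0, 1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# "14/88" never appears in training, so the model falls back near its prior,
# while an insult it has seen scores high.
for text in ["you idiot", "14/88"]:
    print(text, model.predict_proba([text])[0][1])
```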

A couple of years ago, I heard a police trainer tell a group of officers that removing just one word from their vocabulary could prevent 80% of police misconduct complaints filed by the public. The officers had no difficulty guessing the word. It’s deeply embedded in police jargon, and has been for so long that it got its own chapter in the 1978 law enforcement book Policing: A View from the Street.

Yet the same word credited for abundant complaints of police misconduct has appeared in at least three articles here on Forbes in the past month (1, 2, 3), and not drawn so much as a comment.

Often, it’s not the words that offend, but the venom behind them. And that’s hard, if not impossible, to capture in an algorithm.

This isn’t to say that technology can’t do some worthwhile things with human language.

Text analytics algorithms, rules used by software to convert open-ended text into more conventional types of data, such as categories or numeric scores, can be useful. They lie at the heart of online search technology, for example, helping us find documents related to topics of interest. Some other applications include:

  • e-discovery, which increases productivity for legal teams reviewing large quantities of documents for litigation
  • Warranty claim investigation, where text analysis helps manufacturers to identify product flaws early and enable corrective action
  • Targeted advertising, which uses text from content that users read or create to present relevant ads

It takes more than a dictionary to understand the meaning of language. Context, on the page and off, is all-important.

People recognize the connections between the things that people write or say, and the unspoken parts of the story. Software doesn’t do that so well.

Meta S. Brown is author of Data Mining for Dummies and creator of the Storytelling for Data Analysts and Storytelling for Tech workshops. http://www.metabrown.com.

Source: This article was published forbes.com

Categorized in Search Engine

Google is in the process of revamping its existing search algorithm to curb the promotion of extreme views, conspiracy theories and, most importantly, fake news.

The internet giant, which maintains an internal and undisclosed ranking for websites and their URLs, said it will demote "low-quality" websites, especially those circulating misleading or fake content. A group of 10,000-plus staff, known as "raters," will assess search results and flag web pages that host hoaxes, conspiracy theories and content that is sub-par.

"In a world where tens of thousands of pages are coming online every minute of every day, there are new ways that people try to game the system," Google's Ben Gomes said in a blog post. "In order to have long-term and impactful changes, more structural changes [to Google's search engine] are needed."

Check out some of the major changes Google has made public regarding its algorithm change:

  • Users can now report offensive suggestions from the Autocomplete feature and false statements in Google's Direct Answer box, which will be manually checked by a moderator.
  • Users can even flag content that appears on Featured Snippets in search
  • Instead of the bots traditionally used by search companies, Google assures that real people will assess the quality of Google's search results
  • Low-quality web pages with content of conspiracy theories, extremism and unreliable sources will be demoted in ranking
  • More authoritative pages with strong sources and facts will be rated higher
  • Linking to offending websites, or hiding text on a page so that it is invisible to humans but visible to the search algorithms, can also demote a webpage
  • Suspicious or unrecognised files and formats on landing pages, which the company warns are malware in many cases, can likewise trigger a demotion

For a detailed explanation of how the company determines its search results and rankings, check out its updated search quality evaluation guidelines. The company, which has been secretive about its search strategy in the past, has now promised more transparency to let people know how the business works after coming under fire for failing to combat fake and extremist content.

Source : ibtimes.co.uk by Agamoni Ghosh

Categorized in Search Engine

Yesterday, Google released a new quality raters guidelines PDF document that was specifically updated to tell the quality raters how to spot and flag offensive, upsetting, inaccurate and hateful web pages in the search results.

Paul Haahr, a lead search engineer at Google who celebrated his 15th year at the company, told us that Google has been working on algorithms to combat web pages that are offensive, upsetting, inaccurate and hateful in their search results. He said it only impacts about 0.1% of the queries but it is an important problem.

With that, they want to make sure their algorithms are doing a good job. That is why they have updated their quality raters guidelines, so they can test whether the search results reflect their algorithms. If they don't, that data goes back to the engineers, where they can tweak things or build new algorithms or machine learning techniques to weed out even more of the content Google doesn't want in its search results.

Paul Haahr explained that there are times when people specifically want to find hateful or inaccurate information. Maybe on the inaccurate side they like satire sites, or maybe on the hate side they hate people. Google should not prevent people from finding content that they want, Paul said. And the quality raters guidelines explain, with key examples, how raters should rate such pages.

But overall, ever since the elections, Google, Facebook and others have been under fire to do something about facts and hate and more. They released fact checking schema for news stories. They supposedly banned AdSense publishers. They removed certain classes of hate and inaccurate results from the search results. And they tweaked the top stories algorithm to show more accurate and authoritative results.

Google has been working on this and wants to continue working on it. The quality raters will help make sure that what the engineers are doing does translate into proper search results. At the same time, as you all mostly know, quality raters have no power to remove search results or adjust rankings; they just rate the search results, and that data goes back to the Google engineers to use.

Both Danny Sullivan and Jennifer Slegg dug into the quality raters guidelines changes, so go to those two sites to read the summaries of how Google defines them. Overall it is pretty fascinating, because these are not easy solutions or easy judgement calls, so Google has to define them pretty precisely.

It is an important problem, but with only 0.1% of queries impacted, it seems like a lot of effort is being put into this.

Download the updated raters guidelines over here.

Forum discussion at WebmasterWorld.

Author : Barry Schwartz

Source : https://www.seroundtable.com/google-algorithms-targets-offensive-inaccurate-hate-23558.html

Categorized in Search Engine

There appears to be a large Google ranking or algorithm update happening right now in the core Google web search results. I was on the fence about covering it this morning and decided to wait until tomorrow, once I had heard more chatter, but the chatter is especially hot right now and SEOs keep asking me about it this morning.

The Black Hat World forums and WebmasterWorld forums are the most active discussing it - it does seem to be focused more in the black hat space, since that thread is going insane right now. So this might be a link related Google algorithm shift, but Google has not confirmed anything and I doubt they will - I have already asked.

The first signs of this update were earlier this morning, March 8, 2017. Here are some of the comments from the threads:

A MASSIVE drop in traffic happened on my site on the 7th of March at 8pm and has continued today (8th March). I have no idea why.

I have also suffered from the 20%-40% drops in traffic in Feb

Iam also having some serious issues. My Websites dropped out of the Index around 9 hours ago. Im still trying to find out whats going on. None of them has used Black hat Methods or some other stuff. They just disappeared from the normal search index. Still the site: command shows that they are. On the upper hand some serious spam in my niche just came up from feed-scrappers and stuff. Maybe someone has an explanation.

I've noticed dramatic shifting in the past 24 hours with my website! Any one else experience the same thing?


Yes, experiencing a crazy Google dance too. At first I thought my site was deindexed!

90% of my keywords flew into oblivion. While 5% of the keywords I'm ranking have sank to deep search results; while the other 5% is still ranking as normal.

I am having the same issue. It started around 18:00 from yesterday. My traffic from google almost stopped. At that time, I was getting 70-100 users at same time and now I am getting 5-12. My small site had aprox 5500-7000 uniques day and around 13000-16000 page views/day . No black hat seo, but I have some spam links, probably from competitors. My site have almost 10 years.

I can't quote all the posts, there are just too many. The tools are not on fire, yet - I suspect some of them may show these signals tomorrow morning. So I'll probably report back tomorrow about what the tools are saying and what theories are out there regarding this update.

It seems some are saying that their rankings tanked this morning, some saw pages completely delisted, and some are reporting that around 1pm EST their rankings returned back to normal. Things are in massive flux right now.

It is hard to tell if this is solely focused on spammy links, low quality content, or something else. It could also be a Google bug for all I know.

Like I said, I am working on getting you all more details but for now, here are some more resources from the community where the discussion is actively happening now.

Forum discussion at WebmasterWorld, Black Hat World, Google Webmaster Help and many threads on Twitter.

Author : Barry Schwartz

Source : https://www.seroundtable.com/google-algorithm-ranking-update-23523.html

Categorized in Search Engine

Early yesterday morning, the SEO underground was buzzing with alarm, as webmasters shared horror stories of dropped rankings, tanking keywords, and halted traffic.

Search Engine Roundtable’s Barry Schwartz reported chatter occurring in the wee hours of March 8th, with several forums proclaiming “massive” drops in traffic. Over the course of the morning, users commiserated with each other and shared news of the statuses of their sites, which laid out a pretty bleak landscape in the wake of the possible algorithm update. Here’s what people were reporting:

  • 90% loss of keyword positions
  • Sites disappearing while others stayed – or even improved
  • 20,000 visits down to 2,000
  • Mobile pages being deindexed

Some webmasters even reported their rankings returning by the afternoon. “Things are in massive flux right now,” says Schwartz, who continues to keep his ears and eyes open for changes.

Granted, these are just some examples of what people were experiencing. They are in no way meant to describe what this algorithm update/ranking change may exactly entail. However, they are significant enough that we felt the need to share it so you’re aware.

Is it link-related?

Perhaps. Many webmasters have admitted that their sites have spam links (“from competitors,” they have said), which, depending on the severity of the links and the potency of the update/ranking shift, could affect their sites. Still, to drop the sites entirely from the SERPs, as some have claimed, seems very extreme. Even so, sites have been disappearing across a wide variety of niches. We would need to look at the types of sites affected before we could make any guess as to why this is happening.

One common theory seems to be that Google is making a major move against private blog networks (PBNs). These are a set of blog sites under a single owner that link to the same owner’s “money sites,” effectively spreading link juice back to the sites that make money. PBNs have been considered grey hat territory for a while, but this algorithm update could prove how Google really feels about them.

My website has been hit – help!

Stay calm. It’s possible this is a bug in Google’s systems or simply a “dance,” as some have phrased it, going on with rankings. Whatever is happening, the fluctuations we’ve seen seem too extreme for Google to stay quiet for long.

Author : John Caiozzo

Source : http://www.business2community.com/seo/warning-possible-google-algorithm-update-blame-massive-losses-traffic-01796047#5e4GOHIZkAxjudRr.97

Categorized in Search Engine
