Was there a major Google algorithm change this week? Many webmasters believe so.

 

Earlier this month, we reported on significant chatter around a Google algorithm update. Well, it looks like we have another update to report this week.

On Tuesday of this week, there were some early signals of a Google update. Those signals intensified on Thursday and seem to be getting stronger day by day.

In short, the webmaster and SEO community is confident that there was an algorithm change in the Google organic search results this week. Not only are the SEO forums and communities discussing it, but tracking tools such as Mozcast, Accuranker, RankRanger, and others have also shown significant fluctuations in organic rankings in Google.

Google’s PR team wouldn’t directly comment. Instead, they pointed to a tweet by John Mueller from Google: “nothing specific, sorry — we’re always working to improve things!” That was in response to questions about an algorithm update. John also said this morning on Twitter that these are normal fluctuations.

In any event, it seems this is not directly related to the Google Penguin update we are all anxiously awaiting.

 

Source : http://searchengineland.com/google-downplays-google-algorithm-ranking-update-week-normal-fluctuations-258923


Brands grapple daily with the best social media marketing strategy for their objectives. As technology and audience preferences change, achieving successful marketing transformation is like trying to hit a moving target.

Creating compelling and effective content for today’s social media landscape isn’t always easy, but Facebook and Google have been working hard lately to determine what audiences want in their news feeds, their search results, and even their advertisements. That research was aimed at increasing their own ad revenue, but enterprise marketers can also benefit tremendously from the findings. Here are the highlights.

Learn the Meaning of Quality

Facebook launched a beta program that allowed some of its users to rate the quality of content in their News Feeds, and the criterion was how informative the content was. The aim of this metric and experiment was to eliminate misleading “clickbait” from the user experience—thus reducing the number of times users clicked out of their News Feeds only to be disappointed by what they found. Facebook seeks to eliminate such content while focusing on posts shared by friends and family, as opposed to brands or celebrity personalities, which has only upped the ante for brands’ content teams. Now, more than ever, content must be compelling enough to stand on its own instead of relying on catchy headlines or brand status updates.

But what makes a piece of content compelling? To get to the bottom of that question, Facebook developed its Feed Quality program, which involves a huge panel of users who, according to USA Today, rate posts in their News Feeds on a scale of one (“really not informative”) to five (“really informative”). USA Today reported:

The Feed Quality Program surveys the opinions of tens of thousands of people a day, Facebook says. From there, Facebook developed a methodology—a ranking signal combined with how relevant the story might be to you personally—to predict which of the posts would most interest individual users, taking into account their relationship to the person or publisher and what they typically choose to click on, comment on, or share.
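To make that methodology concrete, here is a minimal sketch in Python of how a predicted informativeness rating on the one-to-five survey scale might be blended with a personal-relevance signal to order a feed. The weights, field names, and sample data are illustrative assumptions, not Facebook's actual model.

```python
# A minimal sketch of the idea described above: combine a predicted
# "informativeness" rating (the 1-5 survey scale) with a personal-relevance
# score to rank candidate posts. All numbers and names are invented.

def rank_posts(posts, user):
    """Return posts ordered by a blended quality/relevance score."""
    def score(post):
        # Predicted survey rating, normalized from the 1-5 scale to 0-1.
        informativeness = (post["predicted_rating"] - 1) / 4
        # Simple personalization proxies: relationship strength and
        # how often the user engages with this publisher.
        relationship = user["affinity"].get(post["publisher"], 0.0)
        engagement = user["engagement_rate"].get(post["publisher"], 0.0)
        relevance = 0.6 * relationship + 0.4 * engagement
        # Blend the two signals; the 50/50 split is an arbitrary choice.
        return 0.5 * informativeness + 0.5 * relevance
    return sorted(posts, key=score, reverse=True)


if __name__ == "__main__":
    user = {
        "affinity": {"friend_jane": 0.9, "brand_x": 0.2},
        "engagement_rate": {"friend_jane": 0.7, "brand_x": 0.1},
    }
    posts = [
        {"publisher": "brand_x", "predicted_rating": 4.5},
        {"publisher": "friend_jane", "predicted_rating": 3.0},
    ]
    # friend_jane ranks above brand_x despite the brand's higher predicted rating.
    for p in rank_posts(posts, user):
        print(p["publisher"])
```

Even in this toy version, the post from a friend outranks the brand post with the catchier (higher-rated) content, which is the behavior the article describes.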


Stay Relevant

Google recently released its Search Quality Rating Guidelines document in its entirety, which fully explains the methods behind the search giant’s madness. While poring through its 160 pages might not directly lead to marketing transformation within your business, the transparency and key takeaways both help us understand what it takes to remain relevant in News Feeds and search results alike.

As we already know, Google likes to see that websites are authoritative and trustworthy, and high-quality, frequently shared content goes a long way towards creating an internet “paper trail” that demonstrates those very things. Just like Facebook tweaked its algorithms to favor content that’s shared by friends, Google views cross-links and shares by real, active users as endorsements for a page’s quality.

To stay relevant, brands need to create high-quality, shareable content that real people find valuable and want to show their friends. That will show Google that your webpage is worth displaying when people are searching for a topic you’re an expert in. This might sound like SEO 101, but in the ever-changing landscape of social network algorithms, revamped search engines, and social media marketing strategies, it’s important to identify exactly how things are working today. The good news? It’s not as mysterious as it might seem.

Learn from the Best

We’re witnessing a major pivot in the internet marketing timeline. Buzzfeed built a billion-dollar business starting with clickbait titles and listicles, but it has since upgraded its business model to become a diverse and credible media outlet. Brands that once saw success by emulating the headline-heavy, substance-light content that Buzzfeed employed are now realizing that as algorithms and audiences alike get wise to their ways, the quality of their content is more important than ever. And even though Facebook and Google’s experiments were both aimed at making quotidian user experience better in hopes of keeping eyeballs on their pages longer (all the better to sell you ads, my reader), we also know how people really feel about interrupt advertising. If your social media marketing strategy still revolves around branded pages or purchased bandwidth, it may be time to reevaluate and take a hint from the internet’s heaviest hitters.

There’s no question that Facebook and Google influence the tides of the internet as powerfully as the moon moves our oceans—so learning from their revamped algorithms and rating systems is a great way to position your brand’s content for success. While that might sound like it requires an alchemical understanding of some computer hidden deep in Silicon Valley, it’s actually much simpler. Write high-quality content that people you know might like to read and share, and the results will follow.

If there’s anything we can learn from the Feed Quality program and Search Quality Rating Guidelines, it’s that Facebook and Google are just trying to make their computers better at understanding what people actually want to see. So don’t stress about pleasing computers—focus on pleasing people, and the results will speak for themselves.

Source : http://www.skyword.com/contentstandard/creativity/the-latest-social-media-marketing-lessons-from-facebook-and-google-algorithms/

 

THE SAGA OF Facebook Trending Topics never seems to end—and it drives us nuts.

First, Gizmodo said that biased human curators hired by Facebook—not just automated algorithms—were deciding what news stories showed up as Trending Topics on the company’s social network, before sprucing them up with fresh headlines and descriptions. Then a US Senator demanded an explanation from Facebook because Gizmodo said those biased humans were suppressing conservative stories. So, eventually, Facebook jettisoned the human curators so that Trending Topics would be “more automated.” Then people complained that the more algorithmically driven system chose a fake story about Fox News anchor Megyn Kelly as a Trending Topic.

Don’t get us wrong. The Facebook Trending Topics deserve scrutiny. They’re a prominent source of news on a social network that serves over 1.7 billion people. But one important issue was lost among all the weird twists and turns—and the weird way the tech press covered those twists and turns. What everyone seems incapable of realizing is that everything on the Internet is run by a mix of automation and humanity. That’s just how things work. And here’s the key problem: prior to Gizmodo’s piece, Facebook seemed to imply that Trending Topics was just a transparent looking glass into what was most popular on the social network.

Yes, everything on the Internet is a mix of the human and inhuman. Automated algorithms play a very big role in some services, like, say, the Google Search Engine. But humans play a role in these services too. Humans whitelist and blacklist sites on the Google Search Engine. They make what you might think of as manual decisions, in part because today’s algorithms are so flawed. What’s more—and this is just stating what should be obvious—humans write the algorithms. That’s not insignificant. What it means is that algorithms carry human biases. They carry the biases of the people who write them and the companies those people work for. Algorithms drive the Google Search Engine, but the European Union is still investigating whether Google—meaning: the humans at Google—instilled this search engine with a bias in favor of other Google services and against competing services.

“We have to let go of the idea that there are no humans,” says Tarleton Gillespie, a principal researcher at Microsoft Research who focuses on the impact of social media on public discourse. That’s worth remembering when you think about the Facebook Trending Topics. Heck, it’s worth repeating over and over and over again.

Facebook’s ‘Crappy’ Algorithm

Jonathan Koren worked on the technology behind the Facebook Trending Topics. The bottom line, says the former Facebook engineer, is that the algorithm is “crappy.” As he puts it, this automated system “finds ‘lunch’ every day at noon.” That’s not the indictment you may think it is. The truth is that so many of today’s computer algorithms are crappy—though companies and coders are always working to improve them. And because they’re crappy, they need help from humans.

That’s why Facebook hired those news curators. “Identifying true news versus satire and outright fabrication is hard—something computers don’t do well,” Koren says. “If you want to ship a product today, you hire some curators and the problem goes away. Otherwise, you fund a research project that may or may not meet human equivalence, and you don’t have a product until it does.” This is a natural thing for Facebook or any other Internet company to do. For years, Facebook, Twitter, and other social networks used humans to remove or flag lewd and horrific content on their platforms.

So, Koren and about five or six other engineers ran a Trending Topics algorithm at Facebook headquarters in Menlo Park, California, and across the country in New York, news curators filtered and edited the algorithm’s output. According to Gizmodo, they also “injected” stories that in some cases weren’t trending at all. (A leaked document obtained by The Guardian, however, showed Facebook guidelines said a topic had to appear in at least one tool before it could be considered for the Trending module.) The setup made sense, though Koren says he privately thought that the humans involved were overqualified. “It always struck me as a waste to have people with real journalism degrees essentially surfing the web,” he says.

Trending versus ‘Trending’

When it looked like Gizmodo’s story was finally blowing over, Facebook got rid of its journalist news curators—then it promptly had to deal with the fake Megyn Kelly story. People blamed the more algorithmically driven system, but Facebook said all along that humans would still play a role—and they did. A human working for Facebook still approved the hoax topic over that weekend, something many people probably don’t realize. But they were outraged that Facebook’s review system, now without a single journalist employed, let a fake story slip through.

 

Koren says the whole thing was “a bit overblown.” And that’s an understatement. From where he was sitting, “there wasn’t someone within the company going ‘bwahaha’ and killing conservative news stories.” But even if there was an anti-conservative bias, this is the kind of thing that happens on any web service, whether it’s Google or Amazon or The New York Times or WIRED. That’s because humans are biased. And that means companies are biased too. Don’t buy the argument? Well, some people want fake stories about Megyn Kelly, just because they’re what everyone is talking about or just because they’re funny.

The issue is whether Facebook misrepresented Trending Topics. Prior to the Gizmodo article, a Facebook help page read: “Trending shows you topics that have recently become popular on Facebook. The topics you see are based on a number of factors including engagement, timeliness, Pages you’ve liked, and your location.” It didn’t mention curators or the possibility that the system allowed a story to be added manually. We could deconstruct the language on that help page. But that seems silly. Algorithms don’t exist in a vacuum. They require humans. Besides, Facebook has now changed the description. “Our team is responsible for reviewing trending topics to ensure that they reflect real world events,” it says.

What we will say is that Facebook—like everyone else—needs to be more aware of the realities at work here. Koren says that Facebook’s relationship to the broader issues behind Trending Topics was characterized by a kind of “benign obliviousness.” It was just focused on making its product better. The folks building the algorithm didn’t really talk to the curators in New York. Well, however benign its obliviousness may be, Facebook shouldn’t be oblivious. Given its power to influence our society, it should work to ensure that people understand how its services work and, indeed, that they understand how the Internet works.

What’s important here is getting the world to realize that human intervention is status quo on the Internet, and Facebook is responsible for the misconceptions that persist. But so is Google—especially Google. And so is the tech press. They’ve spent years feeding the notion that the Internet is entirely automated. Though it doesn’t operate that way, people want it to. When someone implies that it does, people are apt to believe that it does. “There’s a desire to treat algorithms as if they’re standalone technical objects, because they offer us this sense of finally not having to worry about human subjectivity, error, or personal bias—things we’ve worried about for years,” says Gillespie.

 

Humans Forever

Sorry, folks, algorithms don’t give us that. Certainly, algorithms are getting better. With the rise of deep neural networks—artificially intelligent systems that learn tasks by analyzing vast amounts of data—humans are playing a smaller role in what algorithms ultimately deliver. But they still play a role. They build the neural networks. They decide what data the neural nets train on. They still decide when to whitelist and blacklist. Neural nets work alongside so many other services.

Besides, deep neural networks only work well in certain situations—at least today. They can recognize photos. They can identify spoken words. They help choose search results on Google. But they can’t run the entire Google search engine. And they can’t run the Trending Topics on Facebook. Like Google, Facebook is at the forefront of deep learning research. If it could off-load Trending Topics onto a neural net, it would.

But the bigger point is that even neural nets carry human biases. All algorithms do. Sure, you can build an algorithm that generates Trending Topics solely based on the traffic stories are getting. But then people would complain because it would turn up fake stories about Megyn Kelly. You have to filter the stream. And once you start to filter the stream, you make human judgments—whether humans are manually editing material or not. The tech press (including WIRED) is clamoring for Twitter to deal with harassment on its social network. If it does, it can use humans to intervene, build algorithms, or use a combination of both. But one thing is certain: those algorithms will carry bias. After all: What is harassment? There is no mathematical answer.
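To see why, consider a toy version of such a system, sketched in Python with made-up numbers and a made-up blocklist (not Facebook's code): even a “purely traffic-based” trending list needs a threshold and a filter, and both of those are human judgments.

```python
from collections import Counter

# A toy illustration of the point above: start with a purely traffic-based
# trending list, then filter it. The blocklist and threshold are invented
# for the example; the moment they exist, human judgment is in the loop.

CURATED_BLOCKLIST = {"fake megyn kelly story"}   # decided by people
MIN_SHARES = 1000                                # also decided by people

def trending_topics(share_events, limit=5):
    """Return the most-shared topics that survive the human-defined filters."""
    counts = Counter(share_events)
    ranked = [
        topic for topic, shares in counts.most_common()
        if shares >= MIN_SHARES and topic not in CURATED_BLOCKLIST
    ]
    return ranked[:limit]

# Example: the hoax story has the most raw traffic but never surfaces.
events = (["fake megyn kelly story"] * 5000
          + ["election debate"] * 3000
          + ["local sports final"] * 1200)
print(trending_topics(events))   # ['election debate', 'local sports final']
```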

Like Twitter, Facebook is a powerful thing. It has a responsibility to think long and hard about what it shows and what it doesn’t show. It must answer to widespread public complaints about the choices it makes. It must be open and honest about how it makes these choices. But this humans versus algorithms debate is a bit ridiculous. “We’ll never get away from the bias question,” Gillespie says. “We just bury them inside of systems, but apply it much more widely, and at much bigger scale.”

Source : http://www.wired.com/2016/09/facebook-trending-humans-versus-algorithms/

 


What is the Google RankBrain algorithm update all about & how does it work? How does this machine learning Artificial Intelligence (AI) affect SEO? In my previous article on facts and myths of Artificial Intelligence, I wrote about Strong AI and Super AI. I said that they may take time before they arrive; that it’s just a matter of time before someone cracks how to make a machine think like humans. I also said corporates would be glad to fund such projects if they promise better profits. For one, Google now has a “brain” that works well and it is called Google RankBrain. It may not be able to think yet but who knows the future! What surprised me was a comment from a Google executive saying they can’t understand what Google RankBrain AI is doing.

What is Google RankBrain AI

AI stands for Artificial Intelligence, and I will be using the acronym here to keep it easy. Before proceeding to the part where we will talk about Google not being able to understand what its own creation is doing, this section introduces RankBrain AI search to readers who don’t know about search engine algorithms.

Search Engines like Google depend on hundreds of factors to bring the best possible results for anything you enter in the search box. Earlier, they were dumb and focused just on keywords. But keywords themselves can be ambiguous. For example, people can search for “explain top of the food chain”. This can easily confuse a search engine into assuming that the person searching is asking something about food chains like restaurants, so it gives them a list of top restaurants in the area.

But the person is actually searching for the name of the animal at the top of the food chain, the top carnivore. The food chain starts with single-celled organisms, goes on to plants, then herbivorous animals, carnivorous animals, and humans, and ends with a predator at the top.

Google and other search engines store plenty of information on their servers so that they can provide you with the results you want. For that, they check many factors. So far, no artificial intelligence was involved. Among the hundreds of factors were ‘items in bold’, ‘headings’, ‘subheadings’, ‘repetition of a word or phrase’, and many such things.
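As a rough illustration of that keyword-only era, here is a toy scorer in Python that counts keyword repetition and matches in headings and bold text. The weights and sample pages are invented for illustration, not any real ranking formula; fed the “top of the food chain” query, it happily favors a restaurant page.

```python
# A toy keyword-based scorer in the spirit of the factors listed above
# (keyword repetition, headings, bold text). The weights are invented.

def keyword_score(query, page):
    terms = query.lower().split()
    body = page["body"].lower()
    score = 0.0
    for term in terms:
        score += 1.0 * body.count(term)                              # repetition
        score += 3.0 * sum(term in h.lower() for h in page["headings"])
        score += 2.0 * sum(term in b.lower() for b in page["bold"])
    return score

# The query "top of the food chain" scores a restaurant listicle highest
# simply because it repeats "top" and "food" -- the confusion described above.
pages = [
    {"name": "restaurant-list",
     "body": "Top 10 food spots in town. The best food, top rated.",
     "headings": ["Top food near you"], "bold": ["top food"]},
    {"name": "ecology-article",
     "body": "Apex predators sit at the highest trophic level of the food chain.",
     "headings": ["Food chains explained"], "bold": ["apex predator"]},
]
query = "top of the food chain"
for page in sorted(pages, key=lambda p: keyword_score(query, p), reverse=True):
    print(page["name"], keyword_score(query, page))
```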

If the person searching on Google typed irrelevant things into the search box, the results were always garbage. The first principle of machines is that if you feed them garbage, they’ll give out garbage. You may search for GIGO (garbage in, garbage out) for examples of this principle.

To tackle such situations, Google kept making changes to its search algorithms and then quietly included RankBrain in them sometime in 2015. It kept this a secret until recently. At an event held in March, Google acknowledged that its engineers do not fully know how the thing works. That admission does send out worrying signals.

RankBrain is part of Google’s Hummingbird search algorithm and is said to be the third-most important signal – the first probably being the quality of backlinks. It will soon change the way SEO works.

Here is what the Google RankBrain AI search algorithm does, according to what I could grasp from my research. Instead of focusing on each search initiated, it focuses on the entire search session. Normally, to get proper results and to narrow things down, many researchers use synonyms and words related to what they are searching for. As in the above example, one may use “topmost consumer in the food chain” and “what’s the highest level of food chain called”. He or she may use more keywords depending upon what the person wants to know.

So as the searches progress in the session, from the first search to the nth search, Google RankBrain AI will start presenting more and more relevant pages to the researcher. These may include pages that do not even contain the keywords but provide more related information about the same topic.
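As a rough sketch of that idea (and only a sketch, since Google has not published how RankBrain actually works), the Python snippet below maps the queries in a session to toy meaning vectors, averages them, and ranks pages by cosine similarity, so a page about the apex predator can outrank a restaurant listicle even when the exact keywords don’t match. The tiny embedding table is invented for illustration.

```python
import math

# Session-aware, meaning-based matching in miniature (not Google's actual
# RankBrain). The hand-made vectors below stand in for learned embeddings.

EMBEDDINGS = {
    "food chain":       [0.9, 0.1, 0.0],
    "apex predator":    [0.8, 0.2, 0.0],
    "topmost consumer": [0.85, 0.15, 0.0],
    "restaurant":       [0.0, 0.1, 0.9],
}

def embed(text):
    """Average the vectors of known phrases found in the text."""
    vecs = [v for phrase, v in EMBEDDINGS.items() if phrase in text.lower()]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# All queries from the session contribute to one "intent" vector.
session = ["explain top of the food chain", "topmost consumer in the food chain"]
session_vector = embed(" ".join(session))

pages = {
    "apex-predator-guide": "The apex predator sits at the top of the food chain.",
    "best-restaurants":    "Top restaurant picks for food lovers in your area.",
}
ranked = sorted(pages.items(),
                key=lambda kv: cosine(session_vector, embed(kv[1])),
                reverse=True)
for name, text in ranked:
    print(name, round(cosine(session_vector, embed(text)), 3))
```

In this toy run, the apex-predator page wins even though it never contains the literal query, which is the behavior described above.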

How does Google RankBrain work?


Here comes the problem: the creators of the RankBrain AI themselves do not fully understand how it works. Since it is limited to search, it is not a scary situation. But imagine creating a similar thing in a domain related to weapons. What are the odds of a machine growing mature enough to take its own stand against its creators? What if we create AI-based robots for the army, mass-produce them, and something goes wrong that turns them against their own generals? It doesn’t look right. The chances are 50:50 – a good amount of risk.

At an event called SMX, Google’s Paul Haahr, who goes by the handle @haahr on Twitter, shared many interesting things about the algorithm and acknowledged that the Google engineers who work on RankBrain don’t fully know how it works. Either Haahr was not willing to share information, or the creators really don’t know much about their creation.

If the latter is the case, it should ring some alarm bells. Many scholars have already raised fears about AI and the fast-growing research in the domain. They have petitioned governments to stop funding projects leading to strong and super AI.

Google RankBrain AI is just the beginning!

Source : http://www.thewindowsclub.com/google-rankbrain 


It’s almost impossible to see any meaningful search engine optimization (SEO) results without spending some time building and honing your inbound link profile. Of the two main deciding factors for site rankings (relevance and authority), one (authority) is largely dependent on the quantity and quality of links pointing to a given page or domain.

As most people know, Google’s undergone some major overhauls in the past decade, changing its SERP layout, offering advanced voice-search functionality and significantly revising its ranking processes. But even though its evaluation of link quality has changed, links have been the main point of authority determination for most of Google’s existence.

Why is Google so dependent on link metrics for its ranking calculations, and how much longer will links be so important?

The concept of PageRank

To understand the motivation here, we have to look back at the first iteration of PageRank, the signature algorithm of Google Search, named after co-founder Larry Page. It uses the presence and quality of links pointing to a site to gauge that site’s authoritativeness.

Let’s say there are 10 sites, labeled A through J. Every site links to site A, and most sites link to site B, but the other sites don’t have any links pointing to them. In this simple model, site A would be far likelier to rank for a relevant query than any other site, with site B as a runner-up.

[Figure: links pointing to sites A and B]

But let’s say there are two more sites that enter the fray, sites K and L. Site L is linked to from sites C, D and E, which don’t have much authority, but site K is linked to from site A, which has lots of authority. Even though site K has fewer links, the higher authority link matters more — and might propel site K to a similar position as site A or B.

[Figure: link authority chart]
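For readers who want to see the mechanics, here is a compact Python sketch of the original PageRank idea as power iteration over a toy link graph modeled on the example above. The graph, the iteration count, and the classic 0.85 damping factor are illustrative simplifications; Google’s production system is far more elaborate.

```python
# Simplified PageRank via power iteration over a toy graph in the spirit of
# the example above (a handful of sites linking to A, B, K, and L).

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:            # dangling page: spread its rank evenly
                share = damping * rank[page] / n
                for p in pages:
                    new_rank[p] += share
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Sites C-E link to L; the highly ranked site A links to K.
links = {
    "A": ["K"], "B": [], "C": ["A", "B", "L"], "D": ["A", "B", "L"],
    "E": ["A", "B", "L"], "F": ["A", "B"], "G": ["A", "B"], "H": ["A"],
    "I": ["A"], "J": ["A"], "K": [], "L": [],
}
# K's single link from the authoritative site A lifts it to roughly A's level,
# while L's three weak links leave it well behind.
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```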

The big flaw

PageRank was designed to be a natural way to gauge authority based on what neutral third parties think of various sites; over time, in a closed system, the most authoritative and trustworthy sites would rise to the top.

The big flaw is that this isn’t a closed system; as soon as webmasters learned about PageRank, they began cooking up schemes to manipulate their own site authority, such as creating link wheels and developing software that could automatically acquire links on hundreds or thousands of unsuspecting websites at the push of a button. This undermined Google’s intentions and forced them to develop a series of checks and balances.

Increasing phases of sophistication

Over the years, Google has cracked down hard on such rank manipulators, first punishing the most egregious offenders by blacklisting or penalizing anyone participating in a known link scheme. From there, they moved on to more subtle developments that simply refined the processes Google used to evaluate link-based authority in the first place.

One of the most significant developments was Google Penguin, which overhauled the quality standards Google set for links. Using more advanced judgments, Google could now determine whether a link appeared “natural” or “manipulative,” forcing link-building tactics to shift while not really overhauling the fundamental idea behind PageRank.

Other indications of authority

Of course, links aren’t the only factor responsible for determining a domain or page’s overall authority. Google also takes the quality of on-site content into consideration, thanks in part to the sophisticated Panda update that rewards sites with “high-quality” (well-researched, articulate, valuable) content.

The functionality of your site, including its mobile-friendliness and the availability of content to different devices and browsers, can also affect your rankings. But it’s all these factors together that determine your authority, and links are still a big part of the overall mix.

Modern link building and the state of the web

Today, link building must prioritize the perception of “naturalness” and value to the users encountering those links. That’s why link building largely exists in two forms: link attraction and manual link building.

Link attraction is the process of creating and promoting valuable content in the hope that readers will naturally link to it on their own, while manual link building is the process of placing links on high-authority sources. Even though marketers are, by definition, manipulating their rankings whenever they do anything known to improve their rankings, there are still checks and balances in place that keep these tactics in line with Google’s Webmaster Guidelines.

Link attraction tactics won’t attract any links unless the content is worthy of those links, and manual link-building tactics won’t result in any links unless the content is good enough to pass a third-party editorial review.

The only sustainable, ongoing manual link-building strategy I recommend is guest blogging, the process by which marketers develop relationships with editors of external publications, pitch stories to them, and then submit those stories in the hope of having them published. Once published, these stories achieve myriad benefits for the marketer, along with (usually) a link.

Could something (such as social signals) replace links?

Link significance and PageRank have been the foundation for Google’s evaluation of authority for most of Google’s existence, so the big question is: could anything ever replace these evaluation metrics?

More user-centric factors could be a hypothetical replacement, such as traffic numbers or engagement rates, but user behavior is too variable and may be a poor indication of true authority. It also eliminates the relative authority of each action that’s currently present in link evaluation (i.e., some users wouldn’t be more authoritative than others).

Peripheral factors like content quality and site performance could also grow in their significance to overtake links as a primary indicator. The challenge here is determining algorithmically whether content is high-quality or not without using links as a factor in that calculation.

Four years ago, Matt Cutts squelched that notion, stating at SMX Advanced 2012, “I wouldn’t write the epitaph for links just yet.” Years later, in a Google Webmaster Video from February 2014, a user asked if there was a version of Google that excludes backlinks as a ranking factor. Cutts responded:

We have run experiments like that internally, and the quality looks much, much worse. It turns out backlinks, even though there’s some noise and certainly a lot of spam, for the most part, are still a really, really big win in terms of quality of our search results. So we’ve played around with the idea of turning off backlink relevance, and at least for now, backlink relevance still really helps in terms of making sure that we return the best, most relevant, most topical set of search results.

The safe bet is that links aren’t going anywhere anytime soon. They’re too integrated as a part of the web and too important to Google’s current ranking algorithm to be the basis of a major overhaul. They may evolve over the next several years, but if so, it’ll certainly be gradual, so keep link building as a central component of your SEO and content marketing strategy.

Source : http://searchengineland.com/links-still-core-authority-signal-googles-algorithm-255452
