
Source: This article was originally published on makeuseof.com by Dan Price - Contributed by Member: Carol R. Venuti

Of course, most social networks have their own search engines built in, but they’re fundamentally limited by the fact that they can only search their own databases. And how are you supposed to know whether Aunt Mary is on Facebook, Google Plus, or one of the other myriad options?

The solution? Use a network-agnostic social search engine. They can search all the most common networks, as well as lots of the niche, smaller ones.

If you need a social search engine, you’ve come to the right place. Here are six options for you to consider.

1. Pipl

Pipl offers a vast database of online accounts – almost three billion are accessible through its search algorithms.

The search engine doesn’t only scan social media networks. It also scans a list of both personal and work emails, deep web archives such as court records, news reports, and publicly available government lists.


To use the tool, enter the person’s name, email address, or social media username into the search box. If you wish, you can also enter a location. Click on the magnifying glass icon to start the search.


The results page will show you hits from across the site’s various databases. You can use the filters on the left-hand side of the screen to narrow the results by location and age.

Twitter itself also allows you to search for tweets by location.

2. Social Mention

Social Mention is both a social search engine and a way to aggregate user-generated content across a number of networks into a single feed. It helps you search for phrases, events, and mentions, but it won’t let you find individual people.

The site supports more than 80 social networks, including Twitter, Facebook, YouTube, Digg, Google Plus, and Instagram. It can also scan blogs, bookmarks, and even comments.


In the left-hand panel of the results page, you’ll see an abundance of data about the phrases you entered. You can find out how frequently the phrase is mentioned, see a list of associated keywords and hashtags, identify top users, and more.

On the right-hand side of the screen you’ll find links for exporting data into a CSV file, and along the top of the screen are various filter options.

3. snitch.name

The snitch.name site is one of the easiest on this list to use.

The site has several advantages over a regular search query on Google. For example, many social networks are either not indexed by Google or only have very limited indexing. It also prioritizes “people pages,” whereas a regular Google search will also return results for posts mentioning the person, associated hashtags, and other content.


Obviously, even after running a search, some profiles may remain restricted, depending on the user’s privacy settings. However, as long as you can access the account through your own social media account, you will be able to access the listing on snitch.name.

To use the site, fire up the homepage, enter your search terms, and mark the checkboxes next to the networks you want to scan. When you’re ready, click Search.

4. Social-Searcher

Social-Searcher is another web app that works across a broad array of social networks and other platforms.

You can use the site without making an account. Non-registered users can search Twitter, Google Plus, Facebook, YouTube, Instagram, Tumblr, Reddit, Flickr, Dailymotion, and Vimeo. You can also save your searches and set up email alerts.


If you need a more powerful solution, you should consider signing up for one of the paid plans. For $3.50 per month, you get 200 searches per day, three email alerts, three keyword monitors, and space for up to 3,000 saved posts. The top-level plan, which costs $20 per month, increases the limits even further.

5. Social-Searcher: Google Social Search

The team responsible for the previously mentioned Social-Searcher has also developed a Google Social Search tool.

It works with six networks. They are Facebook, Twitter, Google Plus, Instagram, LinkedIn, and Pinterest. You can mark the checkboxes next to the networks’ logos to limit your search to particular sites.


The usual Google search tricks apply. For example, putting quotation marks around a set of words will force Google to only return results with an exact match, adding a minus sign will exclude specific words from the results, and typing OR between words will let you roll several terms into one search.
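To make those operators concrete, here is a small set of hypothetical example queries (the operators are the standard Google syntax described above; the specific queries are just illustrations):

```python
# Illustrative query strings only; these use the standard operators described
# above and are not features specific to the Google Social Search tool.
example_queries = [
    '"social media search"',     # quotation marks force an exact-match phrase
    'social search -facebook',   # a minus sign excludes a word from the results
    'pipl OR "social mention"',  # OR rolls several alternatives into one search
]

for query in example_queries:
    print(query)
```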

Results are sorted by networks, and you can click on Web or Images to toggle between the different media.

6. Buzzsumo

Buzzsumo takes a slightly different approach to the tools we have mentioned so far. It specializes in searching for trends and keyword performance.

That makes it an ideal tool for businesses; they can find out what content is going to have the biggest impact when they share it, as well as gain insight into the words and phrases their competitors are using.

On the results page, you can use the panel on the left-hand side of the screen to create filters. Date, content type, language, country, and even word counts are searchable parameters.


On the right-hand side of the page, you can see how successful each post was. Analytics for Facebook, LinkedIn, Twitter, and Pinterest are shown, as are the total number of shares.

Free users can only see the top 10 results; you will need a Pro account for $79 per month to unlock more. It’s probably too much money for individual users, but for businesses the cost is negligible.

Which Social Media Search Engines Do You Use?

In this article, we have introduced you to six of the best social media search engines. Each of them focuses on a different type of user and presents its results in a different way. If you use them all, you should be able to quickly find the topic, person, trend, or keyword you’re looking for.

Published in Search Engine

Source: This article was published on searchengineland.com by Barry Schwartz - Contributed by Member: Clara Johnson

Some SEOs are seeing more fluctuations with the Google rankings now, but Google has confirmed the August 1 update has been fully rolled out.

Google has just confirmed that the core search algorithm update that began rolling out a week ago has now finished fully rolling out. Google search liaison Danny Sullivan said on Twitter, “It’s done” when I asked him if the rollout was complete.

Danny did add that if we are seeing other changes, “We always have changes that happen, both broad and more specific.” This matters because some of the tracking tools are showing more fluctuations today, and if those fluctuations are unrelated to this update, the question is what they can be attributed to.

Here is Danny’s tweet:

@dannysullivan is the rollout of the core update complete? Seeing fluctuations today.

It's done. That said, we always have changes that happen, both broad and more specific.

Based on our research, the August 1 update was one of the more significant updates we have seen from Google on the organic search side in some time. It continued to roll out over the weekend and has now completed.

Google’s current advice on this update is that webmasters do not need to make any technical changes to their websites. In fact, the company said, “no fix” is required and that it is aimed at promoting sites that were once undervalued. Google has said that you should continue to look at ways of making your overall website better and provide even better-quality content and experiences to your website visitors.

Now that the rollout is complete, you can check to see if your site was impacted. But as Danny Sullivan said above, there are always changes happening in search.

 

Published in Search Engine

Source: This article was published on forbes.com by Jayson DeMers - Contributed by Member: William A. Woods

Some search optimizers like to complain that “Google is always changing things.” In reality, that’s only a half-truth; Google is always coming out with new updates to improve its search results, but the fundamentals of SEO have remained the same for more than 15 years. Only some of those updates have truly “changed the game,” and for the most part, those updates are positive (even though they cause some major short-term headaches for optimizers).

Today, I’ll turn my attention to semantic search, a search engine improvement that came along in 2013 in the form of the Hummingbird update. At the time, it sent the SERPs into a somewhat chaotic frenzy of changes but introduced semantic search, which transformed SEO for the better—both for users and for marketers.

What Is Semantic Search?

I’ll start with a brief primer on what semantic search actually is, in case you aren’t familiar. The so-called Hummingbird update came out back in 2013 and introduced a new way for Google to consider user-submitted queries. Up until that point, the search engine was built heavily on keyword interpretation; Google would look at specific sequences of words in a user’s query, then find matches for those keyword sequences in pages on the internet.

Search optimizers built their strategies around this tendency by targeting specific keyword sequences, and using them, verbatim, on as many pages as possible (while trying to seem relevant in accordance with Panda’s content requirements).

Hummingbird changed this. Now, instead of finding exact matches for keywords, Google looks at the language used by a searcher and analyzes the searcher’s intent. It then uses that intent to find the most relevant results. It’s a subtle distinction, but one that demanded a new approach to SEO; rather than focusing on specific, exact-match keywords, you had to start creating content that addressed a user’s needs, using more semantic phrases and synonyms for your primary targets.
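As a toy illustration of that distinction (this is not Google’s method, just a sketch with an invented synonym table), compare a literal keyword match against a crude synonym-aware match:

```python
# Toy contrast between exact keyword matching and a crude "semantic" match
# via synonym expansion. The synonym table is invented for illustration.
SYNONYMS = {
    "cheap": {"cheap", "affordable", "budget"},
    "laptop": {"laptop", "notebook"},
}

def exact_match(query, page_text):
    words = page_text.split()
    return all(term in words for term in query.split())

def semantic_match(query, page_text):
    words = set(page_text.split())
    return all(words & SYNONYMS.get(term, {term}) for term in query.split())

page = "our budget notebook range starts at 300 dollars"
print(exact_match("cheap laptop", page))     # False: no literal keyword match
print(semantic_match("cheap laptop", page))  # True: synonyms satisfy the intent
```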

Voice Search and Ongoing Improvements

Of course, since then, there’s been an explosion in voice search—driven by Google’s improved ability to recognize spoken words, its improved search results, and the increased need for voice searches with mobile devices. That, in turn, has fueled even more advances in semantic search sophistication.

One of the biggest advancements, an update called RankBrain, utilizes an artificial intelligence (AI) algorithm to better understand the complex queries that everyday searchers use, and provide more helpful search results.

Why It's Better for Searchers

So why is this approach better for searchers?

  • Intuitiveness. Most of us have already taken for granted how intuitive searching is these days; if you ask a question, Google will have an answer for you—and probably an accurate one, even if your question doesn’t use the right terminology, isn’t spelled correctly, or dances around the main thing you’re trying to ask. A decade ago, effective search required you to carefully calculate which search terms to use, and even then, you might not find what you were looking for.
  • High-quality results. SERPs are now loaded with high-quality content related to your original query—and oftentimes, a direct answer to your question. Rich answers are growing in frequency, in part to meet the rising utility of semantic search, and it’s giving users faster, more relevant answers (which encourages even more search use on a daily basis).
  • Content encouragement. The nature of semantic search forces search optimizers and webmasters to spend more time researching topics to write about and developing high-quality content that’s going to serve search users’ needs. That means there’s a bigger pool of content developers than ever before, and they’re working harder to churn out readable, practical, and in-demand content for public consumption.

Why It's Better for Optimizers

The benefits aren’t just for searchers, though—I’d argue there are just as many benefits for those of us in the SEO community (even if it was an annoying update to adjust to at first):

  • Less pressure on keywords. Keyword research has been one of the most important parts of the SEO process since search first became popular, and it’s still important to gauge the popularity of various search queries—but it isn’t as make-or-break as it used to be. You no longer have to ensure you have exact-match keywords at exactly the right ratio in exactly the right number of pages (an outdated concept known as keyword density); in many cases, merely writing about the general topic is incidentally enough to make your page relevant for your target.
  • Value optimization. Search optimizers now get to spend more time optimizing their content for user value, rather than keyword targeting. Semantic search makes it harder to accurately predict and track how keywords are specifically searched for (and ranked for), so we can, instead, spend that effort on making things better for our core users.
  • Wiggle room. Semantic search considers synonyms and alternative wordings just as much as it considers exact match text, which means we have far more flexibility in our content. We might even end up optimizing for long-tail phrases we hadn’t considered before.

The SEO community is better off focusing on semantic search optimization, rather than keyword-specific optimization. It’s forcing content producers to produce better, more user-serving content, and relieving some of the pressure of keyword research (which at times is downright annoying).

Take this time to revisit your keyword selection and content strategies, and see if you can’t capitalize on these contextual queries even further within your content marketing strategy.

Published in Search Engine

Source: This article was published on bizjournals.com by Sheila Kloefkorn - Contributed by Member: Anthony Frank

You may have heard about Google’s mobile-first indexing. Since nearly 60 percent of all searches are mobile, it makes sense that Google would give preference to mobile-optimized content in its search results pages.

Are your website and online content ready? If not, you stand to lose search-engine rankings and your website may not rank in the future.

Here is how to determine if you need help with Google’s mobile-first algorithm update:

What is mobile-first indexing?

Google creates an index of website pages and content to facilitate each search query. Mobile-first indexing means the mobile version of your website will weigh heavier in importance for Google’s indexing algorithm. Mobile-responsive, fast-loading content is given preference in first-page SERP rankings.

Mobile first doesn’t mean Google only indexes mobile sites. If your company does not have a mobile-friendly version, you will still get indexed, but your content will be ranked below mobile-friendly content. Websites with a great mobile experience will receive better search-engine rankings than desktop-only versions. Think about how many times you scroll to the second page of search results. Likely, not very often. That is why having mobile-optimized content is so important.

How to determine if you need help

If you want to make sure you position your company to take advantage of mobile indexing as it rolls out, consider whether you can manage the following tasks on your own or if you need help:

  • Check your site: Take advantage of Google’s test site to see if your site needs help.
  • Mobile page speed: Make sure you enhance mobile page speed and load times. Mobile optimized content should load in 2 seconds or less. You want images and other elements optimized to render well on mobile devices.
  • Content: You want high-quality, relevant and informative mobile-optimized content on your site. Include text, videos, images and more that are crawlable and indexable.
  • Structured data: Use the same structured data on both desktop and mobile pages, and use the mobile version of your URLs in the structured data served on mobile pages (a minimal example follows this list).
  • Metadata: Make sure your metadata such as titles and meta descriptions for all pages is updated.
  • XML and media sitemaps: Make sure your mobile version can access any links to sitemaps. Include robots.txt and meta-robots tags and include trust signals like links to your company’s privacy policy.
  • App index: If you use app indexing for your website, verify that the mobile version of your site is correctly referenced in your app association files.
  • Server capacity: Make sure your hosting servers have the capacity needed to handle both mobile and desktop crawls.
  • Google Search Console: If you use Google Search Console, make sure you add and verify your mobile site as well.
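As a minimal sketch of the structured-data point above (the schema.org Article markup and URLs are generic placeholders, not a prescription for your site), the same JSON-LD can be generated for the mobile page with the mobile URL swapped in:

```python
import json

# Hypothetical schema.org Article markup; the mobile page serves the same
# structured data as desktop, but references the mobile version of the URL.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example article headline",
    "url": "https://m.example.com/example-article",  # mobile version of the URL
    "datePublished": "2018-06-01",
    "author": {"@type": "Person", "name": "Example Author"},
}

# This string would go inside a <script type="application/ld+json"> tag.
print(json.dumps(article_markup, indent=2))
```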

What if you do not have a mobile site or mobile-optimized content?

If you have in-house resources to upgrade your website for mobile, the sooner you can implement the updates, the better.

If not, reach out to a full-service digital marketing agency like ours, which can help you update your website so that it can continue to compete. Without a mobile-optimized website, your content will not rank as well as websites with mobile-friendly content.

Published in Search Engine

Source: This article was published on helpnetsecurity.com - Contributed by Member: Corey Parker

Ben-Gurion University of the Negev and University of Washington researchers have developed a new generic method to detect fake accounts on most types of social networks, including Facebook and Twitter.

According to their new study in Social Network Analysis and Mining, the new method is based on the assumption that fake accounts tend to establish improbable links to other users in the networks.

“With recent disturbing news about failures to safeguard user privacy, and targeted use of social media by Russia to influence elections, rooting out fake users has never been of greater importance,” explains Dima Kagan, lead researcher and a researcher in the BGU Department of Software and Information Systems Engineering.

“We tested our algorithm on simulated and real-world datasets on 10 different social networks and it performed well on both.”

The algorithm consists of two main iterations based on machine-learning algorithms. The first constructs a link prediction classifier that can estimate, with high accuracy, the probability of a link existing between two users.

The second iteration generates a new set of meta-features based on the features created by the link prediction classifier. Lastly, the researchers used these meta-features and constructed a generic classifier that can detect fake profiles in a variety of online social networks.
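As a minimal sketch of that two-stage idea (not the researchers’ actual pipeline; the features, edge counts, and labels below are random placeholders), a link prediction classifier can be trained first, and its predicted probabilities then aggregated into meta-features for a per-account classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stage 1: a link prediction classifier trained on pairwise graph features
# (e.g. common neighbors, Jaccard similarity). Random placeholders stand in
# for real graph measurements here.
pair_features = rng.random((1000, 4))
link_exists = rng.integers(0, 2, 1000)
link_model = RandomForestClassifier(n_estimators=50, random_state=0)
link_model.fit(pair_features, link_exists)

# Stage 2: for each account, aggregate the predicted link probabilities of its
# edges into meta-features (mean, min, std), then classify the account itself.
def account_meta_features(edge_features):
    probs = link_model.predict_proba(edge_features)[:, 1]
    return [probs.mean(), probs.min(), probs.std()]

accounts = [rng.random((20, 4)) for _ in range(200)]  # 20 candidate edges each
meta_features = np.array([account_meta_features(a) for a in accounts])
is_fake = rng.integers(0, 2, 200)                     # placeholder labels
account_model = RandomForestClassifier(n_estimators=50, random_state=0)
account_model.fit(meta_features, is_fake)

print(account_model.predict(meta_features[:5]))
```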


“Overall, the results demonstrated that in a real-life friendship scenario we can detect people who have the strongest friendship ties as well as malicious users, even on Twitter,” the researchers say. “Our method outperforms other anomaly detection methods and we believe that it has considerable potential for a wide range of applications particularly in the cyber-security arena.”

Other researchers who contributed are Dr. Michael Fire of the University of Washington (former Ben-Gurion U. doctoral student) and Prof. Yuval Elovici, director of Cyber@BGU and a member of the BGU Department of Software and Information Systems Engineering.

The Ben-Gurion University researchers previously developed the Social Privacy Protector (SPP) Facebook app, which helps users evaluate their friends list in seconds and identify which friends have few or no mutual links and might be “fake” profiles.

Published in Social

A new book shows how Google’s search algorithms quietly reinforce racist stereotypes.

Are search engines making us more racist?

According to Safiya Umoja Noble, a professor of communication at the University of Southern California, the answer is almost certainly yes.

Noble’s new book, Algorithms of Oppression: How Search Engines Reinforce Racism, challenges the idea that search engines like Google provide a level playing field for all ideas, values, and identities. She says they’re inherently discriminatory and favor the groups that designed them, as well as the companies that fund them.

This isn’t a trivial topic, especially in a world where people get more information from search engines than they do from teachers or libraries. For Noble, Google is not just telling people what they want to know but also determining what’s worth knowing in the first place.

I reached out to Noble last week to find out what she had learned about the unseen factors driving these algorithms, and what the consequences of ignoring them might be.

A lightly edited transcript of our conversation follows.

Sean Illing

What are you arguing in this book?

Safiya Umoja Noble

I’m arguing that large, multinational advertising platforms online are not trusted, credible public information portals. Most people think of Google and search engines in particular as a public library, or as a trusted place where they can get accurate information about the world. I provide a lot of examples to show the incredible influence of advertising dollars on the kinds of things we find, and I show how certain people and communities get misrepresented in service of making money on these platforms.

Sean Illing

Who gets misrepresented and how?

Safiya Umoja Noble

I started the book several years ago by doing collective searches on keywords around different community identities. I did searches on “black girls,” “Asian girls,” and “Latina girls” online and found that pornography was the primary way they were represented on the first page of search results. That doesn’t seem to be a very fair or credible representation of women of color in the United States. It reduces them to sexualized objects.

So that begs the question: What’s going on in these search engines? What are the well-funded, well-capitalized industries behind them who are purchasing keywords and using their influence to represent people and ideas in this way? The book was my attempt to answer these questions.

Sean Illing

Okay, so at the time you did this research, if someone went to Google and searched for “black women,” they would get a bunch of pornography. What happens if they type in “white girls” or “white women”? Or if they search for what should be a universal category, like “beautiful people”?

Safiya Umoja Noble

Now, fortunately, Google has responded to this. They suppressed a lot of porn, in part because we’ve been speaking out about this for six or seven years. But if you go to Google today and search for “Asian girls” or “Latina girls,” you’ll still find the hypersexualized content.

For a long time, if you did an image search on the word “beautiful,” you would get scantily clad images of almost exclusively white women in bikinis or lingerie. The representations were overwhelmingly white women.

People often ask what happens when you search “white girls.” White women don’t typically identify as white; they just think of themselves as girls or women or individuals. I think what you see there is the gaze of people of color looking at white women and girls and naming whiteness as an identity, which is something that you don’t typically see white women doing themselves.

Sean Illing

These search algorithms aren’t merely selecting what information we’re exposed to; they’re cementing assumptions about what information is worth knowing in the first place. That might be the most insidious part of this.

Safiya Umoja Noble

There is a dominant male, Western-centric point of view that gets encoded into the organization of information. You have to remember that an algorithm is just an automated decision tree. If these keywords are present, then a variety of assumptions have to be made about what to point to in all the trillions of pages that exist on the web.

And those decisions always correlate to the relationship of advertisers to the platform. Google has a huge empire called AdWords, and people bid in a real-time auction to optimize their content.

That model — of information going to the highest bidder — will always privilege people who have the most resources. And that means that people who don’t have a lot of resources, like children, will never be able to fully control the ways in which they’re represented, given the logic and mechanisms of how search engines work.

Sean Illing

In the book, you talk about how racist websites gamed search engines to control the narrative around Martin Luther King Jr. so that if you searched for MLK, you’d find links to white supremacist propaganda. You also talk about the stakes involved here and point to Dylann Roof as an example.

Safiya Umoja Noble

In his manifesto, Dylann Roof has a diatribe against people of color, and he says that the first event that truly awakened him was the Trayvon Martin story. He says he went to Google and did a search on “black-on-white crime.” Now, most of us know that black-on-white crime is not an American epidemic — that, in fact, most crime happens within a community. But that’s a separate discussion.

So Roof goes to Google and puts in a white nationalist red herring (“black-on-white crime.”) And of course, it immediately takes him to white supremacist websites, which in turn take him down a racist rabbit hole of conspiracy and misinformation. Often, these racist websites are designed to appear credible and benign, in part because that helps them game the algorithms, but also because it convinces a lot of people that the information is truthful.

This is how Roof gets radicalized. He says he learns about the “true history of America,” and about the “race problem” and the “Jewish problem.” He learns that everything he’s ever been taught in school is a lie. And then he says, in his own words, that this makes him research more and more, which we can only imagine is online, and this leads to his “racial awareness.”

And now we know that shortly thereafter, he steps into the “Mother” Emanuel AME Church in Charleston, South Carolina, and murders nine African-American worshippers in cold blood, in order to start a race war.

So the ideas that people are encountering online really matter. It matters that Dylann Roof didn’t see the FBI statistics that tell the truth about how crime works in America. It matters that he didn’t get any counterpoints. It matters that people like him are pushed in these directions without resistance or context.

Sean Illing

My guess is that these algorithms weren’t designed to produce this effect, but I honestly don’t know. What is driving the decision-making process? Is this purely about commercial interests?

Safiya Umoja Noble

It’s difficult to know exactly what Google’s priorities are, because Google’s search algorithm is proprietary, so no one can really make sense of the algorithm except by looking at the output. All of us who study this do it by looking at the end results, and then we try to reverse-engineer it as best we can.

But yes, it’s pretty clear that what’s ultimately driving tech companies like Google is profit. I don’t imagine that a bunch of racists is sitting around a table at Google thinking of ways to create a racist product. What happens, however, is that engineers simply don’t think about the social consequences of their work. They’re designing technologies for society, and they know nothing about society.

In its own marketing materials, Google says there are over 200 different factors that go into deciding what type of content they surface. I’m sure they have their own measures of relevance for what they think people want. Of course, they’re also using predictive technologies, like autosuggestion, where they fill in the blank. They’re doing that based on what other people have looked at or clicked on in the past.

Sean Illing

But the autosuggestion tool guarantees that majority perspectives will be consistently privileged over others, right?

Safiya Umoja Noble

Right. People who are a numerical minority in society will never be able to use this kind of “majority rules” logic to their benefit. The majority will always be able to control the notions of what’s important, or what’s important to click on, and that’s not how the information landscape ought to work.

Sean Illing

I’m sure some people will counter and say that these are essentially neutral platforms, and if they’re biased, they’re biased because of the human users that make them up. In other words, the problem isn’t the platform; it’s the people.

Safiya Umoja Noble

The platform exists because it’s made by people. It didn’t come down from an alien spacecraft. It’s made by human beings, and the people who make it are biased, and they code their biases into search. How can these things not inform their judgment?

So it’s disingenuous to suggest that the platform just exists unto itself and that the only people who can manipulate it or influence it are the people who use it when actually, the makers of the platform are the primary source of responsibility. I would say that there are makers, as well as users, of a platform. They have to take responsibility for their creations.

Source: This article was published on vox.com by Sean Illing

Published in Search Engine

Google has confirmed rumors that a search algorithm update took place on Monday. Some sites may have seen their rankings improve, while others may have seen negative or zero change.

Google has posted on Twitter that it released a “broad core algorithm update” this past Monday. Google said it “routinely” does updates “throughout the year” and referenced the communication from the previous core update.

Google explained that core search updates happen “several times per year” and that while “some sites may note drops or gains,” there is nothing specific a site can do to tweak its rankings around these updates. In general, Google says to continue to improve your overall site quality, and the next time Google runs these updates, hopefully, your website will be rewarded.

Google explained that “pages that were previously under-rewarded” would see a benefit from these core updates.

Here is the statement Google previously made about this type of update:

Each day, Google usually releases one or more changes designed to improve our results. Some are focused around specific improvements. Some are broad changes. Last week, we released a broad core algorithm update. We do these routinely several times per year.

As with any update, some sites may note drops or gains. There’s nothing wrong with pages that may now perform less well. Instead, it’s that changes to our systems are benefiting pages that were previously under-rewarded.

There’s no “fix” for pages that may perform less well, other than to remain focused on building great content. Over time, it may be that your content may rise relative to other pages.

Here is Google’s confirmation from today about the update on Monday:

[Screenshot: Google’s tweet confirming Monday’s update]

 

Source: This article was published on searchengineland.com by Barry Schwartz

Published in Search Engine

Google is the dominating force in the world of search engines, and there’s an entire industry dedicated to maximizing visibility within its search engine results: search engine optimization (SEO).

People like me have built their careers on finding ways to benefit from the central ranking algorithm at Google’s core. But here’s the interesting thing: Google doesn’t explicitly publish how its search algorithm works and often uses vague language when describing its updates.

So how much do we really know about Google’s ranking algorithm? And why is Google so secretive about it?

Why Google Keeps Things Secret

Google has come under fire lately, most recently by German Chancellor Angela Merkel, because it keeps its algorithm secret. Her main argument is that transparency is vitally important to maintaining a balanced society; after all, our daily searches shape our behavior in subtle and blatant ways, and not knowing the mechanisms that influence that behavior can leave us in the dark.

But Google isn’t withholding its algorithm so that it can manipulate people with reckless abandon. There are two good reasons why the company would want to keep the information a closely-guarded secret.

First, Google’s algorithm is proprietary, and it has become the dominant search competitor because of its sheer sophistication. If other competitors have free and open access to the inner workings of that algorithm, they could easily introduce a competing platform with comparable power, and Google’s search share could unfairly plummet.

Second, there are already millions of people who make a living by improving their positions within Google, and many of them are willing to use ethically questionable tactics or spam people in an effort to get more search visibility. If Google fully publishes its search algorithm, they could easily find bigger loopholes, and ruin the relatively fair search engine results pages (SERPs) we’ve come to expect from the giant.

How We Learn

So if Google withholds all the information on its algorithm, how can search optimizers know how to improve the search rankings of web pages?

  • Google revelations. Google doesn’t leave webmasters totally in the dark. While it refuses to disclose specifics about how the algorithm functions, it’s pretty open about the general intentions of the algorithm, and what webmasters can take away from it. For example, Google has published and regularly updates a guidelines manual on search quality ratings; 160 pages long, and last updated July of last year, it’s a fairly comprehensive guidebook that explains general concepts of how Google judges the quality of a given page. Google has also been known to explain its updates as they roll out—especially the larger ones—with a short summary and a list of action items for webmasters. These are all incredibly helpful sources of information.
  • Direct research. Google doesn’t give us everything, however. If you scroll through Moz’s fairly comprehensive guide on the history of Google’s algorithm changes, you’ll notice dozens of small updates that Google didn’t formally announce, and in many cases, refuses to acknowledge. How does the search community know that these algorithm changes unfolded? We have volatility indicators like MozCast, which measure how much the SERPs are changing within a given period of time; a period of high volatility is usually the signature of some kind of algorithm change (a minimal sketch of this idea follows this list). We can also conduct experiments, such as using two different tactics on two different pages and seeing which one ranks higher at the end of the experiment period. And because the SEO community is pretty open about sharing this information, one experiment is all it takes to give the whole community more experience and knowledge.
  • Experience and intuition. Finally, after several years of making changes and tracking growth patterns, you can rely a bit on your own experience and intuition. When search traffic plummets, you can usually identify a handful of potential red flags and come up with ideas for tweaks to take you back to your baseline.
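Here is a minimal sketch of the volatility idea mentioned in the list above: compare two days of rankings for the same keywords and report the average position change. The keywords, URLs, and numbers are invented, and real trackers like MozCast work at a much larger scale.

```python
# Two days of (invented) top results for the same keywords.
serps_day1 = {"best laptop": ["a.com", "b.com", "c.com", "d.com"],
              "seo tips":    ["x.com", "y.com", "z.com", "w.com"]}
serps_day2 = {"best laptop": ["b.com", "a.com", "e.com", "d.com"],
              "seo tips":    ["x.com", "z.com", "y.com", "w.com"]}

def volatility(before, after):
    """Average positional change per tracked result; higher means a noisier day."""
    total_change, count = 0, 0
    for keyword, old_results in before.items():
        new_results = after[keyword]
        for position, url in enumerate(old_results):
            # A URL that dropped out entirely gets the maximum penalty.
            new_position = new_results.index(url) if url in new_results else len(new_results)
            total_change += abs(position - new_position)
            count += 1
    return total_change / count

print(round(volatility(serps_day1, serps_day2), 2))  # 0.75 for this toy data
```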

What Do We Know?

So what do we really know about Google’s search algorithm?

  • The basics. We know the basic concept behind the search platform: to give users the best possible results for their queries. Google does this by presenting results that offer a combination of relevance (how appropriate the topic is) and authority (how trustworthy the source is).
  • Core ranking factors. We also know the core ranking factors that will influence your rank. Some of these come directly from Google’s webmaster guidelines, and some of them come from the results of major experiments. In any case, we have a good idea what changes are necessary to earn a high rank, and what factors could stand in your way. I covered 101 of them here.
  • Algorithm extensions and updates. We also know when there’s a new Google update, thanks to the volatility indicators, and we can almost always form a reasonable conclusion on the update’s purpose—even when Google doesn’t tell us directly.

While we still don’t know the specifics of how Google’s algorithm works—and unless the EU’s transparency campaign kicks into high gear soon, we probably won’t for the foreseeable future—we do know enough about it to make meaningful changes to our sites, and maximize our ranking potential.

Moreover, the general philosophy behind the algorithm and the basic strategies needed to take advantage of it aren’t hard to learn. If you’re willing to read Google’s documentation and learn from the experiments of others, you can get up to speed in a matter of weeks.

Source: This article was published on forbes.com by Jayson DeMers

Published in Search Engine

HIGHLIGHTS

  • Google Search finds quality of newsy content algorithmically
  • Search results to omit fake news through improved ranking signals
  • India marks 2x growth in daily active search users on Google

Google Search already receives some artificial intelligence (AI) tweaks to enhance user experience. But with the swift growth of inferior-quality content, Google is now in the process of improving the quality of its search results. Speaking on the sidelines of Google for India 2017 on Tuesday, VP of Engineering Shashidhar Thakur stated that Google is making continuous efforts to cut down on the amount of fake news content listed on its search engine.

"Whether it's in India or internationally, we make sure that we uphold a high bar when it comes to the quality of newsy content. Generally, in search, we find this type of content algorithmically," Thakur told Gadgets 360. The algorithms deployed behind Google Search look for the authoritativeness of the content and its quality to rank them appropriately. Thakur said that this continuous improvement will uplift the quality of the search results over time.

"We improve ranking signals on our search engine from time to time to overcome the issue of fake news. Signals help the system understand a query or the language of the query or the text or matching different keywords to provide relevant results," explained Thakur.

Similar to other search engines that use code-based bots to crawl different webpages, Google Search consistently indexes hundreds of billions of webpages. Once indexed, Google Search adds webpages to index entries that include all the words available on those pages. This data is then fed into the Knowledge Graph, which not only looks for particular keywords but also factors in user interests to give relevant results.
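As a toy sketch of the indexing idea described above (the pages and words are made up, and production systems are vastly more elaborate), an inverted index maps each word to the set of pages that contain it, and a query is answered by intersecting those sets:

```python
from collections import defaultdict

# Toy corpus standing in for crawled webpages.
pages = {
    "page1": "google search index quality content",
    "page2": "fake news content on social networks",
    "page3": "google improves ranking signals for news",
}

# Build the inverted index: word -> set of pages containing it.
inverted_index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        inverted_index[word].add(url)

def lookup(query):
    """Return the pages containing every word of the query."""
    postings = [inverted_index.get(word, set()) for word in query.split()]
    return set.intersection(*postings) if postings else set()

print(lookup("google news"))  # {'page3'}
```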


"Inferior-quality content on the Web isn't a new and special problem," Thakur said. "But certainly, it is a problem that we need to solve by continuous tuning and making the underlying search algorithms better. This is indeed a very crucial area of focus for us."

Google isn't the only Web company that is taking the menace of fake news seriously. Facebook and Microsoft's Bing are also testing new developments to curb fake news. A recent report by Gartner predicted that fake news will grow multifold by 2022 and that people in mature economies will consume more false information than information that is true and fair.

Having said that, Google dominates the Web space, and its search engine is the most prominent target for counterfeit content. On the Google for India stage, Thakur revealed that the number of daily active search users in India has doubled in the last year. The Mountain View, California-headquartered company also released Google Go as a lightweight version of the flagship Google app on Android devices.

 

Source: This article was published on gadgets.ndtv.com by Jagmeet Singh

 

Published in Search Engine

Editor’s note: This post is part of an ongoing series looking back at the history of Google algorithm updates. Enjoy!


Google’s Freshness, or “fresher results”, update – as the name suggests – was a significant ranking algorithm change, building on the Caffeine update, which rolled out in June 2010.

When Google announced an algorithm change on November 3, 2011, impacting ~35 percent of total searches (6-10 percent of search results to a noticeable degree), focusing on providing the user with ‘fresher, more recent search results‘, the SEO industry and content marketers alike stood up and took notice.

Where Does the Name Come From?

The freshness or ‘fresher results’ name for this algorithm update is directly taken from the official Google Inside Search blog announcement.

[Screenshot: Google Inside Search blog announcement of the Freshness update, November 2011]

Why Was the Freshness Update Launched?

It is predicted that more data will be created in 2017 than in the previous 5,000 years of humanity, a trend which has been ongoing for a few years now, and one driving Google to cater to this availability of, and demand for, up-to-date, fresh, new content.

When you combine this data and content growth with the volume of new and unique queries Google handles, you begin to establish the justification for identifying, handling, prioritizing, and ranking fresh content within the Google search index.

According to a 2012 ReadWrite article, 16 to 20 percent of queries that get asked every day have never been asked before.

A key intention of this update is to place greater emphasis on the recentness of content tied to areas like the latest news, events, politics, celebrities, and trends, specifically where the user is expected to want the most current information.

Someone searching for “Taylor Swift boyfriend” will likely want to know who she is currently dating. Therefore, content time/date stamped yesterday, with lots of social shares, engagement, and backlinks over the past few hours, will likely displace previously ranking content which has not been updated or which is not providing the same freshness signals.

Here are the results for this query as of the time of writing this article.

[Screenshot: Taylor Swift SERPs, October 2017]
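As a toy illustration of the freshness logic in the example above (the weighting, half-life, and numbers are invented, not Google’s formula), recent engagement can be discounted by content age so that a fresh, heavily shared page outscores a stale one:

```python
import math

def freshness_score(age_hours, recent_shares, recent_links, half_life_hours=24):
    """Toy score: recent engagement signals discounted by the age of the content."""
    decay = math.exp(-math.log(2) * age_hours / half_life_hours)
    return (recent_shares + 5 * recent_links) * decay

# A day-old article with heavy recent sharing vs. a stale month-old page.
print(round(freshness_score(age_hours=24, recent_shares=800, recent_links=40), 1))  # 500.0
print(round(freshness_score(age_hours=720, recent_shares=50, recent_links=5), 1))   # 0.0
```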

Who Was Impacted by Freshness Algorithm?

At a noticeable level, between 6 to 10 percent of search queries were impacted by the Freshness algorithm, but some degree of change was applied to a collective third (35 percent) of all searches.

One of the interesting aspects of the Freshness Algorithm update was the fact that many more sites appeared to have gained from the update than to have lost rankings or visibility. This is quite uncommon for changes to the Google algorithm.

Looking specifically at the identified “winners” from the update, according to Searchmetrics:

Google prefers sites like news sites, broadcast sites, video portals and a lot Brand sites. This is also a type of sites which have regularly fresh content and a big brand with higher CTRs.

Industry Reaction to the Freshness Update

Because the update was an overarching positive change, one rewarding content creators, providers of fresh, relevant, and up-to-the-minute news, and many bigger brands investing in content, the initial reaction centered on analysis of the change and its logical nature.

That analysis focused on the gap between the expected “big” impact implied by Google’s announcement that 35 percent of search results would be affected, and the disproportionately small amount of negative impact actually being reported.

The Solution/Recovery Process

The Freshness update is one of my favorite Google algorithm updates, as it makes perfect sense and changed the SERPs for the better in a logical, easy to understand, and practical way.

If you’re covering a topic area and the information you have is out of date, time-sensitive, hasn’t been refreshed or updated in some time, or is simply being surpassed by more engaging, fresh and new competing content, it is likely that you need to give that content/topic some more attention, both on page and off page.

An important part of the freshness update is that it is not just about refreshing content, but also tied to the frequency of content related to the topic.

For example, during a political campaign spanning weeks, the content expected to rank prominently would reflect the latest campaign developments, rather than static (even day-old) content whose relevancy, accuracy, and associated user engagement and social sharing signals have since been surpassed.

This update was building on Google’s established “query deserves freshness” (QDF) methodology:

THE QDF solution revolves around determining whether a topic is “hot.” If news sites or blog posts are actively writing about a topic, the model figures that it is one for which users are more likely to want current information. The model also examines Google’s own stream of billions of search queries, which Mr. Singhal believes is an even better monitor of global enthusiasm about a particular subject.
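A minimal sketch of the “hot topic” test the quote describes might compare recent query volume for a topic against its baseline; the daily counts and threshold below are invented for illustration:

```python
# Invented daily query counts for one topic over the past week, newest last.
daily_queries = [120, 130, 118, 125, 122, 540, 610]

def deserves_freshness(counts, spike_factor=3.0):
    """Flag a topic as 'hot' when its recent query volume far exceeds its baseline."""
    baseline = sum(counts[:-2]) / len(counts[:-2])
    recent = sum(counts[-2:]) / 2
    return recent > spike_factor * baseline

print(deserves_freshness(daily_queries))  # True: queries for the topic have spiked
```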

It also was made possible by Google’s Caffeine web search index update:

With Caffeine, we analyze the web in small portions and update our search index on a continuous basis, globally. As we find new pages, or new information on existing pages, we can add these straight to the index. That means you can find fresher information than ever before—no matter when or where it was published.

Practical Tactics for Recovering from the Freshness Algorithm

Five of the best ways to recover from any lost ranking (or to take advantage of the new untapped opportunity) as a result of the Freshness Algorithm change include:

1. Revisit Existing Content

Look through year-on-year, or even previous-period, content performance. Identify pages/topics that previously drove volumes of impressions, traffic, and rankings to the website, and prioritize refreshing them.

You may find that time- and date-stamped content in blogs, news, and media sections has seen significant drops. If this is the case, consider the value of updating the historical content by citing new sources, updating statistics, including more current quotes, and adding terms reflecting the latest search queries.

2. Socially Share & Amplify Content

Social signals, fresh link signals, and associated external interest/buzz surrounding your content can fuel ranking gains tied to QDF and previous algorithm updates like the Freshness update.

Don’t underestimate the value of successful social sharing and PR activities driving new content discovery, engagement, and interaction.

3. Reconsider Content Frequency

If your website covers industry change, key events, and any degree of breaking news/insight, you may need to think about the frequency that you are informing your audience, and adding content to your website.

People are digesting more content than ever before, and users demand the latest news as it happens – minor frequency changes can make the difference between being first to market and being late to the party.

4. Take a Tiered Approach to Content Creation 

With voice, video, images, virtual reality, and a host of other content types, plus common website inclusion approaches (blogs, news, media, content hubs, microsites, and more), adding layers of content to your digital offering will enable broader visibility of the brand in key ranking areas, plus extra leverage of the various search verticals at your disposal.

Whether these updates result in new landing pages or add depth and content value to existing URLs will depend on intent, but either way, this will support many of the freshness points relating to recovery or gains tied to this update.

5. Add Evergreen Content Into Your Content Mix 

Evergreen content is deeper content that stands the test of time and is able to perform month in and month out, contributing to search rankings and traffic over many months, even years. Typically, evergreen content reflects:

  • Thorough topical research.
  • Unique insight.
  • Targeted application of expertise on a given topic.
  • Refined content that gets updated every few months when changes require modification.
  • Longer form content (often several thousand words long).
  • Mixed content type inclusive.

You may see this as your hero content pieces, those warranting budget, promotion, and reinvestment of time and resource.

How Successful was the Freshness Algorithm Update?

Although the Freshness Algorithm change isn’t frequently mentioned in many industry topical conversations and often gets overshadowed by the likes of Penguin, Panda, Hummingbird, Mobile First, RankBrain, and others, to me, this reinforces the level of success it had.

When you look at time-intent queries like [football results], you will notice that the dominant sites are providing:

  • Live scores
  • In-game updates
  • Latest results
  • Interactive scoreboards
  • Current fixtures
  • Much more

These useful and changing (often changing by the hour) results reflect the practical benefits that this update has had to our search experience, and the opportunity this brings to value-based companies, able to act on the latest data.

Freshness Myths & Misconceptions

The biggest misconception related to this algorithm update was the anticipated negative impact, given the scale of results (~35 percent) to which Google Freshness would apply.

As this was one of the more positive and practical algorithm changes, the Freshness update has been overlooked by many, playing the role of unsung auditor of tired, unloved content that needs improving, and of active content able to satisfy searcher needs and rank for more time-sensitive user intent.

Source: This article was published on searchengineland.com by Lee Wilson

Published in Search Engine