
Google’s John Mueller revealed that the search engine’s algorithms do not punish keyword stuffing too harshly.

In fact, keyword stuffing may be ignored altogether if the content is found to otherwise have value to searchers.

This information was provided on Twitter in response to users inquiring about keyword stuffing. More specifically, a user was concerned about a page ranking well in search results despite obvious signs of keyword repetition.

Prefacing his statement with the suggestion to focus on one’s own content rather than someone else’s, Mueller goes on to say that there are over 200 factors used to rank pages and “the nice part is that you don’t have to get them all perfect.”

When the excessive keyword repetition was further criticized by another user, Mueller said this practice shouldn’t result in a page being removed from search results, and “boring keyword stuffing” may be ignored altogether.


Mueller’s full reply reads:

“Yeah, but if we can ignore boring keyword stuffing (this was popular in the 90’s; search engines have a lot of practice here), there’s sometimes still enough value to be found elsewhere. I don’t know the page, but IMO keyword stuffing shouldn’t result in removal from the index.”

There are several takeaways from this exchange:

  • An SEO’s time is better spent improving their own content, rather than trying to figure out why other content is ranking higher.
  • Excessive keyword stuffing will not result in a page being removed from indexing.
  • Google may overlook keyword stuffing if the content has value otherwise.
  • Use of keywords is only one of over 200 ranking factors.

Overall, it’s probably not a good idea to overuse keywords because it arguably makes the content less enjoyable to read. However, keyword repetition will not hurt a piece of content when it comes to ranking in search results.

Source: This article was published on searchenginejournal.com by Matt Southern


Some search optimizers like to complain that “Google is always changing things.” In reality, that’s only a half-truth; Google is always coming out with new updates to improve its search results, but the fundamentals of SEO have remained the same for more than 15 years. Only some of those updates have truly “changed the game,” and for the most part, those updates are positive (even though they cause some major short-term headaches for optimizers).

Today, I’ll turn my attention to semantic search, a search engine improvement that came along in 2013 in the form of the Hummingbird update. At the time, it sent the SERPs into a somewhat chaotic frenzy of changes but introduced semantic search, which transformed SEO for the better—both for users and for marketers.

What Is Semantic Search?

I’ll start with a brief primer on what semantic search actually is, in case you aren’t familiar. The so-called Hummingbird update came out back in 2013 and introduced a new way for Google to consider user-submitted queries. Up until that point, the search engine was built heavily on keyword interpretation; Google would look at specific sequences of words in a user’s query, then find matches for those keyword sequences in pages on the internet.

Search optimizers built their strategies around this tendency by targeting specific keyword sequences, and using them, verbatim, on as many pages as possible (while trying to seem relevant in accordance with Panda’s content requirements).

Hummingbird changed this. Now, instead of finding exact matches for keywords, Google looks at the language used by a searcher, analyzes the searcher’s intent, and uses that intent to find the most relevant results. It’s a subtle distinction, but one that demanded a new approach to SEO; rather than focusing on specific, exact-match keywords, you had to start creating content that addressed a user’s needs, using more semantic phrases and synonyms for your primary targets.

Voice Search and Ongoing Improvements

Of course, since then, there’s been an explosion in voice search—driven by Google’s improved ability to recognize spoken words, its improved search results, and the increased use of voice search on mobile devices. That, in turn, has fueled even more advances in semantic search sophistication.

One of the biggest advancements, an update called RankBrain, utilizes an artificial intelligence (AI) algorithm to better understand the complex queries that everyday searchers use, and provide more helpful search results.

Why It's Better for Searchers

So why is this approach better for searchers?

  • Intuitiveness. Most of us have already taken for granted how intuitive searching is these days; if you ask a question, Google will have an answer for you—and probably an accurate one, even if your question doesn’t use the right terminology, isn’t spelled correctly, or dances around the main thing you’re trying to ask. A decade ago, effective search required you to carefully calculate which search terms to use, and even then, you might not find what you were looking for.
  • High-quality results. SERPs are now loaded with high-quality content related to your original query—and oftentimes, a direct answer to your question. Rich answers are growing in frequency, in part to meet the rising utility of semantic search, and it’s giving users faster, more relevant answers (which encourages even more search use on a daily basis).
  • Content encouragement. The nature of semantic search forces search optimizers and webmasters to spend more time researching topics to write about and developing high-quality content that’s going to serve search users’ needs. That means there’s a bigger pool of content developers than ever before, and they’re working harder to churn out readable, practical, and in-demand content for public consumption.

Why It's Better for Optimizers

The benefits aren’t just for searchers, though—I’d argue there are just as many benefits for those of us in the SEO community (even if it was an annoying update to adjust to at first):

  • Less pressure on keywords. Keyword research has been one of the most important parts of the SEO process since search first became popular, and it’s still important to gauge the popularity of various search queries—but it isn’t as make-or-break as it used to be. You no longer have to ensure you have exact-match keywords at exactly the right ratio in exactly the right number of pages (an outdated concept known as keyword density); in many cases, merely writing about the general topic is incidentally enough to make your page relevant for your target.
  • Value Optimization. Search optimizers now get to spend more time optimizing their content for user value, rather than keyword targeting. Semantic search makes it harder to accurately predict and track how keywords are specifically searched for (and ranked for), so we can, instead, spend that effort on making things better for our core users.
  • Wiggle room. Semantic search considers synonyms and alternative wordings just as much as it considers exact match text, which means we have far more flexibility in our content. We might even end up optimizing for long-tail phrases we hadn’t considered before.

The SEO community is better off focusing on semantic search optimization, rather than keyword-specific optimization. It’s forcing content producers to produce better, more user-serving content, and relieving some of the pressure of keyword research (which at times is downright annoying).

Take this time to revisit your keyword selection and content strategies, and see if you can’t capitalize on these contextual queries even further within your content marketing strategy.

Source: This article was published on forbes.com by Jayson DeMers


Several years after killing off prayer time results, Google has brought the feature back for some regions.

The prayer times can be triggered for some queries that seem to be asking for that information and also include geographic designators, such as [prayer times mecca], where Islamic prayer times are relevant. It’s possible that queries without a specific location term, but conducted from one of those locations, would also trigger the prayer times, but we weren’t able to test that functionality.

A Google spokesperson told Search Engine Land “coinciding with Ramadan, we launched this feature in a number of predominantly Islamic countries to make it easier to find prayer times for locally popular queries.”

“We continue to explore ways we can help people around the world find information about their preferred religious rituals and celebrations,” Google added.

Here is a screenshot of prayer times done on desktop search:

Google gives you the ability to customize the calculation method used to figure out when the prayer times are in that region. Depending on your religious observance, you may prefer one method over another. Here are the available Islamic prayer time calculation methods that Google offers:

Not all queries return this response, and some may return featured snippets as opposed to this specific prayer times box. So please do not be confused when you see a featured snippet versus a prayer-time one-box.

This is what a featured snippet looks like in comparison to the image above:

The most noticeable way to tell this isn’t a real prayer-times box is that you cannot change the calculation method in the featured snippet. In my opinion, it would make sense for Google to remove the featured snippets for prayer times so searchers aren’t confused. Since featured snippets may be delayed, they probably aren’t trustworthy responses for those who rely on these prayer times. Smart answers are immediate and are calculated by Google directly.

Back in 2011, Google launched prayer times rich snippets, but about a year later, Google killed off the feature. Now, Google has deployed this new approach without using markup or schema; instead, Google does the calculation internally without depending on third-party resources or websites.

Source: This article was published on searchengineland.com by Barry Schwartz


Google’s John Mueller revealed that the company is looking into simplifying the process of adding multiple properties to Search Console.

Currently, site owners are required to add multiple versions of the same domain separately. That means individually adding the WWW, non-WWW, HTTP, and HTTPS versions and verifying each one.

A simplified process would involve adding just the root of a website to Search Console, and then Google would automatically add all different versions to the same listing.

This is a topic that came up during a recent Google Webmaster Central hangout. A site owner was looking for confirmation that it’s still necessary to add the WWW and non-WWW versions of a domain to Search Console.

Mueller confirmed that it is still required for the time being. However, Google is looking into ways to make the process easier. The company is even open to hearing ideas from webmasters about how to do this.

The full response from Mueller is as follows:

“We’re currently looking into ways to make that process a little bit easier.

So we’ll probably ask around for input from, I don’t know, on Twitter or somewhere else, to see what your ideas are there. Where basically you just add the root of your website and then we automatically include the dub-dub-dub, non-dub-dub-dub, HTTP, HTTPS versions in the same listing. So that you have all of the data in one place.

Maybe it would even make sense to include subdomains there. I don’t know, we’d probably like to get your feedback on that. So probably we will ask around for more tips from your side in that regard.

But at the moment if you want to make sure you have all of the data I definitely recommend adding all of those variations, even though it clutters things up a little bit.”

You can see Mueller give this answer in the video below, starting at the 11:15 mark.

Source: This article was published on searchenginejournal.com by Matt Southern


Consumers do enjoy the convenience of the apps they use but are individually overwhelmed when it comes to defending their privacy.

When it comes to our collective sense of internet privacy, 2018 is definitely the year of awareness. It’s funny that it took Facebook’s unholy partnership with a little-known data-mining consulting firm named Cambridge Analytica to raise the alarm. After all, there were already abundant examples of how our information was being used by unidentified forces on the web. It really took nothing more than writing the words "Cabo San Lucas" as part of a throwaway line in some personal email to a friend to initiate a slew of Cabo resort ads and Sammy Hagar’s face plastering the perimeters of our social media feeds.

In 2018, it’s never been more clear that when we embrace technological developments, all of which make our lives easier, we are truly taking hold of a double-edged sword. But has our awakening come a little too late? As a society, are we already so hooked on the conveniences internet-enabled technologies provide that we’re hard-pressed to claim we want control of our personal data back?

It’s an interesting question. Our digital marketing firm recently conducted a survey to better understand how people feel about internet privacy issues and the new movement to re-establish control over what app providers and social networks do with our personal information.

Given the current media environment and scary headlines regarding online security breaches, the poll results, at least on the surface, were fairly predictable. According to our study, web users overwhelmingly object to how our information is being shared with and used by third-party vendors. No surprise here, a whopping 90 percent of those polled were very concerned about internet privacy. In a classic example of "Oh, how the mighty have fallen," Facebook and Google have suddenly landed in the ranks of the companies we trust the least, with only 3 percent and 4 percent of us, respectively, claiming to have any faith in how they handled our information.

Despite consumers’ apparent concern about online security, the survey results also revealed participants do very little to safeguard their information online, especially if doing so comes at the cost of convenience and time. In fact, 60 percent of them download apps without reading terms and conditions and close to one in five (17 percent) report that they’ll keep an app they like, even if it does breach their privacy by tracking their whereabouts.

While the survey reveals only 18 percent say they are “very confident” when it comes to trusting retail sites with their personal information, the sector is still on track to exceed a $410 billion e-commerce spend this year. This, despite more than half (54 percent) reporting they feel less secure purchasing from online retailers after reading about online breach after online breach.

What's become apparent from our survey is that while people are clearly dissatisfied with the state of internet privacy, they feel uninspired or simply ill-equipped to do anything about it. It appears many are hooked on the conveniences online living affords them and resigned to the loss of privacy if that’s what it costs to play.

The findings are not unique to our survey. In a recent Harvard Business School study, people who were told the ads appearing in their social media timelines had been selected specifically based on their internet search histories showed far less engagement with the ads, compared to a control group who didn't know how they'd been targeted. The study revealed that the actual act of company transparency, coming clean about the marketing tactics employed, dissuaded user response in the end.

As is the case with innocent schoolchildren, the world is a far better place when we believe there is an omniscient Santa Claus who magically knows our secret desires, instead of it being a crafty gift exchange rigged by the parents who clearly know the contents of our wish list. We say we want safeguards and privacy. We say we want transparency. But when it comes to a World Wide Web, where all the cookies have been deleted and our social media timeline knows nothing about us, the user experience becomes less fluid.

The irony is, almost two-thirds (63 percent) of those polled in our survey don’t believe that companies having access to our personal information leads to a better, more personalized online experience at all, which is the chief reason companies like Facebook state for wanting our personal information in the first place. And yet, when an app we’ve installed doesn’t let us tag our location to a post, inform us when a friend has tagged us in a photo, or alert us that the widget we were searching for is on sale this week, we feel slighted by our brave new world.

With the introduction of GDPR regulations this summer, the European Union has taken, collectively, the important first steps toward regaining some of the online privacy that we, as individuals, have been unable to take. GDPR casts the first stone at the Goliath that’s had free rein leveraging our personal information against us. By doling out harsh penalties and fines for those who abuse our private stats -- or at least those who aren’t abundantly transparent as to how they intend to use those stats -- the EU, and by extension, those countries conducting online business with them, has finally initiated a movement to curtail the hitherto laissez-faire practices of commercial internet enterprises. For this cyberspace Wild West, there’s finally a new sheriff in town.

I imagine that our survey takers applaud this action, although only about 25 percent were even aware of GDPR. At least on paper, the legislation has given us back some control over the privacy rights we’ve been letting slip away since we first signed up for a MySpace account. Will this new regulation affect our user experience on the internet? More than half of our respondents don’t think so, and perhaps, for now, we are on the way toward a balancing point between the information that makes us easier to market to and the information that’s been being used for any purpose under the sun. It’s time to leverage this important first step, and stay vigilant of its effectiveness with a goal of gaining back even more privacy while online.

Source: This article was published on entrepreneur.com by Brian Byer


You may have heard about Google’s mobile-first indexing. Since nearly 60 percent of all searches are mobile, it makes sense that Google would give preference to mobile-optimized content in its search results pages.

Are your website and online content ready? If not, you stand to lose search-engine rankings now, and your website may struggle to rank at all in the future.

Here is how to determine if you need help with Google’s mobile-first algorithm update:

What is mobile-first indexing?

Google creates an index of website pages and content to facilitate each search query. Mobile-first indexing means the mobile version of your website will weigh more heavily in Google’s indexing algorithm. Mobile-responsive, fast-loading content is given preference in first-page SERP rankings.

Mobile first doesn’t mean Google only indexes mobile sites. If your company does not have a mobile-friendly version, you will still get indexed, but your content will be ranked below mobile-friendly content. Websites with a great mobile experience will receive better search-engine rankings than a desktop-only version. Think about how many times you scroll to the second page of search results. Likely, not very often. That is why having mobile optimized content is so important.

How to determine if you need help

If you want to make sure you position your company to take advantage of mobile indexing as it rolls out, consider whether you can manage the following tasks on your own or if you need help:

  • Check your site: Take advantage of Google’s test site to see if your site needs help (see the sketch after this list).
  • Mobile page speed: Make sure you enhance mobile page speed and load times. Mobile optimized content should load in 2 seconds or less. You want images and other elements optimized to render well on mobile devices.
  • Content: You want high-quality, relevant and informative mobile-optimized content on your site. Include text, videos, images and more that are crawlable and indexable.
  • Structured data: Use the same structured data on both desktop and mobile pages, and use the mobile version of your URLs in the structured data on mobile pages.
  • Metadata: Make sure your metadata such as titles and meta descriptions for all pages is updated.
  • XML and media sitemaps: Make sure the mobile version of your site can access your XML and media sitemaps, keep your robots.txt and meta robots tags consistent, and include trust signals such as a link to your company’s privacy policy.
  • App index: If you use app indexing for your website, verify that the mobile version of your site is associated with your app association files.
  • Server capacity: Make sure your hosting servers have the capacity needed to handle both mobile and desktop crawls.
  • Google Search Console: If you use Google Search Console, make sure you add and verify your mobile site as well.
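As a rough illustration of the first item on this list, here is a minimal sketch of automating the check with Google’s URL Testing Tools API. Treat it as an assumption: the endpoint and response field reflect my understanding of that API, and the API key and test URL are placeholders, so verify against the current documentation before relying on it.

import requests

# Minimal sketch: run Google's mobile-friendly test against one URL.
# The endpoint and response field names are assumptions based on the
# URL Testing Tools API; the API key and test URL are placeholders.
ENDPOINT = ("https://searchconsole.googleapis.com/v1/"
            "urlTestingTools/mobileFriendlyTest:run")

response = requests.post(
    ENDPOINT,
    params={"key": "YOUR_API_KEY"},
    json={"url": "https://www.example.com/"},
)
result = response.json()

# Expected verdicts include "MOBILE_FRIENDLY" and "NOT_MOBILE_FRIENDLY".
print(result.get("mobileFriendliness"))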

What if you do not have a mobile site or mobile-optimized content?
If you have in-house resources to upgrade your website for mobile, the sooner you can implement the updates, the better.

If not, reach out to a full-service digital marketing agency like ours, which can help you update your website so that it can continue to compete. Without a mobile-optimized website, your content will not rank as well as websites with mobile-friendly content.

Source: This article was published on bizjournals.com by Sheila Kloefkorn


Ever wondered how the results of some popular keyword research tools stack up against the information Google Search Console provides? This article compares data from Google Search Console (GSC) search analytics against notable keyword research tools and looks at what you can extract from Google.

As a bonus, you can get “related searches” and “people also search for” results from Google search results pages by using the code at the end of this article.

This article is not meant to be a scientific analysis, as it only includes data from seven websites. To be sure we were gathering somewhat comprehensive data, we selected websites from the US and the UK across different verticals.

Procedure

1. Started by defining industries with respect to various website verticals

We used SimilarWeb’s top categories to define the groupings and selected the following categories:

  • Arts and entertainment.
  • Autos and vehicles.
  • Business and industry.
  • Home and garden.
  • Recreation and hobbies.
  • Shopping.
  • Reference.

We pulled anonymized data from a sample of our websites and were able to obtain unseen data from search engine optimization specialists (SEOs) Aaron Dicks and Daniel Dzhenev. Since this initial exploratory analysis involved quantitative and qualitative components, we wanted to spend time understanding the process and nuance rather than making the concessions required in scaling up an analysis. We do think this analysis can lead to a rough methodology for in-house SEOs to make a more informed decision on which tool may better fit their respective vertical.

2. Acquired GSC data from websites in each niche

Data was acquired from Google Search Console programmatically, using a Jupyter notebook.

Jupyter notebooks are an open-source web application that lets you create and share documents containing live code, equations, visualizations, and narrative text. We used one to extract website-level data from the Search Analytics API daily, providing much greater granularity than is currently available in Google’s web interface.
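As a minimal sketch of that daily extraction, assuming you have already created OAuth credentials (the creds object below) with Search Console access, the API call might look like this; the property URL and dates are placeholders:

from googleapiclient.discovery import build

# Assumes `creds` is an authorized OAuth2 credentials object with the
# Search Console (Webmasters API v3) scope; the auth flow is omitted here.
service = build('webmasters', 'v3', credentials=creds)

response = service.searchanalytics().query(
    siteUrl='https://www.example.com/',  # placeholder property
    body={
        'startDate': '2018-05-01',       # query one day at a time
        'endDate': '2018-05-01',
        'dimensions': ['query', 'page'],
        'rowLimit': 5000,
    },
).execute()

for row in response.get('rows', []):
    print(row['keys'], row['clicks'], row['impressions'], row['position'])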

3. Gathered ranking keywords of a single internal page for each website

Since home pages tend to gather many keywords that may or may not be topically relevant to the actual content of the page, we selected an established and performing internal page so the rankings are more likely to be relevant to the content of the page. This is also more realistic since users tend to do keyword research in the context of specific content ideas.

The image above is an example of the home page ranking for a variety of queries related to the business but not directly related to the content and intent of the page.

We removed brand terms and restricted the Google Search Console queries to first-page results.

Finally, we selected a head term for each page. The phrase “head term” is generally used to denote a popular keyword with high search volume. We chose terms with relatively high search volume, though not the absolute highest. Of the queries with the most impressions, we selected the one that best represented the page.

4. Did keyword research in various keyword tools and looked for the head term

We then used the head term selected in the previous step to perform keyword research in three major tools: Ahrefs, Moz, and SEMrush.

The “search suggestions” or “related searches” options were used, and all queries returned were kept, regardless of whether or not the tool specified a metric of how related the suggestions were to the head term.

Below we listed the number of results from each tool. In addition, we extracted the “people also search for” and “related searches” from Google searches for each head term (respective to country) and added the number of results to give a baseline of what Google gives for free.

**This result returned more than 5,000 rows. It was truncated to 1,001, which is the maximum workable, and sorted by descending volume.

We compiled the average number of keywords returned per tool:

5. Processed the data

We then processed the queries from each source and website with some language processing techniques: transforming words into their root forms (e.g., “running” to “run”), removing common words such as “a,” “the” and “and,” expanding contractions, and then sorting the words.

For example, this process would transform “SEO agencies in Raleigh” to “agency Raleigh SEO.”  This generally keeps the important words and puts them in order so that we can compare and remove similar queries.

We then created a percentage by dividing the number of unique terms by the total number of terms returned by the tool. This should tell us how much redundancy there is in each tool’s results.

Unfortunately, it does not account for misspellings, which can also be problematic in keyword research tools because they add extra cruft (unnecessary, unwanted queries) to the results. Many years ago, it was possible to target common misspellings of terms on website pages. Today, search engines do a really good job of understanding what you typed, even if it’s misspelled.
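A minimal sketch of this normalization step, using NLTK’s stemmer and stopword list; the contraction map is abbreviated here for illustration, and the stems are rough root forms rather than dictionary words:

import re
from nltk.corpus import stopwords   # requires nltk.download('stopwords')
from nltk.stem import PorterStemmer

CONTRACTIONS = {"don't": "do not", "can't": "cannot"}  # abbreviated map
STOP = set(stopwords.words('english'))
stem = PorterStemmer().stem

def normalize(query):
    """Lowercase, expand contractions, drop stopwords, stem, and sort."""
    q = query.lower()
    for short, full in CONTRACTIONS.items():
        q = q.replace(short, full)
    tokens = [stem(t) for t in re.findall(r"[a-z']+", q) if t not in STOP]
    return ' '.join(sorted(tokens))

queries = ['SEO agencies in Raleigh', 'Raleigh SEO agency']
unique = {normalize(q) for q in queries}   # both collapse to the same form
print(len(unique) / len(queries))          # uniqueness ratio: 0.5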

In the table below, SEMrush had the highest percentage of unique queries in their search suggestions.

This is important because, if 1,000 keywords are only 70 percent unique, that means 300 keywords basically have no unique value for the task you are performing.

Next, we wanted to see how well the various tools found queries used to find these performing pages. We took the previously unique, normalized query phrases and looked at the percentage of GSC queries the tools had in their results.

In the chart below, note the average GSC coverage for each tool, and that Moz is higher here, most likely because it returned 1,000 results for most head terms. All tools performed better than the related queries scraped from Google (use the code at the end of the article to do the same).
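The coverage number itself is just a set intersection over normalized phrases; a sketch, reusing the normalize() helper from the sketch above:

def coverage(gsc_queries, tool_queries):
    """Share of GSC queries that also appear in a tool's suggestions,
    compared after applying the normalize() helper defined earlier."""
    gsc = {normalize(q) for q in gsc_queries}
    tool = {normalize(q) for q in tool_queries}
    return len(gsc & tool) / len(gsc)

print(coverage(['seo agency raleigh'],
               ['Raleigh SEO agencies', 'seo tools']))  # 1.0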

Getting into the vector space

After performing the previous analysis, we decided to convert the normalized query phrases into vector space to visually explore the variations in various tools.

Assigning phrases to vector space uses pre-trained word vectors that are reduced in dimensionality (to x and y coordinates) using t-distributed Stochastic Neighbor Embedding (t-SNE), a technique available in Python through scikit-learn. Don’t worry if you are unfamiliar with this; generally, word vectors are words converted into numbers in such a way that the numbers represent the inherent semantics of the keywords.

Converting the words to numbers helps us process, analyze and plot the words. When the semantic values are plotted on a coordinate plane, we get a clear understanding of how the various keywords are related. Points grouped together will be more semantically related, while points distant from one another will be less related.
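As an illustrative sketch of this step (not the authors’ exact pipeline), you could average pre-trained word vectors with spaCy and project them with scikit-learn’s t-SNE; the model name and sample phrases below are placeholders:

import numpy as np
import matplotlib.pyplot as plt
import spacy
from sklearn.manifold import TSNE

nlp = spacy.load('en_core_web_md')   # a spaCy model that ships with vectors

phrases = {
    'gsc': ['shower curtain rod', 'fabric shower curtain'],
    'moz': ['bathroom curtain ideas', 'shower liner'],
}

labels, vectors = [], []
for source, queries in phrases.items():
    for q in queries:
        labels.append(source)
        vectors.append(nlp(q).vector)   # mean of the phrase's token vectors

# Project the high-dimensional vectors onto two dimensions for plotting.
# Perplexity must be smaller than the number of samples.
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(
    np.array(vectors))

for source in phrases:
    idx = [i for i, label in enumerate(labels) if label == source]
    plt.scatter(coords[idx, 0], coords[idx, 1], label=source)
plt.legend()
plt.show()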

Shopping

This is an example where Moz returns 1,000 results, yet the search volume and searcher keyword variations are very low. This is likely caused by Moz semantically matching particular words instead of matching the meaning of the whole phrase. We asked Moz’s Russ Jones to better understand how Moz finds related phrases:

“Moz uses many different methods to find related terms. We use one algorithm that finds keywords with similar pages ranking for them, we use another ML algorithm that breaks up the phrase into constituent words and finds combinations of related words producing related phrases, etc. Each of these can be useful for different purposes, depending on whether you want very close or tangential topics. Are you looking to improve your rankings for a keyword or find sufficiently distinct keywords to write about that are still related? The results returned by Moz Explorer is our attempt to strike that balance.”

Moz does include a nice relevancy measure, as well as a filter for fine-tuning the keyword matches. For this analysis, we just used the default settings:

In the image below, the plot of the queries shows what is returned by each keyword vendor converted into the coordinate plane. The position and groupings impart some understanding of how keywords are related.

In this example, Moz (orange) produces a significant volume of various keywords, while other tools picked far fewer (Ahrefs in green) but more related to the initial topic:

Autos and vehicles

This is a fun one. You can see that Moz and Ahrefs had pretty good coverage of this high-volume term. Moz won by matching 34 percent of the actual terms from Google Search Console. Moz had double the number of results (almost by default) that Ahrefs had.

SEMrush lagged here with 35 queries for a topic with a broad amount of useful variety.

The larger gray points represent more “ground truth” queries from Google Search Console. Other colors are the various tools used. Gray points with no overlaid color are queries that various tools did not match.

Internet and telecom

This plot is interesting in that SEMrush jumped to nearly 5,000 results, from the 50-200 range in other results. You can also see (toward the bottom) that there were many terms outside of what this page tended to rank for or that were superfluous to what would be needed to understand user queries for a new page:

Most tools grouped somewhat close to the head term, while SEMrush (in purplish-pink) produced a large number of potentially unrelated points, even though Google’s “people also search for” terms were found in certain groupings.

General merchandise   

Here is an example of a keyword tool finding an interesting grouping of terms (groupings indicated by black circles) that the page currently doesn’t rank for. In reviewing the data, we found the grouping to the right makes sense for this page:

The two black circles help to visualize the ability to find groupings of related queries when plotting the text in this manner.

Analysis

Search engine optimization specialists with experience in keyword research know there is no one tool to rule them all.  Depending on the data you need, you may need to consult a few tools to get what you are after.

Below are my general impressions from each tool after reviewing, qualitatively:

  • The query data and numbers from our analysis of the uniqueness of results.
  • The likelihood of finding terms that real users use to find performing pages.

Moz     

Moz seems to have impressive numbers in terms of raw results, but we found that the overall quality and relevance of results was lacking in several cases.

Even when playing with the relevancy scores, it quickly went off on tangents, providing queries that were in no way related to my head term (see Moz suggestions for “Nacho Libre” in the image above).

With that said, Moz is very useful due to its comprehensive coverage, especially for SEOs working in smaller or newer verticals. In many cases, it is exceedingly difficult to find keywords for newer trending topics, so more keywords are definitely better here.

An average of 64 percent coverage of real user data from GSC for the selected domains was very impressive. This also tells you that while Moz’s results can go down rabbit holes, they get a lot right as well. They have traded off a loss of fidelity for comprehensiveness.

Ahrefs

Ahrefs was my favorite in terms of quality due to their nice marriage of comprehensive results with the minimal amount of clearly unrelated queries.

It had the lowest number of average reported keyword results per vendor, but this is actually misleading due to the large outlier from SEMrush. Across the various searches, it tended to return a nice array of terms without a lot of clutter to wade through.

Most impressive to me was a specific type of niche grill that shared a name with a popular location. The results from Ahrefs stayed right on point, while SEMrush returned nothing, and Moz went off on tangents with many keywords related to the popular location.

A representative of Ahrefs clarified to me that their tool’s “search suggestions” use data from Google Autosuggest. They currently do not have a true recommendation engine the way Moz does. Using “Also ranks for” and “Having same terms” data from Ahrefs would put them more on par with the number of keywords returned by other tools.

 SEMrush   

SEMrush overall offered great quality, with 90 percent of the keywords being unique. It was also on par with Ahrefs in terms of matching queries from GSC.

It was, however, the most inconsistent in terms of the number of results returned. It yielded 1,000+ keywords (actually 5,000) for Internet and Telecom > Telecommunications, yet covered only 22 percent of the queries in GSC. For another query, it was the only tool not to return related keywords. This is a very small dataset, so there is clearly an argument that these were anomalies.

Google: People Also Search For/Related Searches 

These results were extremely interesting because they tended to more closely match the types of searches users would make while in a particular buying state, as opposed to those specifically related to a particular phrase. 

For example, looking up “[term] shower curtains” returned “[term] toilet seats.”

These are unrelated from a semantic standpoint, but they are both relevant for someone redoing their bathroom, suggesting the similarities are based on user intent and not necessarily the keywords themselves.

Also, since data from “people also search” are tied to the individual results in Google search engine result pages (SERPs), it is hard to say whether the terms are related to the search query or operate more like site links, which are more relevant to the individual page.

Code used

When entered into the JavaScript console of Google Chrome on a Google search results page, the following will output the “People also search for” and “Related searches” data on the page, if they exist.

var data = {};

// "Related searches" appear as paragraphs inside .brs_col blocks.
data.relatedsearches = [].map.call(
  document.querySelectorAll(".brs_col p"),
  e => ({ query: e.textContent })
);

// "People also search for" entries are unclassed divs nested inside
// each result (.rc) block.
data.peoplesearchfor = [].map.call(
  document.querySelectorAll(".rc > div:nth-child(3) > div > div > div:not([class])"),
  e => ({ query: e.textContent })
);

// Flatten both lists into one array of query strings, skipping any
// empty entries, and print one per line.
var out = [];
for (var d in data) {
  for (var i in data[d]) {
    if (data[d][i] && data[d][i].query) {
      out.push(data[d][i].query);
    }
  }
}
console.log(out.join("\n"));

In addition, there is a Chrome add-on called Keywords Everywhere that will expose these terms in search results, as shown in several SERP screenshots throughout the article.

Conclusion

Especially for in-house marketers, it is important to understand which tools tend to have data most aligned with your vertical. In this analysis, we showed some benefits and drawbacks of a few popular tools across a small sample of topics. We hope it provides an approach that can form the underpinnings of your own analysis, or be improved upon further, and gives SEOs a more practical way of choosing a research tool.

Keyword research tools are constantly evolving and adding newly found queries through the use of clickstream data and other data sources. The utility in these tools rests squarely on their ability to help us understand more succinctly how to better position our content to fit real user interest and not on the raw number of keywords returned. Don’t just use what has always been used. Test various tools and gauge their usefulness for yourself.

Source: This article was published on searchengineland.com by R Oakes


The new Google College search feature aggregates data on colleges like admission rates, student demographics, majors available at the college, notable alumni and more, and displays them as a search result.

After rolling out its job search feature on Search, Google now aims to make it easier for students to find the college of their choice. The company is rolling out a new feature for Search that will enable users to simply search for a college and get information like admissions, cost, student life and more, directly as a search result. To provide an idea of how much a college will cost, Search will also display information about the average cost after applying student aid, including breakdowns by household income.

The feature is currently available only in the US, and Google says that it displays results based on public data sourced from the U.S. Department of Education’s College Scorecard and Integrated Postsecondary Education Data System (IPEDS), a comprehensive data set covering 4-year colleges. We have reached out to Google for comment on whether this feature will be made available for Indian users looking to study in the US, or for those looking at colleges within India. The story will be updated once we receive a response.

Google has also worked with education researchers and non-profit organizations, high school counselors, and admissions professionals to “build an experience to meet your college search needs.” When one searches for a college or a university, alongside the above-mentioned cost breakdown, there are also some other tabs that provide additional information about enrollment rates, majors available at the college, student demographics, notable alumni and more. There is also an ‘Outcome’ tab where one will find the percentage of students graduating from colleges or universities, along with the typical annual income of a graduate. In case you are interested in exploring other options, there is also a ‘Similar Colleges’ tab.

Google states in its blog, “Information is scattered across the internet, and it’s not always clear what factors to consider and which pieces of information will be most useful for your decision. In fact, 63 percent of recently-enrolled and prospective students say they have often felt lost when researching college or financial aid options.” The new feature is now rolling out on mobile and some of the features will also be available on desktops.

Source: This article was published on digit.in by Shubham Sharma


UNLIKE GOOGLE, THE DUCKDUCKGO SEARCH ENGINE DOESN’T TRACK YOU.

In 2006 Gabriel Weinberg sold a company for millions. A year and a half later, he founded his next project with the money: an alternative search engine named DuckDuckGo. Initially, the goal was to make it more efficient and compelling than Google by cutting down on spam and providing instant answers, similar to a Wikipedia or IMDb. The project launched in 2008, bringing Weinberg’s brainchild into public consciousness.

But Weinberg didn’t realize at the time that the main reason people were wary of Google wasn’t the user experience but how the search engine tracked its users. Being the astute entrepreneur that Weinberg is, he instantly saw this as an area for an opportunity and a way to compete with one of the largest companies in the world. As a result, DuckDuckGo became the go-to search engine for privacy — long before the NSA leaks in 2013, when the government got “Snowdened,” and Facebook’s recent Cambridge Analytica scandal — all with a better user experience.

Here’s why you should consider making the move to the “Duck Side.”

1. THE SEARCH ENGINE THAT DOESN’T TRACK YOU


According to a micro-site connected to DuckDuckGo — DontTrack.us — Google tracks users on 75% of websites. The information gathered from your site visits and search terms can be used to follow you across over two million websites and applications. Oh, and all that private information is stored by Google indefinitely. (Hint: Don’t use Google for embarrassing searches that might cost you money during a divorce, for example. All that information can be subpoenaed by lawyers.)

Even Facebook tracks you across the internet. According to Weinberg, the social media company “operates a massive hidden tracker network.” He claims they’re “lurking behind about 25% of the top million sites, where consumers don’t expect to be tracked by Facebook.” And, as of now, there is no way to opt out of this so-called “experience.” (Don’t forget: Facebook owns Instagram.)

And since there are no digital privacy laws currently active in the United States, at the time of this writing anyway, consumers are forced to vote with their attention and time once again. As it stands now, companies are not required by federal law to share what information they collect, how it’s used, and whether or not it’s even been stolen. You’ve got to protect yourself by choosing your platforms and tools wisely.

As for DuckDuckGo, they do not track you or store your personal information. And while they do have some advertising on their platform for revenue purposes, you only see ads for what you search for — and those ads won’t stalk you around the web like a rabid spider.

2. DUCKDUCKGO IS A COMPANY WITH SERIOUS BALLS


Weinberg resembles a younger, techier version of Eric Bana, and he’s got the same gall as the actor/rally racer. Case in point: in 2011, Weinberg pulled a highly successful publicity stunt for his alternative search engine by strategically placing a billboard right in Google’s backyard that called out the company for tracking its users. It earned the scrappy start-up valuable press from the likes of USA Today, Business Insider and Wired.

For those opposed to Google’s handling of users’ data, the billboard represented a major burn. Of course, it’s just one of the many ways Weinberg helped his company gain users. I highly recommend Traction, a wildly useful book co-written by Weinberg and Justin Mares. It’s a must-read for any start-up founder or creative entrepreneur.

3. KEEP YOUR SEARCHES PRIVATE & EFFICIENT


As for working with search engines, think of all the “embarrassing searches” you wish to keep private, whatever they may be. Now imagine that Google has all that information stored indefinitely — plus, it can be held against you in a court of law. Scary stuff, right? Turns out that what you search for online can be far more sensitive than the things you openly share on social media platforms. So how can you keep that stuff private?

In 2017 DuckDuckGo was able to integrate with the Brave Browser to provide a potential solution. With most browsers, websites can still track and monitor your behavior, even while you’re in “private browsing mode.” However, with this new combination of Brave’s privacy protection features and DuckDuckGo’s private search capabilities, you can surf the web without having your search terms or personal information collected, sold or shared.

But that’s not the only thing DuckDuckGo has to offer for a more empowered user experience. Another feature the search engine has become known for is “bangs.” Here’s how they work.


Random example: Let’s say you want to find Camille Paglia books on Amazon. If you were to search via Google, you might type “camille paglia site:amazon.com.” Your results might look like this:


Now let’s say you do the same thing with DuckDuckGo’s bangs. In this case, you would type “!a Camille Paglia.” Here’s what you’d get:


Bang! You’re right there on Amazon, redirected to their internal search page from DuckDuckGo.

Of course, you might be thinking, “Why not just search Amazon.com for the answer, to begin with?” Well, bangs aren’t just for searching Amazon. You can use bangs to search nearly 11,000 sites (as of this writing), including eBay, YouTube (owned by Google), Wikipedia, Instagram and more. You can even suggest new ones.
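Under the hood, a bang is just text in the query string, so you can construct bang searches programmatically. Here is a small sketch; the !a and !w bang codes are real, but see DuckDuckGo’s bang directory for the full list:

from urllib.parse import quote_plus

def bang_url(bang, query):
    """Build a DuckDuckGo URL that redirects through the given bang."""
    return "https://duckduckgo.com/?q=" + quote_plus("!%s %s" % (bang, query))

print(bang_url("a", "camille paglia"))   # redirects to an Amazon search
print(bang_url("w", "semantic search"))  # redirects to Wikipedia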

Plus, with DuckDuckGo, you can see social media profiles by searching the user’s handle, explore app stores and discover alternative apps, shorten and expand links/URLs, generate complex passwords, find rhymes, determine whether or not sites are down (or if it’s really just you), calculate loan payments, receive instant answers to questions and more — all without having to leave the search engine.

4. IT’S GROWING — FAST


In a sense, Weinberg has achieved his initial goal of creating a search engine that offers a more direct and spam-free user experience. It just also happens to be much more private and way less creepy than the buzzword alternatives. Perhaps that’s why it’s growing so damn fast — 10 years after launching, that is.

In fact, 2017 was a monumental year for DuckDuckGo, accounting for 36% of all searches ever conducted through the search engine. It was also during 2017 that the company achieved 55% growth in daily private searches, crossing the threshold of 20 million private searches a day. Sure, the experience isn’t as highly customized as Google’s — which relies on your personal data to fine-tune results — but this little search engine that could still manages to provide solid, relevant results without infringing on your personal privacy.

5. BALANCING THE SCALES OF GOOD & EVIL


When Google first started, it touted the mantra “Don’t be evil.” Curiously, it’s since changed to “Do the right thing.” It’s only now that most users have started to ask, “Do the right thing for whom?” And in light of the recent Facebook scandals, these same users are starting to wonder, “What the hell is my data actually being used for? Who does it benefit? And who actually has it?” Unsurprisingly, these are turning into the biggest questions of our time.

In the past, users assumed they had nothing to hide, and that it was even shameful to consider hiding their internet histories or online preferences. “Nobody cares about me. I’m nobody.” But to a major data company, one without constraints, how you spend your time and money, with whom, and on what sites can easily be sold to the highest bidder at your expense. So while Dax the Duck may not need to say, “We’re a source for good,” the brains behind DuckDuckGo seem to be balancing the scales in that direction anyway.

Through their donations to private organizations, as well as their micro-sites providing eye-opening data, various email campaigns to help internet users maintain their privacy, and plenty of generous content outlining the trouble with “informed consent” online, DuckDuckGo has become a force for good in the digital age. Of course, Google doesn’t have to become obsolete in the process — they still offer some remarkable services — but there need to be more alternatives if only to provide a choice. What do you want as a search engine user? And how do you want your information to be handled?

That’s the real service DuckDuckGo provides: it gives you the option to say no to tracking. And without real policies in place in the U.S. to protect internet users, your best bet for privacy and data protection may just be to #ComeToTheDuckSide.

 

Source: This article was published on crixeo.com by A.J. Sørensen


San Francisco: Google took action on nearly 90,000 user reports of spam in Search in 2017, and has now asked more users to come forward and help the tech giant spot and squash spam.

According to Juan Felipe Rincon, Global Search Outreach Lead at Google, the automated Artificial Intelligence (AI)-based systems are constantly working to detect and block spam.

"Still, we always welcome hearing from you when something seems phishy. Reporting spam, malware, and other issues you find help us protect the site owner and other searchers from this abuse," Rincon said in a blog post.

"You can file a spam report, a phishing report or a malware report. You can also alert us to any issue with Google search by clicking on the 'Send feedback' link at the bottom of the search results page," he added.

Last year, Google sent over 45 million notifications to registered website owners, alerting them to possible problems with their websites which could affect their appearance in a search.

"Just as Gmail fights email spam and keeps it out of your inbox, our search spam fighting systems work to keep your search results clean," Rincon said.

In 2017, Google conducted over 250 webmaster meetups and office hours around the world reaching more than 220,000 website owners.

"Last year, we sent 6 million manual action messages to webmasters about practices we identified that were against our guidelines, along with information on how to resolve the issue," the Google executive said.

With AI-based systems, Google was able to detect and remove more than 80 percent of compromised sites from search results last year.

"We're also working closely with many providers of popular content management systems like WordPress and Joomla to help them fight spammers that abuse forums and comment sections," the blog post said.

Source: This article was published on cio.economictimes.indiatimes.com
