[Source: This article was Published in mirror.co.uk By Sophie Curtis - Uploaded by AIRS Member: Issac Avila]

Google now lets you automatically delete your location history after a fixed period of time

It probably comes as no surprise that Google keeps track of everywhere you go via the apps you use on your smartphone.

This information is used to give you more personalised experiences, like maps and recommendations based on places you've visited, real-time traffic updates about your commute, help to find your phone and more targeted ads.

But while these things can be useful, you may not feel comfortable with the idea of Google holding on to that information indefinitely.

In the past, if you chose to enable Location History, the only way to delete that data was to go into your app settings and remove it manually.

But Google recently introduced a new setting that allows you to automatically delete your location history after a fixed period of time.

There are currently only two options - automatically deleting your Location History after three months or after 18 months - but it beats leaving a trail of information that you might not want Google or others to see.

Here's how to automatically delete your Location History on Android and iOS:

  1. Open the Google Maps app
  2. In the top left, tap the Menu icon and select "Your timeline".
  3. In the top right, tap the More icon and select "Settings and privacy".
  4. Scroll to "Location settings".
  5. Tap "Automatically delete Location History".
  6. Follow the on-screen instructions.

If you'd prefer to turn off Location History altogether, you can do so in the "Location History" section of your Google Account.

 

You can also set time limits on how long Google can keep your Web & App Activity, which includes data about websites you visit and apps you use.

Google uses this data to give you faster searches, better recommendations and more personalised experiences in Maps, Search and other Google services.

Again, you have the option to automatically delete this data after three months or 18 months.

  1. Open the Gmail app.
  2. In the top left, tap the Menu icon and select "Settings".
  3. Select your account and then tap "Manage your Google Account".
  4. At the top, tap Data & personalisation.
  5. Under "Activity controls", tap "Web & App Activity".
  6. Tap "Manage activity".
  7. At the top right, tap the More icon and then select "Keep activity for".
  8. Tap the option for how long you want to keep your activity and then tap "Next".
  9. Confirm to save your choice.

The new tools are part of Google's efforts to give users more control over their data.

The company has also introduced "incognito mode" in a number of its smartphone apps, which stops Google from tracking your activity.

It is also putting pressure on web and app developers to be more transparent about their use of cookies so that users can make more informed choices about whether to accept them.

Categorized in Internet Privacy

[Source: This article was Published in exchangewire.com By Mathew Broughton - Uploaded by AIRS Member: Eric Beaudoin]

Talk about Google, along with their domination of the digital ad ecosystem, would not be on the lips of those in ad tech were it not for their original product: the Google Search engine.

Despite negative press coverage and EU fines, some estimates suggest the behemoth continues to enjoy a market share of just under 90% in the UK search market. However, there have been rumblings of discontent from publishers, which populate the results pages, about how they have been treated by the California-based giant.

This anger, combined with concerns over GDPR and copyright law violations, has prompted the launch of new ‘disruptive’ search engines designed to address these concerns. But will these have any effect on Google’s stranglehold on the global search industry? ExchangeWire details the problems publishers are experiencing with Google along with some of the new players in the search market, what effect they have had thus far, and how advertisers could capitalize on privacy-focused competition in the search market.

Google vs publishers

Publishers have experienced margin squeezes for years, whilst Google’s sales have simultaneously skyrocketed, with parent company Alphabet’s revenue reaching USD$36.3bn (£28.7bn) in the first quarter of 2019 alone. Many content producers also feel dismay towards Google’s ‘enhanced search listings’, as these essentially scrape content from their sites and show it in their search results, eliminating the need for users to visit their site, and in turn their monetization opportunity.

Recent changes to the design of the search results page, at least on mobile devices, which are seemingly aimed at making the differences between ads and organic listings even more subtle (an effect which is particularly noticeable on local listings), will also prove perturbing for the publishers which do not use Google paid search listings.

DuckDuckGo: The quack grows louder

Perhaps the best-known disruptive search engine is DuckDuckGo, which markets itself on protecting user privacy whilst also refining results by excluding low-quality sources such as content mills. In an attempt to address privacy concerns, and in recognition of anti-competitive investigations, Google has added DuckDuckGo to Chrome as a default search option in over 60 markets including the UK, US, Australia and South Africa. Reflecting their increased presence in the search market, DuckDuckGo’s quack has become louder recently, adding momentum to calls to transform the toothless ‘Do Not Track’ option into a setting that offers meaningful protection for user privacy, as originally intended.

Qwant: Local search engines fighting Google

Qwant is a France-based search engine which, similar to DuckDuckGo, preserves user privacy by not tracking their queries. Several similar locally-based engines have been rolled out across Europe, including Mojeek (UK) and Unbubble (Germany). Whilst Qwant currently occupies only a small percentage (~6%) of the French search market, their share has grown consistently year-on-year since their launch in 2013, to the extent that they are now challenging established players such as Yahoo! in the country. In recognition of their desire to increase their growth across Europe, whilst continuing to operate in a privacy-focused manner, Qwant has recently partnered with Microsoft to leverage their various tech solutions. A further sign of their growing gravitas is the French government’s decision to eschew Chrome in favour of their engine.

Ahrefs: The 90/10 profit share model

A respected provider of performance-monitoring tools within search, Ahrefs is now working on directly competing with Google with their own engine, according to a series of tweets from founder & CEO Dmitry Gerasimenko. Whilst a commitment to privacy will please users, content creators will be more interested in the proposed profit-share model, whereby 90% of the prospective search revenue will be given to the publisher. Though there is every chance that this tweet-stage idea will never come to fruition, the Singapore-based firm already has impressive crawling capabilities which are easily transferable to indexing, so it is worth examining in the future.

Opportunity for advertisers

With the launch of Google privacy tools, along with stricter forms of intelligent tracking prevention (ITP) on the Safari and Firefox browsers, discussions have abounded within the advertising industry on whether budgets will be realigned away from display and video towards fully contextual methods such as keyword-based search. Stricter implementation of GDPR and the prospective launch of similar privacy legislation across the globe will further the argument that advertisers need to examine privacy-focused solutions.

Naturally, these factors will compromise advertisers who rely on third-party targeting methods and tracking user activity across the internet, meaning they need to identify ways of diversifying their offering. Though they have a comparatively tiny market share, disruptive search engines represent a potential opportunity for brands and advertisers to experiment with privacy-compliant search advertising.

Categorized in Search Engine

[Source: This article was Published in hannity.com By Hannity Staff - Uploaded by AIRS Member: Logan Hochstetler]

Google and other American tech companies were thrust into the national spotlight in recent weeks, with critics claiming the platforms are intentionally censoring conservative voices, “shadow-banning” leading personalities, and impacting American elections in an unprecedented way.

In another explosive exposé, Project Veritas Founder James O’Keefe revealed senior Google officials vowing to prevent the “Trump Situation” from occurring again during the 2020 elections.

The controversy dates back much further. In the fall of 2018, The SEO Tribunal published an article detailing 63 “fascinating Google search statistics.”

The article shows the planet’s largest search engine handles more than 63,000 requests per second, owns more than 90% of the global market share, and generated $95 billion in ad sales during 2017.

1. Google receives over 63,000 searches per second on any given day.

(Source: SearchEngineLand)

That’s the average number of searches Google handles every second, which translates into roughly 3.8 million searches per minute, 228 million searches per hour, 5.6 billion searches per day, and at least 2 trillion searches per year. Pretty impressive, right?
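For context, those conversions follow from straightforward multiplication; here is a quick back-of-the-envelope check (rough rounding, not official Google figures):

```python
# Rough sanity check of the per-second figure quoted above.
per_second = 63_000
print(f"per minute: {per_second * 60:,}")            # ~3.8 million
print(f"per hour:   {per_second * 60 * 60:,}")       # ~228 million
print(f"per day:    {per_second * 86_400:,}")        # ~5.4 billion
print(f"per year:   {per_second * 86_400 * 365:,}")  # ~2 trillion
```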

2. 15% of all searches have never been searched before on Google.

(Source: SearchEngineLand)

Out of the trillions of searches every year, 15% of queries have never been seen by Google before. Such queries mostly relate to day-to-day activities, news, and trends, according to Google’s own search statistics.

3. Google takes over 200 factors into account before delivering you the best results to any query in a fraction of a second.

(Source: Backlinko)

Of course, some of them are rather controversial, and others may vary significantly, but there are also those that are proven and important, such as content and backlinks.

4. Google’s ad revenue amounted to almost $95.4 billion in 2017.

(Source: Statista)

According to recent Google stats, that is up 25% from 2016. The search giant saw nearly 22% ad revenue growth in the fourth quarter alone.

5. Google owns about 200 companies.

(Source: Investopedia)

That is, on average, as if they’ve been acquiring more than one company per week since 2010. Among them are companies involved in mapping, telecommunications, robotics, video broadcasting, and advertising.

6. Google’s signature email product has a 27% share of the global email client market.

(Source: Litmus)

This is up by 7% since 2016.

7. Upon going public, Google figures show the company was valued at $27 billion.

(Source: Entrepreneur)

More specifically, the company sold over 19 million shares of stock at $85 per share. In other words, it was valued at roughly as much as General Motors.

8. The net US digital display ad revenue for Google was $5.24 billion in 2017.

(Source: Emarketer)

Google statistics show that this number is significantly lower than Facebook, which made $16.33 billion, but much higher than Snapchat, which brought in $770 million from digital display ads.

9. Google has a market value of $739 billion.

(Source: Statista)

As of May 2018, the search market leader has a market value of $739 billion, coming in behind Apple ($924 billion), Amazon ($783 billion), and Microsoft ($753 billion).

10. Google’s owner, Alphabet, reported an 84% rise in profits for the last quarter.

(Source: The Guardian)

The rising global privacy concerns didn’t affect Google’s profits. According to Thomson Reuters I/B/E/S, the quarterly profit of $9.4 billion exceeded estimates of $6.56 billion. Additionally, the price for clicks and views of ads sold by Google rose in its favor mostly due to advertisers who pursued ad slots on its search engine, YouTube video service, and partner apps and websites.

Read the full list at The SEO Tribunal.

Categorized in Search Engine

 [Source: This article was Published in searchenginejournal.com By Barry Schwartz - Uploaded by AIRS Member: Martin Grossner]

Google says the June 3 update is not a major one, but keep an eye out for how your results will be impacted.

Google has just announced that tomorrow it will be releasing a new broad core search algorithm update. These core updates impact how search results are ranked and listed in the Google search results.

Google announced the update in a tweet from its @searchliaison account.

Previous updates. Google has done previous core updates. In fact, it does one every couple months or so. The last core update was released in March 2019. You can see our coverage of the previous updates over here.

Why pre-announce this one? Google said the community has been asking Google to be more proactive when it comes to these changes. Danny Sullivan, Google’s search liaison, said there is nothing specifically “big” about this update compared to previous updates. Google is being proactive about notifying site owners and SEOs, Sullivan said, so people aren’t left “scratching their heads after-the-fact.”

When is it going live? Monday, June 3, Google will make this new core update live. The exact timing is not known yet, but Google will also tweet tomorrow when it does go live.

Google’s previous advice. Google has previously shared this advice around broad core algorithm updates:

“Each day, Google usually releases one or more changes designed to improve our results. Some are focused around specific improvements. Some are broad changes. Last week, we released a broad core algorithm update. We do these routinely several times per year.

As with any update, some sites may note drops or gains. There’s nothing wrong with pages that may now perform less well. Instead, it’s that changes to our systems are benefiting pages that were previously under-rewarded.

There’s no ‘fix’ for pages that may perform less well other than to remain focused on building great content. Over time, it may be that your content may rise relative to other pages.”

 

Categorized in Search Engine

 [Source: This article was Published in searchenginejournal.com By Dave Davies - Uploaded by AIRS Member: Clara Johnson]

Let’s begin by answering the obvious question:

What Is Universal Search?

There are a few definitions for universal search on the web, but I prefer hearing it from the horse’s mouth on things like this.

While Google hasn’t given a strict definition that I know of as to what universal search is from an SEO standpoint, they have used the following definition in their Search Appliance documentation:

“Universal search is the ability to search all content in an enterprise through a single search box. Although content sources might reside in different locations, such as on a corporate network, on a desktop, or on the World Wide Web, they appear in a single, integrated set of search results.”

Adapted for SEO and traditional search, we could easily turn it into:

“Universal search is the ability to search all content across multiple databases through a single search box. Although content sources might reside in different locations, such as a different index for specific types or formats of content, they appear in a single, integrated set of search results.”

What other databases are we talking about? Basically:

[Image: the main Universal Search databases]

On top of this, there are additional databases that information is drawn from (hotels, sports scores, calculators, weather, etc.) and additional databases with user-generated information to consider.

These range from reviews to related searches to traffic patterns to previous queries and device preferences.

Why Universal Search?

I remember a time, many years ago, when there were 10 blue links…

[Image: a traditional results page of 10 blue links]

It was a crazy time of discovery. Discovering all the sites that didn’t meet your intent or your desired format, that is.

And then came Universal Search. It was announced in May of 2007 (by Marissa Mayer, if that gives it context) and rolled out just a couple months after they expanded on the personalization of results.

The two were connected and not just by being announced by the same person. They were connected in illustrating their continued push towards Google’s mission statement:

“Our mission is to organize the world’s information and make it universally accessible and useful.”

Think about those 10 blue links and what they offered. Certainly, they offered a scope of information not accessible at any prior point in time, but they also brought a problematic depth of uncertainty.

Black hats aside (and there were a lot more of them then), you clicked a link in hopes that you understood what was on the other side of that click, and we wrote titles and descriptions that hopefully fully described what we had to offer.

A search was a painful process; we just didn’t know it because it was better than anything we’d had prior.

Enter Universal Search

Then there was Universal Search. Suddenly the guesswork was reduced.

Before we continue, let’s watch a few minutes of a video put out by Google shortly after Universal Search launched.

The video starts at the point where they’re describing what they were seeing in the eye tracking of search results and illustrates what universal search looked like at the time.

 

OK – notwithstanding that this was a core Google video discussing a major new Google feature, and that it has (at the time of writing) 4,277 views and two peculiar comments – this is an excellent look at the “why” of Universal Search, as well as at what it was at the time, and how much and how little it’s changed.

How Does It Present Itself?

We saw a lot of examples of Universal Search in my article on How Search Engines Display Search Results.

Whereas there we focused on the layout itself and where each section comes from, here we’re discussing more of the why and how of it.

At a root level and as we’ve all seen, Universal Search presents itself as sections on a webpage that stand apart from the 10 blue links. They are often, but not always, organically generated (though I suspect they are always organically driven).

This is to say, whether a content block exists would be handled on the organic search side, whereas what’s contained in that content block may or may not include ads.

So, let’s compare then versus now, ignoring cosmetic changes and just looking at what the same result would look like with and without Universal Search by today’s SERP standards.

[Image: the [what was the civil war] results page shown with and without Universal Search]

This answers two questions in a single image.

It answers the key question of this section, “How does Universal Search present itself?”

This image also does a great job of answering the question, “Why?”

Imagine the various motivations I might have to enter the query [what was the civil war]. I may be:

  • A high school student doing an essay.
  • Someone who simply is not familiar with the historic event.
  • Looking for information on the war itself or my query may be part of a larger dive into civil wars across nations or wars in general.
  • Someone who prefers articles.
  • Someone who prefers videos.
  • Just writing an unrelated SEO article and need a good example.

The possibilities are virtually endless.

If you look at the version on the right, which link would you click?

How about if you prefer video results?

Making that decision will take you longer than it likely would with Universal Search options in place.

And that’s the point.

The Universal Search structure makes decision making faster across a variety of intents, while still leaving the blue links (though not always 10 anymore) available for those looking for pages on the subject.

In fact, even if what you’re looking for exists in an article, the simple presence of Universal Search results helps filter out the results you don’t want, and leaves SEO pros and website owners free to focus our articles on ranking in the traditional search results, and our other types and formats on the appropriate sections.

How Does Google Pick the Sections?

Let me begin this section by stating very clearly – this is the best guess.

As we’re all aware, Google’s systems are incredibly complicated. There may be more pieces than I am aware of, obviously.

There are two core areas I can think of that they would use for these adjustments.

Users

Now, before you say, “But Google says they don’t use user metrics to adjust search results!” let’s consider the specific wording that Google’s John Mueller used when responding to a question on user signals:

“… that’s something we look at across millions of different queries, and millions of different pages, and kind of see in general is this algorithm going the right way or is this algorithm going in the right way.

But for individual pages, I don’t think that’s something worth focusing on at all.”

So, they do use the data. They use it on their end, but not to rank individual pages.

What you can take from this, as it relates to Universal Search, is that Google will test different blocks of data for different types of queries to determine how users interact with them. It is very likely that Bing does something similar.

Most certainly they pick locations for possible placements, set limits on the number of different result types/databases, and determine starting points (think: templates for specific query types) for their processes, and then simply let machine learning take over, running slight variances or testing layouts on pages generated for unknown queries, or queries where new associations may be attained.

For example, a spike in a query that ties to a sudden rise in news stories related to the query could trigger the news carousel being inserted into the search results, provided that past similar instances produced a positive engagement signal, and it would remain as long as user engagement warranted it.

Query Data

It is virtually a given that a search engine would use their own query data to determine which sections to insert into the SERPs.

If a query like [pizza] has suggested queries like:

[Image: recommended searches related to "pizza"]

Implying that most such searchers are looking for restaurants, it makes sense that in a Universal Search structure the first organic result would not be a blue link but:

[Image: the pizza results page, led by a local restaurant listing rather than a blue link]

It is very much worth remembering that the goal of a search engine is to provide a single location where a user can access everything they are looking for.

At times this puts them in direct competition with themselves in some ways. Not that I think they mind losing traffic to another of their own properties.

Let’s take YouTube for example. Google’s systems will understand not just which YouTube videos are most popular but also which are watched through, when people eject, skip or close out, etc.

They can use this not just to understand which videos are likely to resonate on Google.com but also understand more deeply what supplemental content people are interested in when they search for more general queries.

I may search for [civil war], but that doesn’t mean I’m not also interested in the Battle of Antietam specifically.

So, I would suggest that these other databases do not simply influence the layouts we see in Universal Search; they can be, and likely are being, used to connect topics and information together, and thus impact the core search rankings themselves.

Takeaway

So, what does this all mean for you?

For one, you can use the machine learning systems of the search engines to assist in your content development strategies.

Sections you see appearing in Universal Search tell us a lot about the types and formats of content that users expect or engage with.

Also important is that devices and technology are changing rapidly. I suspect the idea of Universal Search is about to go through a dramatic transformation.

This is due in part to voice search, but I suspect it will have more to do with the push by Google to provide a solution rather than options.

A few well-placed filters could provide refinement that produces only a single result and many of these filters could be automatically applied based on known user preferences.

I’m not sure we’ll get to a single result in the next two to three years but I do suspect that we will see it for some queries and where the device lends itself to it.

If I query "weather", why would the results page not look like this:

[Image: a results page for "weather" showing just the weather answer]

In my eyes, this is the future of Universal Search.

Or, as I like to call it, search.

Categorized in Search Engine

[Source: This article was Published in thesun.co.uk By Sean Keach - Uploaded by AIRS Member: Anna K. Sasaki]

GOOGLE MAPS has invented a new feature that will warn you if a rogue taxi driver is taking you off-route.

It could put a stop to conmen drivers who take passengers out of their way to rack up journey charges.

 When you input a destination, you can choose new safety options

The "off-route alerts" will flag to users when you're sidetracked from a journey by more than 500 meters.

The feature was first revealed by tech site XDA Developers, who spotted it in the live version of Google Maps.

However, the feature appears to be stuck in "testing" right now, which means not everyone can use it.

But if it comes to Google Maps more generally, it could save Brits loads of cash.

One of the options lets you receive warnings if you go off-route

Google Maps will alert you if you've strayed more than 500 metres from the fastest route (Credit: XDA Developers / Google Maps)

First, simply select a journey you want to take when making a taxi ride.

Before you hit "Start", you'll see a new option called "Stay Safer" that you can press.

Inside you'll find another option to "get off-route alerts", which promises: "Get an alert if your taxi or ride goes off route."

When you start the journey, it will tell you if you're still on route.

And if you go off the route by more than 500 meters, you'll receive an alert on your phone.

That would prompt you to ask your driver why you're going the wrong way – and whether or not the route can be corrected.

But if the feature became popular, it could put rogue drivers off from even trying to illicitly extend your trip in the first place.

How to see Google's map tracking everywhere you've been

Here's what you need to know...

There are several ways to check your own Google Location History.

The easiest way is to follow the link to the Google Maps Timeline page:

This lets you see exactly where you've been on a given day, even tracking your methods of travel and the times you were at certain locations.

Alternatively, if you've got the Google Maps app, launch it and press the hamburger icon – three horizontal lines stacked on top of each other.

Then go to the Your Timeline tab, which will show places you've previously visited on a given day.

If you've had Google Location History turned on for a few years without realizing, this might be quite shocking.

Suddenly finding out that Google has an extremely detailed map of years of your real-world movements can seem creepy – so you might want to turn the feature off.

The good news is that it's possible to immediately turn Google Location History off at any time.

You can turn off Location History here:

However, to truly stop Google from tracking you, you'll also need to turn off Web & App Activity tracking.

You can see your tracked location markers here:

Unfortunately, these location markers are intermingled with a host of other information, so it's tricky to locate (and delete them).

To turn it off, simply click the above link then head to Activity Controls.

From there, you'll be able to turn off Web & App Activity tracking across all Google sites, apps and services.

Of course, some taxi drivers know shortcuts that can shave time off a Google Maps route, so don't immediately panic if you find yourself in a cab going the wrong way.

And it'll probably get on the nerves of seasoned cabbies who will hate being second-guessed by phone-wielding Brits.

It's not clear when Google will roll out the off-route alerts feature to all phones.

We've asked Google for comment and will update this story with any response.

Categorized in Search Engine

[Source: This article was Published in msn.com By JR Raphael - Uploaded by AIRS Member: Edna Thomas]

Google Maps is great for helping you find your way — or even helping you find your car — but the app can also help other people find you.

Maps has an easily overlooked feature for sharing your real-time whereabouts with someone so they can see exactly where you are, even if you’re moving, and then navigate to your location.

The best part? It’s all incredibly simple to do. The trick is knowing where to look.

Share your real-time location

When you want someone to be able to track your location:

  • Open the Maps app on your iOS or Android device
  • Tap the blue dot, which represents your current location, then select “Share location” from the menu that appears. (If it’s your first time using Maps for such a purpose, your phone may prompt you to authorize the app to access your contacts before continuing.)
  • If you want to share your location for a specific amount of time, select the “1-hour” option, and then use the blue plus and minus buttons to increase or decrease the time as needed
  • If you want to share your location indefinitely — until you manually turn it off — select the “Until you turn this off” option
  • On Android, select the person with whom you want to share your location from the list of suggested contacts or select an app (like Gmail or Messages) to send a private link. You can also opt to copy the link to your system clipboard and then paste it wherever you like.
  • On an iPhone, tap “Select People” to choose a person from your contacts, select “Message” to send a private link to someone in your messaging app, or select “More” to send a private link via another communication service. Your phone may prompt you to give Maps ongoing access to your location before it moves forward.
  • If you share your location within Maps itself — by selecting a contact as opposed to sending a link via an external app — the person with whom you are sharing your location will get a notification on their phone. In addition, when you select “Location sharing” in Maps’ side menu, you will see an icon on top for both you and the person you’re sharing with. Select the person’s icon, and a bar at the bottom of the screen will let you stop sharing, share your location again, or request that the person share their location with you.

To manually stop Maps from sharing your location:

  • Open the Maps app, and look for the “Sharing your location” bar at the bottom of the screen
  • Tap the “x” next to the line that says how and for how long your location is being shared

Share your trip’s progress

When you want someone to be able to see your location and estimated arrival time while you’re en route to a particular destination:

  • Open the Maps app, and start navigating to your destination
  • Swipe up on the bar at the bottom of the screen (where your remaining travel time is shown), then select “Share trip progress” from the menu that appears
  • Select the name of the person with whom you want to share your progress or select an app you want to use for sharing

If you want to stop sharing your progress before your trip is complete:

  • Swipe up again on the bar at the bottom of the screen
  • Select “Stop sharing” from the menu that appears

Categorized in Search Engine

[Source: This article was Published in itsfoss.com By   - Uploaded by AIRS Member: Jay Harris]

Brief: In this age of the internet, you can never be too careful with your privacy. Use these alternative search engines that do not track you.

Google – unquestionably the best search engine out there – makes use of powerful and intelligent algorithms (including A.I. implementations) to let users get the best out of a search engine with a personalized experience.

This sounds good until you start to live in a filter bubble. When you start seeing everything that ‘suits your taste’, you get detached from reality. Too much of anything is not good. Too much of personalization is harmful as well.

This is why one should get out of this filter bubble and see the world as it is. But how do you do that?

You know that Google sure as hell tracks a lot of information about your connection and the system when you perform a search and take an action within the search engine or use other Google services such as Gmail.

So, if Google keeps on tracking you, the simple answer would be to stop using Google for searching the web. But what would you use in place of Google? Microsoft’s Bing is no saint either.

So, to address the netizens concerned about their privacy while using a search engine, I have curated a list of privacy oriented alternative search engines to Google. 

Best 8 Privacy-Oriented Alternative Search Engines To Google

Do note that the alternatives mentioned in this article are not necessarily “better” than Google, but only focus on protecting users’ privacy. Here we go!

1. DuckDuckGo

DuckDuckGo is one of the most successful privacy oriented search engines that stands as an alternative to Google. The user experience offered by DuckDuckGo is commendable. I must say – “It’s unique in itself”.

DuckDuckGo, unlike Google, utilizes the traditional method of “sponsored links” to display the advertisements. The ads are not focused on you but only the topic you are searching for – so there is nothing that could generate a profile of you in any manner – thereby respecting your privacy.

Of course, DuckDuckGo’s search algorithm may not be the smartest around (because it has no idea who you are!). And, if you want to utilize one of the best privacy oriented alternative search engines to Google, you will have to forget about getting a personalized experience while searching for something.

The search results are simplified, with specific metadata. It lets you select a country to get the most relevant results you may be looking for. Also, when you type in a question or search for a fix, it might present you with an instant answer (fetched from the source).

You might miss quite a few functionalities (like filtering images by license), but that is an obvious trade-off to protect your privacy.

2. Qwant

Qwant is probably one of the most loved privacy oriented search engines after DuckDuckGo. It ensures neutrality, privacy, and digital freedom while you search for something on the Internet.

If you thought privacy-oriented search engines generally tend to offer a very casual user experience, you need to rethink that after trying out Qwant. It is a very dynamic search engine, with trending topics and news stories organized very well. It may not offer a personalized experience (given that it does not track you), but the rich user experience it offers partially compensates for that.

Qwant is a very useful search engine alternative to Google. It lists out all the web resources, social feeds, news, and images on the topic you search for.

3. Startpage

Startpage is a good initiative as a privacy-oriented search engine alternative to Google. However, it may not be the best one around. The UI is very similar to Google’s when displaying the search results (irrespective of the functionalities offered). It may not be a complete rip-off, but it is not very impressive – everyone has their own taste.

To protect your privacy, it gives you the choice: you can visit web pages either through its proxy or directly. It’s all your choice. You also get to change the theme of the search engine. Well, I did enjoy my switch to the “Night” theme. There’s also an interesting option that lets you generate a custom URL which keeps your settings intact.

4. Privatelee

Privatelee is another search engine specifically tailored to protect your online privacy. It does not track your search results or behavior in any way. However, you might get a lot of irrelevant results after the first ten matched results.

The search engine isn’t perfect for finding hidden treasure on the Internet; it is more for general queries. Privatelee also supports power commands – more like shortcuts – which help you search for exactly what you want in an efficient manner. It will save a lot of your time on pretty simple tasks such as searching for a movie on Netflix. If you are looking for a super fast, privacy-oriented search engine for common queries, Privatelee would be a great alternative to Google.

5. Swisscows

Well, it isn’t a dairy farm’s portfolio site but a privacy-oriented search engine that serves as an alternative to Google. You may have known it as Hulbee, but it has recently moved its operation to a new domain. Nothing has really changed except for the name and domain of the search engine; it works the same way it did as Hulbee.com.

Swisscows utilizes Bing to deliver the search results for your query. When you search for something, you will notice a tag cloud on the left sidebar, which is useful if you need to know about related key terms and facts. The design language is a lot simpler but one of a kind among the other search engines out there. You get to filter the results according to the date, but that’s about it – there are no more advanced options to tweak your search results. It utilizes a tile search technique (a semantic technology) to fetch the best results for your queries. The search algorithm makes sure that it is a family-friendly search engine, with pornography and violence ruled out completely.

6. searX

searX is an interesting search engine – technically defined as a “metasearch engine”. In other words, it utilizes other search engines and aggregates the results for your query in one place. It does not store your search data, and it is an open source metasearch engine at the same time. You can review the source code, contribute, or even customize it as your own metasearch engine hosted on your own server.

If you are fond of utilizing torrent clients to download stuff, this search engine will help you find the magnet links to the exact files when you search for a file through searX. When you access the settings (preferences) for searX, you will find a lot of advanced things to tweak from your end. General tweaks include adding/removing search engines, rewriting HTTP to HTTPS, removing tracker arguments from URLs, and so on. It’s all yours to control. The user experience may not be the best here, but if you want to utilize a lot of search engines while keeping your privacy in check, searX is a great alternative to Google.

7. Peekier

Peekier is another fascinating privacy-oriented search engine. Unlike the previous one, it is not a metasearch engine but implements its own algorithm. It may not be the fastest search engine I’ve ever used, but it is an interesting take on how search engines can evolve in the near future. When you type in a search query, it not only fetches a list of results but also displays preview images of the web pages listed. So, you get a “peek” at what you seek. While the search engine does not store your data, the web portals you visit do track you.

So, in order to avoid that to an extent, Peekier accesses the site and generates a preview image so that you can decide whether to head into the site or not (without needing to visit it yourself). In that way, you allow fewer websites to know about you – mostly the ones you trust.

8. MetaGer

MetaGer is yet another open source metasearch engine. However, unlike the others, it takes privacy more seriously and enforces the use of the Tor network for anonymous access to search results from a variety of search engines. Some search engines that claim to protect your privacy may share your information with the government (whatever they record) because their servers are bound to US legal procedures. With MetaGer, however, the Germany-based servers protect even the anonymous data recorded while you use MetaGer.

They do house a small number of advertisements (without trackers, of course), but you can get rid of those as well by joining as a member of the non-profit organization SUMA-EV, which sponsors the MetaGer search engine.

Wrapping Up

If you are concerned about your privacy, you should also take a look at some of the best privacy-focused Linux distributions. Among the search engine alternatives mentioned here, DuckDuckGo is my personal favorite. But it really comes down to your preference and whom you choose to trust while surfing the Internet.

Do you know some more interesting (but good) privacy-oriented alternative search engines to Google?

Categorized in Search Engine

[Source: This article was Published in moz.com  - Uploaded by AIRS Member: Barbara larson]

As we mentioned in Chapter 1, search engines are answer machines. They exist to discover, understand, and organize the internet's content in order to offer the most relevant results to the questions searchers are asking.

In order to show up in search results, your content needs to first be visible to search engines. It's arguably the most important piece of the SEO puzzle: if your site can't be found, there's no way you'll ever show up in the SERPs (Search Engine Results Pages).

How do search engines work?

Search engines have three primary functions:

  1. Crawl: Scour the Internet for content, looking over the code/content for each URL they find.
  2. Index: Store and organize the content found during the crawling process. Once a page is in the index, it’s in the running to be displayed as a result to relevant queries.
  3. Rank: Provide the pieces of content that will best answer a searcher's query, which means that results are ordered by most relevant to least relevant.

What is search engine crawling?

Crawling is the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content. Content can vary — it could be a webpage, an image, a video, a PDF, etc. — but regardless of the format, content is discovered by links.

What's that word mean?

Having trouble with any of the definitions in this section? Our SEO glossary has chapter-specific definitions to help you stay up-to-speed.

Googlebot starts out by fetching a few web pages, and then follows the links on those webpages to find new URLs. By hopping along this path of links, the crawler is able to find new content and add it to their index called Caffeine — a massive database of discovered URLs — to later be retrieved when a searcher is seeking information that the content on that URL is a good match for.
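To make the idea concrete, here is a toy crawl loop: fetch a page, pull out its links, and queue the ones on the same site. This is only an illustrative sketch in Python, nothing like Googlebot's actual implementation, and the start URL is a placeholder.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def crawl(start_url, max_pages=10):
    site = urlparse(start_url).netloc
    queue, seen = deque([start_url]), set()
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip URLs that error out
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)          # resolve relative links
            if urlparse(absolute).netloc == site:  # stay on the same site
                queue.append(absolute)
    return seen

print(crawl("https://www.example.com/"))  # placeholder start URL
```

A real crawler also respects robots.txt, handles canonical and noindex signals, renders JavaScript, and schedules revisits, which is exactly the machinery the rest of this chapter is about.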

What is a search engine index?

Search engines process and store information they find in an index, a huge database of all the content they’ve discovered and deem good enough to serve up to searchers.

Search engine ranking

When someone performs a search, search engines scour their index for highly relevant content and then order that content in the hopes of solving the searcher's query. This ordering of search results by relevance is known as ranking. In general, you can assume that the higher a website is ranked, the more relevant the search engine believes that site is to the query.

It’s possible to block search engine crawlers from part or all of your site, or instruct search engines to avoid storing certain pages in their index. While there can be reasons for doing this, if you want your content found by searchers, you have to first make sure it’s accessible to crawlers and is indexable. Otherwise, it’s as good as invisible.

By the end of this chapter, you’ll have the context you need to work with the search engine, rather than against it!

 

In SEO, not all search engines are equal

Many beginners wonder about the relative importance of particular search engines. Most people know that Google has the largest market share, but how important is it to optimize for Bing, Yahoo, and others? The truth is that despite the existence of more than 30 major web search engines, the SEO community really only pays attention to Google. Why? The short answer is that Google is where the vast majority of people search the web. If we include Google Images, Google Maps, and YouTube (a Google property), more than 90% of web searches happen on Google — that's nearly 20 times Bing and Yahoo combined.

Crawling: Can search engines find your pages?

As you've just learned, making sure your site gets crawled and indexed is a prerequisite to showing up in the SERPs. If you already have a website, it might be a good idea to start off by seeing how many of your pages are in the index. This will yield some great insights into whether Google is crawling and finding all the pages you want it to, and none that you don’t.

One way to check your indexed pages is "site:yourdomain.com", an advanced search operator. Head to Google and type "site:yourdomain.com" into the search bar. This will return results Google has in its index for the site specified:

A screenshot of a site:moz.com search in Google, showing the number of results below the search box.

The number of results Google displays (see “About XX results” above) isn't exact, but it does give you a solid idea of which pages are indexed on your site and how they are currently showing up in search results.

For more accurate results, monitor and use the Index Coverage report in Google Search Console. You can sign up for a free Google Search Console account if you don't currently have one. With this tool, you can submit sitemaps for your site and monitor how many submitted pages have actually been added to Google's index, among other things.

If you're not showing up anywhere in the search results, there are a few possible reasons why:

  • Your site is brand new and hasn't been crawled yet.
  • Your site isn't linked to from any external websites.
  • Your site's navigation makes it hard for a robot to crawl it effectively.
  • Your site contains some basic code called crawler directives that is blocking search engines.
  • Your site has been penalized by Google for spammy tactics.
 

Tell search engines how to crawl your site

If you used Google Search Console or the “site:domain.com” advanced search operator and found that some of your important pages are missing from the index and/or some of your unimportant pages have been mistakenly indexed, there are some optimizations you can implement to better direct Googlebot how you want your web content crawled. Telling search engines how to crawl your site can give you better control of what ends up in the index.

Most people think about making sure Google can find their important pages, but it’s easy to forget that there are likely pages you don’t want Googlebot to find. These might include things like old URLs that have thin content, duplicate URLs (such as sort-and-filter parameters for e-commerce), special promo code pages, staging or test pages, and so on.

To direct Googlebot away from certain pages and sections of your site, use robots.txt.

Robots.txt

Robots.txt files are located in the root directory of websites (ex. yourdomain.com/robots.txt) and suggest which parts of your site search engines should and shouldn't crawl, as well as the speed at which they crawl your site, via specific robots.txt directives.

How Googlebot treats robots.txt files

  • If Googlebot can't find a robots.txt file for a site, it proceeds to crawl the site.
  • If Googlebot finds a robots.txt file for a site, it will usually abide by the suggestions and proceed to crawl the site.
  • If Googlebot encounters an error while trying to access a site’s robots.txt file and can't determine if one exists or not, it won't crawl the site.
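The directives themselves (User-agent, Disallow, Allow, and so on) are plain text, so it is easy to test how a well-behaved crawler would read yours. Here is a minimal sketch using Python's standard library, with a hypothetical domain and paths; Googlebot's own parser may treat edge cases differently:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # hypothetical site
rp.read()  # fetch and parse the live file

# Ask whether a given user-agent may fetch specific URLs.
for path in ("/", "/blog/seo-basics", "/admin/"):
    url = f"https://www.example.com{path}"
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "disallowed"
    print(f"{path}: {verdict}")
```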
 

Optimize for crawl budget!

Crawl budget is the average number of URLs Googlebot will crawl on your site before leaving, so crawl budget optimization ensures that Googlebot isn’t wasting time crawling through your unimportant pages at risk of ignoring your important pages. Crawl budget is most important on very large sites with tens of thousands of URLs, but it’s never a bad idea to block crawlers from accessing the content you definitely don’t care about. Just make sure not to block a crawler’s access to pages you’ve added other directives on, such as canonical or noindex tags. If Googlebot is blocked from a page, it won’t be able to see the instructions on that page.

Not all web robots follow robots.txt. People with bad intentions (e.g., e-mail address scrapers) build bots that don't follow this protocol. In fact, some bad actors use robots.txt files to find where you’ve located your private content. Although it might seem logical to block crawlers from private pages such as login and administration pages so that they don’t show up in the index, placing the location of those URLs in a publicly accessible robots.txt file also means that people with malicious intent can more easily find them. It’s better to NoIndex these pages and gate them behind a login form rather than place them in your robots.txt file.

You can read more details about this in the robots.txt portion of our Learning Center.

Defining URL parameters in GSC

Some sites (most common with e-commerce) make the same content available on multiple different URLs by appending certain parameters to URLs. If you’ve ever shopped online, you’ve likely narrowed down your search via filters. For example, you may search for “shoes” on Amazon, and then refine your search by size, color, and style. Each time you refine, the URL changes slightly:

https://www.example.com/products/women/dresses/green.htm

https://www.example.com/products/women?category=dresses&color=green

https://example.com/shopindex.php?product_id=32&highlight=green+dress&cat_id=1&sessionid=123&affid=43

How does Google know which version of the URL to serve to searchers? Google does a pretty good job at figuring out the representative URL on its own, but you can use the URL Parameters feature in Google Search Console to tell Google exactly how you want them to treat your pages. If you use this feature to tell Googlebot “crawl no URLs with ____ parameter,” then you’re essentially asking to hide this content from Googlebot, which could result in the removal of those pages from search results. That’s what you want if those parameters create duplicate pages, but not ideal if you want those pages to be indexed.
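To see why parameterized URLs multiply into duplicates, here is a hypothetical sketch that strips "noise" parameters and collapses the variants into one form. The parameter names are invented for the example, and this only illustrates the duplication problem; it is not how Google's own canonicalization works.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

NOISE_PARAMS = {"sessionid", "affid", "utm_source", "utm_medium"}  # made-up list

def normalize(url):
    parts = urlparse(url)
    # Keep only meaningful parameters, in a stable order.
    kept = sorted((k, v) for k, v in parse_qsl(parts.query) if k not in NOISE_PARAMS)
    return urlunparse(parts._replace(query=urlencode(kept)))

variants = [
    "https://www.example.com/products/women?category=dresses&color=green",
    "https://www.example.com/products/women?color=green&category=dresses&sessionid=123",
    "https://www.example.com/products/women?category=dresses&color=green&affid=43",
]
print({normalize(u) for u in variants})  # all three collapse to a single URL
```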

Can crawlers find all your important content?

Now that you know some tactics for ensuring search engine crawlers stay away from your unimportant content, let’s learn about the optimizations that can help Googlebot find your important pages.

Sometimes a search engine will be able to find parts of your site by crawling, but other pages or sections might be obscured for one reason or another. It's important to make sure that search engines are able to discover all the content you want indexed, and not just your homepage.

Ask yourself this: Can the bot crawl through your website, and not just to it?

A boarded-up door, representing a site that can be crawled to but not crawled through.

Is your content hidden behind login forms?

If you require users to log in, fill out forms, or answer surveys before accessing certain content, search engines won't see those protected pages. A crawler is definitely not going to log in.

Are you relying on search forms?

Robots cannot use search forms. Some individuals believe that if they place a search box on their site, search engines will be able to find everything that their visitors search for.

Is text hidden within non-text content?

Non-text media forms (images, video, GIFs, etc.) should not be used to display text that you wish to be indexed. While search engines are getting better at recognizing images, there's no guarantee they will be able to read and understand the text within them just yet. It's always best to add text within the markup of your webpage.

Can search engines follow your site navigation?

Just as a crawler needs to discover your site via links from other sites, it needs a path of links on your own site to guide it from page to page. If you’ve got a page you want search engines to find but it isn’t linked to from any other pages, it’s as good as invisible. Many sites make the critical mistake of structuring their navigation in ways that are inaccessible to search engines, hindering their ability to get listed in search results.

A depiction of how pages that are linked to can be found by crawlers, whereas a page not linked to in your site navigation exists as an island, undiscoverable.

Common navigation mistakes that can keep crawlers from seeing all of your site:

  • Having a mobile navigation that shows different results than your desktop navigation
  • Any type of navigation where the menu items are not in the HTML, such as JavaScript-enabled navigations. Google has gotten much better at crawling and understanding Javascript, but it’s still not a perfect process. The more surefire way to ensure something gets found, understood, and indexed by Google is by putting it in the HTML.
  • Personalization, or showing unique navigation to a specific type of visitor versus others, could appear to be cloaking to a search engine crawler
  • Forgetting to link to a primary page on your website through your navigation — remember, links are the paths crawlers follow to new pages!

This is why it's essential that your website has clear navigation and helpful URL folder structures.

Do you have clean information architecture?

Information architecture is the practice of organizing and labeling content on a website to improve efficiency and findability for users. The best information architecture is intuitive, meaning that users shouldn't have to think very hard to flow through your website or to find something.

Are you utilizing sitemaps?

A sitemap is just what it sounds like: a list of URLs on your site that crawlers can use to discover and index your content. One of the easiest ways to ensure Google is finding your highest priority pages is to create a file that meets Google's standards and submit it through Google Search Console. While submitting a sitemap doesn’t replace the need for good site navigation, it can certainly help crawlers follow a path to all of your important pages.

Ensure that you’ve only included URLs that you want indexed by search engines, and be sure to give crawlers consistent directions. For example, don’t include a URL in your sitemap if you’ve blocked that URL via robots.txt, and don’t include URLs in your sitemap that are duplicates rather than the preferred, canonical version (we’ll provide more information on canonicalization in Chapter 5!).

Learn more about XML sitemaps 
If your site doesn't have any other sites linking to it, you still might be able to get it indexed by submitting your XML sitemap in Google Search Console. There's no guarantee they'll include a submitted URL in their index, but it's worth a try!
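Under the hood, an XML sitemap is just a list of <url> entries with <loc> values. Here is a minimal sketch that writes one using Python's standard library; the URLs and file name are placeholders, and you should list only the canonical, indexable versions of your pages:

```python
from xml.etree.ElementTree import Element, SubElement, ElementTree

pages = [
    "https://www.example.com/",
    "https://www.example.com/products/women/dresses/green.htm",
    "https://www.example.com/blog/seo-basics",
]

# Build <urlset><url><loc>…</loc></url>…</urlset>
urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in pages:
    SubElement(SubElement(urlset, "url"), "loc").text = page

ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```

The resulting sitemap.xml is the kind of file you would then submit through Google Search Console.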

Are crawlers getting errors when they try to access your URLs?

In the process of crawling the URLs on your site, a crawler may encounter errors. You can go to Google Search Console’s “Crawl Errors” report to detect URLs on which this might be happening - this report will show you server errors and not found errors. Server log files can also show you this, as well as a treasure trove of other information such as crawl frequency, but because accessing and dissecting server log files is a more advanced tactic, we won’t discuss it at length in the Beginner’s Guide, although you can learn more about it here.

Before you can do anything meaningful with the crawl error report, it’s important to understand server errors and "not found" errors.

4xx Codes: When search engine crawlers can’t access your content due to a client error

4xx errors are client errors, meaning the requested URL contains bad syntax or cannot be fulfilled. One of the most common 4xx errors is the “404 – not found” error. These might occur because of a URL typo, deleted page, or broken redirect, just to name a few examples. When search engines hit a 404, they can’t access the URL. When users hit a 404, they can get frustrated and leave.

5xx Codes: When search engine crawlers can’t access your content due to a server error

5xx errors are server errors, meaning the server the web page is located on failed to fulfill the searcher or search engine’s request to access the page. In Google Search Console’s “Crawl Error” report, there is a tab dedicated to these errors. These typically happen because the request for the URL timed out, so Googlebot abandoned the request. View Google’s documentation to learn more about fixing server connectivity issues. 

Thankfully, there is a way to tell both searchers and search engines that your page has moved — the 301 (permanent) redirect.

 

Create custom 404 pages!

Customize your 404 pages by adding in links to important pages on your site, a site search feature, and even contact information. This should make it less likely that visitors will bounce off your site when they hit a 404.
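How you serve a custom 404 page depends on your platform. As one hedged example, on an Apache server you could point the 404 status at your custom page with a single line in your .htaccess file (the path /custom-404.html is a placeholder):

 # Serve this page whenever a request results in a 404
 ErrorDocument 404 /custom-404.html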

Say you move a page from example.com/young-dogs/ to example.com/puppies/. Search engines and users need a bridge to cross from the old URL to the new. That bridge is a 301 redirect.

Here’s what happens when you do (and don’t) implement a 301:

  • Link equity: A 301 transfers link equity from the page’s old location to the new URL. Without a 301, the authority from the previous URL is not passed on to the new version of the URL.
  • Indexing: A 301 helps Google find and index the new version of the page. The presence of 404 errors on your site alone doesn't harm search performance, but letting ranking or trafficked pages 404 can result in them falling out of the index, with rankings and traffic going with them. Yikes!
  • User experience: A 301 ensures users find the page they’re looking for. Without one, visitors who click on dead links land on error pages instead of the intended page, which can be frustrating.

The 301 status code itself means that the page has permanently moved to a new location, so avoid redirecting URLs to irrelevant pages — URLs where the old URL’s content doesn’t actually live. If a page is ranking for a query and you 301 it to a URL with different content, it might drop in rank position because the content that made it relevant to that particular query isn't there anymore. 301s are powerful — move URLs responsibly!

You also have the option of 302 redirecting a page, but this should be reserved for temporary moves and in cases where passing link equity isn’t as big of a concern. 302s are kind of like a road detour. You're temporarily siphoning traffic through a certain route, but it won't be like that forever.

Watch out for redirect chains!

It can be difficult for Googlebot to reach your page if it has to go through multiple redirects. Google calls these “redirect chains” and they recommend limiting them as much as possible. If you redirect example.com/1 to example.com/2, then later decide to redirect it to example.com/3, it’s best to eliminate the middleman and simply redirect example.com/1 to example.com/3.
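As a sketch of what that cleanup looks like in practice (assuming an Apache server and the example URLs above), you would point each old URL straight at the final destination in your .htaccess file:

 # Before: /1 redirected to /2, which redirected to /3 (a chain)
 # After: every old URL points directly at the final destination
 Redirect 301 /1 https://www.example.com/3
 Redirect 301 /2 https://www.example.com/3

A temporary move would use Redirect 302 instead, as described above.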

Once you’ve ensured your site is optimized for crawlability, the next order of business is to make sure it can be indexed.

Indexing: How do search engines interpret and store your pages?

Once you’ve ensured your site has been crawled, the next order of business is to make sure it can be indexed. That’s right — just because your site can be discovered and crawled by a search engine doesn’t necessarily mean that it will be stored in their index. In the previous section on crawling, we discussed how search engines discover your web pages. The index is where your discovered pages are stored. After a crawler finds a page, the search engine renders it just like a browser would. In the process of doing so, the search engine analyzes that page's contents. All of that information is stored in its index.

A robot storing a book in a library.

Read on to learn about how indexing works and how you can make sure your site makes it into this all-important database.

Can I see how a Googlebot crawler sees my pages?

Yes, the cached version of your page will reflect a snapshot of the last time Googlebot crawled it.

Google crawls and caches web pages at different frequencies. More established, well-known sites that post frequently like https://www.nytimes.com will be crawled more frequently than the much-less-famous website for Roger the Mozbot’s side hustle, http://www.rogerlovescupcakes.com (if only it were real…)

You can view what your cached version of a page looks like by clicking the drop-down arrow next to the URL in the SERP and choosing "Cached":

A screenshot of where to see cached results in the SERPs.

You can also view the text-only version of your site to determine if your important content is being crawled and cached effectively.

Are pages ever removed from the index?

Yes, pages can be removed from the index! Some of the main reasons why a URL might be removed include:

  • The URL is returning a "not found" error (4XX) or server error (5XX) – This could be accidental (the page was moved and a 301 redirect was not set up) or intentional (the page was deleted and 404ed in order to get it removed from the index)
  • The URL had a noindex meta tag added – This tag can be added by site owners to instruct the search engine to omit the page from its index.
  • The URL has been manually penalized for violating the search engine’s Webmaster Guidelines and, as a result, was removed from the index.
  • The URL has been blocked from crawling with the addition of a password required before visitors can access the page.

If you believe that a page on your website that was previously in Google’s index is no longer showing up, you can use the URL Inspection tool to learn the status of the page, or use Fetch as Google which has a "Request Indexing" feature to submit individual URLs to the index. (Bonus: GSC’s “fetch” tool also has a “render” option that allows you to see if there are any issues with how Google is interpreting your page).

Tell search engines how to index your site

Robots meta directives

Meta directives (or "meta tags") are instructions you can give to search engines regarding how you want your web page to be treated.

You can tell search engine crawlers things like "do not index this page in search results" or "don’t pass any link equity to any on-page links". These instructions are executed via robots meta tags in the <head> of your HTML pages (most commonly used) or via the X-Robots-Tag in the HTTP header.

Robots meta tag

The robots meta tag can be used within the <head> of the HTML of your webpage. It can exclude all or specific search engines. The following are the most common meta directives, along with the situations in which you might apply them.

index/noindex tells the engines whether the page should be crawled and kept in a search engine's index for retrieval. If you opt to use "noindex," you’re communicating to crawlers that you want the page excluded from search results. By default, search engines assume they can index all pages, so using the "index" value is unnecessary.

  • When you might use: You might opt to mark a page as "noindex" if you’re trying to trim thin pages from Google’s index of your site (ex: user generated profile pages) but you still want them accessible to visitors.

follow/nofollow tells search engines whether links on the page should be followed or nofollowed. “Follow” results in bots following the links on your page and passing link equity through to those URLs. Or, if you elect to employ "nofollow," the search engines will not follow or pass any link equity through to the links on the page. By default, all pages are assumed to have the "follow" attribute.

  • When you might use: nofollow is often used together with noindex when you’re trying to prevent a page from being indexed as well as prevent the crawler from following links on the page.

noarchive is used to restrict search engines from saving a cached copy of the page. By default, the engines will maintain visible copies of all pages they have indexed, accessible to searchers through the cached link in the search results.

  • When you might use: If you run an e-commerce site and your prices change regularly, you might consider the noarchive tag to prevent searchers from seeing outdated pricing.

Here’s an example of a meta robots noindex, nofollow tag:

 <meta name="robots" content="noindex, nofollow" />

This example excludes all search engines from indexing the page and from following any on-page links. If you want to exclude multiple crawlers, like googlebot and bing for example, it’s okay to use multiple robot exclusion tags.
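As a hedged sketch, crawler-specific tags use the bot's name in place of "robots" (googlebot and bingbot are the user-agent names Google and Bing document for their main crawlers):

 <!-- Applies only to Google's crawler -->
 <meta name="googlebot" content="noindex, nofollow" />
 <!-- Applies only to Bing's crawler -->
 <meta name="bingbot" content="noindex, nofollow" />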

 

Meta directives affect indexing, not crawling

Googlebot needs to crawl your page in order to see its meta directives, so if you’re trying to prevent crawlers from accessing certain pages, meta directives are not the way to do it. Robots tags must be crawled to be respected.

X-Robots-Tag

The X-Robots-Tag is used within the HTTP header of the response for a URL. It provides more flexibility and functionality than meta tags if you want to block search engines at scale, because you can use regular expressions, block non-HTML files, and apply sitewide noindex tags.

For example, you could easily exclude entire folders or file types (like moz.com/no-bake/old-recipes-to-noindex):

 Header set X-Robots-Tag "noindex, nofollow"

The directives used in a robots meta tag can also be used in an X-Robots-Tag.

Or specific file types (like PDFs):

 Header set X-Robots-Tag "noindex, nofollow"
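As a hedged sketch, on an Apache server with mod_headers enabled, that directive could be scoped to just PDF files like this:

 # Apply the header only to responses for .pdf files
 <FilesMatch "\.pdf$">
   Header set X-Robots-Tag "noindex, nofollow"
 </FilesMatch>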

For more information on Meta Robot Tags, explore Google’s Robots Meta Tag Specifications.

 

WordPress tip:

In Dashboard > Settings > Reading, make sure the "Search Engine Visibility" box is not checked. Checking that box tells WordPress to block search engines from visiting your site via your robots.txt file!

Understanding the different ways you can influence crawling and indexing will help you avoid the common pitfalls that can prevent your important pages from getting found.

Ranking: How do search engines rank URLs?

How do search engines ensure that when someone types a query into the search bar, they get relevant results in return? That process is known as ranking, or the ordering of search results by most relevant to least relevant to a particular query.

An artistic interpretation of ranking, with three dogs sitting pretty on first, second, and third-place pedestals.

To determine relevance, search engines use algorithms, a process or formula by which stored information is retrieved and ordered in meaningful ways. These algorithms have gone through many changes over the years in order to improve the quality of search results. Google, for example, makes algorithm adjustments every day — some of these updates are minor quality tweaks, whereas others are core/broad algorithm updates deployed to tackle a specific issue, like Penguin to tackle link spam. Check out our Google Algorithm Change History for a list of both confirmed and unconfirmed Google updates going back to the year 2000.

Why does the algorithm change so often? Is Google just trying to keep us on our toes? While Google doesn’t always reveal specifics as to why they do what they do, we do know that Google’s aim when making algorithm adjustments is to improve overall search quality. That’s why, in response to algorithm update questions, Google will answer with something along the lines of: "We’re making quality updates all the time." This means that, if your site suffered after an algorithm adjustment, you should compare it against Google’s Quality Guidelines or Search Quality Rater Guidelines; both are very telling in terms of what search engines want.

What do search engines want?

Search engines have always wanted the same thing: to provide useful answers to searchers' questions in the most helpful formats. If that’s true, then why does it appear that SEO is different now than in years past?

Think about it in terms of someone learning a new language.

At first, their understanding of the language is very rudimentary — “See Spot Run.” Over time, their understanding starts to deepen, and they learn semantics — the meaning behind language and the relationship between words and phrases. Eventually, with enough practice, the student knows the language well enough to even understand nuance, and is able to provide answers to even vague or incomplete questions.

When search engines were just beginning to learn our language, it was much easier to game the system by using tricks and tactics that actually go against quality guidelines. Take keyword stuffing, for example. If you wanted to rank for a particular keyword like “funny jokes,” you might add the words “funny jokes” a bunch of times onto your page, and make it bold, in hopes of boosting your ranking for that term:

Welcome to funny jokes! We tell the funniest jokes in the world. Funny jokes are fun and crazy. Your funny joke awaits. Sit back and read funny jokes because funny jokes can make you happy and funnier. Some funny favorite funny jokes.

This tactic made for terrible user experiences, and instead of laughing at funny jokes, people were bombarded by annoying, hard-to-read text. It may have worked in the past, but this is never what search engines wanted.

The role links play in SEO

When we talk about links, we could mean two things. Backlinks or "inbound links" are links from other websites that point to your website, while internal links are links on your own site that point to your other pages (on the same site).

A depiction of how inbound links and internal links work.

Links have historically played a big role in SEO. Very early on, search engines needed help figuring out which URLs were more trustworthy than others to help them determine how to rank search results. Calculating the number of links pointing to any given site helped them do this.

Backlinks work very similarly to real-life WoM (Word-of-Mouth) referrals. Let’s take a hypothetical coffee shop, Jenny’s Coffee, as an example:

  • Referrals from others = good sign of authority
    • Example: Many different people have all told you that Jenny’s Coffee is the best in town
  • Referrals from yourself = biased, so not a good sign of authority
    • Example: Jenny claims that Jenny’s Coffee is the best in town
  • Referrals from irrelevant or low-quality sources = not a good sign of authority and could even get you flagged for spam
    • Example: Jenny paid to have people who have never visited her coffee shop tell others how good it is.
  • No referrals = unclear authority
    • Example: Jenny’s Coffee might be good, but you’ve been unable to find anyone who has an opinion so you can’t be sure.

This is why PageRank was created. PageRank (part of Google's core algorithm) is a link analysis algorithm named after one of Google's founders, Larry Page. PageRank estimates the importance of a web page by measuring the quality and quantity of links pointing to it. The assumption is that the more relevant, important, and trustworthy a web page is, the more links it will have earned.

The more natural backlinks you have from high-authority (trusted) websites, the better your odds are to rank higher within search results.

The role content plays in SEO

There would be no point to links if they didn’t direct searchers to something. That something is content! Content is more than just words; it’s anything meant to be consumed by searchers — there’s video content, image content, and of course, text. If search engines are answer machines, content is the means by which the engines deliver those answers.

Any time someone performs a search, there are thousands of possible results, so how do search engines decide which pages the searcher is going to find valuable? A big part of determining where your page will rank for a given query is how well the content on your page matches the query’s intent. In other words, does this page match the words that were searched and help fulfill the task the searcher was trying to accomplish?

Because of this focus on user satisfaction and task accomplishment, there are no strict benchmarks for how long your content should be, how many times it should contain a keyword, or what you put in your header tags. All of those can play a role in how well a page performs in search, but the focus should be on the users who will be reading the content.

Today, with hundreds or even thousands of ranking signals, the top three have stayed fairly consistent: links to your website (which serve as third-party credibility signals), on-page content (quality content that fulfills a searcher’s intent), and RankBrain.

What is RankBrain?

RankBrain is the machine learning component of Google’s core algorithm. Machine learning is a technique in which a computer program continues to improve its predictions over time through new observations and training data. In other words, it’s always learning, and because it’s always learning, search results should be constantly improving.

For example, if RankBrain notices a lower ranking URL providing a better result to users than the higher ranking URLs, you can bet that RankBrain will adjust those results, moving the more relevant result higher and demoting the lesser relevant pages as a byproduct.

An image showing how results can change and are volatile enough to show different rankings even hours later.

Like most things with the search engine, we don’t know exactly what comprises RankBrain, but apparently, neither do the folks at Google.

What does this mean for SEOs?

Because Google will continue leveraging RankBrain to promote the most relevant, helpful content, we need to focus on fulfilling searcher intent more than ever before. Provide the best possible information and experience for searchers who might land on your page, and you’ve taken a big first step to performing well in a RankBrain world.

Engagement metrics: correlation, causation, or both?

With Google rankings, engagement metrics are most likely part correlation and part causation.

When we say engagement metrics, we mean data that represents how searchers interact with your site from search results. This includes things like:

  • Clicks (visits from search)
  • Time on page (amount of time the visitor spent on a page before leaving it)
  • Bounce rate (the percentage of all website sessions where users viewed only one page)
  • Pogo-sticking (clicking on an organic result and then quickly returning to the SERP to choose another result)

Many tests, including Moz’s own ranking factor survey, have indicated that engagement metrics correlate with higher ranking, but causation has been hotly debated. Are good engagement metrics just indicative of highly ranked sites? Or are sites ranked highly because they possess good engagement metrics?

What Google has said

While they’ve never used the term “direct ranking signal,” Google has been clear that they absolutely use click data to modify the SERP for particular queries.

According to Google’s former Chief of Search Quality, Udi Manber:

“The ranking itself is affected by the click data. If we discover that, for a particular query, 80% of people click on #2 and only 10% click on #1, after a while we figure out probably #2 is the one people want, so we’ll switch it.”

Another comment from former Google engineer Edmond Lau corroborates this:

“It’s pretty clear that any reasonable search engine would use click data on their own results to feed back into ranking to improve the quality of search results. The actual mechanics of how click data is used is often proprietary, but Google makes it obvious that it uses click data with its patents on systems like rank-adjusted content items.”

Because Google needs to maintain and improve search quality, it seems inevitable that engagement metrics are more than correlation, but it would appear that Google falls short of calling engagement metrics a “ranking signal” because those metrics are used to improve search quality, and the rank of individual URLs is just a byproduct of that.

What tests have confirmed

Various tests have confirmed that Google will adjust SERP order in response to searcher engagement:

  • Rand Fishkin’s 2014 test resulted in a #7 result moving up to the #1 spot after getting around 200 people to click on the URL from the SERP. Interestingly, ranking improvement seemed to be isolated to the location of the people who visited the link. The rank position spiked in the US, where many participants were located, whereas it remained lower on the page in Google Canada, Google Australia, etc.
  • Larry Kim’s comparison of top pages and their average dwell time pre- and post-RankBrain seemed to indicate that the machine-learning component of Google’s algorithm demotes the rank position of pages that people don’t spend as much time on.
  • Darren Shaw’s testing has shown user behavior’s impact on local search and map pack results as well.

Since user engagement metrics are clearly used to adjust the SERPs for quality, and rank position changes as a byproduct, it’s safe to say that SEOs should optimize for engagement. Engagement doesn’t change the objective quality of your web page, but rather your value to searchers relative to other results for that query. That’s why, after no changes to your page or its backlinks, it could decline in rankings if searchers’ behavior indicates they like other pages better.

In terms of ranking web pages, engagement metrics act like a fact-checker. Objective factors such as links and content first rank the page, then engagement metrics help Google adjust if they didn’t get it right.

The evolution of search results

Back when search engines lacked a lot of the sophistication they have today, the term “10 blue links” was coined to describe the flat structure of the SERP. Any time a search was performed, Google would return a page with 10 organic results, each in the same format.

A screenshot of what a 10-blue-links SERP looks like.

In this search landscape, holding the #1 spot was the holy grail of SEO. But then something happened. Google began adding results in new formats on its search result pages, called SERP features. Some of these SERP features include:

  • Paid advertisements
  • Featured snippets
  • People Also Ask boxes
  • Local (map) pack
  • Knowledge panel
  • Sitelinks

And Google is adding new ones all the time. They even experimented with “zero-result SERPs,” a phenomenon where only one result from the Knowledge Graph was displayed on the SERP with no results below it except for an option to “view more results.”

The addition of these features caused some initial panic for two main reasons. For one, many of these features caused organic results to be pushed down further on the SERP. Another byproduct is that fewer searchers are clicking on the organic results since more queries are being answered on the SERP itself.

So why would Google do this? It all goes back to the search experience. User behavior indicates that some queries are better satisfied by different content formats. Notice how the different types of SERP features match the different types of query intents.

Query intent and the SERP feature it might trigger:

  • Informational: Featured snippet
  • Informational with one answer: Knowledge Graph / instant answer
  • Local: Map pack
  • Transactional: Shopping

We’ll talk more about intent in Chapter 3, but for now, it’s important to know that answers can be delivered to searchers in a wide array of formats, and how you structure your content can impact the format in which it appears in search.

Localized search

A search engine like Google has its own proprietary index of local business listings, from which it creates local search results.

If you are performing local SEO work for a business that has a physical location customers can visit (ex: dentist) or for a business that travels to visit their customers (ex: plumber), make sure that you claim, verify, and optimize a free Google My Business Listing.

When it comes to localized search results, Google uses three main factors to determine the ranking:

  1. Relevance
  2. Distance
  3. Prominence

Relevance

Relevance is how well a local business matches what the searcher is looking for. To ensure that the business is doing everything it can to be relevant to searchers, make sure the business’ information is thoroughly and accurately filled out.

Distance

Google uses your geo-location to better serve your local results. Local search results are extremely sensitive to proximity, which refers to the location of the searcher and/or the location specified in the query (if the searcher included one).

Organic search results are sensitive to a searcher's location, though seldom as pronounced as in local pack results.

Prominence

With prominence as a factor, Google is looking to reward businesses that are well-known in the real world. In addition to a business’ offline prominence, Google also looks to some online factors to determine the local ranking, such as:

Reviews

The number of Google reviews a local business receives, and the sentiment of those reviews, have a notable impact on their ability to rank in local results.

Citations

A "business citation" or "business listing" is a web-based reference to a local business' "NAP" (name, address, phone number) on a localized platform (Yelp, Acxiom, YP, Infogroup, Localeze, etc.).

Local rankings are influenced by the number and consistency of local business citations. Google pulls data from a wide variety of sources as it continuously builds up its local business index. When Google finds multiple consistent references to a business's name, location, and phone number, it strengthens Google's "trust" in the validity of that data. This then leads to Google being able to show the business with a higher degree of confidence. Google also uses information from other sources on the web, such as links and articles.

Organic ranking

SEO best practices also apply to local SEO, since Google also considers a website’s position in organic search results when determining local ranking.

In the next chapter, you’ll learn on-page best practices that will help Google and users better understand your content.

[Bonus!] Local engagement

Although not listed by Google as a local ranking factor, the role of engagement is only going to increase as time goes on. Google continues to enrich local results by incorporating real-world data like popular times to visit and average length of visits...

 

Curious about a certain local business' citation accuracy? Moz has a free tool that can help out, aptly named Check Listing.

...and even provides searchers with the ability to ask the business questions!

A screenshot of the Questions & Answers result in local search.

Undoubtedly, now more than ever before, local results are being influenced by real-world data. This interactivity reflects how searchers interact with and respond to local businesses, rather than relying on purely static (and game-able) information like links and citations.

Since Google wants to deliver the best, most relevant local businesses to searchers, it makes perfect sense for them to use real-time engagement metrics to determine quality and relevance.

You don’t have to know the ins and outs of Google's algorithm (that remains a mystery!), but by now you should have a great baseline knowledge of how the search engine finds, interprets, stores, and ranks content. Armed with that knowledge, let's learn about choosing the keywords your content will target in Chapter 3 (Keyword Research)!

 

Categorized in Search Engine

[This article is originally published in zdnet.com written by Steven J. Vaughan-Nichols - Uploaded by AIRS Member: Eric Beaudoin]

For less than $100, you can have an open-source powered, easy-to-use server, which enables you -- and not Apple, Facebook, Google, or Microsoft -- to control your view of the internet.

On today's internet, most of us find ourselves locked into one service provider or another. We find ourselves tied down to Apple, Facebook, Google, or Microsoft for our e-mail, social networking, calendaring -- you name it. It doesn't have to be that way. The FreedomBox Foundation has just released its first commercially available FreedomBox: the Pioneer Edition FreedomBox Home Server Kit. With it, you -- not some company -- control your internet-based services.

The Olimex Pioneer FreedomBox costs less than $100 and is powered by a single-board computer (SBC), the open source hardware-based Olimex A20-OLinuXino-LIME2 board. This SBC is powered by a 1GHz A20/T2 dual-core Cortex-A7 processor and dual-core Mali 400 GPU. It also comes with a Gigabyte of RAM, a high-speed 32GB micro SD card for storage with the FreedomBox software pre-installed, two USB ports, SATA-drive support, a Gigabit Ethernet port, and a backup battery.

Doesn't sound like much, does it? But here's the thing: you don't need much to run a personal server.

Sure, some of us have been running our own servers at home, the office, or at a hosting site for ages. I'm one of those people. But, it's hard to do. What the FreedomBox brings to the table is the power to let almost anyone run their own server without being a Linux expert.

The supplied FreedomBox software is based on Debian Linux. It's designed from the ground up to make it as hard as possible for anyone to exploit your data. It does this by putting you in control of your own corner of the internet at home. Its simple user interface lets you host your own internet services with little expertise.

You can also just download the FreedomBox software and run it on your own SBC. The Foundation recommends using the Cubietruck, Cubieboard2, BeagleBone Black, A20 OLinuXino Lime2, A20 OLinuXino MICRO, and PC Engines APU. It will also run on most newer Raspberry Pi models.

Want an encrypted chat server to replace WhatsApp? It's got that. A VoIP server? Sure. A personal website? Of course! Web-based file sharing à la Dropbox? You bet. A Virtual Private Network (VPN) server of your own? Yes, that's essential for its mission.

The software stack isn't perfect. This is still a work in progress. So, for example, it still doesn't have a personal email server or federated social networking, such as GNU Social and Diaspora, to provide a privacy-respecting alternative to Facebook. That's not because they won't run on a FreedomBox; they will. What the project hasn't been able to do yet is make them easy enough for anyone to set up, rather than just someone with Linux sysadmin chops. That will come in time.

As the Foundation stated, "The word 'Pioneer' was included in the name of these kits in order to emphasize the leadership required to run a FreedomBox in 2019. Users will be pioneers both because they have the initiative to define this new frontier and because their feedback will make FreedomBox better for its next generation of users."

To help you get up to speed, the FreedomBox community will be offering free technical support for owners of the Pioneer Edition FreedomBox servers on its support forum. The Foundation also welcomes new developers to help it perfect the FreedomBox platform.

Why do this?  Eben Moglen, Professor of Law at Columbia Law School, saw the mess we were heading toward almost 10 years ago: "Mr. Zuckerberg has attained an unenviable record: he has done more harm to the human race than anybody else his age." That was before Facebook proved itself to be totally incompetent with security and sold off your data to Cambridge Analytica to scam 50 million US Facebook users with personalized anti-Clinton and pro-Trump propaganda in the 2016 election.

It didn't have to be that way. In an interview, Moglen told me this: "Concentration of technology is a surprising outcome of cheap hardware and free software. We could have had a world of peers. Instead, the net we built is the net we didn't want. We're in an age of surveillance with centralized control. We're in a world, which encourages swiping, clicking, and flame throwing."

With FreedomBox, "We can undo this. We can make it possible for ordinary people to provide internet services. You can have your own private messaging, services without a man in the middle watching your every move." 

We can, in short, rebuild the internet so that we, and not multi-billion dollar companies, are in charge.

I like this plan

Categorized in Internet Privacy