[Source: This article was Published in itsfoss.com By   - Uploaded by the Association Member: Jay Harris]

Brief: In this age of the internet, you can never be too careful with your privacy. Use these alternative search engines that do not track you.

Google – unquestionably the best search engine out there – makes use of powerful and intelligent algorithms (including A.I. implementations) to give users a personalized search experience.

This sounds good until you start living in a filter bubble. When you only see what ‘suits your taste’, you get detached from reality. Too much of anything is not good – and too much personalization is harmful as well.

This is why one should get out of this filter bubble and see the world as it is. But how do you do that?

You know that Google sure as hell tracks a lot of information about your connection and your system when you perform a search, take an action within the search engine, or use other Google services such as Gmail.

So, if Google keeps on tracking you, the simple answer would be to stop using Google for searching the web. But what would you use in place of Google? Microsoft’s Bing is no saint either.

So, to address the netizens concerned about their privacy while using a search engine, I have curated a list of privacy oriented alternative search engines to Google. 

Best 8 Privacy-Oriented Alternative Search Engines To Google

Do note that the alternatives mentioned in this article are not necessarily “better” than Google; they simply focus on protecting users’ privacy. Here we go!

1. DuckDuckGo


DuckDuckGo is one of the most successful privacy-oriented search engines that stands as an alternative to Google. The user experience it offers is commendable – I must say, it is unique in itself.

DuckDuckGo, unlike Google, uses the traditional method of “sponsored links” to display advertisements. The ads are not targeted at you but only at the topic you are searching for – so there is nothing that could build up a profile of you in any manner – thereby respecting your privacy.

Of course, DuckDuckGo’s search algorithm may not be the smartest around (because it has no idea who you are!). And, if you want to utilize one of the best privacy oriented alternative search engines to Google, you will have to forget about getting a personalized experience while searching for something.

The search results are simplified, with relevant metadata. You can select a country to get the most relevant results for your location. Also, when you type in a question or search for a fix, it might present you with an instant answer (fetched from the source).

You might miss quite a few features (like filtering images by license), but that is an obvious trade-off to protect your privacy.


2. Qwant


Qwant is probably one of the most loved privacy oriented search engines after DuckDuckGo. It ensures neutrality, privacy, and digital freedom while you search for something on the Internet.

If you thought privacy-oriented search engines generally tend to offer a very plain user experience, you need to rethink after trying out Qwant. This is a very dynamic search engine, with trending topics and news stories organized very well. It may not offer a personalized experience (given that it does not track you) – but the rich user experience it offers partially compensates for that.

Qwant is a very useful search engine alternative to Google. It lists out all the web resources, social feeds, news, and images on the topic you search for.


3. Startpage


Startpage is a good initiative as a privacy-oriented search engine alternative to Google. However, it may not be the best one around. Its UI while displaying search results is very similar to Google’s, irrespective of the functionality offered. It may not be a complete rip-off, but it is not very impressive either – though everyone has their own taste.

To protect your privacy, it gives you a choice: you can visit web pages through its proxy or directly – it’s entirely up to you. You can also change the theme of the search engine; I did enjoy switching to the “Night” theme. There’s also an interesting option that lets you generate a custom URL which keeps your settings intact.


4. Privatelee


Privatelee is another search engine tailored specifically to protect your online privacy. It does not track your search results or behavior in any way. However, you might get a lot of irrelevant results after the first ten matches.

The search engine isn’t perfect to find a hidden treasure on the Internet but more for general queries. Privatelee also supports power commands – more like shortcuts – which helps you search for the exact thing in an efficient manner. It will save a lot of your time for pretty simple tasks such as searching for a movie on Netflix. If you were looking for a super fast privacy oriented search engine for common queries, Privatelee would be a great alternative to Google.


5. Swisscows


Well, it isn’t dairy farm portfolio site but a privacy-oriented search engine as an alternative to Google. You may have known about it as Hulbee – but it has recently redirected its operation to a new domain. Nothing has really changed except for the name and domain of the search engine. It works the same way it was before as Hulbee.com.

Swisscows utilizes Bing to deliver the search results for your query. When you search for something, you will notice a tag cloud on the left sidebar, which is useful if you want to know about related key terms and facts. The design language is a lot simpler than most, but distinctive among the other search engines out there. You can filter the results by date, but that’s about it – there are no more advanced options to tweak your search results. It utilizes a tile search technique (a semantic technology) to fetch the best results for your queries. The search algorithm makes sure that it is a family-friendly search engine, with pornography and violence ruled out completely.


6. searX


searX is an interesting search engine – technically defined as a “metasearch engine”. In other words, it queries other search engines and aggregates the results for your query in one place. Being an open source metasearch engine, it does not store your search data. You can review the source code, contribute, or even customize it as your own metasearch engine hosted on your server.

If you are fond of using torrent clients to download files, this search engine will help you find magnet links for the exact files you search for through searX. When you access the settings (preferences) for searX, you will find a lot of advanced things to tweak. General tweaks include adding or removing search engines, rewriting HTTP to HTTPS, removing tracker arguments from URLs, and so on. It’s all yours to control. The user experience may not be the best here, but if you want to query a lot of search engines while keeping your privacy in check, searX is a great alternative to Google.


7. Peekier


Peekier is another fascinating privacy oriented search engine. Unlike the previous one, it is not a metasearch engine but has its own algorithm implemented. It may not be the fastest search engine I’ve ever used but it is an interesting take on how search engines can evolve in the near future. When you type in a search query, it not only fetches a list of results but also displays the preview images of the web pages listed. So, you get a “peek” on what you seek. While the search engine does not store your data, the web portals you visit do track you.

So, to avoid that to an extent, Peekier fetches each site and generates a preview image, letting you decide whether to visit the site without actually opening it. That way, fewer websites get to know about you – mostly just the ones you trust.


8. MetaGer


MetaGer is yet another open source metasearch engine. Unlike the others, however, it takes privacy more seriously and supports access via the Tor network for anonymous retrieval of search results from a variety of search engines. Some search engines that claim to protect your privacy may end up sharing whatever they record with the government because their servers are subject to US legal procedures. MetaGer, however, runs on Germany-based servers, which protects even the anonymous data recorded while you use it.

It does display a small number of advertisements (without trackers, of course), but you can get rid of those as well by becoming a member of SUMA-EV, the non-profit organization that sponsors the MetaGer search engine.


Wrapping Up

If you are concerned about your privacy, you should also take a look at some of the best privacy-focused Linux distributions. Among the search engine alternatives mentioned here, DuckDuckGo is my personal favorite. But it really comes down to your preference and whom you choose to trust while surfing the Internet.

Do you know of any other interesting (and good) privacy-oriented alternative search engines to Google?


[Source: This article was Published in moz.com  - Uploaded by the Association Member: Barbara larson]

As we mentioned in Chapter 1, search engines are answer machines. They exist to discover, understand, and organize the internet's content in order to offer the most relevant results to the questions searchers are asking.

In order to show up in search results, your content needs to first be visible to search engines. It's arguably the most important piece of the SEO puzzle: If your site can't be found, there's no way you'll ever show up in the SERPs (Search Engine Results Page).

How do search engines work?

Search engines have three primary functions:

  1. Crawl: Scour the Internet for content, looking over the code/content for each URL they find.
  2. Index: Store and organize the content found during the crawling process. Once a page is in the index, it’s in the running to be displayed as a result to relevant queries.
  3. Rank: Provide the pieces of content that will best answer a searcher's query, which means that results are ordered by most relevant to least relevant.

What is search engine crawling?

Crawling is the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content. Content can vary — it could be a webpage, an image, a video, a PDF, etc. — but regardless of the format, content is discovered by links.

What's that word mean?

Having trouble with any of the definitions in this section? Our SEO glossary has chapter-specific definitions to help you stay up-to-speed.

Googlebot starts out by fetching a few web pages, and then follows the links on those webpages to find new URLs. By hopping along this path of links, the crawler is able to find new content and add it to Google's index, called Caffeine — a massive database of discovered URLs — to later be retrieved when a searcher is seeking information that the content on that URL is a good match for.

What is a search engine index?

Search engines process and store information they find in an index, a huge database of all the content they’ve discovered and deem good enough to serve up to searchers.

Search engine ranking

When someone performs a search, search engines scour their index for highly relevant content and then order that content in the hopes of solving the searcher's query. This ordering of search results by relevance is known as ranking. In general, you can assume that the higher a website is ranked, the more relevant the search engine believes that site is to the query.

It’s possible to block search engine crawlers from part or all of your site, or instruct search engines to avoid storing certain pages in their index. While there can be reasons for doing this, if you want your content found by searchers, you have to first make sure it’s accessible to crawlers and is indexable. Otherwise, it’s as good as invisible.

By the end of this chapter, you’ll have the context you need to work with the search engine, rather than against it!

In SEO, not all search engines are equal

Many beginners wonder about the relative importance of particular search engines. Most people know that Google has the largest market share, but how important is it to optimize for Bing, Yahoo, and others? The truth is that despite the existence of more than 30 major web search engines, the SEO community really only pays attention to Google. Why? The short answer is that Google is where the vast majority of people search the web. If we include Google Images, Google Maps, and YouTube (a Google property), more than 90% of web searches happen on Google — that's nearly 20 times Bing and Yahoo combined.

Crawling: Can search engines find your pages?

As you've just learned, making sure your site gets crawled and indexed is a prerequisite to showing up in the SERPs. If you already have a website, it might be a good idea to start off by seeing how many of your pages are in the index. This will yield some great insights into whether Google is crawling and finding all the pages you want it to, and none that you don’t.

One way to check your indexed pages is "site:yourdomain.com", an advanced search operator. Head to Google and type "site:yourdomain.com" into the search bar. This will return results Google has in its index for the site specified:

A screenshot of a site:moz.com search in Google, showing the number of results below the search box.

The number of results Google displays (see “About XX results” above) isn't exact, but it does give you a solid idea of which pages are indexed on your site and how they are currently showing up in search results.

For more accurate results, monitor and use the Index Coverage report in Google Search Console. You can sign up for a free Google Search Console account if you don't currently have one. With this tool, you can submit sitemaps for your site and monitor how many submitted pages have actually been added to Google's index, among other things.

If you're not showing up anywhere in the search results, there are a few possible reasons why:

  • Your site is brand new and hasn't been crawled yet.
  • Your site isn't linked to from any external websites.
  • Your site's navigation makes it hard for a robot to crawl it effectively.
  • Your site contains some basic code called crawler directives that is blocking search engines.
  • Your site has been penalized by Google for spammy tactics.

Tell search engines how to crawl your site

If you used Google Search Console or the “site:domain.com” advanced search operator and found that some of your important pages are missing from the index and/or some of your unimportant pages have been mistakenly indexed, there are some optimizations you can implement to better direct Googlebot how you want your web content crawled. Telling search engines how to crawl your site can give you better control of what ends up in the index.

Most people think about making sure Google can find their important pages, but it’s easy to forget that there are likely pages you don’t want Googlebot to find. These might include things like old URLs that have thin content, duplicate URLs (such as sort-and-filter parameters for e-commerce), special promo code pages, staging or test pages, and so on.

To direct Googlebot away from certain pages and sections of your site, use robots.txt.

Robots.txt

Robots.txt files are located in the root directory of websites (ex. yourdomain.com/robots.txt) and suggest which parts of your site search engines should and shouldn't crawl, as well as the speed at which they crawl your site, via specific robots.txt directives.
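Just as an illustrative sketch (the section paths here are made up, not a recommendation), a basic robots.txt file might look something like this:

  # Applies to all crawlers
  User-agent: *
  # Ask crawlers to stay out of a couple of hypothetical sections
  Disallow: /staging/
  Disallow: /promo-codes/
  # Crawl-delay is honored by some crawlers (e.g. Bing), though Googlebot ignores it
  Crawl-delay: 10

  # Point crawlers at your XML sitemap
  Sitemap: https://yourdomain.com/sitemap.xml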

How Googlebot treats robots.txt files

  • If Googlebot can't find a robots.txt file for a site, it proceeds to crawl the site.
  • If Googlebot finds a robots.txt file for a site, it will usually abide by the suggestions and proceed to crawl the site.
  • If Googlebot encounters an error while trying to access a site’s robots.txt file and can't determine if one exists or not, it won't crawl the site.

Optimize for crawl budget!

Crawl budget is the average number of URLs Googlebot will crawl on your site before leaving, so crawl budget optimization ensures that Googlebot isn’t wasting time crawling through your unimportant pages at risk of ignoring your important pages. Crawl budget is most important on very large sites with tens of thousands of URLs, but it’s never a bad idea to block crawlers from accessing the content you definitely don’t care about. Just make sure not to block a crawler’s access to pages you’ve added other directives on, such as canonical or noindex tags. If Googlebot is blocked from a page, it won’t be able to see the instructions on that page.

Not all web robots follow robots.txt. People with bad intentions (e.g., e-mail address scrapers) build bots that don't follow this protocol. In fact, some bad actors use robots.txt files to find where you’ve located your private content. Although it might seem logical to block crawlers from private pages such as login and administration pages so that they don’t show up in the index, placing the location of those URLs in a publicly accessible robots.txt file also means that people with malicious intent can more easily find them. It’s better to NoIndex these pages and gate them behind a login form rather than place them in your robots.txt file.

You can read more details about this in the robots.txt portion of our Learning Center.

Defining URL parameters in GSC

Some sites (most common with e-commerce) make the same content available on multiple different URLs by appending certain parameters to URLs. If you’ve ever shopped online, you’ve likely narrowed down your search via filters. For example, you may search for “shoes” on Amazon, and then refine your search by size, color, and style. Each time you refine, the URL changes slightly:

https://www.example.com/products/women/dresses/green.htm

https://www.example.com/products/women?category=dresses&color=green

https://example.com/shopindex.php?product_id=32&highlight=green+dress&cat_id=1&sessionid=123&affid=43

How does Google know which version of the URL to serve to searchers? Google does a pretty good job at figuring out the representative URL on its own, but you can use the URL Parameters feature in Google Search Console to tell Google exactly how you want them to treat your pages. If you use this feature to tell Googlebot “crawl no URLs with ____ parameter,” then you’re essentially asking to hide this content from Googlebot, which could result in the removal of those pages from search results. That’s what you want if those parameters create duplicate pages, but not ideal if you want those pages to be indexed.

Can crawlers find all your important content?

Now that you know some tactics for ensuring search engine crawlers stay away from your unimportant content, let’s learn about the optimizations that can help Googlebot find your important pages.

Sometimes a search engine will be able to find parts of your site by crawling, but other pages or sections might be obscured for one reason or another. It's important to make sure that search engines are able to discover all the content you want indexed, and not just your homepage.

Ask yourself this: Can the bot crawl through your website, and not just to it?

A boarded-up door, representing a site that can be crawled to but not crawled through.

Is your content hidden behind login forms?

If you require users to log in, fill out forms, or answer surveys before accessing certain content, search engines won't see those protected pages. A crawler is definitely not going to log in.

Are you relying on search forms?

Robots cannot use search forms. Some individuals believe that if they place a search box on their site, search engines will be able to find everything that their visitors search for.

Is text hidden within non-text content?

Non-text media forms (images, video, GIFs, etc.) should not be used to display text that you wish to be indexed. While search engines are getting better at recognizing images, there's no guarantee they will be able to read and understand them just yet. It's always best to add text within the markup of your webpage.

Can search engines follow your site navigation?

Just as a crawler needs to discover your site via links from other sites, it needs a path of links on your own site to guide it from page to page. If you’ve got a page you want search engines to find but it isn’t linked to from any other pages, it’s as good as invisible. Many sites make the critical mistake of structuring their navigation in ways that are inaccessible to search engines, hindering their ability to get listed in search results.

A depiction of how pages that are linked to can be found by crawlers, whereas a page not linked to in your site navigation exists as an island, undiscoverable.

Common navigation mistakes that can keep crawlers from seeing all of your site:

  • Having a mobile navigation that shows different results than your desktop navigation
  • Any type of navigation where the menu items are not in the HTML, such as JavaScript-enabled navigations. Google has gotten much better at crawling and understanding Javascript, but it’s still not a perfect process. The more surefire way to ensure something gets found, understood, and indexed by Google is by putting it in the HTML.
  • Personalization, or showing unique navigation to a specific type of visitor versus others, could appear to be cloaking to a search engine crawler
  • Forgetting to link to a primary page on your website through your navigation — remember, links are the paths crawlers follow to new pages!

This is why it's essential that your website has clear navigation and helpful URL folder structures.

Do you have clean information architecture?

Information architecture is the practice of organizing and labeling content on a website to improve efficiency and findability for users. The best information architecture is intuitive, meaning that users shouldn't have to think very hard to flow through your website or to find something.

Are you utilizing sitemaps?

A sitemap is just what it sounds like: a list of URLs on your site that crawlers can use to discover and index your content. One of the easiest ways to ensure Google is finding your highest priority pages is to create a file that meets Google's standards and submit it through Google Search Console. While submitting a sitemap doesn’t replace the need for good site navigation, it can certainly help crawlers follow a path to all of your important pages.

Ensure that you’ve only included URLs that you want indexed by search engines, and be sure to give crawlers consistent directions. For example, don’t include a URL in your sitemap if you’ve blocked it via robots.txt, and don’t include URLs that are duplicates rather than the preferred, canonical version (we’ll provide more information on canonicalization in Chapter 5!).
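For reference, a minimal XML sitemap that follows the sitemaps.org protocol (using a hypothetical URL) looks something like this:

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <!-- One <url> entry per canonical page you want crawled -->
      <loc>https://www.example.com/puppies/</loc>
      <lastmod>2019-04-01</lastmod>
    </url>
  </urlset>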

Learn more about XML sitemaps 
If your site doesn't have any other sites linking to it, you still might be able to get it indexed by submitting your XML sitemap in Google Search Console. There's no guarantee they'll include a submitted URL in their index, but it's worth a try!

Are crawlers getting errors when they try to access your URLs?

In the process of crawling the URLs on your site, a crawler may encounter errors. You can go to Google Search Console’s “Crawl Errors” report to detect URLs on which this might be happening - this report will show you server errors and not found errors. Server log files can also show you this, as well as a treasure trove of other information such as crawl frequency, but because accessing and dissecting server log files is a more advanced tactic, we won’t discuss it at length in the Beginner’s Guide, although you can learn more about it here.

Before you can do anything meaningful with the crawl error report, it’s important to understand server errors and "not found" errors.

4xx Codes: When search engine crawlers can’t access your content due to a client error

4xx errors are client errors, meaning the requested URL contains bad syntax or cannot be fulfilled. One of the most common 4xx errors is the “404 – not found” error. These might occur because of a URL typo, deleted page, or broken redirect, just to name a few examples. When search engines hit a 404, they can’t access the URL. When users hit a 404, they can get frustrated and leave.

5xx Codes: When search engine crawlers can’t access your content due to a server error

5xx errors are server errors, meaning the server the web page is located on failed to fulfill the searcher or search engine’s request to access the page. In Google Search Console’s “Crawl Error” report, there is a tab dedicated to these errors. These typically happen because the request for the URL timed out, so Googlebot abandoned the request. View Google’s documentation to learn more about fixing server connectivity issues. 

Thankfully, there is a way to tell both searchers and search engines that your page has moved — the 301 (permanent) redirect.

Create custom 404 pages!

Customize your 404 pages by adding in links to important pages on your site, a site search feature, and even contact information. This should make it less likely that visitors will bounce off your site when they hit a 404.
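How you serve that custom page depends on your server or CMS. As one sketch, on an Apache server you could point requests for missing URLs at a custom error page (the path here is hypothetical) with a single .htaccess directive:

  # Serve the branded error page for any URL that can't be found
  ErrorDocument 404 /custom-404.html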

Say you move a page from example.com/young-dogs/ to example.com/puppies/. Search engines and users need a bridge to cross from the old URL to the new. That bridge is a 301 redirect.
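On an Apache server, for instance, that bridge could be a single .htaccess line (this assumes mod_alias is available; other servers and CMSs have their own equivalents):

  # Permanently redirect the old URL to its new home
  Redirect 301 /young-dogs/ https://www.example.com/puppies/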

When you do implement a 301 versus when you don't:

  • Link equity: A 301 transfers link equity from the page’s old location to the new URL. Without a 301, the authority from the previous URL is not passed on to the new version of the URL.
  • Indexing: A 301 helps Google find and index the new version of the page. The presence of 404 errors on your site alone doesn't harm search performance, but letting ranking / trafficked pages 404 can result in them falling out of the index, with rankings and traffic going with them — yikes!
  • User experience: A 301 ensures users find the page they’re looking for. Allowing your visitors to click on dead links will take them to error pages instead of the intended page, which can be frustrating.

The 301 status code itself means that the page has permanently moved to a new location, so avoid redirecting URLs to irrelevant pages — URLs where the old URL’s content doesn’t actually live. If a page is ranking for a query and you 301 it to a URL with different content, it might drop in rank position because the content that made it relevant to that particular query isn't there anymore. 301s are powerful — move URLs responsibly!

You also have the option of 302 redirecting a page, but this should be reserved for temporary moves and in cases where passing link equity isn’t as big of a concern. 302s are kind of like a road detour. You're temporarily siphoning traffic through a certain route, but it won't be like that forever.

Watch out for redirect chains!

It can be difficult for Googlebot to reach your page if it has to go through multiple redirects. Google calls these “redirect chains” and they recommend limiting them as much as possible. If you redirect example.com/1 to example.com/2, then later decide to redirect it to example.com/3, it’s best to eliminate the middleman and simply redirect example.com/1 to example.com/3.
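Continuing the hypothetical Apache sketch from above, that cleanup might look like this:

  # Send both old URLs straight to the final destination instead of chaining /1 -> /2 -> /3
  Redirect 301 /1 https://example.com/3
  Redirect 301 /2 https://example.com/3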

Once you’ve ensured your site is optimized for crawlability, the next order of business is to make sure it can be indexed.

Indexing: How do search engines interpret and store your pages?

Once you’ve ensured your site has been crawled, the next order of business is to make sure it can be indexed. That’s right — just because your site can be discovered and crawled by a search engine doesn’t necessarily mean that it will be stored in their index. In the previous section on crawling, we discussed how search engines discover your web pages. The index is where your discovered pages are stored. After a crawler finds a page, the search engine renders it just like a browser would. In the process of doing so, the search engine analyzes that page's contents. All of that information is stored in its index.

A robot storing a book in a library.

Read on to learn about how indexing works and how you can make sure your site makes it into this all-important database.

Can I see how a Googlebot crawler sees my pages?

Yes, the cached version of your page will reflect a snapshot of the last time Googlebot crawled it.

Google crawls and caches web pages at different frequencies. More established, well-known sites that post frequently like https://www.nytimes.com will be crawled more frequently than the much-less-famous website for Roger the Mozbot’s side hustle, http://www.rogerlovescupcakes.com (if only it were real…)

You can view what your cached version of a page looks like by clicking the drop-down arrow next to the URL in the SERP and choosing "Cached":

A screenshot of where to see cached results in the SERPs.

You can also view the text-only version of your site to determine if your important content is being crawled and cached effectively.

Are pages ever removed from the index?

Yes, pages can be removed from the index! Some of the main reasons why a URL might be removed include:

  • The URL is returning a "not found" error (4XX) or server error (5XX) – This could be accidental (the page was moved and a 301 redirect was not set up) or intentional (the page was deleted and 404ed in order to get it removed from the index)
  • The URL had a noindex meta tag added – This tag can be added by site owners to instruct the search engine to omit the page from its index.
  • The URL has been manually penalized for violating the search engine’s Webmaster Guidelines and, as a result, was removed from the index.
  • The URL has been blocked from crawling with the addition of a password required before visitors can access the page.

If you believe that a page on your website that was previously in Google’s index is no longer showing up, you can use the URL Inspection tool to learn the status of the page, or use Fetch as Google which has a "Request Indexing" feature to submit individual URLs to the index. (Bonus: GSC’s “fetch” tool also has a “render” option that allows you to see if there are any issues with how Google is interpreting your page).

Tell search engines how to index your site

Robots meta directives

Meta directives (or "meta tags") are instructions you can give to search engines regarding how you want your web page to be treated.

You can tell search engine crawlers things like "do not index this page in search results" or "don’t pass any link equity to any on-page links". These instructions are executed via robots meta tags in the <head> of your HTML pages (most commonly used) or via the X-Robots-Tag in the HTTP header.

Robots meta tag

The robots meta tag can be used within the <head> of the HTML of your webpage. It can exclude all or specific search engines. The following are the most common meta directives, along with the situations in which you might apply them.

index/noindex tells the engines whether the page should be crawled and kept in a search engine's index for retrieval. If you opt to use "noindex," you’re communicating to crawlers that you want the page excluded from search results. By default, search engines assume they can index all pages, so using the "index" value is unnecessary.

  • When you might use: You might opt to mark a page as "noindex" if you’re trying to trim thin pages from Google’s index of your site (ex: user generated profile pages) but you still want them accessible to visitors.

follow/nofollow tells search engines whether links on the page should be followed or nofollowed. “Follow” results in bots following the links on your page and passing link equity through to those URLs. Or, if you elect to employ "nofollow," the search engines will not follow or pass any link equity through to the links on the page. By default, all pages are assumed to have the "follow" attribute.

  • When you might use: nofollow is often used together with noindex when you’re trying to prevent a page from being indexed as well as prevent the crawler from following links on the page.

noarchive is used to restrict search engines from saving a cached copy of the page. By default, the engines will maintain visible copies of all pages they have indexed, accessible to searchers through the cached link in the search results.

  • When you might use: If you run an e-commerce site and your prices change regularly, you might consider the noarchive tag to prevent searchers from seeing outdated pricing.
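For reference, such a tag sits in the page's <head> and looks like this:

  <meta name="robots" content="noarchive">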

Here’s an example of a meta robots noindex, nofollow tag:

  <!DOCTYPE html>
  <html>
    <head>
      <meta name="robots" content="noindex, nofollow" />
    </head>
    <body>...</body>
  </html>

This example excludes all search engines from indexing the page and from following any on-page links. If you want to exclude multiple specific crawlers, like googlebot and bingbot for example, it’s okay to use multiple robots exclusion tags.

Meta directives affect indexing, not crawling

Googlebot needs to crawl your page in order to see its meta directives, so if you’re trying to prevent crawlers from accessing certain pages, meta directives are not the way to do it. Robots tags must be crawled to be respected.

X-Robots-Tag

The x-robots tag is used within the HTTP header of your URL, providing more flexibility and functionality than meta tags if you want to block search engines at scale because you can use regular expressions, block non-HTML files, and apply sitewide noindex tags.

For example, you could exclude entire folders (like moz.com/no-bake/old-recipes-to-noindex). One way to do that on an Apache server is a LocationMatch block in the server configuration:

  <LocationMatch "/no-bake/">
    Header set X-Robots-Tag "noindex, nofollow"
  </LocationMatch>

The directives used in a robots meta tag can also be used in an X-Robots-Tag.

Or specific file types (like PDFs), for instance with a FilesMatch block:

  <FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noindex, nofollow"
  </FilesMatch>

For more information on Meta Robot Tags, explore Google’s Robots Meta Tag Specifications.

WordPress tip:

In Dashboard > Settings > Reading, make sure the "Search Engine Visibility" box is not checked, as checking it blocks search engines from your site via your robots.txt file!

Understanding the different ways you can influence crawling and indexing will help you avoid the common pitfalls that can prevent your important pages from getting found.

Ranking: How do search engines rank URLs?

How do search engines ensure that when someone types a query into the search bar, they get relevant results in return? That process is known as ranking, or the ordering of search results by most relevant to least relevant to a particular query.

An artistic interpretation of ranking, with three dogs sitting pretty on first, second, and third-place pedestals.

To determine relevance, search engines use algorithms, a process or formula by which stored information is retrieved and ordered in meaningful ways. These algorithms have gone through many changes over the years in order to improve the quality of search results. Google, for example, makes algorithm adjustments every day — some of these updates are minor quality tweaks, whereas others are core/broad algorithm updates deployed to tackle a specific issue, like Penguin to tackle link spam. Check out our Google Algorithm Change History for a list of both confirmed and unconfirmed Google updates going back to the year 2000.

Why does the algorithm change so often? Is Google just trying to keep us on our toes? While Google doesn’t always reveal specifics as to why they do what they do, we do know that Google’s aim when making algorithm adjustments is to improve overall search quality. That’s why, in response to algorithm update questions, Google will answer with something along the lines of: "We’re making quality updates all the time." This means that, if your site suffered after an algorithm adjustment, you should compare it against Google’s Quality Guidelines or Search Quality Rater Guidelines; both are very telling in terms of what search engines want.

What do search engines want?

Search engines have always wanted the same thing: to provide useful answers to searchers’ questions in the most helpful formats. If that’s true, then why does it appear that SEO is different now than in years past?

Think about it in terms of someone learning a new language.

At first, their understanding of the language is very rudimentary — “See Spot Run.” Over time, their understanding starts to deepen, and they learn semantics — the meaning behind language and the relationship between words and phrases. Eventually, with enough practice, the student knows the language well enough to even understand nuance, and is able to provide answers to even vague or incomplete questions.

When search engines were just beginning to learn our language, it was much easier to game the system by using tricks and tactics that actually go against quality guidelines. Take keyword stuffing, for example. If you wanted to rank for a particular keyword like “funny jokes,” you might add the words “funny jokes” a bunch of times onto your page, and make it bold, in hopes of boosting your ranking for that term:

Welcome to funny jokes! We tell the funniest jokes in the world. Funny jokes are fun and crazy. Your funny joke awaits. Sit back and read funny jokes because funny jokes can make you happy and funnier. Some funny favorite funny jokes.

This tactic made for terrible user experiences, and instead of laughing at funny jokes, people were bombarded by annoying, hard-to-read text. It may have worked in the past, but this is never what search engines wanted.

The role links play in SEO

When we talk about links, we could mean two things. Backlinks or "inbound links" are links from other websites that point to your website, while internal links are links on your own site that point to your other pages (on the same site).

A depiction of how inbound links and internal links work.

Links have historically played a big role in SEO. Very early on, search engines needed help figuring out which URLs were more trustworthy than others to help them determine how to rank search results. Calculating the number of links pointing to any given site helped them do this.

Backlinks work very similarly to real-life WoM (Word-of-Mouth) referrals. Let’s take a hypothetical coffee shop, Jenny’s Coffee, as an example:

  • Referrals from others = good sign of authority
    • Example: Many different people have all told you that Jenny’s Coffee is the best in town
  • Referrals from yourself = biased, so not a good sign of authority
    • Example: Jenny claims that Jenny’s Coffee is the best in town
  • Referrals from irrelevant or low-quality sources = not a good sign of authority and could even get you flagged for spam
    • Example: Jenny paid to have people who have never visited her coffee shop tell others how good it is.
  • No referrals = unclear authority
    • Example: Jenny’s Coffee might be good, but you’ve been unable to find anyone who has an opinion so you can’t be sure.

This is why PageRank was created. PageRank (part of Google's core algorithm) is a link analysis algorithm named after one of Google's founders, Larry Page. PageRank estimates the importance of a web page by measuring the quality and quantity of links pointing to it. The assumption is that the more relevant, important, and trustworthy a web page is, the more links it will have earned.

The more natural backlinks you have from high-authority (trusted) websites, the better your odds are to rank higher within search results.

The role content plays in SEO

There would be no point to links if they didn’t direct searchers to something. That something is content! Content is more than just words; it’s anything meant to be consumed by searchers — there’s video content, image content, and of course, text. If search engines are answer machines, content is the means by which the engines deliver those answers.

Any time someone performs a search, there are thousands of possible results, so how do search engines decide which pages the searcher is going to find valuable? A big part of determining where your page will rank for a given query is how well the content on your page matches the query’s intent. In other words, does this page match the words that were searched and help fulfill the task the searcher was trying to accomplish?

Because of this focus on user satisfaction and task accomplishment, there’s no strict benchmarks on how long your content should be, how many times it should contain a keyword, or what you put in your header tags. All those can play a role in how well a page performs in search, but the focus should be on the users who will be reading the content.

Today, with hundreds or even thousands of ranking signals, the top three have stayed fairly consistent: links to your website (which serve as third-party credibility signals), on-page content (quality content that fulfills a searcher’s intent), and RankBrain.

What is RankBrain?

RankBrain is the machine learning component of Google’s core algorithm. Machine learning is a computer program that continues to improve its predictions over time through new observations and training data. In other words, it’s always learning, and because it’s always learning, search results should be constantly improving.

For example, if RankBrain notices a lower ranking URL providing a better result to users than the higher ranking URLs, you can bet that RankBrain will adjust those results, moving the more relevant result higher and demoting less relevant pages as a byproduct.

An image showing how results can change and are volatile enough to show different rankings even hours later.

Like most things with the search engine, we don’t know exactly what comprises RankBrain, but apparently, neither do the folks at Google.

What does this mean for SEOs?

Because Google will continue leveraging RankBrain to promote the most relevant, helpful content, we need to focus on fulfilling searcher intent more than ever before. Provide the best possible information and experience for searchers who might land on your page, and you’ve taken a big first step to performing well in a RankBrain world.

Engagement metrics: correlation, causation, or both?

With Google rankings, engagement metrics are most likely part correlation and part causation.

When we say engagement metrics, we mean data that represents how searchers interact with your site from search results. This includes things like:

  • Clicks (visits from search)
  • Time on page (amount of time the visitor spent on a page before leaving it)
  • Bounce rate (the percentage of all website sessions where users viewed only one page)
  • Pogo-sticking (clicking on an organic result and then quickly returning to the SERP to choose another result)

Many tests, including Moz’s own ranking factor survey, have indicated that engagement metrics correlate with higher ranking, but causation has been hotly debated. Are good engagement metrics just indicative of highly ranked sites? Or are sites ranked highly because they possess good engagement metrics?

What Google has said

While they’ve never used the term “direct ranking signal,” Google has been clear that they absolutely use click data to modify the SERP for particular queries.

According to Google’s former Chief of Search Quality, Udi Manber:

“The ranking itself is affected by the click data. If we discover that, for a particular query, 80% of people click on #2 and only 10% click on #1, after a while we figure out probably #2 is the one people want, so we’ll switch it.”

Another comment from former Google engineer Edmond Lau corroborates this:

“It’s pretty clear that any reasonable search engine would use click data on their own results to feed back into ranking to improve the quality of search results. The actual mechanics of how click data is used is often proprietary, but Google makes it obvious that it uses click data with its patents on systems like rank-adjusted content items.”

Because Google needs to maintain and improve search quality, it seems inevitable that engagement metrics are more than correlation, but it would appear that Google falls short of calling engagement metrics a “ranking signal” because those metrics are used to improve search quality, and the rank of individual URLs is just a byproduct of that.

What tests have confirmed

Various tests have confirmed that Google will adjust SERP order in response to searcher engagement:

  • Rand Fishkin’s 2014 test resulted in a #7 result moving up to the #1 spot after getting around 200 people to click on the URL from the SERP. Interestingly, ranking improvement seemed to be isolated to the location of the people who visited the link. The rank position spiked in the US, where many participants were located, whereas it remained lower on the page in Google Canada, Google Australia, etc.
  • Larry Kim’s comparison of top pages and their average dwell time pre- and post-RankBrain seemed to indicate that the machine-learning component of Google’s algorithm demotes the rank position of pages that people don’t spend as much time on.
  • Darren Shaw’s testing has shown user behavior’s impact on local search and map pack results as well.

Since user engagement metrics are clearly used to adjust the SERPs for quality, and rank position changes as a byproduct, it’s safe to say that SEOs should optimize for engagement. Engagement doesn’t change the objective quality of your web page, but rather your value to searchers relative to other results for that query. That’s why, after no changes to your page or its backlinks, it could decline in rankings if searchers’ behavior indicates they like other pages better.

In terms of ranking web pages, engagement metrics act like a fact-checker. Objective factors such as links and content first rank the page, then engagement metrics help Google adjust if they didn’t get it right.

The evolution of search results

Back when search engines lacked a lot of the sophistication they have today, the term “10 blue links” was coined to describe the flat structure of the SERP. Any time a search was performed, Google would return a page with 10 organic results, each in the same format.

A screenshot of what a 10-blue-links SERP looks like.

In this search landscape, holding the #1 spot was the holy grail of SEO. But then something happened. Google began adding results in new formats on its search result pages, called SERP features. Some of these SERP features include:

  • Paid advertisements
  • Featured snippets
  • People Also Ask boxes
  • Local (map) pack
  • Knowledge panel
  • Sitelinks

And Google is adding new ones all the time. They even experimented with “zero-result SERPs,” a phenomenon where only one result from the Knowledge Graph was displayed on the SERP with no results below it except for an option to “view more results.”

The addition of these features caused some initial panic for two main reasons. For one, many of these features caused organic results to be pushed down further on the SERP. Another byproduct is that fewer searchers are clicking on the organic results since more queries are being answered on the SERP itself.

So why would Google do this? It all goes back to the search experience. User behavior indicates that some queries are better satisfied by different content formats. Notice how the different types of SERP features match the different types of query intents.

Query intent and the SERP feature it may trigger:

  • Informational: Featured snippet
  • Informational with one answer: Knowledge Graph / instant answer
  • Local: Map pack
  • Transactional: Shopping

We’ll talk more about intent in Chapter 3, but for now, it’s important to know that answers can be delivered to searchers in a wide array of formats, and how you structure your content can impact the format in which it appears in search.

Localized search

A search engine like Google has its own proprietary index of local business listings, from which it creates local search results.

If you are performing local SEO work for a business that has a physical location customers can visit (ex: dentist) or for a business that travels to visit their customers (ex: plumber), make sure that you claim, verify, and optimize a free Google My Business Listing.

When it comes to localized search results, Google uses three main factors to determine the ranking:

  1. Relevance
  2. Distance
  3. Prominence

Relevance

Relevance is how well a local business matches what the searcher is looking for. To ensure that the business is doing everything it can to be relevant to searchers, make sure the business’ information is thoroughly and accurately filled out.

Distance

Google uses your geo-location to better serve your local results. Local search results are extremely sensitive to proximity, which refers to the location of the searcher and/or the location specified in the query (if the searcher included one).

Organic search results are sensitive to a searcher's location, though seldom as pronounced as in local pack results.

Prominence

With prominence as a factor, Google is looking to reward businesses that are well-known in the real world. In addition to a business’ offline prominence, Google also looks to some online factors to determine the local ranking, such as:

Reviews

The number of Google reviews a local business receives, and the sentiment of those reviews, have a notable impact on their ability to rank in local results.

Citations

A "business citation" or "business listing" is a web-based reference to a local business' "NAP" (name, address, phone number) on a localized platform (Yelp, Acxiom, YP, Infogroup, Localeze, etc.).

Local rankings are influenced by the number and consistency of local business citations. Google pulls data from a wide variety of sources to continuously build up its local business index. When Google finds multiple consistent references to a business's name, location, and phone number, it strengthens Google's "trust" in the validity of that data, which in turn lets Google show the business with a higher degree of confidence. Google also uses information from other sources on the web, such as links and articles.

Organic ranking

SEO best practices also apply to local SEO, since Google also considers a website’s position in organic search results when determining local ranking.

In the next chapter, you’ll learn on-page best practices that will help Google and users better understand your content.

[Bonus!] Local engagement

Although not listed by Google as a local ranking factor, the role of engagement is only going to increase as time goes on. Google continues to enrich local results by incorporating real-world data like popular times to visit and average length of visits, and even provides searchers with the ability to ask the business questions!

A screenshot of the Questions & Answers result in local search.

Curious about a certain local business' citation accuracy? Moz has a free tool that can help out, aptly named Check Listing.

Undoubtedly now more than ever before, local results are being influenced by real-world data. This interactivity is how searchers interact with and respond to local businesses, rather than purely static (and game-able) information like links and citations.

Since Google wants to deliver the best, most relevant local businesses to searchers, it makes perfect sense for them to use real-time engagement metrics to determine quality and relevance.

You don’t have to know the ins and outs of Google's algorithm (that remains a mystery!), but by now you should have a great baseline knowledge of how the search engine finds, interprets, stores, and ranks content. Armed with that knowledge, let's learn about choosing the keywords your content will target in Chapter 3 (Keyword Research)!

 


[This article is originally published in zdnet.com written by Steven J. Vaughan-Nichols - Uploaded by AIRS Member: Eric Beaudoin]

For less than $100, you can have an open-source-powered, easy-to-use server which enables you -- and not Apple, Facebook, Google, or Microsoft -- to control your view of the internet.

On today's internet, most of us find ourselves locked into one service provider or the other. We find ourselves tied down to Apple, Facebook, Google, or Microsoft for our e-mail, social networking, calendaring -- you name it. It doesn't have to be that way. The FreedomBox Foundation has just released its first commercially available FreedomBox: The Pioneer Edition FreedomBox Home Server Kit. With it, you -- not some company -- control your internet-based services.

The Olimex Pioneer FreedomBox costs less than $100 and is powered by a single-board computer (SBC), the open source hardware-based Olimex A20-OLinuXino-LIME2 board. This SBC is powered by a 1GHz A20/T2 dual-core Cortex-A7 processor and a dual-core Mali 400 GPU. It also comes with a gigabyte of RAM, a high-speed 32GB microSD card for storage with the FreedomBox software pre-installed, two USB ports, SATA-drive support, a Gigabit Ethernet port, and a backup battery.

Doesn't sound like much, does it? But here's the thing: You don't need much to run a personal server.

Sure, some of us have been running our own servers at home, the office, or at a hosting site for ages. I'm one of those people. But, it's hard to do. What the FreedomBox brings to the table is the power to let almost anyone run their own server without being a Linux expert.

The supplied FreedomBox software is based on Debian Linux. It's designed from the ground up to make it as hard as possible for anyone to exploit your data. It does this by putting you in control of your own corner of the internet at home. Its simple user interface lets you host your own internet services with little expertise.

You can also just download the FreedomBox software and run it on your own SBC. The Foundation recommends using the Cubietruck, Cubieboard2, BeagleBone Black, A20 OLinuXino LIME2, A20 OLinuXino MICRO, and PC Engines APU. It will also run on most newer Raspberry Pi models.

Want an encrypted chat server to replace WhatsApp? It's got that. A VoIP server? Sure. A personal website? Of course! Web-based file sharing à la Dropbox? You bet. A Virtual Private Network (VPN) server of your own? Yes, that's essential for its mission.

The software stack isn't perfect. This is still a work in progress. So, for example, it still doesn't have a personal email server or federated social networking, such as GNU Social and Diaspora, to provide a privacy-respecting alternative to Facebook. That's not because they won't run on a FreedomBox; they will. The developers just haven't yet been able to make them easy enough for anyone to set up, rather than only someone with Linux sysadmin chops. That will come in time.

As the Foundation stated, "The word 'Pioneer' was included in the name of these kits in order to emphasize the leadership required to run a FreedomBox in 2019. Users will be pioneers both because they have the initiative to define this new frontier and because their feedback will make FreedomBox better for its next generation of users."

To help you get up to speed, the FreedomBox community will be offering free technical support for owners of the Pioneer Edition FreedomBox servers on its support forum. The Foundation also welcomes new developers to help it perfect the FreedomBox platform.

Why do this?  Eben Moglen, Professor of Law at Columbia Law School, saw the mess we were heading toward almost 10 years ago: "Mr. Zuckerberg has attained an unenviable record: he has done more harm to the human race than anybody else his age." That was before Facebook proved itself to be totally incompetent with security and sold off your data to Cambridge Analytica to scam 50 million US Facebook users with personalized anti-Clinton and pro-Trump propaganda in the 2016 election.

It didn't have to be that way. In an interview, Moglen told me this: "Concentration of technology is a surprising outcome of cheap hardware and free software. We could have had a world of peers. Instead, the net we built is the net we didn't want. We're in an age of surveillance with centralized control. We're in a world, which encourages swiping, clicking, and flame throwing."

With FreedomBox, "We can undo this. We can make it possible for ordinary people to provide internet services. You can have your own private messaging, services without a man in the middle watching your every move." 

We can, in short, rebuild the internet so that we, and not multi-billion dollar companies, are in charge.

I like this plan.

Categorized in Internet Privacy

[This article is originally published in dailytrust.com.ng written by Zakariyya Adaramola - Uploaded by AIRS Member: David J. Redcliff]

Google is giving users back some control over their data. The internet giant is introducing a new feature in account settings that will allow users to delete location, web, and app activity data automatically. The tools will become available in the coming weeks, according to Google.

Now, instead of requiring users to delete the data manually, Google is adding auto-delete controls. ‘‘We work to keep your data private and secure, and we’ve heard your feedback that we need to provide simpler ways for you to manage or delete it,’’ the firm explained in a blog post.



‘‘…We’re announcing auto-delete controls that make it even easier to manage your data.’’

Now, users can select a time limit for how long Google can hold onto their data.

Users select the option in settings that says ‘Choose to delete automatically.’

From there, they can choose between letting Google preserve their data for three months or 18 months.
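Under the hood, a retention policy like this is conceptually simple: anything older than the chosen window is dropped. The toy Python sketch below illustrates the idea only; it is not how Google actually implements auto-delete, and the record format is invented.

    # Toy illustration of a rolling auto-delete policy (not Google's implementation):
    # keep only activity records newer than the retention window the user picked.
    from datetime import datetime, timedelta, timezone

    RETENTION_CHOICES = {"3 months": 90, "18 months": 540}  # approximate day counts

    def prune(records, choice, now=None):
        """Return only the records whose timestamp falls inside the retention window."""
        now = now or datetime.now(timezone.utc)
        cutoff = now - timedelta(days=RETENTION_CHOICES[choice])
        return [r for r in records if r["timestamp"] >= cutoff]

    activity = [
        {"query": "coffee near me", "timestamp": datetime(2019, 1, 2, tzinfo=timezone.utc)},
        {"query": "train times", "timestamp": datetime(2019, 4, 20, tzinfo=timezone.utc)},
    ]
    # With a three-month window checked on May 1, 2019, only the April record survives.
    print(prune(activity, "3 months", now=datetime(2019, 5, 1, tzinfo=timezone.utc)))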

‘‘You should always be able to manage your data in a way that works best for you – and we’re committed to giving you the best controls to make that happen’’,  the firm added.

The company said the feature is rolling out for location history and web and app activity to start, which suggests it could launch for more kinds of data in the future.

The move follows an explosive report last year from the Associated Press, which found that several Google apps and websites store user location even if users turned off Location History.

Following an investigation, the AP found that even with Location History turned off, Google stores user location when, for instance, the Google Maps app is opened, or when users conduct Google searches that aren’t related to location.

Researchers found Google logs a record of your current location each time you open its turn-by-turn navigation app, Google Maps.

Categorized in Internet Search

[This article is originally published in newyorker.com written By Ned Beauman - Uploaded by AIRS Member: Jennifer Levin]

An open-source investigation is a tool anybody can use; as it spreads, it will inevitably mingle with the sort of delirium and propaganda that Eliot Higgins has always meant it to cut through.

On a recent afternoon in central London, twelve people sat in a hotel conference room trying to figure out the exact latitude and longitude at which the actress Sharon Stone once posed for a photo in front of the Taj Mahal. Among them were two reporters, a human-rights lawyer, and researchers and analysts in the fields of international conflict, forensic science, online extremism, and computer security. They had each paid around twenty-four hundred dollars to join a five-day workshop led by Eliot Higgins, the founder of the open-source investigation Web site Bellingcat. Higgins had chosen this Sharon Stone photo because the photographer was standing on a raised terrace, which makes the angles confusing, and used a lens that makes Stone appear closer to the Taj than she actually was. The participants, working on laptops, compared the trees and paths visible in the photo to their correlates on Google Earth.

Stone’s location on that day—the northwest corner of the Great Gate—may not have been of grave historical importance, but the participants were employing the same techniques that have underlaid Bellingcat’s news-making investigations into subjects such as the downing of Malaysia Airlines Flight 17, over Ukraine, and the use of chemical weapons by the Syrian Army. When Higgins was profiled in The New Yorker, in 2013, he was still a lone blogger calling himself Brown Moses, and the field of open-source investigation—the microscopic examination of publicly available material such as satellite images, social-media posts, YouTube videos, and online databases to uncover the truth about disputed events—was in its infancy. Today, it is firmly established. Last year, the International Criminal Court issued, for the first time, an arrest warrant based solely on video evidence from social media, and the recent report on gas attacks in Syria by the Organization for the Prohibition of Chemical Weapons leans heavily on images from Google Earth that are annotated in a Bellingcat style. Meanwhile, open-source investigation reached a new audience this spring when the research agency Forensic Architecture, which has often collaborated with Bellingcat, was the subject of a show at London’s Institute of Contemporary Arts. (It has since been shortlisted for the Turner Prize.)

Higgins, who lives in Leicester with his wife and two young children, is now fielding ever more interest from journalists, N.G.O.s, corporations, universities, and government agencies, eager for his expertise. One of the participants in the London workshop I attended, Christoph Reuter, is the Beirut-based Middle East correspondent for Der Spiegel, and has worked as a reporter for three decades; when I asked him about Higgins, he made a gesture of worshipfully bowing down. Higgins started Bellingcat with a Kickstarter campaign, in 2014, but today almost half of its funding comes from these paid workshops, which he has been running since last spring, with the first in the U.S.—in Washington, D.C., New York, and San Francisco—planned for later this year. Higgins is also developing a partnership with the Human Rights Center at the University of California, Berkeley, School of Law, and hopes to hire enough staff to expand Bellingcat’s coverage into Latin America.

Higgins’s work is animated by his leftist, anti-authoritarian politics. One of the workshop attendees, a Middle East analyst named Robert, didn’t want his full name used in this article because certain factions in the region may see any association with Bellingcat as suspicious. But an open-source investigation is a tool anybody can use; as it spreads, it will inevitably mingle with the sort of delirium and propaganda that Higgins has always meant it to cut through. Crowdsourced Reddit investigations into Pizzagate or QAnon often appear, at first glance, not so different from a Bellingcat report, full of marked-up screenshots from Google Maps or Facebook. Even on the mainstream-liberal side, a new conspiracy culture sees anti-Trump Twitter celebrating any amateur detective who can find a suspicious detail about Jared Kushner in a PDF.

At the same time, the Russian government, which has derided Bellingcat’s open-source investigations in the past, now issues satellite images of bombings in Syria, inviting members of the public to look closely and see for themselves; RT, the state-sponsored Russian news channel, has launched its own “digital verification” blog, seemingly modeled on Bellingcat. In ramping up both reporting and training, expanding Bellingcat into some combination of news magazine and academy, Higgins is working to, in his words, “formalize and structure a lot of the work we’ve been doing” at a moment when the methods he helped pioneer are more than ever threatened by distortion and misuse.

I asked Higgins whether he excludes anyone from these workshops. “We’re going to start explicitly saying that people from intelligence agencies aren’t allowed to apply,” he said. “They’re asking more and more. But we don’t really want to be training them, and it’s awkward for everybody in the room if there’s an M.I.5 person there.” I asked how he’d feel if a citizen journalist signed up with the intention of demonstrating that, say, many of the refugee children who apply for asylum in the U.S. are actually grizzled adult criminals. He said he’d let that person join. “If they want to use these techniques to do that reporting, and it’s an honest investigation, then the answers should be honest either way. They should find out they can’t prove their ideas. And maybe they’ll learn that their ideas aren’t as solid as they thought.” Ultimately, Higgins respects good detective work, no matter where it comes from. At one point in the workshop, he showed the group a video about 4chan users taking only thirty-seven hours to find and steal a flag emblazoned with “he will not divide us,” which Shia LaBeouf had erected in front of an Internet-connected camera in a secret location as a political art project. “4chan is terrible,” Higgins said. “But sometimes they do really amazing open-source investigations just to annoy people.”

After Sharon Stone, there were several more geolocation exercises, concluding, on Day Two, with a joint effort to piece together several dozen photos of the M2 Hospital, in Aleppo, after it was bombed by pro-government forces. The challenge was to use tiny details to figure out exactly how they connected together in three-dimensional space: to determine, for instance, whether two photos that showed very similar-looking chain-link barriers were actually of the same chain-link barrier from different angles. “Most of my pictures are of rubble, which is super-helpful,” Diane Cooke, a Ph.D. student at King’s College London’s Centre for Science and Security Studies, said.

Higgins mentioned that he had left out all the gory photos, but nevertheless this exercise was a war crime turned into a jigsaw puzzle. Earlier, he had paused a video projection on a frame of a nerve-gassed Syrian child’s constricted pupil, which stared down at us for an uncomfortably long time. “I am careful about that,” he told me when I asked him about his approach to such horrors. The example which most frequently upsets people, he said, is a Bellingcat investigation into a mass execution by the Libyan National Army, in 2017: fifteen dark blots are visible against the sand on a satellite image taken later the same day. “It’s horrible, but it’s such a good example,” Higgins said. “And if you’re geolocating bloodstains, you’ve got to show the bloodstains.”

Afterward, it was time for lunch outside in the sun. Robert, the Middle East analyst, complained that he had “geolocation vision”: after a few hours of these exercises, it is impossible to look around without noting the minute textures of the built environment, the cracks in the sidewalk and the soot on the walls.

Days Four and Five of a Bellingcat workshop give the participants a chance to practice the skills they’ve just learned by launching their own investigations. Earlier this year, when Christiaan Triebert, a Bellingcat investigator, was mugged by two men on mopeds while he was in London to teach a Bellingcat workshop, he recruited his workshop participants to help him investigate the city’s moped gangs. (“My adrenaline turned into that energy—like, ‘This is pretty interesting!’ ” he recalled. “We were basically analyzing the Instagram profiles, mapping out the networks, who is friends with whom and where are they operating.”) Triebert has also run workshops in several countries where reporters are under threat of death. In Iraq, for instance, he trained reporters from al-Ghad, a radio station broadcasting into ISIS-occupied Mosul. “Some of their friends and colleagues got slaughtered by ISIS militants, and there was the video of it—they were drowned in a cage in a swimming pool. They said, ‘We really want to know where this happened, so if Mosul ever gets recaptured we can visit, but also just to see where they murdered our friends.’ We started mapping out Mosul swimming pools, and within an hour they found it.”

In the London workshop, the participants split up into three teams: one was trying to geolocate a video showing a bombing by U.S. forces somewhere in Damascus; another was analyzing the connection between water shortages in Iraq and the filling of the Ilisu dam, in Turkey; a third was investigating the leaders of a recent rally in London protesting the jailing of the far-right activist Tommy Robinson. The room took on the atmosphere of a newsroom. By the afternoon, the Damascus team had divided its labor: Higgins and Reuter were pursuing a single electricity pylon in the background of the murky green night-vision footage, which they thought would be enough to geolocate the bombing; Marwan El Khoury, a forensic-science Ph.D. candidate at the University of Leicester, was trying to pick out Polaris from the constellations briefly visible in the sky, in the hopes of determining the orientation of the camera; and Beini Ye, a lawyer with the Open Society Justice Initiative, was combing through relevant news reports. “Nobody has ever been able to geolocate this video, so it’s a matter of pride,” Higgins said.

On the last day, pizza was ordered so the three teams could work through lunch. At the deadline of 2 p.m., Robert, representing the Ilisu team, got up first. “We haven’t found anything spectacularly new,” he said, “but we’ve discovered that a claim by the Iraqi Water Ministry might be more or less correct. That sounds really boring, but I think it’s important.”

The Tommy Robinson team was next. It had found out that “Danny Tommo,” one of the pseudonymous organizers of the pro-Tommy Robinson protest, was already known to the police under his real name. To some laughter, they displayed a headline from a Portsmouth newspaper reading “Bungling armed kidnappers jailed for ‘stupid’ attempt.”

Five minutes before the deadline, there had been a burst of excitement from the Damascus team: Higgins had remembered that a Russian news service had put GoPro cameras on the front of tanks when embedded with Syrian armed forces in 2015. In one YouTube video, the tank jolted from the recoil after firing its gun, and for a moment it was possible to see a pylon on the horizon—which was helpful, Higgins explained, but not quite enough. Still, that didn’t mean the crucial pylon would never be found: some photos from Bellingcat investigations have taken as long as two years to be geolocated. “It’s good that it was really hard,” Higgins told me later. “You have to rewire how people think about images. So they become really aware of how the world is constructed.”

Categorized in Investigative Research

[This article is originally published in searchenginejournal.com written by Matt Southern - Uploaded by AIRS Member: Jeremy Frink]

Google published a 30-page white paper with details about how the company fights disinformation in Search, News, and YouTube.

Here is a summary of key takeaways from the white paper.

What is Disinformation?

Everyone has different perspectives on what is considered disinformation, or “fake news.”

Google says it becomes objectively problematic to users when people make deliberate, malicious attempts to deceive others.

“We refer to these deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web as ‘disinformation.’”

So that’s what the white paper means when it uses the term “disinformation.”

How Does Google Fight Disinformation?

Google admits it’s challenging to fight disinformation because it’s near-impossible to determine the intent behind a piece of content.

The company has designed a framework for tackling this challenge, built around the following three strategies.

1. Make content count

Information is organized by ranking algorithms, which are geared toward surfacing useful content and not fostering ideological viewpoints.

2. Counteract malicious actors

Algorithms alone cannot verify the accuracy of a piece of content, so Google has invested in systems that can reduce spammy behaviors at scale. It also relies on human reviewers.

3. Give users more context

Google provides more context to users through mechanisms such as:

  • Knowledge panels
  • Fact-check labels
  • “Full Coverage” function in Google News
  • “Breaking News” panels on YouTube
  • “Why this ad” labels on Google Ads
  • Feedback buttons in search, YouTube, and advertising products

Fighting Disinformation in Google Search & Google News

As SEOs, we know Google uses ranking algorithms and human evaluators to organize search results.

Google’s white paper explains this in detail for those who may not be familiar with how search works.

Google notes that Search and News share the same defenses against spam, but they do not employ the same ranking systems and content policies.

For example, Google Search does not remove content except in very limited circumstances, whereas Google News is more restrictive.

Contrary to popular belief, Google says, there is very little personalization in search results based on users’ interests or search history.

Fighting Disinformation in Google Ads

Google looks for and takes action against attempts to circumvent its advertising policies.

Policies to tackle disinformation on Google’s advertising platforms are focused on the following types of behavior:

  • Scraped or unoriginal content: Google does not allow ads for pages with insufficient original content, or pages that offer little to no value.
  • Misrepresentation: Google does not allow ads that intend to deceive users by excluding relevant information or giving misleading information.
  • Inappropriate content: Ads are not allowed for shocking, dangerous, derogatory, or violent content.
  • Certain types of political content: Ads for foreign influence operations are removed and the advertisers’ accounts are terminated.
  • Election integrity: Additional verification is required for anyone who wants to purchase an election ad on Google in the US.

Fighting Disinformation on YouTube

Google's policy is to keep content on YouTube unless it is in direct violation of the community guidelines.

The company is more selective of content when it comes to YouTube’s recommendation system.

Google aims to recommend quality content on YouTube, while less frequently recommending content that comes close to violating the community guidelines without quite doing so.

Content that could misinform users in harmful ways, or low-quality content that may result in a poor experience for users (like clickbait), is also recommended less frequently.

More Information

For more information about how Google fights disinformation across its properties, download the full PDF here.

Categorized in Search Engine

[This article is originally published in searchenginejournal.com written by Roger Montti - Uploaded by AIRS Member: Anthony Frank]

Ahrefs CEO Dmitry Gerasimenko announced a plan to create a search engine that supports content creators and protects users' privacy. Dmitry laid out his proposal for a more free and open web, one that rewards content creators directly from search revenue with a 90/10 split in favor of publishers.

The Goal for the New Search Engine

Dmitry seeks to correct several trends at Google that he feels are bad for users and publishers. The two problems he seeks to solve are privacy and the monetization crisis felt by publishers big and small.

1. Believes Google is Hoarding Site Visitors

Dmitry tweeted that Google is increasingly keeping site visitors to itself, resulting in less traffic to the content creators.

“Google is showing scraped content on search results page more and more so that you don’t even need to visit a website in many cases, which reduces content authors’ opportunity to monetize.”

2. Seeks to Pry the Web from Privatized Access and Control

Gatekeepers to web content (such as Google and Facebook) exercise control over what kinds of content are allowed to reach people. The gatekeepers shape how content is produced and monetized. Dmitry seeks to wrest the monetization incentive away from the gatekeepers and put it back into the hands of publishers, to encourage more innovation and better content.

“Naturally such a vast resource, especially free, attracts countless efforts to tap into it, privatize and control access, each player pulling away their part, tearing holes in the tender fabric of this unique phenomena.”

3. Believes Google’s Model is Unfair

Dmitry noted that Google’s business model is unfair to content creators. By sharing search revenue, sites like Wikipedia wouldn’t have to go begging for money.

He then described how his search engine would benefit content publishers and users:

“Remember that banner on Wikipedia asking for donation every year? Wikipedia would probably get few billions from its content in profit share model. And could pay people who polish articles a decent salary.”

4. States that a Search Engine Should Encourage Publishers and Innovation

Dmitry stated that a search engine's job of imposing structure on the chaos of the web should encourage the growth of quality content, much as a trellis supports a vine and lets it reach more sunlight and grow.

“…structure wielded upon chaos should not be rigid and containing as a glass box around a venomous serpent, but rather supporting and spreading as a scaffolding for the vine, allowing it to flourish and grow new exciting fruits for humanity to grok and cherish.

For chaos needs structure to not get torn apart by its own internal forces, and structure needs chaos as a sampling pool of ideas to keep evolution rolling.”

Reaction to Announcement

The reaction on Twitter was positive.

Russ Jones of Moz tweeted:

[Tweet embedded in the original article]

Several industry leaders generously offered their opinions.

Jon Henshaw

Jon Henshaw (@henshaw) is a Senior SEO Analyst at CBSi (CBS, GameSpot, and Metacritic) and founder of Coywolf.marketing, a digital marketing resource. He offered this assessment:

“I appreciate the sentiment and reasons for why Dmitry wants to build a search engine that competes with Google. A potential flaw in the entire plan has to do with searchers themselves.

Giving 90% of profit to content creators does not motivate the other 99% of searchers that are just looking for relevant answers quickly. Even if you were to offer incentives to the average searcher, it wouldn’t work. Bing and other search engines have tried that over the past several years, and they have all failed.

The only thing that will compete with Google is a search engine that provides better results than Google. I would not bet my money on Ahrefs being able to do what nobody else in the industry has been able to do thus far.”

Ryan Jones

Ryan Jones (@RyanJones), a search marketer who also publishes WTFSEO.com, said:

“This sounds like an engine focused on websites not users. So why would users use it?

There is a massive incentive to spam here, and it will be tough to control when the focus is on the spammer not the user.

It’s great for publishers, but without a user-centric focus or better user experience than Google, the philanthropy won’t be enough to get people to switch.”

Tony Wright

Tony Wright (@tonynwright) of search marketing agency WrightIMC shared a similar concern about getting users on board. An enthusiastic user base is what makes any online venture succeed.

“It’s an interesting idea, especially in light of the passage of Article 13 in the EU yesterday.

However, I think that without proper capitalization, it’s most likely to be a failed effort. This isn’t the early 2000’s.

The results will have to be as good or better than Google to gain traction, and even then, getting enough traction to make it economically feasible will be a giant hurdle.

I like the idea of compensating publishers, but I think policing the scammers on a platform like this will most likely be the biggest cost – even bigger than infrastructure.

It’s certainly an ambitious play, and I’ll be rooting for it. But based on just the tweets, it seems like it may be a bit too ambitious without significant capitalization.”

Announcement Gives Voice to Complaints About Google

The announcement echoes complaints by publishers who feel they are struggling. The news industry has been in crisis mode for over a decade trying to find a way to monetize digital content consumption. AdSense publishers have been complaining for years of dwindling earnings.

Estimates say that Google earns $16.5 million per hour from search advertising. When publishers ask how to improve earnings and traffic, Google's encouragement to "be awesome" has increasingly acquired a tone of "Let them eat cake."

A perception has set in that the entire online search ecosystem is struggling except for Google.

The desire for a new search engine has been around for several years. This is why DuckDuckGo has been received so favorably by the search marketing community. This announcement gives voice to long-simmering complaints about Google.

The reaction on Twitter was almost cathartic and generally enthusiastic because of the longstanding perception that Google is not adequately supporting the content creators upon which Google earns billions.

Will this New Search Engine Happen?

Whether this search engine lifts off remains to be seen. No release date has been announced, and the scale of the project is huge; it's almost the online equivalent of going to the moon.

 

[This article is originally published in seroundtable.com written by Barry Schwartz - Uploaded by AIRS Member: Jason Bourne]

John Mueller from Google explained on Twitter the difference between the lastmod timestamp in an XML sitemap and the date shown on a web page. John said the sitemap lastmod is when the page as a whole was last changed, for crawling/indexing purposes. The date on the page is the date to be associated with the primary content on the page.

John Mueller first said, "A page can change without its primary content changing." He added that he doesn't think "crawling needs to be synced to the date associated with the content." The example he gave: "site redesigns or site moves are pretty clearly disconnected from the content date."
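To make the distinction concrete, here is a small Python sketch that builds a sitemap <url> entry using the loc and lastmod elements from the sitemaps.org protocol; the URL and dates are invented for illustration.

    # Sketch: <lastmod> can move whenever anything on the page changes (for example,
    # a redesign), while the visible article date stays tied to the primary content.
    from datetime import date
    from xml.sax.saxutils import escape

    def sitemap_entry(loc, last_modified):
        """Build one <url> entry for an XML sitemap (sitemaps.org protocol)."""
        return (
            "  <url>\n"
            "    <loc>" + escape(loc) + "</loc>\n"
            "    <lastmod>" + last_modified.isoformat() + "</lastmod>\n"
            "  </url>"
        )

    content_date = date(2018, 6, 1)   # the date shown on the page with the article
    page_changed = date(2019, 3, 15)  # a redesign touched the page; content unchanged

    print(sitemap_entry("https://example.com/post", page_changed))
    # The on-page date stays 2018-06-01; only <lastmod> reflects the redesign.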

He then added this tweet:

[Tweet embedded in the original article]

So there you have it.

Forum discussion at Twitter.

Categorized in Search Engine

[This article is originally published in thenextweb.com written by Abhimanyu Ghoshal - Uploaded by AIRS Member: Carol R. Venuti]

The European Union is inching closer to enacting sweeping copyright legislation that would require platforms like Google and Facebook to pay publishers for the privilege of displaying their content to users, as well as to monitor copyright infringement by users on the sites and services they manage.

That’s set to open a Pandora’s box of problems that could completely derail your internet experience, because it would essentially disallow platforms from displaying content from other sources. In a screenshot shared with Search Engine Land, Google illustrated how this might play out in its search results for news articles:

[Image: an example of what Google’s search results for news might look like if the EU goes ahead with its copyright directive]

As you can see, the page looks empty, because it’s been stripped of all copyrighted content – headlines, summaries and images from articles from various publishers.

Google almost certainly won’t display unusable results like these, but it will probably only feature content from publishers it’s cut deals with (and it’s safe to assume that’s easier for larger companies than small ones).

That would reduce the number of sources of information you’ll be able to discover through the search engine, and it’ll likely lead to a drop in traffic for media outlets. It’s a lose-lose situation, and it’s baffling that EU lawmakers don’t see this as a problem – possibly because they’re fixated on how this ‘solution’ could theoretically benefit content creators and copyright holders by ruling that they must be paid for their output.

It isn’t yet clear when the new copyright directive will come into play – there are numerous processes involved that could take until 2021 before it’s implemented in EU countries’ national laws. Hopefully, the union’s legislators will see sense well before that and put a stop to this madness.

Update: We’ve clarified in our headline that this is Google’s opinion of how its search service will be affected by the upcoming EU copyright directive; it isn’t yet clear how it will eventually be implemented.

Categorized in Search Engine

[This article is originally published in blogs.scientificamerican.com written by Daniel M. Russell and Mario Callegaro - Uploaded by AIRS Member: Rene Meyer] 

Researchers who study how we use search engines share common mistakes, misperceptions, and advice

In a cheery, sunshine-filled fourth-grade classroom in California, the teacher explained the assignment: write a short report about the history of the Belgian Congo at the end of the 19th century, when Belgium colonized this region of Africa. One of us (Russell) was there to help the students with their online research methods.

I watched in dismay as a young student slowly typed her query into a smartphone. This was not going to end well. She was trying to find out which city was the capital of the Belgian Congo during this time period. She reasonably searched [ capital Belgian Congo ] and in less than a second, she discovered that the capital of the Democratic Republic of Congo is Kinshasa, a port town on the Congo River. She happily copied the answer into her worksheet.

But the student did not realize that the Democratic Republic of Congo is a completely different country than the Belgian Congo, which used to occupy the same area. The capital of that former country was Boma until 1926 when it was moved to Léopoldville (which was later renamed Kinshasa). Knowing which city was the capital during which time period is complicated in the Congo, so I was not terribly surprised by the girl’s mistake.

The deep problem here is that she blindly accepted the answer offered by the search engine as correct. She did not realize that there is a deeper history here.

We Google researchers know this is what many students do—they enter the first query that pops into their heads and run with the answer. Double-checking and going deeper are skills that come only with a great deal of practice—and perhaps a bunch of answers marked wrong on important exams. Students often do not have a great deal of background knowledge to flag a result as potentially incorrect, so they are especially susceptible to misguided search results like this.

In fact, a 2016 report by Stanford University education researchers showed that most students are woefully unprepared to assess content they find on the web. For instance, the scientists found that 80 percent of students at U.S. universities are not able to determine if a given web site contains credible information. And it is not just students; many adults share these difficulties.

If she had clicked through to the linked page, the girl probably would have started reading about the history of the Belgian Congo, and found out that it has had a few hundred years of wars, corruption, changes in rulers and shifts in governance. The name of the country changed at least six times in a century, but she never realized that because she only read the answer presented on the search engine results page.

Asking a question of a search engine is something people do several billion times each day. It is the way we find the phone number of the local pharmacy, check on sports scores, read the latest scholarly papers, look for news articles, find pieces of code, and shop. And although searchers look for true answers to their questions, the search engine returns results that are attuned to the query, rather than some external sense of what is true or not. So a search for proof of wrongdoing by a political candidate can return sites that purport to have this information, whether or not the sites or the information are credible. You really do get what you search for.

In many ways, search engines make our metacognitive skills come to the foreground. It is easy to do a search that plays into your confirmation bias—your tendency to think new information supports views you already hold. So good searchers actively seek out information that may conflict with their preconceived notions. They look for secondary sources of support, doing a second or third query to gain other perspectives on their topic. They are constantly aware of what their cognitive biases are, and greet whatever responses they receive from a search engine with healthy skepticism.

For the vast majority of us, most searches are successful. Search engines are powerful tools that can be incredibly helpful, but they also require a bit of understanding to find the information you are actually seeking. Small changes in how you search can go a long way toward finding better answers.

The Limits of Search

It is not surprising or uncommon that a short query may not accurately reflect what a searcher really wants to know. What is actually remarkable is how often a simple, brief query like [ nets ] or [ giants ] will give the right results. After all, both of those words have multiple meanings, and a search engine might conclude that searchers were looking for information on tools to catch butterflies, in the first case, or larger-than-life people in the second. Yet most users who type those words are seeking basketball- and football-related sites, and the first search results for those terms provide just that. Even the difference between a query like [ the who ] versus [ a who ] is striking. The first set of results is about a classic English rock band, whereas the second query returns references to a popular Dr. Seuss book.

But search engines sometimes seem to give the illusion that you can ask anything about anything and get the right answer. Just like the student in that example, however, most searchers overestimate the accuracy of search engines and their own searching skills. In fact, when Americans were asked to self-rate their searching ability by the Pew Research Center in 2012, 56 percent rated themselves as very confident in their ability to use a search engine to answer a question.

Not surprisingly, the highest confidence scores were for searchers with at least some college education (64 percent were “very confident”—by contrast, 45 percent of those who did not have a college degree described themselves that way). Age affects this judgment as well, with 64 percent of those under 50 describing themselves as “very confident,” as opposed to only 40 percent of those older than 50. When talking about how successful they are in their searches, 29 percent reported that they can always find what they are looking for, and 62 percent said they are able to find an answer to their questions most of the time. In surveys, most people tell us that everything they want is online, and conversely, if they cannot find something via a quick search, then it must not exist, it might be out of date, or it might not be of much value.

These are the most recent published results, but we have seen in surveys done at Google in 2018 that these insights from Pew still hold. What was true in 2012 is still true now: people have great confidence in their ability to search. The only significant change is in their reported success rates, which have crept up: 35 percent now say they can "always find" what they're looking for, while 73 percent say they can find what they seek "most of the time." This increase is largely due to improvements in the search engines, which improve their data coverage and algorithms every year.

What Good Searchers Do

As long as information needs are easy, simple searches work reasonably well. Most people actually do less than one search per day, and most of those searches are short and commonplace. The average query length on Google during 2016 was 2.3 words. Queries are often brief descriptions like: [ quiche recipe ] or [ calories in chocolate ] or [ parking Tulsa ].

And somewhat surprisingly, most searches have been done before. In an average day, less than 12 percent of all searches are completely novel—that is, most queries have already been entered by another searcher in the past day. By design, search engines have learned to associate short queries with the targets of those searches by tracking pages that are visited as a result of the query, making the results returned both faster and more accurate than they otherwise would have been.
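A toy version of that query-to-click association might look like the following Python sketch. It illustrates the general idea only; it is not a description of Google's actual systems, and the click log is invented.

    # Toy model of query-click association: count which page gets clicked for each
    # query, then answer repeat queries from those counts.
    from collections import defaultdict, Counter

    click_log = [
        ("nets", "https://www.nba.com/nets"),
        ("nets", "https://www.nba.com/nets"),
        ("nets", "https://en.wikipedia.org/wiki/Fishing_net"),
        ("giants", "https://www.giants.com"),
    ]

    clicks_per_query = defaultdict(Counter)
    for query, clicked_url in click_log:
        clicks_per_query[query][clicked_url] += 1

    def best_guess(query):
        """Return the most frequently clicked page for a previously seen query."""
        counts = clicks_per_query.get(query)
        return counts.most_common(1)[0][0] if counts else None

    print(best_guess("nets"))  # the basketball site wins on click counts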

A large fraction of queries are searches for another website (called navigational queries, which make up as much as 25 percent of all queries), or for a short factual piece of information (called informational queries, which are around 40 percent of all queries). However, complex search tasks often need more than a single query to find a satisfactory answer. So how can you do better searches? 

First, you can modify your query by changing a term in your search phrase, generally to make it more precise or by adding additional terms to reduce the number of off-topic results. Very experienced searchers often open multiple browser tabs or windows to pursue different avenues of research, usually investigating slightly different variations of the original query in parallel.

You can see good searchers rapidly trying different search queries in a row, rather than just being satisfied with what they get with the first search. This is especially true for searches that involve very ambiguous terms—a query like [animal food] has many possible interpretations. Good searchers modify the query to get to what they need quickly, such as [pet food] or [animal nutrition], depending on the underlying goal.

Choosing the best way to phrase your query means adding terms that:

  • are central to the topic (avoid peripheral terms that are off-topic)
  • you know the definition of (do not guess at a term if you are not certain)
  • leave common terms together in order ( [ chow pet ] is very different than [ pet chow ])
  • keep the query fairly short (you usually do not need more than two to five terms)

You can make your query more precise by limiting the scope of a search with special operators. The most powerful operators include double-quote marks: the query [ "exponential growth occurs when" ] finds only documents containing that phrase in that specific order. Two other commonly used search operators are site: and filetype:. These let you search within only one website (such as [ site:ScientificAmerican.com ]) or for a particular file type, such as a PDF file (example: [ filetype:pdf coral bleaching ]).
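The operator syntax is easy to assemble programmatically as well. The short Python sketch below combines plain terms, a quoted phrase, and the site: and filetype: operators into a single query string; the operators are the standard ones described above, but the helper itself is only an illustration.

    # Combine plain terms with a quoted phrase and scope operators into one query.
    from urllib.parse import quote_plus

    def build_query(terms, phrase=None, site=None, filetype=None):
        """Assemble a search query using quotes, site:, and filetype: operators."""
        parts = list(terms)
        if phrase:
            parts.append('"' + phrase + '"')        # exact phrase, in order
        if site:
            parts.append("site:" + site)            # restrict to one website
        if filetype:
            parts.append("filetype:" + filetype)    # restrict to one document type
        return " ".join(parts)

    q = build_query(["coral", "bleaching"], site="ScientificAmerican.com", filetype="pdf")
    print(q)  # coral bleaching site:ScientificAmerican.com filetype:pdf
    print("https://www.google.com/search?q=" + quote_plus(q))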

Second, try to understand the range of possible search options. Recently, search engines added the capability of searching for images that are similar to a photo you upload. A searcher who knows this can find photos online that have features resembling those in the original. By clicking through the similar images, a searcher can often find information about the object (or place) in the image. Searching for matches of my favorite fish photo can tell me not just what kind of fish it is, but also provide links to other fishing locations and ichthyological descriptions of this fish species.

Overall, expert searchers use all of the resources of the search engine and their browsers to search both deeply (by making query variations) and broadly (by having multiple tabs or windows open). Effective searchers also know how to limit a search to a particular website or to a particular kind of document, find a phrase (by using quote marks to delimit the phrase), and find text on a page (by using a text-find tool).

Third, learn some cool tricks. One is the find-text-on-page skill (that is, Command-F on Mac, Control-F on PC), which is unfamiliar to around 90 percent of the English-speaking, Internet-using population in the US. In our surveys of thousands of web users, the large majority have to do a slow (and error-prone) visual scan for a string of text on a web site. Knowing how to use text-finding commands speeds up your overall search time by about 12 percent (and is a skill that transfers to almost every other computer application).

Fourth, use your critical-thinking skills.  In one case study, we found that searchers looking for the number of teachers in New York state would often do a query for [number of teachers New York ], and then take the first result as their answer—never realizing that they were reading about the teacher population of New York City, not New York State. In another study, we asked searchers to find the maximum weight a particular model of baby stroller could hold. How big could that baby be?

The answers we got back varied from two pounds to 250 pounds. At both ends of the spectrum, the answers make no sense (few babies in strollers weigh less than five pounds or more than 60 pounds), but inexperienced searchers just assumed that whatever numbers they found correctly answered their search questions. They did not read the context of the results with much care.  

Search engines are amazingly powerful tools that have transformed the way we think of research, but they can hurt more than help when we lack the skills to use them appropriately and evaluate what they tell us. Skilled searchers know that the ranking of results from a search engine is not a statement about objective truth, but about the best matching of the search query, term frequency, and the connectedness of web pages. Whether or not those results answer the searchers’ questions is still up for them to determine.

Categorized in Search Engine
