[Source: This article was Published in thesun.co.uk By Sean Keach - Uploaded by the Association Member: Anna K. Sasaki]

GOOGLE MAPS has a new feature in the works that will warn you if a rogue taxi driver is taking you off-route.

It could put a stop to conmen drivers who take passengers out of their way to rack up journey charges.

 When you input a destination, you can choose new safety options

The "off-route alerts" will flag to users when you're sidetracked from a journey by more than 500 meters.

The feature was first revealed by tech site XDA Developers, which spotted it in the live version of Google Maps.

However, the feature appears to be stuck in "testing" right now, which means not everyone can use it.

But if it comes to Google Maps more generally, it could save Brits loads of cash.

 One of the options lets you receive warnings if you go off-route

 Google Maps will alert you if you've strayed more than 500 metres from the fastest route. Credit: XDA Developers / Google Maps

First, simply set up the journey you want to take in Google Maps when you're getting a taxi.

Before you hit "Start", you'll see a new option called "Stay Safer" that you can press.

Inside you'll find another option to "get off-route alerts", which promises: "Get an alert if your taxi or ride goes off route."

When you start the journey, it will tell you if you're still on route.

And if you go off the route by more than 500 metres, you'll receive an alert on your phone.

That would prompt you to ask your driver why you're going the wrong way – and whether or not the route can be corrected.

But if the feature becomes popular, it could put rogue drivers off even trying to illicitly extend your trip in the first place.

How to see Google's map tracking everywhere you've been

Here's what you need to know...

There are several ways to check your own Google Location History.

The easiest way is to follow the link to the Google Maps Timeline page:

This lets you see exactly where you've been on a given day, even tracking your methods of travel and the times you were at certain locations.

Alternatively, if you've got the Google Maps app, launch it and press the hamburger icon – three horizontal lines stacked on top of each other.

Then go to the Your Timeline tab, which will show places you've previously visited on a given day.

If you've had Google Location History turned on for a few years without realizing, this might be quite shocking.

Suddenly finding out that Google has an extremely detailed map of years of your real-world movements can seem creepy – so you might want to turn the feature off.

The good news is that it's possible to immediately turn Google Location History off at any time.

You can turn off Location History here:

However, to truly stop Google from tracking you, you'll also need to turn off Web & App Activity tracking.

You can see your tracked location markers here:

Unfortunately, these location markers are intermingled with a host of other information, so they're tricky to locate (and delete).

To turn it off, simply click the above link then head to Activity Controls.

From there, you'll be able to turn off Web & App Activity tracking across all Google sites, apps and services.

Of course, some taxi drivers know shortcuts that can shave time off a Google Maps route, so don't immediately panic if you find yourself in a cab going the wrong way.

And it'll probably get on the nerves of seasoned cabbies who will hate being second-guessed by phone-wielding Brits.

It's not clear when Google will roll out the off-route alerts feature to all phones.

We've asked Google for comment and will update this story with any response.

Categorized in Search Engine

[Source: This article was Published in msn.com By JR Raphael - Uploaded by the Association Member: Edna Thomas]

Google Maps is great for helping you find your way — or even helping you find your car — but the app can also help other people find you.

Maps has an easily overlooked feature for sharing your real-time whereabouts with someone so they can see exactly where you are, even if you’re moving, and then navigate to your location. You can use the same feature to let a trusted person keep tabs on your travel progress to a particular place and know precisely when you’re set to arrive.

The best part? It’s all incredibly simple to do. The trick is knowing where to look.

Share your real-time location

When you want someone to be able to track your location:

  • Open the Maps app on your iOS or Android device
  • Tap the blue dot, which represents your current location, then select “Share location” from the menu that appears. (If it’s your first time using Maps for such a purpose, your phone may prompt you to authorize the app to access your contacts before continuing.)
  • If you want to share your location for a specific amount of time, select the “1-hour” option, and then use the blue plus and minus buttons to increase or decrease the time as needed
  • If you want to share your location indefinitely — until you manually turn it off — select the “Until you turn this off” option
  • On Android, select the person with whom you want to share your location from the list of suggested contacts or select an app (like Gmail or Messages) to send a private link. You can also opt to copy the link to your system clipboard and then paste it wherever you like.
  • On an iPhone, tap “Select People” to choose a person from your contacts, select “Message” to send a private link to someone in your messaging app, or select “More” to send a private link via another communication service. Your phone may prompt you to give Maps ongoing access to your location before it moves forward.
  • If you share your location within Maps itself — by selecting a contact as opposed to sending a link via an external app — the person with whom you are sharing your location will get a notification on their phone. In addition, when you select “Location sharing” in Maps’ side menu, you will see an icon on top for both you and the person you’re sharing with. Select the person’s icon, and a bar at the bottom of the screen will let you stop sharing, share your location again, or request that the person share their location with you.

To manually stop Maps from sharing your location:

  • Open the Maps app, and look for the “Sharing your location” bar at the bottom of the screen
  • Tap the “x” next to the line that says how and for how long your location is being shared

Share your trip’s progress

When you want someone to be able to see your location and estimated arrival time while you’re en route to a particular destination:

  • Open the Maps app, and start navigating to your destination
  • Swipe up on the bar at the bottom of the screen (where your remaining travel time is shown), then select “Share trip progress” from the menu that appears
  • Select the name of the person with whom you want to share your progress or select an app you want to use for sharing

If you want to stop sharing your progress before your trip is complete:

  • Swipe up again on the bar at the bottom of the screen
  • Select “Stop sharing” from the menu that appears

Categorized in Search Engine

[Source: This article was Published in money.cnn.com By David Goldman - Uploaded by the Association Member: Patrick Moore]

Some things just shouldn't be connected to the Internet. With Shodan, a search engine that finds connected devices, it's easy to locate dangerous things that anyone can access without so much as a username or password.

Traffic light controls

This is why Caps Lock was invented.

When something that literally anyone in the world can access says "DEATH MAY OCCUR !!!" it's generally a good idea to build some kind of security around it.

Oops - no. For some reason, someone thought it would be a good idea to put traffic light controls on the Internet. Making matters way, way worse is that these controls require no login credentials whatsoever. Just type in the address, and you've got access.

You'd have to know where to go looking, but it's not rocket science. Security penetration tester Dan Tentler found the traffic light controls using Shodan, a search engine that navigates the Internet's back channels looking for the servers, webcams, printers, routers and all the other stuff that is connected to the Internet.
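For readers curious about the mechanics, Shodan exposes this kind of search through a public API. Here is a minimal sketch using the official shodan Python library; it assumes you have an API key, and the query string is purely illustrative rather than the one Tentler used:

  # Minimal Shodan search sketch (pip install shodan; requires an API key).
  import shodan

  api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

  # Illustrative query: internet-facing devices whose web page title mentions a controller.
  results = api.search('http.title:"traffic controller"')

  print("Results found:", results["total"])
  for match in results["matches"][:10]:
      # Each match describes one exposed host: IP address, port, and owning organization.
      print(match["ip_str"], match["port"], match.get("org", "n/a"))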

Traffic cameras

Hey, that's my car!

You know those cameras that snap photos of you speeding through a red light? Yeah, someone put an entire network of them on the Internet.

Made by a company called PIPS, a division of 3M (MMM), the "Autoplate" technology takes photos of cars going through intersections and loads their license plate numbers on a server. Those servers are intended to be accessed by police departments. They're definitely not supposed to be connected to the greater Internet without any log-in credentials.

That's what happened, though, and any Web lurker could check out who was zipping through the photo zones in the spot Tentler found. Added kicker: Autoplate actually records photos and registration information for every car that goes through the intersections it's watching -- not just speeders.

3M spokeswoman Jacqueline Berry noted that Autoplate's systems feature robust security protocols, including password protection and encryption. They just have to be used.

"We're very confident in the security of our systems," she said.

Tentler notified the FBI about the particular system he found.

A swimming pool acid pump

Are you sure you want to get in the pool?

Swimming pools have acid pumps to adjust the pH balance of the water. They're usually not connected to the Internet.

At least one of them is, though. So, exactly how powerful and toxic is this acid pump?

"Can we turn people into soup?" wondered Tentler.

Tentler said there was no distinguishing text in this app to tip him off to where the pool was located or whom it is owned by, so the owners haven't been contacted. Enter at your own risk!

A hydroelectric plant

Wait, does that say kilowatts? 

French electric companies apparently like to put their hydroelectric plants online. Tentler found three of them using Shodan.

This one has a big fat button that lets you shut off a turbine. But what's 58,700 kilowatts between friends, right?

It's not just France that has a problem. The U.S. Department of Homeland Security commissioned researchers last year to see if they could find industrial control systems for nuclear power plants using Shodan. They found several.

Tentler told DHS about all the power plants he found -- actually, DHS called him after he accessed one of their control systems.

Once the controls were brought up on a Web browser, anyone could put lights into "test" mode. Seriously, do not try that at home.

Tentler declined to say which city put its traffic controls on the Internet, but he notified the U.S. Department of Homeland Security about it.

A hotel wine cooler

How cold do you like your champagne, exactly?

Okay, fine, there's no danger in putting a hotel wine cooler online. It's pretty strange, though.

Tentler also found controls for a display case at a seafood store, which included a lobster tank.

This wine cooler is still online at a large hotel in New York. So if your bubbly is a little toasty, you'll know why.

A hospital heart rate monitor

Beep ... beep ... beep ...

U.S. hospitals have to abide by the Health Insurance Portability and Accountability Act. Here's a violation: One hospital put its heart rate monitors online for the whole world to see.

Although this was a read-only tool -- you couldn't defibrillate a patient over the Internet -- it's still a major, major breach of the privacy law.

Tentler said that another security researcher reported this hospital to DHS' Industrial Control Systems Cyber Emergency Response Team last year.

A home security app

Honey, did you leave the garage door open?

A new wave of home automation tools offers a great way to control everything from your door locks to your alarm system online. But it's a good idea for your security system to have some, you know, security built into it.

Not this system. Anyone can change this home's temperature, alarm settings, and, yes, open its garage door.

Tentler said he has no idea who built this app, because there was no distinguishing text or information associated with it.

A gondola ride

Hey, why are the doors opening?

A gondola ride over a ski resort is a fun way to enjoy the mountain view. But not if you stop in the middle of the ride and the doors open.

Anyone could do that with a click of a button, even if they were sitting thousands of miles away. That's because this French ski resort put the control systems for the gondola ride on the Internet.

Attempts to contact the company were unsuccessful.

A car wash

Actually, I would like that undercoating!

Seriously, there is a car wash on the Internet.

By clicking through the control options, anyone in the world can adjust the chemicals used in the wash and lock someone inside. Or you could be nice and give every customer the works.

Tentler said he has no idea who owns the car wash or where it is. But if you happen to pass through this one, your next wash is on him.

Categorized in Internet Privacy

[Source: This article was Published in moz.com  - Uploaded by the Association Member: Barbara larson]

As we mentioned in Chapter 1, search engines are answer machines. They exist to discover, understand, and organize the internet's content in order to offer the most relevant results to the questions searchers are asking.

In order to show up in search results, your content needs to first be visible to search engines. It's arguably the most important piece of the SEO puzzle: If your site can't be found, there's no way you'll ever show up in the SERPs (Search Engine Results Pages).

How do search engines work?

Search engines have three primary functions:

  1. Crawl: Scour the Internet for content, looking over the code/content for each URL they find.
  2. Index: Store and organize the content found during the crawling process. Once a page is in the index, it’s in the running to be displayed as a result to relevant queries.
  3. Rank: Provide the pieces of content that will best answer a searcher's query, which means that results are ordered by most relevant to least relevant.

What is search engine crawling?

Crawling is the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content. Content can vary — it could be a webpage, an image, a video, a PDF, etc. — but regardless of the format, content is discovered by links.

What's that word mean?

Having trouble with any of the definitions in this section? Our SEO glossary has chapter-specific definitions to help you stay up-to-speed.

Googlebot starts out by fetching a few web pages, and then follows the links on those webpages to find new URLs. By hopping along this path of links, the crawler is able to find new content and add it to Google's index, called Caffeine — a massive database of discovered URLs — to later be retrieved when a searcher is seeking information that the content on that URL is a good match for.
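To make that discovery loop concrete, here is a toy sketch of link-following in Python. It is a simplified illustration only (not Googlebot's actual code), and the seed URL and page limit are placeholders:

  # Toy crawl loop: fetch a page, pull out its links, queue any unseen URLs.
  from html.parser import HTMLParser
  from urllib.parse import urljoin
  from urllib.request import urlopen

  class LinkExtractor(HTMLParser):
      def __init__(self):
          super().__init__()
          self.links = []

      def handle_starttag(self, tag, attrs):
          if tag == "a":
              for name, value in attrs:
                  if name == "href" and value:
                      self.links.append(value)

  def crawl(seed_url, limit=20):
      queue, discovered = [seed_url], set()
      while queue and len(discovered) < limit:
          url = queue.pop(0)
          if url in discovered:
              continue
          discovered.add(url)
          try:
              html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
          except Exception:
              continue  # unreachable or non-HTML pages are simply skipped
          parser = LinkExtractor()
          parser.feed(html)
          # Resolve relative links against the current page and keep exploring.
          queue.extend(urljoin(url, link) for link in parser.links)
      return discovered

  print(crawl("https://www.example.com/"))  # placeholder seed URL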

What is a search engine index?

Search engines process and store information they find in an index, a huge database of all the content they’ve discovered and deem good enough to serve up to searchers.

Search engine ranking

When someone performs a search, search engines scour their index for highly relevant content and then order that content in the hopes of solving the searcher's query. This ordering of search results by relevance is known as ranking. In general, you can assume that the higher a website is ranked, the more relevant the search engine believes that site is to the query.

It’s possible to block search engine crawlers from part or all of your site, or instruct search engines to avoid storing certain pages in their index. While there can be reasons for doing this, if you want your content found by searchers, you have to first make sure it’s accessible to crawlers and is indexable. Otherwise, it’s as good as invisible.

By the end of this chapter, you’ll have the context you need to work with the search engine, rather than against it!

In SEO, not all search engines are equal

Many beginners wonder about the relative importance of particular search engines. Most people know that Google has the largest market share, but how important is it to optimize for Bing, Yahoo, and others? The truth is that despite the existence of more than 30 major web search engines, the SEO community really only pays attention to Google. Why? The short answer is that Google is where the vast majority of people search the web. If we include Google Images, Google Maps, and YouTube (a Google property), more than 90% of web searches happen on Google — that's nearly 20 times Bing and Yahoo combined.

Crawling: Can search engines find your pages?

As you've just learned, making sure your site gets crawled and indexed is a prerequisite to showing up in the SERPs. If you already have a website, it might be a good idea to start off by seeing how many of your pages are in the index. This will yield some great insights into whether Google is crawling and finding all the pages you want it to, and none that you don’t.

One way to check your indexed pages is "site:yourdomain.com", an advanced search operator. Head to Google and type "site:yourdomain.com" into the search bar. This will return results Google has in its index for the site specified:

A screenshot of a site:moz.com search in Google, showing the number of results below the search box.

The number of results Google displays (see “About XX results” above) isn't exact, but it does give you a solid idea of which pages are indexed on your site and how they are currently showing up in search results.

For more accurate results, monitor and use the Index Coverage report in Google Search Console. You can sign up for a free Google Search Console account if you don't currently have one. With this tool, you can submit sitemaps for your site and monitor how many submitted pages have actually been added to Google's index, among other things.

If you're not showing up anywhere in the search results, there are a few possible reasons why:

  • Your site is brand new and hasn't been crawled yet.
  • Your site isn't linked to from any external websites.
  • Your site's navigation makes it hard for a robot to crawl it effectively.
  • Your site contains some basic code called crawler directives that is blocking search engines.
  • Your site has been penalized by Google for spammy tactics.

Tell search engines how to crawl your site

If you used Google Search Console or the “site:domain.com” advanced search operator and found that some of your important pages are missing from the index and/or some of your unimportant pages have been mistakenly indexed, there are some optimizations you can implement to better direct Googlebot how you want your web content crawled. Telling search engines how to crawl your site can give you better control of what ends up in the index.

Most people think about making sure Google can find their important pages, but it’s easy to forget that there are likely pages you don’t want Googlebot to find. These might include things like old URLs that have thin content, duplicate URLs (such as sort-and-filter parameters for e-commerce), special promo code pages, staging or test pages, and so on.

To direct Googlebot away from certain pages and sections of your site, use robots.txt.

Robots.txt

Robots.txt files are located in the root directory of websites (ex. yourdomain.com/robots.txt) and suggest which parts of your site search engines should and shouldn't crawl, as well as the speed at which they crawl your site, via specific robots.txt directives.
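As a point of reference, a simple robots.txt might look like the following (the paths are hypothetical, and note that Googlebot ignores the Crawl-delay hint even though some other crawlers honor it):

  # Hypothetical robots.txt served from yourdomain.com/robots.txt
  User-agent: *
  Disallow: /staging/          # keep test pages out of the crawl
  Disallow: /promo-codes/      # special promo pages we don't want crawled
  Crawl-delay: 10              # crawl-rate hint honored by some crawlers, ignored by Googlebot

  User-agent: Googlebot
  Disallow: /search-results/   # internal search result pages

  Sitemap: https://yourdomain.com/sitemap.xml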

How Googlebot treats robots.txt files

  • If Googlebot can't find a robots.txt file for a site, it proceeds to crawl the site.
  • If Googlebot finds a robots.txt file for a site, it will usually abide by the suggestions and proceed to crawl the site.
  • If Googlebot encounters an error while trying to access a site’s robots.txt file and can't determine if one exists or not, it won't crawl the site.

Optimize for crawl budget!

Crawl budget is the average number of URLs Googlebot will crawl on your site before leaving, so crawl budget optimization ensures that Googlebot isn’t wasting time crawling through your unimportant pages at risk of ignoring your important pages. Crawl budget is most important on very large sites with tens of thousands of URLs, but it’s never a bad idea to block crawlers from accessing the content you definitely don’t care about. Just make sure not to block a crawler’s access to pages you’ve added other directives on, such as canonical or noindex tags. If Googlebot is blocked from a page, it won’t be able to see the instructions on that page.

Not all web robots follow robots.txt. People with bad intentions (e.g., e-mail address scrapers) build bots that don't follow this protocol. In fact, some bad actors use robots.txt files to find where you’ve located your private content. Although it might seem logical to block crawlers from private pages such as login and administration pages so that they don’t show up in the index, placing the location of those URLs in a publicly accessible robots.txt file also means that people with malicious intent can more easily find them. It’s better to NoIndex these pages and gate them behind a login form rather than place them in your robots.txt file.

You can read more details about this in the robots.txt portion of our Learning Center.

Defining URL parameters in GSC

Some sites (most common with e-commerce) make the same content available on multiple different URLs by appending certain parameters to URLs. If you’ve ever shopped online, you’ve likely narrowed down your search via filters. For example, you may search for “shoes” on Amazon, and then refine your search by size, color, and style. Each time you refine, the URL changes slightly:

https://www.example.com/products/women/dresses/green.htm

https://www.example.com/products/women?category=dresses&color=green

https://example.com/shopindex.php?product_id=32&highlight=green+dress&cat_id=1&sessionid=123&affid=43

How does Google know which version of the URL to serve to searchers? Google does a pretty good job at figuring out the representative URL on its own, but you can use the URL Parameters feature in Google Search Console to tell Google exactly how you want them to treat your pages. If you use this feature to tell Googlebot “crawl no URLs with ____ parameter,” then you’re essentially asking to hide this content from Googlebot, which could result in the removal of those pages from search results. That’s what you want if those parameters create duplicate pages, but not ideal if you want those pages to be indexed.

Can crawlers find all your important content?

Now that you know some tactics for ensuring search engine crawlers stay away from your unimportant content, let’s learn about the optimizations that can help Googlebot find your important pages.

Sometimes a search engine will be able to find parts of your site by crawling, but other pages or sections might be obscured for one reason or another. It's important to make sure that search engines are able to discover all the content you want indexed, and not just your homepage.

Ask yourself this: Can the bot crawl through your website, and not just to it?

A boarded-up door, representing a site that can be crawled to but not crawled through.

Is your content hidden behind login forms?

If you require users to log in, fill out forms, or answer surveys before accessing certain content, search engines won't see those protected pages. A crawler is definitely not going to log in.

Are you relying on search forms?

Robots cannot use search forms. Some individuals believe that if they place a search box on their site, search engines will be able to find everything that their visitors search for; they won't, because a crawler can't type queries into a search box.

Is text hidden within non-text content?

Non-text media forms (images, video, GIFs, etc.) should not be used to display text that you wish to be indexed. While search engines are getting better at recognizing images, there's no guarantee they will be able to read and understand them just yet. It's always best to add text within the markup of your webpage.

Can search engines follow your site navigation?

Just as a crawler needs to discover your site via links from other sites, it needs a path of links on your own site to guide it from page to page. If you’ve got a page you want search engines to find but it isn’t linked to from any other pages, it’s as good as invisible. Many sites make the critical mistake of structuring their navigation in ways that are inaccessible to search engines, hindering their ability to get listed in search results.

A depiction of how pages that are linked to can be found by crawlers, whereas a page not linked to in your site navigation exists as an island, undiscoverable.

Common navigation mistakes that can keep crawlers from seeing all of your site:

  • Having a mobile navigation that shows different results than your desktop navigation
  • Any type of navigation where the menu items are not in the HTML, such as JavaScript-enabled navigations. Google has gotten much better at crawling and understanding JavaScript, but it’s still not a perfect process. The more surefire way to ensure something gets found, understood, and indexed by Google is by putting it in the HTML (see the minimal example after this list).
  • Personalization, or showing unique navigation to a specific type of visitor versus others, could appear to be cloaking to a search engine crawler
  • Forgetting to link to a primary page on your website through your navigation — remember, links are the paths crawlers follow to new pages!

This is why it's essential that your website has clear navigation and helpful URL folder structures.
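As a minimal illustration of the point above about keeping menu items in the HTML, plain anchor links in the page markup are the most dependable form of navigation for a crawler (the paths below are placeholders):

  <nav>
    <a href="/products/">Products</a>
    <a href="/blog/">Blog</a>
    <a href="/contact/">Contact</a>
  </nav>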

Do you have clean information architecture?

Information architecture is the practice of organizing and labeling content on a website to improve efficiency and findability for users. The best information architecture is intuitive, meaning that users shouldn't have to think very hard to flow through your website or to find something.

Are you utilizing sitemaps?

A sitemap is just what it sounds like: a list of URLs on your site that crawlers can use to discover and index your content. One of the easiest ways to ensure Google is finding your highest priority pages is to create a file that meets Google's standards and submit it through Google Search Console. While submitting a sitemap doesn’t replace the need for good site navigation, it can certainly help crawlers follow a path to all of your important pages.
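For reference, an XML sitemap is simply a structured list of URLs; a minimal example (with placeholder URLs and dates) looks like this:

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://www.example.com/puppies/</loc>
      <lastmod>2019-06-04</lastmod>
    </url>
    <url>
      <loc>https://www.example.com/puppies/training/</loc>
    </url>
  </urlset>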

Ensure that you’ve only included URLs that you want indexed by search engines, and be sure to give crawlers consistent directions. For example, don’t include a URL in your sitemap if you’ve blocked that URL via robots.txt, and don’t include URLs in your sitemap that are duplicates rather than the preferred, canonical version (we’ll provide more information on canonicalization in Chapter 5!).

Learn more about XML sitemaps 
If your site doesn't have any other sites linking to it, you still might be able to get it indexed by submitting your XML sitemap in Google Search Console. There's no guarantee they'll include a submitted URL in their index, but it's worth a try!

Are crawlers getting errors when they try to access your URLs?

In the process of crawling the URLs on your site, a crawler may encounter errors. You can go to Google Search Console’s “Crawl Errors” report to detect URLs on which this might be happening - this report will show you server errors and not found errors. Server log files can also show you this, as well as a treasure trove of other information such as crawl frequency, but because accessing and dissecting server log files is a more advanced tactic, we won’t discuss it at length in the Beginner’s Guide, although you can learn more about it here.

Before you can do anything meaningful with the crawl error report, it’s important to understand server errors and "not found" errors.

4xx Codes: When search engine crawlers can’t access your content due to a client error

4xx errors are client errors, meaning the requested URL contains bad syntax or cannot be fulfilled. One of the most common 4xx errors is the “404 – not found” error. These might occur because of a URL typo, deleted page, or broken redirect, just to name a few examples. When search engines hit a 404, they can’t access the URL. When users hit a 404, they can get frustrated and leave.

5xx Codes: When search engine crawlers can’t access your content due to a server error

5xx errors are server errors, meaning the server the web page is located on failed to fulfill the searcher or search engine’s request to access the page. In Google Search Console’s “Crawl Error” report, there is a tab dedicated to these errors. These typically happen because the request for the URL timed out, so Googlebot abandoned the request. View Google’s documentation to learn more about fixing server connectivity issues. 

Thankfully, there is a way to tell both searchers and search engines that your page has moved — the 301 (permanent) redirect.

Create custom 404 pages!

Customize your 404 pages by adding in links to important pages on your site, a site search feature, and even contact information. This should make it less likely that visitors will bounce off your site when they hit a 404.

Say you move a page from example.com/young-dogs/ to example.com/puppies/. Search engines and users need a bridge to cross from the old URL to the new. That bridge is a 301 redirect.
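On an Apache server, for example, that bridge can be a single directive in the site's configuration or .htaccess file (a sketch using the example URLs above; other servers and CMSs have their own equivalents):

  # .htaccess sketch: permanently redirect the old URL to the new one
  Redirect 301 /young-dogs/ https://example.com/puppies/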

When you do implement a 301:

  • Link equity: Transfers link equity from the page’s old location to the new URL.
  • Indexing: Helps Google find and index the new version of the page.
  • User experience: Ensures users find the page they’re looking for.

When you don’t implement a 301:

  • Link equity: Without a 301, the authority from the previous URL is not passed on to the new version of the URL.
  • Indexing: The presence of 404 errors on your site alone doesn't harm search performance, but letting ranking / trafficked pages 404 can result in them falling out of the index, with rankings and traffic going with them — yikes!
  • User experience: Allowing your visitors to click on dead links will take them to error pages instead of the intended page, which can be frustrating.

The 301 status code itself means that the page has permanently moved to a new location, so avoid redirecting URLs to irrelevant pages — URLs where the old URL’s content doesn’t actually live. If a page is ranking for a query and you 301 it to a URL with different content, it might drop in rank position because the content that made it relevant to that particular query isn't there anymore. 301s are powerful — move URLs responsibly!

You also have the option of 302 redirecting a page, but this should be reserved for temporary moves and in cases where passing link equity isn’t as big of a concern. 302s are kind of like a road detour. You're temporarily siphoning traffic through a certain route, but it won't be like that forever.

Watch out for redirect chains!

It can be difficult for Googlebot to reach your page if it has to go through multiple redirects. Google calls these “redirect chains” and they recommend limiting them as much as possible. If you redirect example.com/1 to example.com/2, then later decide to redirect it to example.com/3, it’s best to eliminate the middleman and simply redirect example.com/1 to example.com/3.
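If you want to check a URL for chained redirects yourself, one option is to trace the hops with Python's requests library (a third-party package; the URL below is a placeholder):

  # Sketch: trace a redirect chain and report each hop along the way.
  import requests

  response = requests.get("https://example.com/1", allow_redirects=True, timeout=10)

  # response.history holds one entry per redirect that was followed.
  for hop in response.history:
      print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
  print("Final destination:", response.status_code, response.url)

  if len(response.history) > 1:
      print("Redirect chain detected: point the first URL straight at the final one.")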

Once you’ve ensured your site is optimized for crawlability, the next order of business is to make sure it can be indexed.

Indexing: How do search engines interpret and store your pages?

Once you’ve ensured your site has been crawled, the next order of business is to make sure it can be indexed. That’s right — just because your site can be discovered and crawled by a search engine doesn’t necessarily mean that it will be stored in their index. In the previous section on crawling, we discussed how search engines discover your web pages. The index is where your discovered pages are stored. After a crawler finds a page, the search engine renders it just like a browser would. In the process of doing so, the search engine analyzes that page's contents. All of that information is stored in its index.

A robot storing a book in a library.

Read on to learn about how indexing works and how you can make sure your site makes it into this all-important database.

Can I see how a Googlebot crawler sees my pages?

Yes, the cached version of your page will reflect a snapshot of the last time Googlebot crawled it.

Google crawls and caches web pages at different frequencies. More established, well-known sites that post frequently like https://www.nytimes.com will be crawled more frequently than the much-less-famous website for Roger the Mozbot’s side hustle, http://www.rogerlovescupcakes.com (if only it were real…)

You can view what your cached version of a page looks like by clicking the drop-down arrow next to the URL in the SERP and choosing "Cached":

A screenshot of where to see cached results in the SERPs.

You can also view the text-only version of your site to determine if your important content is being crawled and cached effectively.

Are pages ever removed from the index?

Yes, pages can be removed from the index! Some of the main reasons why a URL might be removed include:

  • The URL is returning a "not found" error (4XX) or server error (5XX) – This could be accidental (the page was moved and a 301 redirect was not set up) or intentional (the page was deleted and 404ed in order to get it removed from the index)
  • The URL had a noindex meta tag added – This tag can be added by site owners to instruct the search engine to omit the page from its index.
  • The URL has been manually penalized for violating the search engine’s Webmaster Guidelines and, as a result, was removed from the index.
  • The URL has been blocked from crawling with the addition of a password required before visitors can access the page.

If you believe that a page on your website that was previously in Google’s index is no longer showing up, you can use the URL Inspection tool to learn the status of the page, or use Fetch as Google which has a "Request Indexing" feature to submit individual URLs to the index. (Bonus: GSC’s “fetch” tool also has a “render” option that allows you to see if there are any issues with how Google is interpreting your page).

Tell search engines how to index your site

Robots meta directives

Meta directives (or "meta tags") are instructions you can give to search engines regarding how you want your web page to be treated.

You can tell search engine crawlers things like "do not index this page in search results" or "don’t pass any link equity to any on-page links". These instructions are executed via Robots Meta Tags in the <head> of your HTML pages (most commonly used) or via the X-Robots-Tag in the HTTP header.

Robots meta tag

The robots meta tag can be used within the <head> of the HTML of your webpage. It can exclude all or specific search engines. The following are the most common meta directives, along with what situations you might apply them in.

index/noindex tells the engines whether the page should be crawled and kept in a search engine's index for retrieval. If you opt to use "noindex," you’re communicating to crawlers that you want the page excluded from search results. By default, search engines assume they can index all pages, so using the "index" value is unnecessary.

  • When you might use: You might opt to mark a page as "noindex" if you’re trying to trim thin pages from Google’s index of your site (ex: user generated profile pages) but you still want them accessible to visitors.

follow/nofollow tells search engines whether links on the page should be followed or nofollowed. “Follow” results in bots following the links on your page and passing link equity through to those URLs. Or, if you elect to employ "nofollow," the search engines will not follow or pass any link equity through to the links on the page. By default, all pages are assumed to have the "follow" attribute.

  • When you might use: nofollow is often used together with noindex when you’re trying to prevent a page from being indexed as well as prevent the crawler from following links on the page.

noarchive is used to restrict search engines from saving a cached copy of the page. By default, the engines will maintain visible copies of all pages they have indexed, accessible to searchers through the cached link in the search results.

  • When you might use: If you run an e-commerce site and your prices change regularly, you might consider the noarchive tag to prevent searchers from seeing outdated pricing.

Here’s an example of a meta robots noindex, nofollow tag:

<meta name="robots" content="noindex, nofollow" />

This example excludes all search engines from indexing the page and from following any on-page links. If you want to exclude multiple crawlers, like googlebot and bing for example, it’s okay to use multiple robot exclusion tags.

Meta directives affect indexing, not crawling

Googlebot needs to crawl your page in order to see its meta directives, so if you’re trying to prevent crawlers from accessing certain pages, meta directives are not the way to do it. Robots tags must be crawled to be respected.

X-Robots-Tag

The x-robots tag is used within the HTTP header of your URL, providing more flexibility and functionality than meta tags if you want to block search engines at scale because you can use regular expressions, block non-HTML files, and apply sitewide noindex tags.

For example, you could easily exclude entire folders or file types (like moz.com/no-bake/old-recipes-to-noindex):

 <Files ~ "\/?no\-bake\/.*">
   Header set X-Robots-Tag "noindex, nofollow"
 </Files>

The directives used in a robots meta tag can also be used in an X-Robots-Tag.

Or specific file types (like PDFs):

 <Files ~ "\.pdf$">
   Header set X-Robots-Tag "noindex, nofollow"
 </Files>

For more information on Meta Robot Tags, explore Google’s Robots Meta Tag Specifications.

WordPress tip:

In Dashboard > Settings > Reading, make sure the "Search Engine Visibility" box is not checked. When that box is checked, it blocks search engines from coming to your site via your robots.txt file!

Understanding the different ways you can influence crawling and indexing will help you avoid the common pitfalls that can prevent your important pages from getting found.

Ranking: How do search engines rank URLs?

How do search engines ensure that when someone types a query into the search bar, they get relevant results in return? That process is known as ranking, or the ordering of search results by most relevant to least relevant to a particular query.

An artistic interpretation of ranking, with three dogs sitting pretty on first, second, and third-place pedestals.

To determine relevance, search engines use algorithms, a process or formula by which stored information is retrieved and ordered in meaningful ways. These algorithms have gone through many changes over the years in order to improve the quality of search results. Google, for example, makes algorithm adjustments every day — some of these updates are minor quality tweaks, whereas others are core/broad algorithm updates deployed to tackle a specific issue, like Penguin to tackle link spam. Check out our Google Algorithm Change History for a list of both confirmed and unconfirmed Google updates going back to the year 2000.

Why does the algorithm change so often? Is Google just trying to keep us on our toes? While Google doesn’t always reveal specifics as to why they do what they do, we do know that Google’s aim when making algorithm adjustments is to improve overall search quality. That’s why, in response to algorithm update questions, Google will answer with something along the lines of: "We’re making quality updates all the time." This means that, if your site suffered after an algorithm adjustment, you should compare it against Google’s Quality Guidelines or Search Quality Rater Guidelines; both are very telling in terms of what search engines want.

What do search engines want?

Search engines have always wanted the same thing: to provide useful answers to searchers' questions in the most helpful formats. If that’s true, then why does it appear that SEO is different now than in years past?

Think about it in terms of someone learning a new language.

At first, their understanding of the language is very rudimentary — “See Spot Run.” Over time, their understanding starts to deepen, and they learn semantics — the meaning behind language and the relationship between words and phrases. Eventually, with enough practice, the student knows the language well enough to even understand nuance, and is able to provide answers to even vague or incomplete questions.

When search engines were just beginning to learn our language, it was much easier to game the system by using tricks and tactics that actually go against quality guidelines. Take keyword stuffing, for example. If you wanted to rank for a particular keyword like “funny jokes,” you might add the words “funny jokes” a bunch of times onto your page, and make it bold, in hopes of boosting your ranking for that term:

Welcome to funny jokes! We tell the funniest jokes in the world. Funny jokes are fun and crazy. Your funny joke awaits. Sit back and read funny jokes because funny jokes can make you happy and funnier. Some funny favorite funny jokes.

This tactic made for terrible user experiences, and instead of laughing at funny jokes, people were bombarded by annoying, hard-to-read text. It may have worked in the past, but this is never what search engines wanted.

The role links play in SEO

When we talk about links, we could mean two things. Backlinks or "inbound links" are links from other websites that point to your website, while internal links are links on your own site that point to your other pages (on the same site).

A depiction of how inbound links and internal links work.

Links have historically played a big role in SEO. Very early on, search engines needed help figuring out which URLs were more trustworthy than others to help them determine how to rank search results. Calculating the number of links pointing to any given site helped them do this.

Backlinks work very similarly to real-life WoM (Word-of-Mouth) referrals. Let’s take a hypothetical coffee shop, Jenny’s Coffee, as an example:

  • Referrals from others = good sign of authority
    • Example: Many different people have all told you that Jenny’s Coffee is the best in town
  • Referrals from yourself = biased, so not a good sign of authority
    • Example: Jenny claims that Jenny’s Coffee is the best in town
  • Referrals from irrelevant or low-quality sources = not a good sign of authority and could even get you flagged for spam
    • Example: Jenny paid to have people who have never visited her coffee shop tell others how good it is.
  • No referrals = unclear authority
    • Example: Jenny’s Coffee might be good, but you’ve been unable to find anyone who has an opinion so you can’t be sure.

This is why PageRank was created. PageRank (part of Google's core algorithm) is a link analysis algorithm named after one of Google's founders, Larry Page. PageRank estimates the importance of a web page by measuring the quality and quantity of links pointing to it. The assumption is that the more relevant, important, and trustworthy a web page is, the more links it will have earned.

The more natural backlinks you have from high-authority (trusted) websites, the better your odds are to rank higher within search results.
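To make the intuition concrete, here is a toy, from-scratch sketch of the link-analysis idea in Python. It is a simplified power iteration on a made-up three-page graph, emphatically not Google's actual PageRank implementation:

  # Toy link-analysis sketch: pages earn score from the pages that link to them.
  def toy_pagerank(links, damping=0.85, iterations=50):
      """links maps each page to the list of pages it links out to."""
      pages = list(links)
      rank = {page: 1.0 / len(pages) for page in pages}
      for _ in range(iterations):
          new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
          for page, outlinks in links.items():
              if not outlinks:
                  continue
              share = rank[page] / len(outlinks)
              for target in outlinks:
                  new_rank[target] += damping * share
          rank = new_rank
      return rank

  # Hypothetical graph: pages "a" and "c" both link to "b", so "b" scores highest.
  print(toy_pagerank({"a": ["b"], "b": ["c"], "c": ["b"]}))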

The role content plays in SEO

There would be no point to links if they didn’t direct searchers to something. That something is content! Content is more than just words; it’s anything meant to be consumed by searchers — there’s video content, image content, and of course, text. If search engines are answer machines, content is the means by which the engines deliver those answers.

Any time someone performs a search, there are thousands of possible results, so how do search engines decide which pages the searcher is going to find valuable? A big part of determining where your page will rank for a given query is how well the content on your page matches the query’s intent. In other words, does this page match the words that were searched and help fulfill the task the searcher was trying to accomplish?

Because of this focus on user satisfaction and task accomplishment, there are no strict benchmarks on how long your content should be, how many times it should contain a keyword, or what you put in your header tags. All those can play a role in how well a page performs in search, but the focus should be on the users who will be reading the content.

Today, with hundreds or even thousands of ranking signals, the top three have stayed fairly consistent: links to your website (which serve as third-party credibility signals), on-page content (quality content that fulfills a searcher’s intent), and RankBrain.

What is RankBrain?

RankBrain is the machine learning component of Google’s core algorithm. Machine learning is a computer program that continues to improve its predictions over time through new observations and training data. In other words, it’s always learning, and because it’s always learning, search results should be constantly improving.

For example, if RankBrain notices a lower ranking URL providing a better result to users than the higher ranking URLs, you can bet that RankBrain will adjust those results, moving the more relevant result higher and demoting the less relevant pages as a byproduct.

An image showing how results can change and are volatile enough to show different rankings even hours later.

Like most things with the search engine, we don’t know exactly what comprises RankBrain, but apparently, neither do the folks at Google.

What does this mean for SEOs?

Because Google will continue leveraging RankBrain to promote the most relevant, helpful content, we need to focus on fulfilling searcher intent more than ever before. Provide the best possible information and experience for searchers who might land on your page, and you’ve taken a big first step to performing well in a RankBrain world.

Engagement metrics: correlation, causation, or both?

With Google rankings, engagement metrics are most likely part correlation and part causation.

When we say engagement metrics, we mean data that represents how searchers interact with your site from search results. This includes things like:

  • Clicks (visits from search)
  • Time on page (amount of time the visitor spent on a page before leaving it)
  • Bounce rate (the percentage of all website sessions where users viewed only one page)
  • Pogo-sticking (clicking on an organic result and then quickly returning to the SERP to choose another result)

Many tests, including Moz’s own ranking factor survey, have indicated that engagement metrics correlate with higher ranking, but causation has been hotly debated. Are good engagement metrics just indicative of highly ranked sites? Or are sites ranked highly because they possess good engagement metrics?

What Google has said

While they’ve never used the term “direct ranking signal,” Google has been clear that they absolutely use click data to modify the SERP for particular queries.

According to Google’s former Chief of Search Quality, Udi Manber:

“The ranking itself is affected by the click data. If we discover that, for a particular query, 80% of people click on #2 and only 10% click on #1, after a while we figure out probably #2 is the one people want, so we’ll switch it.”

Another comment from former Google engineer Edmond Lau corroborates this:

“It’s pretty clear that any reasonable search engine would use click data on their own results to feed back into ranking to improve the quality of search results. The actual mechanics of how click data is used is often proprietary, but Google makes it obvious that it uses click data with its patents on systems like rank-adjusted content items.”

Because Google needs to maintain and improve search quality, it seems inevitable that engagement metrics are more than correlation, but it would appear that Google falls short of calling engagement metrics a “ranking signal” because those metrics are used to improve search quality, and the rank of individual URLs is just a byproduct of that.

What tests have confirmed

Various tests have confirmed that Google will adjust SERP order in response to searcher engagement:

  • Rand Fishkin’s 2014 test resulted in a #7 result moving up to the #1 spot after getting around 200 people to click on the URL from the SERP. Interestingly, ranking improvement seemed to be isolated to the location of the people who visited the link. The rank position spiked in the US, where many participants were located, whereas it remained lower on the page in Google Canada, Google Australia, etc.
  • Larry Kim’s comparison of top pages and their average dwell time pre- and post-RankBrain seemed to indicate that the machine-learning component of Google’s algorithm demotes the rank position of pages that people don’t spend as much time on.
  • Darren Shaw’s testing has shown user behavior’s impact on local search and map pack results as well.

Since user engagement metrics are clearly used to adjust the SERPs for quality, and rank position changes as a byproduct, it’s safe to say that SEOs should optimize for engagement. Engagement doesn’t change the objective quality of your web page, but rather your value to searchers relative to other results for that query. That’s why, after no changes to your page or its backlinks, it could decline in rankings if searchers’ behavior indicates they like other pages better.

In terms of ranking web pages, engagement metrics act like a fact-checker. Objective factors such as links and content first rank the page, then engagement metrics help Google adjust if they didn’t get it right.

The evolution of search results

Back when search engines lacked a lot of the sophistication they have today, the term “10 blue links” was coined to describe the flat structure of the SERP. Any time a search was performed, Google would return a page with 10 organic results, each in the same format.

A screenshot of what a 10-blue-links SERP looks like.

In this search landscape, holding the #1 spot was the holy grail of SEO. But then something happened. Google began adding results in new formats on its search result pages, called SERP features. Some of these SERP features include:

  • Paid advertisements
  • Featured snippets
  • People Also Ask boxes
  • Local (map) pack
  • Knowledge panel
  • Sitelinks

And Google is adding new ones all the time. They even experimented with “zero-result SERPs,” a phenomenon where only one result from the Knowledge Graph was displayed on the SERP with no results below it except for an option to “view more results.”

The addition of these features caused some initial panic for two main reasons. For one, many of these features caused organic results to be pushed down further on the SERP. Another byproduct is that fewer searchers are clicking on the organic results since more queries are being answered on the SERP itself.

So why would Google do this? It all goes back to the search experience. User behavior indicates that some queries are better satisfied by different content formats. Notice how the different types of SERP features match the different types of query intents.

Query intent → Possible SERP feature triggered:

  • Informational → Featured snippet
  • Informational with one answer → Knowledge Graph / instant answer
  • Local → Map pack
  • Transactional → Shopping

We’ll talk more about intent in Chapter 3, but for now, it’s important to know that answers can be delivered to searchers in a wide array of formats, and how you structure your content can impact the format in which it appears in search.

Localized search

A search engine like Google has its own proprietary index of local business listings, from which it creates local search results.

If you are performing local SEO work for a business that has a physical location customers can visit (ex: dentist) or for a business that travels to visit their customers (ex: plumber), make sure that you claim, verify, and optimize a free Google My Business Listing.

When it comes to localized search results, Google uses three main factors to determine the ranking:

  1. Relevance
  2. Distance
  3. Prominence

Relevance

Relevance is how well a local business matches what the searcher is looking for. To ensure that the business is doing everything it can to be relevant to searchers, make sure the business’ information is thoroughly and accurately filled out.

Distance

Google uses your geo-location to better serve your local results. Local search results are extremely sensitive to proximity, which refers to the location of the searcher and/or the location specified in the query (if the searcher included one).

Organic search results are sensitive to a searcher's location, though seldom as pronounced as in local pack results.

Prominence

With prominence as a factor, Google is looking to reward businesses that are well-known in the real world. In addition to a business’ offline prominence, Google also looks to some online factors to determine the local ranking, such as:

Reviews

The number of Google reviews a local business receives, and the sentiment of those reviews, have a notable impact on their ability to rank in local results.

Citations

A "business citation" or "business listing" is a web-based reference to a local business' "NAP" (name, address, phone number) on a localized platform (Yelp, Acxiom, YP, Infogroup, Localeze, etc.).

Local rankings are influenced by the number and consistency of local business citations. Google pulls data from a wide variety of sources as it continuously builds its local business index. When Google finds multiple consistent references to a business's name, location, and phone number, it strengthens Google's "trust" in the validity of that data. This then leads to Google being able to show the business with a higher degree of confidence. Google also uses information from other sources on the web, such as links and articles.

Organic ranking

SEO best practices also apply to local SEO, since Google also considers a website’s position in organic search results when determining local ranking.

In the next chapter, you’ll learn on-page best practices that will help Google and users better understand your content.

[Bonus!] Local engagement

Although not listed by Google as a local ranking factor, the role of engagement is only going to increase as time goes on. Google continues to enrich local results by incorporating real-world data like popular times to visit and average length of visits, and even provides searchers with the ability to ask the business questions!

Curious about a certain local business' citation accuracy? Moz has a free tool that can help out, aptly named Check Listing.

A screenshot of the Questions & Answers result in local search.

Undoubtedly, now more than ever before, local results are being influenced by real-world data: how searchers actually interact with and respond to local businesses, rather than purely static (and game-able) information like links and citations.

Since Google wants to deliver the best, most relevant local businesses to searchers, it makes perfect sense for them to use real-time engagement metrics to determine quality and relevance.

You don’t have to know the ins and outs of Google's algorithm (that remains a mystery!), but by now you should have a great baseline knowledge of how the search engine finds, interprets, stores, and ranks content. Armed with that knowledge, let's learn about choosing the keywords your content will target in Chapter 3 (Keyword Research)!

 

Categorized in Search Engine

[This article is originally published in searchengineland.com - Uploaded by AIRS Member: Rene Meyer]

By multiple measures, Google is the internet’s most popular search engine. But Google’s not only a web search engine. Images, videos, mobile content — even Google TV!

Major Google Services

Google releases a dizzying array of new products and product updates on a regular basis, and Search Engine Land keeps you up-to-date with all the news. Here are just a few of our popular Google categories, where you can read past coverage:

Google: Our “everything” category, this lists all stories we’ve written about Google, regardless of subtopic.

Google Web Search: Our stories about Google’s web search engine, including changes and new features. Also see: Google: OneBox, Plus Box & Direct Answers, Google: Universal Search and Google: User Interface.

Google SEO: Articles from us about getting listed for free via SEO in Google’s search engine. Also see the related category of Google Webmaster Central.

Google AdWords: Our coverage of Google’s paid search advertising program.

Google AdSense: Stories about Google’s ad program for publishers, which allows content owners to carry Google ads and earn money.

Google Maps & Local: Coverage of Google Maps, which allows you to locate places and businesses, get directions and much more. Also see Google Earth for coverage of Google’s mapping application.

Google Street View: Articles about Google’s popular yet controversial Street View system that uses cars to take photos of homes and businesses, which are then made available through Google Maps.

Google YouTube & Video: Articles about Google’s YouTube service, which allows anyone to upload video content. YouTube also has so much search traffic that it stands out as a major search engine of its own.

Google Logos: Google loves to have special logos for holidays and to commemorate special events. We track some of the special “Google Doodles,” as the company calls them. Also see our retrospective story, Those Special Google Logos, Sliced & Diced, Over The Years.

Also see our special guide for searchers, How To Use Google To Search.

Google Resources

Further below is a full list of additional Google topics that we track. But first, here are a few sites that track Google in-depth.

First up is Google’s own Official Google Blog. Google also has many other blogs for individual products, which are listed on the official blog. This feed keeps you up-to-date on any official blog post, from any of Google’s blogs. Google also had a traditional press release area.

Beyond official Googledom are a number of news sites that track Google particularly in-depth. These include: Dirson (in Spanish), eWeek’s Google Watch, Google Blogoscoped, Google Operating System, John Battelle, Search Engine Land, Search Engine Roundtable, WebProNews and ZDNet Googling Google.

The Full Google List

We said Google is more than just a web search engine, right? Below is the full list of various Google search and search-related products that we track. Click any link to see our stories in that particular area:

  • Google (stories from all categories below, combined)
  • Google: Accounts & Profiles
  • Google: Acquisitions
  • Google: Ad Planner
  • Google: AdSense
  • Google: AdWords
  • Google: Alerts
  • Google: Analytics
  • Google: APIs
  • Google: Apps For Your Domain
  • Google: Audio Ads
  • Google: Base
  • Google: Blog Search
  • Google: Blogger
  • Google: Book Search
  • Google: Browsers
  • Google: Business Issues
  • Google: Buzz
  • Google: Calendar
  • Google: Checkout
  • Google: Chrome
  • Google: Code Search
  • Google: Content Central
  • Google: Critics
  • Google: Custom Search Engine
  • Google: Dashboard
  • Google: Definitions
  • Google: Desktop
  • Google: Discussions
  • Google: Docs & Spreadsheets
  • Google: Domains
  • Google: DoubleClick
  • Google: Earth
  • Google: Editions
  • Google: Employees
  • Google: Enterprise Search
  • Google: FeedBurner
  • Google: Feeds
  • Google: Finance
  • Google: Gadgets
  • Google: Gears
  • Google: General
  • Google: Gmail
  • Google: Groups
  • Google: Health
  • Google: iGoogle
  • Google: Images
  • Google: Internet Access
  • Google: Jet
  • Google: Knol
  • Google: Labs
  • Google: Legal
  • Google: Logos
  • Google: Maps & Local
  • Google: Marketing
  • Google: Mobile
  • Google: Moderator
  • Google: Music
  • Google: News
  • Google: Offices
  • Google: OneBox, Plus Box & Direct Answers
  • Google: OpenSocial
  • Google: Orkut
  • Google: Other
  • Google: Other Ads
  • Google: Outside US
  • Google: Parodies
  • Google: Partnerships
  • Google: Patents
  • Google: Personalized Search
  • Google: Picasa
  • Google: Place Pages
  • Google: Print Ads & AdSense For Newspapers
  • Google: Product Search
  • Google: Q & A
  • Google: Reader
  • Google: Real Time Search
  • Google: Search Customization
  • Google: SearchWiki
  • Google: Security
  • Google: SEO
  • Google: Sidewiki
  • Google: Sitelinks
  • Google: Social Search
  • Google: SpyView
  • Google: Squared
  • Google: Street View
  • Google: Suggest
  • Google: Toolbar
  • Google: Transit
  • Google: Translate
  • Google: Trends
  • Google: TV
  • Google: Universal Search
  • Google: User Interface
  • Google: Voice Search
  • Google: Web History & Search History
  • Google: Web Search
  • Google: Webmaster Central
  • Google: Website Optimizer
  • Google: YouTube & Video


[This article is originally published in searchengineland.com written by Adam Dorfman - Uploaded by AIRS Member: Issac Avila]

Sure, Google is still bigger, but contributor Adam Dorfman notes that Bing has been introducing significant innovations. Here's why the underdog search engine is worth another look.

When Microsoft announced strong annual financial results July 19, the growth of the company’s cloud services dominated the conversation. But I noticed something else in the company’s numbers: continued growth for Bing. Although Bing accounts for a small share of Microsoft’s revenues, the search platform grew 17 percent year over year.

As TechRadar reported,

As more people used Bing, the search revenue (excluding traffic acquisition costs) also grew, so it looks like things are moving in the right direction.

Bing remains a distant second to Google in terms of market share, but the marketplace needs Bing to grow. A prosperous Bing gives businesses an alternative to Google and another viable platform to grow their visibility.


Bing’s product improvements are good for brands and good for Google because healthy competition keeps everyone on their toes. Bing’s improvements also help business owners and search marketers in their optimization efforts. Let’s take a look at a number of Bing’s improvements and how we can use them to promote our businesses.

Basic Bing search

On a fundamental level, Bing has enriched basic search to encourage discovery beyond top-level search results. For example, if you use your smartphone to search for “movies” on both Bing and Google, both will show you what’s playing where you live. But Bing also displays tabs for movies on Netflix and Amazon, thus demonstrating an awareness of how we discover movies beyond the theater.

A search for the musician Drake on both engines prominently displays news results and video content in the search results, but Google has more visible social links encouraging further exploration. These differences are subtle, but they matter given how search has become more of a process of deep discovery, especially as we use our voices to do more complex searches.

Along these same lines, Bing recently enhanced search with the rollout of a search entity API, which produces a richer contextual search result. As Bing announced in March:

Bing Entity Search API brings rich contextual information about people, places, things, and local businesses to any application, blog, or website for a more engaging user experience. With Bing Entity Search, you can identify the most relevant entity results based on the search term and provide users with primary details about those entities. With our latest coverage improvements, we now support multiple international markets and many more entity types such as famous people, places, movies, TV shows, video games, and books. With this new API, developers can enhance their apps by exposing rich data from the Bing Knowledge Graph directly in their apps.
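For developers who want to experiment with the entity results described above, here is a minimal sketch of calling the Bing Entity Search API. The endpoint, header, and parameter names follow Microsoft's v7 documentation from around the time of the announcement, and the subscription key is a placeholder; check the current Azure documentation before relying on this.

```python
import requests

SUBSCRIPTION_KEY = "YOUR_AZURE_KEY"  # hypothetical placeholder
ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/entities"


def entity_search(query: str, market: str = "en-US") -> dict:
    """Query the Bing Entity Search API and return the parsed JSON response."""
    response = requests.get(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        params={"q": query, "mkt": market},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    data = entity_search("Space Needle")
    # Entity results, when present, are returned under entities -> value.
    for entity in data.get("entities", {}).get("value", []):
        print(entity.get("name"), "-", entity.get("description", "")[:80])
```

The same response payload is what a site or app would render as the "rich contextual information" the announcement describes.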

A more robust knowledge graph means that businesses need to place more emphasis on the content and data they publish on their own pages, starting with Bing Places for Business. If you’ve been treating Bing Places for Business as an optional alternative to Google My Business, it’s time to start using it as another way to promote your brand.

Visual content

Bing has always been known for being a visually appealing search engine, including its basic layout and home page. From the start, Bing understood that we live in a visual age, with people uploading billions of images and video online every day.

Bing has continuously built upon its embrace of a more visually appealing search aesthetic. For example, Bing presents video search results via appealing thumbnail panels that are easy to explore:


By contrast, video results for Google look more utilitarian and less visually appealing:

 


Bing also recently announced the launch of visual search, which lets people use images to easily navigate the search engine and find content. With the Bing app on your smartphone, you can either take a photo or upload one, and then quickly perform visual searches.

Bing visual search was widely perceived as an answer to Google Lens. But Google’s own visual search capability is limited (iOS users lack access to it), whereas Bing made visual search widely available for Android and iOS.

Bing visual search is important because it’s yet another sign that businesses need to be visually savvy with their own content. Google has been placing more emphasis on the power of strong visuals in its knowledge graphs as a way of making a business more findable and useful to searchers. Visual search has a multitude of applications, an obvious one being for retailers, especially as people don’t always know how to describe a product they’re trying to find, making the use of a photo easier and faster for discovery.

What brands should do

Bing’s enhancement of more complex and visual search alone is a reason brands need to treat Bing as a powerful part of their search toolkit. Although it’s more work to maintain a presence on multiple platforms, the reward is greater, too.

One easy way to better understand Bing is to experience the platform regularly, as people do. If it’s not your default engine, make time to get yourself comfortable using Bing to navigate. Download the Bing app on your mobile device and compare the features to Google’s app. The more you explore through the eyes of your customer, the more likely you’ll find additional ways to be found on both Google and Bing.


[This article is originally published in wired.com Written By BRIAN BARRETT - Uploaded By AIRS Member: Carol R. Venuti]

WHAT FINALLY BROKE me was the recipes.

On July 1, I abandoned Google search and committed myself instead to Bing. I downloaded the Bing app on my phone. I made it the default search mode in Chrome. (I didn't switch to Edge, Microsoft's browser, because I decided to limit this experiment strictly to search.) Since then, for the most part, any time I've asked the internet a question, Bing has answered.

A stunt? Sure, a little. But also an earnest attempt to figure out how the other half—or the other 6 percent overall, or 24 percent on desktop, or 33 percent in the US, depending on whose numbers you believe—finds their information online.

And Bing is big! The second-largest search engine by market share in the US, and one of the 50 most visited sites on the internet, according to Alexa rankings. (That’s the Amazon-owned analytics site, not the Amazon-made voice assistant.) I wanted to know how those people experienced the web, how much of a difference it makes when a different set of algorithms decides what knowledge you should see. The internet is a window on the world; a search engine warps and tints it.

There’s also never been a better time to give Bing an honest appraisal. If Google’s data-hoovering didn’t creep you out before, its attitude toward location tracking and Google+ privacy failings should. And while privacy-focused search options like DuckDuckGo go further to solve that problem, Bing is the most full-featured alternative out there. It’s the logical first stop on the express train out of Googletown.

A minor spoiler: This isn’t an excuse to dunk on Bing. It’s also not an extended “Actually, Bing Is Good” counterpoint. It’s just one person’s attempt to figure out what Bing is today, and why.

Bing Bang Boom

Let’s start with the Bing app, technically Microsoft Bing Search. This almost certainly isn’t how most people experience Microsoft’s search engine, but the app does have over 5 million downloads in the Google Play Store alone. People use it. Besides, what better way to evaluate Bing than drinking it up in its most distilled form?

 

Bing offers a maximalist counterpoint to the austerity of Google, whose search box sits unadorned, interrupted only for the occasional doodle reminder of a 19th-century physicist’s birthday. When you open the Bing app, the act of searching is almost incidental. A high-resolution, usually scenic photograph sweeps the display, with three icons—a camera, a magnifying glass, and a microphone—suggesting but not insisting on the different types of search you might enjoy. Below that, options: Videos, Near Me, News, Restaurants. (Side-scroll a bit.) Movies, Music, Fun, Images, Gas.

These are the categories Bing considers worthy of one-tap access in 2018. And honestly, why not? I like videos. I like fun.


[This article is originally published in popsci.com written by David Nield - Uploaded by AIRS Member: Nevena Gojkovic Turunz]

Is your search engine of choice pulling its weight? It's perhaps a choice you've stopped thinking about, settling for whatever default option appears in your browser or on your phone—but as with most tech choices, you've got options.

Google has come to dominate search to the extent that it's become a verb in itself, but here we're going to check how Google stacks up in 2019 against two of its biggest rivals: Microsoft's Bing and the privacy-focused search site DuckDuckGo.

 

Search results


Google results for "Abraham Lincoln."

We don't know what you're searching for, and without running thousands of searches across several months we can't really present you with a comprehensive comparison of how well these search engines scour the web. What we can do is tell you how these services performed on a few sample searches.

First we tried "Abraham Lincoln": All three search engines returned the Wikipedia page first, the History Channel site second, and Britannica third. DuckDuckGo listed Abraham Lincoln news above the search results, even though the 16th President of the United States hasn't really been in the news lately.


Bing results for "Abraham Lincoln."

As we wrote this article a few days after the 2019 Super Bowl, we tried "Super Bowl score" next, and all three search engines produced the right result in a box out above the search results. DuckDuckGo followed this with the official NFL site then some sports news sites, while Bing had a sports news site first and the NFL second. Google listed the score, then Super Bowl news, then some relevant tweets, and then other results.

Next we tried a question, specifically "how many days until Christmas?", to see how our search engines fared. Only Google presented the right answer front and center as part of its own interface, with DuckDuckGo and Bing returning links to Christmas countdown sites instead (though Bing did put "Wednesday December 25" right at the top).


DuckDuckGo results for "Abraham Lincoln."

For something a little more obscure we tried "Empire of the Sun" (both a 1987 Steven Spielberg movie and a music duo). Google returned the Wikipedia sites for the film then the band at the top, Bing returned the Wikipedia page for the movie then the band's official site, and DuckDuckGo returned the IMDB page for the Empire of the Sun film then the band's official site.

These are slight differences, really, and the "best" engine depends on your personal preference (do you want to see Twitter results, or not?). All three sites are clearly very competent with basic searches, but Google has the edge when it comes to finding content besides web pages, as well as answering questions directly (no doubt thanks to all that Google Assistant technology behind the scenes).

Search features


Google really will flip a coin for you.

Speaking of Google Assistant, one of the advantages of Google is of course the way it ties into all the other Google apps and services: You can search for places on Google Maps, or bring up images in Google Photos, or query your Google Calendar, right from the Google homepage (as long as you're signed in). Try Googling "my trips" for example to see bookings stored in your Gmail account.

All three of these search engines feature filters for images, videos, news, and products; Bing and Google include a Maps option as well. You can dig in further on all three sites as well—filtering images by size or by color, for example. Google and Bing let you save searches to come back to later, whereas DuckDuckGo doesn't (see the separate section on privacy below).


Bing has a comprehensive image search feature.

Beyond basic searches, Google and DuckDuckGo do very well on extras: unlike Bing, they can toss a coin, roll a die, or start a timer right there on the results screen, with no extra clicking required. Meanwhile, both Google and Bing can display details of a flight in a pop-up box outside the search results, whereas DuckDuckGo directs you to flight-tracking websites instead.

All three of our search engines can limit results to pages that have been published recently, but Google and Bing have a "custom date" search option (say 1980-1990, for example) that isn't available on DuckDuckGo. Google and Bing let you search by region too, whereas DuckDuckGo doesn't.


DuckDuckGo can start a timer right in your web browser.

Appearance may not be number one in your list of priorities, but Bing presents its search box on top of an appealing full-screen wallpaper image, with links to news stories and other interesting articles underneath. It's more appealing visually than Google or DuckDuckGo, though Google has its doodles and DuckDuckGo has a few different color schemes to pick from.

As you can see, Google can do just about everything—it has been in the search engine game for a long time, after all. Bing and DuckDuckGo are able to match Google on some features, but not all, which makes Google hard to switch away from unless you have a specific reason to... and that brings us neatly on to the issue of user privacy.

User privacy


 

Google knows a lot about you—and can serve up results from other Google apps, like Google Photos.

This is the big feature that DuckDuckGo sells itself on: As we've noted above, it doesn't log what you're searching for, and only puts up occasional advertising, which isn't personalized and can be disabled. If you're tired of the big tech firms hoovering up data on you, DuckDuckGo will appeal.

What's more, the sites you visit don't know the search terms you used to find them—something they can otherwise figure out by piecing together different clues from your browsing behavior and the data that your computer broadcasts publicly. DuckDuckGo also connects to the encrypted versions of sites by default.
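One of those "clues" is the HTTP referrer: when a search results page links out without a restrictive referrer policy, the landing page can read the referring URL and pull the query out of it. Here is a rough, hypothetical sketch of that extraction; the URL and the q parameter name are illustrative, and major search engines now strip or truncate this information themselves.

```python
from typing import Optional
from urllib.parse import parse_qs, urlparse


def search_terms_from_referrer(referrer: str) -> Optional[str]:
    """Pull a 'q' query parameter out of a referring search-results URL, if present."""
    params = parse_qs(urlparse(referrer).query)
    terms = params.get("q")
    return terms[0] if terms else None


# Hypothetical referrer a landing page might have received from a permissive
# search results page; DuckDuckGo avoids passing the query along like this.
print(search_terms_from_referrer("https://search.example.test/results?q=cheap+flights"))
# -> cheap flights
```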


Bing, like Google, keeps a record of searches you've run.

Cookies aren't saved by DuckDuckGo either, those little files that sit locally on your computer and tell websites when you've visited before. Data like your IP address (your router's address on the web) and the browser you're using gets wiped by default too. You're effectively searching anonymously.

There's no doubt that both Google and Microsoft promise to protect your privacy and use the data they have on you responsibly—you can read their respective privacy policies here and here. However, it's also true that they collect much more data on you and what you're doing, so it's up to you whether you trust Google and Microsoft to use it wisely.



[This article is originally published in searchenginejournal.com written by Roger Montti - Uploaded by AIRS Member: Anthony Frank]

Ahrefs CEO Dmitry Gerasimenko announced a plan to create a search engine that supports content creators and protects users' privacy. Dmitry laid out his proposal for a freer and more open web, one that rewards content creators directly from search revenue with a 90/10 split in favor of publishers.

The Goal for the New Search Engine

Dmitry seeks to correct several trends at Google that he feels are bad for users and publishers. The two problems he seeks to solve are privacy and the monetization crisis felt by publishers big and small.

1. Believes Google is Hoarding Site Visitors

Dmitry tweeted that Google is increasingly keeping site visitors to itself, resulting in less traffic to the content creators.

“Google is showing scraped content on search results page more and more so that you don’t even need to visit a website in many cases, which reduces content authors’ opportunity to monetize.”

2. Seeks to Pry the Web from Privatized Access and Control

Gatekeepers to web content (such as Google and Facebook) exercise control over what kinds of content are allowed to reach people. The gatekeepers shape how content is produced and monetized. He seeks to wrest the monetization incentive away from the gatekeepers and put it back into the hands of publishers, to encourage more innovation and better content.

“Naturally such a vast resource, especially free, attracts countless efforts to tap into it, privatize and control access, each player pulling away their part, tearing holes in the tender fabric of this unique phenomena.”

3. Believes Google’s Model is Unfair

Dmitry noted that Google’s business model is unfair to content creators. By sharing search revenue, sites like Wikipedia wouldn’t have to go begging for money.

He then described how his search engine would benefit content publishers and users:

“Remember that banner on Wikipedia asking for donation every year? Wikipedia would probably get few billions from its content in profit share model. And could pay people who polish articles a decent salary.”

4. States that a Search Engine Should Encourage Publishers and Innovation

Dmitry stated that a search engine's job of imposing structure on the chaos of the web should encourage the growth of quality content, much as a support holds a vine up, allowing it to catch more sunlight and grow.

“…structure wielded upon chaos should not be rigid and containing as a glass box around a venomous serpent, but rather supporting and spreading as a scaffolding for the vine, allowing it to flourish and grow new exciting fruits for humanity to grok and cherish.

For chaos needs structure to not get torn apart by its own internal forces, and structure needs chaos as a sampling pool of ideas to keep evolution rolling.”

Reaction to Announcement

The reaction on Twitter was positive.

Russ Jones of Moz tweeted:

[Embedded tweet from Russ Jones of Moz]

Several industry leaders generously offered their opinions.

Jon Henshaw

Jon Henshaw (@henshaw) is a Senior SEO Analyst at CBSi (CBS, GameSpot, and Metacritic) and founder of Coywolf.marketing, a digital marketing resource. He offered this assessment:

“I appreciate the sentiment and reasons for why Dmitry wants to build a search engine that competes with Google. A potential flaw in the entire plan has to do with searchers themselves.

Giving 90% of profit to content creators does not motivate the other 99% of searchers that are just looking for relevant answers quickly. Even if you were to offer incentives to the average searcher, it wouldn’t work. Bing and other search engines have tried that over the past several years, and they have all failed.

The only thing that will compete with Google is a search engine that provides better results than Google. I would not bet my money on Ahrefs being able to do what nobody else in the industry has been able to do thus far.”

Ryan Jones

Ryan Jones (@RyanJones) is a search marketer who also publishes WTFSEO.com. He said:

“This sounds like an engine focused on websites not users. So why would users use it?

There is a massive incentive to spam here, and it will be tough to control when the focus is on the spammer not the user.

It’s great for publishers, but without a user-centric focus or better user experience than Google, the philanthropy won’t be enough to get people to switch.”

Tony Wright

Tony Wright (@tonynwright) of search marketing agency WrightIMC shared a similar concern about getting users on board. An enthusiastic user base is what makes any online venture succeed.

“It’s an interesting idea, especially in light of the passage of Article 13 in the EU yesterday.

However, I think that without proper capitalization, it’s most likely to be a failed effort. This isn’t the early 2000’s.

The results will have to be as good or better than Google to gain traction, and even then, getting enough traction to make it economically feasible will be a giant hurdle.

I like the idea of compensating publishers, but I think policing the scammers on a platform like this will most likely be the biggest cost – even bigger than infrastructure.

It’s certainly an ambitious play, and I’ll be rooting for it. But based on just the tweets, it seems like it may be a bit too ambitious without significant capitalization.”

Announcement Gives Voice to Complaints About Google

The announcement echoes complaints by publishers who feel they are struggling. The news industry has been in crisis mode for over a decade trying to find a way to monetize digital content consumption. AdSense publishers have been complaining for years of dwindling earnings.

Estimates say that Google earns $16.5 million per hour from search advertising. When publishers ask how to improve earnings and traffic, Google’s encouragement to “be awesome” has increasingly acquired a tone of “Let them eat cake.”

A perception has set in that the entire online search ecosystem is struggling except for Google.

The desire for a new search engine has been around for several years. This is why DuckDuckGo has been received so favorably by the search marketing community. This announcement gives voice to long-simmering complaints about Google.

The reaction on Twitter was almost cathartic and generally enthusiastic because of the longstanding perception that Google is not adequately supporting the content creators upon which Google earns billions.

Will this New Search Engine Happen?

Whether this search engine lifts off remains to be seen. The announcement, however, does give voice to many complaints about Google.

No release date has been announced. The scale of this project is huge. It’s almost the online equivalent of going to the moon.

 

[This article is originally published in thestar.com.my - Uploaded by AIRS Member: Anthony Frank]


Most people don't really think much further than Google when it comes to search engines. After all, there's a reason that everyone uses the verb "to Google" when they want to look something up.

But is Google the best search engine, objectively speaking? In terms of search results, the answer is yes, and also in terms of user satisfaction.

But in terms of data protection, Google has some serious problems.

The best search engine when it comes to privacy, with results on par with Google, is Startpage, a Dutch company that also scores well when its apps are considered. Its data protection practices are much better, and the engine has many of Google's best features as well.

Startpage allows users to choose language and region manually just like Google. The results are essentially the same because the company uses the same technology as Google – the only difference being that it doesn't use trackers to store your data.

During test runs, Startpage was also able to deal with typos and vague search terms just as well as the US tech giant. – dpa
