
Now that the Google January 2020 core update is mostly rolled out, we have asked several data providers to send us what they found with this Google search update. All of the data providers agree that this core update was a big one and impacted a large number of web sites.

The facts. What we know from Google, as we previously reported, is that the January 2020 core update started to roll out around 12:00 PM ET on Monday, January 13th. That rollout was “mostly done” by Thursday morning, on January 16th. We also know that this was a global update, and was not specific to any region, language or category of web sites. It is a classic “broad core update.”

What the tools are seeing. We have gone to third-party data companies asking them what their data shows about this update.

RankRanger. Mordy Oberstein from RankRanger said, “the YMYL (your money, your life) niches got hit very hard.” “This is a huge update,” he added. “There is massive movement at the top of the SERP for the Health and Finance niches and incredible increases for all niches when looking at the top 10 results overall.”

Here is a chart showing the rank volatility broken down by industry and the position of those rankings:

[Chart: rank volatility across all niches, January 2020 core update]

“Excluding the Retail niche, which according to what I am seeing was perhaps a focus of the December 6th update, the January 2020 core update was a far larger update across the board and at every ranking position,” Mordy Oberstein added. “However, when looking at the top 10 results overall during the core update, the Retail niche started to separate itself from the levels of volatility seen in December as well.”

SEMrush. Yulia Ibragimova from SEMrush said, “We can see that the latest Google Update was quite big and was noticed almost in every category.” The most volatile categories according to SEMrush, outside of Sports and News, were Online communities, Games, Arts & Entertainments, and Finance. But Yulia Ibragimova added that all categories saw major changes and “we can assume that this update wasn’t aimed at any particular topics,” she told us.

SEMrush offers a lot of this data on its website, and it also sent us additional data about this update.

Here is the volatility by category by mobile vs desktop search results:

[Chart: SEMrush volatility by category, mobile vs. desktop]

The top ten winners according to SEMrush were Dictionary.com, Hadith of the Day, Discogs, ABSFairings, X-Rates, TechCrunch, ShutterStock, 247Patience, GettyImages and LiveScores.com. The top ten losers were mp3-youtube.download, TotalJerkFace.com, GenVideos.io, Tuffy, TripSavvy, Honolulu.gov, NaughtyFind, Local.com, RuthChris and Local-First.org.

Sistrix. Johannes Beus from Sistrix posted their analysis of this core update. He said “Domains that relate to YMYL (Your Money, Your Life) topics have been re-evaluated by the search algorithm and gain or lose visibility as a whole. Domains that have previously been affected by such updates are more likely to be affected again. The absolute fluctuations appear to be decreasing with each update – Google is now becoming more certain of its assessment and does not deviate as much from the previous assessment.”

Here is the Sistrix chart showing the change:

[Chart: Sistrix SEO visibility for onhealth.com]

According to Sistrix, the big winners were goal.com, onhealth.com, CarGurus, verywellhealth.com, Fandango, Times of Israel, Royal.uk, and Westfield. The big losers were CarMagazine.co.uk, Box Office Mojo, SkySports, ArnoldClark.com, CarBuyer.co.uk, History Extra, Evans Halshaw, and NHS Inform.

SearchMetrics. Marcus Tober, the founder of SearchMetrics, told us, “the January core update seems to revert some changes, for the better or worse depending on who you are. It’s another core update where thin content got penalized and where Google put an emphasis on YMYL. The update doesn’t seem to affect as many pages as the March or September updates in 2019, but it has similar characteristics.”

Here are some specific examples SearchMetrics shared. First, Onhealth.com won in the March 2019 core update, lost in the September 2019 update, and won again in a big way in the January 2020 core update:

[Chart: onhealth.com visibility across core updates]

While Verywellhealth.com was a loser during multiple core updates:

[Chart: verywellhealth.com visibility across core updates]

Draxe.com, which has been up and down during core updates, seems to be a big winner this time at +83%, though in previous core updates it got hit hard:

[Chart: draxe.com visibility across core updates]

The big winners according to SearchMetrics were esty.com, cargurus.com, verywellhealth.com, overstock.com, addictinggames.com, onhealth.com, bigfishgames.com and health.com. The big losers were tmz.com, academy.com, kbhgames.com, orbitz.com, silvergames.com, autolist.com, etonline.com, trovit.com and pampers.com.

What to do if you are hit. Google has in the past given advice on what to consider if you are negatively impacted by a core update. There aren’t specific actions to take to recover, and in fact, a negative rankings impact may not signal anything is wrong with your pages. However, Google has offered a list of questions to consider if your site is hit by a core update.

Why we care. It is often hard to isolate what you need to do to reverse an algorithmic hit your site may have seen, and with Google core updates it is even harder. What this data, along with previous experience and advice, shows is that these core updates are broad and cover a lot of overall quality issues. The data above reinforces this. So if your site was hit by a core update, it is often recommended to step back, take a wider view of your overall website, and see what you can do to improve the site as a whole.

[Source: This article was published in searchengineland.com By Barry Schwartz - Uploaded by the Association Member: Edna Thomas]

Categorized in Search Engine

Michael struggles to find the search results he’s looking for, and would like some tips for better Googling

 Want to search like a pro? These tips will help you up your Googling game, using the advanced tools to narrow down your results. Photograph: Alastair Pike/AFP via Getty Images
Last week’s column mentioned search skills. I’m sometimes on the third page of results before I get to what I was really looking for. I’m sure a few simple tips would find these results on page 1. All advice welcome. Michael

Google achieved its amazing popularity by de-skilling search. Suddenly, people who were not very good at searching – which is almost everyone – could get good results without entering long, complex searches. Partly this was because Google knew which pages were most important, based on its PageRank algorithm, and it knew which pages were most effective, because users quickly bounced back from websites that didn’t deliver what they wanted.

Later, Google added personalisation based on factors such as your location, your previous searches, your visits to other websites, and other things it knew about you. This created a backlash from people with privacy concerns, because your searches into physical and mental health issues, legal and social problems, relationships and so on can reveal more about you than you want anyone else – or even a machine – to know.

When talking about avoiding “the creepy line”, former Google boss Eric Schmidt said: “We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”

Google hasn’t got to that point, yet, but it does want to save you from typing. Today, Google does this through a combination of auto-complete search suggestions, Answer Boxes, and “People also ask” boxes, which show related questions along with their “featured snippets”. As a result, Google is much less likely to achieve its stated aim of sending you to another website. According to Jumpshot research, about half of browser-based searches no longer result in a click, and about 6% go to Google-owned properties such as YouTube and Maps.

You could get upset about Google scraping websites such as Wikipedia for information and then keeping their traffic, but this is the way the world is going. Typing queries into a browser is becoming redundant as more people use voice recognition on smartphones or ask the virtual assistant on their smart speakers. Voice queries need direct answers, not pages of links.

So, I can give you some search tips, but they may not be as useful as they were when I wrote about them in January 2004 – or perhaps not for as long.

Advanced Search for everyone
 Google’s advanced search page is the tool to properly drill down into the results. Photograph: Samuel Gibbs/The Guardian

The easiest way to create advanced search queries in Google is to use the form on the Advanced Search page, though I suspect very few people do. You can type different words, phrases or numbers that you want to include or exclude into the various boxes. When you run the search, it converts your input into a single string using search shortcuts such as quotation marks (to find an exact word or phrase) and minus signs (to exclude words).

You can also use the form to narrow your search to a particular language, region, website or domain, or to a type of file, how recently it was published and so on. Of course, nobody wants to fill in forms. However, using the forms will teach you most of the commands mentioned below, and it’s a fallback if you forget any.

Happily, many commands work on other search engines too, so skills are transferable.

Use quotation marks
 Quotation marks can be a powerful tool to specify exact search terms. Photograph: IKEA

If you are looking for something specific, quotation marks are invaluable. Putting quotation marks around single words tells the search engine that you definitely want them to appear on every page it finds, rather than using close matches or synonyms. Google will, of course, ignore this, but at least the results page will tell you which word it has ignored. You can click on that word to insist, but you will get fewer or perhaps no results.

Putting a whole phrase in inverted commas has the same effect, and is useful for finding quotations, people’s names, book and film titles, or particular phrases.

You can also use an asterisk as a wildcard to find matching phrases. For example, The Simpsons episode, Deep Space Homer, popularised the phrase: “I for one welcome our new insect overlords”. Searching for “I for one welcome our new * overlords” finds other overlords such as aliens, cephalopods, computers, robots and squirrels.

Nowadays, Google’s RankBrain is pretty good at recognising titles and common phrases without quote marks, even if they include “stop words” such as a, at, that, the and this. You don’t need quotation marks to search for the Force, The Who or The Smiths.

However, it also uses synonyms rather than strictly following your keywords. It can be quicker to use minus signs to exclude words you don’t want than add terms that are already implied. One example is jaguar -car.

Use site commands

 Using the ‘site:’ command can be a powerful tool for quickly searching a particular website. Photograph: Samuel Gibbs/The Guardian

Google also has a site: command that lets you limit your search to a particular website or, with a minus sign (-site:), exclude it. This command uses the site’s uniform resource locator or URL.

For example, if you wanted to find something on the Guardian’s website, you would type site:theguardian.com (no space after the colon) alongside your search words.

You may not need to search the whole site. For example, site:theguardian.com/technology/askjack will search the Ask Jack posts that are online, though it doesn’t search all the ancient texts (continued on p94).

There are several similar commands. For example, inurl: will search for or exclude words that appear in URLs. This is handy because many sites now pack their URLs with keywords as part of their SEO (search-engine optimisation). You can also search for intitle: to find words in titles.

Web pages can include incidental references to all sorts of things, including plugs for unrelated stories. All of these will duly turn up in text searches. But if your search word is part of the URL or the title, it should be one of the page’s main topics.

You can also use site: and inurl: commands to limit searches to include, or exclude, whole groups of websites. For example, either site:co.uk or inurl:co.uk will search matching UK websites, though many UK sites now have .com addresses. Similarly, site:ac.uk and inurl:ac.uk will find pages from British educational institutions, while inurl:edu and site:edu will find American ones. Using inurl:ac.uk OR inurl:edu (the Boolean command must be in caps) will find pages from both. Using site:gov.uk will find British government websites, and inurl:https will search secure websites. There are lots of options for inventive searchers.

Google Search can also find different types of file, using either filetype: or ext: (for file extension). These include office documents (docx, pptx, xlsx, rtf, odt, odp etc) and pdf files. Results depend heavily on the topic. For example, a search for picasso filetype:pdf is more productive than one for stormzy.
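These operators compose mechanically, so they are easy to script. Here is a minimal Python sketch (the helper and its parameter names are my own, not from the article) that builds the kind of query string the Advanced Search form generates:

```python
def advanced_query(terms="", exact="", exclude=(), site="", filetype=""):
    """Build a Google query string using the advanced-search operators:
    quotation marks for exact phrases, minus signs for exclusions,
    site: to limit to a domain, filetype: to limit to a file type."""
    parts = []
    if terms:
        parts.append(terms)
    if exact:
        parts.append(f'"{exact}"')
    parts += [f"-{word}" for word in exclude]
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    return " ".join(parts)

print(advanced_query(terms="jaguar", exclude=["car"]))
# jaguar -car
print(advanced_query(exact="deep space homer", site="theguardian.com"))
# "deep space homer" site:theguardian.com
```

Pasting either output into the search box gives the same result as filling in the form.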

Make it a date

 Narrowing your search by date can find older pieces. Photograph: Samuel Gibbs/The Guardian

We often want up-to-date results, particularly in technology where things that used to be true are not true any more. After you have run a search, you can use Google’s time settings to filter the results, or use new search terms. To do this, click Tools, click the down arrow next to “Any time”, and use the dropdown menu to pick a time period between “Past hour” and “Past year”.

Last week, I was complaining that Google’s “freshness algorithm” could serve up lots of blog-spam, burying far more useful hits. Depending on the topic, you can use a custom time range to get less fresh but perhaps more useful results.

Custom time settings are even more useful for finding contemporary coverage of events, which might be a company’s public launch, a sporting event, or something else. Human memories are good at rewriting history, but contemporaneous reports can provide a more accurate picture.

However, custom date ranges have disappeared from mobile, the daterange: command no longer seems to work in search boxes, and “sort by date” has gone except in news searches. Instead, this year, Google introduced before: and after: commands to do the same job. For example, you could search for “Apple iPod” before:2002-05-31 after:2001-10-15 for a bit of nostalgia. The date formats are very forgiving, so one day we may all prefer it.
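If you script your searches, the before: and after: operators are easy to generate from real dates. A small illustrative Python helper (my own naming, not from the article):

```python
from datetime import date

def date_bounded(query, start, end):
    """Append Google's after:/before: operators, in YYYY-MM-DD form,
    to restrict a query to a custom date range."""
    return f"{query} after:{start.isoformat()} before:{end.isoformat()}"

q = date_bounded('"Apple iPod"', date(2001, 10, 15), date(2002, 5, 31))
print(q)  # "Apple iPod" after:2001-10-15 before:2002-05-31
```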

 [Source: This article was published in theguardian.com - Uploaded by the Association Member: Carol R. Venuti] 

Categorized in Search Engine

Ever had to search for something on Google, but you’re not exactly sure what it is, so you just use some language that vaguely implies it? Google’s about to make that a whole lot easier.

Google announced today it’s rolling out a new machine learning-based language understanding technique called Bidirectional Encoder Representations from Transformers, or BERT. BERT helps decipher your search queries based on the context of the language used, rather than individual words. According to Google, “when it comes to ranking results, BERT will help Search better understand one in 10 searches in the U.S. in English.”

Most of us know that Google usually responds to words, rather than to phrases — and Google’s aware of it, too. In the announcement, Pandu Nayak, Google’s VP of search, called this kind of searching “keyword-ese,” or “typing strings of words that they think we’ll understand, but aren’t actually how they’d naturally ask a question.” It’s amusing to see these kinds of searches — heck, Wired has made a whole cottage industry out of celebrities reacting to these keyword-ese queries in their “Autocomplete” video series — but Nayak’s correct that this is not how most of us would naturally ask a question.

As you might expect, this subtle change might make some pretty big waves for potential searchers. Nayak said this “[represents] the biggest leap forward in the past five years, and one of the biggest leaps forward in the history of Search.” Google offered several examples of this in action, such as “Do estheticians stand a lot at work,” which apparently returned far more accurate search results.

I’m not sure if this is something most of us will notice — heck, I probably wouldn’t have noticed if I hadn’t read Google’s announcement, but it’ll sure make our lives a bit easier. The only reason I can see it not having a huge impact at first is that we’re now so used to keyword-ese, which is in some cases more economical to type. For example, I can search “What movie did William Powell and Jean Harlow star in together?” and get the correct result (Libeled Lady; not sure if that’s BERT’s doing or not), but I can also search “William Powell Jean Harlow movie” and get the exact same result.

BERT will only be applied to English-based searches in the US, but Google is apparently hoping to roll this out to more countries soon.

[Source: This article was published in thenextweb.com By RACHEL KASER - Uploaded by the Association Member: Dorothy Allen]

Categorized in Search Engine

The new language model can think in both directions, fingers crossed

Google has updated its search algorithms to tap into an AI language model that is better at understanding netizens' queries than previous systems.

Pandu Nayak, a Google fellow and vice president of search, announced this month that the Chocolate Factory has rolled out BERT, short for Bidirectional Encoder Representations from Transformers, for its most fundamental product: Google Search.

To pull all of this off, researchers at Google AI built a neural network known as a transformer. The architecture is suited to dealing with sequences in data, making it ideal for handling language. To understand a sentence, you must look at all the words in it in a specific order. Unlike previous transformer models that only consider words in one direction – left to right – BERT is able to look both ways to consider the overall context of a sentence.

“BERT models can, therefore, consider the full context of a word by looking at the words that come before and after it—particularly useful for understanding the intent behind search queries,” Nayak said.
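The difference Nayak describes can be illustrated with a toy attention calculation. This is only a sketch of the masking idea, not Google's implementation: under a left-to-right (causal) mask, the token “to” can put no weight on the later token “usa”, while unmasked, bidirectional attention can see it.

```python
import math

def attention_weights(scores, causal):
    """Row-wise softmax over attention scores; optionally apply a
    left-to-right (causal) mask so position i only sees positions <= i."""
    out = []
    for i, row in enumerate(scores):
        masked = [s if (not causal or j <= i) else float("-inf")
                  for j, s in enumerate(row)]
        m = max(masked)
        exps = [math.exp(s - m) for s in masked]
        total = sum(exps)
        out.append([e / total for e in exps])
    return out

tokens = ["brazil", "traveler", "to", "usa"]
uniform = [[0.0] * 4 for _ in range(4)]  # uniform scores, for illustration

causal = attention_weights(uniform, causal=True)
full = attention_weights(uniform, causal=False)

print(causal[2])  # "to" (index 2) puts zero weight on the later token "usa"
print(full[2])    # bidirectionally, "to" also attends to "usa"
```

In the causal rows the weight on later tokens is exactly zero, which is why a left-to-right model cannot use “usa” to disambiguate the direction of travel.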

For example, here is what the previous Google Search and the new BERT-powered search look like for the query: “2019 brazil traveler to usa need a visa.”


Left: The result returned for the old Google Search that incorrectly understands the query as a US traveler heading to Brazil. Right: The result returned for the new Google Search using BERT, which correctly identifies the search is for a Brazilian traveler going to the US. Image credit: Google.

BERT has a better grasp of the significance behind the word "to" in the new search. The old model returns results that show information for US citizens traveling to Brazil, instead of the other way around. It looks like BERT is a bit patchy, however, as a Google Search today still appears to give results as if it's American travelers looking to go to Brazil:


Current search result for the query: 2019 brazil traveler to USA need a visa. It still thinks the sentence means a US traveler going to Brazil

The Register asked Google about this, and a spokesperson told us... the screenshots were just a demo. Your mileage may vary.

"In terms of not seeing those exact examples, the side-by-sides we showed were from our evaluation process, and might not 100 percent mirror what you see live in Search," the PR team told us. "These were side-by-side examples from our evaluation process where we identified particular types of language understanding challenges where BERT was able to figure out the query better - they were largely illustrative.

"Search is dynamic, content on the web changes. So it's not necessarily going to have a predictable set of results for any query at any point in time. The web is constantly changing and we make a lot of updates to our algorithms throughout the year as well."

Nayak claimed BERT would improve 10 percent of all its searches. The biggest changes will be for longer queries, apparently, where sentences are peppered with prepositions like “for” or “to.”

“BERT will help Search better understand one in 10 searches in the US in English, and we’ll bring this to more languages and locales over time,” he said.

Google will run BERT on its custom Cloud TPU chips; it declined to disclose how many would be needed to power the model. The most powerful Cloud TPU option currently is the Cloud TPU v3 Pods, which contain 64 ASICs, each carrying performance of 420 teraflops and 128GB of high-bandwidth memory.

At the moment, BERT will work best for queries made in English. Google said it also works in two dozen countries for other languages, too, such as Korean, Hindi, and Portuguese for “featured snippets” of text. ®

[Source: This article was published in theregister.co.uk By Katyanna Quach - Uploaded by the Association Member: Anthony Frank]

Categorized in Search Engine

Google is the search engine that most of us know and use, so much so that the word Google has become synonymous with search. As of September 2019, the search engine giant had captured 92.96% of the market share. That’s why it has become critically important for businesses to rank better in Google search results if they want to be noticed. That’s where SERP or “Search Engine Results Page” scraping can come in handy. Whenever a user searches for something on Google, they get a SERP result which consists of paid Google Ads results, featured snippets, organic results, videos, product listings, and the like. Tracking these SERP results using a service like Serpstack is necessary for businesses that either want to rank their own products or help other businesses do the same.

Manually tracking SERP results is next to impossible, as they vary highly depending on the search query, the origin of the query, and a plethora of other factors. Also, the number of listings in a single search query is so high that manual tracking makes no sense at all. Serpstack, on the other hand, is an automated Google Search Results API that can scrape real-time, accurate SERP data and present it in an easy-to-consume format. In this article, we take a brief look at Serpstack to see what it brings to the table and how it can help you track SERP data for the keywords and queries that matter to your business.

Serpstack REST API for SERP Data: What Does It Bring?

Serpstack’s JSON REST API for SERP data is fast and reliable, and always gives you real-time, accurate search results data. The service is trusted by some of the largest brands in the world. The best part about Serpstack, apart from its reliable data, is that it can scrape Google search results at scale. Whether you need one thousand or one million results, Serpstack can handle it with ease. Serpstack also brings built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs, so you as a user don’t have to worry about anything.


If you decide to give Serpstack REST API a chance, here are the main features that you can expect from this service:

  • Serpstack is scalable and queueless thanks to its powerful cloud infrastructure which can withstand high volume API requests without the need of a queue.
  • The search queries are highly customizable. You can tailor your queries based on a series of options including location, language, device, and more, so you get the data that you need.
  • Built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs.
  • It offers simple integration. You can start scraping SERP pages at scale within minutes of logging into the service.
  • Serpstack features bank-grade 256-bit SSL encryption for all its data streams. That means your data is always protected.
  • An easy-to-use REST API responding in JSON or CSV, compatible with any programming language.
  • With Serpstack, you are getting super-fast scraping speeds. All the API requests sent to Serpstack are processed in a matter of milliseconds.
  • Clear Serpstack API documentation which shows you exactly how you can use this service. It makes the service beginner-friendly and you can get started even if you have never used a SERP scraping service before.
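As a rough illustration of how simple the integration is, here is a minimal Python sketch that composes a request URL. The access_key, query and location parameter names follow Serpstack’s public documentation as I understand it; the key shown is a placeholder, and you should check the current docs before relying on this:

```python
from urllib.parse import urlencode

BASE_URL = "http://api.serpstack.com/search"

def build_search_url(access_key, query, **options):
    """Compose a Serpstack request URL; extra options (e.g. location,
    device) are passed straight through as query-string parameters."""
    params = {"access_key": access_key, "query": query, **options}
    return BASE_URL + "?" + urlencode(params)

url = build_search_url("YOUR_ACCESS_KEY", "coffee shops", location="London")
print(url)
# Fetching this URL with a real key returns JSON containing the parsed
# SERP data (organic results, ads, and so on) per Serpstack's schema.
```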

Looking at the features list above, I hope you can understand why Serpstack is one of the best, if not the best, SERP scraping services on the market. I am especially impressed by its scalability, speed, and built-in privacy and security protocols. However, there’s one more thing we haven’t discussed yet that pushes it to the top spot for me: its pricing. That’s what we’ll cover in the next section.

Pricing and Availability

Serpstack’s pricing is what makes it accessible to individuals and to small and large businesses alike. It offers a capable free version which should serve the needs of most individuals and even smaller businesses. If you are operating a larger business that requires more, there are various pricing plans to choose from depending on your requirements. As for the free plan, the best part is that it’s free forever and there are no hidden charges. The free version gets you 100 searches/month with access to global locations, proxy networks, and all the main features. The only big missing feature is HTTPS encryption.


Once you are ready to pay, you can start with the Basic plan, which costs $29.99/month ($23.99/month if billed annually). In this plan, you get 5,000 searches/month along with the features missing from the free plan. This plan should be enough for most small to medium-sized businesses. If you require more, there’s a Business plan at $99.99/month ($79.99/month if billed annually) which gets you 20,000 searches, and a Business Pro plan at $199.99/month ($159.99/month if billed annually) which gets you 50,000 searches per month. There’s also a custom pricing solution for companies that require a tailored pricing structure.

Serpstack Makes Google Search Results Scraping Accessible

SERP scraping is important if you want to compete in today’s world. To see which queries are fetching which results is an important step in determining the list of your competitors. Once you know them, you can devise an action plan to compete with them. Without SERP data, your business will have a big disadvantage in the online world. So, use Serpstack to scrape SERP data so you can build a successful online business.

[Source: This article was published in beebom.com By Partner Content - Uploaded by the Association Member: Dorothy Allen]

Categorized in Search Engine

Search-engine giant says one in 10 queries (and some advertisements) will see improved results from algorithm change

MOUNTAIN VIEW, Calif.—Google rarely talks about its secretive search algorithm. This week, the tech giant took a stab at transparency, unveiling changes that it says will surface more accurate and intelligent responses to hundreds of millions of queries each day.

Top Google executives, in a media briefing Thursday, said they had harnessed advanced machine learning and mathematical modeling to produce better answers for complex search entries that often confound its current algorithm. They characterized the changes—under a...


[Source: This article was published in wsj.com By Rob Copeland - Uploaded by the Association Member: Jasper Solander] 

 
Categorized in Search Engine

Boolean searches make it easy to find what you're looking for in a Google search. The two basic Boolean search commands AND and OR are supported in Google. Boolean searches specify what you want to find and whether to make it more specific (using AND) or less specific (using OR).

A Boolean operator must be in uppercase letters because that's how Google understands it's a search operator and not a regular word. Be careful when typing the search operator; it makes a difference in the search results.

AND Boolean Operator

Use the AND operator in Google to search for all the search terms you specify. Using AND ensures that the topic you're researching is the topic you get in the search results.

For example, a search for Amazon on Google is likely to yield results relating to Amazon.com, such as the site's homepage, its Twitter account, Amazon Prime information, and items available for purchase on Amazon.com.

If you want information on the Amazon rainforest, a search for Amazon rainforest might yield results about Amazon.com or the word Amazon in general. To make sure each search result includes both Amazon and rainforest, use the AND operator.


Examples of the AND operator include:

  • Amazon AND rainforest
  • sausage AND biscuits
  • best AND college AND towns

In each of these examples, search results include web pages with all the terms connected by the Boolean operator AND.

OR Boolean Operator

Google uses the OR operator to search for one term or another term. An article can contain either word but doesn't have to include both. This usually works well when using two similar words or subjects you want to learn about.

For example, in a search for how to draw OR paint, the OR operator tells Google it doesn't matter which word is used since you'd like information on both.


To see the differences between the OR and AND operators, compare the results of how to draw OR paint versus how to draw AND paint. Since OR gives Google the freedom to show more content (since either word can be used), there are more results than if AND is used to restrict the search to include both words.

The break (pipe) character (|) can be used in place of OR. It is found on the same key as the backslash (\).

Examples of the OR operator include:

  • how to draw OR paint
  • how to draw | paint
  • primal OR paleo recipes
  • red OR yellow triangle

Combine Boolean Searches and Use Exact Phrases

When searching for a phrase rather than a single word, group the words with quotation marks. For example, search for "sausage biscuits" (with the quotes included) to show only results for phrases that include the words together, without anything between them. It ignores phrases such as sausage and cheese biscuits.

However, a search for "sausage biscuits" | "cheese sauce" gives results for either exact phrase, so you'll find articles about cheese sauce and articles about sausage biscuits.

When searching for a phrase or more than one keyword, in addition to using a Boolean operator, use parentheses. Type recipes gravy (sausage | biscuit) to search for gravy recipes for either sausages or biscuits. To search for sausage biscuit recipes or reviews, combine the exact phrase with quotations and search for "sausage biscuit" (recipe | review).

If you want paleo sausage recipes that include cheese, type (with quotes) "paleo recipe" (sausage AND cheese).
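If you build these queries often, the rules above are easy to automate. Here is an illustrative Python helper (my own, not part of any Google tooling) that composes quoted phrases, OR groups, AND terms and exclusions into a single query string:

```python
from urllib.parse import quote_plus

def boolean_query(exact=(), any_of=(), all_of=(), exclude=()):
    """Compose a Google query string from Boolean pieces.

    exact   -> phrases wrapped in quotation marks
    any_of  -> a parenthesised OR group (either term may appear)
    all_of  -> terms joined with AND (every term must appear)
    exclude -> terms prefixed with a minus sign
    """
    parts = [f'"{phrase}"' for phrase in exact]
    if any_of:
        parts.append("(" + " OR ".join(any_of) + ")")
    if all_of:
        parts.append(" AND ".join(all_of))
    parts += [f"-{term}" for term in exclude]
    return " ".join(parts)

q = boolean_query(exact=["sausage biscuit"], any_of=["recipe", "review"])
print(q)  # "sausage biscuit" (recipe OR review)
print("https://www.google.com/search?q=" + quote_plus(q))
```

The second print shows the query URL-encoded, ready to paste into a script or browser.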


Boolean Operators Are Case Sensitive

Google may not care about uppercase or lowercase letters in search terms, but Boolean searches are case sensitive. For a Boolean operator to work, it must be in all capital letters.

For example, a search for freeware for Windows OR Mac gives different results than a search for freeware for Windows or Mac.


[Source: This article was published in lifewire.com By Marziah Karch - Uploaded by the Association Member: Olivia Russell] 

Categorized in Search Engine

[Source: This article was published in nakedsecurity.sophos.com By Mark Stockley - Uploaded by the Association Member: Deborah Tannen]

The history of computing features a succession of organisations that looked, for a while at least, as if they were so deeply embedded in our lives that we’d never do without them.

IBM looked like that, and Microsoft did too. More recently it’s been Google and Facebook.

Sometimes they look unassailable because, in the narrow territory they occupy, they are.

When they do fall, it isn't because somebody storms that territory; they fall because the ground beneath them shifts.

For years and years Linux enthusiasts proclaimed “this will be the year that Linux finally competes with Windows on the desktop!”, and every year it wasn’t.

But Linux, under the brand name Android, eventually smoked Microsoft when ‘Desktop’ gave way to ‘Mobile’.

Google has been the 800-pound gorilla of web search since the late 1990s, and all attempts to out-Google it have failed. Its market share is rock solid and it's seen off all challengers, from lumbering tech leviathans to nimble and disruptive startups.

Google will not cede its territory to a Google clone but it might one day find that its territory is not what it was.

The web is getting deeper and darker and Google, Bing and Yahoo don’t actually search most of it.

They don’t search the sites on anonymous, encrypted networks like Tor and I2P (the so-called Dark Web) and they don’t search the sites that have either asked to be ignored or that can’t be found by following links from other websites (the vast, virtual wasteland known as the Deep Web).

The big search engines don’t ignore the Deep Web because there’s some impenetrable technical barrier that prevents them from indexing it – they do it because they’re commercial entities and the costs and benefits of searching beyond their current horizons don’t stack up.

That’s fine for most of us, most of the time, but it means that there are a lot of sites that go un-indexed and lots of searches that the current crop of engines are very bad at.

That’s why the US’s Defense Advanced Research Projects Agency (DARPA) invented a search engine for the deep web called Memex.

Memex is designed to go beyond the one-size-fits-all approach of Google and deliver the domain-specific searches that are the very best solution for narrow interests.

In its first year it’s been tackling the problems of human trafficking and slavery – things that, according to DARPA, have a significant presence beyond the gaze of commercial search engines.

When we first reported on Memex in February, we knew that it would have potential far beyond that. What we didn’t know was that parts of it would become available more widely, to the likes of you and me.

A lot of the project is still somewhat murky and most of the 17 technology partners involved are still unnamed, but the plan seems to be to lift the veil, at least partially, over the next two years, starting this Friday.

That’s when an initial tranche of Memex components, including software from a team called Hyperion Gray, will be listed on DARPA’s Open Catalog.

The Hyperion Gray team described their work to Forbes as:

Advanced web crawling and scraping technologies, with a dose of Artificial Intelligence and machine learning, with the goal of being able to retrieve virtually any content on the internet in an automated way.

Eventually our system will be like an army of robot interns that can find stuff for you on the web, while you do important things like watch cat videos.

More components will follow in December and, by the time the project wraps, a “general purpose technology” will be available.

Memex and Google don’t overlap much: they solve different problems, serve different needs, and are funded in very different ways.

But so were Linux and Microsoft.

The tools that DARPA releases at the end of the project probably won’t be a direct competitor to Google but I expect they will be mature and better suited to certain government and business applications than Google is.

That might not matter to Google but there are three reasons why Memex might catch its eye.

The first is not news but it’s true none the less – the web is changing and so is internet use.

When Google started there was no Snapchat, Bitcoin or Facebook. Nobody cared about the Deep Web because it was hard enough to find the things you actually wanted and nobody cared about the Dark Web (remember FreeNet?) because nobody knew what it was for.

The second is this statement made by Christopher White, the man heading up the Memex team at DARPA, who’s clearly thinking big:

The problem we're trying to address is that currently access to web content is mediated by a few very large commercial search engines - Google, Microsoft Bing, Yahoo - and essentially it's a one-size fits all interface...

We've started with one domain, the human trafficking domain ... In the end we want it to be useful for any domain of interest.

That's our ambitious goal: to enable a new kind of search engine, a new way to access public web content

And the third is what we’ve just discovered – Memex isn’t just for spooks and G-Men, it’s for the rest of us to use and, more importantly, to play with.

It’s one thing to use software and quite another to be able to change it. The beauty of open-source software is that people are free to take it in new directions – just like Google did when it picked up Linux and turned it into Android.

[Source: This article was published in theverge.com By Adi Robertson - Uploaded by the Association Member: Jay Harris]

Last weekend, in the hours after a deadly Texas church shooting, Google search promoted false reports about the suspect, suggesting that he was a radical communist affiliated with the antifa movement. The claims popped up in Google’s “Popular on Twitter” module, which made them prominently visible — although not the top results — in a search for the alleged killer’s name. It was just the latest of multiple similar missteps. As usual, Google promised to improve its search results, while the offending tweets disappeared. But telling Google to retrain its algorithms, as appropriate as that demand is, doesn’t solve the bigger issue: the search engine’s monopoly on truth.

Surveys suggest that, at least in theory, very few people unconditionally believe news from social media. But faith in search engines — a field long dominated by Google — appears consistently high. A 2017 Edelman survey found that 64 percent of respondents trusted search engines for news and information, a slight increase from the 61 percent who did in 2012, and notably more than the 57 percent who trusted traditional media. (Another 2012 survey, from Pew Research Center, found that 66 percent of people believed search engines were “fair and unbiased,” almost the same proportion that did in 2005.) Researcher danah boyd has suggested that media literacy training encouraged students to conflate doing independent research with simply using search engines. Instead of learning to evaluate sources, “[students] heard that Google was trustworthy and Wikipedia was not.”

GOOGLE SEARCH IS A TOOL, NOT AN EXPERT

Google encourages this perception, as do competitors like Amazon and Apple — especially as their products depend more and more on virtual assistants. Though Google’s text-based search page is a flawed system, at least it presents Google search as what it is: a directory for the larger internet and, at a more basic level, a useful tool for humans to master.

Google Assistant turns search into a trusted companion dispensing expert advice. The service has emphasized the idea that people shouldn’t have to learn special commands to “talk” to a computer, and demos of products like Google Home show off Assistant’s prowess at analyzing the context of simple spoken questions, then guessing exactly what users want. When bad information inevitably slips through, hearing it authoritatively spoken aloud is even more jarring than seeing it on a page.

Even if search results are overwhelmingly accurate, surfacing even a few bad results around topics like mass shootings is a major problem — especially if people are primed to believe that anything Google says is true. And for every advance Google makes to improve its results, there’s a host of people waiting to game the new system, forcing it to adapt again.

NOT ALL FEATURES ARE WORTH SAVING

Simply shaming Google over bad search results might actually play into its mythos, even if the goal is to hold the company accountable. It reinforces a framing where Google search’s ideal final state is a godlike, omniscient benefactor, not just a well-designed product. Yes, Google search should get better at avoiding obvious fakery, or creating a faux-neutral system that presents conspiracy theories next to hard reporting. But we should be wary of overemphasizing its ability, or that of any other technological system, to act as an arbiter of what’s real.

Alongside pushing Google to stop “fake news,” we should be looking for ways to limit trust in, and reliance on, search algorithms themselves. That might mean seeking handpicked video playlists instead of searching YouTube Kids, which recently drew criticism for surfacing inappropriate videos. It could mean focusing on reestablishing trust in human-led news curation, which has produced its own share of dangerous misinformation. It could mean pushing Google to kill, not improve, features that fail in predictable and damaging ways. At the very least, I’ve proposed that Google rename or abolish the Top Stories carousel, which offers legitimacy to certain pages without vetting their accuracy. Reducing the prominence of “Popular on Twitter” might make sense, too, unless Google clearly commits to strong human-led quality control.

The past year has made web platforms’ tremendous influence clearer than ever. Congress recently grilled Google, Facebook, and other tech companies over their role in spreading Russian propaganda during the presidential election. A report from The Verge revealed that unscrupulous rehab centers used Google to target people seeking addiction treatment. Simple design decisions can strip out the warning signs of a spammy news source. We have to hold these systems to a high standard. But when something like search screws up, we can’t just tell Google to offer the right answers. We have to operate on the assumption that it won’t ever have them.

[Source: This article was published in searchenginejournal.com By Pratik Dholakiya - Uploaded by the Association Member: Barbara larson] 

Important changes are happening at Google and, in a world where marketing and algorithms intersect, those changes are largely happening under the radar.

The future of search looks like it will have considerably less search in it, and this isn’t just about the end of the 10 blue links, but about much more fundamental changes.

Let’s talk about some of those changes now, and what they mean for SEO.

Google Discover

Google Discover is a content recommendation engine that suggests content from across the web based on a user’s search history and behavior.

Discover isn’t completely new (it was introduced in December of 2016 as Google Feed). But Google made an important change in late October (announced in September) when they added it to the Google homepage.

The revamp and rebranding to Discover added features like:

  • Topic headers to categorize feed results.
  • More images and videos.
  • Evergreen content, as opposed to just fresh content.
  • A toggle to tell Google if you want more or less content similar to a recommendation.
  • Recommendations that Google claims are personalized to your level of expertise with a topic.

Google Discover hardly feels revolutionary at first. In fact, it feels overdue.

Our social media feeds are already dominated by content recommendation engines, and the YouTube content recommendation engine is responsible for 70% of the time spent on the site.

But Discover could have massive implications for the future of how users interact with the content of the web.

While it’s unlikely Discover will ever reach the 70% level of YouTube’s content recommendation engine, if it swallows even a relatively small portion of Google search, say 10%, no SEO strategy will be complete without a tactic for earning that kind of traffic. After all, Discover allows businesses to reach potential customers who aren’t even searching for the relevant terms yet.

Google Assistant

For most users, Google Assistant is a quiet and largely invisible revolution.

Its introduction to Android devices in February 2017 likely left most users feeling like it was little more than an upgraded Google Now, and in a sense that’s exactly what it is.

But as Google Assistant grows, it will increasingly influence how users interact with the web and decrease reliance on search.

Like its predecessor, Assistant can:

  • Search the web.
  • Schedule events and alarms.
  • Show Google account info.
  • Adjust device settings.

But the crucial difference is its ability to engage in two-way conversations, allowing users to get answers from the system without ever even looking at a search result.

An especially important change for the future of business and the web is the introduction of Google Express, which lets users add products to a shopping cart and order them entirely through Assistant.

But this feature is limited to businesses that are explicitly partnered with Google Express, a dramatic departure from the Google search engine’s crawling of the open web.

Assistant can also identify what some images are. Google Duplex, an upcoming feature, will also allow Assistant to call businesses to schedule appointments and other similar actions on the user’s behalf.

The more users rely on Assistant, the less they will rely on Google search results, and the more businesses who hope to adapt will need to think of other ways to:

  • Leverage Assistant’s algorithms and other emerging technologies to fill in the gaps.
  • Adjust their SEO strategies to target the kind of behavior that is exclusive to search and search alone.

Google’s Declaration of a New Direction

Around its 20th anniversary, Google announced that its search product was closing an old chapter and opening a new one, with important new driving principles added.

They started by clarifying that these old principles wouldn’t be going away:

  • Focusing on serving the user’s information needs.
  • Providing the most relevant, high-quality information as quickly as possible.
  • Using an algorithmic approach.
  • Rigorously testing every change, including using quality rating guidelines to define search goals.

This means you should continue:

  • Putting the user first.
  • Being accurate and relevant.
  • Having some knowledge of algorithms.
  • Meeting Google’s quality rating guidelines.

But the following principles represent a dramatically new direction for Google Search:

Shifting from Answers to Journeys

Google is adding new features that will allow users to “pick up where they left off,” shifting the focus away from short-term answers to bigger, ongoing projects.

This already includes activity cards featuring previous pages visited and queries searched, the ability to add content to collections, and tabs that suggest what to learn about next, personalized to the user’s search history.

A new Topic layer has also been added to the Knowledge Graph, allowing Google to surface evergreen content suggestions for users interested in a particular topic.

Perhaps the most important change to watch carefully, Google is looking for ways to help users who don’t even make a search query.

Google Discover is central to this effort and the inclusion of evergreen content, not just fresh content, represents an important change in how Google is thinking about the feed. This means more and more traditional search content will become feed content instead.

Shifting from Text to Visual Representation

Google is making important changes in the way information is presented by adding new visual capabilities.

They are introducing algorithmically generated AMP Stories, video compilations with relevant caption text like age and notable events in a person’s life.

New featured videos have also been added to search, designed to offer an overview of topics you are interested in.

Image search has also been updated so that images featured on pages with relevant content take priority and pages where the image is central to the content rank better. Captions and suggested searches have been added as well.

Finally, Google Lens allows you to perform a visual search based on objects that Google’s AI can detect in the image.

These changes to search are slipping under the radar somewhat for now, since user behavior rarely changes overnight.

But the likelihood that these features and Google’s new direction will have a dramatic impact on how search works is very high.

SEOs who ignore these changes and continue operating with a 2009 mindset will find themselves losing ground to competitors.

SEO After Search

While queries will always be an important part of the way we find information online, we’re now entering a new era of search.

An era that demands we start changing the way we think about SEO soon, while we can capitalize on the changing landscape.

The situation is not unlike when Google first came on the scene in 1998, when new opportunities were on the horizon that most people at the time were unaware of and ill-prepared for.

As the technological landscape changes, we will need to alter our strategies and start thinking about questions and ideas like these in our vision for the future of our brands:

  • Less focus on queries and more focus on context appears inevitable. Where does our content fit into a user’s journey? What would they have learned before consuming it, and what will they need to know next? Note that this is much more vital than simply a shift from keywords to topics, which has been happening for a very long time already. Discovery without queries is much more fundamental and impacts our strategies in a much more profound way.
  • How much can we incorporate our lead generation funnel into that journey as it already exists, and how much can we influence that journey to push it in a different direction?
  • How can we create content and resources that users will want to bookmark and add to collections?
  • Why would Google recommend our content as a useful evergreen resource in Discover, and for what type of user?
  • Can we partner with Google on emerging products? How do we adapt when we can’t?
  • How should we incorporate AMP stories and similar visual content into our content strategy?
  • What type of content will always be exclusive to query-based search, and should we focus more or less on this type of content?
  • What types of content will Google’s AI capacities ultimately be able to replace entirely, and on what timeline? What will Google Assistant and its successors never be able to do that only content can?
  • To what extent is it possible for SEOs to adopt a “post-content” strategy?

With the future of search having Google itself doing more of the “searching” on the user’s behalf, we will need to get more creative in our thinking.

We must recognize that surfacing content has never been Google’s priority. It has always been focused on providing information.

Bigger Than Google

The changes on the horizon also signal that the SEO industry ought to start thinking bigger than Google.

What does that mean?

It means expanding the scope of SEO from search to the broader world where algorithms and marketing intersect.

It’s time to start thinking more about how our skills apply to:

  • Content recommendation engines
  • Social media algorithms
  • Ecommerce product recommendation engines
  • Amazon’s search algorithms
  • Smart devices, smart homes, and the internet of things
  • Mobile apps
  • Augmented reality

As doors on search close, new doors open everywhere users are interacting with algorithms that connect to the web and the broader digital world.

SEO professionals should not see the decline of traditional search as a death knell for the industry.

Instead, we should look at the inexorably increasing role algorithms play in people’s lives as fertile ground full of emerging possibilities.

