Google is the search engine that most of us know and use, so much so that the word Google has become synonymous with search. As of September 2019, the search engine giant had captured 92.96% of the market. That's why it has become vitally important for businesses to rank higher in Google search results if they want to be noticed. That's where SERP, or "Search Engine Results Page," scraping can come in handy. Whenever a user searches for something on Google, they get a SERP that consists of paid Google Ads results, featured snippets, organic results, videos, product listings, and the like. Tracking these SERP results using a service like Serpstack is necessary for businesses that either want to rank their own products or help other businesses do the same.

Manually tracking SERP results is next to impossible, as they vary widely depending on the search query, the origin of the query, and a plethora of other factors. Also, the number of listings in a single search query is so high that manual tracking makes no sense at all. Serpstack, on the other hand, is an automated Google search results API that can scrape real-time, accurate SERP data and present it in an easy-to-consume format. In this article, we take a brief look at Serpstack to see what it brings to the table and how it can help you track SERP data for the keywords and queries that matter to your business.

Serpstack REST API for SERP Data: What Does It Bring?

Serpstack's JSON REST API for SERP data is fast and reliable and always gives you real-time, accurate search results data. The service is trusted by some of the largest brands in the world. The best part about Serpstack, apart from its reliable data, is that it can scrape Google search results at scale. Whether you need one thousand or one million results, Serpstack can handle it with ease. Not only that, Serpstack also brings built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs, so you as a user don't have to worry about anything.


If you decide to give Serpstack REST API a chance, here are the main features that you can expect from this service:

  • Serpstack is scalable and queueless thanks to its powerful cloud infrastructure, which can withstand high-volume API requests without the need for a queue.
  • The search queries are highly customizable. You can tailor your queries based on a series of options including location, language, device, and more, so you get the data that you need.
  • Built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs.
  • It brings simple integration. You can start scraping SERP pages at scale within a few minutes of logging into the service (see the sketch after this list).
  • Serpstack features bank-grade 256-bit SSL encryption for all its data streams, which means your data is always protected.
  • An easy-to-use REST API responding in JSON or CSV, compatible with any programming language.
  • With Serpstack, you get super-fast scraping speeds. All API requests sent to Serpstack are processed in a matter of milliseconds.
  • Clear API documentation that shows you exactly how to use the service, making it beginner-friendly even if you have never used a SERP scraping service before.
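To give a concrete sense of how simple the integration is, here is a minimal sketch of a Serpstack request in Python. The endpoint and parameter names follow the pattern shown in Serpstack's documentation, but treat the details as assumptions: the access key is a placeholder, and the exact response fields may differ from what is shown here.

    import requests

    # Minimal Serpstack query: fetch Google results for a keyword.
    # ACCESS_KEY is a placeholder -- use the key from your Serpstack dashboard.
    # Note: the free plan is HTTP-only; HTTPS requires a paid plan.
    ACCESS_KEY = "YOUR_ACCESS_KEY"

    response = requests.get(
        "http://api.serpstack.com/search",
        params={
            "access_key": ACCESS_KEY,
            "query": "coffee makers",  # the search term to scrape
            "location": "New York",    # optional: localize the SERP
        },
        timeout=10,
    )
    data = response.json()

    # Print the position, title, and URL of each organic result.
    for result in data.get("organic_results", []):
        print(result.get("position"), result.get("title"), result.get("url"))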

Looking at the feature list above, I hope you can understand why Serpstack is one of the best, if not the best, SERP scraping services on the market. I am especially impressed by its scalability, incredibly fast speed, and built-in privacy and security protocols. However, there's one more thing we haven't discussed yet that pushes it to the top spot for me, and that is its pricing. That's what we are going to cover in the next section.

Pricing and Availability

Serpstack's pricing is what makes it accessible to individuals and to small and large businesses alike. It offers a capable free version that should serve the needs of most individuals and even smaller businesses. If you are operating a larger business that requires more, there are various pricing plans to choose from depending on your requirements. Taking the free plan first, the best part is that it's free forever and there are no hidden charges. The free version gets you 100 searches/month with access to global locations, proxy networks, and all the main features. The only big missing feature is HTTPS encryption.


Once you are ready to pay, you can start with the Basic plan, which costs $29.99/month ($23.99/month if billed annually). In this plan, you get 5,000 searches/month along with all the features missing from the free plan. This plan should be enough for most small to medium-sized businesses. If you require more, there's a Business plan at $99.99/month ($79.99/month if billed annually) which gets you 20,000 searches, and a Business Pro plan at $199.99/month ($159.99/month if billed annually) which gets you 50,000 searches per month. There's also a custom pricing option for companies that require a tailored pricing structure.

Serpstack Makes Google Search Results Scraping Accessible

SERP scraping is important if you want to compete in today's world. Seeing which queries fetch which results is an important step in identifying your competitors. Once you know who they are, you can devise an action plan to compete with them. Without SERP data, your business will be at a big disadvantage online. So, use Serpstack to scrape SERP data and build a successful online business.

[Source: This article was published in beebom.com By Partner Content - Uploaded by the Association Member: Dorothy Allen]

Categorized in Search Engine

Search-engine giant says one in 10 queries (and some advertisements) will see improved results from algorithm change

MOUNTAIN VIEW, Calif.—Google rarely talks about its secretive search algorithm. This week, the tech giant took a stab at transparency, unveiling changes that it says will surface more accurate and intelligent responses to hundreds of millions of queries each day.

Top Google executives, in a media briefing Thursday, said they had harnessed advanced machine learning and mathematical modeling to produce better answers for complex search entries that often confound its current algorithm. They characterized the changes—under a...

Read More...

[Source: This article was published in wsj.com By Rob Copeland - Uploaded by the Association Member: Jasper Solander] 

 
Categorized in Search Engine

Boolean searches make it easy to find what you're looking for in a Google search. The two basic Boolean search commands, AND and OR, are supported in Google. Boolean searches specify what you want to find and whether to make the search more specific (using AND) or less specific (using OR).

A Boolean operator must be in uppercase letters because that's how Google understands it's a search operator and not a regular word. Be careful when typing the search operator; it makes a difference in the search results.

AND Boolean Operator

Use the AND operator in Google to search for all the search terms you specify. Using AND ensures that the topic you're researching is the topic you get in the search results.

For example, a search for Amazon on Google is likely to yield results relating to Amazon.com, such as the site's homepage, its Twitter account, Amazon Prime information, and items available for purchase on Amazon.com.

If you want information on the Amazon rainforest, a search for Amazon rainforest might yield results about Amazon.com or the word Amazon in general. To make sure each search result includes both Amazon and rainforest, use the AND operator.


Examples of the AND operator include:

  • Amazon AND rainforest
  • sausage AND biscuits
  • best AND college AND towns

In each of these examples, search results include web pages with all the terms connected by the Boolean operator AND.

OR Boolean Operator

Google uses the OR operator to search for one term or another term. An article can contain either word but doesn't have to include both. This works well when searching for two similar words or related subjects you want to learn about.

For example, in a search for how to draw OR paint, the OR operator tells Google it doesn't matter which word is used since you'd like information on both.


To see the differences between the OR and AND operators, compare the results of how to draw OR paint versus how to draw AND paint. Since OR gives Google the freedom to show more content (since either word can be used), there are more results than if AND is used to restrict the search to include both words.

The pipe character (|), found on the same key as the backslash (\), can be used in place of OR.

Examples of the OR operator include:

  • how to draw OR paint
  • how to draw | paint
  • primal OR paleo recipes
  • red OR yellow triangle

Combine Boolean Searches and Use Exact Phrases

When searching for a phrase rather than a single word, group the words with quotation marks. For example, search for "sausage biscuits" (with the quotes included) to show only results for phrases that include the words together, without anything between them. It ignores phrases such as sausage and cheese biscuits.

However, a search for "sausage biscuits" | "cheese sauce" gives results for either exact phrase, so you'll find articles about cheese sauce and articles about sausage biscuits.

When searching for a phrase or more than one keyword, in addition to using a Boolean operator, use parentheses. Type recipes gravy (sausage | biscuit) to search for gravy recipes for either sausages or biscuits. To search for sausage biscuit recipes or reviews, combine the exact phrase with quotations and search for "sausage biscuit" (recipe | review).

If you want paleo sausage recipes that include cheese, type (with quotes) "paleo recipe" (sausage AND cheese).
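If you ever need to run these Boolean queries programmatically, the same syntax can be dropped straight into a Google search URL; the q parameter of google.com/search carries the query, and the only extra work is URL encoding. A minimal sketch in Python:

    from urllib.parse import quote_plus

    # Build a Google search URL from a Boolean query string.
    # The operators must stay uppercase (AND, OR) -- Google treats
    # lowercase "and"/"or" as ordinary search words.
    def google_search_url(query: str) -> str:
        return "https://www.google.com/search?q=" + quote_plus(query)

    print(google_search_url("Amazon AND rainforest"))
    print(google_search_url('"sausage biscuit" (recipe | review)'))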


Boolean Operators Are Case Sensitive

Google may not care about uppercase or lowercase letters in search terms, but Boolean searches are case sensitive. For a Boolean operator to work, it must be in all capital letters.

For example, a search for freeware for Windows OR Mac gives different results than a search for freeware for Windows or Mac.


[Source: This article was published in lifewire.com By Marziah Karch - Uploaded by the Association Member: Olivia Russell] 

Categorized in Search Engine

[Source: This article was published in nakedsecurity.sophos.com By Mark Stockley - Uploaded by the Association Member: Deborah Tannen]

The history of computing features a succession of organisations that looked, for a while at least, as if they were so deeply embedded in our lives that we’d never do without them.

IBM looked like that, and Microsoft did too. More recently it’s been Google and Facebook.

Sometimes they look unassailable because, in the narrow territory they occupy, they are.

When they do fall it isn’t because somebody storms that territory, they fall because the ground beneath them shifts.

For years and years Linux enthusiasts proclaimed “this will be the year that Linux finally competes with Windows on the desktop!”, and every year it wasn’t.

But Linux, under the brand name Android, eventually smoked Microsoft when ‘Desktop’ gave way to ‘Mobile’.

Google has been the 800-pound gorilla of web search since the late 1990s and all attempts to out-Google it have failed. Its market share is rock solid and it's seen off all challengers, from lumbering tech leviathans to nimble and disruptive startups.

Google will not cede its territory to a Google clone but it might one day find that its territory is not what it was.

The web is getting deeper and darker and Google, Bing and Yahoo don’t actually search most of it.

They don’t search the sites on anonymous, encrypted networks like Tor and I2P (the so-called Dark Web) and they don’t search the sites that have either asked to be ignored or that can’t be found by following links from other websites (the vast, virtual wasteland known as the Deep Web).

The big search engines don’t ignore the Deep Web because there’s some impenetrable technical barrier that prevents them from indexing it – they do it because they’re commercial entities and the costs and benefits of searching beyond their current horizons don’t stack up.

That’s fine for most of us, most of the time, but it means that there are a lot of sites that go un-indexed and lots of searches that the current crop of engines are very bad at.

That’s why the US’s Defence Advanced Research Projects Agency (DARPA) invented a search engine for the deep web called Memex.

Memex is designed to go beyond the one-size-fits-all approach of Google and deliver the domain-specific searches that are the very best solution for narrow interests.

In its first year it’s been tackling the problems of human trafficking and slavery – things that, according to DARPA, have a significant presence beyond the gaze of commercial search engines.

When we first reported on Memex in February, we knew that it would have potential far beyond that. What we didn’t know was that parts of it would become available more widely, to the likes of you and me.

A lot of the project is still somewhat murky and most of the 17 technology partners involved are still unnamed, but the plan seems to be to lift the veil, at least partially, over the next two years, starting this Friday.

That’s when an initial tranche of Memex components, including software from a team called Hyperion Gray, will be listed on DARPA’s Open Catalog.

The Hyperion Gray team described their work to Forbes as:

Advanced web crawling and scraping technologies, with a dose of Artificial Intelligence and machine learning, with the goal of being able to retrieve virtually any content on the internet in an automated way.

Eventually our system will be like an army of robot interns that can find stuff for you on the web, while you do important things like watch cat videos.

More components will follow in December and, by the time the project wraps, a “general purpose technology” will be available.

Memex and Google don't overlap much: they solve different problems, they serve different needs and they're funded in very different ways.

But so were Linux and Microsoft.

The tools that DARPA releases at the end of the project probably won’t be a direct competitor to Google but I expect they will be mature and better suited to certain government and business applications than Google is.

That might not matter to Google but there are three reasons why Memex might catch its eye.

The first is not news but it's true nonetheless – the web is changing and so is internet use.

When Google started there was no Snapchat, Bitcoin or Facebook. Nobody cared about the Deep Web because it was hard enough to find the things you actually wanted and nobody cared about the Dark Web (remember FreeNet?) because nobody knew what it was for.

The second is this statement made by Christopher White, the man heading up the Memex team at DARPA, who’s clearly thinking big:

The problem we're trying to address is that currently access to web content is mediated by a few very large commercial search engines - Google, Microsoft Bing, Yahoo - and essentially it's a one-size fits all interface...

We've started with one domain, the human trafficking domain ... In the end we want it to be useful for any domain of interest.

That's our ambitious goal: to enable a new kind of search engine, a new way to access public web content

And the third is what we’ve just discovered – Memex isn’t just for spooks and G-Men, it’s for the rest of us to use and, more importantly, to play with.

It’s one thing to use software and quite another to be able to change it. The beauty of open-source software is that people are free to take it in new directions – just like Google did when it picked up Linux and turned it into Android.

Categorized in Search Engine

[Source: This article was published in theverge.com By Adi Robertson - Uploaded by the Association Member: Jay Harris]

Last weekend, in the hours after a deadly Texas church shooting, Google search promoted false reports about the suspect, suggesting that he was a radical communist affiliated with the antifa movement. The claims popped up in Google's "Popular on Twitter" module, which made them prominently visible — although not the top results — in a search for the alleged killer's name. This was just the latest instance of a long-standing problem, one of multiple similar missteps. As usual, Google promised to improve its search results, while the offending tweets disappeared. But telling Google to retrain its algorithms, as appropriate as that demand is, doesn't solve the bigger issue: the search engine's monopoly on truth.

Surveys suggest that, at least in theory, very few people unconditionally believe news from social media. But faith in search engines — a field long dominated by Google — appears consistently high. A 2017 Edelman survey found that 64 percent of respondents trusted search engines for news and information, a slight increase from the 61 percent who did in 2012, and notably more than the 57 percent who trusted traditional media. (Another 2012 survey, from Pew Research Center, found that 66 percent of people believed search engines were "fair and unbiased," almost the same proportion that did in 2005.) Researcher danah boyd has suggested that media literacy training conflated using search engines with doing independent research. Instead of learning to evaluate sources, "[students] heard that Google was trustworthy and Wikipedia was not."

GOOGLE SEARCH IS A TOOL, NOT AN EXPERT

Google encourages this perception, as do competitors like Amazon and Apple — especially as their products depend more and more on virtual assistants. Though Google's text-based search page is a flawed system, at least it makes plain that Google search functions as a directory for the larger internet — and, at a more basic level, as a useful tool for humans to master.

Google Assistant turns search into a trusted companion dispensing expert advice. The service has emphasized the idea that people shouldn’t have to learn special commands to “talk” to a computer, and demos of products like Google Home show off Assistant’s prowess at analyzing the context of simple spoken questions, then guessing exactly what users want. When bad information inevitably slips through, hearing it authoritatively spoken aloud is even more jarring than seeing it on a page.

Even if the search is overwhelmingly accurate, highlighting just a few bad results around topics like mass shootings is a major problem — especially if people are primed to believe that anything Google says is true. And for every advance Google makes to improve its results, there’s a host of people waiting to game the new system, forcing it to adapt again.

NOT ALL FEATURES ARE WORTH SAVING

Simply shaming Google over bad search results might actually play into its mythos, even if the goal is to hold the company accountable. It reinforces a framing where Google search's ideal final state is a godlike, omniscient benefactor, not just a well-designed product. Yes, Google search should get better at avoiding obvious fakery and at not creating a faux-neutral system that presents conspiracy theories next to hard reporting. But we should be wary of overemphasizing its ability, or that of any other technological system, to act as an arbiter of what's real.

Alongside pushing Google to stop “fake news,” we should be looking for ways to limit trust in, and reliance on, search algorithms themselves. That might mean seeking handpicked video playlists instead of searching YouTube Kids, which recently drew criticism for surfacing inappropriate videos. It could mean focusing on reestablishing trust in human-led news curation, which has produced its own share of dangerous misinformation. It could mean pushing Google to kill, not improve, features that fail in predictable and damaging ways. At the very least, I’ve proposed that Google rename or abolish the Top Stories carousel, which offers legitimacy to certain pages without vetting their accuracy. Reducing the prominence of “Popular on Twitter” might make sense, too, unless Google clearly commits to strong human-led quality control.

The past year has made web platforms’ tremendous influence clearer than ever. Congress recently grilled Google, Facebook, and other tech companies over their role in spreading Russian propaganda during the presidential election. A report from The Verge revealed that unscrupulous rehab centers used Google to target people seeking addiction treatment. Simple design decisions can strip out the warning signs of a spammy news source. We have to hold these systems to a high standard. But when something like search screws up, we can’t just tell Google to offer the right answers. We have to operate on the assumption that it won’t ever have them.

Categorized in Search Engine

[Source: This article was published in searchenginejournal.com By Pratik Dholakiya - Uploaded by the Association Member: Barbara larson] 

Important changes are happening at Google and, in a world where marketing and algorithms intersect, those changes are largely happening under the radar.

The future of search looks like it will have considerably less search in it, and this isn’t just about the end of the 10 blue links, but about much more fundamental changes.

Let’s talk about some of those changes now, and what they mean for SEO.

Google Discover

Google Discover is a content recommendation engine that suggests content from across the web based on a user's search history and behavior.

Discover isn’t completely new (it was introduced in December of 2016 as Google Feed). But Google made an important change in late October (announced in September) when they added it to the Google homepage.

The revamp and rebranding to Discover added features like:

  • Topic headers to categorize feed results.
  • More images and videos.
  • Evergreen content, as opposed to just fresh content.
  • A toggle to tell Google if you want more or less content similar to a recommendation.
  • Recommendations that Google claims are personalized to your level of expertise with a topic.

Google Discover hardly feels revolutionary at first. In fact, it feels overdue.

Our social media feeds are already dominated by content recommendation engines, and the YouTube content recommendation engine is responsible for 70% of the time spent on the site.

But Discover could have massive implications for the future of how users interact with the content of the web.

While it’s unlikely Discover will ever reach the 70% level of YouTube’s content recommendation engine, if it swallows even a relatively small portion of Google search, say 10%, no SEO strategy will be complete without a tactic for earning that kind of traffic, especially since it will allow businesses to reach potential customers who aren’t even searching for the relevant terms yet.

Google Assistant

For most users, Google Assistant is a quiet and largely invisible revolution.

Its introduction to Android devices in February 2017 likely left most users feeling like it was little more than an upgraded Google Now, and in a sense that’s exactly what it is.

But as Google Assistant grows, it will increasingly influence how users interact with the web and decrease reliance on search.

Like its predecessor, Assistant can:

  • Search the web.
  • Schedule events and alarms.
  • Show Google account info.
  • Adjust device settings.

But the crucial difference is its ability to engage in two-way conversations, allowing users to get answers from the system without ever even looking at a search result.

An incredibly important change for the future of business and the web is the introduction of Google Express, the capability to add products to a shopping cart and order them entirely through Assistant.

But this feature is limited to businesses that are explicitly partnered with Google Express, an incredibly dramatic change from the Google search engine and its crawling of the open web.

Assistant can also identify objects in some images. Google Duplex, an upcoming feature, will also allow Assistant to call businesses to schedule appointments and take other similar actions on the user's behalf.

The more users rely on Assistant, the less they will rely on Google search results, and businesses that hope to adapt will need to think of other ways to:

  • Leverage Assistant’s algorithms and other emerging technologies to fill in the gaps.
  • Adjust their SEO strategies to target the kind of behavior that is exclusive to search and search alone.

Google’s Declaration of a New Direction

Around its 20th anniversary, Google announced that its search product was closing an old chapter and opening a new one, with important new driving principles added.

They started by clarifying that these old principles wouldn’t be going away:

  • Focusing on serving the user’s information needs.
  • Providing the most relevant, high-quality information as quickly as possible.
  • Using an algorithmic approach.
  • Rigorously testing every change, including using quality rating guidelines to define search goals.

This means you should continue:

  • Putting the user first.
  • Being accurate and relevant.
  • Having some knowledge of algorithms.
  • Meeting Google’s quality rating guidelines.

But the following principles represent a dramatically new direction for Google Search:

Shifting from Answers to Journeys

Google is adding new features that will allow users to “pick up where they left off,” shifting the focus away from short-term answers to bigger, ongoing projects.

This already includes activity cards featuring previously visited pages and past queries, the ability to add content to collections, and tabs that suggest what to learn about next, personalized to the user's search history.

A new Topic layer has also been added to the Knowledge Graph, allowing Google to surface evergreen content suggestions for users interested in a particular topic.

Perhaps the most important change to watch carefully: Google is looking for ways to help users who don't even make a search query.

Google Discover is central to this effort and the inclusion of evergreen content, not just fresh content, represents an important change in how Google is thinking about the feed. This means more and more traditional search content will become feed content instead.

Shifting from Text to Visual Representation

Google is making important changes in the way information is presented by adding new visual capabilities.

They are introducing algorithmically generated AMP Stories: video compilations with relevant caption text, such as a person's age and notable life events.

New featured videos have been added to search, designed to offer an overview of topics you are interested in.

Image search has also been updated so that images featured on pages with relevant content take priority and pages where the image is central to the content rank better. Captions and suggested searches have been added as well.

Finally, Google Lens allows you to perform a visual search based on objects that Google’s AI can detect in the image.

These changes to search are slipping under the radar somewhat for now, since user behavior rarely changes overnight.

But the likelihood that these features and Google’s new direction will have a dramatic impact on how search works is very high.

SEOs who ignore these changes and continue operating with a 2009 mindset will find themselves losing ground to competitors.

SEO After Search

While queries will always be an important part of the way we find information online, we’re now entering a new era of search.

An era that demands we start changing the way we think about SEO soon, while we can capitalize on the changing landscape.

The situation is not unlike when Google first came on the scene in 1998, when new opportunities were on the horizon that most people at the time were unaware of and ill-prepared for.

As the technological landscape changes, we will need to alter our strategies and start thinking about questions and ideas like these in our vision for the future of our brands:

  • Less focus on queries and more focus on context appears inevitable. Where does our content fit into a user’s journey? What would they have learned before consuming it, and what will they need to know next? Note that this is much more vital than simply a shift from keywords to topics, which has been happening for a very long time already. Discovery without queries is much more fundamental and impacts our strategies in a much more profound way.
  • How much can we incorporate our lead generation funnel into that journey as it already exists, and how much can we influence that journey to push it in a different direction?
  • How can we create content and resources that users will want to bookmark and add to collections?
  • Why would Google recommend our content as a useful evergreen resource in Discover, and for what type of user?
  • Can we partner with Google on emerging products? How do we adapt when we can’t?
  • How should we incorporate AMP stories and similar visual content into our content strategy?
  • What type of content will always be exclusive to query-based search, and should we focus more or less on this type of content?
  • What types of content will Google's AI capacities ultimately be able to replace entirely, and on what timeline? What will Google Assistant and its successors never be able to do that only content can?
  • To what extent is it possible for SEOs to adopt a “post-content” strategy?

With the future of search having Google itself doing more of the “searching” on the user’s behalf, we will need to get more creative in our thinking.

We must recognize that surfacing content has never been Google’s priority. It has always been focused on providing information.

Bigger Than Google

The changes on the horizon also signal that the SEO industry ought to start thinking bigger than Google.

What does that mean?

It means expanding the scope of SEO from search to the broader world where algorithms and marketing intersect.

It’s time to start thinking more about how our skills apply to:

  • Content recommendation engines
  • Social media algorithms
  • Ecommerce product recommendation engines
  • Amazon’s search algorithms
  • Smart devices, smart homes, and the internet of things
  • Mobile apps
  • Augmented reality

As doors on search close, new doors open everywhere users are interacting with algorithms that connect to the web and the broader digital world.

SEO professionals should not see the decline of traditional search as a death knell for the industry.

Instead, we should look at the inexorably increasing role algorithms play in peoples’ lives as a fertile ground full of emerging possibilities.

Categorized in Search Engine

[Source: This article was published in lifehacker.com By Mike Epstein - Uploaded by the Association Member: Joshua Simon]

The internet, as a “place,” is constantly growing. We build more and more webpages every day—so many, in fact, that it can feel as if certain corners of it are lost to time.

As it turns out, they may only be lost to Google. Earlier this year, web developer-bloggers Tim Bray and Marco Fioretti noted that Google seems to have stopped indexing the entirety of the internet for Google Search. As a result, certain old websites—those more than 10 years old—did not show up in Google search results. Both writers lamented that limiting Google's effective memory to the last decade, while logical when faced with the daunting task of playing information concierge to our every whimsical question, forces us to reckon with the fact that when you use Google for historical searches, there are probably more answers out there.

As a BoingBoing post based on Bray's observations points out, DuckDuckGo and Bing both still seem to offer more complete records of the internet, specifically showing web pages that Google stopped indexing for search. If you're looking for a specific website from before 2009 and can't find it, either one is a solid first step. If that doesn't work, it's always possible someone else who needed the same page you were looking for saved it as an archive on the Wayback Machine.
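If you would rather check the Wayback Machine programmatically, the Internet Archive exposes a simple availability endpoint. A quick sketch in Python (the endpoint and response shape follow archive.org's public API; treat the field names as assumptions if the API has changed):

    import requests

    # Ask the Wayback Machine for the snapshot closest to a given date.
    # The timestamp is YYYYMMDD; here, a date from the pre-2009 web.
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": "example.com", "timestamp": "20080101"},
        timeout=10,
    )
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        print("Archived copy:", closest["url"], "from", closest["timestamp"])
    else:
        print("No archived snapshot found.")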

But what about broad questions? Questions where you don't already know the answer? Historical research from the early web? There are other, more specialized options for that. A Hacker News thread suggests a couple of search engines: Million Short, which lets you run a search and automatically skip the most popular results to probe deeper into the web, and Wiby.me, a "search engine for classic websites" made to help people find hobbyist pages and other archaic corners of the internet.

The Hacker News thread also brings up Pinboard, a minimalist bookmarking service similar to Pocket, which has a key feature for archivists: if you sign up for its premium service ($11 per year), Pinboard will make a web archive of every page you save. If you're looking at older, unindexed material, such a tool makes it easier to go back to specific parts of the older internet that you may want or need to recall again.

Categorized in How to

 [Source: This article was Published in seroundtable.com By Barry Schwartz - Uploaded by the Association Member: David J. Redcliff]

Google is now sending out newish (not 100% new) alerts for changes in the top queries for your site. This is an email from Google Search Console that shows you either large increases or decreases in your ranking positions, according to Google Search Console data.

The emails carry the subject line "change in top queries for your site." The body reads: "Search Console has identified a recent change in the top queries leading to your site from Google Search. We thought that you might be interested to know these changes. Here is how some of your top queries performed in the week of..." It then lists example queries and how they changed.

Here is a screenshot shared by Eli Schwartz on Twitter:

(Image: the "change in top queries" alert email from Search Console.)

This is not 100% new; Google has previously sent out alerts via Search Console for changes in clicks and impressions. This is a variation of that.
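You don't have to wait for Google's emails to see this data; the same numbers are available on demand through the Search Console API. Here is a hedged sketch using Google's Python client and a service account (the webmasters v3 service and its searchanalytics().query() call are part of Google's documented API; the site URL and credentials file are placeholders):

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    # Service-account credentials; the account must be added as a user
    # on the Search Console property. The JSON file path is a placeholder.
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",
        scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
    )
    service = build("webmasters", "v3", credentials=creds)

    # Pull the top queries for one week of data.
    report = service.searchanalytics().query(
        siteUrl="https://example.com/",
        body={
            "startDate": "2019-07-01",
            "endDate": "2019-07-07",
            "dimensions": ["query"],
            "rowLimit": 10,
        },
    ).execute()

    # Each row carries clicks, impressions, CTR, and average position.
    for row in report.get("rows", []):
        print(row["keys"][0], row["clicks"], round(row["position"], 1))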

Here are more screenshots of this:

Dawn Anderson (@dawnieando): "Is this new @rustybrick?"

SEO Alive (@seo_alive): "Google Search Console might be testing sending out reports on changes in the performance of your most important keywords. We will publish an article on the blog shortly: https://seoalive.com/blog/ cc. @rustybrick @googlewmc"

Eli Schwartz (@5le): "Google Search Console now has push emails about query performance. Is this new? cc: @rustybrick"
 
Categorized in Search Engine

 [Source: This article was Published in searchenginejournal.com By Dave Davies - Uploaded by the Association Member: Clara Johnson]

Let’s begin by answering the obvious question:

What Is Universal Search?

There are a few definitions for universal search on the web, but I prefer hearing it from the horse’s mouth on things like this.

While Google hasn’t given a strict definition that I know of as to what universal search is from an SEO standpoint, they have used the following definition in their Search Appliance documentation:

“Universal search is the ability to search all content in an enterprise through a single search box. Although content sources might reside in different locations, such as on a corporate network, on a desktop, or on the World Wide Web, they appear in a single, integrated set of search results.”

Adapted for SEO and traditional search, we could easily turn it into:

“Universal search is the ability to search all content across multiple databases through a single search box. Although content sources might reside in different locations, such as a different index for specific types or formats of content, they appear in a single, integrated set of search results.”

What other databases are we talking about? Basically, the vertical indexes behind results like images, video, news, maps, and shopping.

On top of this, there are additional databases that information is drawn from (hotels, sports scores, calculators, weather, etc.) and additional databases with user-generated information to consider.

These range from reviews to related searches to traffic patterns to previous queries and device preferences.

Why Universal Search?

I remember a time, many years ago, when there were 10 blue links…


It was a crazy time of discovery. Discovering all the sites that didn’t meet your intent or your desired format, that is.

And then came Universal Search. It was announced in May of 2007 (by Marissa Mayer, if that gives it context) and rolled out just a couple of months after Google expanded the personalization of results.

The two were connected, and not just by being announced by the same person. They were connected in illustrating Google's continued push toward its mission statement:

“Our mission is to organize the world’s information and make it universally accessible and useful.”

Think about those 10 blue links and what they offered. Certainly, they offered scope of information not accessible at any point in time prior, but they also offered a problematic depth of uncertainty.

Black hats aside (and there were a lot more of them then), you clicked a link in hopes that you understood what was on the other side of that click, and we wrote titles and descriptions that hopefully described fully what we had to offer.

A search was a painful process; we just didn't know it because it was better than anything we'd had prior.

Enter Universal Search

Then there was Universal Search. Suddenly the guesswork was reduced.

Before we continue, let’s watch a few minutes of a video put out by Google shortly after Universal Search launched.

The video starts at the point where they’re describing what they were seeing in the eye tracking of search results and illustrates what universal search looked like at the time.

 

OK – notwithstanding that this was a core Google video discussing a major new Google feature, and that it has (at the time of writing) 4,277 views and two peculiar comments – this is an excellent look at the "why" of Universal Search, as well as at what it was at the time and how much, and how little, it's changed.

How Does It Present Itself?

We saw a lot of examples of Universal Search in my article on How Search Engines Display Search Results.

Whereas there we focused on the layout itself and where each section comes from, here we're discussing more the why and how of it.

At a root level and as we’ve all seen, Universal Search presents itself as sections on a webpage that stand apart from the 10 blue links. They are often, but not always, organically generated (though I suspect they are always organically driven).

This is to say, whether a content block exists would be handled on the organic search side, whereas what's contained in that content block may or may not include ads.

So, let’s compare then versus now, ignoring cosmetic changes and just looking at what the same result would look like with and without Universal Search by today’s SERP standards.

(Image: the same "what was the civil war" SERP shown side by side, with Universal Search sections on the left and, on the right, plain blue links only.)

This answers two questions in a single image.

It answers the key question of this section, “How does Universal Search present itself?”

This image also does a great job of answering the question, “Why?”

Imagine the various motivations I might have to enter the query [what was the civil war]. I may be:

  • A high school student doing an essay.
  • Someone who simply is not familiar with the historic event.
  • Looking for information on the war itself, or making the query as part of a larger dive into civil wars across nations or wars in general.
  • Someone who prefers articles.
  • Someone who prefers videos.
  • Just writing an unrelated SEO article and need a good example.

The possibilities are virtually endless.

If you look at the version on the right, which link would you click?

How about if you prefer video results?

The decision you make will take you longer than it likely does with Universal Search options.

And that’s the point.

The Universal Search structure makes decision making faster across a variety of intents, while still leaving the blue links (though not always 10 anymore) available for those looking for pages on the subject.

In fact, even if what you're looking for exists in an article, the simple presence of Universal Search results will help filter out the results you don't want, and it leaves SEO pros and website owners free to focus our articles on ranking in the traditional search results and our other content types and formats on the appropriate sections.

How Does Google Pick the Sections?

Let me begin this section by stating very clearly: this is a best guess.

As we’re all aware, Google’s systems are incredibly complicated. There may be more pieces than I am aware of, obviously.

There are two core areas I can think of that they would use for these adjustments.

Users

Now, before you say, “But Google says they don’t use user metrics to adjust search results!” let’s consider the specific wording that Google’s John Mueller used when responding to a question on user signals:

“… that’s something we look at across millions of different queries, and millions of different pages, and kind of see in general is this algorithm going the right way or is this algorithm going in the right way.

But for individual pages, I don’t think that’s something worth focusing on at all.”

So, they do use the data. They use it on their end, but not to rank individual pages.

What you can take from this, as it relates to Universal Search, is that Google will test different blocks of data for different types of queries to determine how users interact with them. It is very likely that Bing does something similar.

Almost certainly they pick locations for possible placements, set limits on the number of different result types/databases, and determine starting points (think: templates for specific query types) for their processes, and then simply let machine learning take over, running slight variances or testing layouts on pages generated for unknown queries, or queries where new associations may be attained.

For example, a spike in a query that ties to a sudden rise in news stories related to that query could trigger the news carousel being inserted into the search results, provided that past similar instances produced a positive engagement signal; the carousel would then remain as long as user engagement indicated it should.

Query Data

It is virtually a given that a search engine would use its own query data to determine which sections to insert into the SERPs.

If a query like [pizza] has suggested queries like:

(Image: Google's suggested searches for [pizza], dominated by restaurant-related queries.)

Since this implies that most such searchers are looking for restaurants, it makes sense that in a Universal Search structure the first organic result would not be a blue link but:

(Image: a local pack of nearby pizza restaurants at the top of the SERP.)
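You can sample this kind of query data yourself through Google's autocomplete endpoint, which is unofficial but has been stable for years. A small sketch (client=firefox requests a plain JSON response; since the endpoint is undocumented, treat the details as assumptions):

    import requests

    # Fetch Google's query suggestions for a seed term.
    # Unofficial/undocumented endpoint, widely used for keyword research.
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": "pizza"},
        timeout=10,
    )
    seed, suggestions = resp.json()[:2]
    print(f"Suggestions for {seed!r}:")
    for s in suggestions:
        print(" -", s)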

It is very much worth remembering that the goal of a search engine is to provide a single location where a user can access everything they are looking for.

At times this puts them in direct competition with themselves in some ways. Not that I think they mind losing traffic to another of their own properties.

Let’s take YouTube for example. Google’s systems will understand not just which YouTube videos are most popular but also which are watched through, when people eject, skip or close out, etc.

They can use this not just to understand which videos are likely to resonate on Google.com but also understand more deeply what supplemental content people are interested in when they search for more general queries.

I may search for [civil war], but that doesn’t mean I’m not also interested in the Battle at Antietam specifically.

So, I would suggest that these other databases do not simply shape the layouts illustrated in Universal Search; they can be, and likely are, used to connect topics and information together, and thus they impact the core search rankings themselves.

Takeaway

So, what does this all mean for you?

For one, you can use the machine learning systems of the search engines to assist in your content development strategies.

Sections you see appearing in Universal Search tell us a lot about the types and formats of content that users expect or engage with.

Also important is that devices and technology are changing rapidly. I suspect the idea of Universal Search is about to go through a dramatic transformation.

This is due in part to voice search, but I suspect it will have more to do with the push by Google to provide a solution rather than options.

A few well-placed filters could provide refinement that produces only a single result and many of these filters could be automatically applied based on known user preferences.

I’m not sure we’ll get to a single result in the next two to three years but I do suspect that we will see it for some queries and where the device lends itself to it.

If I query "weather", why would the results page not simply look like this:

(Image: a results page consisting of nothing but the weather answer box.)

In my eyes, this is the future of Universal Search.

Or, as I like to call it, search.

Categorized in Search Engine

[Source: This article was Published in zeenews.india.com - Uploaded by the Association Member: Dana W. Jimenez]

BBC based its investigation on three points – where exactly did it happen, when did it happen, and who is responsible for the atrocities.

A video emerged earlier this year showing armed men in uniform brutally killing a group of women and children, triggering an uproar across the globe. It was alleged that the video, released by Channel 4, showed the killing of civilians by Cameroon's army over alleged links with the dreaded terrorist group Boko Haram.

The Cameroon government denied the claim, saying that the video was not shot in the country and that the Army was being wrongly blamed for killing civilians. A statement was released by Cameroon Minister of Communication, Issa Tchiroma Bakary, to deny the claims.

“The video that is currently going around is nothing but an unfortunate attempt to distort actual facts and intoxicate the public. Its sincerity can be easily questioned,” Bakary had said in his statement. While the statement came in July 2018, the government a month later said that seven members of the military were facing investigation.

The BBC, however, decided to fact-check the video and the claims and counterclaims around it, and the media group did so with the help of Google Earth. It based its investigation on three points: where exactly did it happen, when did it happen, and who is responsible for the atrocities. The BBC released its findings in a video titled 'Cameroon: Anatomy of a Killing'.

Where:

The first 40 seconds of the video capture a mountain range with a distinctive profile. BBC journalists spent hours trying to match the range with the topography of northern Cameroon, but without the desired result. After a tip-off from a source, they studied the topography of one particular region and discovered that the range was near a village named Krawa Mafa, a few hundred meters from the Nigerian border. Other details in the video, such as houses and dirt tracks, were matched with the topography of the village and found to be the same.

When:

This was the trickiest part of the BBC journalists' investigation. Several bits and pieces were put together to identify the period in which the civilians were killed. A building was visible in the satellite imagery only until 2016, which suggested that the killings took place before then. Another building was seen to have appeared by March 2015. Then a path was traced that appeared only between January and July. The journalists also analyzed the shadows of the walking assailants, using the principle of a sundial. After the whole analysis, it was confirmed that the killings took place between March 20 and April 5, 2015.
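The sundial idea is simple trigonometry: the length of an object's shadow depends on the sun's elevation, which in turn depends on date, time, and location. A toy illustration of the geometry in Python (this covers only the shadow-to-elevation step; the BBC's actual verification would also require a solar-position model for the village's coordinates):

    import math

    def sun_elevation_from_shadow(object_height_m: float, shadow_length_m: float) -> float:
        """Infer the sun's elevation angle (in degrees) from a shadow."""
        return math.degrees(math.atan2(object_height_m, shadow_length_m))

    # A 1.7 m person casting a 2.5 m shadow puts the sun about 34 degrees
    # above the horizon. Comparing elevations measured from video frames
    # against computed solar positions for candidate dates brackets the
    # date range of the footage.
    print(f"{sun_elevation_from_shadow(1.7, 2.5):.1f} degrees")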

Who:

The statement released by Cameroonian Minister of Communication Issa Tchiroma Bakary was used for this purpose. For instance, the minister claimed in his statement that the weapons in the video were not the ones used by the Cameroonian army, but BBC analysis showed that one of the guns used was a Serbian-made Zastava M21, which is used by some sections of the country's military. The minister also claimed that the dress worn by the assailants was one used for operations in forests, and that the soldiers in the area in question wore desert-style uniforms. But Cameroonian soldiers in the region were seen in a 2015 Channel 4 video wearing forest-style camouflage, similar to that seen in the killing video.

The focus then shifted to a statement released by the government in August 2018, which named seven soldiers who had been arrested and were under investigation. Names mentioned in the video, such as Corporal Tchotcho, were matched with Facebook profiles, turning up a profile for Tchotcho Cyriaque Bityala, who was among the detainees named by the government. Two other soldiers named in the government list were identified in a similar manner.

The findings were shared by the BBC with the Cameroonian government. Responding to them, Bakary said, "Seven soldiers were arrested, disarmed. They are under investigation right now. I can confirm that all seven of them are in prison."

Categorized in Investigative Research