
While there doesn’t seem to be an end yet to the US-Huawei story, the latter has gone full force in preparing for a life without Google. It has been working on AppGallery, its alternative to the Google Play Store, and Huawei Mobile Services, its replacement for Google Play Services. One important piece that seemed to be missing is the all-important search, but Huawei hasn’t forgotten it. It is now testing the Huawei Search app, which could be both good news and bad news for the rest of the world.

XDA Developers says that testing is currently going on in the UAE, but they were able to load the app on a Huawei Mate 30 Pro to see what the deal is. It seems to be a basic search app: you put in a query and it gives you search results, including webpages, videos, news, and images. The app also gives you shortcuts to weather, sports, unit conversion, and a calculator. You can also see your search history, give feedback, and change app settings, and the app even supports the dark theme of EMUI 10 (Huawei's version of Android 10).

Huawei Search is operated by Aspiegel Limited, a Huawei subsidiary based in Ireland. But as to what search engine powers the app, that is less certain. Its results don’t seem to match those from Google, Bing, Yahoo, DuckDuckGo, Yandex, Ask, or AOL. Huawei may not be using a third-party search engine at all, and that’s where the bad news may lie. China has been known to control the information that comes out of its Internet, and despite disassociating itself from supposed close ties with the government, Huawei is still a Chinese company subject to Chinese laws.

Forbes reports that this is a “potential filter” that will still be serving content to hundreds of millions of users worldwide from a company based “in the most highly censored country on the planet”. That is a concern, as the Search app is a big part of the whole Huawei operating ecosystem and will serve both Chinese and non-Chinese customers. It is also one of the unintended consequences of the U.S. blacklisting the Chinese company: the potential for Huawei to “carve itself a dominant position” in this new alternative to the currently still-dominant Android/Google ecosystem.

In any case, it’s still early days for the Huawei Search app and the whole of Huawei Mobile Services. We might even see the U.S. backtrack on its blacklist. The question then would be whether Huawei would go back to Google’s loving arms or continue to pursue its own platform, which could eventually lead to the issues mentioned above.

[Source: This article was published in androidcommunity.com By Ida Torres - Uploaded by the Association Member: Issac Avila]

Categorized in Search Engine

Google published a new Search Console training video all about how to use the index coverage report.

Google’s Daniel Waisberg explains how to use Search Console to learn which pages have been crawled and indexed by Google, and how to deal with any problems found during that process.

First, the video gives an overview of the different components of the index coverage report and how to read the data included in them.

What’s Contained in the Index Coverage Report?

Search Console’s index coverage report provides a detailed look at all pages of a website that Google has either indexed or tried to index. The report also logs all errors Googlebot encountered when crawling a page.


The index coverage report is made up of the following components:

  • Errors: These are critical issues that prevent pages from being indexed. Errors could include pages with the ‘noindex’ directive, pages with a server error, or pages with a 404 error.
  • Valid with warnings: This section includes pages that may or may not be shown in search results depending on the issue. An example is an indexed page that’s blocked by robots.txt.
  • Valid: These are indexed pages that are eligible to be served in search results.
  • Excluded: These are pages that are intentionally not indexed and won’t be included in search results.
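As an aside, the ‘noindex’ signal behind many of those errors can come from either an HTTP response header (X-Robots-Tag) or a robots meta tag in the page itself. Here is a minimal Python sketch, not part of Search Console, that checks a fetched page for both signals (the helper names are my own):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.directives.append((a.get("content") or "").lower())

def has_noindex(html, headers=None):
    """Return True if the page carries a noindex signal in either
    the X-Robots-Tag response header or a robots meta tag."""
    headers = headers or {}
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)
```

For example, `has_noindex('<meta name="robots" content="noindex, nofollow">')` returns True, which is exactly the kind of page the Errors section would flag.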

On the summary page of the index coverage report you will also see a checkbox you can click to show impressions for indexed pages in Google search.


How Should I Use The Index Coverage Report?

It’s recommended that site owners start by checking the chart on the summary page to see whether the trend of valid pages is relatively steady. Some amount of fluctuation is normal here. If you’re aware of content being published or removed, you will see that reflected in the report.

Next, move on to reviewing the various error sections. You can quickly identify the most pressing issues because they’re sorted by severity. Start at the top of the list and work your way down.


Once you know what needs to be fixed you can either fix the issues yourself, if you feel comfortable doing so, or share the details with your developer who can make code changes to your website.

After an issue has been fixed you can click on “Validate Fix” and Google will validate the changes.


How Often Should I Check the Index Coverage Report?

It’s not necessary to check the index coverage report every day, Google says, because emails will be sent out whenever Search Console detects a new indexing error.

However, if an existing error gets worse, Google will not send out an email notification. So it’s necessary to check on the report at least once in a while to make sure nothing is going from bad to worse.

Those are the basics of the Search Console index coverage report. See Google’s full training video for more detail.

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Robert Hensonw]


This was a pretty busy week; we may have had a Google search algorithm update this week and maybe, just maybe, Forbes got hit hard by it. Google is probably going to revert the favicon and black ad label user interface; lots of tests are going on now. Bing hides the ad label as well, so it isn’t just Google. I posted a summary of everything you need to know about the Google featured snippet deduplication change, including that Google might be giving us performance data on them, images in featured snippets may change, and Google will move the right side featured snippet to the top; until then, it has stopped deduplicating the right side featured snippets. Google Search Console launched a new removals tool with a new set of features. Google may have issues indexing international pages. Google says they treat links in PDFs as nofollowed links, but that contradicts earlier statements. Google said schema markup will continue to get more complicated. Google said do not translate your image URLs. I shared a fun people-also-ask box that looks like an ad, but is not an ad. Google Assistant Actions do not give you a ranking boost. Google is still using Chrome 41 as the user agent when requesting resources, but not for rendering. Google Ads switched all campaign types to standard delivery. Google My Business suspensions are at an all-time high. Google Chrome is testing hiding URLs on the search results page. Google is hiring an SEO. I posted two vlogs this week, one with Thom Craver and one with Lisa Barone. Oh, and if you want to help sponsor those vlogs, go to patreon.com/barryschwartz. That was the search news this week at the Search Engine Roundtable.


 [Source: This article was published in seroundtable.com By Barry Schwartz - Uploaded by the Association Member: Olivia Russell]


Google is enhancing its Collections in Search feature, making it easy to revisit groups of similar pages.

Similar to the activity cards in search results, introduced last year, Google’s Collections feature allows users to manually create groups of like pages.

Now, using AI, Google will automatically group together similar pages in a collection. This feature is compatible with content related to activities like cooking, shopping, and hobbies.


This upgrade to collections will be useful in the event you want to go back and look at pages that weren’t manually saved. Mona Vajolahi, a Google Search Product Manager, states in an announcement:

“Remember that chicken parmesan recipe you found online last week? Or that rain jacket you discovered when you were researching camping gear? Sometimes when you find something on Search, you’re not quite ready to take the next step, like cooking a meal or making a purchase. And if you’re like me, you might not save every page you want to revisit later.”

These automatically generated collections can be saved to keep forever, or disregarded if not useful. They can be accessed any time from the Collections tab in the Google app, or through the Google.com side menu in a mobile browser.

Once a collection is saved, Google can help users discover even more similar pages via the “Find More” button. Google is also adding a collaboration feature that allows users to share collections and work on creating them with other people.

Auto-generated collections will start to appear for US English users this week. The ability to see related content will launch in the coming weeks.

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Logan Hochstetler]


It’s not paid inclusion, but it is paid occlusion

Happy Friday to you! I have been reflecting a bit on the controversy du jour: Google’s redesigned search results. Google is trying to foreground sourcing and URLs, but in the process it made its results look more like ads, or vice versa. Bottom line: Google’s ads just look like search results now.

I’m thinking about it because I have to admit that I don’t personally hate the new favicon-plus-URL structure. But I think that might be because I am not a normal consumer of web content. I’ve been on the web since the late ‘90s and I parse information out of URLs kind of without thinking about it. (In fact, the relative decline of valuable information getting encoded into the URL is a thing that makes me sad.)

I admit that I am not a normal user. I set up custom Chrome searches and export them to my other browsers. I know what SERP means and the term kind of slips out in regular conversation sometimes. I have opinions about AMP and its URL and caching structure. I’m a weirdo.

As that weirdo, Google’s design makes perfect sense to me, and it’s possible it might do the same for regular folk. The new layout for search results is ugly at first glance — but then, Google was always ugly until relatively recently. I very quickly learned to unconsciously take in the information from the favicon and URL-esque info at the top without it really distracting me.

...Which is basically the problem. Google’s using that same design language to identify its ads instead of much more obvious, visually distinct methods. It’s consistent, I guess, but it also feels deceptive.

Recode’s Peter Kafka recently interviewed Buzzfeed CEO Jonah Peretti, and Peretti said something really insightful: what if Google’s ads really aren’t that good? What if Google is just taking credit for clicks on ads just because people would have been searching for that stuff anyway? I’ve been thinking about it all day: what if Google ads actually aren’t that effective and the only reason they make so much is billions of people use Google?

The pressure to make them more effective would be fairly strong, then, wouldn’t it? And it would get increasingly hard to resist that pressure over time.

I am old enough to remember using the search engines before Google. I didn’t know how bad their search technology was compared to what was to come, but I did have to bounce between several of them to find what I wanted. Knowing what was a good search for WebCrawler and what was good for Yahoo was one of my Power User Of The Internet skills.

So when Google hit, I didn’t realize how powerful and good the PageRank technology was right away. What I noticed right away is that I could trust the search results to be “organic” instead of paid and that there were no dark patterns tricking me into clicking on an ad.

One of the reasons Google won search in the first place with old people like me was that in addition to its superior technology, it drew a harder line against allowing paid advertisements into its search results than its competitors.

With other search engines, there was the problem of “paid inclusion,” which is the rare business practice that does exactly what the phrase means. You never really knew if what you were seeing was the result of a web-crawling bot or a business deal.

This new ad layout doesn’t cross that line, but it’s definitely problematic and it definitely reduces my trust in Google’s results. It’s not so much paid inclusion as paid occlusion.

Today, I still trust Google to not allow business dealings to affect the rankings of its organic results, but how much does that matter if most people can’t visually tell the difference at first glance? And how much does that matter when certain sections of Google, like hotels and flights, do use paid inclusion? And how much does that matter when business dealings very likely do affect the outcome of what you get when you use the next generation of search, the Google Assistant?

And most of all: if Google is willing to visually muddle ads, how long until its users lose trust in the algorithm itself? With this change, Google is becoming what it once sought to overcome: AltaVista.


[Source: This article was published in theverge.com By Barry Schwartz - Uploaded by the Association Member: James Gill]


Google has started testing a feature that will display the search query in the Chrome address bar rather than the actual page's URL when performing searches on Google.

This experimental feature is called "Query in Omnibox" and has been available as a flag in Google Chrome since Chrome 71, but is disabled by default.

In a test being conducted by Google, this feature is being enabled for some users and will cause the search keyword to be displayed in the browser's address bar, or Omnibox, instead of the URL that you normally see. 

Query in Omnibox enabled

In BleepingComputer's tests, this feature only affects searches on Google and does not affect any other search engine.

When this feature is not enabled, Google will display the URL of the search in the Omnibox as you would expect. This allows you to not only properly identify the site you are on, but also to easily share the search with another user.

Query in Omnibox disabled

For example, to see the above search, you can just copy the https://www.google.com/search?q=test link from the address bar and share it with someone else.

With the Query in Omnibox feature enabled, though, if you copy the search keyword it will just copy that keyword into the clipboard rather than the site's URL. If you want to access the URL, you need to right-click on the keyword and select 'Show URL'.
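The relationship between the search keyword and the underlying URL is easy to see with a few lines of Python. This sketch simply pulls the q parameter back out of a search URL, and rebuilds the full URL that Query in Omnibox hides:

```python
from urllib.parse import urlsplit, parse_qs, urlencode, urlunsplit

def query_from_search_url(url):
    """Extract the search keyword from a Google search URL."""
    qs = parse_qs(urlsplit(url).query)
    return qs.get("q", [None])[0]

def search_url_from_query(query):
    """Rebuild the full search URL for a given keyword."""
    return urlunsplit(("https", "www.google.com", "/search",
                       urlencode({"q": query}), ""))
```

For example, `query_from_search_url("https://www.google.com/search?q=test")` returns `"test"`, which is exactly the string Chrome would show in the Omnibox with the feature enabled.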


Google is eroding the URL

Google has made it clear that they do not think that the URL is very useful to users.

In a Wired interview, Adrienne Porter Felt, Chrome's engineering manager, explained that Google wants to change how URLs are displayed in Chrome because people have a hard time understanding them.

"People have a really hard time understanding URLs. They’re hard to read, it’s hard to know which part of them is supposed to be trusted, and in general I don’t think URLs are working as a good way to convey site identity. So we want to move toward a place where web identity is understandable by everyone—they know who they’re talking to when they’re using a website and they can reason about whether they can trust them. But this will mean big changes in how and when Chrome displays URLs. We want to challenge how URLs should be displayed and question it as we’re figuring out the right way to convey identity."

Instead of removing them in one fell swoop, Google is gradually eroding the various elements of a URL until there is nothing left.

We saw the beginning of this transition when Google Chrome 79 was released and it stopped displaying the www subdomain in URLs.

WWW subdomain removed from URL

In this next phase, they are testing the removal of URLs altogether from Google searches, which, as everyone knows, is by far the most used web search engine.

What is next? The removal of URLs on other search engines or only showing a page title when browsing a web site?

All these questions remain to be answered, but could it be that Google is not wrong about URLs?

I was opposed to the removal of the trivial WWW subdomain from URLs for a variety of reasons, and now I don't even notice it's missing.

BleepingComputer has reached out to Google with questions about this test, but had not heard back as of yet.

 [This article is originally published in bleepingcomputer.com By Lawrence Abrams - Uploaded by AIRS Member: Dana W. Jimenez]


Earlier today, Google announced that it would be redesigning the redesign of its search results in response to withering criticism from politicians, consumers, and the press over the way search results displays were made to look like ads.

Google makes money when users of its search service click on ads. It doesn’t make money when people click on an unpaid search result. Making ads look like search results makes Google more money.

It’s also a pretty evil (or at least unethical) business decision by a company whose mantra was “Don’t be evil” (although it gave that up in 2018).

 

Users began noticing the changes to search results last week, and at least one user flagged the changes earlier this week.

There's something strange about the recent design change to google search results, favicons and extra header text: they all look like ads, which is perhaps the point?

Google responded with a bit of doublespeak from its corporate account about how the redesign was intended to achieve the opposite effect of what it was actually doing.

“Last year, our search results on mobile gained a new look. That’s now rolling out to desktop results this week, presenting site domain names and brand icons prominently, along with a bolded ‘Ad’ label for ads,” the company wrote.

Senator Mark Warner (D-VA) took a break from impeachment hearings to talk to The Washington Post about just how bad the new search redesign was.

“We’ve seen multiple instances over the last few years where Google has made paid advertisements ever more indistinguishable from organic search results,” Warner told the Post. “This is yet another example of a platform exploiting its bottleneck power for commercial gain, to the detriment of both consumers and also small businesses.”

Google’s changes to its search results happened despite the fact that the company is already being investigated by every state in the country for antitrust violations.

For Google, the rationale is simple. The company’s advertising revenues aren’t growing the way they used to, and the company is looking at a slowdown in its core business. To try and juice the numbers, dark patterns present an attractive way forward.

Indeed, Google’s using the same tricks that it once battled to become the premier search service in the U.S. When the company first launched its search service, ads were clearly demarcated and separated from actual search results returned by Google’s algorithm. Over time, the separation between what was an ad and what wasn’t became increasingly blurred.

 
Color fade: A history of Google ad labeling in search results http://selnd.com/2adRCdU
“Search results were near-instant and they were just a page of links and summaries – perfection with nothing to add or take away,” user experience expert Harry Brignull (and founder of the watchdog website darkpatterns.org) said of the original Google search results in an interview with TechCrunch.

“The back-propagation algorithm they introduced had never been used to index the web before, and it instantly left the competition in the dust. It was proof that engineers could disrupt the rules of the web without needing any suit-wearing executives. Strip out all the crap. Do one thing and do it well.”

“As Google’s ambitions changed, the tinted box started to fade. It’s completely gone now,” Brignull added.

In its latest statement, the company acknowledged that its experiment might have gone too far and noted that it will “experiment further” on how it displays results.

 [Source: This article was published in techcrunch.com By Jonathan Shieber - Uploaded by the Association Member: Joshua Simon]


Ever had to search for something on Google, but you’re not exactly sure what it is, so you just use some language that vaguely implies it? Google’s about to make that a whole lot easier.

Google announced today it’s rolling out a new machine learning-based language understanding technique called Bidirectional Encoder Representations from Transformers, or BERT. BERT helps decipher your search queries based on the context of the language used, rather than individual words. According to Google, “when it comes to ranking results, BERT will help Search better understand one in 10 searches in the U.S. in English.”

Most of us know that Google usually responds to words, rather than to phrases — and Google’s aware of it, too. In the announcement, Pandu Nayak, Google’s VP of search, called this kind of searching “keyword-ese,” or “typing strings of words that they think we’ll understand, but aren’t actually how they’d naturally ask a question.” It’s amusing to see these kinds of searches — heck, Wired has made a whole cottage industry out of celebrities reacting to these keyword-ese queries in their “Autocomplete” video series — but Nayak’s correct that this is not how most of us would naturally ask a question.

As you might expect, this subtle change might make some pretty big waves for potential searchers. Nayak said this “[represents] the biggest leap forward in the past five years, and one of the biggest leaps forward in the history of Search.” Google offered several examples of this in action, such as “Do estheticians stand a lot at work,” which apparently returned far more accurate search results.

I’m not sure if this is something most of us will notice — heck, I probably wouldn’t have noticed if I hadn’t read Google’s announcement, but it’ll sure make our lives a bit easier. The only reason I can see it not having a huge impact at first is that we’re now so used to keyword-ese, which is in some cases more economical to type. For example, I can search “What movie did William Powell and Jean Harlow star in together?” and get the correct result (Libeled Lady; not sure if that’s BERT’s doing or not), but I can also search “William Powell Jean Harlow movie” and get the exact same result.

BERT will only be applied to English-based searches in the US, but Google is apparently hoping to roll this out to more countries soon.

[Source: This article was published in thenextweb.com By RACHEL KASER - Uploaded by the Association Member: Dorothy Allen]


Google confirmed an update affecting local search results has now fully rolled out, a process that began in early November.


In what’s been called the November 2019 Local Search Update, Google is now applying neural matching to local search results. To explain neural matching, Google points to a tweet published earlier this year that describes it as a super-synonym system.

That means neural matching allows Google to better understand the meaning behind queries and match them to the most relevant local businesses – even if the keywords in the query are not specifically included in the business name and description.

“The use of neural matching means that Google can do a better job going beyond the exact words in business name or description to understand conceptually how it might be related to the words searchers use and their intents.”

In other words, some business listings might now be surfaced for queries they wouldn’t have shown up for prior to this update. Hopefully, that proves to be a good thing.
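Google has not published how neural matching works, but the “super-synonym” idea can be illustrated with a toy sketch: represent queries and business descriptions as vectors and rank candidates by cosine similarity, so conceptually related wording scores well even with no shared keywords. The tiny hand-made vectors below are purely illustrative and bear no relation to Google’s actual models:

```python
import math

# Toy 3-dimensional "embeddings" (made up for illustration):
# the dimensions roughly encode fitness / food / repair topics.
VECTORS = {
    "gym": (0.9, 0.1, 0.0),
    "fitness center": (0.85, 0.05, 0.1),
    "pizza restaurant": (0.05, 0.95, 0.0),
    "bike shop": (0.2, 0.0, 0.9),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def best_match(query, candidates):
    """Return the candidate whose vector is closest to the query's."""
    return max(candidates, key=lambda c: cosine(VECTORS[query], VECTORS[c]))
```

Here `best_match("gym", ["fitness center", "pizza restaurant", "bike shop"])` picks "fitness center" even though the query shares no words with the business name, which is the kind of conceptual matching the update describes.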

Google notes that, although the update has finished rolling out, local search results as they are displayed now are not set in stone by any means. Like regular web searches, results can change over time.

Google has not stated to what extent local search results will be impacted by this update, though it was confirmed this is a global launch across all countries and languages.

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Jasper Solander]


Google is the search engine that most of us know and use, so much so that the word Google has become synonymous with search. As of September 2019, the search engine giant had captured 92.96% of the market. That’s why it has become of utmost importance for businesses to rank better in Google search results if they want to be noticed. That’s where SERP, or “Search Engine Results Page”, scraping can come in handy. Whenever a user searches for something on Google, they get a SERP consisting of paid Google Ads results, featured snippets, organic results, videos, product listings, and the like. Tracking these SERP results with a service like Serpstack is necessary for businesses that either want to rank their own products or help other businesses do the same.

Manually tracking SERP results is next to impossible, as they vary widely depending on the search query, the origin of the query, and a plethora of other factors. Also, the number of listings in a single search query is so high that manual tracking makes no sense at all. Serpstack, on the other hand, is an automated Google Search Results API that can scrape real-time, accurate SERP data and present it in an easy-to-consume format. In this article, we take a brief look at Serpstack to see what it brings to the table and how it can help you track SERP data for the keywords and queries that matter to your business.

Serpstack REST API for SERP Data: What Does It Bring?

Serpstack’s JSON REST API for SERP data is fast and reliable and always gives you real-time, accurate search results data. The service is trusted by some of the largest brands in the world. The best part about Serpstack, apart from its reliable data, is that it can scrape Google search results at scale. Whether you need one thousand or one million results, Serpstack can handle it with ease. Serpstack also brings built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs, so you as a user don’t have to worry about anything.


If you decide to give Serpstack REST API a chance, here are the main features that you can expect from this service:

  • Serpstack is scalable and queueless thanks to its powerful cloud infrastructure, which can withstand high-volume API requests without the need for a queue.
  • The search queries are highly customizable. You can tailor your queries with a series of options including location, language, device, and more, so you get exactly the data you need.
  • Built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs.
  • Integration is simple. You can start scraping SERP pages at scale within a few minutes of logging into the service.
  • Serpstack features bank-grade 256-bit SSL encryption for all its data streams, which means your data is always protected.
  • An easy-to-use REST API responding in JSON or CSV, compatible with any programming language.
  • With Serpstack, you get super-fast scraping speeds. All API requests sent to Serpstack are processed in a matter of milliseconds.
  • Clear API documentation that shows you exactly how to use the service. This makes it beginner-friendly; you can get started even if you have never used a SERP scraping service before.
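To make the integration concrete, here is a minimal Python sketch of how a request to Serpstack's search endpoint could be assembled. The access key is a placeholder, and the exact set of supported options should be checked against Serpstack's own documentation; only the URL construction is exercised here, since actually sending the request requires an account:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen  # only needed if you actually send the request

SERPSTACK_ENDPOINT = "http://api.serpstack.com/search"

def build_search_request(access_key, query, **options):
    """Build a Serpstack search API URL.

    `options` may include parameters such as location, device,
    or output format, per Serpstack's documentation.
    """
    params = {"access_key": access_key, "query": query}
    params.update(options)
    return SERPSTACK_ENDPOINT + "?" + urlencode(params)

def fetch_serp(access_key, query, **options):
    """Send the request and decode the JSON response (needs a valid key)."""
    with urlopen(build_search_request(access_key, query, **options)) as resp:
        return json.load(resp)
```

With a real access key, `fetch_serp("YOUR_KEY", "coffee shops", device="mobile")` would return the parsed SERP data as a Python dictionary.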

Looking at the feature list above, I hope you can understand why Serpstack is one of the best, if not the best, SERP scraping services on the market. I am especially impressed by its scalability, incredibly fast speed, and built-in privacy and security protocols. However, there’s one more thing we have not yet discussed that pushes it to the top spot for me: its pricing. That’s what we’ll cover in the next section.

Pricing and Availability

Serpstack’s pricing is what makes it accessible to individuals and to small and large businesses alike. It offers a capable free version that should serve the needs of most individuals and even smaller businesses. If you are operating a larger business that requires more, there are various pricing plans to choose from depending on your requirements. As for the free plan, the best part is that it’s free forever and there are no hidden charges. The free version gets you 100 searches/month with access to global locations, proxy networks, and all the main features. The only big missing feature is HTTPS encryption.


Once you are ready to pay, you can start with the Basic plan, which costs $29.99/month ($23.99/month if billed annually). In this plan, you get 5,000 searches/month along with all the features missing from the free plan. This plan should be enough for most small to medium-sized businesses. If you require more, there’s a Business plan at $99.99/month ($79.99/month if billed annually) which gets you 20,000 searches, and a Business Pro plan at $199.99/month ($159.99/month if billed annually) which gets you 50,000 searches per month. There’s also custom pricing for companies that require a tailored pricing structure.

Serpstack Makes Google Search Results Scraping Accessible

SERP scraping is important if you want to compete in today’s online world. Seeing which queries fetch which results is an important step in identifying your competitors. Once you know them, you can devise an action plan to compete with them. Without SERP data, your business is at a big disadvantage online. So use Serpstack to scrape SERP data and build a successful online business.

[Source: This article was published in beebom.com By Partner Content - Uploaded by the Association Member: Dorothy Allen]

