[This article is originally published in searchenginejournal.com written by Matt Southern - Uploaded by AIRS Member: Jeremy Frink]

Google published a 30-page white paper with details about how the company fights disinformation in Search, News, and YouTube.

Here is a summary of key takeaways from the white paper.

What is Disinformation?

Everyone has different perspectives on what is considered disinformation, or “fake news.”

Google says it becomes objectively problematic to users when people make deliberate, malicious attempts to deceive others.

“We refer to these deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web as ‘disinformation.’”

So that is what the white paper means by the term “disinformation.”

How Does Google Fight Disinformation?

Google admits it’s challenging to fight disinformation because it’s near-impossible to determine the intent behind a piece of content.

The company has designed a framework for tackling this challenge, which comprises the following three strategies.

1. Make content count

Information is organized by ranking algorithms, which are geared toward surfacing useful content and not fostering ideological viewpoints.

2. Counteract malicious actors

Algorithms alone cannot verify the accuracy of a piece of content. So Google has invested in systems that can reduce spammy behaviors
at scale. It also relies on human reviews.

3. Give users more context

Google provides more context to users through mechanisms such as:

  • Knowledge panels
  • Fact-check labels
  • “Full Coverage” function in Google News
  • “Breaking News” panels on YouTube
  • “Why this ad” labels on Google Ads
  • Feedback buttons in Search, YouTube, and advertising products

Fighting Disinformation in Google Search & Google News

As SEOs, we know Google uses ranking algorithms and human evaluators to organize search results.

Google’s white paper explains this in detail for those who may not be familiar with how search works.

Google notes that Search and News share the same defenses against spam, but they do not employ the same ranking systems and content policies.

For example, Google Search does not remove content except in very limited circumstances, whereas Google News is more restrictive.

Contrary to popular belief, Google says, there is very little personalization in search results based on users’ interests or search history.

Fighting Disinformation in Google Ads

Google looks for and takes action against attempts to circumvent its advertising policies.

Policies to tackle disinformation on Google’s advertising platforms are focused on the following types of behavior:

  • Scraped or unoriginal content: Google does not allow ads for pages with insufficient original content, or pages that offer little to no value.
  • Misrepresentation: Google does not allow ads that intend to deceive users by excluding relevant information or giving misleading information.
  • Inappropriate content: Ads are not allowed for shocking, dangerous, derogatory, or violent content.
  • Certain types of political content: Ads for foreign influence operations are removed and the advertisers’ accounts are terminated.
  • Election integrity: Additional verification is required for anyone who wants to purchase an election ad on Google in the US.

Fighting Disinformation on YouTube

Google has strict policies to keep content on YouTube unless it is in direct violation of its community guidelines.

The company is more selective of content when it comes to YouTube’s recommendation system.

Google aims to recommend quality content on YouTube while less frequently recommending content that may come close to, but not quite, violating the community guidelines.

Content that could misinform users in harmful ways, or low-quality content that may result in a poor experience for users (like clickbait), is also recommended less frequently.

More Information

For more information about how Google fights disinformation across its properties, download the full PDF here.


[This article is originally published in searchenginejournal.com written by Roger Montti - Uploaded by AIRS Member: Anthony Frank]

Ahrefs CEO Dmitry Gerasimenko announced a plan to create a search engine that supports content creators and protects users’ privacy. Dmitry laid out his proposal for a more free and open web, one that rewards content creators directly from search revenue with a 90/10 split in favor of publishers.

The Goal for the New Search Engine

Dmitry seeks to correct several trends at Google that he feels are bad for users and publishers. The two problems he seeks to solve are privacy and the monetization crisis felt by publishers big and small.

1. Believes Google is Hoarding Site Visitors

Dmitry tweeted that Google is increasingly keeping site visitors to itself, resulting in less traffic to the content creators.

“Google is showing scraped content on search results page more and more so that you don’t even need to visit a website in many cases, which reduces content authors’ opportunity to monetize.”

2. Seeks to Pry the Web from Privatized Access and Control

Gatekeepers to web content (such as Google and Facebook) exercise control over what kinds of content are allowed to reach people. The gatekeepers shape how content is produced and monetized. He seeks to wrest the monetization incentive away from the gatekeepers and put it back into the hands of publishers, to encourage more innovation and better content.

“Naturally such a vast resource, especially free, attracts countless efforts to tap into it, privatize and control access, each player pulling away their part, tearing holes in the tender fabric of this unique phenomena.”

3. Believes Google’s Model is Unfair

Dmitry noted that Google’s business model is unfair to content creators. If search revenue were shared, sites like Wikipedia wouldn’t have to go begging for money.

He then described how his search engine would benefit content publishers and users:

“Remember that banner on Wikipedia asking for donation every year? Wikipedia would probably get few billions from its content in profit share model. And could pay people who polish articles a decent salary.”

4. States that a Search Engine Should Encourage Publishers and Innovation

Dmitry stated that a search engine’s job of imposing structure on the chaos of the web should be one that encourages the growth of quality content, like a support that holds a vine up, allowing it to consume more sunlight and grow.

“…structure wielded upon chaos should not be rigid and containing as a glass box around a venomous serpent, but rather supporting and spreading as a scaffolding for the vine, allowing it to flourish and grow new exciting fruits for humanity to grok and cherish.

“For chaos needs structure to not get torn apart by its own internal forces, and structure needs chaos as a sampling pool of ideas to keep evolution rolling.”

Reaction to Announcement

The reaction on Twitter was positive.

Russ Jones of Moz tweeted:

[Embedded tweet from Russ Jones]

Several industry leaders generously offered their opinions.

Jon Henshaw

Jon Henshaw (@henshaw) is a Senior SEO Analyst at CBSi (CBS, GameSpot, and Metacritic) and founder of Coywolf.marketing, a digital marketing resource. He offered this assessment:

“I appreciate the sentiment and reasons for why Dmitry wants to build a search engine that competes with Google. A potential flaw in the entire plan has to do with searchers themselves.

Giving 90% of profit to content creators does not motivate the other 99% of searchers that are just looking for relevant answers quickly. Even if you were to offer incentives to the average searcher, it wouldn’t work. Bing and other search engines have tried that over the past several years, and they have all failed.

The only thing that will compete with Google is a search engine that provides better results than Google. I would not bet my money on Ahrefs being able to do what nobody else in the industry has been able to do thus far.”

Ryan Jones

Ryan Jones (@RyanJones) is a search marketer who also publishes WTFSEO.com. He said:

“This sounds like an engine focused on websites not users. So why would users use it?

There is a massive incentive to spam here, and it will be tough to control when the focus is on the spammer not the user.

It’s great for publishers, but without a user-centric focus or better user experience than Google, the philanthropy won’t be enough to get people to switch.”

Tony Wright

Tony Wright (@tonynwright) of search marketing agency WrightIMC shared a similar concern about getting users on board. An enthusiastic user base is what makes any online venture succeed.

“It’s an interesting idea, especially in light of the passage of Article 13 in the EU yesterday.

However, I think that without proper capitalization, it’s most likely to be a failed effort. This isn’t the early 2000’s.

The results will have to be as good or better than Google to gain traction, and even then, getting enough traction to make it economically feasible will be a giant hurdle.

I like the idea of compensating publishers, but I think policing the scammers on a platform like this will most likely be the biggest cost – even bigger than infrastructure.

It’s certainly an ambitious play, and I’ll be rooting for it. But based on just the tweets, it seems like it may be a bit too ambitious without significant capitalization.”

Announcement Gives Voice to Complaints About Google

The announcement echoes complaints by publishers who feel they are struggling. The news industry has been in crisis mode for over a decade trying to find a way to monetize digital content consumption. AdSense publishers have been complaining for years of dwindling earnings.

Estimates say that Google earns $16.5 million per hour from search advertising. When publishers ask how to improve earnings and traffic, Google’s encouragement to “be awesome” has increasingly acquired a tone of “Let them eat cake.”

A perception has set in that the entire online search ecosystem is struggling except for Google.
The desire for a new search engine has been around for several years. This is why DuckDuckGo has been received so favorably by the search marketing community. This announcement gives voice to long-simmering complaints about Google.

The reaction on Twitter was almost cathartic and generally enthusiastic because of the longstanding perception that Google is not adequately supporting the content creators upon which Google earns billions.

Will this New Search Engine Happen?

Whether this search engine lifts off remains to be seen. The announcement, however, does give voice to many complaints about Google.

No release date has been announced. The scale of this project is huge. It’s almost the online equivalent of going to the moon.

 

[This article is originally published in seroundtable.com written by Barry Schwartz - Uploaded by AIRS Member: Jason Bourne]

John Mueller from Google explained on Twitter the difference between the lastmod timestamp in an XML sitemap and the date on a web page. John said the sitemap lastmod is when the page as a whole was last changed, which is what matters for crawling/indexing. The date on the page is the date to be associated with the primary content on the page.

John Mueller first said, "A page can change without its primary content changing." He said he doesn't think "crawling needs to be synced to the date associated with the content." The example he gave: "site redesigns or site moves are pretty clearly disconnected from the content date."
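To make the distinction concrete, here is a minimal sketch (in Python, not any official Google tooling) of a sitemap entry whose lastmod reflects the last change to the page as a whole, even while the visible article date tied to the primary content stays put; the URL and dates are hypothetical.

```python
# Minimal sketch, not official Google tooling: a sitemap <url> entry whose
# <lastmod> records the last change to the page as a whole (template tweaks,
# redesigns, moves), independent of the article date shown in the content.
from datetime import date
from xml.sax.saxutils import escape

def sitemap_entry(url: str, last_modified: date) -> str:
    """Render one <url> element; lastmod = last change to the whole page."""
    return (
        "<url>"
        f"<loc>{escape(url)}</loc>"
        f"<lastmod>{last_modified.isoformat()}</lastmod>"
        "</url>"
    )

# A hypothetical redesign touched the template on 2019-03-01, so lastmod
# moves even though the on-page date of the primary content is unchanged.
print(sitemap_entry("https://example.com/article", date(2019, 3, 1)))
```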

He then added this tweet:

[Embedded tweet from John Mueller]

So there you have it.

Forum discussion at Twitter.


[This article is originally published in thenextweb.com written by Abhimanyu Ghoshal - Uploaded by AIRS Member: Carol R. Venuti]

The European Union is inching closer to enacting sweeping copyright legislation that would require platforms like Google and Facebook to pay publishers for the privilege of displaying their content to users, as well as to monitor copyright infringement by users on the sites and services they manage.

That’s set to open a Pandora’s Box of problems that could completely derail your internet experience because it’d essentially disallow platforms from displaying content from other sources. In a screenshot shared with Search Engine Land, Google illustrated how this might play out in its search results for news articles:

[Screenshot: an example of what Google’s search results for news might look like if the EU goes ahead with its copyright directive]

As you can see, the page looks empty, because it’s been stripped of all copyrighted content – headlines, summaries and images from articles from various publishers.

Google almost certainly won’t display unusable results like these, but it will probably only feature content from publishers it’s cut deals with (and it’s safe to assume that’s easier for larger companies than small ones).

That would reduce the number of sources of information you’ll be able to discover through the search engine, and it’ll likely lead to a drop in traffic for media outlets. It’s a lose-lose situation, and it’s baffling that EU lawmakers don’t see this as a problem – possibly because they’re fixated on how this ‘solution’ could theoretically benefit content creators and copyright holders by ruling that they must be paid for their output.

It isn’t yet clear when the new copyright directive will come into play – there are numerous processes involved that could take until 2021 before it’s implemented in EU countries’ national laws. Hopefully, the union’s legislators will see sense well before that and put a stop to this madness.

Update: We’ve clarified in our headline that this is Google’s opinion of how its search service will be affected by the upcoming EU copyright directive; it isn’t yet clear how it will eventually be implemented.


[This article is originally published in blogs.scientificamerican.com written by Daniel M. Russell and Mario Callegaro - Uploaded by AIRS Member: Rene Meyer] 

Researchers who study how we use search engines share common mistakes, misperceptions, and advice

In a cheery, sunshine-filled fourth-grade classroom in California, the teacher explained the assignment: write a short report about the history of the Belgian Congo at the end of the 19th century, when Belgium colonized this region of Africa. One of us (Russell) was there to help the students with their online research methods.

I watched in dismay as a young student slowly typed her query into a smartphone. This was not going to end well. She was trying to find out which city was the capital of the Belgian Congo during this time period. She reasonably searched [ capital Belgian Congo ] and in less than a second, she discovered that the capital of the Democratic Republic of Congo is Kinshasa, a port town on the Congo River. She happily copied the answer into her worksheet.

But the student did not realize that the Democratic Republic of Congo is a completely different country than the Belgian Congo, which used to occupy the same area. The capital of that former country was Boma until 1926 when it was moved to Léopoldville (which was later renamed Kinshasa). Knowing which city was the capital during which time period is complicated in the Congo, so I was not terribly surprised by the girl’s mistake.

The deep problem here is that she blindly accepted the answer offered by the search engine as correct. She did not realize that there is a deeper history here.

We Google researchers know this is what many students do—they enter the first query that pops into their heads and run with the answer. Double-checking and going deeper are skills that come only with a great deal of practice—and perhaps a bunch of answers marked wrong on important exams. Students often do not have a great deal of background knowledge to flag a result as potentially incorrect, so they are especially susceptible to misguided search results like this.

In fact, a 2016 report by Stanford University education researchers showed that most students are woefully unprepared to assess content they find on the web. For instance, the scientists found that 80 percent of students at U.S. universities are not able to determine if a given web site contains credible information. And it is not just students; many adults share these difficulties.

If she had clicked through to the linked page, the girl probably would have started reading about the history of the Belgian Congo, and found out that it has had a few hundred years of wars, corruption, changes in rulers and shifts in governance. The name of the country changed at least six times in a century, but she never realized that because she only read the answer presented on the search engine results page.

Asking a question of a search engine is something people do several billion times each day. It is the way we find the phone number of the local pharmacy, check on sports scores, read the latest scholarly papers, look for news articles, find pieces of code, and shop. And although searchers look for true answers to their questions, the search engine returns results that are attuned to the query, rather than some external sense of what is true or not. So a search for proof of wrongdoing by a political candidate can return sites that purport to have this information, whether or not the sites or the information are credible. You really do get what you search for.

In many ways, search engines make our metacognitive skills come to the foreground. It is easy to do a search that plays into your confirmation bias—your tendency to think new information supports views you already hold. So good searchers actively seek out information that may conflict with their preconceived notions. They look for secondary sources of support, doing a second or third query to gain other perspectives on their topic. They are constantly aware of what their cognitive biases are, and greet whatever responses they receive from a search engine with healthy skepticism.

For the vast majority of us, most searches are successful. Search engines are powerful tools that can be incredibly helpful, but they also require a bit of understanding to find the information you are actually seeking. Small changes in how you search can go a long way toward finding better answers.

The Limits of Search

It is not surprising or uncommon that a short query may not accurately reflect what a searcher really wants to know. What is actually remarkable is how often a simple, brief query like [ nets ] or [ giants ] will give the right results. After all, both of those words have multiple meanings, and a search engine might conclude that searchers were looking for information on tools to catch butterflies, in the first case, or larger-than-life people in the second. Yet most users who type those words are seeking basketball- and football-related sites, and the first search results for those terms provide just that. Even the difference between a query like [ the who ] versus [ a who ] is striking. The first set of results is about a classic English rock band, whereas the second query returns references to a popular Dr. Seuss book.

But search engines sometimes seem to give the illusion that you can ask anything about anything and get the right answer. Just like the student in that example, however, most searchers overestimate the accuracy of search engines and their own searching skills. In fact, when Americans were asked to self-rate their searching ability by the Pew Research Center in 2012, 56 percent rated themselves as very confident in their ability to use a search engine to answer a question.

Not surprisingly, the highest confidence scores were for searchers with some college degrees (64 percent were “very confident”—by contrast, 45 percent of those who did not have a college degree described themselves that way). Age affects this judgment as well, with 64 percent of those under 50 describing themselves as “very confident,” as opposed to only 40 percent older than 50. When talking about how successful they are in their searches, 29 percent reported that they can always find what they are looking for, and 62 percent said they are able to find an answer to their questions most of the time. In surveys, most people tell us that everything they want is online, and conversely, if they cannot find something via a quick search, then it must not exist, it might be out of date, or it might not be of much value.

These are the most recent published results, but surveys done at Google in 2018 show that these insights from Pew still hold. What was true in 2012 remains true now: people have great confidence in their ability to search. The only significant change is in their reported success rates, which have crept up: 35 percent say they can "always find" what they're looking for, while 73 percent say they can find what they seek "most of the time." This increase is largely due to improvements in the search engines, which improve their data coverage and algorithms every year.

What Good Searchers Do

As long as information needs are easy, simple searches work reasonably well. Most people actually do less than one search per day, and most of those searches are short and commonplace. The average query length on Google during 2016 was 2.3 words. Queries are often brief descriptions like: [ quiche recipe ] or [ calories in chocolate ] or [ parking Tulsa ].

And somewhat surprisingly, most searches have been done before. In an average day, less than 12 percent of all searches are completely novel—that is, most queries have already been entered by another searcher in the past day. By design, search engines have learned to associate short queries with the targets of those searches by tracking pages that are visited as a result of the query, making the results returned both faster and more accurate than they otherwise would have been.
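As a rough illustration of that query-to-target association (a toy sketch only, not Google's actual system), one could count which pages get clicked for a given query and rank targets by frequency:

```python
# Toy sketch only, not Google's actual system: aggregate observed
# (query -> clicked URL) pairs so that frequent targets surface first.
from collections import Counter, defaultdict

click_counts = defaultdict(Counter)  # query -> Counter of clicked URLs

def record_click(query: str, url: str) -> None:
    click_counts[query.strip().lower()][url] += 1

def best_targets(query: str, k: int = 3):
    """Pages most often chosen by past searchers for this exact query."""
    return [url for url, _ in click_counts[query.strip().lower()].most_common(k)]

record_click("nets", "https://www.nba.com/nets")
record_click("nets", "https://www.nba.com/nets")
record_click("nets", "https://en.wikipedia.org/wiki/Fishing_net")
print(best_targets("nets"))  # the basketball site ranks first
```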

A large fraction of queries are searches for another website (called navigational queries, which make up as much as 25 percent of all queries), or for a short factual piece of information (called informational queries, which are around 40 percent of all queries). However, complex search tasks often need more than a single query to find a satisfactory answer. So how can you do better searches? 

First, you can modify your query by changing a term in your search phrase, generally to make it more precise or by adding additional terms to reduce the number of off-topic results. Very experienced searchers often open multiple browser tabs or windows to pursue different avenues of research, usually investigating slightly different variations of the original query in parallel.

You can see good searchers rapidly trying different search queries in a row, rather than just being satisfied with what they get with the first search. This is especially true for searches that involve very ambiguous terms—a query like [animal food] has many possible interpretations. Good searchers modify the query to get to what they need quickly, such as [pet food] or [animal nutrition], depending on the underlying goal.

Choosing the best way to phrase your query means you should:

  • add terms that are central to the topic (avoid peripheral terms that are off-topic)
  • use only terms you know the definition of (do not guess at a term if you are not certain)
  • leave common terms together in order ( [ chow pet ] is very different from [ pet chow ] )
  • keep the query fairly short (you usually do not need more than two to five terms)

You can make your query more precise by limiting the scope of a search with special operators. The most powerful operators include double-quote marks (as in the query [ “exponential growth occurs when” ]), which find only documents containing that phrase in that specific order. Two other commonly used search operators are site: and filetype:. These let you search within only one web site (such as [ site:ScientificAmerican.com ]) or for a particular file type, such as a PDF (example: [ filetype:pdf coral bleaching ]).
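For readers who script their research, here is a small illustrative helper that assembles a query string from exactly these operators; the function and its parameters are invented for this example.

```python
# Illustrative helper (the function and parameters are made up for this
# sketch) that assembles a query string using the operators described above.
def build_query(terms=None, phrase=None, site=None, filetype=None):
    parts = []
    if phrase:
        parts.append(f'"{phrase}"')           # exact phrase, in that order
    if terms:
        parts.extend(terms)                   # two to five central terms
    if site:
        parts.append(f"site:{site}")          # restrict to one web site
    if filetype:
        parts.append(f"filetype:{filetype}")  # restrict to one document type
    return " ".join(parts)

print(build_query(terms=["coral", "bleaching"], filetype="pdf"))
# -> coral bleaching filetype:pdf
print(build_query(phrase="exponential growth occurs when"))
# -> "exponential growth occurs when"
```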

Second, try to understand the range of possible search options. Recently, search engines added the capability of searching for images that are similar to a photo you upload. A searcher who knows this can find photos online that have features resembling those in the original. By clicking through the similar images, a searcher can often find information about the object (or place) in the image. Searching for matches of my favorite fish photo can tell me not just what kind of fish it is, but also provide links to other fishing locations and ichthyological descriptions of this fish species.

Overall, expert searchers use all of the resources of the search engine and their browsers to search both deeply (by making query variations) and broadly (by having multiple tabs or windows open). Effective searchers also know how to limit a search to a particular website or to a particular kind of document, find a phrase (by using quote marks to delimit the phrase), and find text on a page (by using a text-find tool).

Third, learn some cool tricks. One is the find-text-on-page skill (that is, Command-F on Mac, Control-F on PC), which is unfamiliar to around 90 percent of the English-speaking, Internet-using population in the US. In our surveys of thousands of web users, the large majority have to do a slow (and error-prone) visual scan for a string of text on a web site. Knowing how to use text-finding commands speeds up your overall search time by about 12 percent (and is a skill that transfers to almost every other computer application).

Fourth, use your critical-thinking skills.  In one case study, we found that searchers looking for the number of teachers in New York state would often do a query for [number of teachers New York ], and then take the first result as their answer—never realizing that they were reading about the teacher population of New York City, not New York State. In another study, we asked searchers to find the maximum weight a particular model of baby stroller could hold. How big could that baby be?

The answers we got back varied from two pounds to 250 pounds. At both ends of the spectrum, the answers make no sense (few babies in strollers weigh less than five pounds or more than 60 pounds), but inexperienced searchers just assumed that whatever numbers they found correctly answered their search questions. They did not read the context of the results with much care.  

Search engines are amazingly powerful tools that have transformed the way we think of research, but they can hurt more than help when we lack the skills to use them appropriately and evaluate what they tell us. Skilled searchers know that the ranking of results from a search engine is not a statement about objective truth, but about the best matching of the search query, term frequency, and the connectedness of web pages. Whether or not those results answer the searchers’ questions is still up for them to determine.


[This article is originally published in scientificamerican.com written by Michael Shermer - Uploaded by AIRS Member: Jay Harris]

Google as a window into our private thoughts

What are the weirdest questions you've ever Googled? Mine might be (for my latest book): “How many people have ever lived?” “What do people think about just before death?” and “How many bits would it take to resurrect in a virtual reality everyone who ever lived?” (It's 10 to the power of 10¹²³.) Using Google's autocomplete and Keyword Planner tools, U.K.-based Internet company Digitaloft generated a list of what it considers 20 of the craziest searches, including “Am I pregnant?” “Are aliens real?” “Why do men have nipples?” “Is the world flat?” and “Can a man get pregnant?”

This is all very entertaining, but according to economist Seth Stephens-Davidowitz, who worked at Google as a data scientist (he is now an op-ed writer for the New York Times), such searches may act as a “digital truth serum” for deeper and darker thoughts. As he explains in his book Everybody Lies (Dey Street Books, 2017), “In the pre-digital age, people hid their embarrassing thoughts from other people. In the digital age, they still hide them from other people, but not from the internet and in particular sites such as Google and PornHub, which protect their anonymity.” Employing big data research tools “allows us to finally see what people really want and really do, not what they say they want and say they do.”

People may tell pollsters that they are not racist, for example, and polling data do indicate that bigoted attitudes have been in steady decline for decades on such issues as interracial marriage, women's rights and gay marriage, indicating that conservatives today are more socially liberal than liberals were in the 1950s.

Using the Google Trends tool in analyzing the 2008 U.S. presidential election, however, Stephens-Davidowitz concluded that Barack Obama received fewer votes than expected in Democrat strongholds because of still latent racism. For example, he found that 20 percent of searches that included the N-word (hereafter, “n***”) also included the word “jokes” and that on Obama's first election night about one in 100 Google searches with “Obama” in them included “kkk” or “n***(s).”

“In some states, there were more searches for ‘[n***] president’ than ‘first black president,’” he reports—and the highest number of such searches were not predominantly from Southern Republican bastions as one might predict but included upstate New York, western Pennsylvania, eastern Ohio, industrial Michigan and rural Illinois. This difference between public polls and private thoughts, Stephens-Davidowitz observes, helps to explain Obama's underperformance in regions with a lot of racist searches and partially illuminates the surprise election of Donald Trump.

But before we conclude that the arc of the moral universe is slouching toward Gomorrah, a Google Trends search for “n*** jokes,” “bitch jokes” and “fag jokes” between 2004 and 2017, conducted by Harvard University psychologist Steven Pinker and reported in his 2018 book Enlightenment Now, shows downward-plummeting lines of frequency of searches. “The curves,” he writes, “suggest that Americans are not just more abashed about confessing to prejudice than they used to be; they privately don't find it as amusing.”

More optimistically, these declines in prejudice may be an underestimate, given that when Google began keeping records of searches in 2004 most Googlers were urban and young, who are known to be less prejudiced and bigoted than rural and older people, who adopted the search technology years later (when the bigoted search lines were in steep decline). Stephens-Davidowitz confirms that such intolerant searches are clustered in regions with older and less educated populations and that compared with national searches, those from retirement neighborhoods are seven times as likely to include “n*** jokes” and 30 times as likely to contain “fag jokes.” Additionally, he found that someone who searches for “n***” is also likely to search for older-generation topics such as “Social Security” and “Frank Sinatra.”
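Comparisons like these can be approximated with pytrends, an unofficial third-party Python client for Google Trends (it is not a Google product, and its interface may change). A minimal sketch, run here on neutral placeholder terms rather than the offensive queries the studies examined:

```python
# Sketch using pytrends (pip install pytrends), an unofficial third-party
# client for Google Trends; not a Google product, interface may change.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(
    kw_list=["frank sinatra", "social security"],  # neutral placeholder terms
    timeframe="2004-01-01 2017-12-31",             # the window examined above
    geo="US",
)
trend = pytrends.interest_over_time()  # DataFrame of weekly interest, 0-100
annual = trend.drop(columns="isPartial").resample("A").mean().round(1)
print(annual)  # annual averages reveal the long-run slope of each curve
```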

What these data show is that the moral arc may not be bending toward justice as smoothly upward as we would like. But as members of the Silent Generation (born 1925–1945) and Baby Boomers (born 1946–1964) are displaced by Gen Xers (born 1965–1980) and Millennials (born 1981–1996), and as populations continue shifting from rural to urban living, and as postsecondary education levels keep climbing, such prejudices should be on the wane. And the moral sphere will expand toward greater inclusiveness.


[This article is originally published in business2community.com written by Graham Jones - Uploaded by AIRS Member: Joshua Simon] 

Web search often wastes your time. There, I have said it. Google is a master illusionist. It makes you think you are working when you are not.

The reason is simple: you can get an answer to any question within seconds. That makes it feel as though you have achieved something. Prior to Google, you may have needed to find a book, look something up in the index, locate the right page and then read it – only to find out it didn’t contain what you wanted. So, you had to choose another book. That might have needed a trip to the library. To find out one fact, it might have taken hours. Now, all it takes is seconds.

Of course, in the past, the information you needed might not have been in a book. You might have needed to speak with someone. Perhaps you could only get the information from an expert. Or, if it was about a company you needed to phone them. Many companies had an “information line” – a special number you could call to speak with someone to get details you needed about the business. All of that took time.

When things take a long time our perception of progress is slow. However, when we can do things rapidly our sense of achievement is heightened. So, when we use the web to search for things which we previously had to look up in a book, take a trip to the library, or make several phone calls, we get a sense of achieving something. It is a psychological illusion that we are working.

It is, therefore, no surprise to discover in recent research that business buyers prefer to obtain information about suppliers using the web, rather than any other tool.


According to the study from Path Factory, almost 90% of buyers use web search as their preferred method of finding information. Only one in three people opt for the telephone. That’s no surprise, either. Research from O2, the mobile phone company, found that making phone calls was only the fifth most popular use of a smartphone. It turns out that the most popular use of a mobile phone is to browse the Internet – searching for information.

Web Search is Wasting Time

The problem with web search is that it is often wrong. Yet most people accept the first result Google provides. For instance, search for “how many planets are there in our solar system” and the first result will tell you that there are eight planets. True, but not quite. Like many other facts, there are nuances which are not explained. Astronomers redefined what constitutes a planet, and so our solar system contains eight planets and five “dwarf planets”, including Pluto (which was a planet when I grew up!). Like many other “facts”, the first information we see on Google misses out the nuance.

Similarly, search for “duplicate content penalty” and you will find thousands of results telling you that you cannot duplicate content on your website because “Google will ban you” or “Google will downgrade your search engine ranking” or some other nonsense. And nonsense it is. Google has said so, repeatedly. Yet, many businesses trying to make their websites optimised for search engines will spend hours and hours recrafting content in order to “remove the penalty”. That’s an activity that is wasting time on work that is unnecessary, all because of search.

However, if you phoned a reliable expert on search engine optimisation you would have received the correct information about duplicating content, saving you hours of work. But making that phone call and having the conversation is slower than completing a web search, and hence it feels less productive.

What this means is that if you need a new supplier, you could well make a better selection if you did not use web search. Pick up the phone and speak with people who know the market, such as the contacts you make in business networking. It will feel slower doing this, but the answers you get will be more informed and less prone to the influence of an algorithm. Once you have the recommendations, then use web search to find out about the company.

Making phone calls is becoming a low priority activity. Your office phone rings less than it used to. You feel as though you are being productive because you search and find stuff online, but that is wasting time.


[This article is originally published in searchengineland.com written by Matt McGee - Uploaded by AIRS Member: Bridget Miller] 

Google UK recently shared a list of 52 Things to Do on a variety of Google properties (found via Phil Bradley). It’s a collection of tools and tips about using Google products and services for some everyday functions. If you’re a search power user, you probably know most of them already. But Google’s message seems to be, “Did you know you could do all this stuff on Google?”

It got us thinking about non-Google search tools that might have slipped notice altogether, or just fallen off your radar. With that in mind, here’s a list of seven search tools you may not know about … but should.

Read on to discover how to see search suggestions from all major search engines on one page; a “cover flow” interface for face images from Google Images; a new way to get recommendations about music, movies, and more; new tools to search multiple search engines from one place; a tool for finding hot event tickets; and an assist for hunting through Flickr’s many photos.

Soovle

Soovle offers a search interface that puts a variety of search sites on a single page. What makes it unique is that, as you type in the search box, Soovle shows you the auto-completion phrases that each search site recommends. In addition to being original, that function could serve to help with a keyword research project.

Google is the default search site when you arrive, but you can use the right-arrow on your keyboard to quickly select a different site to perform your search. And there’s also a daily update on the top auto-complete terms. Each day, Soovle queries the search sites to find out what they show as the top results for each letter of the alphabet. Pretty cool stuff.
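Soovle's core trick can be approximated in a few lines. The sketch below pulls suggestions from Google's suggest endpoint for a single seed term; the endpoint is unofficial and undocumented, so treat the URL and response shape as assumptions that may break or be rate-limited:

```python
# Sketch of the Soovle idea for one site: fetch autocomplete suggestions
# from Google's suggest endpoint. The endpoint is unofficial and
# undocumented; URL and response shape are assumptions that may change.
import json
import urllib.parse
import urllib.request

def google_suggestions(term):
    url = ("https://suggestqueries.google.com/complete/search"
           "?client=firefox&q=" + urllib.parse.quote(term))
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.loads(resp.read().decode("utf-8"))
    return payload[1]  # payload is [query, [suggestion, ...], ...]

for suggestion in google_suggestions("keyword research"):
    print(suggestion)
```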

facesaerch

If you like the “cover flow” feature that Apple iTunes offers, you’ll like this new image search engine. facesaerch (yes, “a” before “e”) takes a Google image search, eliminates everything but faces, and gives the results a more modern interface.

It’s nothing groundbreaking overall, but one nice addition is a customizable widget that lets you embed a facesaerch widget on your blog or web page, complete with cool thumbnail scrolling and all. (For your Oprah Winfrey fan page, of course.)

TasteKid

TasteKid is more of a recommendation engine than a search engine. It covers movies, music, and books, offering suggestions for things you might like based on what you search for. The interface is gorgeous (albeit a bit dark/goth), and the recommendations are generally good. Search for U2, for example, and TasteKid suggests you try out INXS, R.E.M., Sting, Bruce Springsteen, Coldplay, and several other artists — most of which fit what a typical U2 fan might enjoy.

There are question marks next to each recommendation. When you mouseover a question mark, TasteKid displays additional information from Wikipedia, YouTube, and Amazon about that artist (or book, movie, actor, etc.). It uses Google Gadgets to offer a widget that can be embedded into your web page or blog.

fasteagle

Fasteagle is a combination search tool and web directory rolled into one interface, with a little touch of feed reader built in, too. The home page gives you quick access to search a dozen different sites, from Google to Delicious to eBay to FriendFeed.

It would be nice to be able to customize those 12 options, or add more to the original 12 to make your own personal search portal. But I don’t see that option anywhere on fasteagle, which is still in beta. Meanwhile, clicking on the categories in the top menu (Tools, News, Business, etc.) leads to new sets of sub-categories in the left-side menu. Under the Tech category, for example, the left menu changes to show sub-categories such as Web World, Tech Vloggers, IT News, Computing, Apple, Google, Mobile Computing, and Web Marketing. That last sub-category includes sites like Search Engine Land, Marketing Pilgrim, Search Engine Watch, and several others. Click on any link, and the site shows up in the main fasteagle window, with the top and side menus still showing — making fasteagle almost like a feed reader that gives you quick access to hundreds of web sites in rapid succession.

FanSnap

Have you searched for event tickets lately? It's not fun, and it's not easy. FanSnap hopes to change that by providing a one-stop source for finding tickets to sporting events, theatre productions, and concerts.

FanSnap doesn’t sell tickets; it lets you find tickets being sold by brokers and others in the secondary ticket market. At the moment, I don’t see inventory from official ticket sellers such as Ticketmaster or TicketsWest. They get inventory from more than 50 ticket resellers, making it a much easier way to shop than visiting the individual web sites of that many ticket brokers. To borrow a comparison Om Malik recently made, it’s like Zillow for event tickets.

compfight

Strange name for a Flickr image search engine, but don’t let it keep you away. Compfight offers a handful of customizations that help you drill down into Flickr’s enormous pool of user-uploaded photos.

You can search the full text of a photo page (title, description, and tags), or if that’s producing too many matches, you can just search tags. You can search for photos that allow Creative Commons commercial usage. You can search for photos that are original to Flickr. You can also turn Flickr’s Safe Search on or off. And you can combine all these options in any search combination you want. And rather than Flickr’s clunky, default, 10-at-a-time search results, you get dozens of thumbnails with compfight.
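For the programmatically inclined, a similar license-filtered search can be run against Flickr's public API (flickr.photos.search). The sketch below is hedged: it requires your own API key, and the license IDs follow Flickr's documented table (4 = CC BY, 5 = CC BY-SA, 6 = CC BY-ND, the commercial-use Creative Commons licenses):

```python
# Assumption-laden sketch of a compfight-style query against Flickr's
# public API (flickr.photos.search). Needs your own API key; license IDs
# follow Flickr's documented table (4 = CC BY, 5 = CC BY-SA, 6 = CC BY-ND).
import requests  # third-party: pip install requests

API_KEY = "YOUR_FLICKR_API_KEY"  # placeholder

def search_cc_photos(text, license_ids="4,5,6"):
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "text": text,            # full text: title, description, and tags
        "license": license_ids,  # commercial-use Creative Commons licenses
        "safe_search": 1,
        "format": "json",
        "nojsoncallback": 1,
        "per_page": 50,          # far more than a 10-at-a-time results page
    }
    resp = requests.get("https://api.flickr.com/services/rest/",
                        params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["photos"]["photo"]

for photo in search_cc_photos("coral reef")[:5]:
    print(photo["title"])
```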

Kedrix

There are plenty of meta-search engines out there, but only one that wants you to “mearch” instead of “search.” That one is Kedrix, which is trying to coin a new word based on the words “meta” and “search.” That doesn’t work for me, but the search engine does, thankfully.

The Kedrix premise is simple: It’s actually not a meta-search engine in the traditional sense. Rather than mash results from different search engines together (as Metacrawler, Dogpile, Mamma, and others do), Kedrix separates the results from the four main search engines on tabs. Google results are all under one tab, Yahoo under another, and so forth. In that sense, it’s more like a search engine comparison tool. And that makes it somewhat more valuable to SEOs (who like to compare results across different engines) than your standard meta-search engine.


[This article is originally published in 9to5google.com written by Abner Li - Uploaded by AIRS Member: Dorothy Allen]

Since the European Union Copyright Directive was introduced last year, Google and YouTube have been lobbying against it by enlisting creators and users. Ahead of finalized language for Articles 11 and 13 this month, Google Search is testing possible responses to the “link tax.”

Article 11 requires search engines and online news aggregators — like Google Search and News, respectively — to pay licensing fees when displaying article snippets or summaries. The end goal is for online tech giants to sign commercial licenses to help publishers adapt online and provide a source of revenue.

Google discussed possible ramifications in December if Article 11 was not altered. Google News could be shut down in Europe, while fewer news articles would appear in Search results. This could be a detriment to news sites, especially smaller ones, that rely on Search to get traffic.

The company is already testing the impact of Article 11 on Search. Screenshots from Search Engine Land show a “latest news” query completely devoid of context. The Top Stories carousel would not feature images or headlines, while the 10 blue links would not include any summary or description when linking to news sites. What’s left is the name of the domain and the URL for users to click on.

 

This A/B test is possibly already live for users in continental Europe. Most of the stories in the top carousel lack cover images, while others just use generic graphics. Additionally, links from European publications lack any description, just the full, un-abbreviated page title, and domain.

Google told Search Engine Land that it is currently conducting experiments “to understand what the impact of the proposed EU Copyright Directive would be to our users and publisher partners.” This particular outcome might occur if Google does not sign any licensing agreements with publishers.

Meanwhile, if licenses are signed, Google would be “in the position of picking winners and losers” by having to select what deals it wants to make. Presumably, the company would select the most popular at the expense of smaller sites. In December, the company’s head of news pointed out that “it’s unlikely any business will be able to license every single news publisher.”

“Effectively, companies like Google will be put in the position of picking winners and losers. Online services, some of which generate no revenue (for instance, Google News) would have to make choices about which publishers they’d do deals with. Presently, more than 80,000 news publishers around the world can show up in Google News, but Article 11 would sharply reduce that number. And this is not just about Google, it’s unlikely any business will be able to license every single news publisher in the European Union, especially given the very broad definition being proposed.”

Google will make a decision on its products and approach after the final language of the Copyright Directive is released.

Dylan contributed to this article


[This article is originally published in cpomagazine.com written by  - Uploaded by AIRS Member: Robert Hensonw]

In an age where the Internet is simply an indispensable part of life, the search engine is possibly at the foundation of the user experience. This is a world where near-instantaneous access to information is not simply a ‘nice to have’ for researchers and writers; it is the bedrock of our modern consumer society. It is the way in which we find takeout food, restaurants, household furnishings, fashion – and yes, even friends and lovers. In short, without search engines, the machine that powers our modern world begins to falter.

We are increasingly reliant on search engines – but it may be instructive to understand just how much data Google is now handling. Within Google’s range of products, there are seven with at least one billion users. In its privacy policy, Alphabet (Google’s parent company) outlines its broad and far-reaching data collection. The amount of data the company stores is simply staggering. Google holds an estimated 15 exabytes of data, or the capacity of around 30 million personal computers.
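As a back-of-envelope check on that comparison (assuming roughly 500 GB per personal computer, an assumption for illustration rather than a figure from the article):

```python
# Back-of-envelope check, assuming roughly 500 GB per personal computer
# (an assumption for illustration, not a figure from the article).
exabytes = 15 * 10**18        # 15 EB in bytes (decimal units)
pc_capacity = 500 * 10**9     # 500 GB per PC
print(int(exabytes / pc_capacity))  # -> 30000000, about 30 million PCs
```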

However, it is worth noting that Google is not alone in the search engine space. There are other players, such as Microsoft’s Bing, Yahoo Search, and Baidu. All of them are mining data. However, there can only be one ‘Gorilla in the Sandpit’ – and that is undoubtedly Google. To explore just how search engines may infringe on our right to privacy, Google gives us a yardstick for what the industry would characterize as ‘best practice’.

Nothing in life is free … Including search engines

Consumers are becoming increasingly aware that the old maxim of ‘nothing in life is free’ is even more applicable than when it was penned. In fact, there is an associated saying ‘if something is free you are getting exactly what you pay for.’

Herein lies the problem with the use of search engines. They offer an essential service – but that service is certainly not free of cost. That cost is a certain level of intrusion into our lives in the form of search engine companies like Google gathering data about our online habits and using that data to fine-tune marketing efforts (often by selling that data to third parties for their use).

But that is only the outcome of using a search engine. For many consumers and consumer advocate groups, the real problem lies deeper than that. It revolves around awareness and permission. Are search engine companies free to gather and use our data without explicit permission – and can we opt out of such an arrangement?

The answer is both yes and no. Reading search engine company user agreements, it becomes clear that (at least historically) we have been empowering companies like Google to use the data that they gather in almost any way they see fit. But lately, we have seen a huge effort by search engine companies to make sure that consumers are aware that they can limit the amount of data that is gathered. That was not always the case – user agreements are almost never perused with great care. Most people are not attorneys and are defeated by the legalese and intricacies of most user agreements and privacy policies.

However, the real problem is that although the gathering of data and the leveraging of that data for profit may represent a betrayal of the relationship between consumer and search engine company, there is a larger issue at stake, beyond even the right to privacy – and that is data security.

Google has a far from perfect record on security – but it is better than many other tech companies. However, mistakes do happen. In 2009, there was a bug in Google Docs that potentially leaked 0.05% of all documents stored in the service. Taken as a percentage this does not seem like a terribly large number, but 0.05% of 1 billion users is still 500,000 people. Google has no room for error when it comes to data protection.

Another fact worth noting is that Google’s Chrome browser is a potential nightmare when it comes to privacy issues. All user activity within that browser can be linked to a Google account. If Google controls your browser and your search engine, and has tracking scripts on the sites you visit (which, more often than not, it does), it holds the power to track you from multiple angles. That is something that is making Internet users increasingly uncomfortable.

Fair trade of service for data

It may seem that consumers should automatically feel extremely uncomfortable about search engines making use of the data that they gather from a user search. However, as uncomfortable as it may seem to some, consumers are entering into a commercial relationship with a search engine provider. To return to a previous argument, ‘there are no free lunches’. Search engines cost money to maintain. Their increasingly powerful algorithms are the result of many man-hours (and much processing power), which all cost huge amounts of money. In return for access to vast amounts of information, we are asked to tolerate the search engine companies’ use of our data. In most instances, this will have a minimal impact on the utilitarian value of a search engine. Is this not a tradeoff that we should be willing to tolerate?

However, there is a darker side to search engine companies harvesting and using data that they have gleaned from consumer activity. Take, for instance, the relationship between government agencies and search engine companies. Although the National Security Agency in the United States has refused to confirm (or deny) that there is any relationship between Google and itself, there are civil rights advocates who are becoming increasingly vocal about the possible relationship.

As far back as 2011, the Electronic Privacy Information Center submitted a Freedom of Information Act request regarding NSA records about the 2010 cyber-attack on Google users in China. The request was denied – the NSA said that disclosing the information would put the US Government’s information systems at risk.

Just how comfortable should we be that the relationship between a company like Google and the NSA sees that government agency acting as a de facto guardian of its practices and potential weaknesses when it comes to data protection – and by extension privacy?

It’s complicated

The search for a middle ground between the individual’s rights to privacy and data protection and the commercial relationship between consumers and search engine companies is fraught with complexities. What is becoming increasingly clear is that a new paradigm must be explored – one that will protect both the commercial interests of companies that offer an invaluable service and the rights of the individual. Whether that relationship will be defined in a court of law or by legislation remains to be seen.
