
Machine learning-driven results entirely bypass the traditional search box.

Several weeks ago, without much fanfare, Google added new shortcut icons to its mobile app and website. They appear immediately under the search box to provide quick access to current weather, sports, entertainment and restaurant information.

These are essentially prepackaged queries, using a range of data behind the scenes, to replace typing with tapping. These shortcuts have quietly turned Google’s local search and discovery experience into a powerful competitor to Yelp.

 

The conventional local-mobile search experience for “lunch near me” is familiar: a local pack, a map, then organic links and images down the first page.

But when you tap the “eat & drink” shortcut, you get a different experience that brings a much richer set of results. Also on display is the full range of Google’s mobile and location data capabilities.

 

Google provides personalized recommendations alongside a wide range of other choices, grouped by interest, cuisine, atmosphere and other attributes. All of it is driven by rich data, which is also an argument for adding more enhanced data as part of your local SEO strategy.

Google presents “places for you,” based on your location history — your actual visits to other restaurants that establish patterns and preferences. Google also uses machine learning to group venues into useful categories by interest and attributes: “popular with foodies,” “best lunch,” “recently opened,” “great beers” and so on.

It’s not clear how much usage this is getting; Google hasn’t done much to build awareness other than place the shortcuts under the search bar. But it offers a dramatically improved experience that eliminates the need to do multiple queries and click around. It’s like a super carousel on AI. (Note: I didn’t say “steroids.”)

The experience represents a template for other kinds of mobile search results beyond the four categories currently present. Shopping and Travel come immediately to mind. Android features more shortcuts than iOS.

 

Currently, you can buy movie tickets via the entertainment shortcut. We can expect more transactional capabilities like this to roll out to other categories.

Right now there are no ads, but assume there will be if it gains widespread usage. If it indeed does gain momentum, we could see large numbers of people entirely bypass the search box in certain key categories.

Source: This article was published on searchengineland.com by Greg Sterling

Categorized in Search Engine

Scientists believe they have moved a step closer to proving the existence of a parallel universe with the discovery of a mysterious ‘cold spot’.

This cool patch of space, first spotted by the NASA WMAP satellite in 2004, is part of the radiation thought to have been produced during the formation of the universe some 13 billion years ago.

 

However, research conducted by Professor Tom Shanks from Durham University has uncovered a new theory – that the Cold Spot was formed when universes COLLIDED.

The cold spot could be evidence of a larger multiverse (Flickr)

 

Professor Shanks theorises that this is ‘the first evidence for the multiverse – and billions of other universes may exist like our own’.

He explained: “We can’t entirely rule out that the spot is caused by an unlikely fluctuation explained by the standard [theory of the Big Bang].

“But if that isn’t the answer, then there are more exotic explanations.

“Perhaps the most exciting of these is that the Cold Spot was caused by a collision between our universe and another bubble universe.”

He added: “If further, more detailed, analysis… proves this to be the case then the Cold Spot might be taken as the first evidence for the multiverse.”

Source: This article was published on Yahoo News UK by Andy Wells

Categorized in Internet Technology

Hidden algorithms reflect and amplify racism and other human biases, but researchers hope to fix them

In 1796, German physiologist Franz Joseph Gall thought he had made a world-altering discovery. By carefully measuring the contours of the human skull, he hypothesized, one could infer information about an individual — including their mental capabilities, personality, skills, and social proclivities.

The result was phrenology, a strain of nineteenth century pseudoscience that went on to inspire centuries of “scientific” justifications for racism. Almost a century later, Italian anthropologist Cesare Lombroso founded a school of criminology that claimed criminality is a trait inherited at birth, theorizing that these “criminaloids” can be detected by measuring the distances between certain features on the face. The theories were later used alongside other fields like eugenics to justify slavery, the Nazis’ pursuit of a white Aryan master race, and other historical atrocities.

While these theories have long since been debunked, a chilling aspect of their legacy lives on in some of today’s advances in artificial intelligence and machine learning. Much like phrenology, researchers say machine learning algorithms are now being given far too much power, invisibly influencing decisions on everything from whether a school teacher gets fired to whether a criminal suspect is released on bail. And perhaps most worryingly, their decisions are frequently painted as “objective” and “unbiased” — when in reality they’re anything but.

Consider a recent paper, in which two Chinese researchers describe an artificial neural network they say can predict whether someone will commit a crime based solely on their facial features. The researchers claim the system’s results are “objective,” reasoning that a computer algorithm has “no biases whatsoever.”

Private companies are already pitching these capabilities to law enforcement agencies. An Israeli face recognition company called Faception has controversially claimed its algorithms can predict whether someone is a “terrorist” or a “pedophile” with 80 percent accuracy. The company is now actively seeking to sell its software to police and governments, telling the Washington Post that it has already signed contracts with at least one unnamed government’s “homeland security agency.”

But a brief look at the researchers’ paper shows the system is trained by analyzing photos of people who have already been convicted by the criminal justice system. In other words, the system simply defines the common facial features of people who have been labeled “criminal” in the past and reapplies that label to people with similar features. The result evokes a computer-aided rehash of phrenology, with computer vision and learning algorithms standing in for cranial measurement tools.

 

Thus, those who are already disproportionately targeted by the criminal justice system — African Americans are far more likely to be arrested for drugs, for instance, despite using them at around the same rate as whites — are again disproportionately branded by the algorithm, whose stewards then defend the results by pointing to the system’s supposed “objectivity.”

Rather than removing human biases, the algorithm creates a feedback loop that reflects and amplifies them.
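To make that loop concrete, here is a small, self-contained simulation. Every number, the two-group setup and the rule that shifts enforcement toward the higher-scoring group are assumptions chosen purely for illustration; this is a caricature of the dynamic described above, not a model of any real policing or scoring system.

```python
"""Toy simulation of a bias feedback loop (all figures hypothetical).

Two groups offend at exactly the same underlying rate, but group B starts
out policed more heavily, so it is overrepresented among past convictions.
A naive "risk model" scores each group by its conviction rate in that data,
and the scores are then used to shift even more enforcement toward the
higher-scoring group; the initial disparity grows round after round.
"""
import random

random.seed(42)

POP = 10_000                        # people per group, per round
TRUE_RATE = 0.10                    # identical underlying offense rate for A and B
scrutiny = {"A": 0.10, "B": 0.20}   # share of each group that gets policed
SHIFT = 0.02                        # enforcement moved toward the "riskier" group each round

for round_no in range(1, 7):
    # Convictions only happen when an offense coincides with police attention.
    convictions = {
        g: sum(1 for _ in range(POP)
               if random.random() < TRUE_RATE and random.random() < scrutiny[g])
        for g in ("A", "B")
    }

    # The "model": each group's risk score is simply its conviction rate in the data.
    risk = {g: convictions[g] / POP for g in ("A", "B")}

    # The scores drive next round's enforcement: resources shift toward the group
    # the model says is riskier (total policing effort held constant).
    hi, lo = ("A", "B") if risk["A"] > risk["B"] else ("B", "A")
    moved = min(SHIFT, scrutiny[lo])
    scrutiny[hi] += moved
    scrutiny[lo] -= moved

    print(f"round {round_no}: convictions A={convictions['A']:4d} B={convictions['B']:4d}  "
          f"risk A={risk['A']:.3f} B={risk['B']:.3f}  "
          f"next scrutiny A={scrutiny['A']:.2f} B={scrutiny['B']:.2f}")
```

Even though both groups behave identically by construction, the printed rounds show the modelled “risk” gap widening, because the scores keep redirecting attention toward the group that was watched more closely to begin with.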

“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data. Our biases are built into that training data,” Kate Crawford, a principal researcher at Microsoft, said during a talk on AI last week at SXSW in Austin, Texas.

Crawford warned that biased and opaque machine learning algorithms can become especially dangerous in the wrong hands. She mentioned a system built by Palantir, a data-mining company co-founded by President Trump’s tech advisor Peter Thiel, that could help power Trump’s crackdown on immigrants and Muslims. And she noted how primitive computer-aided systems helped authoritarians of the past commit atrocities, like the Hollerith tabulating machines built by IBM, which helped the Nazis track and identify Jews and other groups during World War II.

Today, an algorithm can be the perfect weapon for authoritarian leaders because it lets them efficiently and opaquely enforce systems that are already biased against oppositional and marginalized groups. For example, in a major exposé last year, ProPublica discovered that systems used by courts in Broward County, Florida to assign “risk scores” to criminal defendants consistently rated black defendants with a higher level of risk than whites facing the same charges.

Even worse, authorities who use such systems can easily make claims to their “neutrality” and distance themselves from the consequences — while hiding the system’s inner workings from public view.

“The reason a lot of these algorithms are put into place is so people can deny responsibility for the process as a whole,” Cathy O’Neil, a mathematician and author who frequently writes about the human impacts of big data, told Vocativ. “Sometimes the standard for whether it works or not is whether someone gets to abscond from responsibility, or better yet, whether they get to impose a punitive, inscrutable process.”

 

In her recent book “Weapons of Math Destruction,” O’Neil outlines several examples of how machine learning systems can mirror and amplify human biases to destructive ends. A recurring theme, she said, is that many of those systems are built by third parties and haven’t been independently assessed for bias and fairness. So far, the algorithms’ creators lack either the means, the desire, or the incentives to conduct those tests.

“It’s a very, very dumb thing,” O’Neil said. “Can you imagine buying a car not knowing whether it’s gonna drive, or not knowing whether it’s safe? That’s just not a reasonable way of going about it. It’s like a car industry where we haven’t developed standards yet.”

AI researchers say that creating those standards is one of the most crucial steps to making machine-learning systems accountable to the humans they pass judgement on. Last September, AI Now, an Obama White House-commissioned report that has since spun off into a research organization led by Crawford, highlighted the need to create tools capable of bringing accountability to “black box” algorithms. That includes mechanisms that allow people affected by these systems to contest their decisions, seek redress, and opt-out of automated decision-making processes altogether.

“AI systems are being integrated into existing social and economic domains, and deployed within new products and contexts, without an ability to measure or calibrate their impact,” the report warns. “The situation can be likened to conducting an experiment without bothering to note the results.”

AI auditing tools wouldn’t necessarily need to inspect the system’s proprietary source code, said O’Neil. They would only need to analyze its input data and the resulting decisions to help humans determine whether the system is functioning correctly and fairly, or whether a skewed dataset is contaminating the output by introducing harmful human bias.
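In that spirit, here is a minimal sketch of what such an outside audit could look like. It never touches the model’s code: it takes only a log of cases (reduced here to a protected attribute per case) and the system’s decisions, then compares favorable-outcome rates across groups. The “four-fifths” threshold used below is one common screening heuristic, not a definitive fairness test, and the example records are invented.

```python
"""Black-box fairness screen: needs only (group, decision) pairs, not source code."""
from collections import defaultdict

def audit(records, threshold=0.8):
    """records: iterable of (group, decision) where decision is True if favorable."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        favorable[group] += bool(decision)

    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    for group, rate in sorted(rates.items()):
        ratio = rate / best if best else 0.0
        flag = "POSSIBLE DISPARATE IMPACT" if ratio < threshold else "ok"
        print(f"{group:>8}: favorable rate {rate:.2f} "
              f"({ratio:.2f} of best-treated group)  {flag}")
    return rates

if __name__ == "__main__":
    # Hypothetical decision log: (group, was the decision favorable?)
    log = ([("white", True)] * 70 + [("white", False)] * 30
           + [("black", True)] * 45 + [("black", False)] * 55)
    audit(log)
```

A real audit would also slice by outcome severity, charge type and other inputs, but the point stands: the data going in and the decisions coming out are enough to surface a skewed pattern.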

On a more fundamental level, fighting discriminatory AI is about making sure the systems are being ethically designed in the first place. That means teaching ethics alongside regular STEM education and building new standards for accountability into every step of the process, so that the injustices of the past don’t get hard-coded into the robots and neural networks of the future.

“A lot of these algorithms are trotted out in the name of fairness,” says O’Neil. “We should do better than pay lip service to fairness.”

Source: vocativ.com

Categorized in Online Research

New Google My Business Insights Show How You’re Being Found

Google has rolled out enhanced insights for Google My Business pages. When logged into GMB, you’ll now be able to see the total views to your GMB page, where visitors are coming from, and how they found your page.

Google Search vs. Google Maps

Where are the visitors to your GMB page coming from? Google Search and Google Maps both send traffic to GMB pages, and now you’ll be able to see a breakdown of how many visitors are coming from which source.

Direct vs. Discovery

How are people finding your GMB page? At times people will find it by directly typing your business or brand name in the search bar. At other times, people may find it by searching for a related keyword. Now Google will show you a comparison between who found your page by searching for your name directly, and who discovered it by searching for a related keyword. Unfortunately, when it comes to the actual keywords used to find your GMB page, those are ‘not provided’.
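For illustration only, a small helper that turns those dashboard breakdowns into percentages. The field names and figures below are assumptions invented for this sketch; they mirror the splits described above (Search vs. Maps, direct vs. discovery) rather than any official export format or API.

```python
"""Hypothetical summary of GMB insight counts; field names and numbers are made up."""

def summarize(insights):
    views = insights["search_views"] + insights["maps_views"]
    queries = insights["direct_searches"] + insights["discovery_searches"]
    print(f"Total views: {views}")
    print(f"  from Google Search: {insights['search_views'] / views:.0%}")
    print(f"  from Google Maps:   {insights['maps_views'] / views:.0%}")
    print(f"Of {queries} searches that surfaced the listing:")
    print(f"  direct (business or brand name): {insights['direct_searches'] / queries:.0%}")
    print(f"  discovery (related keyword):     {insights['discovery_searches'] / queries:.0%}")

summarize({
    "search_views": 1200,        # illustrative counts only
    "maps_views": 800,
    "direct_searches": 300,
    "discovery_searches": 700,
})
```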

 

With the addition of these new insights, Google finds it no longer necessary to include Google+ statistics in the GMB dashboard, so those have been removed. The company expects to introduce even more insights to GMB pages in the near future.

Source: https://www.searchenginejournal.com/new-google-business-insights-show-youre-found/170448/

Categorized in Business Research
