
Visual search engines will be at the center of the next phase of evolution for the search industry, with Pinterest, Google, and Bing all announcing major developments recently. 

How do they stack up today, and who looks best placed to offer the strongest visual search experience?

Historically, the input-output relationship in search has been dominated by text. Even as the outputs have become more varied (video and image results, for example), the inputs have been text-based. This has restricted and shaped the potential of search engines, as they try to extract more contextual meaning from a relatively static data set of keywords.

Visual search engines are redefining the limits of our language, opening up a new avenue of communication between people and computers. If we view language as a fluid system of signs and symbols, rather than fixed set of spoken or written words, we arrive at a much more compelling and profound picture of the future of search.

Our culture is visual, a fact that visual search engines are all too eager to capitalize on.


Already, specific ecommerce visual search technologies abound: Amazon, Walmart, and ASOS are all in on the act. These companies’ apps turn a user’s smartphone camera into a visual discovery tool, searching for similar items based on whatever is in frame. This is just one use case, however, and the potential for visual search is much greater than just direct ecommerce transactions.

After a lot of trial and error, this technology is coming of age. We are on the cusp of accurate, real-time visual search, which will open a raft of new opportunities for marketers.

 

Below, we review the progress made by three key players in visual search: Pinterest, Google, and Bing.

Pinterest

Pinterest’s visual search technology is aimed at carving out a position as the go-to place for discovery searches. Their stated aim: “To help you find things when you don’t have the words to describe them.”


Rather than tackle Google directly, Pinterest has decided to offer up something subtly different to users – and advertisers. People go to Pinterest to discover new ideas, to create mood boards, to be inspired. Pinterest therefore urges its 200 million users to “search outside the box”, in what could be interpreted as a gentle jibe at Google’s ever-present search bar.

All of this is driven by Pinterest Lens, a sophisticated visual search tool that uses a smartphone camera to scan the physical world, identify objects, and return related results. It is available via the smartphone app, but Pinterest’s visual search functionality can be used on desktop through the Google Chrome extension too.

Pinterest’s vast data set of over 100 billion Pins provides the perfect training material for machine learning applications. As a result, new connections are forged between the physical and digital worlds, using graphics processing units (GPUs) to accelerate the process.
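Under the hood, systems like this typically map each image to a numeric embedding vector (learned by a neural network) and rank candidate Pins by vector similarity. Pinterest has not published its production pipeline, so the sketch below is only a toy illustration of that general idea; the vectors, pin names, and `nearest_pins` helper are all hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_pins(query_embedding, pin_index, top_k=3):
    """Rank indexed pins by similarity to the query embedding."""
    scored = [(cosine_similarity(query_embedding, emb), pin_id)
              for pin_id, emb in pin_index.items()]
    scored.sort(reverse=True)  # highest similarity first
    return [pin_id for _, pin_id in scored[:top_k]]

# Toy index: in a real system these vectors would come from a trained CNN
# run over billions of Pins, not three-dimensional made-up values.
pin_index = {
    "black_mug": [0.9, 0.1, 0.2],
    "white_mug": [0.8, 0.2, 0.3],
    "red_chair": [0.1, 0.9, 0.1],
}
query = [0.85, 0.15, 0.25]  # embedding of the photographed object
print(nearest_pins(query, pin_index, top_k=2))
```

Both mugs rank above the chair because their vectors point in roughly the same direction as the query, which is the property that lets a similarity search surface items of the same style rather than just the same keyword.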


In practice, Pinterest Lens works very well and is getting noticeably better with time. The image detection is impressively accurate and the suggestions for related Pins are relevant.

 

Below, the same object has been selected for a search using Pinterest and also Samsung visual search:


The differences in the results are telling.

On the left, Pinterest recognizes not only the object’s shape, material, and purpose, but also the defining features of the design. This allows for results that go deeper than a direct search for another black mug. Pinterest knows that the less tangible, stylistic details are what really interest its users. As such, we see results for mugs in different colors, but of a similar style.

On the right, Samsung’s Bixby assistant recognizes the object, its color, and its purpose. Samsung’s results are powered by Amazon, and they are a lot less inspiring than the options served up by Pinterest. The image is turned into a keyword search for [black coffee mugs], which renders the visual search element a little redundant.

Visual search engines work best when they express something for us that we would struggle to say in words. Pinterest understands and delivers on this promise better than most.

Pinterest visual search: The key facts

  • Over 200 million monthly users
  • Focuses on the ‘discovery’ phase of search
  • Pinterest Lens is the central visual search technology
  • Great platform for retailers, with obvious monetization possibilities
  • Paid search advertising is a core growth area for the company
  • Increasingly effective visual search results, particularly on the deeper level of aesthetics

Google

Google made early waves in visual search with the launch of Google Goggles. This Android app launched in 2010 and allowed users to search using their smartphone camera. It still works well on famous landmarks, for example, but it has not been updated significantly in quite some time.

It seemed unlikely that Google would remain silent on visual search for long, and this year’s I/O developer conference revealed what the search giant has been working on in the background.


Google Lens, which will be available via the Photos app and Google Assistant, will be a significant overhaul of the earlier Google Goggles initiative.

Any similarity in naming to Pinterest’s product may be more than coincidental. Google has stealthily upgraded its image and visual search engines of late, ushering in results that resemble Pinterest’s format.


Google’s ‘similar items’ product was another move to cash in on the discovery phase of search, showcasing related results that might further pique a consumer’s curiosity.

Google Lens will provide the object detection technology to link all of this together in a powerful visual search engine. In its beta version, Lens offers the following categories for visual searches:

  • All
  • Clothing
  • Shoes
  • Handbags
  • Sunglasses
  • Barcodes
  • Products
  • Places
  • Cats
  • Dogs
  • Flowers

Some developers have been given the chance to try an early version of Lens, with many reporting mixed results:


Looks like Google doesn’t recognize its own Home smart hub… (Source: XDA Developers)

These are very early days for Google Lens, so we can expect this technology to improve significantly as it learns from its mistakes and successes.

When it does, Google is uniquely placed to make visual search a powerful tool for users and advertisers alike. The opportunities for online retailers via paid search are self-evident, but there is also huge potential for brick-and-mortar retailers to capitalize on hyper-local searches.

For all its impressive advances, Pinterest does not possess the ecosystem to permeate all aspects of a user’s life in the way Google can. With a new Pixel smartphone in the works, Google can use visual search alongside voice search to unite its software and hardware. For advertisers using DoubleClick to manage their search and display ads, that presents a very appealing prospect.

We should also anticipate that Google will take this visual search technology further in the near future.

Google is set to open its ARCore product up to all developers, which will bring with it endless possibilities for augmented reality. ARCore is a direct rival to Apple’s ARKit and it could provide the key to unlock the full potential of visual search. We should also not rule out another move into the wearables market, potentially through a new version of Google Glass.

Google visual search: The key facts

  • Google Goggles launched in 2010 as an early entrant to the visual search market
  • Goggles still functions well on some landmarks, but struggles to isolate objects in crowded frames
  • Google Lens scheduled to launch later this year (Date TBA) as a complete overhaul of Goggles
  • Lens will link visual search to Google search and Google Maps
  • Object detection is not perfected, but the product is in beta
  • Google is best placed to create an advertising product around its visual search engine, once the technology increases in accuracy

Bing

Microsoft had been very quiet on this front since sunsetting its Bing visual search product in 2012. It never really took off; perhaps mass-market appetite for a visual search engine wasn’t quite there yet.

Recently, Bing made an interesting re-entry to the fray by announcing a completely revamped visual search engine.

This change of tack has been directed by advances in artificial intelligence that can automatically scan images and isolate items.

The early versions of this search functionality required input from users to draw boxes around certain areas of an image for further inspection. Bing announced recently that this will no longer be needed, as the technology has developed to automate this process.
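In rough terms, “automating the box” means finding connected regions of object-like pixels and deriving bounding boxes from them. Bing’s actual models are far more sophisticated (and learned from data), but a minimal toy version of region isolation over a binary mask might look like the following; the mask and the `find_object_boxes` helper are illustrative inventions, not Bing’s API.

```python
from collections import deque

def find_object_boxes(mask):
    """Find bounding boxes of connected foreground regions in a binary mask.

    A toy stand-in for the detection step: instead of the user drawing a
    box, connected groups of "object" pixels are located automatically.
    Returns (top, left, bottom, right) boxes, one per region.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # BFS over the 4-connected region starting here.
                queue = deque([(r, c)])
                seen[r][c] = True
                top, left, bottom, right = r, c, r, c
                while queue:
                    y, x = queue.popleft()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes

# Two separate "objects" in a 5x6 mask.
mask = [
    [1, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(find_object_boxes(mask))
```

Each box found this way can then be cropped and fed to a similarity search, which is the step the user previously had to trigger by hand.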

The layout of visual search results on Bing is eerily similar to Pinterest’s. If imitation is the sincerest form of flattery, Pinterest should be overwhelmed with flattery by now.


The visual search technology can home in on objects within most images, and then suggests further items that may be of interest to the user. This is only available on desktop for the moment, but mobile support will be added soon.

 

The results are patchy in places, but when an object is detected relevant suggestions are made. In the example below, a search made using an image of a suit leads to topical, shoppable links:


It does not, however, take into account the shirt or tie – the only searchable aspect is the suit.

Things get patchier still for searches made using crowded images. A search for living room decor ideas made using an image will bring up some relevant results, but will not always home in on specific items.

As with all machine learning technologies, this product will continue to improve, and for now Bing is a step ahead of Google in this respect. Nonetheless, Microsoft lacks the user base and the mobile hardware to launch a real assault on the visual search market in the long run.

Visual search thrives on data; in this regard, both Google and Pinterest have stolen a march on Bing.

Bing visual search: The key facts

  • Originally launched in 2009, but removed in 2012 due to lack of uptake
  • Relaunched in July 2017, underpinned by AI to identify and analyze objects
  • Advertisers can use Bing visual search to place shoppable images
  • The technology is in its infancy, but the object recognition is quite accurate
  • Desktop only for now, but mobile will follow soon

So, who has the best visual search engine?

For now, Pinterest. With billions of data points and some seasoned image search professionals driving the technology, it provides the smoothest and most accurate experience. It also does something unique by grasping the stylistic features of objects, rather than just their shape or color. As such, it alters the language at our disposal and extends the limits of what is possible in search marketing.

Bing has made massive strides in this arena of late, but it lacks the killer application that would make it stand out enough to draw searchers from Google. Bing visual search is accurate and functional, but does not create connections to related items in the way that Pinterest can.

The launch of Google Lens will surely shake up this market altogether, too. If Google can nail down automated object recognition (which it undoubtedly will), Google Lens could be the product that links traditional search to augmented reality. The resources and the product suite at Google’s disposal make it the likely winner in the long run.

Source: This article was published on searchenginewatch.com by Clark Boyd

Categorized in Search Engine

Google may have announced lots of new hardware on Wednesday, but the software it demonstrated might actually be cooler. 

Among the new offerings: Google Lens, a visual search engine that will come loaded on every new Pixel 2 phone. 

Google CEO Sundar Pichai first announced the feature back in May, but we got a more comprehensive look at how it works on Wednesday.

Google says that for now, Lens is just a "preview," which may be Google-speak for a beta version. And since it lives in the Pixel 2, most people won't be able to try it out. But Lens is still an exciting peek at what's to come. 

 

Lens basically works like a "smart" magnifying glass. Using the Pixel's camera, you can look up things like artwork, landmarks, and movies — all you have to do is point your camera at the object and press the Lens button in your camera app. Lens can then identify what it is you're looking at and pull up relevant information for you to peruse. 

The Lens feature looks like a tiny spyglass in the Pixel’s camera app. (Source: Google)

Another cool feature of Lens: You can point it at something like the flyer above and it will automatically detect a URL, email address, or phone number without you having to manually type it in. If you scan over an email address, for example, Google will prompt you to send that person a message using Gmail.
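The text-extraction side of this is conceptually simple: run OCR on the camera frame, then pattern-match the recognized text for actionable strings. Google has not documented Lens internals, so the sketch below only illustrates that second step, with made-up OCR output and deliberately simplified patterns (real-world URL, email, and phone matching is considerably messier).

```python
import re

# Hypothetical OCR output from pointing the camera at a flyer.
ocr_text = """Spring Concert - Saturday 7pm
Tickets: www.example.com/tickets
Questions? email events@example.com or call 555-867-5309"""

URL_RE = re.compile(r"(?:https?://|www\.)\S+")        # bare www. or http(s) links
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")     # simplified address shape
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")  # US-style numbers only

def extract_actions(text):
    """Pull actionable strings (URLs, emails, phone numbers) out of OCR text."""
    return {
        "urls": URL_RE.findall(text),
        "emails": EMAIL_RE.findall(text),
        "phones": PHONE_RE.findall(text),
    }

print(extract_actions(ocr_text))
```

Each extracted string can then be routed to the right app: a URL opens the browser, an email address prompts a Gmail draft, a phone number prompts the dialer.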

 

 

And Google said it plans on adding features on an ongoing basis, including the ability to use Lens inside Google Assistant. This could mean the Lens feature will eventually arrive on iOS devices, since you can currently use Google Assistant on your iPhone.


Google Lens is similar to a feature Pinterest introduced last February, also called "Lens." But Pinterest Lens has a slightly different focus from Google Lens: inspiration.

Point Pinterest Lens at a pair of shoes, for instance, and you'll see similar styles on Pinterest and get ideas for how to wear them. Pinterest's Lens feature is less focused on identifying an exact item than on showing you similar or related items. 

Still, there's a clear trend in technology like this: visual search tools that can "Shazam" the world around you. 

Source: This article was published on businessinsider.com by Avery Hartmans


At this moment in history, there are more satellites photographing Earth from orbit than just about anyone knows what to do with. Planet, Inc., has more than 150 orbiting cameras, each the size of a shoebox. DigitalGlobe has five dump-truck-sized sensors. And more startups are planning to launch their own.

What should we do with all that imagery? How can we search it and process it? Descartes Labs, a startup that uses machine learning to identify crop health and other economic indicators in satellite imagery, has created a tool to better index and surf through it. They call it Geovisual Search.

Geovisual Search allows users to find similar-looking objects in aerial maps of China, the United States, and the world. It’s free and available online right now. Click on a visible feature—like an oil tank, an empty swimming pool, or a stack of shipping containers—and Geovisual Search will find other objects like it on the map.
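One common way to make “find things that look like this” tractable over millions of map tiles is to compress each tile’s learned features into a compact binary signature and compare signatures cheaply. Descartes Labs has not published its exact method, so the following is only a toy sketch; the tile names, feature vectors, and mean-threshold “hash” are all assumptions for illustration.

```python
def binarize(features):
    """Collapse a feature vector to a compact binary signature.

    Thresholding against the vector's own mean is a crude stand-in for
    the learned binary hashes that make searching huge tile collections
    tractable.
    """
    mean = sum(features) / len(features)
    return tuple(1 if f > mean else 0 for f in features)

def hamming(a, b):
    """Count of positions where two signatures differ."""
    return sum(x != y for x, y in zip(a, b))

def similar_tiles(clicked, tile_features, top_k=2):
    """Rank other map tiles by Hamming distance to the clicked tile."""
    target = binarize(tile_features[clicked])
    ranked = sorted(
        (t for t in tile_features if t != clicked),
        key=lambda t: hamming(binarize(tile_features[t]), target),
    )
    return ranked[:top_k]

# Toy feature vectors for four map tiles (real ones would come from a CNN).
tiles = {
    "oil_tank_a":  [0.9, 0.8, 0.1, 0.2],
    "oil_tank_b":  [0.8, 0.9, 0.2, 0.1],
    "solar_farm":  [0.1, 0.2, 0.9, 0.8],
    "empty_field": [0.1, 0.1, 0.2, 0.1],
}
print(similar_tiles("oil_tank_a", tiles, top_k=1))
```

Binary signatures trade a little ranking precision for enormous speed, which is roughly the kind of tweak needed to scale a demo like Terrapattern up to whole countries of imagery.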

Here’s a search, for instance, for solar farm-looking features in China:

(Courtesy of Descartes Labs)

“Imagine these big data sets coming along from Planet. Suddenly you’re getting daily pictures of the globe. You kind of want to count these things, every single day, and watch how they change through time,” says Mark Johnson, the CEO of Descartes Labs.  

“The neural nets that we trained here are the beginning of counting oil tanks, or buildings, or windmills. Imagine we wanted to look at sustainable energy infrastructure—solar farms, solar panels on roof—you could start to think about counting their growth through time. You start to get really interesting data streams,” he told me.

It’s a legitimately cool way to search satellite imagery, and it’s great to be able to surf through the terrain of China and the United States as a whole. It reminded me of Terrapattern, an art project created by artists and geographers at Carnegie Mellon University last summer. Terrapattern had a near-identical interface and near-identical capabilities to Geovisual Search, but it only accessed certain urban areas in the U.S., including Pittsburgh, New York, and the Bay Area.

The Descartes team tips its hat to Terrapattern in their announcement blog post, calling the earlier project a “ground-breaking demonstration of visual search over satellite imagery.”

 

“We loved it. The demo aligned with many ideas we had been kicking around at Descartes Labs, and it was great to see somebody just go out and do it,” the blog post says.

Despite this admiration, Descartes only ran their implementation past the Terrapattern team 12 hours before its release. “Their approach is virtually the same as what we did a year ago, with some tweaks to deal with scale,” said Golan Levin, who led the Carnegie Mellon team, in an email.

“It’s quite typical for new-media artworks to, er, ‘inspire’ commercial projects—this is unfortunately quite common,” he said. “Since our team is artists and students and academics, the chance or option to have collaborated would have been much more fun.”

In fact, Levin has written about how Google Street View, Sony EyeToy, and a Nike product called the “Chalkbot” were all inspired by new-media artistic experiments. He added that Terrapattern is now working with a major satellite-imagery provider and a design firm to create a similarly scaled-up version of its product.

Perhaps this method of searching a geographic environment will eventually have the same renown as Google Street View. If the sheer amount of new daily satellite imagery continues to expand, it seems like a possible fate. For its part, Descartes plans to keep expanding the use of machine-learning algorithms on satellite imagery. It will also continue producing its corn-health forecasts.

 

Author: Robinson Meyer

Source: https://www.theatlantic.com/technology/archive/2017/03/a-new-way-to-search-satellite-imagery/518757/


Pinterest said today it’s launching three new products that will point out specific elements in pictures — whether viewed live through a camera or through a typical image search — and use them as a jumping-off point for search.

All of these are designed to keep users coming back to Pinterest over and over to discover ideas based on images. Pinterest has been increasingly trying to close the gap between a user initially viewing an image and being able to jump to ideas and products with a single step, and adding these new in-image search capabilities is another step toward that.

“Early information technology used words to connect ideas, like hyperlinks,” co-founder and chief product officer Evan Sharp said. “Search engines we built today have drafted on that, they rely on words to get you answers to your questions. But when it comes to searching for ideas, words aren’t the right way. Sometimes you don’t really know what you’re looking for until you see it.”

 

So let’s break down each product, starting with the most important one, Lens. It lets users open their camera and point it at anything; Pinterest’s Lens feature will automatically pick apart the objects in the frame. It can drill down into foods, animals, or even patterns like hexagons, and users can then start searching for related elements. Lens is launching in beta today on iOS.


The main reason this is so critical is that Pinterest may be able to capture the brief moment a customer has to make an impulse purchase. That moment can be incredibly fleeting, and lowering the friction between seeing something in the real world and making the purchase can capture it in a way that other companies may not be able to match.

 

Pinterest is also updating its visual search for finding specific products, isolating each product within an image. So if you’re looking at a pin from a company that may be selling a jacket, it will also pick up the image of the boots and let you jump to them. Users can also jump to additional content related to those products or elements in the photo. This gives Pinterest a way to move seamlessly between products, and it offers businesses a way to build awareness for their other products.


Instant Ideas adds a small circle to the bottom of each pin, allowing users to jump straight into related elements and gather additional ideas on that topic. This one seems pointed toward getting users to find products and ideas that they’ll save to their Pinboards — like recipes or potential styles.


Pinterest has largely become synonymous with visual search, which has become the company’s specialty and point of differentiation against other networks. With 150 million users, Pinterest is geared toward getting people to come in and start sort of wandering around to discover ideas and products they might not have known they wanted.

 

However, we’re starting to see some of these tools trickle down into other services, though perhaps in a different fashion. Houzz, for example, breaks down specific products in a photo of a room or home that users can purchase. Startups like Clarifai want to equip small businesses with similar visual search tools, though they take more of a metadata and tagging approach to train their algorithms. And there’s always Google, which has invested heavily in visual search but has yet to weaponize it for advertisers the way Pinterest has.

 

Nevertheless, these Pinterest products are a potential gold mine for those marketers. Pinterest can potentially engage with users at different points in their purchasing lifetime. Whether they are in the mode of discovering ideas — and building brand awareness — or drilling down to find a specific product and buy it, Pinterest offers a wide range of advertising products for each part of the customer’s shopping timeline.

Pinterest is going to have to solidify its pitch that it is one of the best visual search companies in order to continue to woo advertisers, which may still be treating Pinterest as more of a curiosity than a consistent ad buy. Pinterest is going to have to battle Snap, which is expected to go public next year, as a tool for building brand awareness and capture a potential customer’s attention at the beginning of their shopping lifetime. And there’s always Facebook, which has become a mainstay of marketers.

That’s going to come through a combination of new ad products — like its new addition of search ads — and by improving the suite of products it can present to advertisers as unique and differentiated from traditional ad buys. Pinterest, while growing quickly, fell a bit short of the targets it initially set in early 2015 and has to figure out how to readjust its expectations as to what kind of advertising and consumer products marketers want.

 

“These three new products make anything in the world an entry point to the 100 billion ideas in Pinterest,” Pinterest CEO and co-founder Ben Silbermann said. “Together they create a whole new discovery experience that’s unlike anything that’s out there today. You can get ideas whether you’re opening the app or walking through town. The more people use it, the better the results become, the more we can recommend inspiring ideas.”

Author: Matthew Lynley

Source: https://techcrunch.com/2017/02/08/pinterest-adds-visual-search-for-elements-in-images-and-through-your-camera/


Blippar expanded its augmented reality visual search browser on Tuesday to recognize faces in real time with a simple smartphone camera and return information about that person.

The Blippar app feature, Augmented Reality Face Profiles, should build a bigger consumer following for the app, according to Blippar CEO Ambarish Mitra. He believes it will put the app in the hands of more consumers, so when brands like Nestlé, Condé Nast, Time, Procter & Gamble, Kraft, Heinz, PepsiCo, Coca-Cola, and Anheuser Busch launch campaigns, a larger number of consumers will already be familiar with how it works.

"In the world of augmented reality, that was missing," he said. "For any app to do well, you must have a compelling consumer story."

The feature allows users to point their phone’s camera at any real person, or at that person’s image in a picture or on television, and the Blippar app returns information about them from the company’s database of more than three billion facts.
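Face recognition systems of this kind generally embed each detected face as a numeric vector and match it against a database of enrolled identities, refusing a match when nobody is close enough. Blippar’s implementation is proprietary, so the sketch below is a generic, hypothetical version with invented embeddings and an arbitrary distance threshold.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(face_embedding, known_faces, threshold=0.6):
    """Return the closest enrolled identity, or None if nobody is close enough.

    The threshold keeps the system from attaching someone's profile to a
    face it has never seen.
    """
    best_name, best_dist = None, float("inf")
    for name, emb in known_faces.items():
        dist = euclidean(face_embedding, emb)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Toy enrollment database; a real system would store embeddings produced
# by a trained face-recognition model, not hand-picked numbers.
known_faces = {
    "public_figure_a": [0.1, 0.9, 0.3],
    "public_figure_b": [0.8, 0.2, 0.7],
}
print(identify([0.12, 0.88, 0.31], known_faces))  # close to figure A
print(identify([0.5, 0.5, 0.0], known_faces))     # nobody close enough
```

On a match, the app can then pull the associated profile and facts; on a miss, it falls back to treating the face as unenrolled.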

Real-time facial recognition is the latest tool, amidst expansion in artificial intelligence and deep-learning capabilities.

 

For public figures, their faces will be automatically discovered, with information drawn from Blipparsphere, the company's visual knowledge graph that pulls information from publicly accessible sources, which was released earlier this year.

Public figures can also set up their own AR Face Profile. The tool enables them to engage with their fans and to communicate information that is important to them by leveraging their most personal brand -- their face.

Users also can create fact profiles -- Augmented Reality profiles on someone’s face -- to express who they are visually. Users can view each other’s uploaded and published profiles and can add pictures or YouTube videos, as well as AR moods and much more, to express themselves in the moment.

The public figure blipping capabilities are now live and the augmented reality face profile feature is coming soon.

Author: Laurie Sullivan

Source: http://www.mediapost.com/

