
WHY YOU SHOULD CARE
Because companies will soon be able to recognize your face.

As you may already be aware, we know a lot about you. Yes, you! We know whether you clicked here from Facebook or if you came via our home page. We will know how long you spend reading this article and what you click to read next. That information enables us to figure out what interests you, which may affect what shows up next time you visit our site. If OZY were as big a part of your life as Facebook or Google is — if not yet, then soon! — we would know so much about you that we could show you advertisements targeted to your every interest.

But what if we could recognize your face? If so, we might adjust our messages to you depending on whether you look happy or sad, distracted or engaged. We’d adjust if you were looking to the left of the screen, or to the right. We’re not watching you read us, at least not yet, but the capability might not be too far away.

Indeed, facial recognition technology is almost ready for the mainstream. “Computer vision” is “moving very fast” toward the creation of browsers of the visual world, says Ambarish Mitra, co-founder and CEO of Blippar, an app that scans and recognizes images and faces and then shows you search and social-network results on your screen, combining two of the tech world’s current favorite functionalities — machine learning and augmented reality (AR). The dream, so it seems, is to become like a real-time, image-based search engine, a face-based social network and Pokémon Go, all rolled into one. Blippar is not the only “facial network” out there. The Russian website FindFace generated controversy last year when its search-by-faces function was used to reveal the identities of porn actresses. And just as a website to connect college students created by a fresh-faced Mark Zuckerberg in his dorm room eventually revolutionized the news media and advertising industries, these technologies could change how companies market products — and a whole lot more — in untold ways.

“We’re producing a paradigm shift in thinking in marketing and advertising,” says Mitra. His superlatives aren’t entirely unjustified: The Blippar logo has appeared on 12 billion products in the past four years through the company’s marketing partnerships with more than 1,500 brands, including Heinz, Coca-Cola and Anheuser-Busch. These “Blippable” products can be scanned in Blippar’s smartphone app, but rather than just linking to a webpage the way QR codes do, the phone displays interactive AR marketing features. “And why should I need to recognize an ugly square,” Mitra asks, referring to QR codes, “when I can recognize something on its own?”

WHAT [APPLE’S] SIRI IS TRYING TO BE FOR THE AUDIO WORLD, BLIPPAR IS TRYING TO BE FOR THE VISUAL WORLD.

Imagine scanning the face of a billboard model to see an AR version of you wearing the same necklace, or scanning the car in front of you through your screen to get information on where you can buy the same model at local dealerships. (If this sounds like something from Netflix’s dystopian satire Black Mirror, it is — check out the first episode of the third season.) And what about a future where you walk into a store that recognizes your face and then offers you bespoke deals based on the shopping habits you revealed during your previous visit? That’s possible, says Dr. Gary Wilcox, a leading expert on social media and advertising. Indeed, it’s only an extension of existing geolocated marketing techniques that ping deals to your phone when you’re near a certain brand’s store — or its competitor’s.

“There’s a history of advertising staying pretty close to technological developments,” says Wilcox. But as technologies have evolved from print to radio to TV to the internet, marketers have largely relied on trial and error to find the techniques that work best, so “some of these early ideas” for virtual and augmented reality ads “are kinda silly,” he says. For Mitra, one medium remains to be conquered — the visual world: “What a lot of CMOs do not understand is that the biggest [form of] media in the world is actually products themselves. … We will reach a stage where if someone is curious about something, that’s the exact point [where] advertisers will put a very contextual message.”

Somewhat ironically, the personalization of technology that enables marketers to know everything about you essentially brings us back to the pretechnology era of personalized commerce when you were friends with the local store manager, says Harikesh Nair, a professor of marketing at the Stanford Graduate School of Business. If you’re a little freaked out by the idea of companies recognizing your face, “the market will determine” how much of this intrusion society will accept, says Nair — humans will probably never be comfortable having medical, family or financial details digitally attached to their faces as they walk down the street, he thinks. But as far as he’s concerned, if it makes it easier for him to find a suit he likes, it’s all good. “I think we as a society have already implicitly accepted that trade-off,” he says.

To be sure, Blippar isn’t the new Facebook or Google — at least not yet. “You would have to have a [new piece of] hardware that aids these sorts of applications,” says Dokyun Lee, a professor of business analytics at Carnegie Mellon University, insisting that people aren’t going to walk around viewing the entire world through smartphone cameras all the time. Facial-recognition software was banned from Google Glass because it was considered too creepy, and even then the product never made it to market. The future would certainly be more awkward than most sci-fi films suggest if we have to view everything — and everyone’s faces — through our phone screens.

But for the Blippar CEO, the Pokémon Go mania wasn’t some alienating dystopia; it was the start of an enlightened future. “Mark my words,” says Mitra, “computer vision and AR will go mainstream well before head-mounted devices take off, and it’s gonna happen through phones.” Just be sure to watch where you’re going when some irritating ad pops into your AR universe.

This article was published on ozy.com by James Watkins.


Facial recognition technology is a pretty amazing thing. It can be used in a variety of applications, from identifying Uber drivers, to tagging friends on Facebook, to paying for goods on Amazon (eventually). Now, its use has inevitably spread to that most common of internet pastimes: porn.

Belgian site Megacams.me, which calls itself a “live sex search engine,” has introduced what’s claimed to be the first ever “sex doppelgänger” feature. It involves desperate customers sending in a photo of a fantasy partner – co-worker, movie star, unsuspecting neighbor – and the software finding a camgirl from the 180,000 available who looks like the person in the picture.

“This way,” says Megacams, “it feels like you are having live sex with the person in your picture.” The system requires a front-on photograph of the subject, which isn't easy when you're taking it from a bush 20 feet away.

TechCrunch tested the system and the best match it found was rated at just 47 percent likeness. So you may have to squint a bit to feel like you’re interacting with the secret object of your affections/stalking target.

Considering how it’s being used, Megacams isn’t willing to reveal which facial recognition software is powering the process, though TechCrunch believes it to be Microsoft’s Cognitive Services. Formerly called Project Oxford, the technology appears in the Redmond firm’s popular “Guess your age” and “Guess your emotion” tools.

Microsoft’s API gives developers 30,000 free searches a month; after that, it’s $1.50 per 1,000 lookups.
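At the pricing reported here, a monthly bill is easy to estimate. A minimal sketch, assuming billing is pro-rated per lookup rather than rounded up to full 1,000-lookup blocks (the article doesn't say which):

```python
def monthly_cost(lookups, free_quota=30_000, rate_per_1000=1.50):
    """Estimate a month's bill under the reported pricing:
    the first `free_quota` lookups are free, then `rate_per_1000`
    dollars per 1,000 lookups (pro-rating is an assumption)."""
    billable = max(0, lookups - free_quota)
    return billable / 1000 * rate_per_1000

# A site running 180,000 lookups a month would pay for 150,000 of them.
print(monthly_cost(180_000))  # 225.0
```

So a service at Megacams's scale pays only a few hundred dollars a month, which helps explain how casually these features can be bolted on.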

Megacams denied that the feature was creepy. "People are watching their girl next door all the time," spokesperson Eddy L told The Verge. "If they don't use this tool they click and click forever until they find the doppelgänger in porn. We just give them the tools right now to make that search easier."

The site says that uploaded photos are deleted as soon as the search has been performed.

Facial recognition technology may offer amazing ways to enhance our lives but, like the Russian privacy nightmare FindFace, most people would agree that this isn’t one of them.

Source: http://www.techspot.com


We’re all a bit worried about the terrifying surveillance state that becomes possible when you cross omnipresent cameras with reliable facial recognition — but a new study suggests that some of the best algorithms are far from infallible when it comes to sorting through a million or more faces.

The University of Washington’s MegaFace Challenge is an open competition among public facial recognition algorithms that’s been running since late last year. The idea is to see how systems that outperform humans on sets of thousands of images do when the database size is increased by an order of magnitude or two.

See, while many of the systems out there learn to find faces by perusing millions or even hundreds of millions of photos, the actual testing has often been done on sets like the Labeled Faces in the Wild one, with 13,000 images ideal for this kind of thing. But real-world circumstances are likely to differ.

“We’re the first to suggest that face recs algorithms should be tested at ‘planet-scale,'” wrote the study’s lead author, Ira Kemelmacher-Shlizerman, in an email to TechCrunch. “I think that many will agree it’s important. The big problem is to create a public dataset and benchmark (where people can compete on the same data). Creating a benchmark is typically a lot of work but a big boost to a research area.”

The researchers started with existing labeled image sets of people — one set consisting of celebrities from various angles, another of individuals with widely varying ages. They added noise to this signal in the form of “distractors,” faces scraped from Creative Commons licensed photos on Flickr.

They ran the test with as few as 10 distractors or as many as a million — essentially, the number of needles stayed the same but they piled on the hay.
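The effect of piling on hay can be illustrated with a toy rank-1 identification test. This is not MegaFace's actual protocol: random vectors stand in for real face embeddings, and a dot-product nearest-neighbor search stands in for a real matcher. But the setup mirrors the benchmark's design, with a fixed set of known identities while the distractor pool grows:

```python
import random
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def rank1_accuracy(probes, gallery):
    """Fraction of probes whose best gallery match (highest cosine
    similarity, i.e. dot product of unit vectors) has the correct
    identity label."""
    correct = 0
    for label, vec in probes:
        best = max(gallery, key=lambda g: sum(x * y for x, y in zip(vec, g[1])))
        correct += best[0] == label
    return correct / len(probes)

random.seed(42)
dim = 32
# One synthetic "embedding" per known identity, plus a noisy probe of each.
templates = {i: [random.gauss(0, 1) for _ in range(dim)] for i in range(20)}
probes = [(i, normalize([x + random.gauss(0, 1.0) for x in t]))
          for i, t in templates.items()]
gallery_known = [(i, normalize(t)) for i, t in templates.items()]

# Draw the full distractor pool once, so smaller tests are strict subsets
# (the needles stay the same; only the hay grows).
pool = [(-1, normalize([random.gauss(0, 1) for _ in range(dim)]))
        for _ in range(5000)]

for n in (10, 500, 5000):
    print(n, rank1_accuracy(probes, gallery_known + pool[:n]))
```

Because the smaller galleries are subsets of the larger ones, accuracy can only stay flat or fall as distractors are added, which is exactly the degradation curve MegaFace measures at far larger scale.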

[Image: MegaFace benchmark results]

The results show a few surprisingly tenacious algorithms: The clear victor for the age-varied set is Google’s FaceNet, while it and Russia’s N-TechLab are neck and neck in the celebrity database. (SIAT MMLab, from Shenzhen, China, gets honorable mention.)

Conspicuously absent is Facebook’s DeepFace, which in all likelihood would be a serious contender. But as participation is voluntary and Facebook hasn’t released its system publicly, its performance on MegaFace remains a mystery.

Both leaders showed a steady decline as more distractors were added, although efficacy doesn’t fall off quite as fast as the logarithmic scale on the graphs makes it look. The ultra-high accuracy rate touted by Google in its FaceNet paper doesn’t survive past 10,000 distractors, and by the time there are a million, despite a hefty lead, it’s not accurate enough to serve much of a purpose.

Still, getting three out of four right with a million distractors is impressive — but that success rate wouldn’t hold water in court or as a security product. It seems we still have a ways to go before that surveillance state becomes a reality — that one in particular, anyway.

The researchers’ work will be presented a week from today at the Conference on Computer Vision and Pattern Recognition in Las Vegas.

Source: https://techcrunch.com/2016/06/23/facial-recognition-systems-stumble-when-confronted-with-million-face-database/


Facial recognition makes sense as a method for your computer to recognize you. After all, humans already use a powerful version of it to tell each other apart. But people can be fooled (disguises! twins!), so it’s no surprise that even as computer vision evolves, new attacks will trick facial recognition systems, too. Now researchers have demonstrated a particularly disturbing new method of stealing a face: one that’s based on 3-D rendering and some light Internet stalking.

Earlier this month at the Usenix security conference, security and computer vision specialists from the University of North Carolina presented a system that uses digital 3-D facial models based on publicly available photos and displayed with mobile virtual reality technology to defeat facial recognition systems. A VR-style face, rendered in three dimensions, gives the motion and depth cues that a security system is generally checking for. The researchers used a VR system shown on a smartphone’s screen for its accessibility and portability.

Their attack, which successfully spoofed four of the five systems they tried, is a reminder of the downside to authenticating your identity with biometrics. By and large your bodily features remain constant, so if your biometric data is compromised or publicly available, it’s at risk of being recorded and exploited. Faces plastered across the web on social media are especially vulnerable—look no further than the wealth of facial biometric data literally called Facebook.


Other groups have done similar research into defeating facial recognition systems, but unlike in previous studies, the UNC test models weren’t developed from photos the researchers took or ones that the study participants provided. The researchers instead went about collecting images of the 20 volunteers the way any Google stalker might—through image search engines, professional photos, and publicly available assets on social networks like Facebook, LinkedIn, and Google+. They found anywhere from three to 27 photos of each volunteer. “We could leverage online pictures of the [participants], which I think is kind of terrifying,” says True Price, a study author who works on computer vision at UNC. “You can’t always control your online presence or your online image.” Price points out that many of the study participants are computer science researchers themselves, and some make an active effort to protect their privacy online. Still, the group was able to find at least three photos of each of them.

The researchers tested their virtual reality face renders on five authentication systems—KeyLemon, Mobius, TrueKey, BioID, and 1D. All are available from consumer software vendors like the Google Play Store and the iTunes Store and can be used for things like protecting data and locking smartphones. To test the security systems, the researchers had the subjects program each one to detect their real faces. Then they showed 3-D renders of each subject to the systems to see if they would accept them. In addition to making face models from online photos, the researchers also took indoor head shots of each participant, rendered them for virtual reality, and tested these against the five systems. Using the control photos, the researchers were able to trick all five systems in every case they tested. Using the public web photos, the researchers were able to trick four of the systems with success rates from 55 percent up to 85 percent.
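With only 20 volunteers, success rates like these carry wide statistical uncertainty. A small sketch (the per-system trial counts here are hypothetical, chosen to land on the reported 55 and 85 percent) that pairs each observed rate with a 95 percent Wilson score confidence interval:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion. With the
    small trial counts typical of spoofing studies, a point estimate
    like '85% success' comes with a wide plausible range."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical tallies matching the reported 55% and 85% rates:
for s, n in [(11, 20), (17, 20)]:
    lo, hi = wilson_interval(s, n)
    print(f"{s}/{n} = {s/n:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
```

Even the weakest reported result leaves the attack far better than chance, which is the point: the uncertainty changes the exact number, not the conclusion.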


Face authentication systems have been proliferating in consumer products like laptops and smartphones—Google even announced this year that it’s planning to put a dedicated image processing chip into its smartphones to do image recognition. This could help improve Android’s facial authentication, which was easily spoofed when it launched in 2011 under the name “Face Unlock” and was later improved and renamed “Trusted Face.” Nonetheless, Google warns, “This is less secure than a PIN, pattern, or password. Someone who looks similar to you could unlock your phone.”

Facial authentication spoofing attacks can use 2-D photos, videos, or in this case, 3-D face replicas (virtual reality renders, 3-D printed masks) to trick a system. For the UNC researchers, the most challenging part of executing their 3-D replica attack was working with the limited image resources they could find for each person online. Available photos were often low resolution and didn’t always depict people’s full faces. To create digital replicas, the group used the photos to identify “landmarks” of each person’s face, fit these to a 3-D render, and then used the best quality photo (factoring in things like resolution, lighting, and pose) to combine data about the texture of the face with the 3-D shape. The system also needed to extrapolate realistic texture for parts of the face that weren’t visible in the original photo. “Obtaining an accurately shaped face we found was not terribly difficult, but then retexturing the faces to look like the victims’ was a little trickier and we were trying [to] solve problems with different illuminations,” Price says.

If a face model didn’t succeed at fooling a system, the researchers would try using texture data from a different photo. The last step for each face render was correcting the eyes so they appeared to look directly into the camera for authentication. At this point, the faces were ready to be animated as needed for “liveness clues” like blinking, smiling, and raising eyebrows—basically authentication system checks intended to confirm that a face is alive.

In the “cat-and-mouse game” of face authenticators and attacks against them, there are definitely ways systems can improve to defend against these attacks. One example is scanning faces for human infrared signals, which wouldn’t be reproduced in a VR system. “It is now well known that face biometrics are easy to spoof compared to other major biometric modalities, namely fingerprints and irises,” says Anil Jain, a biometrics researcher at Michigan State University. He adds, though, that, “While 3-D face models may visually look similar to the person’s face that is being spoofed, they may not be of sufficiently high quality to get authenticated by a state of the art face matcher.”

The UNC researchers agree that it would be possible to defend against their attack. The question is how quickly consumer face authentication systems will evolve to keep up with new methods of spoofing. Ultimately, these systems will probably need to incorporate hardware and sensors beyond just mobile cameras or web cams, and that might be challenging to implement on mobile devices where hardware space is very limited. “Some vendors—most notably Microsoft with its Windows Hello software—already have commercial solutions that leverage alternative hardware,” UNC’s Price says. “However, there is always a cost-benefit to adding hardware, and hardware vendors will need to decide whether there is enough demand from and benefit for consumers to add specialized components like IR cameras or structured light projectors.”

Biometric authenticators have the potential to be extremely powerful security mechanisms, but they’re threatened when would-be attackers gain easy access to personal data. In the Office of Personnel Management breach last year, for instance, hackers stole fingerprint data for 5.6 million people. Those markers will be in the wild for the rest of the victims’ lives. That data breach debacle, and the UNC researchers’ study, capture the troubling nature of biometric authentication: When your fingerprint, or faceprint, leaks into the ether, there’s no password reset button that can change it.

Source: https://www.wired.com/2016/08/hackers-trick-facial-recognition-logins-photos-facebook-thanks-zuck/#slide-2



The Association of Internet Research Specialists is the world’s leading community for Internet Research Specialists, providing a unified platform that delivers education, training and certification for online research.
