
Identity fraud is now more threatening than ever

Technology is changing the way people do business but, in doing so, it increases the risks around security. Identity fraud in particular is on the rise: it is estimated to have doubled in the last year alone. And, while the banking sector may be the juiciest target for attempted identity fraud, security is not purely a banking concern.

In 2015, damage caused by internet fraud amounted to $3 trillion worldwide, and the latest predictions put the figure at $6 trillion by 2021. That makes cyber fraud one of the biggest threats to the economy and the fastest-growing crime, on track to become far more profitable than the global trade in illegal drugs.

Enterprises all over the world need to focus on this costly problem. With over 1.9 billion websites and counting, the opportunities for fraud are enormous – a serious problem that must be slowed down.

Most common identity fraud methods

Of all fraud methods, social engineering is the biggest issue for companies. It became the most common fraud method in 2019, accounting for 73% of all attempted attacks, according to our own research. It lures unsuspecting users into providing or using their confidential data, and it is increasingly popular with fraudsters because it is efficient and difficult to recognise.

Fraudsters trick innocent people into registering for a service using their own valid ID. The account they open is then taken over by the fraudster and used to generate value by withdrawing money or making online transfers.

Fraudsters mainly look for victims on online portals where people search for jobs, buy and sell things, or connect with other people. In most cases, they use fake job ads, app-testing offers, cheap loan offers, or fake IT support to lure their victims, contacting them on channels such as eBay Classifieds, job search engines and Facebook.

Fraudsters are also creating sophisticated architecture to boost the credibility of these cover stories, including fake corporate email addresses, fake ads, and fake websites.

In addition, we are seeing more applicants being coached, either by messenger or video call, on what to say during the identity verification process. Specifically, they are instructed to say that they were not prompted to open the account by a third party but are doing so by choice.

How to fight social engineering

If organisations are to consistently stay ahead of the latest fraud methods and protect their customers, they need to have the right technology in place to be able to track fraudulent activity, react quickly and be flexible in reengineering the security system.

Crucially, it requires a mix of technical and ‘personal’ mechanisms. Some methods include:

Device binding – to make sure that the only person who can use an app – and the account behind it – is the person entitled to do so, device binding is highly effective. From the moment a customer signs up for a service, the app binds to the device used (a mobile phone, for example) and, as soon as another device is used, the customer needs to verify themselves again. (A minimal sketch of the idea follows this list.)

Psychological questions – to detect social engineering even when it is well disguised, trained staff provide an additional safety net, applied on top of the standard checks at the start of the verification process. Once an elevated risk of a social engineering attack is detected, staff ask the customer an advanced set of questions, which are constantly updated as new attack patterns emerge.

Takedown service – organisations can learn from every attack. This means constantly analysing new methods and tricks to identify the websites fraudsters use to lure in innocent people. And, by working with an identity verification provider that has good connections to the most-used web hosts and a highly engaged research team, organisations can take hundreds of these websites offline.
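As a concrete illustration of the device-binding idea above, here is a minimal server-side sketch in Python. The secret, the in-memory user store and the function names are all illustrative assumptions, not any particular vendor's API.

```python
import hashlib
import hmac

# Assumption: a per-deployment secret held server-side; the name is illustrative.
DEVICE_KEY = b"server-side-secret"
# Assumption: a stand-in user store mapping user_id -> bound device fingerprint.
bound_devices = {}

def fingerprint(device_id):
    """Derive a stable, non-reversible fingerprint for a device identifier."""
    return hmac.new(DEVICE_KEY, device_id.encode(), hashlib.sha256).hexdigest()

def bind_device(user_id, device_id):
    """Called once at sign-up: tie the account to the current device."""
    bound_devices[user_id] = fingerprint(device_id)

def is_trusted_device(user_id, device_id):
    """Called on every login: True only for the originally bound device.
    A False result should trigger full re-verification of the customer."""
    known = bound_devices.get(user_id)
    return known is not None and hmac.compare_digest(known, fingerprint(device_id))

bind_device("alice", "phone-hardware-id-123")
assert is_trusted_device("alice", "phone-hardware-id-123")   # same device: OK
assert not is_trusted_device("alice", "unknown-tablet-456")  # new device: re-verify
```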


Fake ID fraud

However, social engineering isn't the only common type of identity fraud. Organisations should also be aware of fake ID fraud. Our research indicates fake IDs are available on the dark web for as little as €50, and some are so realistic they can often fool human passport agents. The most commonly faked documents are national ID cards, followed by passports. Other targets include residence permits and driving licenses.

The quality of these fake IDs is increasing too. Where fraudsters once used simple colour copies of ID cards, they are now switching to more advanced, and more costly, falsifications that even include holograms.

Biometric security is extremely effective at fighting this kind of fraud. It can detect holograms and other features, such as optically variable inks, simply by having the user move the ID in front of the camera. Machine learning algorithms can also be used for dynamic visual detection.

Similarity fraud is another method used by fraudsters, although it is less common now that easier and more efficient methods (like social engineering) exist. In this method, a fraudster uses a genuine, stolen, government-issued ID that belongs to a person with similar facial features.

To fight similarity fraud, biometric checks and liveness checks used together are very effective – and they are much more precise and accurate than a human could ever be without the help of state-of-the-art security technology.

The biometric check scans the characteristics of the customer's face and compares them to the picture on their ID card or passport. If the technology confirms all of the important features in both pictures, it hands over to the liveness check – a liveness detection program that verifies the customer's presence by building a 3D model of their face from photos taken at different angles while the customer moves according to instructions.

The biometric check itself could be tricked with a photo but, in combination with the liveness check, it proves there is a real person in front of the camera.
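To illustrate how the two stages combine, here is a minimal sketch in Python. The embedding and liveness functions are placeholders for real components (a trained face-recognition model and a 3D liveness detector), and the match threshold is an assumed value.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # assumption: tuned per model and deployment

def face_embedding(image):
    """Placeholder for a trained face-recognition model returning a unit vector."""
    vec = np.asarray(image, dtype=float).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)

def faces_match(selfie, id_photo):
    """Stage 1: compare the live selfie against the photo on the ID document."""
    similarity = float(face_embedding(selfie) @ face_embedding(id_photo))
    return similarity >= MATCH_THRESHOLD

def is_live(frames):
    """Stage 2 placeholder: a real check reconstructs a 3D face model from
    frames captured at different angles while the customer follows prompts."""
    return len(frames) >= 3  # stand-in: require several distinct angles

def verify_customer(selfie, id_photo, frames):
    # Both stages must pass: the face match alone could be fooled by a photo.
    return faces_match(selfie, id_photo) and is_live(frames)
```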

Fighting back

The threat of identity fraud is not going away and, as fraudsters become more and more sophisticated, so too must technology. With the right investment in advanced technology measures, organisations will be in a much stronger position to stop fraudsters in their tracks and protect their customers from the risk of identity fraud.

 [Source: This article was published in techradar.com By Charlie Roberts - Uploaded by the Association Member: Alex Gray]

Categorized in Investigative Research

This could make many Twitter users very happy, or equally lead to more confusion, depending on how it's enacted.

According to a new discovery by reverse engineering expert Jane Manchun Wong, Twitter is working on a new option that would enable users to apply for profile verification from their account settings.


Twitter first enabled all users to apply for verification back in 2016, though if you try to go through that process now, you're met with a note explaining that Twitter is not accepting new verification requests.

Twitter hasn't provided any updates on the process since then, though it has repeatedly noted that it is working on a new system. Twitter has also continued to verify some accounts, though not via user applications. Most recently, Twitter used its verification tick to highlight authoritative voices in relation to COVID-19, but again, that was internally managed, and not open for public requests.

Twitter product lead Kayvon Beykpour reported in July 2018 that, while work had been done on fixing its verification process, it was not a priority, and was still some way off being re-launched. The appearance of a new prompt in testing could suggest that it's now moving closer to making a comeback - though how it might function, and what qualification process Twitter will use, remains a mystery. And it'll likely be difficult for Twitter to manage, no matter how they go about it.

For example, part of the problem with verification was that it seemingly implied that Twitter endorsed any account with a blue tick. In 2017, Twitter verified the profile of a white supremacist leader - despite, around the same time, vowing to take more action against hate speech. That's what prompted the initial pause on verification - the confusion here was that some within Twitter saw the verification tick as a basic mark of ID confirmation, while others felt it should be reserved for approved public figures only. So some people have been verified simply by proving who they are, regardless of their public profile, while others have been rejected, despite being people of significance.

Any changes to the process will mean that Twitter needs to spell out exactly what qualifies someone for a blue tick, but it could also mean retrospectively removing the tick from those who currently have it, yet don't meet the updated standards.

Twitter, of course, is unlikely to do that, but if it doesn't take that step, that will mean that a level of confusion will remain around what the blue tick represents, as some people who've been approved previously will still have it, despite not matching the new requirements.

How Twitter gets around that is hard to say - just remove it for everyone then start again? That seems unlikely - but then again, with only 356k people currently holding the blue tick, Twitter could, theoretically, review all of these profiles and take the tick away from those who are no longer eligible.

Either way, it's interesting to note that Twitter does appear to be moving on this, and it'll be interesting to see how they facilitate the process moving forward.

If Twitter leans towards making it more of an official ID confirmation, that could help to provide more accountability, with users unable to hide behind a basic account. Twitter could, for example, reduce the visibility of accounts which are not approved, limiting their capacity to interact without going through the ID process. That could make trolls think twice about their activity, given that it would be tied back to their actual identity.

If Twitter leans towards making it more of an exclusive endorsement for public figures, that, as noted, could see accounts that don't qualify stripped of the tick.

It's an interesting element, and we'll have to wait and see where Twitter decides to go with it.

[Source: This article was published in socialmediatoday.com By Andrew Hutchinson - Uploaded by the Association Member: Wushe Zhiyang]

Categorized in Social

Annotation of a doctored image shared by Rep. Paul A. Gosar on Twitter. (Original 2011 photo of President Barack Obama with then-Indian Prime Minister Manmohan Singh by Charles Dharapak/AP)

To a trained eye, the photo shared by Rep. Paul A. Gosar (R-Ariz.) on Monday was obviously fake.


At a glance, nothing necessarily seems amiss. It appears to be one of a thousand (a million?) photos of a president shaking a foreign leader’s hand in front of a phalanx of flags. It’s easy to imagine that, at some point, former president Barack Obama encountered this particular official and posed for a photo.

Except that the photo at issue is of Iranian President Hassan Rouhani, someone Obama never met. Had he done so, it would have been significant news, nearly as significant as President Trump’s various meetings with North Korean leader Kim Jong Un. Casual observers would be forgiven for not knowing all of this, much less who the person standing next to Obama happened to be. Most Americans couldn’t identify the current prime minister of India in a New York Times survey; the odds they would recognize the president of Iran seem low.

Again, though, there are obvious problems with the photo that should jump out quickly. There’s that odd, smeared star on the left-most American flag (identified as A in the graphic above). There’s Rouhani’s oddly short forearm (B). And then that big blotch of color between the two presidents (C), a weird pinkish-brown blob of unexpected uniformity.

Each of those glitches reflects where the original image — a 2011 photo of Obama with then-Indian Prime Minister Manmohan Singh — was modified. The truncated star was obscured by Singh’s turban. The blotch of color is an attempt to remove the circle from the middle of the Indian flag behind the leaders. The weird forearm is a function of the slightly different postures and sizes of the Indian and Iranian leaders.


President Barack Obama meets with Indian Prime Minister Manmohan Singh in Nusa Dua, on the island of Bali, Indonesia, on Nov. 18, 2011. (Charles Dharapak/AP)

Compared with the original, the difference is obvious. What it takes, of course, is looking.

Tools exist to determine whether a photo has been altered. It’s often more art than science, involving a range of probability more than a certain final answer. The University of California at Berkeley professor Hany Farid has written a book about detecting fake images and shared quick tips with The Washington Post.
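As a taste of what such tooling can look like (Farid's quick tips follow below), here is a small error-level-analysis (ELA) sketch in Python using Pillow. ELA is one common screening technique, not a method attributed to Farid: edited regions often recompress differently from the rest of a JPEG, so amplifying the recompression residue highlights candidate edits. It is a hint, not proof.

```python
from io import BytesIO
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Re-save the image as JPEG and amplify the difference: edited regions
    often recompress unevenly and stand out in the result."""
    original = Image.open(path).convert("RGB")
    buffer = BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Scale the residual so subtle compression differences become visible.
    peak = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * 255 // peak))

# Usage: bright, blocky regions that stand out from their surroundings
# (like the uniform blob between the two presidents) deserve a closer look.
# error_level_analysis("suspect.jpg").save("ela.png")
```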


  • Reverse image search. Save the photo to your computer and then drop it into Google Image Search. You’ll quickly see where it might have appeared before, useful if an image purports to be of a breaking news event. Or it might show sites that have debunked it. (A self-contained sketch of the underlying idea follows this list.)
  • Check fact-checking sites. This can be a useful tool by itself. Images of political significance have a habit of floating around for a while, deployed for various purposes. The fake Obama-Rouhani image, for example, has been around since at least 2015 — when it appeared in a video created by a political action committee supporting Sen. Ron Johnson (R-Wis.).
  • Know what’s hard to fake. In an article for Fast Company, Farid noted that some things, like complicated physical interactions, are harder to fake than photos of people standing side by side. Backgrounds are also often tricky; it’s hard to remove something from an image while accurately re-creating what the scene behind it would have looked like. (It’s not a coincidence that both the physical interaction and the background of the “Rouhani” photo were clues that it was fake.)
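Here is the promised sketch of the reverse-lookup idea in miniature: a perceptual "average hash" changes little under resizing and recompression, so a suspect image can be matched against a set of already-debunked fakes. Google's actual reverse image search is far more sophisticated; this self-contained Python illustration is not its API, and the file names below are hypothetical.

```python
from PIL import Image

def average_hash(path, size=8):
    """Shrink to a size x size grayscale grid; each bit records whether a
    pixel is brighter than the mean. Similar images give similar hashes."""
    gray = Image.open(path).convert("L").resize((size, size))
    pixels = list(gray.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical usage: match an incoming image against known, debunked fakes.
# known_fakes = {average_hash("obama_rouhani_fake.jpg"): "PAC video, 2015"}
# suspect = average_hash("incoming.jpg")
# matches = [src for h, src in known_fakes.items() if hamming(h, suspect) <= 5]
```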

But, again, you have to care that you’re passing along a fake photo. Gosar didn’t. Presented with the image’s inaccuracy by a reporter from the Intercept, Gosar replied via tweet that “no one said this wasn’t photoshopped.”

“No one said the president of Iran was dead. No one said Obama met with Rouhani in person,” Gosar wrote to the “dim-witted reporter.” “The point remains to all but the dimmest: Obama coddled, appeased, nurtured and protected the worlds No. 1 sponsor of terror.”

As an argument, that may be evaluated on the merits. It is clearly the case, though, that Gosar had no qualms about sharing an edited image. He recognizes, in fact, that the photo is a lure for the point he wanted to make: Obama is bad.

That brings us to a more important point, one that demands a large-type introduction.

The Big Problem with social media

There exists a concept in social psychology called the “Dunning-Kruger effect.” You’ve probably heard of it; it’s a remarkable lens through which to consider a lot of what happens in American culture, including, specifically, politics and social media.

The idea is this: people who don’t know much about a subject necessarily don’t know how little they know. How could they? So, after learning a little about the topic, a sudden confidence arises. Now knowing more than nothing, and not knowing how little of the subject they know, people can feel as though they have some expertise. And then they offer it, even while dismissing actual experts.

“Their deficits leave them with a double burden,” David Dunning wrote in 2011 about the effect, named in part after his research. “Not only does their incomplete and misguided knowledge lead them to make mistakes, but those exact same deficits also prevent them from recognizing when they are making mistakes and other people choosing more wisely.”

The effect is often depicted in a graph like this. You learn a bit and feel more confident talking about it — and that increases and increases until, in a flash, you realize that there’s a lot more to it than you thought. Call it the “oh, wait” moment. Confidence plunges, slowly rebuilding as you learn more, and learn more about what you don’t know. This affects all of us, myself included.

(Graph of the Dunning-Kruger confidence curve: Philip Bump/The Washington Post)

Dunning’s effect is apparent on Twitter all the time. Here’s an example from this week, in which the “oh, wait” moment comes at the hands of an actual expert.


One value proposition for social media (and the Internet more broadly) is that this sort of Marshall-McLuhan-in-“Annie-Hall” moment can happen. People can inform themselves about reality, challenge themselves by accessing the vast scope of human knowledge and even be confronted directly by those in positions of expertise.

In reality, though, the effect of social media is often to create a chorus of people who are at a similar, overconfident point in the Dunning-Kruger curve. Another value of the Internet is its ability to create ad hoc like-minded communities, but that also means it can convene like-minded groups of wrong-minded opinions. It’s awfully hard to feel chastened or uninformed when any number of other people vocally share your view. (Why, one could fill hours on a major cable-news network simply by filling panels with people on the dashed-line part of the graph above!)

The Internet facilitates ignorance as readily as it does knowledge. It allows us to build reinforcements around our errors. It allows us to share a fake image and wave away concerns because the target of the image is a shared enemy for your in-group. Or, simply, to accept a faked image as real because you’re either unaware of obvious signs of fakery or unaware of the unlikely geopolitics that surrounds its implications.

I asked Farid, the fake-photo expert, how normal people lingering at the edge of an “oh, wait” moment might avoid sharing altered images.

“Slow down!” he replied. “Understand that most fake news/images/videos are designed to be sensational or outrageous and get you to respond quickly before you’ve had time to think. When you find yourself reacting viscerally, take a breath, slow down, and don’t be so quick to share/like/retweet.”

Unless, of course, your goals are both to be sensational and to get retweets. In that case, go ahead and share the image. You can always rationalize it later.

[Source: This article was published in washingtonpost.com By Philip Bump - Uploaded by the Association Member: Alex Gray]

Categorized in Investigative Research
