
Annotation of a doctored image shared by Rep. Paul A. Gosar on Twitter. (Original 2011 photo of President Barack Obama with then-Indian Prime Minister Manmohan Singh by Charles Dharapak/AP)

To a trained eye, the photo shared by Rep. Paul A. Gosar (R-Ariz.) on Monday was obviously fake.


At a glance, nothing necessarily seems amiss. It appears to be one of a thousand (a million?) photos of a president shaking a foreign leader’s hand in front of a phalanx of flags. It’s easy to imagine that, at some point, former president Barack Obama encountered this particular official and posed for a photo.

Except that the photo at issue is of Iranian President Hassan Rouhani, someone Obama never met. Had he done so, it would have been significant news, nearly as significant as President Trump’s various meetings with North Korean leader Kim Jong Un. Casual observers would be forgiven for not knowing all of this, much less who the person standing next to Obama happened to be. Most Americans couldn’t identify the current prime minister of India in a New York Times survey; the odds they would recognize the president of Iran seem low.

 

Again, though, there are obvious problems with the photo that should jump out quickly. There’s that odd, smeared star on the left-most American flag (identified as A in the graphic above). There’s Rouhani’s oddly short forearm (B). And then that big blotch of color between the two presidents (C), a weird pinkish-brown blob of unexpected uniformity.

Each of those glitches reflects where the original image — a 2011 photo of Obama with then-Indian Prime Minister Manmohan Singh — was modified. The truncated star was obscured by Singh’s turban. The blotch of color is an attempt to remove the circle from the middle of the Indian flag behind the leaders. The weird forearm is a function of the slightly different postures and sizes of the Indian and Iranian leaders.


President Barack Obama meets with Indian Prime Minister Manmohan Singh in Nusa Dua, on the island of Bali, Indonesia, on Nov. 18, 2011. (Charles Dharapak/AP)

Compared with the original, the difference is obvious. What it takes, of course, is looking.

Tools exist to determine whether a photo has been altered. It’s often more art than science, involving a range of probability more than a certain final answer. The University of California at Berkeley professor Hany Farid has written a book about detecting fake images and shared quick tips with The Washington Post.
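Before turning to those tips, one quick, automatable first pass is to inspect an image’s metadata. Below is a minimal sketch using the Pillow library; an edited file often carries a telltale Software tag, but the absence of metadata proves nothing, since social platforms routinely strip it.

```python
# A minimal sketch: inspect EXIF metadata for traces of editing software.
# Requires the Pillow library (pip install Pillow). Absence of metadata is
# not proof of authenticity: social platforms routinely strip EXIF data.
import sys

from PIL import Image
from PIL.ExifTags import TAGS

def interesting_exif(path: str) -> dict:
    """Return EXIF fields that often hint at an image's history."""
    exif = Image.open(path).getexif()
    readable = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    wanted = ("Software", "DateTime", "Make", "Model")
    return {k: v for k, v in readable.items() if k in wanted}

if __name__ == "__main__":
    fields = interesting_exif(sys.argv[1])
    print(fields or "No EXIF metadata found (often stripped by social platforms)")
```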

 

  • Reverse image search. Save the photo to your computer and then drop it into Google Image Search. You’ll quickly see where it might have appeared before, useful if an image purports to be of a breaking news event. Or it might show sites that have debunked it. (A way to script this step is sketched after this list.)
  • Check fact-checking sites. This can be a useful tool by itself. Images of political significance have a habit of floating around for a while, deployed for various purposes. The fake Obama-Rouhani image, for example, has been around since at least 2015 — when it appeared in a video created by a political action committee supporting Sen. Ron Johnson (R-Wis.).
  • Know what’s hard to fake. In an article for Fast Company, Farid noted that some things, like complicated physical interactions, are harder to fake than photos of people standing side by side. Backgrounds are also often tricky; it’s hard to remove something from an image while accurately re-creating what the scene behind it would have looked like. (It’s not a coincidence that both the physical interaction and background of the “Rouhani” photo were clues that it was fake.)
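To script the reverse-image-search tip above, here is a minimal sketch. The searchbyimage URL pattern is an assumption based on the image URL parameter Google’s reverse image search has historically accepted; it is not a documented API and may change.

```python
# A minimal sketch: open a Google reverse image search for an image URL.
# The searchbyimage endpoint is an assumption, not a documented API.
import sys
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open the default browser on a reverse image search for image_url."""
    search_url = "https://www.google.com/searchbyimage?image_url=" + quote(image_url, safe="")
    webbrowser.open(search_url)

if __name__ == "__main__":
    reverse_image_search(sys.argv[1])
```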

But, again, you have to care that you’re passing along a fake photo. Gosar didn’t. Presented with the image’s inaccuracy by a reporter from the Intercept, Gosar replied via tweet that “no one said this wasn’t photoshopped.”

“No one said the president of Iran was dead. No one said Obama met with Rouhani in person,” Gosar wrote to the “dim-witted reporter.” “The point remains to all but the dimmest: Obama coddled, appeased, nurtured and protected the worlds No. 1 sponsor of terror.”

As an argument, that may be evaluated on the merits. It is clearly the case, though, that Gosar had no qualms about sharing an edited image. He recognizes, in fact, that the photo is a lure for the point he wanted to make: Obama is bad.

That brings us to a more important point, one that demands a large-type introduction.

The Big Problem with social media

There exists a concept in social psychology called the “Dunning-Kruger effect.” You’ve probably heard of it; it’s a remarkable lens through which to consider a lot of what happens in American culture, including, specifically, politics and social media.

The idea is this: People who don’t know much about a subject necessarily don’t know how little they know. How could they? So after learning a little bit about the topic, a sudden confidence arises. Now knowing more than nothing, and not knowing how little of the subject they know, people can feel as though they have some expertise. And then they offer it, even while dismissing actual experts.

 

“Their deficits leave them with a double burden,” David Dunning wrote in 2011 about the effect, named in part after his research. “Not only does their incomplete and misguided knowledge lead them to make mistakes, but those exact same deficits also prevent them from recognizing when they are making mistakes and other people choosing more wisely.”

The effect is often depicted in a graph like this. You learn a bit and feel more confident talking about it — and that increases and increases until, in a flash, you realize that there’s a lot more to it than you thought. Call it the “oh, wait” moment. Confidence plunges, slowly rebuilding as you learn more, and learn more about what you don’t know. This affects all of us, myself included.

[Graph of the Dunning-Kruger confidence curve] (Philip Bump/The Washington Post)

Dunning’s effect is apparent on Twitter all the time. Here’s an example from this week, in which the “oh, wait” moment comes at the hands of an actual expert.

[Screenshot of a Twitter exchange in which an actual expert delivers the “oh, wait” moment]

One value proposition for social media (and the Internet more broadly) is that this sort of Marshall-McLuhan-in-“Annie-Hall” moment can happen. People can inform themselves about reality, challenge themselves by accessing the vast scope of human knowledge and even be confronted directly by those in positions of expertise.

 

In reality, though, the effect of social media is often to create a chorus of people who are at a similar, overconfident point on the Dunning-Kruger curve. Another value of the Internet is its ability to create ad hoc like-minded communities, but that also means it can convene like-minded groups around wrong-minded opinions. It’s awfully hard to feel chastened or uninformed when any number of other people vocally share your view. (Why, one could fill hours on a major cable-news network simply by filling panels with people on the dashed-line part of the graph above!)

The Internet facilitates ignorance as readily as it does knowledge. It allows us to build reinforcements around our errors. It allows us to share a fake image and wave away concerns because the target of the image is a shared enemy of our in-group. Or, simply, to accept a faked image as real because we’re either unaware of obvious signs of fakery or unaware of the unlikely geopolitics surrounding its implications.

I asked Farid, the fake-photo expert, how normal people lingering at the edge of an “oh, wait” moment might avoid sharing altered images.

“Slow down!” he replied. “Understand that most fake news/images/videos are designed to be sensational or outrageous and get you to respond quickly before you’ve had time to think. When you find yourself reacting viscerally, take a breath, slow down, and don’t be so quick to share/like/retweet.”

Unless, of course, your goals are both to be sensational and to get retweets. In that case, go ahead and share the image. You can always rationalize it later.

[Source: This article was published in washingtonpost.com by Philip Bump - Uploaded by the Association Member: Alex Gray]

Categorized in Investigative Research

[This article was originally published in help123.sg - Uploaded by AIRS Member: Dorothy Allen]

The internet is full of websites that are fake or fraudulent, and we understand that it can be challenging to determine if a website is credible. Here are some tips you can use to find out if a website is legitimate:

1) Ensure that the contact information is valid

Credible websites provide updated and accurate contact information. Legitimate companies will always list ways you can get in touch with them. Always validate the contact information provided if you are unsure of its credibility.

2) Look out for spelling or grammatical mistakes

Spelling mistakes and grammatical inconsistencies on a website are an indication that the site may not be credible. Legitimate companies and website owners make the effort to present information in a clear, error-free manner.

 

3) Double-check the web address to make sure it is the original

The website address bar contains vital information about where you are on the internet and how secure the page is. Paying attention to these details can minimize the risk of falling into a phishing scam or any other kind of scam that hackers or cybercriminals have created to dupe web users.

Many fraudulent websites use domain names that reference well-known brands to trick unknowing users into providing sensitive personal information. Always exercise caution and make sure the site you are visiting is the official one. (A quick way to catch lookalike domains is sketched below.)
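Here is a minimal sketch of a lookalike-domain check. The brand list and the 0.75 threshold are illustrative assumptions, not a vetted rule; real phishing detection also examines registration age, certificates, and more.

```python
# A minimal sketch: flag domains that look suspiciously close to known brands.
# The brand list and 0.75 threshold are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["paypal.com", "google.com", "microsoft.com", "amazon.com"]

def closest_brand(domain: str) -> tuple[str, float]:
    """Return the most similar known brand and the similarity ratio (0-1)."""
    scored = [(b, SequenceMatcher(None, domain, b).ratio()) for b in KNOWN_BRANDS]
    return max(scored, key=lambda pair: pair[1])

if __name__ == "__main__":
    for domain in ["paypa1.com", "rnicrosoft.com", "example.org"]:
        brand, score = closest_brand(domain)
        verdict = "suspicious lookalike" if 0.75 <= score < 1.0 else "no close match"
        print(f"{domain}: {brand} ({score:.2f}) -> {verdict}")
```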

4) Ensure that the website is secure

Another piece of vital information that can be picked up from the website address bar is the website's connection security indicator. A secure connection is indicated by the use of "HTTPS" instead of "HTTP", which means that any information exchanged between you and the website is encrypted in transit. (A scripted version of this check is sketched below.)
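For a programmatic version, here is a minimal sketch using Python's standard ssl module. Note the caveat in the comments: a valid certificate proves the connection is encrypted, not that the site is legitimate, since phishing sites can also obtain valid certificates.

```python
# A minimal sketch: verify that a host completes a TLS handshake with a
# certificate that validates against the system trust store. Caveat: this
# proves encryption, not legitimacy; phishing sites can have valid certs too.
import socket
import ssl

def has_valid_https(hostname: str, port: int = 443, timeout: float = 5.0) -> bool:
    """True if certificate-chain and hostname validation both succeed."""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True
    except (ssl.SSLError, OSError):
        return False

if __name__ == "__main__":
    print(has_valid_https("example.com"))  # True for a site with valid HTTPS
```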

5) Is the offer too good to be true?

Fraudulent and scam websites use low prices or deals that are too good to be true to lure internet users and shoppers into purchasing fake, counterfeit, or even non-existent products. If you encounter a website offering prices that sound too good to be true, be suspicious. Always ensure that the website is legitimate before making any purchase!

Categorized in Internet Privacy

Source: This article was published in theverge.com by Dami Lee - Contributed by Member: Olivia Russell

There’s no mention of ‘fake news,’ though

There are more young people online than ever in our current age of misinformation, and Facebook is developing resources to help youths better navigate the internet in a positive, responsible way. Facebook has launched a Digital Literacy Library in partnership with the Youth and Media team at the Berkman Klein Center for Internet & Society at Harvard University. The interactive lessons and videos can be downloaded for free, and they’re meant to be used in the classroom, in after-school programs, or at home.

 

Created from more than 10 years of academic research and “built in consultation with teens,” the curriculum is divided into five themes: Privacy and Reputation, Identity Exploration, Positive Behavior, Security, and Community Engagement. There are 18 lessons in total, available in English; there are plans to add 45 more languages. Lessons can be divided into three different age groups between 11 and 18, and they cover everything from having healthy relationships online (group activities include discussing scenarios like “over-texting”) to recognizing phishing scams.

The Digital Literacy Library is part of Facebook’s Safety Center as well as a larger effort to provide digital literacy skills to nonprofits, small businesses, and community colleges. Though it feels like a step in the right direction, curiously missing from the lesson plans are any mentions of “fake news.” Facebook has worked on a news literacy campaign with the aim of reducing the spread of false news before. But given the company’s recent announcements admitting to the discovery of “inauthentic” social media campaigns ahead of the midterm elections, it’s strange that the literacy library doesn’t call attention to spotting potential problems on its own platform.

Categorized in Internet Privacy

Source: This article was published in internetofbusiness.com by Malek Murison - Contributed by Member: Carol R. Venuti

Facebook has announced a raft of measures to prevent the spread of false information on its platform.

Writing in a company blog post on Friday, product manager Tessa Lyons said that Facebook’s fight against fake news has been ongoing through a combination of technology and human review.

However, she also wrote that, given the determination of people seeking to abuse the social network’s algorithms for political and other gains, “This effort will never be finished and we have a lot more to do.”

Lyons went on to announce several updates and enhancements as part of Facebook’s battle to control the veracity of content on its platform. New measures include expanding its fact-checking programme to new countries and developing systems to monitor the authenticity of photos and videos.

 

Both are significant in the wake of the Cambridge Analytica fiasco. While fake news stories are widely acknowledged or alleged to exist on either side of the left/right political divide, concerns are also growing about the fast-emerging ability to fake videos.


Meanwhile, numerous reports surfaced last year documenting the problem of teenagers in Macedonia producing some of the most successful viral pro-Trump content during the US presidential election.

Other measures outlined by Lyons include increasing the impact of fact-checking, taking action against repeat offenders, and extending partnerships with academic institutions to improve fact-checking results.

Machine learning to improve fact-checking

Facebook already applies machine learning algorithms to detect sensitive content. Though fallible, this software goes a long way toward ensuring that photos and videos containing violence and sexual content are flagged and removed as swiftly as possible.

Now, the company is set to use similar technologies to identify false news and take action on a bigger scale.

In part, that’s because Facebook has become a victim of its own success. With close to two billion registered users, one billion regularly active ones, and over a billion pieces of content posted every day, it’s impossible for human fact-checkers to review stories on an individual basis, at least not without Facebook employing vast teams of people to monitor citizen behavior.

Lyons explained how machine learning is being used, not only to detect false stories but also to detect duplicates of stories that have already been classed as false. “Machine learning helps us identify duplicates of debunked stories,” she wrote.

“For example, a fact-checker in France debunked the claim that you can save a person having a stroke by using a needle to prick their finger and draw blood. This allowed us to identify over 20 domains and over 1,400 links spreading that same claim.”
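As a rough illustration of duplicate detection, here is a minimal sketch assuming a TF-IDF bag-of-words representation and cosine similarity via scikit-learn. Facebook’s production models are not public, and the 0.3 threshold below is an arbitrary illustration.

```python
# A minimal sketch of duplicate detection for debunked claims, assuming a
# TF-IDF + cosine-similarity approach; Facebook's actual models are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = ["You can save a person having a stroke by pricking their finger with a needle to draw blood."]
candidates = [
    "Needle prick to the finger saves stroke victims, doctors won't tell you!",
    "City council approves new bicycle lanes downtown.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(debunked + candidates)
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()

for text, score in zip(candidates, scores):
    label = "likely duplicate" if score > 0.3 else "unrelated"  # illustrative threshold
    print(f"{score:.2f} {label}: {text}")
```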

 

The big-picture challenge, of course, is that real science is constantly advancing alongside pseudoscience, and new or competing theories constantly emerge, while others are still being tested.

Facebook is also working on technology that can sift through the metadata of published images to check their background information against the context in which they are used. While fake news is a widely known problem, the cynical deployment of genuine content, such as photos, in false or deceptive contexts can be a more insidious one.

Machine learning is also being deployed to recognise where false claims may be emanating from. Facebook filters are now actively attempting to predict which pages are more likely to share false content, based on the profile of page administrators, the behavior of the page, and its geographical location.
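To make that idea concrete, here is a minimal sketch of such a page-level classifier. The features, training data, and choice of logistic regression are illustrative assumptions, not Facebook’s actual system.

```python
# A minimal sketch of flagging pages likely to share false content, assuming
# hand-picked page features and a logistic-regression classifier; all values
# here are illustrative, not Facebook's.
from sklearn.linear_model import LogisticRegression

# Features per page: [admin_country_mismatch, posts_per_day, prior_false_flags]
X = [
    [1, 40, 5],   # admins abroad, very high volume, repeat offender
    [0, 2, 0],    # local admins, low volume, clean history
    [1, 25, 2],
    [0, 5, 0],
]
y = [1, 0, 1, 0]  # 1 = has shared false content before

model = LogisticRegression().fit(X, y)
new_page = [[1, 30, 1]]
print(f"P(shares false content) = {model.predict_proba(new_page)[0][1]:.2f}")
```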

Internet of Business says

Facebook’s moves are welcome and, many would argue, long overdue. However, in a world of conspiracy theories – many spun on social media – it’s inevitable that some will see the evidenced, fact-checked flagging-up of false content as itself being indicative of bias or media manipulation.

In a sense, Facebook is engaged in an age-old battle, belief versus evidence, which is now spreading into more and more areas of our lives. Experts are now routinely vilified by politicians, even as we still trust experts to keep planes in the sky, feed us, teach us, clothe us, treat our illnesses, and power our homes.

Many false stories are posted on social platforms to generate clicks and advertising revenues through controversy – hardly a revelation. However, red flags can automatically be raised when, for example, page admins live in one country but post content to users on the other side of the world.

“These admins often have suspicious accounts that are not fake, but are identified in our system as having suspicious activity,” Lyons told BuzzFeed.

An excellent point. But some media magnates also live on the other side of the world, including – for anyone outside of the US – Mark Zuckerberg.

Categorized in Social

The fake news problem is anything but fake. It has flooded all social media platforms and remains an issue that many believe shaped the results of the 2016 American election. Well, it turns out that Microsoft’s Bing also had a fake news problem. One YouTube channel gamed the Microsoft search engine and flooded it with hoaxes and fake news videos (via The Verge).

The heart of the problem was Bing’s autofill feature. For example, when a user clicks on the News section of Bing, the search bar can be auto-filled with a “Top Stories” suggestion. After clicking through, the same “top stories” query will then follow the user and autofill through other sections of the search engine, including Maps, Images, and, more importantly, Videos.

 

It is the Videos section where “Top Stories” went a bit rogue and linked users to fake news videos from the “Top Stories Today” YouTube channel. According to The Verge, examples of fake videos from the channel included “Breaking: Germany demands immediate prosecution of Obama” and “Russian is about to take out Obama permanently.” These videos reportedly racked up 83.6 million views and were obviously aimed at promoting Donald Trump and criticizing Hillary Clinton and Barack Obama.

The fake news videos in Bing (Image via The Verge)

Microsoft has since removed this YouTube channel from the search results, and at the time of writing, we were unable to find these videos via a “Top Stories” query in Bing Videos. Instead, we were linked to videos from the USA Today YouTube channel, a much more reliable source. Searches for “Top News today,” though, still linked us to fake news videos. Microsoft provided the following statement about this issue:

“As soon as we become aware of this type of content, we take action to remove it from news search results, which we are doing in this case.”

Bing previously received a Fact Check label feature to help users identify fake news, but the label only applied to web searches and not videos. Safe to say that Microsoft may have learned a lesson in this instance. Do you think that Bing needs more fact checking features? Let us know your thoughts in the comments below.

Source: This article was published in onmsft.com by Arif Bacchus

Categorized in Search Engine

How Top Stories Today gamed the system

Over the course of the last several years, every major social platform has been plagued by fake news. Now Bing, Microsoft’s search engine, has a fake news problem of its own.

Because of how the search engine’s autofill feature works, people who visit Bing looking for news videos may be redirected to a flood of fake news videos, all generated by a single source. You can see how it works for yourself: click on the “News” tab from Bing’s homepage. The page autofills the search bar with “Top stories.” Now travel to any other search tab, including “Maps” or “Images” and you’ll see that the search bar retains the “Top stories” query. Autofilling “Top stories” into the search bar appears to be an innocuous design decision — until you hit the “Video” tab.

There, you’ll see a wall of videos including “Breaking: Germany demands immediate prosecution of Obama”; “The Royal wedding in jeopardy,” and “Russian is about to take out Obama permanently.” Many of the videos promote moves made by President Donald Trump, and offer criticism of former President Barack Obama and Hillary Clinton. Collectively, the videos have earned 83.6 million views.

 

And every video comes from one YouTube account: Top Stories Today, an account that appears to have been built to game Bing’s design. The channel is devoted to promoting false and sensationalized news videos narrated by synthesized voices, which often speak in a kind of gibberish. “We report the genuine news and circumstances occurring the world over,” reads the account’s “about” page on YouTube. “Genuine Reports that the predominant press doesn’t need you to think about! We are your #ONE source for the most vital world event and stories happening every day!”

In content and in tone, Top Stories Today’s videos are reminiscent of the hoaxes that spread virally on Facebook and other platforms during the 2016 election.

“As soon as we become aware of this type of content, we take action to remove it from news search results, which we are doing in this case,” a Microsoft spokeswoman said in a statement. A message sent to Top Stories Today was not returned.

Source: This article was published in theverge.com by Casey Newton

Categorized in Search Engine

Group of 11 British MPs flew to Washington at a cost of £30,000 to taxpayers. But why?

The usual practice at the start of a select committee hearing is for the chair to thank the witnesses for having made the effort to come. At the digital, culture, media and sport committee’s latest hearing on “fake news,” it was the other way round. For the first time in parliamentary history, an entire committee had upped sticks and decamped to the US.

Quite why they had chosen to do so was not altogether clear. As far as anyone was aware, Google, YouTube, and Facebook all had senior executives working in the UK who were just as qualified to give evidence as their US counterparts. But on the off chance that the committee was hell-bent on hearing from the Americans, you’d have thought it was a great deal cheaper and much less of an organizational nightmare to fly them to the UK. After all, some of them were halfway to London, having already flown 3,000 miles from Silicon Valley to join the committee in Washington.

Some might call it a nice winter break at an estimated cost of £30,000 to the British taxpayer. The 11 MPs preferred to call it thoroughness and, to mark the occasion, they had had special lapel badges made for themselves. Every trip abroad deserves a souvenir. And after a day or so to acclimatize and recover from jet lag – the committee flew out to the US on Tuesday – everyone was gathered in an echoey white hall at George Washington University for a 9am start.

 

First in the firing line were Richard Gingras, a dead ringer for Donald Sutherland as well as being vice-president of news at Google, and Juniper Downs, the global head of public policy at YouTube. Both were at pains to say how pleased they were to be there, how much they admired the work of the committee and how much they hated the fake news. Just in case anyone had not been paying attention to this, they repeated how much they hated the fake news.

The committee chair, Damian Collins, is a much shrewder operator than he sometimes appears and probed them rather more forensically than they expected on just how much they put the profit principle above such dreary considerations as monitoring fake news and making sure people weren’t using their platforms to influence election outcomes. “We’ve got 10,000 raters to make sure people don’t misuse the Google search engines,” Gingras insisted. In which case, Collins observed, why was it that when you typed in Jew, the auto-complete function more often than not took you to an antisemitic website? Gingras shrugged. No one was perfect.

“It’s mission critical for us,” said Downs, when asked what Google-owned YouTube did to ensure the veracity and provenance of the news videos posted on its site. “We spend tens of millions of dollars on security.” She was then asked how much YouTube made in total. Downs bounced up and down in her chair nervously. “I don’t know,” she squeaked. Collins filled her in: $10bn. So YouTube was spending 0.1% of its earnings on security. Downs shrugged. Sounded plenty to her.

Things didn’t improve when Facebook’s Monika Bickert, its head of global policy management, and Simon Milner, its policy director for the UK, Middle East, and Africa, got their turn in front of the committee. Milner’s appearance was especially baffling as he is a Brit through and through and could far more easily have been questioned in London.

Like Google and YouTube before them, the Facebook execs were mortified that anyone might have been using their websites for anything other than the greater glory of self-improvement. In fact, they were so appalled that they were now voluntarily implementing security measures that the regulators had recently imposed on them.

None of it was terribly enlightening. Just about the only thing we did learn was new media execs talk the same bullshit the world over. And 11 MPs probably didn’t need to travel 3,000 miles to discover that.

Source: This article was published in theguardian.com by John Crace

Categorized in News & Politics

There is plenty of fake news circulating on the internet today. The vast majority falls into the category of “Nobody capable of reading could actually be dumb enough to think this is true,” and yet the stories are liked, shared, and promoted by millions who inexplicably believe them. Of course, it doesn’t help when the fake news stories are being offered up by what you might otherwise assume to be a credible source of information—like from Microsoft’s Bing.

If you visit the Bing website, it defaults to a beautiful picture of the day with a white Bing search box in the middle. Across the top are links for News, Maps, Videos and Images. If you click on News, it automatically populates the search field with “top stories”. If you then click the other links, the “top stories” term remains, and the results that appear are related to the search “top stories.”

A YouTube channel dedicated to propagating ludicrous fake news figured out how to exploit this design flaw and game the Microsoft Bing page into displaying their stories—almost exclusively. The name of the YouTube channel is “Top Stories Today,” so it automatically ranks higher in a search for “top stories” because it’s right in the name.
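To see why the name match is so effective, consider a toy keyword-overlap ranker. Bing’s real ranking is far more sophisticated and not public, but the failure mode is the same whenever a channel name echoes the query.

```python
# A minimal sketch of the kind of naive keyword-overlap ranking that a channel
# name like "Top Stories Today" can game; Bing's real ranker is not public.
def naive_score(query: str, title: str, channel: str) -> int:
    """Count query terms that appear in the title or channel name."""
    terms = query.lower().split()
    haystack = f"{title} {channel}".lower()
    return sum(term in haystack for term in terms)

videos = [
    ("Breaking: Germany demands immediate prosecution of Obama", "Top Stories Today"),
    ("Evening news roundup", "USA Today"),
]

# The channel whose very name contains the query terms floats to the top.
for title, channel in sorted(videos, key=lambda v: -naive_score("top stories", *v)):
    print(naive_score("top stories", title, channel), title, "-", channel)
```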

The titles of most of the videos make it obvious that they’re blatantly false. If that weren’t enough, though, the source should also be a tip-off. Any tinfoil-hat-wearing nutjob who thinks the Earth is flat or that Infowars is legitimate news can cobble together a YouTube video to convey whatever hallucinogenic, fever-inspired conspiracy theory they wish.

 

I was reluctant, but I took one for the team and clicked on a link. The narrator's voice sounds computer-generated—like if you mixed Sean Hannity and Max Headroom—so that too should be a hint that this news is a few apples short of a bushel. The “story” I clicked breathlessly “enlightened” me about what an awesome job actor James Woods did spewing verifiably false things about former President Barack Obama on Twitter.

The danger here is that someone who doesn’t know any better, and—for whatever reason—can’t pick up on all the clues, will see these videos displayed among videos from MSN and USA Today, and actually believe them. The tricky part about a decent fake news story is that it contains just enough truth or plausible content to suck you in, so by the end, you’re at least thinking, “Huh. Maybe?” Then, those people will go on Facebook and post them to share the shocking news with their friends and family and spark heated partisan debates.

For what it’s worth, this only seems to happen if the first link you click on from the Bing website is News. If you click Videos, Maps or Images first, the search field is not automatically populated with “top stories”, and even if you subsequently visit the News link it doesn’t populate with “top stories” or remain persistent when you go to the other links.

Hopefully, Microsoft is now aware of this flaw and will take steps to do something about it. That is just the tip of the iceberg, though. There’s a lot more that search engines, social media sites, and legitimate news sources need to do to counter the rising tide of fake news and restore some sanity online.

Update: Microsoft did, in fact, take immediate action to remove the YouTube fake news links from the Videos feed on Bing as quickly as possible once it was notified. The behavior of automatically populating the search criteria with "Top Stories" when you click on News first still occurs, but the fake news YouTube videos have been stripped from the results.

Source: This article was published in forbes.com by Tony Bradley

Categorized in News & Politics

HIGHLIGHTS

  • Google Search finds quality of newsy content algorithmically
  • Search results to omit fake news through improved ranking signals
  • India marks 2x growth in daily active search users on Google

Google Search already receives some artificial intelligence (AI) tweaks to enhance user experience. But with the swift growth of inferior-quality content, Google is now in the process of improving the quality of its search results. Speaking on the sidelines of Google for India 2017 on Tuesday, VP of Engineering Shashidhar Thakur said that Google is making continuous efforts to cut down on the amount of fake news content listed on its search engine.

 

"Whether it's in India or internationally, we make sure that we uphold a high bar when it comes to the quality of newsy content. Generally, in search, we find this type of content algorithmically," Thakur told Gadgets 360. The algorithms deployed behind Google Search look for the authoritativeness of the content and its quality to rank them appropriately. Thakur said that this continuous improvement will uplift the quality of the search results over time.

"We improve ranking signals on our search engine from time to time to overcome the issue of fake news. Signals help the system understand a query or the language of the query or the text or matching different keywords to provide relevant results," explained Thakur.

Like other search engines that use code-based bots to crawl webpages, Google Search continuously indexes hundreds of billions of webpages. Once indexed, each webpage is added to entries for all of the words that appear on it. This data then feeds the Knowledge Graph, which not only matches particular keywords but also factors in user interests to deliver relevant results.
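The word-to-page entries described here are, in essence, an inverted index. Below is a minimal sketch of that data structure; production search indexes add term positions, ranking signals, and sharding, none of which is shown.

```python
# A minimal sketch of an inverted index: map each word to the pages containing
# it, then answer a query by intersecting the per-word page sets.
from collections import defaultdict

pages = {
    "page1": "google search indexes billions of webpages",
    "page2": "webpages are ranked by quality signals",
}

index: dict[str, set[str]] = defaultdict(set)
for page_id, text in pages.items():
    for word in text.split():
        index[word].add(page_id)

# Query: pages containing every term.
query = "webpages signals"
results = set.intersection(*(index.get(w, set()) for w in query.split()))
print(results)  # {'page2'}
```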


"Inferior-quality content on the Web isn't a new and special problem," Thakur said. "But certainly, it is a problem that we need to solve by continuous tuning and making the underlying search algorithms better. This is indeed a very crucial area of focus for us."

Google isn't the only Web company taking the menace of fake news seriously. Facebook and Microsoft's Bing are also testing new developments to curb fake news. A recent report by Gartner predicted that fake news will grow multifold by 2022 and that people in mature economies will consume more false information than information that is true and fair.

Having said that, Google dominates the Web space and its search engine is the most prominent target for counterfeit content. On the Google for India stage, Thakur revealed that the number of daily active search users in India has doubled in the past year. The Mountain View, California-headquartered company also released Google Go, a lightweight version of the flagship Google app for Android devices.

 

Source: This article was published in gadgets.ndtv.com by Jagmeet Singh

 

Categorized in Search Engine

Apparently, the world’s leading search engine (by a very wide margin) feels that we aren’t capable of discerning the difference between news that is propaganda and news that is real.  Recent developments that received almost no coverage by the western media show us the lengths that Google is willing to go to in its efforts to protect us from Russian-sourced fake news.

Before we go any further in this posting, let's look at a study from 2009 that examined users' online behaviour. According to the study, which looked at the internet behaviour of 109 subjects, 91 percent did not go past the first page of search engine results and 36 percent did not go beyond the first three search results. This means that any external "adjustments" to search engine results could be used to introduce a significant bias from the perspective of users.

At the recent Halifax International Security Forum held in Halifax, Nova Scotia (Canada for those of you that aren’t familiar with Canadian geography), during a question and answer session, Alphabet’s (the parent company of Google) Executive Chairman, Eric Schmidt made some very interesting and telling comments.

The basic question asked of Dr. Schmidt at the beginning of his exchange with the moderator and various members of the audience was “What is Google doing to fight extremism and fake news“.

 

Here are excerpts from his responses to several questioners:

“Ten years ago, I thought that everyone would be able to deal with the internet because the internet, we all knew, was full of falsehoods as well as truths.  It’s been joked for years about the sewer part of the internet: crazy people, crazy ideas and so forth.  But the new data is that the other side, actors trying either to spread misinformation or worse, have figured out how to use that information for their own good, whether it’s amplification around a message or repeating something a hundred times so that people actually believe it even though it’s obviously false and that kind of thing.  My own view is that these patterns can be detected and that they can be taken down or de-prioritized.  One of the sort of problems in the industry is that we came from, shall we say, a more naive position, right, that illegal actors and these actors would not be so active.  But now, faced with the data and what we’ve seen from Russia in 2016 and with other factors around the world, we have to act….


The most important thing, I think that we can do is to ensure that as the other side gets more automated, we also are more automated.   The way to think about it is that much of what Russia did was largely manual, literally troll farms as they’re called, of human beings in Moscow.  We know this because they were operating on Moscow time and were appearing to operate in Virginia and Ohio and Wyoming and so forth and you can imagine the next round of that will be much more automated.

We started with the general American view that bad speech will be replaced by good speech in a crowded network and the problem in the last year is that that may not be true in certain situations especially when you have a well-funded opponent who’s trying to actively spread this information.  So, I think everyone is sort of grappling with “Where is that line” (i.e. the line of censorship).    

I am strongly not in favour of censorship, I am very strongly in favour of ranking and that’s what we do…You would de-rank, that is lower rank, information that was repetitive, exploitive, false, likely to have been weaponized and so forth.”

“It’s very difficult for us to ascertain truth.”
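Schmidt’s distinction between banning and de-ranking can be made concrete with a toy example: results keep their relevance scores, but flagged sources are multiplied by a penalty before sorting. The sources, scores, and penalty value below are illustrative assumptions, not Google’s actual mechanism.

```python
# A minimal sketch of "de-ranking": flagged sources are not removed, but their
# relevance scores are penalized before results are sorted. All values are
# illustrative assumptions, not Google's actual system.
FLAGGED_SOURCES = {"example-propaganda.net"}
PENALTY = 0.2  # flagged results keep only 20% of their relevance score

results = [
    {"url": "news-site.com/story", "relevance": 0.80},
    {"url": "example-propaganda.net/story", "relevance": 0.95},
]

for r in results:
    domain = r["url"].split("/")[0]
    r["adjusted"] = r["relevance"] * (PENALTY if domain in FLAGGED_SOURCES else 1.0)

# The flagged site still appears, just far lower in the ordering.
for r in sorted(results, key=lambda r: -r["adjusted"]):
    print(f'{r["adjusted"]:.2f}  {r["url"]}')
```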

Given that background on Dr. Schmidt’s preferred approach to fake news, the following comments are particularly telling.  When asked by a questioner whether it was necessary for Google to monetize “Russian propaganda outlets” such as Sputnik with Google AdSense, a function that provides Sputnik with income when a reader clicks on a Google ad displayed on a webpage, Dr. Schmidt answered:

“So, we’re well aware of this one and are working on detecting this kind of scenario you are describing and again, de-ranking those kinds of sites.  It’s basically RT and Sputnik are the two and there’s a whole bunch of coverage about what we’re doing there.  But we’re well aware of it and we’re trying to engineer the system to prevent it.  We don’t want to ban the sites, that’s not how we operate.”

 

Given that most users go no further than the first page of search engine results, one can see how easily Google could manipulate “the news,” nearly eliminating the Russian viewpoint.

With that, let’s look at how Google/Alphabet/Dr. Schmidt assisted financially during the latest election cycle:

Here are the top recipients: [chart not reproduced]

Note that Hillary Clinton received $1.588 million compared to Donald Trump’s very meagre $22,564.  Perhaps at least some of Dr. Schmidt’s angst about Russia’s alleged involvement in the 2016 U.S. election is connected to the fact that his candidate of choice lost.

Having spent some time in Russia, I found that there were no access problems to websites from around the world.  From my perspective, it certainly did not appear that the Russian government was doing anything to prevent its citizens from accessing all of the content that they wish to access from anywhere in the world.  What’s next? Is Google going to write an algorithm that will prevent Russians, Chinese, and other people around the world from reading their own government’s “propaganda” that may not be particularly pro-Washington and, by doing so, force them to read the American version of the “truth”?

If you wish to watch the entire interaction with Dr. Schmidt, you can go to the link here.  His comments start at the one hour and six minute mark.

I’ve said it before: George Orwell was right; he was just a few decades ahead of his time.  Non-government actors in the United States, including Google, have learned an important lesson from the 2016 election, and we can pretty much assure ourselves that the next election will see significant massaging when it comes to what we read and hear.  At least when it comes to Google, we know that they have our backs when it comes to fake news.

Source: This article was published in oyetimes.com by Glen Asher

Categorized in How to