Internet company uses technology to find potentially spurious information then turns to government agencies for verification, its president says

China’s biggest search engine, Baidu, investigates about 3 billion claims of fake news every year and works closely with government agencies to tackle an issue it calls a global challenge.

The spread of rumours and false information is a problem faced by companies around the world that requires technology and cooperation with external organisations to fix, President Zhang Yaqin told Bloomberg Television. 

Baidu, one of the country’s three largest internet players, employs technology to spot potentially spurious information before turning to local agencies such as the cyberspace administration to verify items, he said.

Pressure is building on social media services from Google to Twitter to try to curb the proliferation of fake news and targeted ads that critics say have an outsized effect on public discourse and elections.

Facebook’s chief security officer, Alex Stamos, said last week it was very difficult to spot fake news and propaganda using computer programs, a view echoed by former Microsoft chief executive Steve Ballmer.

Companies in China, where freedom of speech is heavily curtailed by censorship programs, have long used a mix of advanced technologies and human cybercops to police the internet and suppress opinions deemed to threaten social harmony.

“Every year we see somewhere around 3 billion claims, requests that we need to verify that might turn out to be fake news,” Zhang said. “We’re using a combination of technology and content authorisation to minimise the fake news.

“We have an obligation to make sure the user gets good content, but it continues to be a challenge for us, for other companies in China, and companies in the US,” he added.

Zhang also said the company was expanding its artificial intelligence labs in America and would likely attempt to acquire more companies there as it prepares to put driverless cars on Chinese streets from 2018.

“We will probably see cars as early as next year,” he said. “In three to five years you will see some of the cars on the street as commercial vehicles.”

Source: scmp.com

When it comes to internet trolls, online harassment and fake news, there’s not a lot of light at the end of the online tunnel. And things are probably going to get darker.

Researchers at the Pew Research Center and Elon University’s Imagining the Internet Center asked 1,537 scholars and technologists what they think the future of the Internet – in terms of how long people will continue to treat each other like garbage – holds. An overwhelming 81 percent said the trolls are winning.

Specifically, the survey asked: “In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?”

Forty-two percent of respondents think the internet will stay about the same over the next 10 years, while 39 percent said they expect discourse to get even more hostile. Only 19 percent predicted any sort of decline in abuse and harassment. Pew stated that the interviews were conducted between July 1 and August 12 – well before the term “fake news” started making daily headlines.

“People are attracted to forums that align with their thinking, leading to an echo effect,” Vint Cerf, a vice president at Google, said. “This self-reinforcement has some of the elements of mob (flash-crowd) behavior. Bad behavior is somehow condoned because ‘everyone’ is doing it.”

Respondents could submit comments with their answers, and the report is chock-full of them: literally hundreds of remarks from professors, engineers and tech leaders.

Experts attributed the internet’s rotting culture to every imaginable factor: the rise of click-bait, bot accounts, unregulated comment sections, social media platforms serving as anonymous public squares, the hesitation of anyone who avoids condemning vitriolic posts for fear of stepping on free speech or violating First Amendment rights, and even someone merely having a bad day.

The steady decline of the public’s trust in media is another unhelpful factor. People have historically taken their barometer for civil discourse from news organizations, a dynamic that social media and the cable news format have eroded.

“Things will stay bad because to troll is human,” the report states. Basically, humanity has always been awful; now it’s just in plain sight.

But setting up a system simply to punish the bad actors isn’t necessarily the solution, and could result in a sort of “Potemkin internet.” The term comes from Grigory Potemkin, a Russian military leader in the 18th century who fell in love with Catherine the Great and built fake villages along one of her routes to make it look like everything was going great. A “Potemkin village” is built to fool others into thinking a situation is far better than it is.

“The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor,” Susan Etlinger, a technology industry analyst, told Pew. “Of course, this is already happening, just out of sight of most of us.”

Tor is free, downloadable software that lets you anonymously browse the web. It’s pretty popular among trolls, terrorists and people who want to get into the dark web or evade government surveillance.

But these tools aren’t always employed for dark purposes.

“Privacy and anonymity are double-edged swords online because they can be very useful to people who are voicing their opinions under authoritarian regimes,” Norah Abokhodair, an information privacy researcher at the University of Washington, wrote in the report. “However the same technique could be used by the wrong people and help them hide their terrible actions.”

Glass-half-full respondents did offer a glimmer of hope. Most of the experts on the side of “it’s going to get better” placed their bets on technology’s ability to advance and serve society. One anonymous security engineer wrote that “as the tools to prevent harassment improve, the harassers will be robbed of their voices.”

But for now, we have a long way to go.

“Accountability and consequences for bad action are difficult to impose or toothless when they do,” Baratunde Thurston, a fellow at MIT Media Lab who has also worked at The Onion and Fast Company, wrote. “To quote everyone ever, things will get worse before they get better.”

Source: nypost.com

“Fake news” has become a topic of household conversation. It is more important than ever to have a firm understanding of what authentic and reputable journalism is, and what is actually fake news.

It is more important than ever that individuals be proactive in differentiating fake news from real news, especially in the social media world. Consider some of the points below to get your education started.

What Actually is Fake News?

According to GCF Learn Free, “…Fake news is any article or video containing untrue information disguised as a credible news source. Fake news typically comes from sites that specialize in bogus or sensationalized stories. It tends to use provocative headlines, like Celebrity endorses not brushing teeth or Politician selling toxic waste on the black market.”

However, as GCF points out, these stories are becoming increasingly dangerous in the digital age, with many people consuming stories on social media without fact checking or bothering to confirm that the events behind such “shock” headlines ever happened. Once these stories are shared and popularized, enough people accept them as truthful, often without even realizing it. This cycle is vicious in the social media world, as the stories that make it to the top of the news feed are all too often untruthful clickbait.

Jokes and Satire Are Not Fake News

It is also incredibly important to mention that satire sites like The Onion and Clickhole, which feature funny stories based on relevant current events, are not “fake news.” They are smart satire pieces intended to be humorous — not real — and their entire sites are based around their readers being knowledgeable about this strategy and theme. With branding like Clickhole’s own “Because Everything Deserves to Go Viral” or The Onion’s “America’s Finest News Source,” their articles’ joking nature is intended to be common knowledge.

A Word on Mainstream News

For the most part, trust in major news sources really lies in the eye of the beholder. See the image below highlighting the most trusted major news sources in America (most of which are actually British), as found by Pew Research Center and cited by Business Insider. While there are discrepancies based on ideological views, most major news sources do have to undergo editorial reviews and are recognized as being prestigious forms of journalism. The BBC, PBS, NPR, the Wall Street Journal, ABC, NBC, CNN, USA Today, and Google News were among the most trusted across ideological groups (with the exception of consistently conservative folks — who favor BBC, Google News, and the Wall Street Journal).

Trust levels of news sources by ideological group

How to Differentiate Between Fake News and Real News

There are definitely some things you can do if you are not certain a story is real or fake. Here are some tips to help you differentiate between fake news and real news stories:

What is the Site?

As discussed above, while people fall all over the board ideologically in deciding whether they trust a mainstream news source, the truth is that most major recognized sources of news journalism are not going to be producing clickbait fake news. Most sites that go for “shock” value and produce fake stories are not as recognized. Look into the source itself and see whether it is a website that can be trusted.

Check the Domain

NPR recently reported that many fake news stories use similar URLs and domain names to mimic reputable news sources, but rather than using a .com they use .com.co endings. “This is true even when the site looks professional and has semi-recognizable logos. For example, abcnews.com is a legitimate news source, but abcnews.com.co is not, despite its similar appearance.”
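NPR’s domain tip lends itself to a quick automated check. The sketch below assumes a hand-made allowlist; the domain set and function name are illustrative, not part of any tool cited here:

```python
from urllib.parse import urlparse

# Illustrative list only; a real checker would use a maintained database.
KNOWN_DOMAINS = {"abcnews.com", "washingtonpost.com", "nbcnews.com"}

def looks_like_spoof(url: str) -> bool:
    """Flag hostnames that embed a known news domain but append extra
    labels, e.g. abcnews.com.co mimicking abcnews.com."""
    host = (urlparse(url).hostname or "").lower()
    for real in KNOWN_DOMAINS:
        # Legitimate: the exact domain, or a real subdomain of it.
        legitimate = host == real or host.endswith("." + real)
        if not legitimate and real in host:
            return True
    return False

print(looks_like_spoof("http://abcnews.com.co/story"))       # True
print(looks_like_spoof("https://www.abcnews.com/politics"))  # False
```

The key distinction is that `www.abcnews.com` ends with `.abcnews.com` (a genuine subdomain), while `abcnews.com.co` merely contains the real name with extra labels bolted on.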

What are the Authors’ Sources?

Good news stories contain links to other reputable reporting by respected organizations. They contain interviews with individuals who can confirm or deny they made the claim. They are supported by evidence, dates, and other data that can be fact checked. Be wary of sources that cannot substantiate their claims.

Fact Check!

When in doubt, fact-check the information that you read! You can start with a simple search on the keywords or the event being reported. You can also use sites like PolitiFact, FactCheck.org, and Snopes, all of which are extremely reputable fact-checking sites for a variety of issues and topics (not just politics).

Snopes indicating that a news story is false

Examine the Website Closely

It is important to not look at one story alone but to look at the full spectrum of details on the site. Are there other fake-looking or shocking headlines? What does the overall website look like? How is the user experience? Sometimes doing just a little further digging will make it evident if a news story is fake.

How Informed Users Can Interact

Once you identify if a story is real or fake, you can make a big difference. Do not share stories on social media that are fake and make them more visible. If you notice a friend or family member share a fake story on a social media outlet, do them a favor and comment or message them showing how you found out it was fake so they don’t repeat the same mistake.

If you come across a fake news article, comment on it stating how you arrived at the conclusion it was fake. If everyone does their part to distinguish fake news stories and make them known, then they won’t be shared as easily.

How do you differentiate between fake news and actual news stories? Do you see this as an increasing problem on social media? Let us know your thoughts and what strategies you use for identification on social media.

Author : Amanda

Source : searchenginejournal.com

Categorized in How to

BARACK Obama is planning a coup, fluoride is dulling my IQ and five US Presidents were members of the Ku Klux Klan — well, that’s if you believe the “facts” that Google delivers.

The search engine giant has joined Facebook as a deliverer of fake news, thanks to its reliance on an algorithm that looks for popular results rather than true ones.

Generally, Google escapes a lot of the bad press that other tech giants, quite fairly, cop.

Twitter is a place where nameless trolls say inexcusable things, while Facebook is the place where ignorant people share their ignorant views in a way that is unreasonably popular. Just ask US President Donald Trump.

But now it’s Google’s turn to cop some flak, and it’s because the search engine, rather than just delivering results, also seeks to return what Danny Sullivan of Search Engine Land calls the “one true answer”.

The reason Google is now a spreader of lies and falsehood comes down to the realisation that we Google things we want an answer to.

Google Inc. headquarters in Mountain View, California. Picture: AP

Want to know when World War II ended? You type it into Google. And rather than just a list of links to dozens of websites, you also get a box at the top of the screen with the dates of World War II.

You have a question and now you have an answer.

This way of delivering a fact is called a “featured snippet”. It’s been a feature that Google has delivered since 2014 and, generally, people have been happy. But they’re not happy now because Google’s one true answer, in some cases, is total rubbish.

The problem is particularly highlighted with the Google Home speaker, the smart speaker that in some cases has been delivering dumb answers.

Several people have shared videos on YouTube and Twitter of asking Google Home the question: Is Obama planning a coup?

The real answer would be something like “naw mate, he’s living the good life and glad to be doing so”. The answer, according to Google, is yep — he’s in league with the Chinese.

Likewise, according to Google Home, there have been five US presidents who were members of the Ku Klux Klan. Nope, according to more reliable sources, there is no evidence that any US presidents were members of the Klan although some were racists. (Eight US presidents, including George Washington, owned slaves.)

You can keep going down this rabbit hole of misinformation that is not all right-wing conspiracies. According to Google snippets, Obama’s birth certificate is forged, Donald Trump is paranoid and mentally ill and “republicans = Nazis”.


Not all of the false answers are political. There is medical misinformation too, including the claims that fluoride will lower your IQ and that it took God six days to create the Earth.

Google has issued a statement blaming the misinformation on the algorithm and says people can click on a feedback button on each boxed fact to report it as incorrect.

The problem Google faces in all of this is the amount of misinformation out there.

The “facts” it delivers come from the top ten results for each query. Arguably, Google is the messenger; someone else created the falsehood and spread it.

Sullivan crunched the numbers to work out how Google might fix it.

It could, for instance, assign a person to check each fact.

But given that Google processes 5 billion queries a day and about 15 per cent of them have featured snippets, that would mean checking roughly 750 million facts a day.
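As a quick sanity check on those figures (the 5 billion and 15 per cent numbers come from the article; the arithmetic itself is just illustrative):

```python
# Sanity-checking Sullivan's estimate with the figures quoted above.
queries_per_day = 5_000_000_000   # Google queries per day (per the article)
snippet_share = 0.15              # share of queries with a featured snippet

facts_to_check = queries_per_day * snippet_share
print(f"{facts_to_check:,.0f} facts a day")  # roughly 750 million
```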

Or it could drop the feature altogether, but the problem in the age of Apple Siri, Amazon’s Alexa and Google Home, is that people are now used to asking a device a question and expecting an answer.

Other solutions would be to source the fact more visibly, so that it’s clear when an answer comes from an unreliable site. Or to deliver snippets only when they come from a list of vetted sites, though even that is problematic.

Here is the one real answer. Don’t believe everything you hear — even if the person talking is a smart speaker with artificial intelligence. They’ll say anything.

Source: http://www.news.com.au/technology/gadgets/google-joins-facebook-in-fake-news-cycle-with-algorithm-delivering-false-facts/news-story/1d65166dc1a2ac947aa3c0d10c806721

A beta version of Hoaxy, a search engine designed to track fake news, was released Wednesday by Indiana University's Network Science Institute and its Center for Complex Networks and System Research. Hoaxy indexes stories from 132 sites known to produce fake news, such as WashingtonPost.com.co and MSNBC.website, and allows you to see how these sites' links spread across social media.

Fake news has plagued the internet and social networks for a long time but has grown in prominence in the past year or so, forcing Facebook to introduce new features to flag false articles. The hoaxes have led to real-life consequences, with a fake news creator taking some credit for Donald Trump's White House win and a Washington DC shooting earlier this month related to "Pizzagate." Even Pope Francis has chimed in, comparing the spread of fake news to a literal shit show.

Type any subject, and Hoaxy responds with a list of fake articles related to the search term.

There are even fake news stories about fake news. A search for "Pizzagate," for example, generates 20 results on Hoaxy. Pizzagate itself is a conspiracy theory falsely claiming Hillary Clinton helped run a child sex ring out of a Washington DC pizza place. The lies culminated with a gunman, who claimed to be investigating Pizzagate, opening fire at the targeted pizzeria on December 4.


DC restaurant Comet Ping Pong became tangled in a series of fake news stories that have been dubbed "Pizzagate."

The Washington Post/Getty Images

Following the shooter's arrest, the website DC Clothesline published a fake-news article on December 6 titled "More Evidence Pizzagate Shooting is a PSYOP: The Shooter Has an IMDB Page, He's Literally An Actor." The article claims the shooting was propaganda and faked. The story claims the gunman, Edgar Maddison Welch, is an actor -- much like how Sandy Hook conspiracy theorists say the victims killed in the 2012 school shooting were also actors.

After selecting an article on Hoaxy, you can visualize its influence in two charts, one showing its popularity over time and the other showing how it spread on Twitter.

The DC Clothesline story has been shared 137 times on Twitter and 643 times on Facebook. On the chart, you can watch it move from DC Clothesline's Twitter account through its network of followers and see how the article branches off after that.

The Indiana University research center decided to build Hoaxy and track the spread of fake news to figure out how to address the public's concerns about this issue, Filippo Menczer, the center's director, said in an interview.

"Until we understand the phenomenon, we can't really develop countermeasures," Menczer said.

Using Hoaxy, you can see how the posts are connected on social media.

Hoaxy's developers generate the search results from 132 websites compiled via fake-news warning lists from watchdog sites like Snopes and Fake News Watch.

Hoaxy also tracks the spread of fact-checking articles, which Menczer found don't go as viral as fake articles do.

The search engine also appears to discover stories that may not be fake, per se, but that include a strong bias. A search for "Amazon" finds a December 1 story from Infowars.com titled "Amazon Pushes Islamic Propaganda in New 'Priest and Imam' Commercial." Infowars.com, which traffics in conspiracy theories, gained attention during the presidential campaign for spreading fake news about Democratic nominee Hillary Clinton.

The article is based on a real event: Amazon released a commercial in mid-November featuring a priest and imam that tells a story of two friends ordering gifts for each other with Amazon Prime. However, there is no "Islamic propaganda" in the commercial, just the use of heart-touching humanity to help market Amazon's member services.

Author: Alfred Ng

Source: https://www.cnet.com/news/fake-news-search-engine-tracks-spread-of-lies/

Kay Brown, PR and social manager at Leeds digital marketing agency Blueclaw, writes for Digital City about the phenomenon of ‘fake news’ and its impact on her industry.

President Trump has brought the term ‘fake news’ into the mainstream, with a 1,000 per cent increase in Google searches for the term since November.

For digital marketing agencies like Blueclaw, this has brought a renewed focus on public appetite for authoritative, data-led news and content marketing.

‘Fake news’ accusations have been effective in many cases in part because of public scepticism about the accuracy of ‘official’ mainstream media reporting.

However, Dr Richard Thomas, journalism lecturer at Leeds Trinity University, points to a resurgence of journalistic quality in the UK, prompted by the Leveson inquiry: “Post-Leveson, journalists generally have had to work very hard to regain the trust of the public, even though not all journalists were guilty of the unethical and unlawful behaviour that some went to prison for.”

While some news audiences have made their choice to seek out sources that back up their established political and social viewpoints, others are expressing a desire for accuracy in reporting, and stats they can trust. In digital marketing, the information fatigue that many audiences suffer cannot be ignored and this is why in content marketing, advertising, PR and even SEO, it’s essential to focus on accuracy, results and relevancy.

For Google and other search engines, data-driven relevancy is king with search engine results ranked in order of how useful the website is objectively believed to be for the searcher.

Our content and search strategies are developed with the understanding that news websites are authoritative sources for Google and as such, coverage for a client is key to raising their rankings across their core search terms – so content marketing and PR must be accurate, inspire trust and meet the standards required for coverage.

As search engines and advertising platforms become smarter about the claims that marketers make (and more proactive in the penalties they dish out in terms of lost rankings), the greater the need there is to develop digital marketing strategies that customers, the media and search engines can have faith in.

Just as it was revealed that Wikipedia editors have voted to ban the Daily Mail as a source for the website in all but exceptional circumstances, other online platforms are ever-more discerning about the content they will support.

As well as being aware that traditional media brands are in question more than ever, the way we work with journalists is changing.

There is a greater desire for information and stories that have quantitative data and multiple sources to reinforce their validity. There are also new opportunities for PR professionals to earn worthwhile coverage with content that is genuinely in the public interest.

However, not all data is equal, so the role of both the PR graduate and the journalist is increasingly to identify relevant data sources, dissect data sets and present them in an informative, easy-to-understand way that doesn’t undermine or affect the rigour of the data presented, whether the information is shared in 140 characters or more.

Author: Kay Brown

Source: http://www.yorkshireeveningpost.co.uk/news/fake-news-is-driving-our-appetite-for-data-says-leeds-expert-1-8411590

Fake news. Alternative facts. Media bias. Whatever you want to call it, the Information Age is encountering a crisis of misinformation. And while social media — where like-minded people can share information that supports their worldview — may bear much of the responsibility, search is not blameless.

“Search engines are an essential part of the fake news crisis,” said Paul Levinson, professor of communications and media studies at Fordham University. “If someone searches on a topic, and search engines serve up fake news — not identified as fake news — that obviously fuels and exacerbates the problem.”

The nature of search makes it a difficult problem to combat. Trending topics and heavily linked, heavily trafficked sites (as well as other factors like “dwell time”) naturally move a site to the top of a results queue, regardless of their veracity, leaving it up to users to decide what’s true and what isn’t.
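The ranking dynamic described above can be caricatured in a few lines. Everything here is invented for illustration (the weights, the signals, the function name) and bears no relation to any real engine’s algorithm, but it shows why veracity never enters the ordering:

```python
# Toy popularity-only ranking score. No input measures whether the
# page is actually true, so a well-trafficked hoax scores highly.
def naive_rank_score(inbound_links: int, daily_visits: int,
                     avg_dwell_seconds: float) -> float:
    return (0.5 * inbound_links
            + 0.3 * daily_visits / 1000
            + 0.2 * avg_dwell_seconds)

# A heavily linked, heavily trafficked hoax outranks a sparsely
# linked but accurate page under this scheme:
print(naive_rank_score(5000, 200_000, 45))  # viral hoax
print(naive_rank_score(40, 1_500, 30))      # accurate niche page
```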

Google ran into this problem last year, when listings put a Holocaust-denial site first among results for a simple, “When did the Holocaust happen” search. The search engine giant quickly rectified the problem, telling Search Engine Land that it has changed its algorithm to “help surface more high-quality, credible content on the Web,” though it did not provide details about what those changes were or how it would determine credible and authoritative sites from those that are not.

“It would be very helpful to have algorithms that can quickly distinguish between truth and falsity, but that's not easy to do,” Levinson notes.

The issue is thorny. Search engines are designed to present the information they “think” a user is looking for. Limiting those results based on what’s “true” and “fake” (especially when there can be so much gray in between) can lead to cries of censorship, or even exacerbate the problem.

“Truth should never be suppressed, and neither should lies, untruths or alternative facts,” says Larry Burris, professor at Middle Tennessee State University’s School of Journalism. “Exploring these non-facts can, in reality, help us discover the truth, and thus we should ask, why should we suppress an avenue for helping us discover truth and reality?”

On the display side, Google last year updated its AdSense policies to limit ad placements on pages that “misrepresent, misstate or conceal information” about the site publisher or its content’s primary purpose. Among the examples was “enticing users to engage with content under false or unclear pretenses,” which would presumably include intentionally fake news.

But that’s only one side of Google’s business. The impact of fake news on search marketing (and search marketers’ responsibility to combat it) is blurrier. Obviously, search marketers should work to make sure they’re not part of the problem, credibly sourcing and disseminating provable information in their own content. They should also monitor what’s being said about them to combat any false information as it happens.

“If you’re a marketer, you’re not in the fake news business, [but] in cases where you think your product is being pushed down by fake news, you can seek redress from Google,” says Sastry Rachakonda, CEO of digital marketing company iQuanti, who also suggests monitoring how your information may be repurposed and reused elsewhere on the Internet.

Clearly, this is only the beginning of what will likely be a larger and longer discussion, but it’s one search marketers should keep on their list.

Author: Aaron Baar

Source: http://www.mediapost.com/publications/article/293605/whats-searchs-role-in-combating-fake-news.html

Researchers at Indiana University have developed a search engine, Hoaxy, to demonstrate how fake news and unverified stories spread through social media, after showing how they generated revenue from advertising based on the misinformation.

Filippo Menczer, the director of the university's Center for Complex Networks and Systems Research, is heading the project, a joint effort between the center and the university's Network Science Institute. Menczer and Giovanni Luca Ciampaglia, an assistant research scientist at the university, coordinated the project. Along with a team, the two have created an open platform for the automatic tracking of both online fake news and fact-checking on social media.

The platform tracks social sharing of links on "independent fact-checking sites like snopes.com and factcheck.org, and sources that publish inaccurate, unverified, or satirical claims according to lists compiled and published by reputable news and fact-checking organizations," per the Frequently Asked Questions (FAQ) page on the university's site.

"Social media makes it more likely that I am more exposed to false information that I am likely to believe," Menczer told Reuters.

Hoaxy relies on Web crawlers to extract links to articles posted on fake news Web sites and links to their debunking on fact-checking sites. The post explains that the search engine uses social media APIs to monitor how these links spread through online social networks. The collected data is stored on a database for retrieval and analysis. A dashboard will provide interactive analytics and visualizations.
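In highly simplified form, the tracking loop described above (collect links, record share events from social media APIs, store them for analysis) might look like the sketch below. The class and its fields are hypothetical, not Hoaxy's actual data model:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class TrackedLink:
    """One monitored article URL (hypothetical schema, not Hoaxy's)."""
    url: str
    kind: str  # "claim" or "fact_check"
    shares_by_day: Counter = field(default_factory=Counter)

    def record_share(self, day: str) -> None:
        # In the real system this event would come from a social media API.
        self.shares_by_day[day] += 1

    def total_shares(self) -> int:
        return sum(self.shares_by_day.values())

# Simulated share events for one tracked link:
link = TrackedLink("http://example.com.co/story", kind="claim")
for day in ["2016-12-04", "2016-12-04", "2016-12-05"]:
    link.record_share(day)
print(link.total_shares())  # 3
```

The per-day counter is what makes the "popularity over time" style of chart possible; the diffusion graph would additionally need to record which account each share came from.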

Fake news also once earned Menczer money, during an earlier research phase. Ten years ago, Menczer and colleagues ran an experiment in which 72% of the participating college students trusted links that appeared to originate from friends, even to the point of entering personal login information on phishing sites.

He placed ads on a fake Web page that he created and populated the site with random, computer-generated gossip news. A disclaimer ran at the bottom of the page saying the site contained meaningless text and made-up facts. At the end of the month, he received a check in the mail with earnings from the ads, proving that he could make money off fake news.

Google is actively working to stop fake news from appearing in search engine results and also is taking steps to stop advertisements powered by AdSense, its ad-serving platform, from serving on publisher sites known to publish fake news.

Author: Laurie Sullivan

Source: http://www.mediapost.com/publications/article/291655/researchers-create-search-engine-that-identifies-h.html

Last Thursday, after weeks of criticism over its role in the proliferation of falsehoods and propaganda during the presidential election, Facebook announced its plan to combat “hoaxes” and “fake news.” The company promised to test new tools that would allow users to report misinformation, and to enlist fact-checking organizations including Snopes and PolitiFact to help litigate the veracity of links reported as suspect. By analyzing patterns of reading and sharing, the company said, it might be able to penalize articles that are shared at especially low rates by those who read them — a signal of dissatisfaction. Finally, it said, it would try to put economic pressure on bad actors in three ways: by banning disputed stories from its advertising ecosystem; by making it harder to impersonate credible sites on the platform; and, crucially, by penalizing websites that are loaded with too many ads.
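Of the signals Facebook described, the low share rate among readers is the easiest to sketch. The function below is a toy metric with invented names and thresholds, not Facebook's actual implementation:

```python
from typing import Optional

def dissatisfaction_signal(reads: int, shares: int,
                           min_reads: int = 100) -> Optional[float]:
    """Share rate among readers of an article. An unusually low value
    could flag links that readers regretted clicking. Entirely a toy;
    Facebook has not published its formula."""
    if reads < min_reads:
        return None  # too little data to judge
    return shares / reads

print(dissatisfaction_signal(10_000, 12))  # low rate: possible clickbait
print(dissatisfaction_signal(50, 3))       # None (sample too small)
```

A real system would compare the rate against a baseline for similar articles rather than use an absolute number, since share rates vary widely by topic.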

Over the past month the colloquial definition of “fake news” has expanded beyond usefulness, implicating everything from partisan news to satire to conspiracy theories before being turned, finally, back against its creators. Facebook’s fixes address a far narrower definition. “We’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain,” wrote Adam Mosseri, a vice president for news feed, in a blog post.

Facebook’s political news ecosystem during the 2016 election was vast and varied. There was, of course, content created by outside news media that was shared by users, but there were also reams of content — posts, images, videos — created on Facebook-only pages, and still more media created by politicians themselves. During the election, it was apparent to almost anyone with an account that Facebook was teeming with political content, much of it extremely partisan or pitched, its sourcing sometimes obvious, other times obscured, and often simply beside the point — memes or rants or theories that spoke for themselves.

Facebook seems to have zeroed in on only one component of this ecosystem — outside websites — and within it, narrow types of bad actors. These firms are, generally speaking, paid by advertising companies independent of Facebook, which are unaware of or indifferent to their partners’ sources of audience. Accordingly, Facebook’s anti-hoax measures seek to regulate these sites by punishing them not just for what they do on Facebook, but for what they do outside of it.

“We’ve found that a lot of fake news is financially motivated,” Mosseri wrote. “Spammers make money by masquerading as well-known news organizations and posting hoaxes that get people to visit to their sites, which are often mostly ads.” The proposed solution: “Analyzing publisher sites to detect where policy enforcement actions might be necessary.”

The stated targets of Facebook’s efforts are precisely defined, but its formulation of the problem implicates, to a lesser degree, much more than just “the worst of the worst.” Consider this characterization of what makes a “fake news” site a bad platform citizen: It uses Facebook to capture receptive audiences by spreading lies and then converts those audiences into money by borrowing them from Facebook, luring them to an outside site larded with obnoxious ads. The site’s sin of fabrication is made worse by its profit motive, which is cast here as a sort of arbitrage scheme. But an acceptable news site does more or less the same thing: It uses Facebook to capture receptive audiences by spreading not-lies and then converts those audiences into money by luring them to an outside site not-quite larded with not-as-obnoxious ads. In either case, Facebook users are being taken out of the safe confines of the platform into areas that Facebook does not and cannot control.

In this context, this “fake news” problem reads less as a distinct new phenomenon than as a flaring symptom of an older, more existential anxiety that Facebook has been grappling with for years: its continued (albeit diminishing) dependence on the same outside web that it, and other platforms, have begun to replace. Facebook’s plan for “fake news” is no doubt intended to curb certain types of misinformation. But it’s also a continuation of the company’s bigger and more consequential project — to capture the experiences of the web it wants and from which it can profit, but to insulate itself from the parts that it doesn’t and can’t. This may help solve a problem within the ecosystem of outside publishers — an ecosystem that, in the distribution machinery of Facebook, is becoming redundant, and perhaps even obsolete.

As Facebook has grown, so have its ambitions. Its mantralike mission (to “connect the world”) is rivaled among internet companies perhaps by only that of Google (to “organize the world’s information”) in terms of sheer scope. In the run-up to Facebook’s initial public offering, Mark Zuckerberg told investors that the company makes decisions “not optimizing for what’s going to happen in the next year, but to set us up to really be in this world where every product experience you have is social, and that’s all powered by Facebook.”

To understand what such ambition looks like in practice, consider Facebook’s history. It started as an inward-facing website, closed off from both the web around it and the general public. It was a place to connect with other people, and where content was created primarily by other users: photos, wall posts, messages. This system quickly grew larger and more complex, leading to the creation, in 2006, of the news feed — a single location in which users could find updates from all of their Facebook friends, in roughly reverse-chronological order.

When the news feed was announced, before the emergence of the modern Facebook sharing ecosystem, Facebook’s operating definition of “news” was pointedly friend-centric. “Now, whenever you log in, you’ll get the latest headlines generated by the activity of your friends and social groups,” the announcement about the news feed said. This would soon change.

In the ensuing years, as more people spent more time on Facebook, and following the addition of “Like” and “Share” functions within Facebook, the news feed grew into a personalized portal not just for personal updates but also for the cornucopia of media that existed elsewhere online: links to videos, blog posts, games and more or less anything else published on an external website, including news articles. This potent mixture accelerated Facebook’s change from a place for keeping up with family and friends to a place for keeping up, additionally, with the web in general, as curated by your friends and family. Facebook’s purview continued to widen as its user base grew and then acquired their first smartphones; its app became an essential lens through which hundreds of millions of people interacted with one another, with the rest of the web and, increasingly, with the world at large.

Facebook, in other words, had become an interface for the whole web rather than just one more citizen of it. By sorting and mediating the internet, Facebook inevitably began to change it. In the previous decade, the popularity of Google influenced how websites worked, in noticeable ways: Titles and headlines were written in search-friendly formats; pages or articles would be published not just to cover the news but, more specifically, to address Google searchers’ queries about the news, the canonical example being The Huffington Post’s famous “What Time Does The Super Bowl Start?” Publishers built entire business models around attracting search traffic, and search-engine optimization, S.E.O., became an industry unto itself. Facebook’s influence on the web — and in particular, on news publishers — was similarly profound. Publishers began taking into consideration how their headlines, and stories, might travel within Facebook. Some embraced the site as a primary source of visitors; some pursued this strategy into absurdity and exploitation.

Facebook, for its part, paid close attention to the sorts of external content people were sharing on its platform and to the techniques used by websites to get an edge. It adapted continually. It provided greater video functionality, reducing the need to link to outside videos or embed them from YouTube. As people began posting more news, it created previews for links, with larger images and headlines and longer summaries; eventually, it created Instant Articles, allowing certain publishers (including The Times) to publish stories natively in Facebook. At the same time, it routinely sought to penalize sites it judged to be using the platform in bad faith, taking aim at “clickbait,” an older cousin of “fake news,” with a series of design and algorithm updates. As Facebook’s influence over online media became unavoidably obvious, its broad approach to users and the web became clearer: If the network became a popular venue for a certain sort of content or behavior, the company generally and reasonably tried to make that behavior easier or that content more accessible. This tended to mean, however, bringing it in-house.

To Facebook, the problem with “fake news” is not just the obvious damage to the discourse, but also the harm it inflicts upon the platform. People sharing hoax stories were, presumably, happy enough with what they were seeing. But the people who would then encounter those stories in their feeds were subjected to a less positive experience. They were sent outside the platform to a website where they realized they were being deceived, or where they were exposed to ads or something that felt like spam, or where they were persuaded to share something that might later make them look like a rube. These users might rightly associate such experiences not just with their friends on the platform, or with the sites peddling the bogus stories, but also with the platform itself. This created, finally, an obvious issue for a company built on attention, advertising and the promotion of outside brands. From the platform’s perspective, “fake news” is essentially a user-experience problem resulting from a lingering design issue, akin to slow-loading news websites that feature auto-playing videos and obtrusive ads.

Increasingly, legitimacy within Facebook’s ecosystem is conferred according to a participant’s relationship to the platform’s design. A verified user telling a lie, be it a friend from high school or the president-elect, isn’t breaking the rules; he is, as his checkmark suggests, who he represents himself to be. A post making false claims about a product is Facebook’s problem only if that post is labeled an ad. A user video promoting a conspiracy theory becomes a problem only when it leads to the violation of community guidelines against, for example, user harassment. Facebook contains a lot more than just news, including a great deal of content that is newslike, partisan, widely shared and often misleading. Such content has been, and will be, immune from current “fake news” critiques and crackdowns, because it never had the opportunity to declare itself news in the first place. To publish lies as “news” is to break a promise; to publish lies as “content” is not.

That the “fake news” problem and its proposed solutions have been defined by Facebook as link issues — as a web issue — aligns nicely with a longer-term future in which Facebook’s interface with the web is diminished. Indeed, it heralds the coming moment when posts from outside are suspect by default: out of place, inefficient, little better than spam.


Source : http://www.nytimes.com/2016/12/22/magazine/facebooks-problem-isnt-fake-news-its-the-rest-of-the-internet.html?_r=1

Categorized in News & Politics

A brand new search engine called Hoaxy, which aims to track the spread of fake news, is now available in beta. The search engine was created by Indiana University’s Network Science Institute and its Center for Complex Networks and System Research.

Hoaxy indexes content from what are said to be 132 sites known for producing fake news. The search engine is capable of creating visual representations of how stories from these sites spread across social media.

Simply search for a subject using Hoaxy and it will return suspected fake news stories related to that search term. This is a quick way to check whether a story is suspect. For example, if you suspect a news story is false, you can look up the subject in Hoaxy and see whether it returns some of the stories you’ve been reading.

I’m going to grab a subject from Mashable’s list of top fake news stories of 2016 to show you how Hoaxy works. Let’s look at the story about the Corona beer founder who allegedly left a fortune to a small village after passing away.

This story was proven to be false, but how did it spread so far in the first place? Hoaxy knows. Searching for “Corona founder” returned the following result. As you can see it is clearly marked as being false:

[Screenshot: Hoaxy search result for “Corona founder,” marked as false]

To visualize how the news spread, check the box next to the search result and scroll down to the bottom of the screen where you’ll see a “Visualize” button. Click the button and you’ll get a screen that looks like the one below:

[Screenshot: Hoaxy visualization of the story’s spread]

The graph shows how the story spread over time, while the chart on the right-hand side visualizes who initially published the news on Twitter and who shared it from there.
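The kind of diffusion tracking described above can be sketched in miniature. This is a hedged, illustrative example, not Hoaxy’s actual code; the event format (timestamp, sharer, original poster) and function names are assumptions made for the sketch.

```python
from collections import Counter, defaultdict

def build_diffusion(events):
    """Build a simple diffusion graph from share events.

    Each event is a (timestamp, sharer, original_poster) tuple,
    e.g. a retweet. Returns the directed edge list (poster -> sharer)
    in time order, plus a count of how often each poster was re-shared.
    """
    edges = []
    reshare_counts = Counter()
    for ts, sharer, poster in sorted(events):
        edges.append((poster, sharer))
        reshare_counts[poster] += 1
    return edges, reshare_counts

def spread_over_time(events, bucket_seconds=3600):
    """Count shares per time bucket: the 'how it spread over time' curve."""
    buckets = defaultdict(int)
    for ts, _, _ in events:
        buckets[ts // bucket_seconds] += 1
    return dict(buckets)
```

Given a handful of retweet events, `build_diffusion` yields the who-shared-from-whom graph shown in the visualization, and `spread_over_time` yields the spread-over-time curve.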

Hoaxy is not perfect. For example, it only tracks the spread of news on Twitter, which is less than ideal considering that Facebook has been heavily criticized of late for its role in spreading fake news.

However, the search engine launched only today and is still in beta. As it develops, it could become a valuable tool for vetting news stories.

Author : Matt Southern

Source : https://www.searchenginejournal.com/new-search-engine-hoaxy-tracks-spread-fake-news/181989/

Categorized in Search Engine
