
Source: This article was published on wired.com by Issie Lapowsky - Contributed by Member: Bridget Miller

IN LATE JULY, a group of high-ranking Facebook executives organized an emergency conference call with reporters across the country. That morning, Facebook’s chief operating officer, Sheryl Sandberg, explained, they had shut down 32 fake pages and accounts that appeared to be coordinating disinformation campaigns on Facebook and Instagram. They couldn’t pinpoint who was behind the activity just yet, but said the accounts and pages had loose ties to Russia’s Internet Research Agency, which had spread divisive propaganda like a flesh-eating virus throughout the 2016 US election cycle.

Facebook was only two weeks into its investigation of this new network, and the executives said they expected to have more answers in the days to come. Specifically, they said some of those answers would come from the Atlantic Council's Digital Forensics Research Lab. The group, whose mission is to spot, dissect, and explain the origins of online disinformation, was one of Facebook’s newest partners in the fight against digital assaults on elections around the world. “When they do that analysis, people will be able to understand better what’s at play here,” Facebook’s head of cybersecurity policy, Nathaniel Gleicher, said.

Back in Washington, DC, meanwhile, DFRLab was itself still scrambling to understand just what was going on. Facebook had alerted them to the eight suspicious pages the day before the press call. The lab had no access to the accounts connected to those pages, nor to any information on Facebook’s backend that would have revealed strange patterns of behavior. They could only see the parts of the pages that would have been visible to any other Facebook user before the pages were shut down—and they had less than 24 hours to do it.

“We screenshotted as much as possible,” says Graham Brookie, the group’s 28-year-old director. “But as soon as those accounts are taken down, we don’t have access to them... We had a good head start, but not a full understanding.” DFRLab is preparing to release a longer report on its findings this week.

As a company, Facebook has rarely been one to throw open its doors to outsiders. That started to change after the 2016 election, when it became clear that Facebook and other tech giants had missed an active, and arguably incredibly successful, foreign influence campaign going on right under their noses. Faced with a backlash from lawmakers, the media, and its users, the company publicly committed to being more transparent and to working with outside researchers, including at the Atlantic Council.

'[Facebook] is trying to figure out what the rules of the road are, frankly, as are research organizations like ours.'

GRAHAM BROOKIE, DIGITAL FORENSICS RESEARCH LAB

DFRLab is a scrappier, substantially smaller offshoot of the 57-year-old bipartisan think tank based in DC, and its team of 14 is spread around the globe. Using open source tools like Google Earth and public social media data, they analyze suspicious political activity on Facebook, offer guidance to the company, and publish their findings in regular reports on Medium. Sometimes, as with the recent batch of fake accounts and pages, Facebook feeds tips to the DFRLab for further digging. It's an evolving, somewhat delicate relationship between a corporate behemoth that wants to appear transparent without ceding too much control or violating users' privacy, and a young research group that’s ravenous for intel and eager to establish its reputation.

“This kind of new world of information sharing is just that, it’s new,” Brookie says. “[Facebook] is trying to figure out what the rules of the road are, frankly, as are research organizations like ours.”

The lab got its start almost by accident. In 2014, Brookie was working for the National Security Council under President Obama when the military conflict broke out in eastern Ukraine. At the time, he says, the US intelligence community knew that Russian troops had invaded the region, but given the classified nature of their intel, they had no way to prove it to the public. That allowed the Russian government to continue denying its involvement.

What the Russians didn’t know was that proof of their military surge was sitting right out in the open online. A working group within the Atlantic Council was among the groups busy sifting through the selfies and videos that Russian soldiers were uploading to sites like Instagram and YouTube. By comparing the geolocation data on those posts to Google Earth street view images that could reveal precisely where the photos were taken, the researchers were able to track the soldiers as they made their way through Ukraine.
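The arithmetic behind that kind of open source check is simple enough to sketch. Below is a minimal, hypothetical illustration in Python (the coordinates are invented, not data from the Atlantic Council's work) of one step in such a workflow: given GPS coordinates attached to a geotagged post, compute how far they fall from a location of interest before pulling the spot up in Google Earth.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Invented example: a geotag scraped from an uploaded selfie vs. a claimed location.
post_coords = (48.015, 37.803)      # hypothetical geotag on the post
claimed_coords = (48.708, 44.513)   # hypothetical location the poster claims to be at

distance = haversine_km(*post_coords, *claimed_coords)
print(f"Geotag is {distance:.0f} km from the claimed location")
```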

“It was old-school Facebook stalking, but for classified national security interests,” says Brookie.

This experiment formed the basis of DFRLab, which has continued using open source tools to investigate national security issues ever since. After the initial report on eastern Ukraine, for instance, DFRLab followed up with a piece that used satellite images to prove that the Russian government had misled the world about its air strikes on Syria; instead of hitting ISIS territory and oil reserves, as it claimed, it had in fact targeted civilian populations, hospitals, and schools.

But Brookie, who joined DFRLab in 2017, says the 2016 election radically changed the way the team worked. Unlike Syria or Ukraine, where researchers needed to extract the truth in a low-information environment, the election was plagued by another scourge: information overload. Suddenly, there was a flood of myths to be debunked. DFRLab shifted from writing lengthy policy papers to quick hits on Medium. To expand its reach even further, the group also launched a series of live events to train other academics, journalists, and government officials in their research tactics, creating even more so-called “digital Sherlocks.”

'Sometimes a fresh pair of eyes can see something we may have missed.'

KATIE HARBATH, FACEBOOK

This work caught Facebook’s attention in 2017. After it became clear that bad actors, including Russian trolls, had used Facebook to prey on users' political views during the 2016 race, Facebook pledged to better safeguard election integrity around the world. The company has since begun staffing up its security team, developing artificial intelligence to spot fake accounts and coordinated activity, and enacting measures to verify the identities of political advertisers and administrators for large pages on Facebook.

According to Katie Harbath, Facebook’s director of politics, DFRLab's skill at tracking disinformation not just on Facebook but across platforms felt like a valuable addition to this effort. The fact that the Atlantic Council’s board is stacked with foreign policy experts including former secretary of state Madeleine Albright and Stephen Hadley, former national security adviser to President George W. Bush, was an added bonus.

“They bring that unique, global view set of both established foreign policy people, who have had a lot of experience, combined with innovation and looking at problems in new ways, using open source material,” Harbath says.

That combination has helped the Atlantic Council attract as much as $24 million a year in contributions, including from government and corporate sponsors. As the think tank's profile has grown, however, it has also been accused of peddling influence for major corporate donors like FedEx. Now, after committing roughly $1 million in funding to the Atlantic Council, the bulk of which supports the DFRLab’s work, Facebook is among the organization's biggest sponsors.

But for Facebook, giving money away is the easy part. The challenge now is figuring out how best to leverage this new partnership. Facebook is a $500 billion tech juggernaut with 30,000 employees in offices around the world; it's hard to imagine what a 14-person team at a non-profit could tell them that they don't already know. But Facebook's security team and DFRLab staff swap tips daily through a shared Slack channel, and Harbath says that Brookie’s team has already made some valuable discoveries.

During the recent elections in Mexico, for example, DFRLab dissected the behavior of a political consulting group called Victory Lab that was spamming the election with fake news, driven by Twitter bots and Facebook likes that appeared to have been purchased in bulk. The team found that a substantial number of those phony likes came from the same set of Brazilian Facebook users. What's more, they all listed the same company, Frases & Versos, as their employer.

The team dug deeper, looking into the managers of Frases & Versos, and found that they were connected with an entity called PCSD, which maintained a number of pages where Facebook users could buy and sell likes, shares, and even entire pages. With the Brazilian elections on the horizon in October, Brookie says, it was critical to get the information in front of Facebook immediately.

"We flagged it for Facebook, like, 'Holy cow this is interesting,'" Brookie remembers. The Facebook team took on the investigation from there. On Wednesday, the DFRLab published its report on the topic, and Facebook confirmed to WIRED that it had removed a network of 72 groups, 46 accounts, and five pages associated with PCSD.

"We’re in this all day, every day, looking at these things," Harbath says. "Sometimes a fresh pair of eyes can see something we may have missed."

Of course, Facebook has missed a lot in the past few years, and the partnership with the DFRLab is no guarantee it won't miss more. Even as it stumbles toward transparency, the company remains highly selective about which sets of eyes get to search for what they've missed, and what they get to see. After all, Brookie's team can only examine clues that are already publicly accessible. Whatever signals Facebook is studying behind the scenes remain a mystery.


Internet trolls are powerless without anonymity: By obscuring their identities behind random screen names, they can engage in hateful exchanges and prey on complete strangers without fear of retaliation. In doing so, they illustrate one troubling characteristic of humankind: People are shitty to people they don’t know.

Fortunately, curbing this tendency is extremely straightforward, scientists report in a new study.

In a new paper to be published in Science Advances, a team of scientists reveals a simple solution for getting people to cooperate: “Removing the cloak of anonymity seems to go a long way towards making people more thoughtful about their actions,” Hokkaido University biophysicist and economist Marko Jusup, Ph.D., a co-author on the study, tells Inverse in an e-mail.

This conclusion may seem like a no-brainer, but in fact it’s not always clear what factors lead people to cooperate rather than engage in conflict.

Trolling: Easy to do when people can't see your face.

To investigate what makes people choose conflict over peace, he and his colleagues from China’s Northwestern Polytechnical University ran a slightly modified version of the classic social cooperation experiment called the “Prisoner’s Dilemma.” The setup was simple: Pairs of strangers in a mock court are told they’ll get a prize by testifying against the other but that they’ll both get fined if both of them do so. If they both remain silent, however, they’ll both walk free. The idea is that ratting on your partner gives you a better reward than cooperating with them, so it’s expected that most rational-thinking people won’t play nice. In their experiments, Jusup and his colleagues modified this concept slightly to see whether people would play nice if they weren’t strangers.
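For readers who want the game's logic spelled out, here is a minimal sketch of a standard Prisoner's Dilemma payoff matrix in Python. The numeric payoffs are illustrative placeholders, not the values used in the Science Advances study; they simply follow the usual ordering, in which betraying a silent partner pays best and being betrayed pays worst, which is why a purely "rational" player is expected to defect even though both players do better when both cooperate.

```python
# Illustrative Prisoner's Dilemma payoffs (not the study's actual values).
# Each entry: (my payoff, partner's payoff) for (my move, partner's move).
C, D = "stay silent", "testify"
PAYOFFS = {
    (C, C): (3, 3),  # both stay silent: both walk free (mutual reward)
    (C, D): (0, 5),  # I stay silent, partner testifies: partner gets the prize
    (D, C): (5, 0),  # I testify against a silent partner: I get the prize
    (D, D): (1, 1),  # both testify: both get fined (mutual punishment)
}

for partner_move in (C, D):
    silent = PAYOFFS[(C, partner_move)][0]
    testify = PAYOFFS[(D, partner_move)][0]
    print(f"If partner will {partner_move}: testifying pays {testify}, silence pays {silent}")
# Testifying is the better reply to either move, yet (silent, silent) beats (testify, testify).
```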

Their 154 participants had “prior knowledge of each other,” were roughly the same age, and had the same interests, Jusup says. They were given the option to maintain or revoke their anonymity during the game, which lasted multiple rounds. The scientists found that when participants knew each other, they were much more likely to cooperate with each other than if they were total strangers. “This paid out very well for all - so, winners play nice,” he said in a statement.

The central question in studying social cooperation, Jusup says, is this: What do prospective cooperators know about each other? The research shows that mutual recognition — even just seeing a familiar face — forces people to act more thoughtfully.

The power of mutual recognition was illustrated in the famous ferry scene in The Dark Knight, which showed a much more sinister version of the same Prisoner’s Dilemma. In the scene, the Joker traps two groups of Gotham citizens in separate ferries; both ferries are lined with explosives, and each group has a detonator to blow the other up and set itself free in the next 30 minutes. If neither group triggers the detonator, the Joker will blow them both up. The Joker, of course, thinks that both groups, comprising complete strangers in an evil town who owe each other nothing, should have no problem blowing up the other. But the captives, amid their deliberations, come to an important realization: They’re not strangers but together are Gothamites, and thus they share a set of beliefs and morals that binds them.

Ultimately, recognizing similarities involves exchanging details about identity, and this in turn builds trust. Jusup thinks we can apply the findings from his study to pretty much any situation that requires cooperation. “It would seem that people should take a little time to exchange information about one another before getting down to ‘business’,” he says. “Such an exchange, according to our results, should put people into a more cooperative frame of mind.”

As for dealing with internet trolls, he thinks it’s much more likely that online conversations will stop devolving “towards the point where participants simply insult one another” if users are forced to share some personal information for others to see, like a simple profile or a photo.

 

But even niceness has its limits, Jusup found: Cycles of conflict and retaliation began when one participant punished the other for screwing them over previously. He and his colleagues thought that punishment might lead a participant to wise up and act more cooperatively the next time around, but the results suggest social cooperation isn’t that straightforward. “This high level of onymity bears a question: To what extent could onymity be lowered and still promote cooperation?” Jusup asks, noting that his follow-up work will address this question. While there’s little doubt that forcing anonymous trolls to reveal themselves will reduce their hatefulness, it remains to be seen how to deal with trolls you know.

Source : inverse.com


I’m going to confess an occasional habit of mine, which is petty, and which I would still enthusiastically recommend to anyone who frequently encounters trolls, Twitter eggs, or other unpleasant characters online.

Sometimes, instead of just ignoring a mean-spirited comment like I know I should, I type in the most cathartic response I can think of, take a screenshot and then file that screenshot away in a little folder I only revisit when I want to make my coworkers laugh.

I don’t actually send the response. I delete my silly comeback and move on with my life. For all the troll knows, I never saw the original message in the first place. The original message being something like the suggestion, in response to a piece I once wrote, that there should be a special holocaust just for women.

 

It’s bad out there, man!

We all know it by now. The internet, like the rest of the world, can be as gnarly as it is magical.

But there’s a sense lately that the lows have gotten lower, that the trolls who delight in chaos are newly invigorated and perhaps taking over all of the loveliest, most altruistic spaces on the web. There’s a real battle between good and evil going on. A new report by the Pew Research Center and Elon University’s Imagining the Internet Center suggests technologists widely agree: The bad guys are winning.

Researchers surveyed more than 1,500 technologists and scholars about the forces shaping the way people interact with one another online. They asked: “In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust and disgust?”

The vast majority of those surveyed—81 percent of them—said they expect the tone of online discourse will either stay the same or get worse in the next decade.

Not only that, but some of the spaces that will inevitably crop up to protect people from trolls may contribute to a new kind of “Potemkin internet,” pretty façades that hide the true lack of civility across the web, says Susan Etlinger, a technology industry analyst at the Altimeter Group, a market research firm.

“Cyberattacks, doxing and trolling will continue, while social platforms, security experts, ethicists and others will wrangle over the best ways to balance security and privacy, freedom of speech and user protections. A great deal of this will happen in public view,” Etlinger told Pew. “The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor.”

Tor is software that enables people to browse and communicate online anonymously—so it’s used by people who want to cover their tracks from government surveillance, those who want to access the dark web, trolls, whistleblowers and others.

“Of course, this is already happening, just out of sight of most of us,” Etlinger said, referring to the use of hidden channels online. “The worst outcome is that we end up with a kind of Potemkin internet in which everything looks reasonably bright and sunny, which hides a more troubling and less transparent reality.”

 

The uncomfortable truth is that humans like trolling. It’s easy for people to stay anonymous while they harass, pester and bully other people online—and it’s hard for platforms to design systems to stop them. Hard for two reasons: One, because of the “ever-expanding scale of internet discourse and its accelerating complexity,” as Pew puts it. And, two, because technology companies seem to have little incentive to solve this problem for people.

“Very often, hate, anxiety and anger drive participation with the platform,” said Frank Pasquale, a law professor at the University of Maryland, in the report. “Whatever behavior increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”

News organizations, which once set the tone for civic discourse, have less cultural importance than they once did. The rise of formats like cable news—where so much programming involves people shouting at one another—and talk radio are clear departures from a once-higher standard of discourse in professional media.

Few news organizations are stewards for civilized discourse in their own comment sections, which sends mixed messages to people about what’s considered acceptable. And then, of course, social media platforms like Facebook and Twitter serve as the new public square.

“Facebook adjusts its algorithm to provide a kind of quality—relevance for individuals,” said Andrew Nachison, the founder of We Media, in his response to Pew. “But that’s really a ruse to optimize for quantity. The more we come back, the more money they make... So the shouting match goes on.”

The resounding message in the Pew report is this: There’s no way the problem in public discourse is going to solve itself. “Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day,” said Randy Bush, a research fellow at Internet Initiative Japan, in his response to Pew.

Many of those polled said we’re now witnessing the emergence of “flame wars and strategic manipulation” that will only get worse. This goes beyond obnoxious comments, or Donald Trump’s tweets, or even targeted harassment. Instead, we’ve entered the realm of “weaponized narrative” as a 21st-century battle space, as the authors of a recent Defense One essay put it. And just like other battle spaces, humans will need to develop specialized technology for the fight ahead.

 

Researchers have already used technology to begin to understand what they’re up against. Earlier this month, a team of computer scientists from Stanford University and Cornell University wrote about how they used machine-learning algorithms to forecast whether a person was likely to start trolling. Using their algorithm to analyze a person’s mood and the context of the discussion they were in, the researchers got it right 80 percent of the time.

They learned that being in a bad mood makes a person more likely to troll, and that trolling is most frequent late at night (and least frequent in the morning). They also tracked the propensity for trolling behavior to spread. When the first comment in a thread is written by a troll—a nebulous term, but let's go with it—then it's twice as likely that additional trolls will chime in, compared with a conversation that doesn't start with one, the researchers found. On top of that, the more troll comments there are in a discussion, the more likely it is that participants will start trolling in other, unrelated threads.
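As a rough illustration of the kind of model the Stanford and Cornell researchers describe (this is a toy sketch with invented data, not their actual features, dataset, or code), a couple of signals such as the commenter's mood and the number of troll comments already in the thread can be fed to a simple classifier that outputs a probability of trolling:

```python
# Toy sketch only: invented data, loosely inspired by the mood and thread-context
# signals described above; not the researchers' model.
from sklearn.linear_model import LogisticRegression

# Features per comment: [negative_mood_score (0-1), prior_troll_comments_in_thread]
X = [
    [0.1, 0], [0.2, 0], [0.3, 1], [0.2, 2],   # mostly civil situations
    [0.8, 0], [0.7, 2], [0.9, 3], [0.6, 4],   # bad mood and/or troll-heavy threads
]
y = [0, 0, 0, 1, 1, 1, 1, 1]  # 1 = the next comment was trolling

model = LogisticRegression().fit(X, y)

# Probability that a commenter in a good mood vs. a bad mood trolls a clean thread.
print(model.predict_proba([[0.1, 0], [0.9, 0]])[:, 1])
```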

“A single troll comment in a discussion—perhaps written by a person who woke up on the wrong side of the bed—can lead to worse moods among other participants, and even more troll comments elsewhere,” the Stanford and Cornell researchers wrote. “As this negative behavior continues to propagate, trolling can end up becoming the norm in communities if left unchecked.”

Using technology to understand when and why people troll is essential, but many people agree the scale of the problem requires technological solutions. Stopping trolls isn’t as simple as creating spaces that prevent anonymity, many of those surveyed told Pew, because doing so also enables “governments and dominant institutions to even more freely employ surveillance tools to monitor citizens, suppress free speech and shape social debate,” Pew wrote.

“One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long,” Bailey Poland, the author of “Haters: Harassment, Abuse and Violence Online,” told Pew. Pseudonymity may be one useful approach—so someone’s offline identity is concealed, but their behavior in a certain forum over time can be analyzed in response to allegations of harassment. Machines can help, too: Chatbots, filters and other algorithmic tools can complement human efforts. But they’ll also complicate things.

“When chatbots start running amok—targeting individuals with hate speech—how will we define ‘speech’?” said Amy Webb, the CEO of the Future Today Institute, in her response to Pew. “At the moment, our legal system isn’t planning for a future in which we must consider the free speech infringements of bots.”

 

Another challenge is that no matter what solutions people devise to fight trolls, the trolls will fight back. Even among those who are optimistic that the trolls can be beaten back and that civic discourse will prevail online, there are myriad unknowns ahead.

“Online discourse is new, relative to the history of communication,” said Ryan Sweeney, the director of analytics at Ignite Social Media, in his response to the survey. “Technological evolution has surpassed the evolution of civil discourse. We’ll catch up eventually. I hope. We are in a defining time.”

Source : nextgov.com


When it comes to internet trolls, online harassment and fake news, there’s not a lot of light at the end of the online tunnel. And things are probably going to get darker.

Researchers at the Pew Research Center and Elon University’s Imagining the Internet Center asked 1,537 scholars and technologists what they think the future of the Internet – in terms of how long people will continue to treat each other like garbage – holds. An overwhelming 81 percent said the trolls are winning.

Specifically, the survey asked: “In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?”

 

Forty-two percent of respondents think the internet will stay about the same over the next 10 years, while 39 percent said they expect discourse to get even more hostile. Only 19 percent predicted any sort of decline in abuse and harassment. Pew stated that the interviews were conducted between July 1 and August 12 – well before the term “fake news” started making daily headlines.

“People are attracted to forums that align with their thinking, leading to an echo effect,” Vint Cerf, a vice president at Google, said. “This self-reinforcement has some of the elements of mob (flash-crowd) behavior. Bad behavior is somehow condoned because ‘everyone’ is doing it.”

Respondents could submit comments with their answers, and the report is chock-full of remarks (literally hundreds of them) from professors, engineers and tech leaders.

Experts blamed the rotting internet culture on every imaginable factor: the rise of click-bait, bot accounts, unregulated comment sections, social media platforms serving as anonymous public squares, the hesitation of anyone who avoids condemning vitriolic posts for fear of stepping on free speech or violating First Amendment rights — and even someone merely having a bad day.

The steady decline of the public’s trust in media is another not-helpful factor. People have, historically, adopted their barometer for civil discourse from news organizations – a role that, with social media and the cable news format, those organizations no longer play.

“Things will stay bad because to troll is human,” the report states. Basically, humanity’s always been awful, but now it’s in the plainest sight.

 

But setting up a system to simply punish the bad actors isn’t necessarily the solution, and could result in a sort of “Potemkin internet.” The term Potemkin comes from Grigory Potemkin, a Russian military leader in the 18th century who fell in love with Catherine the Great and built fake villages along one of her routes to make it look like everything was going great. A “Potemkin village” is built to fool others into thinking a situation is way better than it is.

“The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor,” Susan Etlinger, a technology industry analyst, told Pew. “Of course, this is already happening, just out of sight of most of us.”

Tor is free, downloadable software that lets you anonymously browse the web. It’s pretty popular among trolls, terrorists and people who want to get into the dark web or evade government surveillance.

But these tools aren’t always employed for dark purposes.

“Privacy and anonymity are double-edged swords online because they can be very useful to people who are voicing their opinions under authoritarian regimes,” Norah Abokhodair, an information privacy researcher at the University of Washington, wrote in the report. “However the same technique could be used by the wrong people and help them hide their terrible actions.”

Glass-half-full respondents did offer a glimmer of hope. Most of the experts on the side of “it’s going to get better” placed their bets on technology’s ability to advance and serve society. One anonymous security engineer wrote that “as the tools to prevent harassment improve, the harassers will be robbed of their voices.”

But for now, we have a long way to go.

“Accountability and consequences for bad action are difficult to impose or toothless when they do,” Baratunde Thurston, a fellow at MIT Media Lab who’s also worked at The Onion and Fast Company, wrote. “To quote everyone ever, things will get worse before they get better.”

Source : nypost.com

Google is attempting to tackle one of the most hostile places on the Internet: comment sections. This week, the search engine announced a new project called Perspective in collaboration with Jigsaw, a tech incubator owned by Google's parent company.
 
"Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime," said Jared Cohen, Jigsaw's president, in a statement about the problems Perspective aims to address.
 
Perspective is essentially a very smart online moderator. Using machine learning, the technology can identify toxic comments that might drive people who have something constructive to say away from the discussion. The tool was tested in collaboration with The New York Times, where reviewers are currently tasked with sifting through as many as 11,000 comments every day. (Other news sites, including The Week and Bloomberg, have resorted to ditching comment sections altogether.)

 

 
The hope is that Perspective will not only speed up the process of reviewing comments and open up new conversations online, but also prevent toxic comments from being published in the first place.
 
Statistics on online harassment are alarming. According to a report from the Data & Society Research Institute, 47% of people online have experienced some form of abuse, leading 27% of Internet users to censor what they say online out of concern that they may become a target themselves.
 
Perspective is still in its early stages, but, if used by online publishers, could have a positive effect on those numbers.
Want to know if your comment would get labeled as toxic? Try the writing experiment here.
 
 
 
 
Author : MADELINE BUXTON

The internet can be a harsh place. It seems like for every feel-good story or picture of a puppy playing with a kitten, there are 1,000 trolls rummaging through the depths of their minds to post the most vile comments they can imagine. And if you’re a woman or person of color, well, multiply that troll army by 10.

But hey, that’s the internet, right? Except it doesn’t have to be that way. And it might not be for much longer if the folks at Google (GOOG, GOOGL) subsidiary Jigsaw have their way. A kind of high-powered startup inside Google’s parent company Alphabet, Jigsaw focuses on how technology can defend international human rights.

 

The toxicity of trolls

The company’s latest effort is called the Perspective API. Available Thursday, Feb. 23, Perspective is the result of Jigsaw’s Conversation AI project and uses Google’s machine learning technologies to provide online publishers with a tool that can automatically rank comments in their forums and comments sections based on the likelihood that they will cause someone to leave a conversation. Jigsaw refers to this as a “toxicity” ranking.

“At its core, Perspective is a tool that simply takes a comment and returns back this score from 0 to 100 based on how similar it is to things that other people have said that are toxic,” explained product manager CJ Adams.

Jigsaw doesn’t become the arbiter of what commenters can and can’t say in a publisher’s comment section, though. Perspective is only a tool that publishers use as they see fit. For example, they can give their readers the ability to filter comments based on their toxicity level, so they’ll only see non-toxic posts. Or the publisher could provide a kind of feedback mechanism that tells you if your comments are toxic.

The tool won’t stop you from submitting toxic comments, but it will provide you with the nudge to rethink what you’re writing.
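For developers curious what "takes a comment and returns back this score" looks like in practice, here is a rough sketch of a call to Perspective's public Comment Analyzer endpoint. It assumes the API as publicly documented: an API key, a requested TOXICITY attribute, and a summary score returned as a probability between 0 and 1, scaled here to the 0-to-100 range Adams describes. The exact request format may have changed since this article was written.

```python
# Hedged sketch of a Perspective API request; endpoint and field names follow the
# public documentation and may differ from the version described in this article.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: obtain a key from Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(comment_text):
    """Return a 0-100 toxicity score for a single comment."""
    body = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    value = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return round(value * 100)  # API returns 0-1; scale to the 0-100 range quoted above

# A publisher might hide, flag, or warn on comments above a threshold it chooses.
if __name__ == "__main__":
    print(toxicity_score("You are a wonderful person."))
```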

Perspective isn’t just a bad word filter, though. Google’s machine learning actually gives the tool the ability to understand context. So it will eventually be able to tell the difference between telling someone a vacuum cleaner can really suck and that they suck at life.

Perspective still makes mistakes, as I witnessed during a brief demo. But the more comments and information it’s fed, the more it can learn about how to better understand the nuances of human communication.

Jigsaw’s global efforts

In its little over a year of existence, Jigsaw has implemented a series of projects designed to improve the lives of internet users around the world. Project Shield, for example, is a free service that protects news sites from distributed denial of service (DDoS) attacks. Redirect Method uses Adwords targeting tools to help refute ISIS’ online recruitment messages, while Montage helps researchers sort through thousands of hours of YouTube videos to find evidence of potential war crimes.

“We wake up and come to work everyday to try to find ways to use technology to make people around the world safer,” Jigsaw President Jared Cohen said. “We are at this nexus between international security and business.”

Cohen said Jigsaw’s engineers travel around the world to meet with internet users vulnerable to harassment and other online-based rights abuses, such as individuals promoting free speech or opposing authoritarian regimes, to understand their unique challenges. And one of the biggest problems, Cohen explained, has been online harassment.

Trolls aren’t always just cruel

Dealing with trolls is par for the course in the US. But in other countries, harassment in comment sections and forums can have political implications.

“In lots of parts of the world where we spend time [harassment] takes on a political motivation, sectarian motivation, ethnic motivation and it’s all sort kind of heightened and exacerbated,” Cohen explained.

But with Perspective, Jigsaw can start to cut down on those forms of harassing comments, and bring more people into online conversations.

“Our goal is to get as many people to rejoin conversations as possible and also to get people who everyday are sort of entering the gauntlet of toxicity to have an opportunity to see that environment improve,” said Cohen.

The path to a better internet?

Jigsaw is already working with The New York Times and Wikipedia to improve their commenting systems. At The New York Times, the Perspective API is being used to let The Gray Lady enable more commenting sections on its articles.

Prior to using Perspective, The Times relied on employees to manually read and filter comments from the paper’s online articles. As a result, just 10% of stories could have comments activated. The Times is using Perspective to create an open source tool that will help reviewers run through comments more quickly and open up a larger number of stories to comments.

Wikipedia, meanwhile, has been using Perspective to detect personal attacks on its volunteer editors, something Jigsaw and the online encyclopedia recently published a paper on.

With the release of Perspective, publishers and developers around the world can take advantage of Google technologies to improve their users’ experiences. And the conversation filtering won’t just stop hateful comments. Cohen said the company is also working to provide publishers and their readers with the ability to filter out comments that are off-topic or generally don’t contribute to conversations.

If Perspective takes off, and a number of publications end up using the technology, the internet could one day have far fewer trolls lurking in its midst.

Author : Daniel Howley

Source : http://finance.yahoo.com/news/how-google-is-fighting-the-war-on-internet-trolls-123048658.html


An Internet troll is a member of an online social community who deliberately tries to disrupt, attack, offend or generally cause trouble within the community by posting certain comments, photos, videos, GIFs or some other form of online content.

You can find trolls all over the Internet -- on message boards, in your YouTube video comments, on Facebook, on dating sites, in blog comment sections and everywhere else that has an open area where people can freely post to express their thoughts and opinions. Controlling them can be difficult when there are a lot of community members, but the most common ways to get rid of them include either banning/blocking individual user accounts (and sometimes IP addresses altogether) or closing off comment sections entirely from a blog post, video page or topic thread.

 

Regardless of where you'll find Internet trolls lurking, they all tend to disrupt communities in very similar (and often predictable) ways. This isn't by any means a complete list of all the different types of trolls out there, but they're most certainly some of the most common types you'll often come across in active online communities.

1-The insult troll

The insult troll is a pure hater, plain and simple. And they don't even really have to have a reason to hate or insult someone. These types of trolls will often pick on everyone and anyone -- calling them names, accusing them of certain things, doing anything they can to get a negative emotional response from them -- just because they can. In many cases, this type of trolling can become so severe that it can lead to or be considered a serious form of cyberbullying.

2-The persistent debate troll

This type of troll loves a good argument. They can take a great, thoroughly researched and fact-based piece of content, and come at it from all opposing discussion angles to challenge its message. They believe they're right, and everyone else is wrong. You'll often also find them leaving long threads or arguments with other commenters in community comment sections, and they're always determined to have the last word -- continuing to comment until that other user gives up. 

3-The grammar and spellcheck troll

You know this type of troll. They're the people who always have to tell other users that they have misspelled words and grammar mistakes. Even when they do it by simply commenting with the corrected word behind an asterisk symbol, it's pretty much never a welcomed comment to any discussion. Some of them even use a commenter's spelling and grammar mistakes as an excuse to insult them.

4-The forever offended troll

When controversial topics are discussed online, they're bound to offend someone. That's normal. But then there are the types of trolls who can take a piece of content -- often times it's a joke, a parody or something sarcastic -- and turn on the digital waterworks. They're experts at taking humorous pieces of content and turning them into an argument by playing the victim. People really do get upset by some of the strangest things said and done online.

5-The show-off, know-it-all or blabbermouth troll

A close relative to the persistent debate troll, the show-off or blabbermouth troll is a person who doesn't necessarily like to participate in arguments but does love to share his opinion in extreme detail, even spreading rumors and secrets in some cases. Think of that one family member or friend you know who just loves to hear his own voice. That's the Internet equivalent of the show-off or know-it-all or blabbermouth troll. They love to have long discussions and write lots of paragraphs about whatever they know, whether anyone reads it or not. 

 

6-The profanity and all-caps troll

Unlike some of the more intelligent trolls like the debate troll, the grammar troll and the blabbermouth troll, the profanity and all-caps troll is the guy who has nothing really of value to add to the discussion, spewing only F-bombs and other curse words with his caps lock button left on. In many cases, these types of trolls are just bored kids looking for something to do without needing to put too much thought or effort into anything. On the other side of the screen, they're often harmless.

7-The one word only troll

There's always that one contributor to a Facebook status update, a forum thread, an Instagram photo, a Tumblr post or any other form of social posting who just says "lol" or "what" or "k" or "yes" or "no." They're certainly far from the worst type of troll you meet online, but when a serious or detailed topic is being discussed, their one-word replies are just a nuisance to all who are trying to add value and follow the discussion.

8-The exaggeration troll

Exaggeration trolls can sometimes be a combination of know-it-alls, the offended and even debate trolls. They know how to take any topic or problem and completely blow it out of proportion. Some of them actually try to do it to be funny, and sometimes they succeed, while others do it just to be annoying. They rarely ever contribute any real value to a discussion and often bring up problems and issues that may arguably be unrelated to what's being discussed.

9-The off topic troll

It's pretty hard not to hate that guy who posts something completely off topic in any type of social community discussion. It can be even worse when that person succeeds in shifting the topic and everyone ends up talking about whatever irrelevant thing that he posted. You see it all the time online -- in the comments of Facebook posts, in threaded YouTube comments, on Twitter and literally anywhere there're active discussions happening.  

10-The greedy spammer troll

Last but not least, there's the dreaded spammer troll. This is the troll who truly could not care less about your post or discussion and is only posting to benefit himself. He wants you to check out his page, buy from his link, use his coupon code or download his free ebook. These trolls also include all those users you see littering discussions on Twitter and Instagram and every other social network with "follow me!!!" posts.

Author : Elise Moreau

Source : https://www.lifewire.com/types-of-internet-trolls-3485894


Have you ever been attacked by trolls on social media? I have. In December a mocking tweet from white supremacist David Duke led his supporters to turn my Twitter account into an unholy sewer of Nazi ravings and disturbing personal abuse. It went on for days.

We’re losing the Internet war with the trolls. Faced with a torrent of hate and abuse, people are giving up on social media, and websites are removing comment features. Who wants to be part of an online community ruled by creeps and crazies?

Fortunately, this pessimism may be premature. A new strategy promises to tame the trolls and reinvigorate civil discussion on the Internet. Hatched by Jigsaw, an in-house think tank at Google’s parent company, Alphabet (GOOGL, +1.96%), the tool relies on artificial intelligence and could solve the once-impossible task of vetting floods of online comments.

To explain what Jigsaw is up against, chief research scientist Lucas Dixon compares the troll problem to so-called denial-of-service attacks in which attackers flood a website with garbage traffic in order to knock it off-line.

“Instead of flooding your website with traffic, it’s flooding the comment section or your social media or hashtag so that no one else can have a word, and basically control the conversation,” says Dixon.

Such surges of toxic comments are a threat not only to individuals, but also to media companies and retailers—many of whose business models revolve around online communities. As part of its research on trolls, Jigsaw is beginning to quantify the damage they do. In the case of Wikipedia, for instance, Jigsaw can measure the correlation between a personal attack on a Wikipedia editor and the subsequent frequency the editor will contribute to the site in the future.

The solution to today’s derailed online discourse lies in reams of data and deep learning, a fast-evolving subset of artificial intelligence that mimics the neural networks of the brain. Deep learning gave rise to recent and remarkable breakthroughs in Google’s translation tools.

In the case of comments, Jigsaw is using millions of comments from the New York Times and Wikipedia to train machines to recognize traits like aggression and irrelevancy. The implication: A site like the Times, which has the resources to moderate only about 10% of its articles for comments, could soon deploy algorithms to expand those efforts 10-fold.

While the tone and vocabulary on one media outlet comment section may be radically different from another’s, Jigsaw says it will be able to adapt its tools for use across a wide variety of websites. In practice, this means a small blog or online retailer will be able to turn on comments without fear of turning a site into a vortex of trolls.

Technophiles seem keen on what Jigsaw is doing. A recent Wired feature dubbed the unit the “Internet Justice League” and praised its range of do-gooder projects.

But some experts say that the Jigsaw team may be underestimating the challenge.

Recent high-profile machine learning projects focused on identifying images and translating text. But Internet conversations are highly contextual: While it might seem obvious, for example, to train a machine learning program to purge the word “bitch” from any online comment, the same algorithm might also flag posts in which people are using the term more innocuously—as in, “Life’s a bitch,” or “I hate to bitch about my job, but …” Teaching a computer to reliably catch the slur won’t be easy.
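The point is easy to demonstrate with a toy example. The naive word-matching filter below is a hypothetical stand-in, not anything Jigsaw actually ships; it flags the innocuous usages right along with the abusive one, which is exactly the gap a context-aware model has to close.

```python
# Illustrative only: a naive word-match filter, not Jigsaw's model.
BLOCKLIST = {"bitch"}

def naive_flag(comment):
    """Flag a comment if it contains any blocklisted word, regardless of context."""
    words = {w.strip(".,!?'\"").lower() for w in comment.split()}
    return bool(words & BLOCKLIST)

comments = [
    "You're a bitch and everyone hates you.",   # abusive: should be flagged
    "Life's a bitch, but we keep going.",       # idiom: flagged anyway
    "I hate to bitch about my job, but ...",    # mild complaint: flagged anyway
]
for c in comments:
    print(naive_flag(c), "-", c)
# All three come back True; only context, which word-matching ignores, separates them.
```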

“Machine learning can understand style but not context or emotion behind a written statement, especially something as short as a tweet. This is stuff it takes a human a lifetime to learn,” says David Auerbach, a former Google software engineer. He adds that the Jigsaw initiative will lead to better moderation tools for sites like the New York Times but will fall short when it comes to more freewheeling forums like Twitter and Reddit.

Such skepticism doesn’t faze Jigsaw’s Dixon. He points out that, like denial-of-service attacks, trolls are a problem that will never be solved but their effect can be mitigated. Using the recent leaps in machine learning technology, Jigsaw will tame the trolls enough to let civility regain the upper hand, Dixon believes.

Jigsaw researchers also point out that gangs of trolls—the sort that pop up and spew vile comments en masse—are often a single individual or organization deploying bots to imitate a mob. And Jigsaw’s tools are rapidly growing adept at identifying and stifling such tactics.

 

Dixon also has an answer to the argument that taming trolls won’t work because the trolls will simply adapt their insults whenever a moderating tool catches on to them.

“The more we introduce tools, the more creative the attacks will be,” Dixon says. “The dream is the attacks at some level get so creative no one understands them anymore and they stop being attacks.” 

***

DRIVEN FROM SOCIAL MEDIA BY TROLLS

2015–16
Increasingly, popular media sites and blogs, from NPR to Reuters, are eliminating comments from their pages.

Ellen Pao. Photograph by David Paul Morris—Bloomberg via Getty Images

July 2015 
Ellen Pao, interim CEO of Reddit, resigns in the wake of what she calls “one of the largest trolling attacks in history.”

Actress Leslie Jones. Photograph by Lloyd Bishop—NBC/NBCU Photo Bank via Getty Images

July 2016
Movie actress Leslie Jones quits Twitter after trolls send a barrage of racist and sexual images. In one of her final tweets, she writes, “You won’t believe the evil.”

***

A version of this article appears in the February 1, 2017 issue of Fortune with the headline "Troll Hunters."

Author : Jeff John Roberts

Source : http://fortune.com/2017/01/23/jigsaw-google-internet-trolls/


AROUND MIDNIGHT ONE Saturday in January, Sarah Jeong was on her couch, browsing Twitter, when she spontaneously wrote what she now bitterly refers to as “the tweet that launched a thousand ships.” The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. In what was meant to be a hyperbolic joke, she tweeted out a list of political caricatures, one of which called the typical Sanders fan a “vitriolic cryptoracist who spends 20 hours a day on the Internet yelling at women.”

The ill-advised late-night tweet was, Jeong admits, provocative and absurd—she even supported Sanders. But what happened next was the kind of backlash that’s all too familiar to women, minorities, and anyone who has a strong opinion online. By the time Jeong went to sleep, a swarm of Sanders supporters were calling her a neoliberal shill. By sunrise, a broader, darker wave of abuse had begun. She received nude photos and links to disturbing videos. One troll promised to “rip each one of [her] hairs out” and “twist her tits clear off.”

The attacks continued for weeks. “I was in crisis mode,” she recalls. So she did what many victims of mass harassment do: She gave up and let her abusers have the last word. Jeong made her tweets private, removing herself from the public conversation for a month. And she took a two-week unpaid leave from her job as a contributor to the tech news site Motherboard.

For years now, on Twitter and practically any other freewheeling public forum, the trolls have been out in force. Just in recent months: Trump’s anti-Semitic supporters mobbed Jewish public figures with menacing Holocaust “jokes.” Anonymous racists bullied African American comedian Leslie Jones off Twitter temporarily with pictures of apes and Photoshopped images of semen on her face. Guardian columnist Jessica Valenti quit the service after a horde of misogynist attackers resorted to rape threats against her 5-year-old daughter. “It’s too much,” she signed off. “I can’t live like this.” Feminist writer Sady Doyle says her experience of mass harassment has induced a kind of permanent self-censorship. “There are things I won’t allow myself to talk about,” she says. “Names I won’t allow myself to say.”

Jigsaw's Jared Cohen: “I want us to feel the responsibility of the burden we’re shouldering.”

Mass harassment online has proved so effective that it’s emerging as a weapon of repressive governments. In late 2014, Finnish journalist Jessikka Aro reported on Russia’s troll farms, where day laborers regurgitate messages that promote the government’s interests and inundate opponents with vitriol on every possible outlet, including Twitter and Facebook. In turn, she’s been barraged daily by bullies on social media, in the comments of news stories, and via email. They call her a liar, a “NATO skank,” even a drug dealer, after digging up a fine she received 12 years ago for possessing amphetamines. “They want to normalize hate speech, to create chaos and mistrust,” Aro says. “It’s just a way of making people disillusioned.”

All this abuse, in other words, has evolved into a form of censorship, driving people offline, silencing their voices. For years, victims have been calling on—clamoring for—the companies that created these platforms to help slay the monster they brought to life. But their solutions generally have amounted to a Sisyphean game of whack-a-troll.

Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. The software is designed to use machine learning to automatically spot the language of abuse and harassment—with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators. “I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” says Jigsaw founder and president Jared Cohen. “To do everything we can to level the playing field.”

Jigsaw is applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.

Conversation AI represents just one of Jigsaw’s wildly ambitious projects. The New York–based think tank and tech incubator aims to build products that use Google’s massive infrastructure and engineering muscle not to advance the best possibilities of the Internet but to fix the worst of it: surveillance, extremist indoctrination, censorship. The group sees its work, in part, as taking on the most intractable jobs in Google’s larger mission to make the world’s information “universally accessible and useful.”

Cohen founded Jigsaw, which now has about 50 staffers (almost half are engineers), after a brief high-profile and controversial career in the US State Department, where he worked to focus American diplomacy on the Internet like never before. One of the moon-shot goals he’s set for Jigsaw is to end censorship within a decade, whether it comes in the form of politically motivated cyberattacks on opposition websites or government strangleholds on Internet service providers. And if that task isn’t daunting enough, Jigsaw is about to unleash Conversation AI on the murky challenge of harassment, where the only way to protect some of the web’s most repressed voices may be to selectively shut up others. If it can find a path through that free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.

 

But slowly, the group’s lofty challenges began to attract engineers, some joining from other parts of Google after volunteering for Cohen’s team. One of their first creations was a tool called uProxy that allows anyone whose Internet access is censored to bounce their traffic through a friend’s connection outside the firewall; it’s now used in more than 100 countries. Another tool, a Chrome add-on called Password Alert, aims to block phishing by warning people when they’re retyping their Gmail password into a malicious look-alike site; the company developed it for Syrian activists targeted by government-friendly hackers, but when it proved effective, it was rolled out to all of Google’s users.

  

“We are not going to be one of those groups that just imagines what vulnerable populations are experiencing. We’re going to get to know our users.”

In February, the group was renamed Jigsaw to reflect its focus on building practical products. A program called Montage lets war correspondents and nonprofits crowdsource the analysis of YouTube videos to track conflicts and gather evidence of human rights violations. Another free service called Project Shield uses Google’s servers to absorb government-sponsored cyberattacks intended to take down the websites of media, election-monitoring, and human rights organizations. And a third initiative, aimed at deradicalizing ISIS recruits, identifies would-be jihadis based on their search terms, then shows them ads redirecting them to videos by former extremists who explain the downsides of joining an ultraviolent, apocalyptic cult. In a pilot project, the anti-ISIS ads were so effective that they were in some cases two to three times more likely to be clicked than typical search advertising campaigns.

 

The common thread that binds these projects, Cohen says, is a focus on what he calls “vulnerable populations.” To that end, he gives new hires an assignment: Draw a scrap of paper from a baseball cap filled with the names of the world’s most troubled or repressive countries; track down someone under threat there and talk to them about their life online. Then present their stories to other Jigsaw employees.

At one recent meeting, Cohen leans over a conference table as 15 or so Jigsaw recruits—engineers, designers, and foreign policy wonks—prepare to report back from the dark corners of the Internet. “We are not going to be one of those groups that sits in our offices and imagines what vulnerable populations around the world are experiencing,” Cohen says. “We’re going to get to know our users.” He speaks in a fast-forward, geeky patter that contrasts with his blue-eyed, broad-shouldered good looks, like a politician disguised as a Silicon Valley executive or vice versa. “Every single day, I want us to feel the burden of the responsibility we’re shouldering.”

“Jigsaw recruits will hear stories about people being tortured for their passwords or of state-sponsored cyberbullying.”

We hear about an Albanian LGBT activist who tries to hide his identity on Facebook despite its real-names-only policy, an administrator for a Libyan youth group wary of government infiltrators, a defector’s memories from the digital black hole of North Korea. Many of the T-shirt-and-sandal-wearing Googlers in the room will later be sent to some of those far-flung places to meet their contacts face-to-face.

“They’ll hear stories about people being tortured for their passwords or of state-sponsored cyberbullying,” Cohen tells me later. The purpose of these field trips isn’t simply to get feedback for future products, he says. They’re about creating personal investment in otherwise distant, invisible problems—a sense of investment Cohen says he himself gained in his twenties during his four-year stint in the State Department, and before that during extensive travel in the Middle East and Africa as a student.

Cohen reports directly to Alphabet’s top execs, but in practice, Jigsaw functions as Google’s blue-sky, human-rights-focused skunkworks. At the group’s launch, Eric Schmidt declared its audacious mission to be “tackling the world’s toughest geopolitical problems” and listed some of the challenges within its remit: “money laundering, organized crime, police brutality, human trafficking, and terrorism.” In an interview in Google’s New York office, Schmidt (now chair of Alphabet) summarized them to me as the “problems that bedevil humanity involving information.”

Jigsaw, in other words, has become Google’s Internet justice league, and it represents the notion that the company is no longer content with merely not being evil. It wants—as difficult and even ethically fraught as the impulse may be—to do good.

 
Yasmin Green, Jigsaw’s head of R&D.

IN SEPTEMBER OF 2015, Yasmin Green, then head of operations and strategy for Google Ideas, the working group that would become Jigsaw, invited 10 women who had been harassment victims to come to the office and discuss their experiences. Some of them had been targeted by members of the antifeminist Gamergate movement. Game developer Zoë Quinn had been threatened repeatedly with rape, and her attackers had dug up and distributed old nude photos of her. Another visitor, Anita Sarkeesian, had moved out of her home temporarily because of numerous death threats.

At the end of the session, Green and a few other Google employees took a photo with the women and posted it to the company’s Twitter account. Almost immediately, the Gamergate trolls turned their ire against Google itself. Over the next 48 hours, tens of thousands of comments on Reddit and Twitter demanded the Googlers be fired for enabling “feminazis.”

“It’s like you walk into Madison Square Garden and you have 50,000 people saying you suck, you’re horrible, die,” Green says. “If you really believe that’s what the universe thinks about you, you certainly shut up. And you might just take your own life.”

To combat trolling, services like Reddit, YouTube, and Facebook have for years depended on users to flag abuse for review by overworked staffers or an offshore workforce of content moderators in countries like the Philippines. The task is expensive and can be scarring for the employees who spend days on end reviewing loathsome content—yet often it’s still not enough to keep up with the real-time flood of filth. Twitter recently introduced new filters designed to keep users from seeing unwanted tweets, but it’s not yet clear whether the move will tame determined trolls.

The meeting with the Gamergate victims was the genesis for another approach. Lucas Dixon, a wide-eyed Scot with a doctorate in machine learning, and product manager CJ Adams wondered: Could an abuse-detecting AI clean up online conversations by detecting toxic language—with all its idioms and ambiguities—as reliably as humans?

Show millions of vile Internet comments to Google’s self-improving artificial intelligence engine and it can recognize a troll.

To create a viable tool, Jigsaw first needed to teach its algorithm to tell the difference between harmless banter and harassment. For that, it would need a massive number of examples. So the group partnered with The New York Times, which gave Jigsaw’s engineers 17 million comments from Times stories, along with data about which of those comments were flagged as inappropriate by moderators. Jigsaw also worked with the Wikimedia Foundation to parse 130,000 snippets of discussion around Wikipedia pages. It showed those text strings to panels of 10 people recruited randomly from the CrowdFlower crowdsourcing service and asked whether they found each snippet to represent a “personal attack” or “harassment.” Jigsaw then fed the massive corpus of online conversation and human evaluations into Google’s open source machine learning software, TensorFlow.

Machine learning, a branch of computer science that Google uses to continually improve everything from Google Translate to its core search engine, works something like human learning. Instead of programming an algorithm, you teach it with examples. Show a toddler enough shapes identified as a cat and eventually she can recognize a cat. Show millions of vile Internet comments to Google’s self-improving artificial intelligence engine and it can recognize a troll.
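For readers who want a concrete picture of that pipeline, here is a minimal sketch of training a toy toxicity classifier on labeled comments with TensorFlow’s Keras API. The file name, column names, and model architecture are illustrative assumptions, not the actual Conversation AI training code.

```python
# A minimal sketch of training a toxicity classifier on labeled comments
# with TensorFlow/Keras. File name, column names, and architecture are
# illustrative assumptions, not Jigsaw's actual Conversation AI pipeline.
import pandas as pd
import tensorflow as tf

# Assume a CSV with a text column "comment" and a 0/1 label "attack",
# e.g. aggregated from crowdsourced annotations like those described above.
df = pd.read_csv("labeled_comments.csv")
texts = tf.constant(df["comment"].astype(str).tolist())
labels = tf.constant(df["attack"].values, dtype=tf.float32)

# Map raw strings to fixed-length sequences of token ids.
vectorize = tf.keras.layers.TextVectorization(
    max_tokens=20_000, output_sequence_length=200)
vectorize.adapt(texts)

model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(20_000, 64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "attack"
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(texts, labels, epochs=3, validation_split=0.1)

# Score a new comment on the 0-100 scale the prototype demo uses.
score = 100 * float(model.predict(tf.constant(["What's up, bitch?"]))[0][0])
print(round(score))
```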

In fact, by some measures Jigsaw has now trained Conversation AI to spot toxic language with impressive accuracy. Feed a string of text into its Wikipedia harassment-detection engine and it can, with what Google describes as more than 92 percent certainty and a 10 percent false-positive rate, come up with a judgment that matches a human test panel as to whether that line represents an attack. For now the tool looks only at the content of that single string of text. But Green says Jigsaw has also looked into detecting methods of mass harassment based on the volume of messages and other long-term patterns.

Wikipedia and the Times will be the first to try out Google’s automated harassment detector on comment threads and article discussion pages. Wikimedia is still considering exactly how it will use the tool, while the Times plans to make Conversation AI the first pass of its website’s comments, blocking any abuse it detects until it can be moderated by a human. Jigsaw will also make its work open source, letting any web forum or social media platform adopt it to automatically flag insults, scold harassers, or even auto-delete toxic language, preventing an intended harassment victim from ever seeing the offending comment. The hope is that “anyone can take these models and run with them,” says Adams, who helped lead the machine learning project.
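Stripped to its core, a first-pass deployment like the one the Times describes is a gate in front of the comment queue. The sketch below assumes a generic score_attack() callable returning an attack probability between 0 and 1, plus an arbitrary threshold; both are stand-ins, not part of any published Conversation AI interface.

```python
# A hypothetical first-pass moderation gate of the kind described above:
# flagged comments are held for a human moderator, everything else publishes.
# score_attack() and the threshold are illustrative stand-ins.
from typing import Callable

ATTACK_THRESHOLD = 0.8  # assumed cutoff; a real deployment would tune this

def moderate(comment: str, score_attack: Callable[[str], float]) -> str:
    """Return 'hold_for_review' or 'publish' for a single comment."""
    score = score_attack(comment)   # 0.0 (benign) .. 1.0 (attack)
    if score >= ATTACK_THRESHOLD:
        return "hold_for_review"    # a human sees it before readers do
    return "publish"
```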

Adams types in “What’s up, bitch?” and clicks Score. Conversation AI instantly rates it a 63 out of 100 on the attack scale.

What’s more, some limited evidence suggests that this kind of quick detection can actually help to tame trolling. Conversation AI was inspired in part by an experiment undertaken by Riot Games, the videogame company that runs the world’s biggest multiplayer game, known as League of Legends, with 67 million players. Starting in late 2012, Riot began using machine learning to try to analyze the results of in-game conversations that led to players being banned. It used the resulting algorithm to show players in real time when they had made sexist or abusive remarks. When players saw immediate automated warnings, 92 percent of them changed their behavior for the better, according to a report in the science journal Nature.

My own hands-on test of Conversation AI comes one summer afternoon in Jigsaw’s office, when the group’s engineers show me a prototype and invite me to come up with a sample of verbal filth for it to analyze. Wincing, I suggest the first ambiguously abusive and misogynist phrase that comes to mind: “What’s up, bitch?” Adams types in the sentence and clicks Score. Conversation AI instantly rates it a 63 out of 100 on the attack scale. Then, for contrast, Adams shows me the results of a more clearly vicious phrase: “You are such a bitch.” It rates a 96.

In fact, Conversation AI’s algorithm goes on to make impressively subtle distinctions. Pluralizing my trashy greeting to “What’s up bitches?” drops the attack score to 45. Add a smiling emoji and it falls to 39. So far, so good.

But later, after I’ve left Google’s office, I open the Conversation AI prototype in the privacy of my apartment and try out the worst phrase that had haunted Sarah Jeong: “I’m going to rip each one of her hairs out and twist her tits clear off.” It rates an attack score of 10, a glaring oversight. Swapping out “her” for “your” boosts it to a 62. Conversation AI likely hasn’t yet been taught that threats don’t have to be addressed directly at a victim to have their intended effect. The algorithm, it seems, still has some lessons to learn.
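Blind spots like this are exactly what a small probe suite can surface. The sketch below scores a phrase alongside minor rewordings; score_attack() again stands in for the prototype’s scoring call, and the numbers quoted above came from the real prototype, not from this code.

```python
# A sketch of probing a scorer with variant phrasings, to surface gaps like
# the third-person threat described above. score_attack() is a stand-in for
# the prototype's scoring call; nothing here reproduces its actual numbers.
PROBES = [
    "What's up, bitch?",
    "What's up bitches?",
    "You are such a bitch.",
    "I'm going to rip each one of your hairs out.",  # addressed to the victim
    "I'm going to rip each one of her hairs out.",   # third-person variant
]

def run_probes(score_attack) -> None:
    for text in PROBES:
        print(f"{100 * score_attack(text):5.1f}  {text}")
```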

FOR A TECH EXECUTIVE taking on would-be terrorists, state-sponsored trolls, and tyrannical surveillance regimes, Jigsaw’s creator has a surprisingly sunny outlook on the battle between the people who use the Internet and the authorities that seek to control them. “I have a fundamental belief that technology empowers people,” Jared Cohen says. Between us sits a coffee table covered in souvenirs from his travels: a clay prayer coin from Iraq, a plastic-wrapped nut bar from Syria, a packet of North Korean cigarettes. “It’s hard for me to imagine a world where there’s not a continued cat-and-mouse game. But over time, the mouse might just become bigger than the cat.”

 

JIGSAW’S PROJECTS

Project Shield

Montage

Password Alert

The Redirect Method

Conversation AI

Digital Attack Map

When Cohen became the youngest person ever to join the State Department’s Policy Planning Staff in 2006, he brought with him a notion that he’d formed from seeing digitally shrewd Middle Eastern youths flout systems of control: that the Internet could be a force for political empowerment and even upheaval. And as Facebook, then YouTube and Twitter, started to evolve into tools of protest and even revolution, that theory earned him access to officials far above his pay grade—all the way up to secretaries of state Condoleezza Rice and later Hillary Clinton. Rice would describe Cohen in her memoirs as an “inspired” appointment. Former Policy Planning director Anne-Marie Slaughter, his boss under Clinton, remembers him as “ferociously intelligent.”

Many of his ideas had a digital twist. After visiting Afghanistan, Cohen helped create a cell-phone-based payment system for local police, a move that allowed officers to speed up cash transfers to remote family members. And in June of 2009, when Twitter had scheduled downtime for maintenance during a massive Iranian protest against hardliner president Mahmoud Ahmadinejad, Cohen emailed founder Jack Dorsey and asked him to keep the service online. The unauthorized move, which violated the Obama administration’s noninterference policy with Iran, nearly cost Cohen his job. But when Clinton backed Cohen, it signaled a shift in the State Department’s relationship with both Iran and Silicon Valley.

Around the same time, Cohen began calling up tech CEOs and inviting them on tech delegation trips, or “techdels”—conceived to somehow inspire them to build products that could help people in repressed corners of the world. He asked Google’s Schmidt to visit Iraq, a trip that sparked the relationship that a year later would result in Schmidt’s invitation to Cohen to create Google Ideas. But it was Cohen’s email to Twitter during the Iran protests that most impressed Schmidt. “He wasn’t following a playbook,” Schmidt tells me. “He was inventing the playbook.”

The story Cohen’s critics focus on, however, is his involvement in a notorious piece of software called Haystack, intended to provide online anonymity and circumvent censorship. They say Cohen helped to hype the tool in early 2010 as a potential boon to Iranian dissidents. After the US government fast-tracked it for approval, however, a security researcher revealed it had egregious vulnerabilities that put any dissident who used it in grave danger of detection. Today, Cohen disclaims any responsibility for Haystack, but two former colleagues say he championed the project. His former boss Slaughter describes his time in government more diplomatically: “At State there was a mismatch between the scale of Jared’s ideas and the tools the department had to deliver on them,” she says. “Jigsaw is a much better match.”

But inserting Google into thorny geopolitical problems has led to new questions about the role of a multinational corporation. Some have accused the group of trying to monetize the sensitive issues they’re taking on; the Electronic Frontier Foundation’s director of international free expression, Jillian York, calls its work “a little bit imperialistic.” For all its altruistic talk, she points out, Jigsaw is part of a for-profit entity. And on that point, Schmidt is clear: Alphabet hopes to someday make money from Jigsaw’s work. “The easiest way to understand it is, better connectivity, better information access, we make more money,” he explains to me. He draws an analogy to the company’s efforts to lay fiber in some developing countries. “Why would we try to wire up Africa?” he asks. “Because eventually there will be advertising markets there.”

“We’re not a government,” Eric Schmidt says slowly and carefully. “We’re not engaged in regime change. We don’t do that stuff.”

Throwing out well-intentioned speech that resembles harassment could be a blow to exactly the open civil society Jigsaw has vowed to protect. When I ask Conversation AI’s inventors about its potential for collateral damage, the engineers argue that its false positive rate will improve over time as the software continues to train itself. But on the question of how its judgments will be enforced, they say that’s up to whoever uses the tool. “We want to let communities have the discussions they want to have,” says Conversation AI cocreator Lucas Dixon. And if that favors a sanitized Internet over a freewheeling one? Better to err on the side of civility. “There are already plenty of nasty places on the Internet. What we can do is create places where people can have better conversations.”

ON A MUGGY MORNING in June, I join Jared Cohen at one of his favorite spots in New York: the Soldiers’ and Sailors’ Monument, an empty, expansive, tomblike dome of worn marble in sleepy Riverside Park. When Cohen arrives, he tells me the place reminds him of the quiet ruins he liked to roam during his travels in rural Syria.

 

Our meeting is in part to air the criticisms I’ve heard of Conversation AI. But when I mention the possibility of false positives actually censoring speech, he answers with surprising humility. “We’ve been asking these exact questions,” he says. And they apply not just to Conversation AI but to everything Jigsaw builds, he says. “What’s the most dangerous use case for this? Are there risks we haven’t sufficiently stress-tested?”

Jigsaw runs all of its projects by groups of beta testers and asks for input from the same groups it intends to recruit as users, he says. But Cohen admits he never knows if they’re getting enough feedback, or the right kind. Conversation AI in particular, he says, remains an experiment. “When you’re looking at curbing online harassment and at free expression, there’s a tension between the two,” he acknowledges, a far more measured response than what I’d heard from Conversation AI’s developers. “We don’t claim to have all the answers.”

And if that experiment fails, and the tool ends up harming the exact free speech it’s trying to protect, would Jigsaw kill it? “Could be,” Cohen answers without hesitation.

 

I start to ask another question, but Cohen interrupts, unwilling to drop the notion that Jigsaw’s tools may have unintended consequences. He wants to talk about the people he met while wandering through the Middle East’s most repressive countries, the friends who hosted him and served as his guides, seemingly out of sheer curiosity and hospitality.

It wasn’t until after Cohen returned to the US that he realized how dangerous it had been for them to help him or even to be seen with him, a Jewish American during a peak of anti-Americanism. “My very presence could have put them at risk,” he says, with what sounds like genuine throat-tightening emotion. “To the extent I have a guilt I act on, it’s that. I never want to make that mistake again.”

Cohen still sends some of those friends, particularly ones in the war-torn orbit of Syria and ISIS, an encrypted message almost daily, simply to confirm that they’re alive and well. It’s an exercise, like the one he assigns to new Jigsaw hires but designed as maintenance for his own conscience: a daily check-in to assure himself his interventions in the world have left it better than it was before.

“Ten years from now I’ll look back at where my head is at today too,” he says. “What I got right and what I got wrong.” He hopes he’ll have done good.

Source : https://www.wired.com

