
Nobody likes internet trolls. They pop up in discussions they weren't invited to and upset as many people as possible.

Time and time again we are told the best thing to do is ignore the inflammatory, abusive things they put on forums, comment threads, and even social media posts.

However, this is easier said than done, and thankfully there is a block function on most online communities.

Trolls lie, exaggerate, and will say pretty much anything to get a response, but what makes them this way? And why do they insult others from behind a computer screen?

According to a study from last year, published in the journal Personality and Individual Differences, people with the highest scores of Dark Tetrad personality traits were more likely to say trolling was their favorite internet activity.

The researchers asked over 1,200 people to take part in personality tests to determine their levels of Dark Tetrad traits, which are narcissism, Machiavellianism, psychopathy, and sadism, then asked them to fill out a survey about their internet commenting habits.

"The associations between sadism and GAIT (Global Assessment of Internet Trolling) scores were so strong that it might be said that online trolls are prototypical everyday sadists," the authors of the paper wrote. "Both trolls and sadists feel sadistic glee at the distress of others. Sadists just want to have fun ... and the internet is their playground."

According to therapist and psychologist Perpetua Neo, this isn't surprising. In their everyday lives, a psychopath or narcissist is unlikely to get "caught out" by their friends and family, and trolling people anonymously gives them a release for their less favourable qualities.

"These people they are living double lives or triple lives. You might just find this narcissist actually has three families — it's not uncommon to hear stories like that," Neo told Business Insider. "Trolling is a very simple, low-cost kind of way — by cost I mean your time, energy and effort — to boost your sense of power."

In real life you can't upset multiple people at once without a great deal of effort. Online, trolls can be offending people left, right and centre, in a very short space of time.

"It's like a cat and mouse game," Neo said. "Their brains are probably firing off, because that's what trolling is about — it's about power over someone else, and this dominance thing, to bring someone to a lower level. Especially if you enjoy the suffering."

Source: This article was published on uk.businessinsider.com by Lindsay Dodgson


You, too, could become a troll. Not the mythological creature that hides under bridges, but one of those annoying people who post disruptive messages in internet discussion groups, "trolling" for attention, or who throw out off-topic, racist, sexist or politically controversial rants. The term has come to be applied to posters who use offensive language, harass other posters and generally conjure up the image of an ugly, deformed beast.

It has been assumed that trolls are just naturally nasty people being themselves online, but according to Cornell research, what makes a troll is a combination of a bad mood and the bad example of other trolls.

"While prior work suggests that trolling behavior is confined to a vocal and anti-social minority, ordinary people can, under the right circumstances, behave like trolls," said Cristian Danescu-Niculescu-Mizil, assistant professor of information science. He and his colleagues actually caused that to happen in an online experiment.

They described their research at the 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing, Feb. 25–March 1 in Portland, Oregon, where they received the Best Paper Award. The team included Stanford University computer science professors Michael Bernstein and Jure Leskovec, and their doctoral student Justin Cheng '12.

To tease out possible causes of trolling, the researchers set up an online experiment. Through the Amazon Mechanical Turk service, where people can be hired to perform online tasks for a small hourly payment, they recruited people to participate in a discussion group about current events.

Participants were first given a quiz consisting of logic, math and word problems, then shown a news item and invited to comment. To compare the effects of positive and negative mood, some participants were given harder questions or were told afterward that they had performed poorly on the quiz. To compare the effects of exposure to other trolls, some were led into discussions already seeded with real troll posts copied from comments on CNN.com. The experiment showed that both a bad mood and a bad example could lead to offensive posting.

Following up, the researchers reviewed 16 million posts on CNN.com, noting which posts were flagged by moderators, and applying computer text analysis and human review of samples to confirm that these qualified as trolling. They found that as the number of flagged posts among the first four posts in a discussion increases, the probability that the fifth post is also flagged increases. Even if only one of the first four posts was flagged, the fifth post was more likely to be flagged. This study was described in a separate paper, "Antisocial Behavior in Online Discussion Communities," presented at the Ninth International AAAI Conference on Web and Social Media, May 2015 at Oxford University.

Using day and time as a stand-in for mood, they found that ordinary posters were more likely to troll late at night, and more likely on Monday than Friday.

It might be possible to build some troll reduction into the design of discussion groups, the researchers propose. A person likely to start trolling could be identified based on recent participation in discussions where they might have been involved in heated debate. Mood can be inferred from keystroke patterns. In these and other cases, a time limit on new postings might allow for cooling off. Moderators could remove troll posts to limit contagion. Allowing users to retract posts may help, they added, as would reducing other sources of user frustration, such as poor interface design or slow loading times.
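To make the cooling-off idea concrete, here is a minimal sketch of such a gate in Python; the 10-minute window, data model and function names are illustrative assumptions, not details from the study.

```python
# Minimal sketch, assuming a cooling-off window: a user whose last post
# was flagged must wait before posting again. The threshold is hypothetical.
import time

COOLDOWN_SECONDS = 600  # assumed 10-minute cooling-off window
last_flagged_at: dict[str, float] = {}  # user -> time of last flagged post

def record_flag(user: str) -> None:
    last_flagged_at[user] = time.time()

def may_post(user: str) -> bool:
    """Allow posting once the cooling-off window has elapsed."""
    flagged = last_flagged_at.get(user)
    return flagged is None or time.time() - flagged >= COOLDOWN_SECONDS
```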

The point of their research, the researchers conclude, is to show that not all trolling is done by inherently anti-social people, so looking at the whole situation may better reflect the reality of how trolling occurs, and perhaps help us see less of it.

Source: This article was published on phys.org by Bill Steele


Is there a troll’s voice inside all of us at one time or another?

In today’s online climate, we are witnessing friends unfollowing or actually unfriending each other, while people are boldly insulting each other with their offensive opinions, thoughts and comments.

Research from Stanford and Cornell University suggests that under the right circumstances, we all have a troll lingering in us.

Lindsey Blackwell, a researcher of online harassment at the University of Michigan, concurs. She points out that technology has the ability to amplify our behavior: not only does our best shine through online, but so does our worst.

Attorney Mitch Jackson, a social media leader and influencer, has experienced his share of trolls. He places them in two categories:

· Recreational trolls - the ones who are simply annoying and will eventually go away after you block and ignore them.

· Criminal trolls - the ones you need to take more seriously, who are out to seriously harm you and/or your business.

When I wrote Google Bomb with the late John Dozier, a leading internet attorney, he described trolls as ten scofflaw personas:

1. Pick-pocket

This is the guy who used to wait on street corners for elderly ladies to pass. He enjoys attacking defenseless people and stealing covertly using deception.

2. Wacko

We usually identify a wacko situation quickly. There are distinctive characteristics of his communications. The wacko is usually a “follower,” someone looking to gain attention and recognition, but escalates what may have started as fair criticism into more and more outrageous claims.

3. Druggie

Or, maybe “liquid courage” would be more appropriate. This guy is exactly what comes to mind. During the day this blogger is a normal guy, but at night he returns to the sanctity of his home, gets drunk or high, and goes out on the web looking for “hook-ups” and blogging on his “hang-ups.”

4. Alien

No, not from another world. But from overseas. In a far, far away place, without any treaty with the US, in a country without an effective legal system and no notion of business or personal property ownership rights.

5. Nerd

This is the guy who is scared to talk with a girl, but behind the keyboard, all alone, morphs into a Casanova. This empowerment of anonymity creates an omnipotent persona, and for the first time the nerd feels the effect of power and control, gets an adrenaline buzz when he exercises it, and he exercises it often, usually creating or perpetuating a volatile situation in which he feels he can outsmart the “opposition.”

6. Rookie

Enjoy debating a thirteen-year-old? They are out on the net acting like adults, posting statements and play-acting like grown-ups.

7. Sadist

This person attacks others, causes pain, and revels in the results in ways not worthy of mention. He loves to create, direct, control, and unleash a firestorm of criticism about you or your company just to create pain and damage.

8. Bankrupt

No, not morally bankrupt. Actually bankrupt…no money, no assets, no prospects for work, and nothing to lose.

9. Criminal

Career criminals, no less. Like the convicted felon running a sophisticated extortion scheme against a very prominent business.

10. Mis-Leader

This person is in no manner a leader. This blogger has a hidden agenda, but he just makes it sound like he is a totally objective commentator.

Invasion of Trolls

Over the past few years we have watched platforms such as Huffington Post put an end to anonymous comments, and NPR, Popular Science and Motherboard shut them down completely due to online commenters harassing each other and abusing the privilege of leaving comments.

These were not children polluting their sites. People leaving comments on these platforms were likely adults, yet they were acting like toddlers with a keypad — poking, teasing and harassing their playmates on a playground with no concern that they are humans too.

Where has civility gone, when pissing off a fellow parent in the carpool line ends with you being trashed and trolled on social media? Want to break up with your partner, but fear you'll end up as a victim of e-venge? Could it be that when people are pushed to their limit they are recognizing the power of the keyboard?

Yes.

Justin Cheng, a researcher at Stanford University and lead author of the study above, wanted to better understand why trolling is so prevalent today.

“While the common knowledge is that trolls are particularly sociopathic individuals that occasionally appear in conversations, is it really just these people who are trolling others?” 


We may think we know the descriptions of trolls, and even have ideas of who these people are: usually the cesspool of the web. The research, however, uncovers that you or I could easily be pushed to a point of digital warfare. Be in a bad mood, wake up on the wrong side of the bed, or feel passionate about a heated trending topic, and your fingers may go flying. And when a thread has already sparked a fire of negative comments, some see it as a green light to pile on the insults: why not add my two cents, everyone else is.

This gang-like trolling behavior is a “spiral of negativity”, explains Jure Leskovec, senior author of the above study.

“Just one person waking up cranky can create a spark and, because of discussion context and voting, these sparks can spiral out into cascades of bad behavior. Bad conversations lead to bad conversations. People who get down-voted come back more, comment more and comment even worse.”


There is an art to commenting when you don’t agree with posters. We should learn to be constructive, not combative in our responses.

It goes back to the old cliché many of us were taught by our grandparents and parents: “If you don’t have anything nice to say, don’t say it at all.” We need to take this advice online.

Takeaway tips:

  • When in doubt, click out.
  • Pause before posting.
  • Recognize we are all a click away from being a troll.

Source: This article was published on huffingtonpost.com by Sue Scheff


Internet trolls are powerless without anonymity: By obscuring their identities behind random screen names, they can engage in hateful exchanges and prey on complete strangers without fear of retaliation. In doing so, they illustrate one troubling characteristic of humankind: People are shitty to people they don’t know.

Fortunately, curbing this tendency is extremely straightforward, scientists report in a new study.

In a new paper to be published in Science Advances, a team of scientists reveal a simple solution for getting people to cooperate: “Removing the cloak of anonymity seems to go a long way towards making people more thoughtful about their actions,” Hokkaido University biophysicist and economist Marko Jusup, Ph.D., a co-author on the study, tells Inverse in an e-mail.

This conclusion may seem like a no-brainer, but in fact it’s not always clear what factors lead people to cooperate rather than engage in conflict.

Trolling: Easy to do when people can't see your face.

To investigate what makes people choose conflict over peace, he and his colleagues from China’s Northwestern Polytechnical University ran a slightly modified version of the classic social cooperation experiment known as the “Prisoner’s Dilemma.” The setup was simple: Pairs of strangers in a mock court are told they’ll each get a prize by testifying against the other, but that they’ll both get fined if both of them do so. If they both remain silent, however, they’ll both walk free. The idea is that ratting on your partner gives you a better reward than cooperating with them, so it’s expected that most rational-thinking people won’t play nice. In their experiments, Jusup and his colleagues modified this concept slightly to see whether people would play nice if they weren’t strangers.
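To make the payoff logic concrete, here is a minimal sketch of the dilemma in Python; the numeric payoffs are assumptions chosen only to reproduce the ordering described above, since the article gives no numbers.

```python
# Minimal sketch of the Prisoner's Dilemma payoffs described above.
# The numbers are hypothetical: defecting against a silent partner pays
# best for the defector, mutual silence is best for the pair, and mutual
# testimony is worst.
PAYOFFS = {
    # (my_move, their_move): (my_payoff, their_payoff)
    ("silent", "silent"):   (3, 3),   # both walk free
    ("silent", "testify"):  (0, 5),   # I take the fall, they get the prize
    ("testify", "silent"):  (5, 0),   # I get the prize
    ("testify", "testify"): (1, 1),   # both fined
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my own payoff against a fixed opponent move."""
    return max(("silent", "testify"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Whatever the other player does, testifying pays more for me individually,
# even though mutual silence is better for the pair -- that is the dilemma.
assert best_response("silent") == "testify"
assert best_response("testify") == "testify"
```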

Their 154 participants had “prior knowledge of each other” and were roughly the same age with the same interests, Jusup says. They were given the option to maintain or revoke their anonymity during the game, which lasted multiple rounds. The scientists found that when participants knew each other, they were much more likely to cooperate with each other than if they were total strangers. “This paid out very well for all - so, winners play nice,” he said in a statement.

The central question in studying social cooperation, Jusup says, is this: What do prospective cooperators know about each other? The research shows that mutual recognition — even just seeing a familiar face — forces people to act more thoughtfully.

The power of mutual recognition was illustrated in the famous ferry scene in The Dark Knight, which showed a much more sinister version of the same Prisoner’s Dilemma. In the scene, the Joker traps two groups of Gotham citizens in separate ferries; both ferries are lined with explosives, and each group has a detonator to blow the other up and set itself free in the next 30 minutes. If neither group triggers the detonator, the Joker will blow them both up. The Joker, of course, thinks that both groups, comprising complete strangers in an evil town who owe each other nothing, should have no problem blowing up the other. But the captives, amid their deliberations, come to an important realization: They’re not strangers but together are Gothamites, and thus they share a set of beliefs and morals that binds them.

Ultimately, recognizing similarities involves exchanging details about identity, and this in turn builds trust. Jusup thinks we can apply the findings from his study to pretty much any situation that requires cooperation. “It would seem that people should take a little time to exchange information about one another before getting down to ‘business’,” he says. “Such an exchange, according to our results, should put people into a more cooperative frame of mind.”

As for dealing with internet trolls, he thinks it’s much more likely that online conversations will stop devolving “towards the point where participants simply insult one another” if users are forced to share some personal information for others to see, like a simple profile or a photo.

But even niceness has its limits, Jusup found: Cycles of conflict and retaliation began when one participant punished the other for screwing them over previously. He and his colleagues thought that punishment might lead a participant to wise up and act more cooperatively the next time around, but the results suggest social cooperation isn’t that straightforward. “This high level of onymity bears a question: To what extent could onymity be lowered and still promote cooperation?” Jusup asks, noting that his follow-up work will address this question. While there’s little doubt that forcing anonymous trolls to reveal themselves will reduce their hatefulness, it remains to be seen how to deal with trolls you know.

Source: inverse.com


I’m going to confess an occasional habit of mine, which is petty, and which I would still enthusiastically recommend to anyone who frequently encounters trolls, Twitter eggs, or other unpleasant characters online.

Sometimes, instead of just ignoring a mean-spirited comment like I know I should, I type in the most cathartic response I can think of, take a screenshot and then file that screenshot away in a little folder I only revisit when I want to make my coworkers laugh.

I don’t actually send the response. I delete my silly comeback and move on with my life. For all the troll knows, I never saw the original message in the first place. The original message being something like the suggestion, in response to a piece I once wrote, that there should be a special holocaust just for women.

It’s bad out there, man!

We all know it by now. The internet, like the rest of the world, can be as gnarly as it is magical.

But there’s a sense lately that the lows have gotten lower, that the trolls who delight in chaos are newly invigorated and perhaps taking over all of the loveliest, most altruistic spaces on the web. There’s a real battle between good and evil going on. A new report by the Pew Research Center and Elon University’s Imagining the Internet Center suggests technologists widely agree: The bad guys are winning.

Researchers surveyed more than 1,500 technologists and scholars about the forces shaping the way people interact with one another online. They asked: “In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust and disgust?”

The vast majority of those surveyed—81 percent of them—said they expect the tone of online discourse will either stay the same or get worse in the next decade.

Not only that, but some of the spaces that will inevitably crop up to protect people from trolls may contribute to a new kind of “Potemkin internet,” pretty façades that hide the true lack of civility across the web, says Susan Etlinger, a technology industry analyst at the Altimeter Group, a market research firm.

“Cyberattacks, doxing and trolling will continue, while social platforms, security experts, ethicists and others will wrangle over the best ways to balance security and privacy, freedom of speech and user protections. A great deal of this will happen in public view,” Etlinger told Pew. “The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor.”

Tor is software that enables people to browse and communicate online anonymously—so it’s used by people who want to cover their tracks from government surveillance, those who want to access the dark web, trolls, whistleblowers and others.

“Of course, this is already happening, just out of sight of most of us,” Etlinger said, referring to the use of hidden channels online. “The worst outcome is that we end up with a kind of Potemkin internet in which everything looks reasonably bright and sunny, which hides a more troubling and less transparent reality.”

The uncomfortable truth is that humans like trolling. It’s easy for people to stay anonymous while they harass, pester and bully other people online—and it’s hard for platforms to design systems to stop them. Hard for two reasons: One, because of the “ever-expanding scale of internet discourse and its accelerating complexity,” as Pew puts it. And, two, because technology companies seem to have little incentive to solve this problem for people.

“Very often, hate, anxiety and anger drive participation with the platform,” said Frank Pasquale, a law professor at the University of Maryland, in the report. “Whatever behavior increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”

News organizations, which once set the tone for civic discourse, have less cultural importance than they once did. The rise of formats like cable news—where so much programming involves people shouting at one another—and talk radio are clear departures from a once-higher standard of discourse in professional media.

Few news organizations are stewards for civilized discourse in their own comment sections, which sends mixed messages to people about what’s considered acceptable. And then, of course, social media platforms like Facebook and Twitter serve as the new public square.

“Facebook adjusts its algorithm to provide a kind of quality—relevance for individuals,” said Andrew Nachison, the founder of We Media, in his response to Pew. “But that’s really a ruse to optimize for quantity. The more we come back, the more money they make... So the shouting match goes on.”

The resounding message in the Pew report is this: There’s no way the problem in public discourse is going to solve itself. “Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day,” said Randy Bush, a research fellow at Internet Initiative Japan, in his response to Pew.

Many of those polled said we’re now witnessing the emergence of “flame wars and strategic manipulation” that will only get worse. This goes beyond obnoxious comments, or Donald Trump’s tweets, or even targeted harassment. Instead, we’ve entered the realm of “weaponized narrative” as a 21st-century battle space, as the authors of a recent Defense One essay put it. And just like other battle spaces, humans will need to develop specialized technology for the fight ahead.

Researchers have already used technology to begin to understand what they’re up against. Earlier this month, a team of computer scientists from Stanford University and Cornell University wrote about how they used machine-learning algorithms to forecast whether a person was likely to start trolling. Using their algorithm to analyze a person’s mood and the context of the discussion they were in, the researchers got it right 80 percent of the time.

They learned that being in a bad mood makes a person more likely to troll, and that trolling is most frequent late at night (and least frequent in the morning). They also tracked the propensity for trolling behavior to spread. When the first comment in a thread is written by a troll—a nebulous term, but let’s go with it—then it’s twice as likely additional trolls will chime in, compared with a conversation not started by a troll, the researchers found. On top of that, the more troll comments there are in a discussion, the more likely it is participants will start trolling in other, unrelated threads.

“A single troll comment in a discussion—perhaps written by a person who woke up on the wrong side of the bed—can lead to worse moods among other participants, and even more troll comments elsewhere,” the Stanford and Cornell researchers wrote. “As this negative behavior continues to propagate, trolling can end up becoming the norm in communities if left unchecked.”

Using technology to understand when and why people troll is essential, but many people agree the scale of the problem requires technological solutions. Stopping trolls isn’t as simple as creating spaces that prevent anonymity, many of those surveyed told Pew, because doing so also enables “governments and dominant institutions to even more freely employ surveillance tools to monitor citizens, suppress free speech and shape social debate,” Pew wrote.

“One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long,” Bailey Poland, the author of “Haters: Harassment, Abuse and Violence Online,” told Pew. Pseudonymity may be one useful approach—so someone’s offline identity is concealed, but their behavior in a certain forum over time can be analyzed in response to allegations of harassment. Machines can help, too: Chatbots, filters and other algorithmic tools can complement human efforts. But they’ll also complicate things.
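As a rough illustration of how pseudonymity can conceal identity while keeping per-forum behavior analyzable, here is a minimal sketch; the derivation scheme and all names are assumptions for illustration only.

```python
# Minimal sketch of the pseudonymity idea above: the offline identity stays
# concealed, but behavior within one forum accrues to a stable handle that
# moderators can review. The hashing scheme is hypothetical.
import hashlib

def pseudonym(user_secret: str, forum_id: str) -> str:
    """Derive a stable per-forum handle that does not reveal the user."""
    digest = hashlib.sha256(f"{user_secret}:{forum_id}".encode()).hexdigest()
    return f"user-{digest[:8]}"

# Same person, same forum -> same handle, so a pattern of harassment can
# be analyzed over time; different forums -> unlinkable handles.
assert pseudonym("alice-secret", "newsforum") == pseudonym("alice-secret", "newsforum")
assert pseudonym("alice-secret", "newsforum") != pseudonym("alice-secret", "sportsforum")
```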

“When chatbots start running amok—targeting individuals with hate speech—how will we define ‘speech’?” said Amy Webb, the CEO of the Future Today Institute, in her response to Pew. “At the moment, our legal system isn’t planning for a future in which we must consider the free speech infringements of bots.”

Another challenge is that no matter what solutions people devise to fight trolls, the trolls will fight back. Even among those who are optimistic that the trolls can be beaten back, and that civic discourse will prevail online, there are myriad unknowns ahead.

“Online discourse is new, relative to the history of communication,” said Ryan Sweeney, the director of analytics at Ignite Social Media, in his response to the survey. “Technological evolution has surpassed the evolution of civil discourse. We’ll catch up eventually. I hope. We are in a defining time.”

Source: nextgov.com


When it comes to internet trolls, online harassment and fake news, there’s not a lot of light at the end of the online tunnel. And things are probably going to get darker.

Researchers at the Pew Research Center and Elon University’s Imagining the Internet Center asked 1,537 scholars and technologists what they think the future of the Internet – in terms of how long people will continue to treat each other like garbage – holds. An overwhelming 81 percent said the trolls are winning.

Specifically, the survey asked: “In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?”

Forty-two percent of respondents think the internet will stay about the same over the next 10 years, while 39 percent said they expect discourse to get even more hostile. Only 19 percent predicted any sort of decline in abuse and harassment. Pew stated that the interviews were conducted between July 1 and August 12 – well before the term “fake news” started making daily headlines.

“People are attracted to forums that align with their thinking, leading to an echo effect,” Vint Cerf, a vice president at Google, said. “This self-reinforcement has some of the elements of mob (flash-crowd) behavior. Bad behavior is somehow condoned because ‘everyone’ is doing it.”

Respondents could submit comments with their answers, and the report is chock full (literally hundreds) of remarks from professors, engineers and tech leaders.

Experts blamed the rotting internet culture on every imaginable factor: the rise of click-bait, bot accounts, unregulated comment sections, social media platforms serving as anonymous public squares, the hesitation of anyone who avoids condemning vitriolic posts for fear of stepping on free speech or violating First Amendment rights — and even someone merely having a bad day.

The steady decline of the public’s trust in media is another unhelpful factor. People have historically adopted their barometer for civil discourse from news organizations, which, in the age of social media and the cable news format, is just no longer the case.

“Things will stay bad because to troll is human,” the report states. Basically, humanity has always been awful, but now it’s in the plainest sight.

But setting up a system to simply punish the bad actors isn’t necessarily the solution, and could result in a sort of “Potemkin internet.” The term comes from Grigory Potemkin, a Russian military leader in the 18th century who fell in love with Catherine the Great and built fake villages along one of her routes to make it look like everything was going great. A “Potemkin village” is built to fool others into thinking a situation is far better than it is.

“The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor,” Susan Etlinger, a technology industry analyst, told Pew. “Of course, this is already happening, just out of sight of most of us.”

Tor is free, downloadable software that lets you anonymously browse the web. It’s pretty popular among trolls, terrorists and people who want to get into the dark web or evade government surveillance.

But these tools aren’t always employed for dark purposes.

“Privacy and anonymity are double-edged swords online because they can be very useful to people who are voicing their opinions under authoritarian regimes,” Norah Abokhodair, an information privacy researcher at the University of Washington, wrote in the report. “However the same technique could be used by the wrong people and help them hide their terrible actions.”

Glass-half-full respondents did offer a glimmer of hope. Most of the experts on the side of “it’s going to get better” placed their bets on technology’s ability to advance and serve society. One anonymous security engineer wrote that “as the tools to prevent harassment improve, the harassers will be robbed of their voices.”

But for now, we have a long way to go.

“Accountability and consequences for bad action are difficult to impose or toothless when they do,” Baratunde Thurston, a fellow at MIT Media Lab who’s also worked at The Onion and Fast Company, wrote. “To quote everyone ever, things will get worse before they get better.”

Source: nypost.com


The internet can be a harsh place. It seems like for every feel-good story or picture of a puppy playing with a kitten, there are 1,000 trolls rummaging through the depths of their minds to post the most vile comments they can imagine. And if you’re a woman or person of color, well, multiply that troll army by 10.

But hey, that’s the internet, right? Except it doesn’t have to be that way. And it might not be for much longer if the folks at Google (GOOG, GOOGL) subsidiary Jigsaw have their way. A kind of high-powered startup inside Google’s parent company Alphabet, Jigsaw focuses on how technology can defend international human rights.

The toxicity of trolls

The company’s latest effort is called the Perspective API. Available Thursday, Feb. 23, Perspective is the result of Jigsaw’s Conversation AI project and uses Google’s machine learning technologies to provide online publishers with a tool that can automatically rank comments in their forums and comments sections based on the likelihood that they will cause someone to leave a conversation. Jigsaw refers to this as a “toxicity” ranking.

“At its core, Perspective is a tool that simply takes a comment and returns back this score from 0 to 100 based on how similar it is to things that other people have said that are toxic,” explained product manager CJ Adams.

Jigsaw doesn’t become the arbiter of what commenters can and can’t say in a publisher’s comment section, though. Perspective is only a tool that publishers use as they see fit. For example, they can give their readers the ability to filter comments based on their toxicity level, so they’ll only see non-toxic posts. Or the publisher could provide a kind of feedback mechanism that tells you if your comments are toxic.

The tool won’t stop you from submitting toxic comments, but it will provide you with the nudge to rethink what you’re writing.

Perspective isn’t just a bad word filter, though. Google’s machine learning actually gives the tool the ability to understand context. So it will eventually be able to tell the difference between telling someone a vacuum cleaner can really suck and that they suck at life.

Perspective still makes mistakes, as I witnessed during a brief demo. But the more comments and information it’s fed, the more it can learn about how to better understand the nuances of human communication.
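As a concrete illustration of the scoring contract Adams describes, the sketch below sends one comment to a toxicity-scoring endpoint and reads back a score. The endpoint and field names follow Perspective's later public REST documentation as best I know it and should be treated as assumptions rather than details confirmed by this article; the service returns a probability between 0 and 1, scaled here to the 0-100 range quoted above.

```python
# Minimal sketch of scoring one comment for toxicity. The endpoint and
# field names are assumed from Perspective's public docs, not this article.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "You suck at life."},
    "requestedAttributes": {"TOXICITY": {}},
}

resp = requests.post(URL, json=payload).json()
# The service returns a 0-1 probability; scale it to the 0-100 score
# described in the article.
score = resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"] * 100
print(f"toxicity: {score:.0f}/100")
```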

Jigsaw’s global efforts

In its little over a year of existence, Jigsaw has implemented a series of projects designed to improve the lives of internet users around the world. Project Shield, for example, is a free service that protects news sites from distributed denial of service (DDoS) attacks. Redirect Method uses Adwords targeting tools to help refute ISIS’ online recruitment messages, while Montage helps researchers sort through thousands of hours of YouTube videos to find evidence of potential war crimes.

“We wake up and come to work every day to try to find ways to use technology to make people around the world safer,” Jigsaw President Jared Cohen said. “We are at this nexus between international security and business.”

Cohen said Jigsaw’s engineers travel around the world to meet with internet users vulnerable to harassment and other online-based rights abuses, such as individuals promoting free speech or opposing authoritarian regimes, to understand their unique challenges. And one of the biggest problems, Cohen explained, has been online harassment.

Trolls aren’t always just cruel

Dealing with trolls is par for the course in the US. But in other countries, harassment in comment sections and forums can have political implications.

“In lots of parts of the world where we spend time, [harassment] takes on a political motivation, sectarian motivation, ethnic motivation, and it’s all sort of heightened and exacerbated,” Cohen explained.

But with Perspective, Jigsaw can start to cut down on those forms of harassing comments, and bring more people into online conversations.

“Our goal is to get as many people to rejoin conversations as possible and also to get people who every day are sort of entering the gauntlet of toxicity to have an opportunity to see that environment improve,” said Cohen.

The path to a better internet?

Jigsaw is already working with The New York Times and Wikipedia to improve their commenting systems. At The New York Times, the Perspective API is being used to let The Gray Lady enable more commenting sections on its articles.

Prior to using Perspective, The Times relied on employees to manually read and filter comments from the paper’s online articles. As a result, just 10% of stories could have comments activated. The Times is using Perspective to create an open source tool that will help reviewers run through comments more quickly and open up a larger number of stories to comments.

Wikipedia, meanwhile, has been using Perspective to detect personal attacks on its volunteer editors, something Jigsaw and the online encyclopedia recently published a paper on.

With the release of Perspective, publishers and developers around the world can take advantage of Google technologies to improve their users’ experiences. And the conversation filtering won’t just stop hateful comments. Cohen said the company is also working to provide publishers and their readers with the ability to filter out comments that are off-topic or generally don’t contribute to conversations.

If Perspective takes off, and a number of publications end up using the technology, the internet could one day have far fewer trolls lurking in its midst.

Source: https://www.yahoo.com/tech/how-google-is-fighting-the-war-on-internet-trolls-123048658.html

Google is attempting to tackle one of the most hostile places on the Internet: comment sections. This week, the search engine announced a new project called Perspective in collaboration with Jigsaw, a tech incubator owned by Google's parent company.
 
"Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime," said Jared Cohen, Jigsaw's president, in a statement about the problems Perspective aims to address.
 
Perspective is essentially a very smart online moderator. Using machine learning, the technology can identify toxic comments that might drive people who have something constructive to say away from the discussion. The tool was tested in collaboration with The New York Times, where reviewers are currently tasked with sifting through as many as 11,000 comments every day. (Other news sites, including The Week and Bloomberg, have resorted to ditching comment sections altogether.)
 
The hope is that Perspective will not only speed up the process of reviewing comments and open up new conversations online, but also prohibit toxic comments from being published in the first place.
 
Statistics about online harassment are alarming. According to a report from the Data & Society Research Institute, 47% of people online have experienced some form of abuse, leading 27% of Internet users to censor what they say online out of concern that they may become a target themselves.
 
Perspective is still in its early stages, but, if used by online publishers, could have a positive effect on those numbers.
Want to know if your comment would get labeled as toxic? Try the writing experiment here.
 
Author: Madeline Buxton

Three research methods find that the situation in which an online discussion occurs influences whether people will troll, more than their personal history of trolling does.

Internet trolls, by definition, are disruptive, combative and often unpleasant with their offensive or provocative online posts designed to disturb and upset.

Under the right circumstances, just about anybody can become an Internet troll, according to Stanford research. (Image credit: wildpixel / Getty Images)

The common assumption is that people who troll are different from the rest of us, allowing us to dismiss them and their behavior. But research from Stanford University and Cornell University, published as part of the upcoming 2017 Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2017), suggests otherwise. The research offers evidence that, under the right circumstances, anyone can become a troll.

“We wanted to understand why trolling is so prevalent today,” said Justin Cheng, a computer science researcher at Stanford and lead author of the paper. “While the common knowledge is that trolls are particularly sociopathic individuals that occasionally appear in conversations, is it really just these people who are trolling others?”

Taking inspiration from social psychology research methods, Cheng investigated whether trolling behavior is an innate characteristic or whether situational factors can influence people to act like trolls. Through a combination of experimentation, data analysis and machine learning, the researchers homed in on simple factors that make the average person more likely to troll.

Becoming a troll

Following previous research on antisocial behavior, the researchers decided to focus on how mood and context affect what people write on a discussion forum. They set up a two-part experiment with 667 subjects recruited through a crowdsourcing platform.

In the first part of the experiment, the participants were given a test, which was either very easy or very difficult. After taking the tests, all subjects filled out a questionnaire that evaluated various facets of their mood, including anger, fatigue, depression and tension. As expected, the people who completed the difficult test were in a worse mood than those who had the easy test.

All participants were then instructed to read an article and engage in its comment section. They had to leave at least one comment, but could leave multiple comments, cast up-votes and down-votes, and reply to other comments. All participants saw the same article on the same platform, created solely for the experiment, but some participants were given a forum with three troll posts at the top of the comment section. Others saw three neutral posts.

Two independent experts evaluated whether the posts left by subjects qualified as trolling, defined generally in this research by a combination of posting guidelines taken from several discussion forums. For example, personal attacks and cursing were indicative of troll posts.

About 35 percent of people who completed the easy test and saw neutral posts then posted troll comments of their own. That percentage jumped to 50 percent if the subject either took the hard test or saw trolling comments. People exposed to both the difficult test and the troll posts trolled approximately 68 percent of the time.

The spread of trolling

To relate these experimental insights to the real world, the researchers also analyzed anonymized data from CNN’s comment section from throughout 2012. This data consisted of 1,158,947 users, 200,576 discussions and 26,552,104 posts. This included banned users and posts that were deleted by moderators. In this part of the research, the team defined troll posts as those that were flagged by members of the community for abuse.

It wasn’t possible to directly evaluate the mood of the commenters, but the team looked at the time stamp of posts because previous research has shown that time of day and day of week correspond with mood. Incidents of down-votes and flagged posts lined up closely with established patterns of negative mood. Such incidents tend to increase late at night and early in the week, which is also when people are most likely to be in a bad mood.

The researchers investigated the effects of mood further and found that people were more likely to produce a flagged post if they had recently been flagged or if they had taken part in a separate discussion that merely included flagged posts written by others. These findings held true no matter what article was associated with the discussion.

“It’s a spiral of negativity,” explained Jure Leskovec, associate professor of computer science at Stanford and senior author of the study. “Just one person waking up cranky can create a spark and, because of discussion context and voting, these sparks can spiral out into cascades of bad behavior. Bad conversations lead to bad conversations. People who get down-voted come back more, comment more and comment even worse.”

Predicting bad behavior

As a final step in their research, the team created a machine-learning algorithm tasked with predicting whether the next post an author wrote would be flagged.

The information fed to the algorithm included the time stamp of the author’s last post, whether the last post was flagged, whether the previous post in the discussion was flagged, the author’s overall history of writing flagged posts and the anonymized user ID of the author.

The findings showed that the flag status of the previous post in the discussion was the strongest predictor of whether the next post would be flagged. Mood-related features, such as timing and previous flagging of the commenter, were far less predictive. The user’s history and user ID, although somewhat predictive, were still significantly less informative than discussion context. This implies that, while some people may be consistently more prone to trolling, the context in which we post is more likely to lead to trolling.
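As a rough illustration of this prediction task, the sketch below trains a classifier on synthetic data using the feature set named above. The toy data, label construction and model choice are my assumptions (the actual study used the CNN.com corpus and its own model); the labels here are deliberately wired so that discussion context dominates, echoing the finding.

```python
# Minimal sketch: predict whether an author's next post will be flagged,
# using the five features the article names. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000

features = [
    "hour_of_last_post",             # time stamp of the author's last post
    "last_post_flagged",             # whether the author's last post was flagged
    "prev_post_in_thread_flagged",   # discussion context
    "author_flag_history",           # fraction of the author's posts ever flagged
    "user_id_code",                  # anonymized user ID as an opaque code
]
X = np.column_stack([
    rng.integers(0, 24, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.random(n),
    rng.integers(0, 200, n),
])
# Synthetic labels, constructed so that discussion context matters most.
y = (0.5 * X[:, 2] + 0.2 * X[:, 1] + 0.2 * X[:, 3]
     + 0.3 * rng.random(n) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, importance in zip(features, model.feature_importances_):
    print(f"{name:30s} {importance:.2f}")
```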

Troll prevention

Between the real-life, large-scale data analysis, the experiment and the predictive task, the findings were strong and consistent. The researchers suggest that conversation context and mood can lead to trolling. They believe this could inform the creation of better online discussion spaces.

“Understanding what actually determines somebody to behave antisocially is essential if we want to improve the quality of online discussions,” said Cristian Danescu-Niculescu-Mizil, assistant professor of information science at Cornell University and co-author of the paper. “Insight into the underlying causal mechanisms could inform the design of systems that encourage a more civil online discussion and could help moderators mitigate trolling more effectively.”

Interventions to prevent trolling could include discussion forums that recommend a cooling-off period to commenters who have just had a post flagged, systems that automatically alert moderators to a post that’s likely to be a troll post or “shadow banning,” which is the practice of hiding troll posts from non-troll users without notifying the troll.
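Of these interventions, shadow banning is the easiest to picture in code. Below is a minimal sketch with an illustrative data model of my own invention: the troll's posts remain visible to the troll but are hidden from everyone else.

```python
# Minimal sketch of shadow banning: hide a troll's posts from other users
# without notifying the troll. Data model and names are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    shadow_banned: bool = False

def visible_posts(thread: list[Post], viewer: str) -> list[Post]:
    """Hide shadow-banned posts from everyone except their own author."""
    return [p for p in thread
            if not p.shadow_banned or p.author == viewer]

thread = [
    Post("alice", "Interesting article."),
    Post("troll42", "You are all idiots.", shadow_banned=True),
]
assert len(visible_posts(thread, viewer="alice")) == 1    # troll post hidden
assert len(visible_posts(thread, viewer="troll42")) == 2  # troll still sees it
```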

The researchers believe studies like this are only the beginning of work that’s been needed for some time, since the Internet is far from being the worldwide village of cordial debate and discussion people once thought it would become.

“At the end of the day, what this research is really suggesting is that it’s us who are causing these breakdowns in discussion,” said Michael Bernstein, assistant professor of computer science at Stanford and co-author of the paper. “A lot of news sites have removed their comments systems because they think it’s counter to actual debate and discussion. Understanding our own best and worst selves here is key to bringing those back.”

This work was supported in part by Microsoft, Google, the National Science Foundation, the Army Research Office, the U.S. Department of Defense, the Stanford Data Science Initiative, Boeing, Lightspeed, SAP and Volkswagen.

Author: Taylor Kubota

Source: http://news.stanford.edu/2017/02/06/stanford-research-shows-anyone-can-become-internet-troll/


An Internet troll is a member of an online social community who deliberately tries to disrupt, attack, offend or generally cause trouble within the community by posting certain comments, photos, videos, GIFs or some other form of online content.

You can find trolls all over the Internet -- on message boards, in your YouTube video comments, on Facebook, on dating sites, in blog comment sections and everywhere else that has an open area where people can freely post to express their thoughts and opinions. Controlling them can be difficult when there are a lot of community members, but the most common ways to get rid of them include either banning/blocking individual user accounts (and sometimes IP addresses altogether) or closing off comment sections entirely from a blog post, video page or topic thread.
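As a trivial illustration of the banning/blocking approach just described, here is a minimal sketch; the data model and example values are hypothetical.

```python
# Minimal sketch of ban/block moderation: reject posts from banned user
# accounts or banned IP addresses. Example values are made up.
BANNED_USERS = {"troll42"}
BANNED_IPS = {"203.0.113.7"}

def accept_post(user: str, ip: str, text: str) -> bool:
    """Reject posts from banned accounts or banned IP addresses."""
    return user not in BANNED_USERS and ip not in BANNED_IPS

assert accept_post("alice", "198.51.100.2", "Nice write-up!")
assert not accept_post("troll42", "198.51.100.2", "You are all idiots.")
```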

Regardless of where you'll find Internet trolls lurking, they all tend to disrupt communities in very similar (and often predictable) ways. This isn't by any means a complete list of all the different types of trolls out there, but they're most certainly some of the most common types you'll often come across in active online communities.

1-The insult troll

The insult troll is a pure hater, plain and simple. And they don't even really have to have a reason to hate or insult someone. These types of trolls will often pick on everyone and anyone -- calling them names, accusing them of certain things, doing anything they can to get a negative emotional response from them -- just because they can. In many cases, this type of trolling can become so severe that it can lead to or be considered a serious form of cyberbullying.

2-The persistent debate troll

This type of troll loves a good argument. They can take a great, thoroughly researched and fact-based piece of content, and come at it from all opposing discussion angles to challenge its message. They believe they're right, and everyone else is wrong. You'll often also find them leaving long threads or arguments with other commenters in community comment sections, and they're always determined to have the last word -- continuing to comment until that other user gives up. 

3-The grammar and spellcheck troll

You know this type of troll. They're the people who always have to tell other users that they have misspelled words and grammar mistakes. Even when they do it by simply commenting with the corrected word behind an asterisk symbol, it's pretty much never a welcome comment in any discussion. Some of them even use a commenter's spelling and grammar mistakes as an excuse to insult them.

4-The forever offended troll

When controversial topics are discussed online, they're bound to offend someone. That's normal. But then there are the types of trolls who can take a piece of content -- oftentimes a joke, a parody or something sarcastic -- and turn on the digital waterworks. They're experts at taking humorous pieces of content and turning them into an argument by playing the victim. People really do get upset by some of the strangest things said and done online.

5-The show-off, know-it-all or blabbermouth troll

A close relative to the persistent debate troll, the show-off or blabbermouth troll is a person who doesn't necessarily like to participate in arguments but does love to share his opinion in extreme detail, even spreading rumors and secrets in some cases. Think of that one family member or friend you know who just loves to hear his own voice. That's the Internet equivalent of the show-off or know-it-all or blabbermouth troll. They love to have long discussions and write lots of paragraphs about whatever they know, whether anyone reads it or not. 

6-The profanity and all-caps troll

Unlike some of the more intelligent trolls like the debate troll, the grammar troll and the blabbermouth troll, the profanity and all-caps troll is the guy who has nothing really of value to add to the discussion, spewing only F-bombs and other curse words with his caps lock button left on. In many cases, these types of trolls are just bored kids looking for something to do without needing to put too much thought or effort into anything. On the other side of the screen, they're often harmless.

7-The one word only troll

There's always that one contributor to a Facebook status update, a forum thread, an Instagram photo, a Tumblr post or any other form of social posting who just says "lol" or "what" or "k" or "yes" or "no." They're certainly far from the worst type of troll you'll meet online, but when a serious or detailed topic is being discussed, their one-word replies are just a nuisance to all who are trying to add value and follow the discussion.

8-The exaggeration troll

Exaggeration trolls can sometimes be a combination of know-it-alls, the offended and even debate trolls. They know how to take any topic or problem and completely blow it out of proportion. Some of them actually try to do it to be funny, and sometimes they succeed, while others do it just to be annoying. They rarely ever contribute any real value to a discussion and often bring up problems and issues that may arguably be unrelated to what's being discussed.

9-The off topic troll

It's pretty hard not to hate that guy who posts something completely off topic in any type of social community discussion. It can be even worse when that person succeeds in shifting the topic and everyone ends up talking about whatever irrelevant thing he posted. You see it all the time online -- in the comments of Facebook posts, in threaded YouTube comments, on Twitter and literally anywhere there are active discussions happening.

10-The greedy spammer troll

Last but not least, there's the dreaded spammer troll. This is the troll who truly could not care less about your post or discussion and is only posting to benefit himself. He wants you to check out his page, buy from his link, use his coupon code or download his free ebook. These trolls also include all those users you see littering discussions on Twitter and Instagram and every other social network with "follow me!!!" posts.

Author: Elise Moreau

Source: https://www.lifewire.com/types-of-internet-trolls-3485894
