
“For me, trust has to be earned. It’s not something that can be demanded or pulled out of a drawer and handed over. And the more government or the business sector shows genuine regard and respect for people’s privacy in their actions, as well as in their word and policies, the more that trust will come into being.” Dr. Anita L. Allen

Dr. Anita Allen serves as Vice Provost for Faculty and Henry R. Silverman Professor of Law and Philosophy at the University of Pennsylvania. Dr. Allen is a renowned expert in the areas of privacy, data protection, ethics, bioethics, and higher education. She authored the first casebook on privacy law and has been awarded numerous accolades and fellowships for her work. She earned her JD from Harvard and both her Ph.D. and master’s in philosophy from the University of Michigan. I had the opportunity to speak with her recently about her illustrious career, the origins of American privacy law, and her predictions about the information age.

 

Q: Dr. Allen, a few years ago you spoke to the Aspen Institute and offered a prediction that “our grandchildren will resurrect privacy from a shallow grave just in time to secure the freedom, fairness, democracy, and dignity we all value… a longing for solitude and independence of mind and confidentiality…” Do you still feel that way, and if so, what will be the motivating factors for reclaiming those sacred principles?

 

A: Yes, I believe that very hopeful prediction will come true, because there’s an increasing sense in the general public of the extent to which we have, perhaps unwittingly, ceded our privacy controls to the corporate sector and, in addition to that, to the government. I think the Facebook problems that have been so much in the news around Cambridge Analytica have made us sensitive and aware of the fact that, simply by doing things we enjoy, like communicating with friends on social media, we are putting our lives in the hands of strangers.

And so, these kinds of disclosures, whether they’re going to be on Facebook or some other social media business, are going to drive the next generation to be more cautious. They’ll be circumspect about how they manage their personal information, leading to, I hope, eventually, a redoubled effort to ensure our laws and policies are respectful of personal privacy.

Q: Perhaps the next generation heeds the wisdom of their elders and avoids the career pitfalls and reputational consequences of exposing too much on the internet?

A: I do think that’s it as well. Your original question was about my prediction that the future would see a restoration of concern about privacy. I believe that, yes, as experience shows the younger generation just what the consequences are of living life in public view, there will be a turnaround to some extent. It will get people to focus on what they have to lose. It’s not just that you could lose job opportunities. You could lose school admissions. You could lose relationship opportunities and the ability to find the right partner because your reputation is so horrible on social media.

All of those consequences are causing people to be a little more reserved. It may lead to a big turnaround when people finally get enough control over their understanding of those consequences that they activate their political and governmental institutions to do better by them.

Q: While our right to privacy isn’t explicitly stated in the U.S. Constitution, it’s reasonably inferred from the language in the amendments. Yet today, “the right to be forgotten” is an uphill battle. Some bad actors brazenly disregard a “right to be left alone,” as defined by Justice Brandeis in 1890. Is legislation insufficient to protect privacy in the Information Age, or is the fault on the part of law enforcement and the courts?

A: I’ve had the distinct pleasure of following developments in privacy law pretty carefully for the last 20 years, now approaching 30, and am the author or co-author of numerous textbooks on the right to privacy in the law, so I’m familiar with the legal landscape. I can say from that familiarity that the measures we have in place right now are not adequate. That’s because the vast majority of our privacy laws were written literally before the internet, and in some cases in the late 1980s, early 1990s, or early 2000s, as the world was rapidly evolving. So yes, we do need to go back and refresh our electronic communications and children’s internet privacy laws. We need to rethink our health privacy laws constantly. And all of our privacy laws need to be updated to reflect existing practices and technologies.

 

The right to be forgotten, often described today as a new right created by the power of Google, is in fact an old one that goes back to the beginning of privacy law. Even in the early 20th century, people were concerned about whether dated but true information about people could be republished. So, it’s not a new question, but it has a new shape. It would be wonderful if our statutes and our common law could be rewritten so that the contemporary versions of old problems, and completely new issues brought on by global technologies, could be rethought in light of current realities.

Q: The Fourth Amendment to the Constitution was intended to protect Americans from warrantless search and seizure. However, for much of our history, citizens have observed as surveillance has become politically charged and easily abused. How would our founders balance the need for privacy, national security, and the rule of law today?

A: The Fourth Amendment is an amazing provision that protects persons from warrantless search and seizure. It was designed to protect people’s correspondence, letters, and papers, as well as business documents, from disclosure without a warrant. The idea of the government collecting or disclosing sensitive personal information about us was the same then as it is now. The fact that it’s now much more efficient to collect information could be described as almost a legal technicality rather than a fundamental shift.

I think that while the founding generation couldn’t imagine the fastest computers we all have on our wrists and our desktops today, they could understand entirely the idea that a person’s thoughts and conduct would be placed under government scrutiny. They could see that people would be punished by virtue of government taking advantage of access to documents never intended for them to see. So, I think they could very much appreciate the problem and why it’s so important that we do something to restore some sense of balance between the state and the individual.

Q: Then, those amendments perhaps anticipated some of today’s challenges?

A: Sure. Not in the abstract, but think of it in the concrete. If we go back to the 18th and 19th centuries, you will find some theorists speculating that someday there will be new inventions that will raise these types of issues. Warren and Brandeis talked specifically about new inventions and business methods. So, it’s never been far from the imagination of our legal minds that more opportunities would come through technology. They anticipated technologies that would do the kinds of things once only done with pen and paper, things that can now be done in cars and with computers. It’s a structurally identical problem. And so, while I do think our laws could be easily updated, including our constitutional laws, the constitutional principles are beautiful in part because fundamentally they do continue to apply even though times have changed quite a bit.

Some of the constitutional language we find in other countries, around ideas like human dignity, which is now applied to privacy regulations, shows that, to some extent, very general constitutional language can be put to new purposes.

Q: In a speech to the 40th International Data Protection and Privacy Commissioners Conference, you posited that “Every person in every professional relationship, every financial transaction and every democratic institution thrives on trust. Openly embracing ethical standards and consistently living up to them remains the most reliable way individuals and businesses can earn the respect upon which all else depends.” How do you facilitate trust, ethics, and morality in societies that have lost confidence in the authority of their institutions and have even begun to question their legitimacy?

A: For me, trust has to be earned. It’s not something that can be demanded or pulled out of a drawer and handed over. Unfortunately, the more draconian and unreasonable state actors are in how they treat people’s privacy, the less people will be able to generate the kind of trust that’s needed. And the more government or the business sector shows genuine regard and respect for people’s privacy in their actions, as well as in their words and policies, the more that trust will come into being.

I think that people have to begin to act in ways that make trust possible. I have to act in ways that make trust possible by behaving respectfully towards my neighbors, my family members, and my colleagues at work, and they the same toward me. The businesses that we deal with have to act in ways that show respect for their customers and their vendors, up and down the chain. That’s what I think. There’s no magic formula, but I do think there’s some room for conversation and education in schools, in religious organizations, in NGOs, and in policy bodies. There is room for conversations that help people find a shared discourse about privacy, confidentiality, and data protection, one they can use when they show they want to begin talking together about the importance of respecting these standards.

It’s surprising to me how often I’m asked to define privacy or define data protection. When we’re at the point where experts in the field have to be asked to give definitions of key concepts, we’re, of course, at a point where it’s going to be hard to have conversations that can develop trust around these ideas. That’s because people are not always even talking about the same thing. Or they don’t even know what to talk about under the rubric. We’re in the very early days of being able to generate trust around data protection, artificial intelligence, and the like because it’s just too new.

Q: The technology is new, but the principles are almost ancient, aren’t they?

A: Exactly. If we have clear conceptions about what we’re concerned about, whether it’s data protection or what we mean by artificial intelligence, then those ancient principles can be applied to new situations effectively.

Q: In a world where people have a little less shame about conduct, doesn’t that somehow impact the general population’s view of the exploitation of our data?

A: It seems to me we have entered a phase where there’s less shame, but a lot of that’s OK because I think we can all agree that maybe in the past, we were a bit too ashamed of our sexuality, of our opinions. Being able to express ourselves freely is a good thing. I guess I’m not sure yet on where we are going because I’m thinking about, even like 50 years ago, when it would have been seen as uncouth to go out in public without your hat and gloves. We have to be careful that we don’t think that everything that happens that’s revealing is necessarily wrong in some absolute sense.

 

It’s different to be sure. But what’s a matter of not wearing your hat and gloves, and what’s a matter of demeaning yourself? I certainly have been a strong advocate for moralizing about privacy and trying to get people to be more reserved and less willing to disclose when it comes to demeaning oneself. And I constantly use the example of Anthony Weiner as someone who, in public life, went too far, and not only disclosed but demeaned himself in the process. We do want to take precautions against that. But if it’s just a matter of, “we used to wear white gloves to Sunday school, and now we don’t…” If that’s what we’re talking about, then it’s not that important.

Q: You studied dance in college and then practiced law after graduating from Harvard, but ultimately decided to dedicate your career to higher education, writing, and consulting. What inspired you to pursue an academic career, and what would you say are the lasting rewards?

A: I think a love of reading and ideas guided my career. Reading, writing, and ideas, and independence governed my choices. As an academic, I get to be far freer than many employees are. I get to write what I want to write, to think about what I want to think, and to teach and to engage people in ideas, in university, and outside the university. Those things governed my choices.

I loved being a practicing lawyer, but you have to think about and deal with whatever problems the clients bring to you. You don’t always have that freedom of choice of topic to focus on. Then when it comes to things like dance or the arts, well, I love the arts, but I think I’ve always felt a little frustrated about the inability to make writing and debate sort of central to those activities. I think I am more of a person of the mind than a person of the body ultimately.

 

[Source: This article was published in cpomagazine.com By RAFAEL MOSCATEL - Uploaded by the Association Member: Grace Irwin]

Categorized in Internet Ethics

As we close out 2019, we at Security Boulevard wanted to highlight the five most popular articles of the year. Following is the fifth in our weeklong series of the Best of 2019.

Privacy. We all know what it is, but in today’s fully connected society can anyone actually have it?

For many years, it seemed the answer was no. We were so enamored with Web 2.0, the growth of smartphones, GPS satnav, instant updates from our friends and the like that we seemed not to care about privacy. And while industry professionals argued the company was collecting too much private information, Facebook CEO Mark Zuckerberg understood the vast majority of Facebook users were not as concerned. He said in a 2011 Charlie Rose interview, “So the question isn’t what do we want to know about people. It’s what do people want to tell about themselves?”

 

In the past, it was perfectly normal for a private company to collect personal, sensitive data in exchange for free services. Further, privacy advocates were criticized for being alarmist and unrealistic. Reflecting this position, Scott McNealy, then-CEO of Sun Microsystems, infamously said at the turn of the millennium, “You have zero privacy anyway. Get over it.”

And for another decade or two, we did. Privacy concerns were debated, but serious action on the part of corporations and governments was scarce. Ten years ago, the Payment Card Industry Security Standards Council maintained the only meaningful data security standard, ostensibly imposed by payment card issuers on processors and users to avoid fraud.

Our attitudes have shifted since then. Expecting data privacy is now seen by society as perfectly normal. We are thinking about digital privacy like we did about personal privacy in the ’60s, before the era of hand-held computers.

So, what happened? Why does society now expect digital privacy? Especially in the U.S., where privacy under the law is not so much a fundamental right as a tort? There are a number of factors, of course. But let’s consider three: a data breach that gained national attention, an international elevation of privacy rights and growing frustration with lax privacy regulations.

Our shift in the U.S. toward expecting more privacy started accelerating in December 2013, when Target experienced a headline-gathering data breach. The termination of the then-CEO and the staggering operating loss the following year, allegedly due to customer dissatisfaction and reputational erosion from the incident, got the boardroom’s attention. Now, data privacy and security are chief strategic concerns.

On the international stage, the European Union started experimenting with data privacy legislation in 1995. Directive 95/46/EC required national data protection authorities to explore data protection certification. This led to an opinion issued in 2011 which, through a series of further opinions and other actions, culminated in the General Data Protection Regulation (GDPR) entering into force in 2016. This timeline is well-documented on the European Data Protection Supervisor’s website.

It wasn’t until 2018, however, when the GDPR became enforceable, that we noticed its fundamental privacy changes. Starting then, websites that collected personal data had to notify visitors and ask for permission first. Notice the pop-ups everywhere asking for permission to store cookies? That’s a byproduct of the GDPR.

What happened after that? Within a few short years, many local governments in the U.S. became more and more frustrated with the lack of privacy progress at the national level. GDPR was front and center, with several lawsuits filed against high-profile companies that allegedly failed to comply.

As the GDPR demonstrated the possible outcomes of serious privacy regulation, smaller governments passed legislation of their own. The State of California passed the California Consumer Privacy Act and, almost simultaneously, the State of New York passed the Personal Privacy Protection Law. Both of these laws give U.S. citizens significantly more privacy protection than they have under federal law. And not just to state residents, but also to other U.S. citizens whose personal data is accessed or stored in those states.

Without question, we as a society have changed course. The unfettered internet has had its day. Going forward, more and more private companies will be subject to increasingly demanding privacy legislation.

Is this a bad thing? Something nefarious? Probably not. Just as we have always expected privacy in our physical lives, we now expect privacy in our digital lives as well. And businesses are adjusting toward our expectations.

One visible adjustment is more disclosure about exactly what private data a business collects and why. Privacy policies are easier to understand, as well as more comprehensive. Most websites warn visitors about the storage of private data in “cookies.” Many sites additionally grant visitors the ability to turn off such cookies except those technically necessary for the site’s operation.

Another visible adjustment is the widespread use of multi-factor authentication. Many sites, especially those involving credit, finance or shopping, validate logins with a one-time token sent by email, text or voice. These sites thereby verify that the authorized user is logging in, which helps prevent leaks of private data.
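The token flow described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any particular site's implementation: the function names, the in-memory store, and the six-digit format are all assumptions, and a production system would persist pending codes server-side, rate-limit attempts, and send the code out of band.

```python
import hmac
import secrets
import time

# Hypothetical sketch: the site issues a short-lived one-time code,
# sends it to the user by email, text, or voice, and checks the
# user's reply before completing the login.

PENDING = {}  # user -> (code, expiry); a real site would use a datastore

def issue_code(user: str, ttl_seconds: int = 300) -> str:
    """Generate a six-digit one-time code valid for ttl_seconds."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    PENDING[user] = (code, time.time() + ttl_seconds)
    return code  # a real site would send this out of band, not return it

def verify_code(user: str, submitted: str) -> bool:
    """Check a submitted code; each code may be used at most once."""
    code, expiry = PENDING.pop(user, (None, 0))
    if code is None or time.time() > expiry:
        return False
    # constant-time comparison avoids leaking information through timing
    return hmac.compare_digest(code, submitted)
```

Because the code is popped from the pending store on first use, replaying an intercepted code after a successful login fails, which is one reason this flow limits the damage of leaked credentials.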

Perhaps the biggest adjustment is not visible: encryption of private data. More businesses now operate on otherwise meaningless cipher substitutes (the output of an encryption function) in place of sensitive data such as customer account numbers, birth dates, email or street addresses, member names and so on. When an all-too-common breach does occur, this protects customers: the stolen substitutes are useless without the keys that produced them.
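One common variant of this substitution is keyed pseudonymization, a keyed hash rather than reversible encryption, which the following sketch illustrates. It is a hypothetical example, not any particular vendor's scheme; the key and function names are invented for the sketch, and a real deployment would fetch the key from a key-management service rather than hard-coding it.

```python
import hashlib
import hmac

# Hypothetical illustration: replace sensitive values with opaque
# substitutes before they reach application databases and logs.
SECRET_KEY = b"demo-key-do-not-use-in-production"

def pseudonymize(value: str) -> str:
    """Return a deterministic, meaningless substitute for a sensitive value.

    Deterministic output lets systems join and deduplicate records
    without ever storing the underlying data in the clear.
    """
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# The application then works with tokens; raw values never persist.
record = {
    "member_name": pseudonymize("Jane Q. Customer"),
    "account_number": pseudonymize("4111-1111-1111-1111"),
}
```

A breach of the database holding `record` exposes only the hex substitutes; recovering the original account number would require the key as well.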

Respecting privacy is now the norm. Companies that show this respect will be rewarded for doing so. Those that allegedly don’t, however, may experience a different fiscal outcome.

 

[Source: This article was published in securityboulevard.com By Jason Paul Kazarian - Uploaded by the Association Member: Jason Paul Kazarian]

Categorized in Internet Ethics

The civility debate sidesteps how false assumptions about harm online, coupled with the affordances of digital media, encourage toxicity

Whitney Phillips is an Assistant Professor of Communication and Rhetorical Studies at Syracuse University and is the author of This Is Why We Can't Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture and co-author of The Ambivalent Internet: Mischief, Oddity and Antagonism Online.

Ryan M Milner is an Associate Professor of Communication at the College of Charleston and is author of The World Made Meme: Public Conversations and Participatory Media and co-author of The Ambivalent Internet: Mischief, Oddity and Antagonism Online.


A common lamentation online, one that spans the political divide and is forwarded by politicians and editorial boards alike, is that civility in American politics has died. It’s such a pressing concern that 80 percent of respondents to a recent NPR survey fear that uncivil speech will lead to physical violence. If only people would lower their voices, stop posting rude memes, and quit with the name-calling, we could start having meaningful conversations. We could unite around our shared experiences. We could come together as a nation.

 

In the current media environment, in which Twitter and Instagram are inundated with harassment, journalists are routinely threatened, and YouTube algorithms prop up reactionary extremists, we find it difficult to argue with that sentiment.

As idyllic as it might sound, however, the call to restore civility isn’t as straightforward as it appears. Civility alone isn’t enough to fix what’s broken; it might actually make the underlying problems worse. We need, instead, to consider the full range of behaviors that facilitate harm online. Yes, this includes extreme, explicitly damaging cases. But it also includes the kinds of behaviors that many of us engage in without thinking, that many of us, in fact, have already engaged in today. These things might seem small. When we use them to connect with others, build communities, and express support, they might seem downright civil. But the little things we do every day, even when we have no intention of causing harm, quickly accumulate. Not only do these everyday actions normalize an ever-present toxicity online, they pave the way for the worst kinds of abuses to flourish.

The Civility Trap

When used as a political rallying point, appeals to civility are often a trap, particularly when forwarded in response to critical, dissenting speech. Sidestepping the content of a critique in order to police the tone of that critique—a strategy employed with particular vigor during the Kavanaugh hearings, and one that frequently factors into hand-wringing over anti-racist activism—serves to falsely equate civility with politeness, and politeness with the democratic ideal. In short: you are being civil when you don’t ruffle my feathers, which is to say, when I don’t have to hear your grievance.

Besides their tendency to be deployed in bad faith as rhetorical sleights of hand, calls for civility have another, perhaps more insidious, consequence: deflecting blame. It’s everybody else’s behavior that’s the problem; they’re the ones who need to start acting right. They’re the ones who need to control themselves. In these instances, “We need to restore civility” becomes an exercise in finger-pointing: you’re the one who isn’t being civil. Indeed, the above NPR survey explicitly asked respondents to identify who was to blame for the lack of civility in Washington, with four possible choices: President Trump, Republicans in Congress, Democrats in Congress, or the media. Whose fault is it: this is how the civility question tends to be framed.


We certainly maintain that the behavior of others can be a problem, or outright dangerous. We certainly maintain that some people need to control themselves, particularly given the increasingly glaring link between violent political rhetoric and violent action. Those who trade in antagonism, in manipulation, in symbolic violence and physical violence, warrant special, unflinching condemnation.

But few of us are truly blameless. In order to mitigate political toxicity and cultivate healthier communities, we must be willing to consider how, when, and to what effect blame whips around and points the finger squarely at our own chests.

We do this not by focusing merely on what’s civil, certainly when civility is used as a euphemism for tone-policing, or when it’s employed to pathologize and silence social justice activists (as if loudly calling out injustice and bigotry is an equivalent sin to that injustice and bigotry). We do this by focusing on what’s ethical. A more robust civility will stem from that shift in emphasis. Civility without solid ethical foundations, in contrast, will be as useful as a bandaid slapped over a broken bone.

 

As we conceive of them, online ethics foreground the full political, historical, and technological context of online communication; contend with the repercussions of everyday online behaviors; and avoid harming others. Ethics do not mean keeping your voice down. Ethics do not mean keeping feathers unruffled. Ethics mean taking full and unqualified responsibility for the things you choose to do and say.

The Ethics of the Biomass

It’s not just that online ethics help facilitate more reflective, more empathetic, and indeed, more civil online interactions. Online ethics do even heavier lifting than that. Decisions guided by efforts to contextualize information, foreground stakes, preempt harm, and accept consequences also help combat information disorder, a term Claire Wardle and Hossein Derakhshan use to describe the process by which misinformation, disinformation, and malinformation contaminate public discourse. Ethics are a critical, if underutilized, bulwark against the spread of such information. Without strong ethical foundations, everyday communication functions, instead, as a vector for information disorder.

The fact that unethical—or merely ethically unmoored—behaviors contribute to information disorder is a structural weakness that abusers, bigots, and media manipulators have exploited again and again. Phillips underscores this point in a Data & Society report on the ways extremists and manipulators launder toxic messaging through mainstream journalism. The same point holds for everyday social media users. Extremists need signal boosting. They get it when non-extremists serve as links in the amplification chain, whatever a person’s motives might be for amplifying that content.

When considering how ethical reflection can cultivate civility and help stymie information disorder, biomass pyramids provide a helpful, if unexpected, entry point.

In biology, biomass pyramids chart the relative number or weight of one class of organism compared to another organism within the same ecosystem. For a habitat to support one lion, the biomass pyramid shows, it needs a whole lot of insects. When applied to questions of online toxicity, biomass pyramids speak to the fact that there are far more everyday, relatively low-level cases of harmful behavior than there are apex predator cases—the kinds of actions that are explicitly and willfully harmful, from coordinated hate and harassment campaigns to media manipulation tactics designed to sow chaos and confusion.

 

When people talk about online toxicity, they tend to focus on these apex predator cases. With good reason: these attacks have profound personal and professional implications for those targeted.

But apex predators aren’t the only creatures worth considering. The bottom stratum is just as responsible for the rancor, negativity, and mis-, dis-, and mal-information that clog online spaces, causing a great deal of cumulative harm.


This bottom stratum includes posting snarky jokes about an unfolding news story, tragedy, or controversy; retweeting hoaxes and other misleading narratives ironically, to condemn them, make fun of the people involved, or otherwise assert superiority over those who take the narratives seriously; making ambivalent inside jokes because your friends will know what you mean (and, for white people in particular, because your friends will know you’re not a real racist); @-mentioning the butts of jokes, critiques, or collective mocking, thus looping the target of the conversation into the discussion; and, easiest of all, jumping into conversations mid-thread without knowing what the issues are. Regarding visual media, impactful everyday behaviors include responding to a thread with a GIF or reaction image featuring random everyday strangers, or posting (and/or remixing) the latest meme to comment on the news of the day.

Here is one example: recently, one of us published something on, let's say, internet stuff. Other people have written lots of things on the same general subject. One day, a stranger @-mentioned us to say that what we published was better than what someone else had published, and proceeded to explain how the other author fell short. The stranger @-mentioned the other author in the tweet. This was, we suppose, meant as a compliment to us. At the same time, it made us party to something we didn't want any part of, since just saying "thank you" would have cosigned, or seemed to cosign, the underlying insult. The other author, of course, fared much worse; the stranger didn't seem to give them the slightest passing thought.

To the stranger, the other author was a handle on Twitter to link to, not a person with feelings to consider. But of course, that stranger was wrong—no person on Twitter is just a handle to link to. And no person wants to be told in public that they are less than, for any reason. But that was the conversation this other author had suddenly been thrust into. One we were thrust into as well, even as the stranger thought they were saying something nice.

This stratum of behavior receives far less attention than apex predator cases. Most basically, this is because each of the above behaviors, taken on its own, pales in comparison to extreme abuses. Whether emanating from platforms like YouTube, white supremacist spaces like The Daily Stormer, or even the White House, the damage done by the proverbial lions is clear, present, and often intractable. From a biomass perspective, insects seem tiny in comparison—and therefore not worth much consideration.

Less obviously, the lower strata of the biomass pyramid receives less fanfare because of assumptions about harm online. In cases of explicit abuse, bigotry, and manipulation, harm is almost always tethered to the criterion of intentionality: the idea that someone meant to hurt another person, meant to sow chaos and confusion, meant to ruin someone’s life.

 

In terms of classification, and of course interventionist triaging, it makes good sense to use the criterion of intentionality. Coordinated campaigns of hate, harassment, and manipulation, particularly those involving multiple participants, don’t just happen accidentally. Abusers and manipulators choose to abuse and manipulate; this is what makes them apex predators.

At the same time, however, reliance on the criterion of intentionality has some unintended consequences.

First, the criterion of intentionality discourages self-reflection in those who aren’t apex predators. If someone doesn’t set out to harm another person, that person is almost guaranteed not to spend much time reflecting on whether their behavior has or could harm others. Harm is something lions do. If you are not a lion, carry on.

But just because you’re not a lion doesn’t mean you can’t leave a nasty bite. Even when a person’s motives are perfectly innocent, low-level behaviors can still be harmful. They can still flatten others into abstract avatars. They can still weaponize what someone else said, or result in the weaponization of something you said. They can still strip a person of their ability to decide if, for example, they want a picture of themselves to be used as part of some stranger’s snarky Twitter commentary, or to be included in a conversation in which they are being publicly mocked.

From an information disorder perspective, these low-level behaviors can also be of great benefit to the lions. Retweeting false or misleading stories, even if the point is to make fun of how stupid they are; making ironic statements that, taken out of context, look like actual examples of actual hate; and generally opening the floodgates for polluted information to flow through: all of this allows apex predators to cause as much damage as they do.

These actions also feed into, and are fed by, issues of journalistic amplification. The greater the social media reaction to a story, the more reason journalists have to cover it, or at least tweet about it. And the greater the journalistic response to a story, the more social media reaction it will generate. And then there are the trending topics algorithms, which do not care why people share things, just that they share things, as polluted information cyclones across platforms, accruing strength as it travels.
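The indifference of trending systems to intent can be made concrete with a toy example. The sketch below is our own illustration, not any platform's actual algorithm; the scoring rule (raw share counts) and field names are assumptions. The point it demonstrates is the one above: ironic shares, mocking shares, and sincere shares all count the same.

```python
from collections import Counter

def trending(shares, top_n=3):
    """Rank topics by raw share count.

    `shares` is a list of (topic, intent) pairs. The intent field
    (sincere, ironic, mocking) is carried along only to show that
    the ranking never looks at it.
    """
    counts = Counter(topic for topic, _intent in shares)
    return [topic for topic, _ in counts.most_common(top_n)]

shares = [
    ("fake-story", "ironic"),
    ("fake-story", "mocking"),
    ("fake-story", "sincere"),
    ("local-news", "sincere"),
    ("fake-story", "ironic"),
    ("local-news", "sincere"),
]

# The false story trends first, even though most shares mocked it.
print(trending(shares))  # ['fake-story', 'local-news']
```

However stylized, this captures why ironic amplification is still amplification: the counter has no column for "why."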

Because of these overlapping forces, whether or not someone means to sow discord, or spread hate, or propagate false and misleading information, discord can be sown, hate can be spread, and false and misleading information can be propagated by behaviors that otherwise don’t create a blip on the political radar.

Stacking the Deck with Digital Tools

Focusing on intentionality obscures the collective damage everyday people can do when they use social media in socially and technologically prescribed ways. The affordances of digital media make this problem even worse by further cloaking the stakes of everyday communication.


We describe these affordances in our book The Ambivalent Internet. They include modularity, the ability to manipulate, rearrange, and/or substitute digitized parts of a larger whole without disrupting or destroying that whole; modifiability, the ability to repurpose and reappropriate aspects of an existing project toward some new end; archivability, the ability to replicate and store existing data; and accessibility, the ability to categorize and search for tagged content.

These tools don’t just allow participants to flatten contexts into highly shareable, highly remixable texts: specific images, specific GIFs, specific memes. They outright encourage it.

All creative play online owes its existence to these affordances. They are what make the internet the internet. They also make it enormously easy to sever social media avatar from offline body, and to mistake one tiny sliver of a story for an entire narrative, or to never even think about what the entire narrative might be. As a result, even the most well-intentioned among us can overlook the consequences of our actions, and never even know whose toes we might be stepping on.

In such an environment, the first step towards making more ethical choices is acknowledging how the deck has been stacked against making more ethical choices.

The second is to anticipate, and try to preempt, unethical outcomes. This means contending with the fact that your own contextualizing information, including your underlying motivations, becomes moot once tossed to the internet’s winds. You might know what you meant, or why you did what you did, particularly in cases where you’re relying on an “I was just joking” excuse. But others can’t know any of that. Not because of oversensitivity, and not because they can’t take a joke, but because they can’t read your mind, and shouldn’t be expected to try.

Another critical question to ask is what you don’t know about the content you’re sharing. How and where was something sourced? What happened to the people involved? Did they ever give consent? Who was the initial intended audience? Each of these unknowns shapes the implications, and of course the ethics, of further amplifying that content. The devil, in these cases, isn’t in the details; the devil is in the unseen, unknown, unsolicited narratives.

Finally, we must all remember that the issues we discuss online, the stories we share, the media we play with, can all be traced back to bodies: fully fleshed-out human beings who have friends, feelings, and families, just like each of us.

This point is particularly important for middle-class, able-bodied, cisgendered white people to reflect on (a point we make as middle-class, able-bodied, cisgendered white people ourselves). When your body—your skin color, the resources you have access to, your gender identity, your ability—has never been the source of threats, abuse, and dehumanization, it is very easy to downplay the seriousness of threats, abuse, and dehumanization. To approach them abstractly, as just words, on just the internet. The behaviors in question might not seem like a big deal to you, because they’ve never needed to be a big deal for you. Because you’ve always, more or less, been safe. This might help explain why you react the way you do, but it’s not an excuse to keep reacting that way.

So when in doubt, when you do not understand: remember that what might look like an insect to one person can act like a lion to others. Particularly when those insects are everywhere, always, clogging a person’s experience, weighing down their bodies.

Environmental Protections

The biomass pyramid shows that the distinction between big harm and small harm is, in fact, highly permeable. The big harms perpetrated by apex predators are exactly that: big and dangerous. Smaller harms are, by definition, smaller, and on their own, less dangerous. But the behavior at that lower stratum can still be harmful. It is also cumulative; it adds up to something massive. So massive, in fact, that these smaller harms implicate all of us—not just as potential victims, but as potential perpetrators. Just as it does in nature, this omnipresent lower stratum in turn supports all the strata above, including the largest, most dangerous animals at the top of the food chain. Directly and indirectly, insects feed the lions.

Robust online ethics provide the tools for minimizing all this harm. By using ethical tools, we minimize the environmental support apex predators depend on. We also have in our own hands the ability to cultivate civility that is not superficial, that is not a trap, but that has the potential to fundamentally alter what the online environment is like for the everyday people who call it home.

[Source: This article was published in vice.com By Whitney Phillips & Ryan M Milner]

Categorized in Internet Ethics

Source: This article was published in lawjournalnewsletters.com By Jonathan Bick

Internet professional responsibility and client privacy difficulties are intimately associated with the services offered by lawyers. Electronic attorney services result in data gathering, information exchange, document transfers, enhanced communications and novel opportunities for marketing and promotion. These services, in turn, provide an array of complicated ethical issues that can present pitfalls for the uninitiated and unwary.

Since the Internet interpenetrates every aspect of the law, Internet activity can result in a grievance filed against attorneys for professional and ethical misconduct when such use results in communication failure, conflicts of interest, misrepresentation, fraud, dishonesty, missed deadlines or court appearances, advertising violations, improper billing, and funds misuse. While specific Internet privacy violation rules and regulations are rarely applied to attorney transactions, attorneys are regularly implicated in unfair and deceptive trade practices and industry-specific violations which are often interspersed with privacy violation facts.


Attorneys have a professional-responsibility duty to use the Internet, and that same duty creates difficulties in doing so. More specifically, the Model Rules of Professional Conduct Rule 1.1 (competence), comment 8 (maintaining competence), has been interpreted to require the use of the Internet, and Rules 7.1–7.5 (communications, advertising and solicitation) specifically charge attorneys with malfeasance for using the Internet improperly.

Internet professional conduct standards and model rules/commentary cross the full range of Internet-related concerns, including expert self-identification and specialty description; the correct way to structure Internet personal profiles; social media privacy settings; the importance and use of disclaimers; what constitutes “communication”; and the establishment of an attorney-client relationship. Additionally, ethics rules address “liking,” “friending” and “tagging” practices.

The application of codes of professional conduct is faced with a two-fold difficulty. First, what is the nature of the attorney’s Internet activity? Is the activity publishing, broadcasting, or telecommunications? Determining the nature of the activity is important because different privacy and ethics canons apply. Additionally, that determination allows practitioners to apply analogies. For example, attorney Internet-advertising professional conduct is likely to be judged by the same standards as traditional attorney advertising.

The second difficulty is the location where activity occurs. Jurisdictions have enacted contrary laws and professional-responsibility duties.

Options for protecting client privacy and promoting professional responsibility fall into technical, business, and legal categories. Consider the following specific legal transactions.

A lawyer seeking to use the Internet to attract new clients across multiple jurisdictions frequently is confronted with inconsistent rules and regulations. A number of jurisdictions have taken the position that Internet communications are a form of advertising and thus subject to a particular state bar’s ethical restrictions. Content-related restrictions include bans on testimonials; prohibitions on self-laudatory statements; required disclaimers; and labeling the materials presented as advertising.

Other restrictions relate to content processing, such as requiring that advance copies of any advertising materials be submitted for review by designated bar entities prior to dissemination, and requiring that attorneys keep a copy of their website and any changes made to it for three years, along with a record of when and where the website was used. Still other restrictions relate to distribution techniques, such as unsolicited commercial emailing (spam), which some states treat as overreaching on the same grounds as ethical bans on in-person or telephone solicitation.

To overcome these difficulties and thus permit the responsible use of the Internet for attorney marketing, both technical and business solutions are available. The technical solution selectively serves advertisements to appropriate locations: software can be deployed to detect the origin of an Internet transaction and serve advertising based on the location of the recipient. Attorneys can thereby ameliorate or eliminate the difficulties associated with advertising and marketing restrictions without applying the most restrictive rule to every state.
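A minimal sketch of that gating logic is below. Everything in it is an illustrative assumption: the IP-to-state table stands in for a real geolocation lookup (such as a commercial GeoIP database), and the per-state rules are invented, not actual bar restrictions.

```python
# Hypothetical sketch: serve attorney advertising only in the form a
# recipient's state permits. The IP-to-state table and the rules are
# illustrative stand-ins, not real data or a real geolocation API.

IP_TO_STATE = {          # stand-in for a GeoIP lookup service
    "198.51.100.7": "NY",
    "203.0.113.9": "TX",
}

STATE_RULES = {          # stand-in for per-state bar restrictions
    "NY": {"testimonials_allowed": False},
    "TX": {"testimonials_allowed": True},
}

def select_ad(client_ip):
    """Pick an ad variant based on the recipient's apparent state."""
    state = IP_TO_STATE.get(client_ip)
    if state is None or state not in STATE_RULES:
        # Unknown origin: fall back to the most restrictive version,
        # which is also the "business solution" described below.
        return "generic-ad-no-testimonials"
    if STATE_RULES[state]["testimonials_allowed"]:
        return "ad-with-testimonials"
    return "generic-ad-no-testimonials"

print(select_ad("203.0.113.9"))   # ad-with-testimonials
print(select_ad("198.51.100.7"))  # generic-ad-no-testimonials
```

Note the design choice in the fallback: when geolocation fails, the sketch degrades to the most restrictive variant, which is exactly the blanket business solution applied only where the technical solution runs out.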

Alternatively, a business solution may be used. Such a business solution would apply the most restrictive rules of each state to every Internet advertising and marketing communication.

Another legal difficulty associated with attorney Internet advertising and marketing is the unauthorized practice of law. All states have statutes or ethical rules that make it unlawful for persons to hold themselves out as attorneys or to provide legal services unless admitted and licensed to practice in that jurisdiction.

There are few reported decisions on this issue, but a handful of ethics opinions and court decisions take a restrictive view of unauthorized practice issues. For example, the court in Birbrower, Montalbano, Condon & Frank v. Superior Court, 949 P.2d 1 (Cal. 1998), relied on unauthorized practice concerns in refusing to honor a fee agreement between a New York law firm and a California client for legal services provided in California, because the New York firm did not retain local counsel and its attorneys were not admitted in California.

Here too, software can detect the origin of an Internet transaction. Attorneys can thus ameliorate or eliminate unauthorized-practice exposure by identifying the location of a potential client and interacting only with potential clients located in states where the attorney is authorized to practice. Alternatively, an attorney could use filtering (“net nanny”) software to prevent communications with potential clients located in states where the attorney is not authorized to practice.

Preserving clients’ confidences is of critical importance in all aspects of an attorney’s practice. An attorney using the Internet to communicate with a client must consider the confidentiality of such communications. Using the Internet to communicate with clients on confidential matters raises a number of issues, including whether such communications: might violate the obligation to maintain client confidentiality; result in a waiver of the attorney-client privilege if intercepted by an unauthorized party; and create possible malpractice liability.

Both legal and technological solutions are available. First, memorializing informed consent is a legal solution.

Some recent ethics opinions suggest a need for caution. Iowa Opinion 96-1 states that before sending client-sensitive information over the Internet, a lawyer should either encrypt the information or obtain the client’s written acknowledgment of the risks of using this method of communication.

Substantial compliance may be a technological solution, because the changing nature of Internet difficulties makes complete compliance unfeasible. Some attorneys have adopted internal measures to protect electronic client communications, including asking clients to consider alternative technologies; encrypting messages to increase security; obtaining written client authorization to use the Internet, with acknowledgment of the possible risks in doing so; and exercising independent judgment about communications too sensitive to share over the Internet. While such technology is not foolproof, where its use demonstrably exceeds what is customary, judges and juries have found such efforts to be sufficient.


Finally, both legal and business options are available to surmount Internet-related client conflicts. Because of the business development potential of chat rooms, bulletin boards, and other electronic opportunities for client contact, many attorneys see the Internet as a powerful client development tool. What some fail to recognize, however, is that the very opportunity to attract new clients may be a source of unintended conflicts of interest.

Take, for example, one of the most common uses of Internet chat rooms: a request seeking advice from attorneys experienced in dealing with a particular legal problem. Attorneys have been known to prepare elaborate and highly detailed responses to such inquiries. Depending on the level and nature of the information received and the advice provided, however, attorneys may be dismayed to discover that they have inadvertently created an attorney-client relationship with the requesting party. At a minimum, given the anonymous nature of many such inquiries, they may face the embarrassment and potential client relations problem of taking a public position or providing advice contrary to the interests of an existing firm client.

An acceptable legal solution is the application of disclaimers and consents. Some operators of electronic bulletin boards and online discussion groups have tried to minimize the client conflict potential by providing disclaimers or including as part of the subscription agreement the acknowledgment that any participation in online discussions does not create an attorney-client relationship.

Alternatively, the use of limited answers would be a business solution. The Arizona State Bar recently cautioned that lawyers probably should not answer specific questions posed in chat rooms or newsgroups because of the inability to screen for potential conflicts with existing clients and the danger of disclosing confidential information.

Because the consequences of finding an attorney-client relationship are severe and may result in disqualification from representing other clients, the prudent lawyer should carefully scrutinize the nature and extent of any participation in online chat rooms and similar venues.


Source: This article was published in cyberblogindia.in By Abhay Singh Sengar

When we talk about “ethics,” we refer to the attitudes, values, beliefs, and habits possessed by a person or a group. The sense of the word is directly related to the term “morality,” as ethics is the study of morality.

Meaning of Computer Ethics

The term is not very old. Until the 1960s there was nothing known as “computer ethics.” Walter Maner introduced the term in the mid-1970s to mean “ethical problems aggravated, transformed or created by computer technology.” Wiener and Moor have also written on the subject: “computer ethics identifies and analyses the impacts of information technology upon human values like health, wealth, opportunity, freedom, democracy, knowledge, privacy, security, self-fulfillment, and so on…“. Since the 1990s the importance of the term has grown. In simple words, computer ethics is the set of moral principles that govern the usage of computers.


Issues

As we all know, the computer is a powerful technology, and it raises ethical issues such as personal intrusion, deception, breach of privacy, cyber-bullying, cyber-stalking, defamation, evasion of social responsibility, and violation of intellectual property rights (i.e., copyrighted electronic content). In the domain of computer and Internet (cyberspace) information security, understanding and maintaining ethics is very important at this stage. A typical ethics problem arises mainly because of the absence of policies or rules about how computer technology should be used. It is high time there was strict legislation regarding this in the country.

Internet Ethics for everyone

  1. Acceptance: we should accept that the Internet is a component of our society, not something apart from it.
  2. We should understand the sensitivity of information before posting it on the Internet, as there are no national or cultural barriers there.
  3. Just as we do not give our personal information to strangers, it should not be uploaded to a public network, where it might be misused.
  4. Avoid rude or offensive language when e-mailing, chatting, blogging, or social networking. Respect the person on the other side.
  5. No copyrighted material should be copied, downloaded, or shared with others.

Computer Ethics

Following are the ten commandments created by the Computer Ethics Institute, a nonprofit working in this area:

  1. Thou shalt not use a computer to harm other people;
  2. Thou shalt not interfere with other people’s computer work;
  3. Thou shalt not snoop around in other people’s computer files;
  4. Thou shalt not use a computer to steal;
  5. Thou shalt not use a computer to bear false witness;
  6. Thou shalt not copy or use proprietary software for which you have not paid;
  7. Thou shalt not use other people’s computer resources without authorization or proper compensation;
  8. Thou shalt not appropriate other people’s intellectual output;
  9. Thou shalt think about the social consequences of the program you are writing or the system you are designing;
  10. Thou shalt always use a computer in ways that ensure consideration and respect for your fellow humans.


The computer and the Internet are both time-efficient tools for everyone. They can enlarge the possibilities for your curriculum growth, and there is a great deal of information on the Internet that can help you learn. Explore that information instead of exploiting others.




When reading Wikipedia’s 1992 Ten Commandments of Computer Ethics, you can easily substitute “Internet” for “computer,” and it’s amazing what you see. For example, the 1st Commandment becomes “You shall not use the Internet to harm other people.” Here are all Ten Commandments of Internet Ethics (with my minor edits):

  1. You shall not use the Internet to harm other people.
  2. You shall not interfere with other people’s Internet work.
  3. You shall not snoop around in other people’s Internet files.
  4. You shall not use the Internet to steal.
  5. You shall not use the Internet to bear false witness.
  6. You shall not copy or use proprietary software for which you have not paid (without permission).
  7. You shall not use other people’s Internet resources without authorization or proper compensation.
  8. You shall not appropriate other people’s intellectual output.
  9. You shall think about the social consequences of the program you are writing or the system you are designing.
  10. You shall always use the Internet in ways that ensure consideration and respect for your fellow humans.

For those of us who used the Internet in 1992, it’s great to see that the Ethics of the Internet in 1992 (from the Computer Ethics Institute) still applies in 2016!

Source: This article was published vogelitlawblog.com By Peter S. Vogel


The copyright industry, especially the RIAA (Recording Industry Association of America) and MPAA (Motion Picture Association of America), has suppressed every form of innovation and technology to protect its questionable rights. In the '80s, they sued to stop video recorders, but were thankfully held back by the Supreme Court in the famous Betamax case. The media industry forced manufacturers of blank cassettes, tapes, and CDs to pay a royalty to reimburse the industry because the blank recording media might be used to infringe copyright. That is right; your preacher's sermon tapes were actually forced to subsidize Hollywood.

In 1998, the RIAA sued to stop the first portable MP3 player, the Diamond Rio, from being sold.

In 1999, they took down Napster, the breakthrough file-sharing upstart. Then they cut a swath of destruction through a plethora of file-sharing services, with such vicious tactics as suing children who downloaded songs for unconscionable amounts of money.

Upping the outrage, they tried to gut the First Amendment with SOPA (the Stop Online Piracy Act), which imperiled the whole Internet by making search engines and hosting companies liable for piracy the technology companies had nothing to do with. Only when technology giants apprised Congress that technology produced more jobs than the media did Congress back off. Temporarily!

In 2014, the RIAA considered suing Google for even listing sites that people could use to rip media.


The RIAA previously found that for 98% of the music-related searches they performed, “pirate sites” were listed on the first page of the search results. According to the music group, this is an indication that more proactive measures are required, in the interests of both Google and the labels.

“So the enforcement system we operate under requires us to send a staggering number of piracy notices – 100 million and counting to Google alone—and an equally staggering number of takedowns Google must process. And yet pirated copies continue to proliferate and users are bombarded with search results to illegal sources over legal sources for the music they love,” Sherman notes. – Torrent Freak

Why is it in Google's interest to doctor its search engine results to make the copyright industry happy? And is the word “bombarded” appropriate for providing the public with the search results it wants? This is industry propaganda. Now, the RIAA is going full speed after YouTube ripping.

So what is YouTube ripping?

A few years ago, soon after file-sharing sites were sued into oblivion, technology surfaced that made it possible to rip the music directly off of YouTube videos. No longer did one have to download buggy software (which, ironically, opened one's computer to viruses) to download files. One could merely go to YouTube, copy the URL, and then go to a ripping site to split the MP3 audio off of the video and download it. A 2010 instructional video, made at just about the same time that the LimeWire file-sharing service was finally taken down, shows the process; more recent instructional videos are easily searched out. Newer sites are incredibly easy to use.

News of the YouTube ripping technique spread slowly at first, except among technophiles; but soon enough, the media industry's victory over file-sharing software and services would prove Pyrrhic. YouTube ripping had the advantage of being incredibly easy and all but untraceable. No need to worry about RIAA lawsuits.


So, now, the RIAA is back again, crying foul, going nuts, suing YouTube ripping sites.

This week a huge coalition of recording labels headed by the RIAA, IFPI, and BPI, sued YouTube ripping service YouTube-MP3. Today we take a closer look at the lawsuit which was filed against a German company, owned and operated by a German citizen, which could seek damages running into the hundreds of millions of dollars. - Torrent Freak

This time, the RIAA has lost all reason. They are once again playing Whack-a-Mole, which is what they have been doing all along. If history teaches anything, innovators, by their very nature, will always outpace Luddites. YouTube ripping sites have proliferated across the web; at the time of writing, a search for YouTube rippers returns some 95 million results.

Nothing will stop the RIAA, the MPAA, and the Media Industry, though.

Hollywood media moguls are intent on preserving a dying business model. Worse yet, they expect technology companies to provide the technical expertise to protect their quasi-monopoly. It is much cheaper to have Google, Microsoft, and Facebook pay programmers to fight piracy than for the RIAA to hire programmers to come up with the technology itself.

Then again, their incompetence in this area has been humiliating.

In an attempt to curb music piracy, major labels such as Sony started selling music CDs that have built-in “copy-proof” technology. The technology was meant to stop people from copying music from these discs onto recordable CDs or hard drives. There's a fatal flaw in this technology, however, which allows you to bypass the copy protection with a simple marker pen, and a recent upsurge in Internet newsgroup talk about this flaw has brought it to light again.  -- Geek (2002)

Open up a cafe or a bar with some live music and you could be forced to pay three royalty collection agencies: ASCAP, BMI, and SESAC.


Antonowisch explained that once ASCAP got wind that they had live music (even though they were only holding about 12 concerts a year), ASCAP began their crusade. “They called us every day. They sent two letters a day. They threatened us with a lawsuit because they said we had violated copyright,” Antonowisch lamented. So as not to get sued, the coffee shop owners conceded. They agreed to pay ASCAP the $600 yearly license for the right to have live music.

But then they found out that there was another PRO that required the same license: BMI. (snip) Then, as luck would have it, SESAC got in touch. And they demanded just over $700. (snip) Bauhaus [the cafe] actually explained to ASCAP that all of their musicians play original music and ASCAP shot back, “How do you know? Do you know every song ever written?” So the PROs won’t believe a venue if they claim that they only host original music. And all it takes is one musician to play one cover song for a PRO to sue for serious damages.

Consider them three mafias.  A protection racket.  Once you pay one, the others want their cut. Add in the MPAA, the RIAA, and it is legalized corruption. Congress indulges them because the media can make or break a politician's career; and so Congress passes more and more noxious copyright laws, to protect their monopoly.

As part of draining the swamp, this new administration has nothing to lose by offending the media. Trump should reform our copyright laws. Copyrights should be limited to no more time than patents: 20 years. Getting a technical patent can require decades of investment and education. Why should a song written over a short period of time be protected for the life of the author plus 70 years?

These media moguls are mafiosi in legal garb.  It is high time they were told that it is not the duty of Google, YouTube, Microsoft, or Apple to protect their recordings.  If the media companies cannot protect their own product, then so be it.

Let the industry die off. It is a dinosaur in an age of mammals. It is a relic that has lost its usefulness, like royalty and aristocracy. We won't have to suffer industry stars telling us how enlightened they are, and how retro-stupid the public is.

For decades they have monopolized American and Western culture - often destroying our core values - and charged us for the privilege of their artistic rampage.  We were stupid to put up with it. Now they are suing us. Let them die out. Let music and artistic creation return to the individual, as it was when the republic was born. Let the copyright attorneys find something useful to do.

Author: Mike Konrad
Source: http://www.americanthinker.com/articles/2017/01/copyright_vultures_are_at_it_again.html


The Internet Society has released the findings of its 2016 Global Internet Report in which 40% of users admit they would not do business with a company which had suffered a data breach.

Highlighting the extent of the data breach problem, the report makes key recommendations for building user trust in the online environment, stating that more needs to be done to protect online personal information.

With a reported 1,673 breaches and 707 million exposed records occurring in 2015, the Internet Society is urging organisations to change their stance and follow five recommendations to reduce the number and impact of data breaches globally:


1. Put users - who are the ultimate victims of data breaches - at the centre of solutions. When assessing the costs of data breaches, include the costs to both users and organisations. 

2. Increase transparency about the risk, incidence and impact of data breaches globally. Sharing information responsibly helps organisations improve data security, helps policymakers improve policies and regulators pursue attackers, and helps the data security industry create better solutions.

3. Data security must be a priority – organisations should be held to best practice standards when it comes to data security.

4. Increase accountability – organisations should be held accountable for their breaches. Rules regarding liability and remediation must be established up front.

5. Increase incentives to invest in security – create a market for trusted, independent assessment of data security measures so that organisations can credibly signal their level of data security. Security signals enable organisations to indicate that they are less vulnerable than competitors.

The report also draws parallels with threats posed by the Internet of Things (IoT). Forecast to grow to tens of billions of devices by 2020, interconnected components and sensors that can track locations, health and other daily habits are opening gateways into users’ personal lives, leaving data exposed.

“We are at a turning point in the level of trust users are placing in the Internet,” said Internet Society’s Olaf Kolkman, Chief Internet Technology Officer. “With more of the devices in our pockets now having Internet connectivity, the opportunities for us to lose personal data is extremely high.

“Direct attacks on websites such as Ashley Madison and the recent IoT-based attack on Internet performance management company, Dyn, that rendered some of the world’s most famous websites including Reddit, Twitter and The New York Times temporarily inaccessible, are incredibly damaging both in terms of profits and reputation, but also to the levels of trust users have in the Internet.”

Other report highlights include:

  • The average cost of a data breach is now $4 million, up 29 percent since 2013
  • The average cost per lost record is $158, up 15 percent since 2013
  • Within business, the retail sector represents 13 percent of all breaches and six percent of all records stolen, while financial institutions represent 15 percent of breaches, but just 0.1 percent of records stolen, indicating these businesses might have greater resilience built in to protect their users

Source: https://www.finextra.com/pressarticle/67186/internet-trust-at-all-time-low-not-enough-data-protection


The proposals aren’t just bad for Google, but for everyone.

There’s a lot to like about the copyright proposals that the European Commission unveiled Wednesday—easier access to video across the EU’s internal borders, more copyright exceptions for researchers, and more access to books for blind people.

However, two elements in particular could be disastrous if carried out as proposed. One would make it more difficult for small news publications to be able to challenge legacy media giants, and the other would threaten the existence of user-generated content platforms.

In a way, it’s good that digital commissioner Günther Oettinger has finally laid his cards on the table. But the battles that begin now will be epic.

The first contentious proposal is the introduction of so-called neighboring rights for press publishers, also known as ancillary copyright.

The move sounds pretty obscure, but isn’t. Much as it is possible for someone to get rewarded for performing a work—as opposed to writing it, which involves copyright—publishers would get to command fees for the stuff their writers write, based on their own (new) rights rather than the copyright held by the journalist.


In effect, this would allow publishers to try wrangling fees out of others for any “use of the work”—a dangerously vague term in this context. What’s more, they’d get to do so for a whopping 20 years after publication.

This idea has been tried before in Germany and in Spain, where large publishers used new laws to try getting Google News to pay for using snippets of their text and thumbnails of their images.


Both times the attempts failed. In Germany, Google stopped reproducing snippets of text in Google News, and the publishers granted the firm a free (albeit temporary) licence once they saw how their traffic suffered. In Spain, the publishers had no such leeway and Google News ended up pulling out of the country, hammering the industry’s income in the process.

The Commission’s new proposals aren’t as suicidally rigid as what went down in Spain, but they’re also much vaguer than the German version. As currently phrased, they could allow press publishers to try charging for the reproduction of headlines, or even the mere indexing of their articles.

It’s hard to know whether the large press publishers who lobbied so hard for these measures really think Google will ultimately pay up, or whether their real goal is what happens when it refuses.

Because Google surely won’t pay for indexing their content or reproducing snippets of their text. It can’t—that would be the beginning of the end of its entire search engine business model, which can no longer scale if its links come with a cost.

If this law goes through and demands for licensing fees are rigidly enforced, Google will almost certainly pull Google News out of the entire EU.

Remember that it doesn’t run ads on Google News. It does run ads on its regular search engine, of course, and news results make that a fuller product, but it would have no reason to maintain Google News in Europe if it became a serious financial liability.

And if Google News exits the EU, the biggest victims will be the smaller publications, as happened in Spain. They rely on Google News and other aggregators because that’s how people find their articles, visit their sites, and view and click on their ads.

More established media outlets have much more brand recognition and traditional marketing clout, particularly in linguistically semi-closed markets such as Germany and France. They have everything to gain from reversing the Internet’s opening up of the media market; their rivals, and the reading public, have everything to lose. No wonder they’ve been pushing Oettinger to bring in ancillary copyright.


The other major flaw in the new proposals would also be bad news for smaller players, and for the rights of the public.

Under the e-Commerce Directive of 2000, the operators of user-generated content platforms—YouTube and SoundCloud and the like—are not liable for the content their users upload, as long as they take down the illegal stuff once someone flags it. That directive also explicitly says there can be no laws forcing platforms to generally monitor the content they manage.

Despite having consistently denied that it would change these rules, the Commission is now proposing exactly that. In its new copyright directive proposal, it wants to force all user-generated content platforms to use “effective content recognition technologies,” which sounds an awful lot like generally monitoring content.

Of course, YouTube already has its Content ID technology for identifying and purging illegally uploaded films and so on, but what about new platforms? It cost Google more than $60 million to develop and implement Content ID, and it has to constantly tweak it to counteract those users who figure out ways to get around it.

You know how people upload movies to YouTube that are re-filmed from a funny angle, or that cut off the edges of the screen? That’s an attempt to circumvent Content ID and fighting it costs money, as does handling disputes when the system incorrectly flags videos as infringing copyright.
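Those circumvention tricks work because content recognition is, at bottom, a fingerprint-matching problem: the system must match material that has been deliberately transformed, which is exactly why exact byte-level hashes are useless for it. As a rough illustration only — Content ID’s internals are proprietary, so this is a generic perceptual “average hash” sketch over made-up toy data, not YouTube’s actual method — a fingerprint derived from an image’s overall structure survives small edits that would change every byte:

```python
def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string
    with 1 where a pixel is brighter than the image mean, else 0."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v > mean else "0" for v in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# A toy 4x4 "image" and a uniformly brightened copy of it.
img = [[10, 200, 30, 220],
       [15, 210, 25, 230],
       [200, 20, 210, 10],
       [220, 30, 190, 25]]
brighter = [[v + 20 for v in row] for row in img]
# A structurally different toy image.
other = [[30, 40, 25, 35],
         [200, 210, 220, 230],
         [20, 35, 30, 25],
         [190, 200, 215, 225]]

h1, h2, h3 = average_hash(img), average_hash(brighter), average_hash(other)
print(hamming(h1, h2))  # 0 -- the fingerprint survives uniform brightening
print(hamming(h1, h3))  # 8 -- a structurally different image diverges
```

A uniform brightness shift leaves this fingerprint untouched, while a genuinely different image diverges sharply. But tricks like re-filming at an angle or cropping the frame defeat a simple hash like this one, which is why production systems need continual, expensive tuning.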

Quite apart from the fact that this would clash with another piece of EU legislation that’s trying to protect freedom of expression, this would be a huge burden for anyone trying to set up a new user-generated content platform, making it a problem for both innovation and competition.

Yes, creators deserve fair remuneration for the works they create. Yes, the Internet has turned their livelihoods upside-down by forcing them to compete with millions of rivals in an open market. Yes, lack of funding threatens media diversity. Yes, change is hard.

But these new proposals wouldn’t help creators make the best of the new landscape. All they would do is entrench the positions of the big players—the legacy media outlets in the case of ancillary copyright, and funnily enough Google in the case of the user-generated content proposals.

The European Parliament and the EU’s member states have a lot to fix over the next year or two, as this proposal wends its way through the legislative process.


Source: http://fortune.com/2016/09/14/europe-copyright-google/


China’s powerful internet censorship body has further tightened its grip on online news reports by warning all news or social network websites against publishing news without proper verification, state media reports.

The instruction, issued by the Cyberspace Administration of China, came only a few days after Xu Lin, formerly the deputy head of the organisation, replaced his boss, Lu Wei, as the top gatekeeper of Chinese internet affairs.

Xu is regarded as one of President Xi Jinping’s key supporters.

The cyberspace watchdog said online media could not report any news taken from social media websites without approval.

“All websites should bear the key responsibility to further streamline the course of reporting and publishing of news, and set up a sound internal monitoring mechanism among all mobile news portals [and the social media chat websites] Weibo or WeChat,” Xinhua reported the directive as saying.

“It is forbidden to use hearsay to create news or use conjecture and imagination to distort the facts,” it said.

The central internet censorship organ ordered its regional subordinates to fulfil their content-management duties, strengthen supervision and inspection, and severely punish fake news or news that deviated from the facts.


“No website is allowed to report public news without specifying the sources, or report news that quotes untrue origins,” the circular warned, adding that the fabrication of news or distortion of the facts were also strictly prohibited.

The report said that a number of popular news portals, including Sina.com, Ifeng.com, Caijing.com.cn, Qq.com and 163.com, had been punished and given warnings for fabricating news before distributing it, without giving any details of the penalties.

The Chinese government already exercises widespread controls over the internet and has sought to codify that policy in law.

Officials say internet restrictions, including the blocking of popular foreign websites such as Google and Facebook, are needed to ensure security in the face of rising threats, such as terrorism, and also to stop the spread of damaging rumours.

Source: http://www.scmp.com/news/china/policies-politics/article/1985118/all-news-stories-must-be-verified-chinas-internet
