
Tech workers are building the future. With that power comes a responsibility to build a future that is freer, more just, and more prosperous than the present. Many tech workers are taking that responsibility seriously.

For example, since 2018, employees at Google, Facebook, and Amazon have publicly protested their companies' behavior on ethical grounds.

It’s essential that we understand what’s at stake when it comes to who we work for and what we build. Below are five areas within technology that represent forks in the road. Each of them holds tremendous possibility. Some are helping to usher in a better future. But all of them have the potential to hasten dystopia. Here's a brief summary of each of these areas and why they matter.

Mass surveillance

In a nutshell: Private companies including social media sites and cellular phone service providers are collecting vast troves of detailed, real-time location and communication metadata and selling it to and sharing it with law enforcement, immigration enforcement, and the intelligence community without informing users.

What may be at stake: Surveillance by immigration enforcement is literally a matter of life and death. Law enforcement’s use of surveillance technology to identify and track protestors and journalists threatens First Amendment rights. Amazon Ring and other surveillance tools create the risk that police will escalate their responses to protestors to the point of violence.

Where to learn more: The Intercept and 20 Minutes into the Future are starting points for sources of surveillance reporting. Be sure to follow these five leaders on tethics (tech ethics); one listee, Eva Galperin, maintains an excellent Twitter feed that provides constant updates on surveillance. And be sure to check out this post on the pros and cons of employee surveillance.

Be aware of deepfakes

In a nutshell: In April, State Farm debuted a widely discussed TV commercial that appeared to show an ESPN analyst in 1998 making shockingly accurate predictions about the year 2020. It was a deepfake - part of a disturbing trend in media worldwide.

Deepfakes are media representations of people saying and doing things they didn’t actually say or do. To make a deepfake, someone takes a photo, audio clip, or video of a person and swaps out his or her likeness for another person's.

What may be at stake: Detecting deepfakes is one of the most important challenges ahead of us. Examples of deepfakes include a video in which Belgium’s Prime Minister Sophie Wilmès links COVID-19 to climate change. In one particularly frightening example, rumors that a video of the president of a small African country was a deepfake helped instigate a failed coup. On the other hand, brands are using deepfakes for marketing and advertising to positive effect. Other positive uses include creating “voice skins” for gamers who want realistic-sounding voices that aren’t their own.

Where to learn more about these tech challenges: This synopsis by MIT and this CSO intro both do a strong job of covering how deepfakes are made and the risks they pose. The Brookings Institution offers a good summary of the potential political and social dangers of deepfakes. Further, this guide, along with additional work on Forbes, is a good primer on how advanced deepfake technology is - and on its potential to become even more sophisticated. Finally, the videos embedded in this CNN analysis can help those interested in this challenge get up to speed.

Stay vigilant on disinformation

In a nutshell: Unlike “misinformation,” which can be spread unwittingly, disinformation is propaganda deliberately meant to mislead or misdirect a rival. For example, a 2019 Senate Select Committee on Intelligence (SSCI) report confirmed that Russian-backed online disinformation campaigns exploited systemic racism to support Donald Trump’s candidacy in the 2016 election.

What may be at stake: When disinformation from Chinese and Russian-backed groups is distributed online, it can have real-world consequences. Between 2015 and 2017 Russian operatives posing as Americans successfully organized in-person rallies and demonstrations using Facebook. In one instance, Muslim civil rights activists counter-protested anti-Muslim Texas secessionists in Houston who waved Confederate flags and held “White Lives Matter” banners. Russian disinformation operatives organized both rallies. Experts predict more Russian-backed disinformation in the run-up to the 2020 elections.

Where to learn more: Dan Harvey’s “20 Minutes into the Future” is among the leading newsletters on this topic, and his most recent edition is a quick read on recent developments in Russian disinformation. In it, he recommends this analysis of Internet Research Agency (IRA) campaigns put together by Oxford University. The Axios Codebook newsletter is also insightful, and its June edition on Russian disinformation is an especially compelling resource. For a thorough-but-readable long read, I recommend Renée DiResta’s The Digital Maginot Line. For a more academic analysis, check out Stanford University’s Internet Observatory.

Be wary of addictive user experience

In a nutshell: Product managers, designers, tech marketers and start-up founders are all trying to build tools that users can’t put down. The benefits of addictive technology are obvious for the builders. But what is the long-term impact on users?

What may be at stake: Habit-forming tech products aren’t bad in and of themselves. But not all habits turn out to be healthy. Multiple studies have linked social media use with anxiety and depression, although the causal relationship isn’t clear. After the fintech company Robinhood made it free, easy, and fast to trade individual stocks, some users developed an unhealthy relationship with trading. One 20-year-old user committed suicide after seeing his $730,000 negative balance.

 

Arguably, no app is more addictive than TikTok. TikTok's owner, ByteDance, is a Chinese company and is therefore required to pass user data to the Chinese government. And going back to the disinformation section, TikTok has little incentive to resist pressure to display content that gives China an advantage over the US. In 2019 Senator Josh Hawley introduced ham-fisted legislation aimed at combating addictive user experiences.

Where to learn more: This Scientific American piece is a good overview of the research on social media’s impact on mental health. The Margins newsletter is a good source of information on the pros and cons of technology and its Robinhood edition is a worthwhile read. Ben Thompson’s Stratechery newsletter is nuts-and-bolts, but delves into useful analysis of the ethical implications of technology.

Racist AI can reflect our own biases

In a nutshell: Artificial intelligence (AI) is only as good as the data on which it’s based. Since humans still, by and large, exhibit racial biases, it makes sense that the data we produce and use to train our AI will also contain racist ideas and language. The fact that Black and Latino Americans are severely underrepresented in positions of leadership at influential technology companies exacerbates the problem. Only 3.1 percent of tech workers nationwide are Black. Silicon Valley companies lag on diversity, with Black employees making up only 3 percent of their total workforce. Finally, only 1 percent of tech entrepreneurs in Silicon Valley are Black.

What may be at stake: After Nextdoor moderators found themselves in hot water for deleting Black Lives Matter content, the company said it would use AI to identify racism on the platform. But racist algorithms are causing harm to Black Americans. Police departments are using facial recognition software they know misidentifies up to 97 percent of Black suspects, leading to false arrests.

The kind of modeling used in predictive policing is also inaccurate, according to researchers. And judges are using algorithms to assist with setting pre-trial bail that assign Black Americans a higher risk of recidivism based on their race. Amazon scrapped its internal recruitment AI once it came to light that it was biased against women. On the other hand, one study showed that a machine learning algorithm led to better hires and lower turnover while increasing diversity among Minneapolis schoolteachers.

Where to learn more: The Partnership on AI, a nonprofit coalition committed to the responsible use of AI, is a great resource for learning more about the challenges within this space. This discussion on algorithms and a November 2019 assessment of the pitfalls of AI are both valuable as short, readable intros to the topic. Race After Technology, by Ruha Benjamin, is a concise, readable, quotable book on what she calls the “New Jim Code,” a nod to Michelle Alexander’s The New Jim Crow.

[Source: This article was published in triplepundit.com By Matt Martin - Uploaded by the Association Member: Jay Harris]

Categorized in Internet Ethics

Popular search engines and browsers do a great job of finding and browsing content on the web, but they could do a better job of protecting your privacy while you do so.

With your data being the digital currency of our times, websites, advertisers, browsers, and search engines track your behavior on the web to deliver tailored advertising, improve their algorithms, or improve their services.

In this guide, we list the best search engines and browsers to protect your privacy while using the web.

Privacy-focused search engines

Below are the best privacy-focused search engines that do not track your searches or display advertisements based on your cookies or interests.

DuckDuckGo

The first privacy-focused search engine, and probably the most recognizable, we spotlight is DuckDuckGo.

Founded in 2008, DuckDuckGo is popular among users who are concerned about privacy online, and the privacy-friendly search engine recently said it had seen 2 billion total searches.

DDG

With DuckDuckGo, you can search for your questions and websites online anonymously.

DuckDuckGo does not compile profiles of users' search habits and behavior, and it also does not collect personal information.

DuckDuckGo is offered as a search engine option in all popular browsers.

In 2017, Brave added DuckDuckGo as a default search engine option when you use the browser on mobile or desktop. In the Brave browser, your search results are powered by DuckDuckGo when you open a private (incognito) tab.

Last year, Google also added DuckDuckGo to its list of search engines on Android. With iOS 14, Apple now also allows users to set DuckDuckGo as their preferred search engine.

Startpage

Unlike DuckDuckGo, Startpage does not crawl the internet to generate its own results; instead, it allows users to obtain Google Search results while protecting their data.

Startpage started as a sister company of Ixquick, which was founded in 1998. In 2016, the two websites merged, and last year Startpage's owners received a significant investment from Privacy One Group.

This search engine also generates its income from advertising, but these ads are anonymously generated solely based on the search term you entered. Your information is not stored online or shared with other companies, such as Google.

StartPage

Startpage also comes with one interesting feature called "Anonymous View" that allows you to view links anonymously.

When you use this feature, Startpage renders the website in its own container, and the website won't be able to track you because it sees Startpage as the visitor.

Ecosia

The next search engine in our list is Ecosia.

Unlike other search engines, Ecosia is a CO2-neutral search engine that uses the revenue it generates to plant trees. Ecosia's search results are provided by Bing and enhanced by the company's own algorithms.

Ecosia

Ecosia was first launched on 7 December 2009 and the company has donated most of its profits to plant trees across the world.

Ecosia says it is a privacy-friendly search engine and your searches are encrypted, which means the data is not stored permanently or sold to third-party advertisers.

Privacy-friendly browsers

Web browser developers have taken existing browser platforms such as Chrome and Firefox and modified them to include more privacy-focused features that protect your data while browsing the web.

Brave Browser

Brave is one of the fastest browsers solely focused on privacy, with features like private browsing, data saver, an ad-free experience, bookmark sync, tracking protection, HTTPS Everywhere, and more.

Brave

Memory usage by Brave is far below Google Chrome's, and the browser is also available for both mobile and desktop.

You can download Brave from here.

Tor Browser

The Tor Browser is another browser that aims to protect your data, including your IP address, as you browse the web.

When browsing the web with Tor, your connections to websites will be anonymous: your requests are routed through other computers, and your real IP address is not shared.

In addition, the Tor bundle comes with the NoScript and HTTPS Everywhere extensions preinstalled, and it clears your HTTP cookies on exit to further protect your privacy.

Tor

You can download the Tor browser from here.

Firefox Focus

Firefox Focus is also a great option if you use Android or iOS.

Firefox Focus

Firefox Focus also comes with a built-in ad blocker to improve your experience and block all trackers, including those operated by Google and Facebook.

 

According to Mozilla, Firefox Focus blocks a wide range of online trackers, erases your history, passwords, and cookies, and comes with a user-friendly interface.

 [Source: This article was published in bleepingcomputer.com By Mayank Parmar - Uploaded by the Association Member: Logan Hochstetler]

Categorized in Search Engine

“For me, trust has to be earned. It’s not something that can be demanded or pulled out of a drawer and handed over. And the more government or the business sector shows genuine regard and respect for peoples’ privacy in their actions, as well as in their word and policies, the more that trust will come into being.” Dr. Anita L. Allen

Dr. Anita Allen serves as Vice Provost for Faculty and Henry R. Silverman Professor of Law and Philosophy at the University of Pennsylvania. Dr. Allen is a renowned expert in the areas of privacy, data protection, ethics, bioethics, and higher education; she authored the first casebook on privacy law and has been awarded numerous accolades and fellowships for her work. She earned her JD from Harvard and both her Ph.D. and master’s in philosophy from the University of Michigan. I had the opportunity to speak with her recently about her illustrious career, the origins of American privacy law, and her predictions about the information age.

Q: Dr. Allen, a few years ago you spoke to the Aspen Institute and offered a prediction that “our grandchildren will resurrect privacy from a shallow grave just in time to secure the freedom, fairness, democracy, and dignity we all value… a longing for solitude and independence of mind and confidentiality…” Do you still feel that way, and if so, what will be the motivating factors for reclaiming those sacred principles?

 

A: Yes, I believe that very hopeful prediction will come true because there’s an increasing sense in the general public of the extent to which we have perhaps unwittingly ceded our privacy controls to the corporate sector, and in addition to that, to the government. I think the Facebook problems that had been so much in the news around Cambridge Analytica have made us sensitive and aware of the fact that we are, by simply doing things we enjoy, like communicating with friends on social media, putting our lives in the hands of strangers.

And so, these kinds of disclosures, whether they’re going to be on Facebook or some other social media business, are going to drive the next generation to be more cautious. They’ll be circumspect about how they manage their personal information, leading to, I hope, eventually, a redoubled effort to ensure our laws and policies are respectful of personal privacy.

Q: Perhaps the next generation heeds the wisdom of their elders and avoids the career pitfalls and reputational consequences of exposing too much on the internet?

A: I do think that’s it as well. Your original question was about my prediction that the future would see a restoration of concern about privacy. I believe that, yes, as experience shows the younger generation just what the consequences are of living life in public view, there will be a turnaround to some extent - one that gets people to focus on what they have to lose. It’s not just that you could lose job opportunities. You could lose school admissions. You could lose relationship opportunities and the ability to find the right partner because your reputation is so horrible on social media.

All of those consequences are causing people to be a little more reserved. It may lead to a big turnaround when people finally get enough control over their understanding of those consequences that they activate their political and governmental institutions to do better by them.

Q: While our right to privacy isn’t explicitly stated in the U.S. Constitution, it’s reasonably inferred from the language in the amendments. Yet today, “the right to be forgotten” is an uphill battle. Some bad actors brazenly disregard the “right to be let alone,” as articulated by Louis Brandeis in 1890. Is legislation insufficient to protect privacy in the Information Age, or is the fault on the part of law enforcement and the courts?

A: I’ve had the distinct pleasure to follow developments in privacy law pretty carefully for the last 20 years, now approaching 30, and am the author or co-author of numerous textbooks on the right to privacy in the law, and so I’m familiar with the legal landscape. I can say from that familiarity that the measures we have in place right now are not adequate. It’s because the vast majority of our privacy laws were written literally before the internet, and in some cases in the late 1980s or early 1990s or early 2000s as the world was vastly evolving. So yes, we do need to go back and refresh our electronic communications and children’s internet privacy laws. We need to rethink our health privacy laws constantly. And all of our privacy laws need to be updated to reflect existing practices and technologies.

 

The right to be forgotten, which is a right described today as a new right created by the power of Google, is an old right that goes back to the beginning of privacy law. Even in the early 20th century, people were concerned about whether or not dated, but true information about people could be republished. So, it’s not a new question, but it has a new shape. It would be wonderful if our laws and our common law could be rewritten so that the contemporary versions of old problems, and completely new issues brought on by global technologies, could be rethought in light of current realities.

Q: The Fourth Amendment to the Constitution was intended to protect Americans from warrantless search and seizure. However, for much of our history, citizens have observed as surveillance has become politically charged and easily abused. How would our founders balance the need for privacy, national security, and the rule of law today?

A: The Fourth Amendment is an amazing provision that protects persons from warrantless search and seizure. It was designed to protect peoples’ correspondence, letters, and papers, as well as business documents, from disclosure without a warrant. The idea of the government collecting or disclosing sensitive personal information about us was the same then as it is now. The fact that it’s much more efficient to collect information could be described as almost a legal technicality as opposed to a fundamental shift.

I think that while the founding generation couldn’t imagine the fastest computers we all have on our wrists and our desktops today, they could understand entirely the idea that a person’s thoughts and conduct would be placed under government scrutiny. They could see that people would be punished by virtue of government taking advantage of access to documents never intended for them to see. So, I think they could very much appreciate the problem and why it’s so important that we do something to restore some sense of balance between the state and the individual.

Q: Then, those amendments perhaps anticipated some of today’s challenges?

A: Sure. Not in the abstract, but think of it in the concrete. If we go back to the 18th and 19th centuries, you will find some theorists speculating that someday there will be new inventions that will raise these types of issues. Warren and Brandeis talked specifically about new inventions and business methods. So, it’s never been far from the imagination of our legal minds that more opportunities would come through technology. They anticipated technologies that would do the kinds of things once only done with pen and paper, things that can now be done in cars and with computers. It’s a structurally identical problem. And so, while I do think our laws could be easily updated, including our constitutional laws, the constitutional principles are beautiful in part because fundamentally they do continue to apply even though times have changed quite a bit.

Some of the constitutional language we find in other countries around ideas like human dignity, which is now applied to privacy regulations, shows that, to some extent, very general constitutional language can be put to other purposes.

Q: In a speech to the 40th International Data Protection and Privacy Commissioners Conference, you posited that “Every person in every professional relationship, every financial transaction and every democratic institution thrives on trust. Openly embracing ethical standards and consistently living up to them remains the most reliable ways individuals and businesses can earn the respect upon which all else depends.” How do you facilitate trust, ethics, and morality in societies that have lost confidence in the authority of their institutions and have even begun to question their legitimacy?

A: For me, trust has to be earned. It’s not something that can be demanded or pulled out of a drawer and handed over. Unfortunately, the more draconian and unreasonable state actors behave respecting people’s privacy, the less people will be able to generate the kind of trust that’s needed. And the more government or the business sector shows genuine regard and respect for peoples’ privacy in their actions, as well as in their word and policies, the more that trust will come into being.

I think that people have to begin to act in ways that make trust possible. I have to act in ways that make trust possible by behaving respectfully towards my neighbors, my family members, and my colleagues at work, and they the same toward me. The businesses that we deal with have to act in ways that are suggestive of respect for their customers and their vendors. Up and down the chain. That’s what I think. There’s no magic formula, but I do think there’s some room for conversation for education in schools, in religious organizations, in NGOs, and policy bodies. There is room for conversations that enable people to find discourses about privacy, confidentiality, data protection that can be used when people demonstrate that they want to begin to talk together about the importance of respect for these standards.

It’s surprising to me how often I’m asked to define privacy or define data protection. When we’re at the point where experts in the field have to be asked to give definitions of key concepts, we’re, of course, at a point where it’s going to be hard to have conversations that can develop trust around these ideas. That’s because people are not always even talking about the same thing. Or they don’t even know what to talk about under the rubric. We’re in the very early days of being able to generate trust around data protection, artificial intelligence, and the like because it’s just too new.

Q: The technology is new, but the principles are almost ancient, aren’t they?

A: Exactly. If we have clear conceptions about what we’re concerned about, whether its data protection or what we mean by artificial intelligence, then those ancient principles can be applied to new situations effectively.

Q: In a world where people have a little less shame about conduct, doesn’t that somehow impact the general population’s view of the exploitation of our data?

A: It seems to me we have entered a phase where there’s less shame, but a lot of that’s OK because I think we can all agree that maybe in the past, we were a bit too ashamed of our sexuality, of our opinions. Being able to express ourselves freely is a good thing. I guess I’m not sure yet on where we are going because I’m thinking about, even like 50 years ago, when it would have been seen as uncouth to go out in public without your hat and gloves. We have to be careful that we don’t think that everything that happens that’s revealing is necessarily wrong in some absolute sense.

 

It’s different to be sure. But what’s a matter of not wearing your hat and gloves, and what’s a matter of demeaning yourself? I certainly have been a strong advocate for moralizing about privacy and trying to get people to be more reserved and less willing to disclose when it comes to demeaning oneself. And I constantly use the example of Anthony Weiner as someone who, in public life, went too far, and not only disclosed but demeaned himself in the process. We do want to take precautions against that. But if it’s just a matter of, “we used to wear white gloves to Sunday school, and now we don’t…” If that’s what we’re talking about, then it’s not that important.

Q: You studied dance in college and then practiced law after graduating from Harvard, but ultimately decided to dedicate your career to higher education, writing, and consulting. What inspired you to pursue an academic career, and what would you say are the lasting rewards?

A: I think a love of reading and ideas guided my career. Reading, writing, and ideas, and independence governed my choices. As an academic, I get to be far freer than many employees are. I get to write what I want to write, to think about what I want to think, and to teach and to engage people in ideas, in university, and outside the university. Those things governed my choices.

I loved being a practicing lawyer, but you have to think about and deal with whatever problems the clients bring to you. You don’t always have that freedom of choice of topic to focus on. Then when it comes to things like dance or the arts, well, I love the arts, but I think I’ve always felt a little frustrated about the inability to make writing and debate sort of central to those activities. I think I am more of a person of the mind than a person of the body ultimately.

[Source: This article was published in cpomagazine.com By RAFAEL MOSCATEL - Uploaded by the Association Member: Grace Irwin]

Categorized in Internet Ethics

As we close out 2019, we at Security Boulevard wanted to highlight the five most popular articles of the year. Following is the fifth in our weeklong series of the Best of 2019.

Privacy. We all know what it is, but in today’s fully connected society can anyone actually have it?

For many years, it seemed the answer was no. We were so enamored with Web 2.0, the growth of smartphones, GPS satnav, instant updates from our friends and the like that we seemed not to care about privacy. But while industry professionals argued the company was collecting too much private information, Facebook CEO Mark Zuckerberg understood that the vast majority of Facebook users were not as concerned. He said in a 2011 Charlie Rose interview, “So the question isn’t what do we want to know about people. It’s what do people want to tell about themselves?”

 

In the past, it was perfectly normal for a private company to collect personal, sensitive data in exchange for free services. Further, privacy advocates were often criticized for being alarmist and unrealistic. Reflecting this position, Scott McNealy, then-CEO of Sun Microsystems, infamously said at the turn of the millennium, “You have zero privacy anyway. Get over it.”

And for another decade or two, we did. Privacy concerns were debated; however, serious action on the part of corporations and governments was scarce. Ten years ago, the Payment Card Industry Security Standards Council had the only meaningful data security standard, ostensibly imposed by payment card issuers on processors and users to avoid fraud.

Our attitudes have shifted since then. Expecting data privacy is now seen by society as perfectly normal. We are thinking about digital privacy like we did about personal privacy in the ’60s, before the era of hand-held computers.

So, what happened? Why does society now expect digital privacy? Especially in the U.S., where privacy under the law is not so much a fundamental right as a tort? There are a number of factors, of course. But let’s consider three: a data breach that gained national attention, an international elevation of privacy rights and growing frustration with lax privacy regulations.

Our shift in the U.S. toward expecting more privacy started accelerating in December 2013, when Target experienced a headline-gathering data breach. The termination of the then-CEO and the staggering operating loss the following year, allegedly due to customer dissatisfaction and reputation erosion from this incident, got the boardroom’s attention. Now, data privacy and security are chief strategic concerns.

On the international stage, the European Union started experimenting with data privacy legislation in 1995. Directive 95/46/EC required national data protection authorities to explore data protection certification. This led to an opinion issued in 2011 which, through a series of further opinions and actions, culminated in the General Data Protection Regulation (GDPR) entering into force in 2016. This timeline is well-documented on the European Data Protection Supervisor’s website.

It wasn’t until 2018, however, when we noticed GDPR’s fundamental privacy changes. Starting then, websites that collected personal data had to notify visitors and ask for permission first. Notice the pop-ups everywhere asking for permission to store cookies? That’s a byproduct of the GDPR.

What happened after that? Within a few short years, many local governments in the U.S. became more and more frustrated with the lack of privacy progress at the national level. GDPR was front and center, with several lawsuits filed against high-profile companies that allegedly failed to comply.

As the GDPR demonstrated the possible outcomes of serious privacy regulation, smaller governments passed legislation of their own. The State of California passed the California Consumer Privacy Act and—almost simultaneously—the State of New York passed the Personal Privacy Protection Law. Both laws give U.S. citizens significantly more privacy protection than federal law does, and not just to state residents, but also to other U.S. citizens whose personal data is accessed or stored in those states.

Without question, we as a society have changed course. The unfettered internet has had its day. Going forward, more and more private companies will be subject to increasingly demanding privacy legislation.

Is this a bad thing? Something nefarious? Probably not. Just as we have always expected privacy in our physical lives, we now expect privacy in our digital lives as well. And businesses are adjusting toward our expectations.

One visible adjustment is more disclosure about exactly what private data a business collects and why. Privacy policies are easier to understand, as well as more comprehensive. Most websites warn visitors about the storage of private data in “cookies.” Many sites additionally grant visitors the ability to turn off such cookies except those technically necessary for the site’s operation.

Another visible adjustment is the widespread use of multi-factor authentication. Many sites, especially those involving credit, finance, or shopping, validate a login with a token sent by email, text, or voice. This lets the site verify that the authorized user is logging in, which helps prevent leaks of private data.
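The token step described above can be sketched in a few lines of Python. This is an illustrative sketch, not any particular site's implementation; the function names, six-digit code format, in-memory store, and five-minute expiry are assumptions made for the example.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 300  # assume the code expires after five minutes

_issued = {}  # user_id -> (token, issued_at); a real site would persist this

def issue_token(user_id: str) -> str:
    """Generate a short one-time code to send by email, text, or voice."""
    token = f"{secrets.randbelow(10**6):06d}"  # six-digit numeric code
    _issued[user_id] = (token, time.time())
    return token

def verify_token(user_id: str, submitted: str) -> bool:
    """Accept the code only once, only if it matches and has not expired."""
    record = _issued.pop(user_id, None)  # single use: remove on any attempt
    if record is None:
        return False
    token, issued_at = record
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return False
    # constant-time comparison avoids leaking information through timing
    return secrets.compare_digest(token, submitted)
```

Making the token single-use and short-lived is what gives this factor its value: a leaked code is worthless minutes later.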

Perhaps the biggest adjustment is not visible: encryption of private data. More businesses now operate on otherwise meaningless cipher substitutes (the output of an encryption function) in place of sensitive data such as customer account numbers, birth dates, email or street addresses, member names, and so on. This protects customers when the all-too-common breach does occur, because the stolen substitutes cannot be exploited.
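One way to operate on meaningless substitutes is keyed pseudonymization, sketched below with Python's standard library. This is illustrative only: the key, field names, and truncation length are hypothetical, and production systems would typically use reversible encryption or a dedicated token vault rather than this sketch.

```python
import hmac
import hashlib

# Hypothetical key for the example; real systems keep keys in a secrets
# manager, never in source code, and rotate them.
SECRET_KEY = b"demo-key-rotate-me"

def pseudonymize(value: str) -> str:
    """Return a stable, meaningless substitute for a sensitive value.

    The same input always maps to the same substitute (so records can
    still be joined), but the substitute reveals nothing on its own.
    """
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Application code and logs see only the substitute, never the raw number.
record = {"name": "A. Customer", "account": "4111-0000-1234"}
safe_record = {**record, "account": pseudonymize(record["account"])}
```

Because the mapping is keyed, an attacker who steals `safe_record` cannot recover the account number without also stealing the key.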

Respecting privacy is now the norm. Companies that show this respect will be rewarded for doing so. Those that don’t may experience a very different fiscal outcome.

[Source: This article was published in securityboulevard.com By Jason Paul Kazarian - Uploaded by the Association Member: Jason Paul Kazarian]

Categorized in Internet Ethics

The civility debate sidesteps how false assumptions about harm online, coupled with the affordances of digital media, encourage toxicity

Whitney Phillips is an Assistant Professor of Communication and Rhetorical Studies at Syracuse University and is the author of This Is Why We Can't Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture and co-author of The Ambivalent Internet: Mischief, Oddity and Antagonism Online.

Ryan M Milner is an Associate Professor of Communication at the College of Charleston and is author of The World Made Meme: Public Conversations and Participatory Media and co-author of The Ambivalent Internet: Mischief, Oddity and Antagonism Online.


A common lamentation online, one that spans the political divide and is forwarded by politicians and editorial boards alike, is that civility in American politics has died. It’s such a pressing concern that 80 percent of respondents to a recent NPR survey fear that uncivil speech will lead to physical violence. If only people would lower their voices, stop posting rude memes, and quit with the name-calling, we could start having meaningful conversations. We could unite around our shared experiences. We could come together as a nation.


In the current media environment, in which Twitter and Instagram are inundated with harassment, journalists are routinely threatened, and YouTube algorithms prop up reactionary extremists, we find it difficult to argue with that sentiment.

As idyllic as it might sound, however, the call to restore civility isn’t as straightforward as it appears. Civility alone isn’t enough to fix what’s broken; it might actually make the underlying problems worse. We need, instead, to consider the full range of behaviors that facilitate harm online. Yes, this includes extreme, explicitly damaging cases. But it also includes the kinds of behaviors that many of us do without thinking, in fact, that many of us have already done today. These things might seem small. When we use them to connect with others, build communities, and express support, they might seem downright civil. But the little things we do every day, even when we have no intention of causing harm, quickly accumulate. Not only do these everyday actions normalize an ever-present toxicity online, they pave the way for the worst kinds of abuses to flourish.

The Civility Trap

When used as a political rallying point, appeals to civility are often a trap, particularly when forwarded in response to critical, dissenting speech. Sidestepping the content of a critique in order to police the tone of that critique—a strategy employed with particular vigor during the Kavanaugh hearings, and which frequently factors into hand-wringing over anti-racist activism—serves to falsely equate civility with politeness, and politeness with the democratic ideal. In short: you are being civil when you don’t ruffle my feathers, which is to say, when I don’t have to hear your grievance.

Besides their tendency to be deployed in bad faith as rhetorical sleights-of-hand, calls for civility have another, perhaps more insidious, consequence: deflecting blame. It’s everybody else’s behavior that is the problem; they’re the ones who need to start acting right. They’re the ones who need to control themselves. In these instances, “We need to restore civility” becomes an exercise in finger-pointing. You’re the one who isn’t being civil. Indeed, the above NPR survey explicitly asked respondents to identify who was to blame for the lack of civility in Washington, with four possible choices: President Trump, Republicans in Congress, Democrats in Congress, or the media. Whose fault is it: this is how the civility question tends to be framed.


We certainly maintain that the behavior of others can be a problem, or outright dangerous. We certainly maintain that some people need to control themselves, particularly given the increasingly glaring link between violent political rhetoric and violent action. Those who trade in antagonism, in manipulation, in symbolic violence and physical violence, warrant special, unflinching condemnation.

But few of us are truly blameless. In order to mitigate political toxicity and cultivate healthier communities, we must be willing to consider how, when, and to what effect blame whips around and points the finger squarely at our own chests.

We do this not by focusing merely on what’s civil, certainly when civility is used as a euphemism for tone-policing, or when it’s employed to pathologize and silence social justice activists (as if loudly calling out injustice and bigotry is an equivalent sin to that injustice and bigotry). We do this by focusing on what’s ethical. A more robust civility will stem from that shift in emphasis. Civility without solid ethical foundations, in contrast, will be as useful as a bandaid slapped over a broken bone.

As we conceive of them, online ethics foreground the full political, historical, and technological context of online communication; contend with the repercussions of everyday online behaviors; and avoid harming others. Ethics do not mean keeping your voice down. Ethics do not mean keeping feathers unruffled. Ethics mean taking full and unqualified responsibility for the things you choose to do and say.

The Ethics of the Biomass

It’s not just that online ethics help facilitate more reflective, more empathetic, and indeed, more civil online interactions. Online ethics do even heavier lifting than that. Decisions guided by efforts to contextualize information, foreground stakes, preempt harm, and accept consequences also help combat information disorder, a term Claire Wardle and Hossein Derakhshan use to describe the process by which misinformation, disinformation, and malinformation contaminate public discourse. Ethics are a critical, if underutilized, bulwark against the spread of such information. Without strong ethical foundations, everyday communication functions, instead, as a soft target for information disorder.

The fact that unethical—or merely ethically unmoored—behaviors contribute to information disorder is a structural weakness that abusers, bigots, and media manipulators have exploited again and again. Phillips underscores this point in a Data & Society report on the ways extremists and manipulators launder toxic messaging through mainstream journalism. The same point holds for everyday social media users. Extremists need signal boosting. They get it when non-extremists serve as links in the amplification chain, whatever a person’s motives might be for amplifying that content.

When considering how ethical reflection can cultivate civility and help stymie information disorder, biomass pyramids provide a helpful, if unexpected, entry point.

In biology, biomass pyramids chart the relative number or weight of one class of organism compared to another organism within the same ecosystem. For a habitat to support one lion, the biomass pyramid shows, it needs a whole lot of insects. When applied to questions of online toxicity, biomass pyramids speak to the fact that there are far more everyday, relatively low-level cases of harmful behavior than there are apex predator cases—the kinds of actions that are explicitly and wilfully harmful, from coordinated hate and harassment campaigns to media manipulation tactics designed to sow chaos and confusion.


When people talk about online toxicity, they tend to focus on these apex predator cases. With good reason: these attacks have profound personal and professional implications for those targeted.

But apex predators aren’t the only creatures worth considering. The bottom stratum is just as responsible for the rancor, negativity, and mis-, dis-, and mal-information that clog online spaces, causing a great deal of cumulative harm.


This bottom stratum includes posting snarky jokes about an unfolding news story, tragedy, or controversy; retweeting hoaxes and other misleading narratives ironically, to condemn them, make fun of the people involved, or otherwise assert superiority over those who take the narratives seriously; making ambivalent inside jokes because your friends will know what you mean (and for white people in particular, that your friends will know you’re not a real racist); @mentioning the butts of jokes, critiques, or collective mocking, thus looping the target of the conversation into the discussion; and easiest of all, jumping into conversations mid-thread without knowing what the issues are. Regarding visual media, impactful everyday behaviors include responding to a thread with a GIF or reaction image featuring random everyday strangers, or posting (and/or remixing) the latest meme to comment on the news of the day.

Here is one example: recently, one of us published something on, let's say, internet stuff. Other people have written lots of things on the same general subject. One day, a stranger @-mentioned us to say that what we published was better than what someone else had published, and proceeded to explain how the other author fell short. The stranger @-mentioned the other author in the tweet. This was, we suppose, meant as a compliment to us. At the same time, it made us party to something we didn't want any part of, since just saying "thank you" would have cosigned, or seemed to cosign, the underlying insult. The other author, of course, fared much worse; the stranger didn't seem to give them the slightest passing thought.

It was a handle on Twitter to link to, not a person with feelings to consider. But of course, that stranger was wrong—no person on Twitter is just a handle to link to. And no person wants to be told in public that they are less than, for any reason. But that was the conversation, suddenly, this other author had been thrust into. One we were thrust into as well, even as the stranger thought they were saying something nice.

This stratum of behavior receives far less attention than apex predator cases. Most basically, this is because each of the above behaviors, taken on their own, pales in comparison to extreme abuses. Whether emanating from platforms like YouTube, white supremacist spaces like The Daily Stormer, or even the White House, the damage done by the proverbial lions is clear, present, and often intractable. From a biomass perspective, insects seem tiny in comparison—and therefore not worth much consideration.

Less obviously, the lower stratum of the biomass pyramid receives less fanfare because of assumptions about harm online. In cases of explicit abuse, bigotry, and manipulation, harm is almost always tethered to the criterion of intentionality: the idea that someone meant to hurt another person, meant to sow chaos and confusion, meant to ruin someone’s life.


In terms of classification, and of course interventionist triaging, it makes good sense to use the criterion of intentionality. Coordinated campaigns of hate, harassment, and manipulation, particularly those involving multiple participants, don’t just happen accidentally. Abusers and manipulators choose to abuse and manipulate; this is what makes them apex predators.

At the same time, however, reliance on the criterion of intentionality has some unintended consequences.

First, the criterion of intentionality discourages self-reflection in those who aren’t apex predators. If someone doesn’t set out to harm another person, that person is almost guaranteed not to spend much time reflecting on whether their behavior has or could harm others. Harm is something lions do. If you are not a lion, carry on.

But just because you’re not a lion doesn’t mean you can’t leave a nasty bite. Even when a person’s motives are perfectly innocent, low-level behaviors can still be harmful. They can still flatten others into abstract avatars. They can still weaponize what someone else said, or result in the weaponization of something you said. They can still strip a person of their ability to decide if, for example, they want a picture of themselves to be used as part of some stranger’s snarky Twitter commentary, or to be included in a conversation in which they are being publicly mocked.

From an information disorder perspective, these low-level behaviors can also be of great benefit to the lions. Retweeting false or misleading stories (even if the point is to make fun of how stupid they are), making ironic statements that, taken out of context, look like actual examples of actual hate, and generally opening the floodgates for polluted information to flow through: these behaviors are what allow apex predators to cause as much damage as they do.

These actions also feed into, and are fed by, issues of journalistic amplification. The greater the social media reaction to a story, the more reason journalists have to cover it, or at least tweet about it. And the greater the journalistic response to a story, the more social media reaction it will generate. And then there are the trending topics algorithms, which do not care why people share things, just that they share things, as polluted information cyclones across platforms, accruing strength as it travels.

Because of these overlapping forces, whether or not someone means to sow discord, or spread hate, or propagate false and misleading information, discord can be sown, hate can be spread, and false and misleading information can be propagated by behaviors that otherwise don’t create a blip on the political radar.

Stacking the Deck with Digital Tools

Focusing on intentionality obscures the collective damage everyday people can do when they use social media in socially and technologically prescribed ways. The affordances of digital media make this problem even worse by further cloaking the stakes of everyday communication.

We describe these affordances in our book The Ambivalent Internet. They include modularity, the ability to manipulate, rearrange, and/or substitute digitized parts of a larger whole without disrupting or destroying that whole; modifiability, the ability to repurpose and reappropriate aspects of an existing project toward some new end; archivability, the ability to replicate and store existing data; and accessibility, the ability to categorize and search for tagged content.

These tools don’t just allow, they outright encourage, participants to flatten contexts into highly shareable, highly remixable texts: specific images, specific GIFs, specific memes.

All creative play online owes its existence to these affordances. They are what make the internet the internet. They also make it enormously easy to sever social media avatar from offline body, and to mistake one tiny sliver of a story for an entire narrative, or to never even think about what the entire narrative might be. As a result, even the most well-intentioned among us can overlook the consequences of our actions, and never even know whose toes we might be stepping on.

In such an environment, the first step towards making more ethical choices is acknowledging how the deck has been stacked against making more ethical choices.

The second is to anticipate and try to preempt unethical outcomes. This means contending with the fact that your own contextualizing information, including your underlying motivations, becomes moot once tossed to the internet’s winds. You might know what you meant, or why you did what you did, particularly in cases where you’re relying on “I was just joking” excuses. But others can’t know any of that. Not due to oversensitivity, not due to an inability to take a joke, but because they can’t read your mind, and shouldn’t be expected to try.

Another critical question to ask is what you don’t know about the content you’re sharing. How and where was something sourced? What happened to the people involved? Did they ever give consent? Who was the initial intended audience? Each of these unknowns shapes the implications, and of course the ethics, of further amplifying that content. The devil, in these cases, isn’t in the details, the devil is in the unseen, unknown, unsolicited narratives.

Finally, we must all remember that the issues we discuss online, the stories we share, the media we play with—all can be traced back to bodies. Fully-fleshed out human beings who have friends, feelings, and a family—just like each of us.

This point is particularly important for middle-class, able-bodied, cisgendered white people to reflect on (a point we make as middle-class, able-bodied, cisgendered white people ourselves). When your body—your skin color, the resources you have access to, your gender identity, your ability—has never been the source of threats, abuse, and dehumanization, it is very easy to downplay the seriousness of threats, abuse, and dehumanization. To approach them abstractly, as just words, on just the internet. The behaviors in question might not seem like a big deal to you, because they’ve never needed to be a big deal for you. Because you’ve always, more or less, been safe. This might help explain why you react the way you do, but it’s not an excuse to keep reacting that way.

So when in doubt, when you do not understand: remember that what might look like an insect to one person can act like a lion to others. Particularly when those insects are everywhere, always, clogging a person’s experience, weighing down their bodies.

Environmental Protections

The biomass pyramid shows that the distinction between big harm and small harm is, in fact, highly permeable. The big harms perpetrated by apex predators are exactly that: big and dangerous. Smaller harms are, by definition, smaller, and on their own, less dangerous. But behavior at that lower stratum can still be harmful. It is also cumulative; it adds up to something massive. So massive, in fact, that these smaller harms implicate all of us—not just as potential victims, but as potential perpetrators. Just as it does in nature, this omnipresent lower stratum in turn supports all the strata above, including the largest, most dangerous animals at the top of the food chain. Directly and indirectly, insects feed the lions.

Robust online ethics provide the tools for minimizing all this harm. By using ethical tools, we minimize the environmental support apex predators depend on. We also have in our own hands the ability to cultivate civility that is not superficial, that is not a trap, but that has the potential to fundamentally alter what the online environment is like for the everyday people who call it home.

[Source: This article was published in vice.com By Whitney Phillips & Ryan M Milner - Uploaded by the Association Member: Joshua Simon] 


Source: This article was published lawjournalnewsletters.com By JONATHAN BICK - Contributed by Member: Barbara Larson

Internet professional responsibility and client privacy difficulties are intimately associated with the services offered by lawyers. Electronic attorney services result in data gathering, information exchange, document transfers, enhanced communications and novel opportunities for marketing and promotion. These services, in turn, provide an array of complicated ethical issues that can present pitfalls for the uninitiated and unwary.

Since the Internet interpenetrates every aspect of the law, Internet activity can result in a grievance filed against attorneys for professional and ethical misconduct when such use results in communication failure, conflicts of interest, misrepresentation, fraud, dishonesty, missed deadlines or court appearances, advertising violations, improper billing, and funds misuse. While specific Internet privacy violation rules and regulations are rarely applied to attorney transactions, attorneys are regularly implicated in unfair and deceptive trade practices and industry-specific violations which are often interspersed with privacy violation facts.

Attorneys have a professional-responsibility duty to use the Internet, and that same professional responsibility creates difficulties in doing so. More specifically, the Model Rules of Professional Conduct Rule 1.1 (competence), Comment 8 (maintaining competence), has been interpreted to require the use of the Internet, and Rules 7.1 – 7.5 (communications, advertising and soliciting) specifically charge attorneys with malfeasance for using the Internet improperly.

Internet professional conduct standards and model rules/commentary cross the full range of Internet-related concerns, including expert self-identification and specialty description; the correct way to structure Internet personal profiles; social media privacy settings; the importance and use of disclaimers; what constitutes “communication”; and the establishment of an attorney-client relationship. Additionally, ethics rules address “liking,” “friending” and “tagging” practices.

The application of codes of professional conduct faces a two-fold difficulty. First, what is the nature of the attorney’s Internet activity? Is the activity publishing, broadcasting, or telecommunications? Determining the nature of the activity is important because different privacy and ethics canons apply. Additionally, determining the nature of the activity allows practitioners to apply analogies. For example, attorney Internet-advertising conduct is likely to be judged by the same standards as traditional attorney advertising.

The second difficulty is the location where activity occurs. Jurisdictions have enacted contrary laws and professional-responsibility duties.

Options for protecting client privacy and promoting professional responsibility include technical, business and legal options. Consider the following specific legal transactions.

A lawyer seeking to use the Internet to attract new clients across multiple jurisdictions frequently is confronted with inconsistent rules and regulations. A number of jurisdictions have taken the position that Internet communications are a form of advertising and thus subject to a particular state bar’s ethical restrictions. Such restrictions related to Internet content include banning testimonials; prohibitions on self-laudatory statements; disclaimers; and labeling the materials presented as advertising.

Other restrictions relate to content processing, such as requiring that advance copies of any advertising materials be submitted for review by designated bar entities prior to dissemination, and requiring that attorneys keep a copy of their website and any changes made to it for three years, along with a record of when and where the website was used. Still other restrictions relate to distribution techniques, such as unsolicited commercial emailing (spam). Spam is considered by some states to be overreaching, on the same grounds as ethical bans on in-person or telephone solicitation.

To overcome these difficulties and thus permit the responsible use of the Internet for attorney marketing, both technical and business solutions are available. The technical solution employs selectively serving advertisements to appropriate locations. For this solution, the software can be deployed to detect the origin of an Internet transaction. This software will serve up advertising based on the location of the recipient. Thus, attorneys can ameliorate or eliminate the difficulties associated with advertising and marketing restrictions without applying the most restrictive rule to every state.
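The location-aware serving described above might be sketched as follows in Python. The geolocation step is stubbed with a hypothetical lookup table (real deployments would query a GeoIP database or service), and the state codes and rule sets shown are invented examples, not actual bar requirements.

```python
# Hypothetical per-state advertising restrictions for the sketch.
RESTRICTED_STATES = {
    "TX": {"no_testimonials"},
    "FL": {"label_as_advertising"},
}

def geolocate(ip_address: str) -> str:
    """Placeholder: resolve an IP address to a U.S. state code.

    A real system would use a GeoIP database or service here.
    """
    demo_table = {"203.0.113.5": "TX", "198.51.100.9": "NJ", "192.0.2.7": "FL"}
    return demo_table.get(ip_address, "UNKNOWN")

def select_ad(ip_address: str, base_ad: dict) -> dict:
    """Apply only the recipient state's restrictions to the ad copy."""
    state = geolocate(ip_address)
    ad = dict(base_ad)  # leave the original ad untouched
    rules = RESTRICTED_STATES.get(state, set())
    if "no_testimonials" in rules:
        ad.pop("testimonial", None)
    if "label_as_advertising" in rules:
        ad["label"] = "ATTORNEY ADVERTISING"
    return ad
```

The point of the design is that each recipient sees an ad tailored to their own jurisdiction's rules, rather than every recipient seeing an ad constrained by the most restrictive state.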

Alternatively, a business solution may be used. Such a business solution would apply the most restrictive rules of each state to every Internet advertising and marketing communication.

Another legal difficulty associated with attorney Internet advertising and marketing is the unauthorized practice of law. All states have statutes or ethical rules that make it unlawful for persons to hold themselves out as attorneys or to provide legal services unless admitted and licensed to practice in that jurisdiction.

There are few reported decisions on this issue, but a handful of ethics opinions and court decisions take a restrictive view of unauthorized practice issues. For example, the court in Birbrower, Montalbano, Condon & Frank v. Superior Court, 949 P.2d 1 (1998), relied on unauthorized practice concerns in refusing to honor a fee agreement between a New York law firm and a California client for legal services provided in California, because the New York firm did not retain local counsel and its attorneys were not admitted in California.

Software can detect the origin of an Internet transaction. Thus, attorneys can ameliorate or eliminate unauthorized-practice risk by identifying the location of a potential client and interacting only with potential clients located in states where the attorney is authorized to practice. Alternatively, an attorney could use filtering software (a “net nanny”) to block communications with potential clients located in states where the attorney is not authorized to practice.

Preserving clients’ confidences is of critical importance in all aspects of an attorney’s practice. An attorney using the Internet to communicate with a client must consider the confidentiality of such communications. Using the Internet to communicate with clients on confidential matters raises a number of issues, including whether such communications might violate the obligation to maintain client confidentiality; result in a waiver of the attorney-client privilege if intercepted by an unauthorized party; and create possible malpractice liability.

Both legal and technological solutions are available. First, memorializing informed consent is a legal solution.

Some recent ethics opinions suggest a need for caution. Iowa Opinion 96-1 states that before sending client-sensitive information over the Internet, a lawyer should either encrypt the information or obtain the client’s written acknowledgment of the risks of using this method of communication.

Substantial compliance may be a technological solution, because the changing nature of Internet difficulties makes complete compliance unfeasible. Some attorneys have adopted internal measures to protect electronic client communications, including asking clients to consider alternative technologies; encrypting messages to increase security; obtaining written client authorization to use the Internet and acknowledgment of the possible risks in doing so; and exercising independent judgment about communications too sensitive to share using the Internet. While the use of such technology is not foolproof, where said use demonstrably exceeds what is customary, judges and juries have found such efforts to be sufficient.

Finally, both legal and business options are available to surmount Internet-related client conflicts. Because of the business development potential of chat rooms, bulletin boards, and other electronic opportunities for client contact, many attorneys see the Internet as a powerful client development tool. What some fail to recognize, however, is that the very opportunity to attract new clients may be a source of unintended conflicts of interest.

Take, for example, one of the most common uses of Internet chat rooms: a request seeking advice from attorneys experienced in dealing with a particular legal problem. Attorneys have been known to prepare elaborate and highly detailed responses to such inquiries. Depending on the level and nature of the information received and the advice provided, however, attorneys may be dismayed to discover that they have inadvertently created an attorney-client relationship with the requesting party. At a minimum, given the anonymous nature of many such inquiries, they may face the embarrassment and potential client relations problem of taking a public position or providing advice contrary to the interests of an existing firm client.

An acceptable legal solution is the application of disclaimers and consents. Some operators of electronic bulletin boards and online discussion groups have tried to minimize the client conflict potential by providing disclaimers or including as part of the subscription agreement the acknowledgment that any participation in online discussions does not create an attorney-client relationship.

Alternatively, the use of limited answers would be a business solution. The Arizona State Bar recently cautioned that lawyers probably should not answer specific questions posed in chat rooms or newsgroups because of the inability to screen for potential conflicts with existing clients and the danger of disclosing confidential information.

Because the consequences of finding an attorney-client relationship are severe and may result in disqualification from representing other clients, the prudent lawyer should carefully scrutinize the nature and extent of any participation in online chat rooms and similar venues.


 Source: This article was published cyberblogindia.in By Abhay Singh Sengar - Contributed by Member: Bridget Miller

When we talk about “ethics,” we refer to the attitudes, values, beliefs, and habits possessed by a person or a group. The sense of the word is directly related to the term “morality,” as ethics is the study of morality.

Meaning of Computer Ethics

The term is not very old. Until the 1960s there was nothing known as “computer ethics.” Walter Maner introduced the term in the mid-1970s to mean “ethical problems aggravated, transformed or created by computer technology.” Wiener and Moor have also discussed this in their work: “computer ethics identifies and analyses the impacts of information technology upon human values like health, wealth, opportunity, freedom, democracy, knowledge, privacy, security, self-fulfillment, and so on…“. Since the 1990s the importance of the term has increased. In simple words, computer ethics is a set of moral principles that governs the usage of computers.

Issues

As we all know, the computer is a powerful technology, and it raises ethical issues like personal intrusion, deception, breach of privacy, cyber-bullying, cyber-stalking, defamation, evasion of social responsibility, and intellectual property rights (i.e., copyrighted electronic content). In the computer and Internet (cyberspace) domain of information security, understanding and maintaining ethics is very important at this stage. A typical ethical problem arises mainly because of the absence of policies or rules about how computer technology should be used. It is high time there was strict legislation regarding this in the country.

Internet Ethics for everyone

  1. Acceptance - We should accept that the Internet is a core component of our society, not something apart from it.
  2. We should understand the sensitivity of Information before writing it on the Internet as there are no national or cultural barriers.
  3. As we do not provide our personal information to any stranger, similarly it should not be uploaded to a public network because it might be misused.
  4. Avoid rude or offensive language when using e-mail, chat, blogs, or social networks. Respect the person on the other side.
  5. No copyrighted material should be copied, downloaded or shared with others.

Computer Ethics

Following are the Ten Commandments as created by the Computer Ethics Institute, a nonprofit working in this area:

  1. Thou shalt not use a computer to harm other people;
  2. Thou shalt not interfere with other people’s computer work;
  3. Thou shalt not snoop around in other people’s computer files;
  4. Thou shalt not use a computer to steal;
  5. Thou shalt not use a computer to bear false witness;
  6. Thou shalt not copy or use proprietary software for which you have not paid;
  7. Thou shalt not use other people’s computer resources without authorization or proper compensation;
  8. Thou shalt not appropriate other people’s intellectual output;
  9. Thou shalt think about the social consequences of the program you are writing or the system you are designing;
  10. Thou shalt always use a computer in ways that ensure consideration and respect for your fellow humans.

The computer and the Internet are both time-efficient tools for everyone, and they can enlarge the possibilities for your learning and curriculum. There is a lot of information on the Internet that can help you learn. Explore that information instead of exploiting others.

Computer Internet Ethics

Categorized in Internet Ethics

When reading Wikipedia’s 1992 Ten Commandments of Computer Ethics you can easily substitute “Internet” for “computer”, and it’s amazing what you see; for example, the 1st Commandment becomes “You shall not use the Internet to harm other people.” Here are all Ten Commandments of Internet Ethics (with my minor edits):

  1. You shall not use the Internet to harm other people.
  2. You shall not interfere with other people’s Internet work.
  3. You shall not snoop around in other people’s Internet files.
  4. You shall not use the Internet to steal.
  5. You shall not use the Internet to bear false witness.
  6. You shall not copy or use proprietary software for which you have not paid (without permission).
  7. You shall not use other people’s Internet resources without authorization or proper compensation.
  8. You shall not appropriate other people’s intellectual output.
  9. You shall think about the social consequences of the program you are writing or the system you are designing.
  10. You shall always use the Internet in ways that ensure consideration and respect for your fellow humans.

For those of us who used the Internet in 1992, it’s great to see that the Ethics of the Internet in 1992 (from the Computer Ethics Institute) still applies in 2016!

Source: This article was published at vogelitlawblog.com by Peter S. Vogel

Categorized in Internet Ethics

The copyright industry, especially the RIAA (Recording Industry Association of America) and the MPAA (Motion Picture Association of America), has suppressed every form of innovation and technology to protect its questionable rights. In the 80s, they sued to stop video recorders, but were thankfully held back by the Supreme Court in the famous Betamax case. The media industry forced manufacturers of blank cassettes, tapes, and CDs to pay a royalty to reimburse the industry because the blank recording media might be used to infringe copyright. That is right: your preacher's sermon tapes were actually forced to subsidize Hollywood.

In 1998, the RIAA sued to stop the first portable MP3 player, the Diamond Rio, from being sold.

In 1999, they took down Napster, the breakthrough file-sharing upstart. Then they cut a swath of destruction through a plethora of file-sharing services, with such vicious tactics as suing children who downloaded songs for unconscionable sums of money.

Upping the outrage, they tried to gut the First Amendment with SOPA (the Stop Online Piracy Act), which imperiled the whole Internet by making search engines and hosting companies liable for piracy that those technology companies had nothing to do with. Only when technology giants apprised Congress that technology produced more jobs than the media did Congress back off. Temporarily!

In 2014, the RIAA considered suing Google for even listing sites that people could use to rip media.

 

The RIAA previously found that for 98% of the music-related searches they performed, “pirate sites” were listed on the first page of the search results. According to the music group, this is an indication that more proactive measures are required, in the interests of both Google and the labels.

“So the enforcement system we operate under requires us to send a staggering number of piracy notices – 100 million and counting to Google alone – and an equally staggering number of takedowns Google must process. And yet pirated copies continue to proliferate and users are bombarded with search results to illegal sources over legal sources for the music they love,” Sherman notes. - Torrent Freak

Why is it in Google's interest to doctor its search engine results to make the copyright industry happy? And is the word “bombarded” appropriate for providing the public with the search results it wants? This is industry propaganda. Now the RIAA is going full speed after YouTube ripping.

So what is YouTube ripping?

A few years ago, soon after file-sharing sites were sued into oblivion, technology surfaced that made it possible to rip the music directly off YouTube videos. No longer did one have to download buggy file-sharing software, which ironically opened one's computer to viruses. One could merely go to YouTube, copy the URL, and then go to a ripping site to split the MP3 audio off the video and download it. A 2010 video, made at just about the same time the LimeWire file-sharing service was finally taken down, gives some instruction; more recent instruction videos are easily searched out. Newer sites are incredibly easy to use.

News of the YouTube ripping technique spread slowly at first, except among technophiles; but soon enough, the Media Industry's victory over file sharing software/services would prove pyrrhic. YouTube ripping had the advantage of being incredibly easy and all but untraceable. No need to worry about RIAA lawsuits.

 

So, now, the RIAA is back again, crying foul, going nuts, suing YouTube ripping sites.

This week a huge coalition of recording labels headed by the RIAA, IFPI, and BPI, sued YouTube ripping service YouTube-MP3. Today we take a closer look at the lawsuit which was filed against a German company, owned and operated by a German citizen, which could seek damages running into the hundreds of millions of dollars. - Torrent Freak

This time, the RIAA has lost all reason. It is once again playing Whack-a-Mole, which is what it has been doing all along. If history teaches anything, innovators, by their very nature, will always outpace Luddites. YouTube ripping sites have proliferated across the web; at the time of writing, a search for YouTube rippers returns some 95 million results.

Nothing will stop the RIAA, the MPAA, and the Media Industry, though.

Hollywood media moguls are intent on preserving a dying business model. Worse yet, they expect technology companies to provide the technical expertise to protect their quasi-monopoly. It is much cheaper to have Google, Microsoft, and Facebook pay programmers to fight piracy than for the RIAA to hire programmers and come up with the technology itself.

Then again, their incompetence in this area has been humiliating.

In an attempt to curb music piracy, major labels such as Sony started selling music CDs that have built-in “copy-proof” technology. The technology was meant to stop people from copying music from these discs onto recordable CDs or hard drives. There's a fatal flaw in this technology, however, which allows you to bypass the copy protection with a simple marker pen, and a recent upsurge in Internet newsgroup talk about this flaw has brought it to light again.  -- Geek (2002)

Open up a cafe or a bar with some live music and you could be forced to pay three royalty collection agencies: ASCAP, BMI, and SESAC.

Antonowisch explained that once ASCAP got wind that they had live music (even though they were only holding about 12 concerts a year), ASCAP began their crusade. “They called us every day. They sent two letters a day. They threatened us with a lawsuit because they said we had violated copyright,” Antonowisch lamented. So as not to get sued, the coffee shop owners conceded. They agreed to pay ASCAP the $600 yearly license for the right to have live music.

But then they found out that there was another PRO that required the same license: BMI. (snip)

Then, as luck would have it, SESAC got in touch. And they demanded just over $700. (snip)

Bauhaus [the cafe] actually explained to ASCAP that all of their musicians play original music, and ASCAP shot back, “How do you know? Do you know every song ever written?” So the PROs won’t believe a venue if they claim that they only host original music. And all it takes is one musician to play one cover song for a PRO to sue for serious damages.

Consider them three mafias. A protection racket. Once you pay one, the others want their cut. Add in the MPAA and the RIAA, and it is legalized corruption. Congress indulges them because the media can make or break a politician's career; and so Congress passes more and more noxious copyright laws to protect their monopoly.

As part of draining the swamp, this new administration has nothing to lose by offending the media. Trump should reform our copyright laws. Copyrights should be limited to no more time than patents: 20 years. Getting a technical patent can require decades of investment and education. Why should a song written over a short period of time be protected for the life of the author plus 70 years?

These media moguls are mafiosi in legal garb.  It is high time they were told that it is not the duty of Google, YouTube, Microsoft, or Apple to protect their recordings.  If the media companies cannot protect their own product, then so be it.

Let the industry die off. It is a dinosaur in an age of mammals. It is a relic that has lost its usefulness, like royalty and aristocracy. We won't have to suffer industry stars telling us how enlightened they are and how retro-stupid the public is.

For decades they have monopolized American and Western culture - often destroying our core values - and charged us for the privilege of their artistic rampage.  We were stupid to put up with it. Now they are suing us. Let them die out. Let music and artistic creation return to the individual, as it was when the republic was born. Let the copyright attorneys find something useful to do.

Author: Mike Konrad
Source: http://www.americanthinker.com/articles/2017/01/copyright_vultures_are_at_it_again.html

Categorized in Internet Ethics

The Internet Society has released the findings of its 2016 Global Internet Report, in which 40% of users admit they would not do business with a company that had suffered a data breach.

Highlighting the extent of the data breach problem, the report makes key recommendations for building user trust in the online environment, stating that more needs to be done to protect online personal information.

With a reported 1,673 breaches and 707 million exposed records occurring in 2015, the Internet Society is urging organisations to change their stance and follow five recommendations to reduce the number and impact of data breaches globally:

 

1. Put users - who are the ultimate victims of data breaches - at the centre of solutions. When assessing the costs of data breaches, include the costs to both users and organisations. 

2. Increase transparency about the risk, incidence and impact of data breaches globally. Sharing information responsibly helps organisations improve data security, helps policymakers improve policies and regulators pursue attackers, and helps the data security industry create better solutions.

3. Data security must be a priority – organisations should be held to best practice standards when it comes to data security.

4. Increase accountability – organisations should be held accountable for their breaches. Rules regarding liability and remediation must be established up front.

5. Increase incentives to invest in security – create a market for trusted, independent assessment of data security measures so that organisations can credibly signal their level of data security. Security signals enable organisations to indicate that they are less vulnerable than competitors.

The report also draws parallels with threats posed by the Internet of Things (IoT). Forecast to grow to tens of billions of devices by 2020, interconnected components and sensors that can track locations, health, and other daily habits are opening gateways into users’ personal lives, leaving data exposed.

“We are at a turning point in the level of trust users are placing in the Internet,” said Internet Society’s Olaf Kolkman, Chief Internet Technology Officer. “With more of the devices in our pockets now having Internet connectivity, the opportunities for us to lose personal data is extremely high.

“Direct attacks on websites such as Ashley Madison and the recent IoT-based attack on Internet performance management company, Dyn, that rendered some of the world’s most famous websites including Reddit, Twitter and The New York Times temporarily inaccessible, are incredibly damaging both in terms of profits and reputation, but also to the levels of trust users have in the Internet.”

Other report highlights include:

  • The average cost of a data breach is now $4 million, up 29 percent since 2013
  • The average cost per lost record is $158, up 15 percent since 2013
  • Within business, the retail sector represents 13 percent of all breaches and six percent of all records stolen, while financial institutions represent 15 percent of breaches, but just 0.1 percent of records stolen, indicating these businesses might have greater resilience built in to protect their users
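The percentage increases in the figures above can be checked with simple arithmetic. Assuming the quoted increases are measured against the 2013 figures (the report excerpt does not spell this out), the implied 2013 baselines work out as follows:

```python
# Back-of-the-envelope check of the breach-cost figures,
# assuming "up X percent since 2013" means relative to the 2013 value.
cost_2016 = 4_000_000          # average cost of a data breach (USD)
per_record_2016 = 158          # average cost per lost record (USD)

cost_2013 = cost_2016 / 1.29          # up 29% since 2013
per_record_2013 = per_record_2016 / 1.15  # up 15% since 2013

print(round(cost_2013))        # about $3.1 million in 2013
print(round(per_record_2013))  # about $137 per record in 2013
```

These implied baselines are illustrative only; the underlying report should be consulted for the actual 2013 figures.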

Source  :  https://www.finextra.com/pressarticle/67186/internet-trust-at-all-time-low-not-enough-data-protection

Categorized in Internet Ethics

The Association of Internet Research Specialists is the world's leading community for Internet Research Specialists and provides a unified platform that delivers education, training, and certification for online research.
