
The age of digital technology, in which we can search and retrieve more information than we could in any previous era, has triggered a debate over whether we have too much information. Is the cure to “unpublish” things we think are wrong or out of date? Ought we have a “right to be forgotten”?

Until recently, this was an argument conducted in Europe and South America and given a powerful push by a decision in 2014 from the European Union’s highest court to provide a legally enforceable right to remove some material from internet searches.

Now the issue has reached American newsrooms. The dilemma is simple to describe and painfully hard to solve. People who have had long-ago brushes with the law or bankruptcy would prefer such information not to be at the top of search results on their name. Foolish pranks immortalised on Facebook may be harming someone’s chances of getting a job.

American editors are now getting so many requests to erase or unlink online material that they’ve been consulting pundits and lawyers for help. American media law, based around the First Amendment guaranteeing press freedom, is very different to European law.

But the development of the EU’s right to be forgotten is a poor precedent for the US or anywhere else. The European version of the right to be forgotten – really a conditional right to be taken out of internet searches – is carelessly written, based on muddled ideas and contains risks for free expression.

The “right to be forgotten” is an emblematic battle at the new frontier between privacy and freedom – both of speech and the right to know. It is a case study of the dilemmas which we will face. Who gets to decide whether free speech or privacy prevails in any given case? And on what criteria?

Gonzales’ gripe

In 2009 a Barcelona resident, Mario Costeja Gonzales, complained to Google that a search for his name produced – at the top of the first page – a newspaper item from 1998 which recorded that some of his property had been sold to pay debts. It was given unfair prominence and was out of date, said Sr Gonzales. He asked La Vanguardia, the newspaper, to erase the item. Both search engine and newspaper rejected his complaint.

The case went to court. The court ruled out any action against the paper but referred the question of the search link to the EU’s Court of Justice. In 2014, the court said that Sr Gonzales did indeed have a right to ask Google to de-index items which would be produced by a search on his name – under certain conditions (and there’s a degree of irony that he fought a battle over the right for this small story to be forgotten only to become a global cause célèbre over the issue).

And the conditions are the heart of the matter. Google routinely de-indexes material from search results: copyright infractions (by the million), revenge porn, details of bank accounts or passport numbers. The court said that search results could be incompatible with the EU’s data protection directive and must be removed if:

… that information appears … to be inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes of the processing at issue carried out by the operator of the search engine.

The judges went on to say that, as a rule, the individual’s “data” or privacy rights outrank the search engine’s commercial interest or the public’s right to know. But that would not be the case if the public had a “preponderant interest” in the information – as would be the case if the individual was in public life.

You might say, what could be more natural than this? The internet has unleashed a flood of stuff: we must have some way of protecting ourselves from the obvious harm it can cause. Carefully, transparently and accountably done, it does not have to amount to “censorship” – the claim from many voices when the judgement first appeared.

Google has so far processed 566,000 requests covering some 1.72 million URLs. Press freedom and free expression were never absolute – we allow some criminal convictions to be forgotten, we have libel and contempt of court laws. All restrain publication.

The problem lies with much data protection law – principally in the EU – which fails to balance the competing rights. The court judgement’s tests for whether something ought to be de-indexed are vague and opaque. How do we test for the relevance of information? Relevant to whom? When does information go out of date?

The case wasn't about defamation: no one claimed Sr Gonzales had been libelled. It was not about correcting inaccuracy. It wasn't private: the information had been made public quite legally. The court made clear that a successful claim did not have to show that harm or distress had been caused.

Muddling through

The intellectual origins of data protection law lie in the traumas of 20th-century Europe. The Dutch government in the 1930s recorded with characteristic thoroughness the details of every one of their citizens: name, age, address and so forth. So when Nazi Germany occupied the Netherlands all they had to do to locate the Jewish and gypsy populations was open the filing cabinets. The secret police of communist states in the second half of the century and their carefully filed surveillance reinforced the lesson that secretly stored data can inflict damage.

The “right to be forgotten” is a muddled solution and fails to clarify a specific remedy for a particular problem. Here are a few of the issues which we are going to have to deal with:

Although the Gonzales case struck the compromise of leaving the online newspaper archive untouched while stopping search engines from finding it, we now have two cases – in Italy and Belgium – where courts have ordered news media archives themselves to be altered.

Google’s chief privacy counsel once said that his company is creating new jurisprudence about privacy and free speech. What he didn’t say is that Google is doing all this virtually in secret. Its decisions can be challenged in court by a litigant with money and patience, but should a private corporation be doing this at all?

There is a major unsolved problem about how far the right to be forgotten reaches. The French government thinks that it should be global, which is disproportionate as well as unfeasible.

What’s to be done?

The market isn’t providing ways to protect privacy – and individuals often part with their information barely knowing that they have surrendered some privacy. But the history of free expression has surely taught us that we should be very cautious about restrictions. If you want an alternative to the sweeping tests in EU law, have a look at the stiff tests laid out by the free speech organisation Article 19. Judges in several EU countries – notably the Netherlands – have tightened the tests for allowing material to be delinked.

EU law needs to recognise that privacy and free expression are matters of colliding rights which can’t be wished away by pretending that there’s no conflict. Collisions of basic rights can’t be abolished – they can only be managed.

The Gonzales judgement didn’t start the right to be forgotten but it did bring it to the attention of the world. It did some good by correcting thousands of small harms. But because it addressed the rights involved in such a muddled and careless way, it opened up risks to freedom of speech. The judges of the future need to do better.

Author:  George Brock

Source:  http://www.econotimes.com/

Categorized in Internet Privacy

In my last post, I reported on a press call by Senator Markey and a group of activists in support of the FCC’s Internet Privacy NPRM. I found the call extremely unhelpful because of significant factual gaps and errors in the story the activists told, but my account may not have been all that clear. So I’d like to focus on some of the specific claims the boosters made and why they aren’t factual.

The General Claim

At a high level, the promoters of the privacy NPRM claim these new regulations will protect user privacy. This can only be true if the regulated firms have information that none of the Internet’s incumbent advertising information brokers already collect and sell. The comments I filed with the FCC on the issue explain why this is not the case. The chart I made for the FCC comments shows this reasonably well.

[Chart: Privacy Taxonomy]

In this chart, “CII” is information known to websites and advertising networks as well as to ISPs, and “CNNI” is information visible to web services but not to ISPs.

The bottom line is that advertising networks have access to more of our personal information than ISPs have because the data they see is unencrypted and extremely detailed. So no, the NPRM is not going to prevent a single meaningful bit of information about our web surfing behavior from circulating. What it will do is limit the number of firms that can sell this information to advertisers from 12 to 2: Facebook and Google will still collect this information from the beacons and ads they place all over the web but ISPs will not be allowed to compete with them.

The Sensitivity Canard

Laura Moy, an attorney who works for the Open Technology Institute, claims that users will only be required by the FCC to give affirmative consent to ISPs to collect “sensitive information.” The FCC makes the same claim. But the FCC has decided to classify nearly every website visit and application launch as a “sensitive event.” As I explained in the previous post:

Laura Moy, an activist affiliated with the predictably pro-advertiser Open Technology Institute, regards each bit of information seen by the ISPs as “communication” that warrants strict protection, even though that very same data would be considered non-sensitive data after it’s stored in website data centers. Consequently, Moy (and the FCC) seek to enact differential sensitivity classifications depending on how a given advertising merchant comes to possess the very same information.

And as Giuseppe Macri explained in Inside Sources, the FCC considers virtually all behavior sensitive:

“Our advocacy on this issue has been that all of it is sensitive,” Moy said, describing how seemingly innocuous metadata can be compiled over a broad view (which some argue providers have, though experts dispute this point) to create a detailed profile on a subscriber.

Macri cites the Peter Swire report that carefully distinguishes what ISPs can see and what they can't see because of encryption. In the past, privacy advocates recommended that users and websites concerned about privacy should encrypt their data. But now that most web traffic is encrypted, those who claim to speak for privacy rights have simply changed their story. They now maintain that encryption has no effect since it doesn't prevent ISPs from seeing that users visit Google and Mayo Clinic, for example, even though it does prevent them from seeing search terms and actual pages visited.
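As a rough sketch of that distinction (the URL and the helper function are illustrative, not from the Swire report), the hostname of an HTTPS request is typically observable by an ISP via DNS lookups and the TLS Server Name Indication field, while the path and query string travel inside the encrypted channel:

```python
from urllib.parse import urlsplit

def isp_view(url):
    """Split a URL into the part an ISP can typically observe for an
    HTTPS request (the hostname) and the part carried encrypted inside
    TLS (the path plus any query string)."""
    parts = urlsplit(url)
    hidden = parts.path
    if parts.query:
        hidden += "?" + parts.query
    return parts.hostname, hidden

visible, hidden = isp_view("https://www.google.com/search?q=rare+disease")
print(visible)  # www.google.com: the ISP learns only the destination
print(hidden)   # /search?q=rare+disease: hidden from the ISP
```

The same split captures the Mayo Clinic example: the ISP can see a visit to the site, but not which condition was looked up.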

Is the fact that a web user does Google searches significant information, let alone sensitive information?

Influencing Retail Sales

Obviously not. But if a user goes to Google and then goes to a medical site and then goes to a doctor's site, it might be possible to know that the user is ill. It might also mean that the user has decided to go to the doctor for a checkup, or the user is scheduling a family member for a doctor visit, or it might mean that the user is curious about an ailment that may afflict a friend. This sequence of events is not enough to determine, say, that the user has a terminal disease and will soon be shopping for coffins and funeral plots.

But what if the user goes to Google, Mayo Clinic, and Walgreen’s? That would simply tell us the user is in the market for an over-the-counter drug and doesn’t have a serious illness.

This would be a good time to pitch drug stores to the user, but the ISP doesn’t have a monopoly on this information. Walgreen’s, Google, and Facebook would know this too, and they would also know if the user shops Walgreen’s for NyQuil while the ISP would not.

Uneven Regulation

Which parties hold the most sensitive information and which are most heavily regulated?

In the first scenario, where a doctor appointment is booked, Google knows about the booking if the user searches for the doctor’s web site by entering the doctor’s or hospital’s name in a search. Google also knows the details of the appointment if the user is on Google Calendar or Gmail. So Google will often know more than the ISP because the act of making the appointment is encrypted, as are the Calendar and Gmail use.

Google is permitted to sell this information – to offer the user up in a group of people who have gone to a doctor for treatment of the disease for which this user searched – without restriction. The ISP can't do that because it can't know these details, and even if it did, it couldn't, because Google is governed by the FTC privacy framework and the ISP is governed by the much more strict FCC framework. So we have uneven and inconsistent regulation.

Expansive Concept of Privacy

In the second case, where the user is shopping for over-the-counter drugs, Google is freely able to sell its view of the user to advertisers without depersonalization or user permission. If the proposed regulation were consistent, the ISP would also be able to sell this shopping information, but under the FCC’s proposed rules it can’t.

The FCC considers a visit to a web site – all visits to all sites – to be sensitive information. This is clearly absurd, as members of Congress realize. This is why Ranking Member Pallone proposes to change the FTC Act so that it would apply the same standards to Google that the FCC proposes to apply to the ISPs.

Moy refuses to admit the inconsistency. This refusal is, at best, a failure of analysis. If a given fact – that a group of users has searched Walgreen’s after visiting Google and Mayo Clinic – is sensitive in the hands of one party, it is sensitive in the hands of all parties.

It’s reasonably simple to craft regulations for ISPs and Internet behavior that are consistent regardless of the status of the firm collecting and selling this information.

If the FCC examines a number of such scenarios in detail, I’m confident it can come to a sober, fair, and rational conclusion. All that it needs to do in this proceeding is to harmonize its rules with those of the FTC, taking into account the nature of the information, not the status of the data broker.

This isn’t really difficult.

Source : hightechforum


For years, researchers have discussed how the "anonymizing" various companies claim to perform on the data they gather is poor and can be easily reversed. Over the last few years, we've seen multiple companies respond to these problems by refusing to continue anonymizing data at all. Verizon kicked things off, but Vizio has gone down this route as well, and now we know Google has — or, at the very least, has reserved the right to do so.

According to an investigation at ProPublica, Google has quietly changed its privacy policy at some point over the last few months. When Google bought the advertising firm DoubleClick a few years back, it promised to keep all data gathered by DoubleClick sandboxed and isolated from the data it gathered elsewhere and used for other Google services. The company has since changed the wording of its privacy policy, as shown below:

[Image: Google's privacy policy wording, before and after the change. Image by ProPublica]

Google has stated it doesn’t use the information gleaned from Gmail scanning to target ads to specific people, but it’s not clear what this means for its other services. Google tracks a great deal of information and its email keyword scanning is just one business area. Previously, Google’s privacy policy contained a hard line of what it would and would not do. Google has replaced that flat guarantee with a weasel-word “depending on your settings” statement that hides behind the word “may.”

Speaking of those settings, Google does have a "Privacy Checkup" tool that you can use to hide certain data from being tracked or gathered. It's generally well designed, save for one major exception, shown below. Play a game with yourself if you like — see if you can spot the problem before you read further:

[Image: Google Privacy Checkup settings screen]

This is a perfect example of what’s known as a dark pattern. A dark pattern is a pattern designed to trick you into choosing the “right” option, where “right” is defined as “What the company wants you to pick,” as opposed to what you actually want. In this case, boxes are checked by default and you uncheck them to hide information. But if you uncheck the box labeled “Don’t feature my publicly shared Google+ photos as background images on Google products & services,” you’re actually giving Google permission to use your name and profile to advertise products. Google flipped the meaning of the checkbox to make it more likely that someone not reading carefully would click the wrong option.
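The inverted logic can be sketched in a couple of lines (a toy model paraphrasing the checkbox's wording, not Google's actual code):

```python
# Toy model of the double-negative checkbox described above.
# The box reads "Don't feature my publicly shared Google+ photos...",
# so TICKING it opts the user out, while leaving it UNTICKED opts
# the user in. Someone hastily unchecking every box "to hide
# information" flips this one the wrong way and grants permission.
def google_may_use_photos(dont_feature_box_checked: bool) -> bool:
    return not dont_feature_box_checked

print(google_may_use_photos(True))   # permission withheld
print(google_may_use_photos(False))  # permission granted
```

A setting phrased positively ("Feature my photos...") would make the checked state and the granted permission point the same way; the double negative is what does the tricking.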

But what’s really interesting to me is that the word “Don’t” is bolded. You bold something you want to draw attention to — and that’s pretty much the opposite of how a dark pattern works. Huge organizations are much less monolithic than they appear from the outside, and I suspect that what we see here is a tale of two opinions, played out in a single checkbox. By reversing what checking the option does, Google made it more likely that you would give it permission to use your personal likeness and data for advertising. By bolding the word “Don’t,” Google made it more likely that you’d realize what the box did and set the setting appropriately.

In any case, Google's decision to stop anonymizing data ought to be serious news, but there's not much chance people will treat it that way. To date, people have largely been uninterested in the ramifications of giving corporations and governments 24/7 permission to monitor every aspect of their lives, even when it intrudes into private homes or risks chilling freedom of speech.

Source : extremetech


Following the launch of Google Allo, it was learned this week that the company’s smart messaging app may be a cause for concern when it comes to privacy.

Allo, an app which Google claims has privacy in mind, keeps all messages indefinitely until they are manually deleted. While that may not matter to some, there are others who aren’t comfortable with their correspondence being saved forever.

The reported reason behind Google’s decision to have Allo store messages permanently has to do with the Smart Reply function. It is thought that the technology will work better if it has a longer backlog of conversation history to draw from.

Allo's approach to storing messages actually sets it apart from its competitors. While other messaging apps have privacy functions turned on by default, Allo is instead transparent about the fact that it's storing your messages from day one.

Users will have full control over how long the data stays on Google servers, with the option to delete entire conversations or just single messages. As another option, people can use Incognito Mode, which offers end-to-end encryption.

Over time we'll see if the trade-off in privacy is worth having Smart Reply accurately predict what you're going to say next.

Source : https://www.searchenginejournal.com


A new London-based search engine, Oscobo, has just launched, promising an anonymous search experience on a platform that won't sell or store user information.

Having spent 12 years working at Yahoo, co-founder Fred Cornell says he has seen for himself how the search engine industry harvests user data for financial gain.

Cornell was inspired to start Oscobo after growing uncomfortable with the lack of user privacy offered by the leading search engines. He argues more data is being collected than is needed, and people are starting to become more concerned about how that data is used.

The privacy search market is growing at a faster rate than the regular search market, Cornell says, likely referring to the successes of DuckDuckGo over the past year. Just recently it was reported that DuckDuckGo grew 70% over 2015, and this past summer it reached the milestone of 10 million searches per day.

Oscobo aims to be the UK’s answer to DuckDuckGo — a privacy-based search engine built for the UK market. While anyone can use Oscobo, at this time it is built to deliver results for a UK audience. Throughout 2016 the company will roll its search engine out to more countries, along with country-specific search settings for those countries.

At this time, Oscobo does not have any of its own search technology. Instead, it is licensing its search index from Bing/Yahoo. This is an indication that Oscobo does not intend to compete on tech, but rather on its ability to offer a more private searching experience.

The privately-funded company intends to make money through what it describes as simple paid search. Its paid search ads will rely on the most basic search data — what a person types into the query box.

Being London-based and storing no user data, Oscobo claims a unique advantage in the privacy search market: there is nothing it can be forced to hand over. US-based search engines can be compelled to provide data on their users to law enforcement.

Oscobo is live and available to use today at Oscobo.co.uk.

Source : https://www.searchenginejournal.com/oscobo/153341/


Swiss-based semantic search company Hulbee, which launched a consumer search engine in the U.S. this August, has closed a $9 million angel funding round.

The investors are not being disclosed beyond the firm saying one is a serial entrepreneur from Switzerland and the other is a business person from Canada.

Hulbee is positioning its consumer search offering as a pro-privacy alternative to mainstream search engines like Google, with a pledge that unlike those guys it does not track users. So it’s competing with other search players in the pro-privacy space, such as DuckDuckGo.

Although, unlike DDG, it has its own (semantic) search tech too — which it's touting as another differentiator, along with a "clean interface", and search results supplemented by a word cloud of related themes/content that allow users to narrow their search with a few considered clicks.

[Image: Hulbee search results page with word cloud]

It also has its own ad system, rather than bolting on a third party ad network. And again here it’s taking a non-tracking approach. Ads on Hulbee are targeted based on the search query, according to CEO Andreas Wiebe, so there’s no geotargeting or cumulative tracking. (Although users can specify their region in order to ensure more relevant search results, so it may have basic country data. And once you step off Hulbee and onto whatever website you were trying to find chances are their ad networks will start tracking you, unless you’re running an ad blocker…)

“Unlike Google’s offering, Hulbee doesn’t fall back on surveillance, so there’s no geotargeting. For Hulbee, the user is completely invisible,” says Wiebe. “Hulbee only focuses on the search query, and definitely doesn’t know where it’s from or who entered it.”

“The fundamental idea… is to win over consumers who prioritize ownership of their data. We recognize that most consumers do not want to be tracked,” he adds.

Such a partial view of the user does not lend itself to highly targeted ‘interest-based advertising’ — so Hulbee is also focusing on touting a brand-building proposition to advertisers (hence the Coca-Cola graphic in the word cloud, above right).

“Unlike traditional search engines, we don’t focus on highly focused targeting, but instead specialize in ‘mass informing’ of our visitors, including image, brand name, event advertising. Thus, we obviously will be interested, for example, in global companies launching a new brand or product, such as the film industry promoting the new movies or an event tie-in,” says Wiebe.

“We’re dealing with fairly sophisticated visitors. Although we do not track and don’t ‘know’ our visitors, we can say with certainty that our user is a person following modern trends in such areas as information security, privacy, etc. That user is concerned about their own privacy, weighing the aspects of their web activity and understanding the consequences and risks of certain actions.”

As well as aiming to appeal to individuals with concerns about their privacy, the search engine is being targeted at parents with concerns about the kind of content their kids might be exposed to online — given it has a built-in filter for violent and pornographic content.

Hulbee is not a startup, having spent 15 years working on semantic search for the b2b space, and selling enterprise-grade search and data analytics to European companies. But it is relatively new to the consumer space — launching a Swiss search engine, called Swisscows.ch, in June 2014 as a first step.

In these post-Snowden tech times, it reckons there’s a fresh opportunity to differentiate on privacy and security grounds vs dominant consumer search players (Google has a circa 90 per cent share of the search market in Europe). And notes, for instance, that its servers are located in Switzerland, so away from the prying eyes of the NSA — or indeed the European Union.

The angel funding will specifically be used to expand its consumer search engine, according to Wiebe. “We have a big mountain to climb with a lot of competitors,” he admits. “[We’ll use the] money to continue to building and develop our search engine for consumers.”

After launching its consumer search engine in the U.S. this summer it added 30 more markets in September, and is now available in 60 countries. It’s not breaking out user data at this stage but says Swisscows.ch is processing more than five million queries per month, while Hulbee.com is processing more than eight million search queries monthly.

The company is also planning to step up its enterprise search activity, with the launch of an enterprise search product specifically targeted at medium and small companies planned for later this month, and an enterprise search engine that aims to compete with Microsoft, Google and HP slated for November.

Source : https://techcrunch.com/2015/10/07/hulbee-angel-round/


Imagine a criminal breaks into your home but doesn't steal anything or cause any damage. Instead, they photograph your personal belongings and valuables and later that day hand-deliver a letter with those pictures and a message: "Pay me a large sum of cash now, and I will tell you how I got in."

Cybercriminals are doing the equivalent of just that: Hacking into corporations to shake down businesses for upward of $30,000 when they find vulnerabilities, a new report from IBM Security revealed.

The firm has traced more than 30 cases over the past year across all industries, and at least one company has paid up. One case involved a large retailer with an e-commerce presence, said John Kuhn, senior threat researcher at IBM Security.


Though some companies operate bug bounty programs — rewarding hackers for revealing vulnerabilities — in these cases, the victims had no such program.

"This activity is all being done under the disguise of pretending to be a "good guy" when in reality, it is pure extortion," said Kuhn.

Researchers have dubbed the practice "bug poaching."

Here's how it typically works. The attacker finds and exploits web vulnerabilities on an organization's website. The main method of attack — known as SQL injection — involves the hacker injecting code into the website's database queries, which allows them to download the database, said Kuhn.
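To make the mechanism concrete, here is a minimal, self-contained sketch (the table, data, and payload are invented for illustration, not taken from any real case) of how concatenating user input into a query lets an attacker dump a table, and how a parameterized query closes the hole:

```python
import sqlite3

# Invented example data standing in for a victim site's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

payload = "' OR '1'='1"  # classic injection string typed into a form field

# VULNERABLE: user input concatenated straight into the SQL text.
# The payload rewrites the WHERE clause into a tautology, so the
# query returns every row in the table.
leaked = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'"
).fetchall()
print(leaked)  # [('alice',)] despite no user having that name

# SAFE: a parameterized query treats the payload as literal data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()
print(safe)  # [] since no user is literally named "' OR '1'='1"
```

The same pattern scales up: instead of one name, a real attacker walks the injection through every table to pull down the whole database, which is the "proof" attached to the extortion email.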


Once the attacker has obtained sensitive data or personally identifiable information, they pull it down and store it, then place it in a cloud storage service. They then send an email to the victim with links to the stolen information — proof they have it — and demand cash to disclose the vulnerability or "bug."

Though the attacker does not always make explicit threats to expose the data or attack the organization directly, there is no doubt of the threatening nature of the emails. Hackers often include statements along the lines of, "Please rest assured that the data is safe with me. It was extracted for proof only. Honestly, I do this job for living, not for fun," said the report.

"This does not negate the fact that the attacker stole the organization's data and placed it online, where others could potentially find it, or where it can be released," said Kuhn.

Trusting unknown parties to secure sensitive corporate data — particularly those who breached a company's security systems without permission — is inadvisable, said Kuhn. And, of course, there are no guarantees when dealing with these criminals, so even when companies pay up, there is still a chance the attacker will just release the data.


Organizations that fall victim to this type of attack should gather all relevant information from emails and servers and then contact law enforcement, said Kuhn.

Here are some measures companies can take to avoid becoming a victim, according to IBM Security:

1. Run regular vulnerability scans on all websites and systems.
2. Do penetration testing to help find vulnerabilities before criminals do.
3. Use intrusion prevention systems and web application firewalls.
4. Test and audit all web application code before deploying it.
5. Use technology to monitor data and detect anomalies.

Source:  http://www.cnbc.com/2016/05/27/the-disturbing-new-way-hackers-are-shaking-down-big-business.html



Washington, DC: A new report from the Federal Trade Commission (FTC) shows that data breach complaints are on the rise. In the report, the Consumer Sentinel Network Data Book (2/16), the FTC notes that complaints about identity theft increased 47 percent in 2015, likely helped by a number of high-profile data breaches. Consumers have filed lawsuits against companies they allege have failed to adequately protect their personal, confidential information.


Data breaches frequently occur when unauthorized third parties gain access to personal information. Hackers exploit vulnerabilities in computer systems to access information such as bank accounts, health records, Social Security numbers, addresses, tax information and passwords. Making the situation more concerning, a report from Javelin Strategy & Research (2/2/16) notes that identity thieves have stolen around $112 billion in the past six years, the equivalent of around $35,600 per minute.

According to the FTC, identity theft was the second-highest complaint category, falling behind debt collection. Among identity theft complaints were tax- or wage-related fraud, credit card fraud, phone or utilities fraud, and bank fraud.

“Nearly half a million complaints sends a clear message: more needs to be done to protect consumers from identity fraud,” said National Consumers League Executive Director Sally Greenberg. “One of the key drivers of the identity theft threat is the continuing flow of consumers’ personal information to fraudsters thanks to the ongoing epidemic of data breaches.”

Meanwhile, New York State Attorney General Eric T. Schneiderman has also indicated that data breaches are increasing. A news release issued by the Attorney General (5/4/16) notes that his office has received more than 40 percent more data breach notifications so far in 2016, compared to the same time span in 2015. From January 1 to May 2, 2016, the Attorney General’s office received 459 data breach notices, compared with 327 in the same period of 2015.

An earlier report issued by the New York Attorney General’s office found that hacking intrusions - where third parties gain unauthorized access to data stored on computers - were the number-one cause of data security breaches.


Consumers have filed lawsuits against companies accused of not properly storing or securing customer information. In April, an appeals court reinstated a lawsuit filed against P.F. Chang’s, which alleged the restaurant chain was responsible for a massive data breach. Although the lawsuit was dismissed by a lower court, with the judge finding the plaintiffs did not show actual harm, according to The National Law Journal (4/15/16), a federal appeals court reinstated the lawsuit, finding the plaintiffs had shown plausible injuries.

Possible compensation the plaintiffs could be entitled to includes the cost of credit-monitoring services, unreimbursed fraudulent charges and lost points on a debit card.

The lawsuit is Lewert et al. v. P.F. Chang’s China Bistro, No. 14-3700, in the US Court of Appeals for the Seventh Circuit.

Source:  https://www.lawyersandsettlements.com/articles/data-breach/federal-trade-commission-ftc-javelin-strategy-21469.html?utm_expid=3607522-13.Y4u1ixZNSt6o8v_5N8VGVA.0&utm_referrer=https%3A%2F%2Fwww.lawyersandsettlements.com%2Flegal-news-articles%2Finternet-technology-news-articles%2F

Categorized in Science & Tech

As the number of reported data breaches continues to blitz U.S. companies — over 6 million records exposed already this year, according to the Identity Theft Resource Center — IT budgets are ballooning to combat what corporations see as their greatest threat: faceless, sophisticated hackers from an outside entity.

But in reality, a bigger danger to many companies and to customers' sensitive data comes from seemingly benign faces inside the same companies that are trying to keep hackers out: a loan officer tasked with handling customers' e-mail, an attendant at a nursing home, a unit coordinator for the main operating room at a well-regarded city hospital.

According to Verizon's 2015 Data Breach Investigations Report, about 50 percent of all security incidents — any event that compromises the confidentiality, integrity or availability of an information asset — are caused by people inside an organization. And while 30 percent of all cases are due to worker negligence like delivering sensitive information to the wrong recipient or the insecure disposal of personal and medical data, roughly 20 percent are considered insider misuse events, where employees could be stealing and/or profiting from company-owned or protected information.

Often, that translates to employees on the front lines stealing patient medical data or client social security numbers, which can then be sold on the black market or used to commit fraud like collecting someone else's social security benefits, opening new credit card accounts in another's name, or applying for health insurance by assuming the identity of someone else.

"The Insider Misuse pattern shines a light on those in whom an organization has already placed trust," Verizon said in the report. "They are inside the perimeter defenses and given access to sensitive and valuable data, with the expectation that they will use it only for the intended purpose. Sadly, that's not always the way things work."

For the first time since 2011, Verizon found that it's not cashiers who are behind most insider attacks, but "insider" end users — essentially anyone at a company other than an executive, manager, finance worker, developer or system administrator — who carry out the majority of such acts. Most are motivated by greed.

"Criminals have a different motivating factor," said Eva Velasquez, CEO and president of Identity Theft Resource Center, a non-profit charity that supports victims of identity theft. "There are a number of jobs that pay minimum wage where individuals have access to this type of information, and so the incentive may be 'this isn't a job that is paying me enough to support myself.'"

Velasquez cites workers in an assisted living facility tasked with caring for patients, a job in close proximity to medical records that can be accessed with a few keyboard taps. According to the Bureau of Labor Statistics, such healthcare support occupations see mean annual wages hovering around $25,000, a salary that might make workers more vulnerable to stealing for personal gain, or, maybe worse, to acting as a conduit for an organized crime ring looking to make big money by selling or manipulating stolen personal data.

According to the Verizon report, the public sector, health care and financial services — like credit card companies, banks, and mortgage and lending firms — were the industries hit hardest by insider incidents in 2015.

In one recent case, a Baltimore man is facing federal charges of identity theft and bank fraud after he used the personal information of at least three nursing home residents to open multiple credit card accounts without their permission. A former employee of Tufts Health Plan pleaded guilty to stealing names, birth dates and Social Security numbers that were eventually used to collect Social Security benefits and fraudulent income tax refunds. And a former assistant clerk at Montefiore Medical Center in New York was indicted in June 2015 for printing thousands of patients' records daily and selling them; the information was eventually used to open department store credit cards at places like Barneys New York and Bergdorf Goodman, causing an estimated more than $50,000 in fraud, according to the New York County District Attorney's Office.

While the number of breaches and hacks by outsiders has skyrocketed since 2007 in tandem with the surging digitization of information, the occurrence of insider jobs can be a read on the overall economy. It tends to peak during recessions and drop off when times are good, according to the Identity Theft Resource Center. In 2009, the percentage of insider attacks hit a high of roughly 17 percent; after a three-year slide, the amount today (about 10 percent) is slowly creeping back up.

"When the economy isn't doing well, you'll see people that are feeling stressed and taking advantage of opportunities they might not take advantage of otherwise," said attorney James Goodnow from the Lamber Goodnow team at law firm Fennemore Craig.

With the defining characteristic of an internal breach being privilege abuse — employees exploiting the access to data that they've been entrusted with — the best way to mitigate such attacks is to limit the amount of information allotted to workers.
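Limiting access this way is the principle of least privilege. A toy sketch of a deny-by-default, role-based allow-list (the roles and record fields here are invented for illustration, not taken from the report):

```python
# Map each role to the only record fields it is allowed to read.
# Deny by default: unknown roles and unlisted fields get nothing.
ROLE_FIELDS = {
    "loan_officer": {"name", "email", "loan_balance"},
    "attendant":    {"name", "room", "care_notes"},
    "billing":      {"name", "ssn", "insurance_id"},
}

def can_read(role: str, field: str) -> bool:
    return field in ROLE_FIELDS.get(role, set())

print(can_read("attendant", "care_notes"))  # True
print(can_read("attendant", "ssn"))         # False: not their job
```

The point is the default: access is granted per field, per role, rather than handing every employee the whole record "so there's no friction."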

"As business processes have started to rely more on information and IT, the temptation, the desire is to give people access to everything [because] we don't want to create any friction for users to do their jobs," said Robert Sadowski, director of marketing and technology solutions at security firm RSA.

Terry Kurzynski, senior partner at security firm Halock Security Labs, said that smart entities perform enterprise-wide risk assessments to find where their systems are most vulnerable and to spot aberrations in user behavior.
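Spotting "aberrations in user behavior" often comes down to comparing a user's activity against their own historical baseline. A minimal sketch, assuming daily record-access counts are already being logged (the numbers and the z-score threshold are illustrative assumptions):

```python
from statistics import mean, stdev

def flag_aberration(daily_counts, today, z_threshold=3.0):
    """Flag a user whose record-access count today sits far above
    their own historical baseline (a simple z-score check)."""
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return today > mu  # no variance at all: any increase stands out
    return (today - mu) / sigma > z_threshold

# A clerk who normally views ~20 records a day suddenly pulls 400:
history = [18, 22, 19, 21, 20, 23, 17]
print(flag_aberration(history, 400))  # True  -- worth investigating
print(flag_aberration(history, 21))   # False -- within normal range
```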

But sophisticated analytics does little to assuage situations where employees are using low-tech methods to capture information. "Most systems will not handle the single bank employee just writing down on paper all the bank numbers they see that day — that's difficult to track," said Guy Peer, a co-founder of security firm Dyadic Security.

Clay Calvert, director of cybersecurity at IT firm MetroStar Systems, said communication with employees in a position to turn rogue is key. "That's a big deterrent in identity theft cases; if an employee feels like the company cares for them, they're less likely to take advantage of the situation."

Hackers hiding in plain sight

Preventing the display of sensitive data in plain sight — say an employee seeing a confidential record as they walk by a colleague's computer — is the focus of Kate Borten, founder of Marblehead Group consultancy and a member of the Visual Privacy Advisory Council. She recommends companies institute a clean desk policy (ensuring that workers file away papers containing customer data before they leave their desk), implement inactivity time outs for any tech devices, and switch to an e-faxing system, which eliminates the exposure of sensitive patient data on paper that's piled up around traditional fax machines.

Experts also say that tougher penalties for and more prosecution of inside hackers would also be a disincentive for such crimes. "On a general level, there can be practical barriers to pursuit of a criminal case, such as the victim company's fear of embarrassment, reputational damage, or the perceived risk — real or not — that their trade secrets will be exposed in a court proceeding," said Brooke French, shareholder at law firm Carlton Fields.

But she added, "The DOJ and local authorities prosecute these cases all the time, despite what are seen as common barriers. The barriers are low when the actions are clearly wrong, such as a hospital employee stealing electronic medical records and selling them on the black market."

While the price tag for stolen information on the black market can translate to a lucrative sales career for some crooked employees, it's a costly phenomenon for the organizations involved, which often discover the theft only "during forensic examination of user devices after individuals left a company," said Verizon.

That's usually too late to enact damage control. According to the Ponemon Institute, the average cost of a breach is $217 per record.
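At that average, the hard costs scale linearly with the number of records exposed; a quick back-of-the-envelope estimate (the 10,000-record figure is just an example):

```python
# Ponemon Institute's average hard cost of a breach: $217 per record
COST_PER_RECORD = 217

def breach_hard_cost(records_exposed: int) -> int:
    """Notification, remediation and similar direct costs only --
    excludes reputational damage."""
    return records_exposed * COST_PER_RECORD

print(breach_hard_cost(10_000))  # 2170000 -- about $2.2 million
```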

"That's just the hard costs, what you have to pay for notifying customers or any type of remediation services," said Velasquez. "The bigger, broader cost is the reputational damage that shows itself not just to the entity that suffers the damage, but to the industry."

Source:  http://www.cnbc.com/2016/05/13/a-surprising-source-of-hackers-and-costly-data-breaches.html

Categorized in Internet Privacy

Finally ready to get off the grid? It's not quite as simple as it should be, but here are a few easy-to-follow steps that will at the very least point you in the right direction.

If you're reading this, it's highly likely that your personal information is available to the public. And while you can never remove yourself completely from the internet, there are ways to minimize your online footprint. Here are five ways to do so.

Be warned, however: removing your information from the internet as I've laid out below may adversely affect your ability to communicate with potential employers.

1. Delete or deactivate your shopping, social network, and Web service accounts

Think about which social networks you have profiles on. Aside from the big ones, such as Facebook, Twitter, LinkedIn and Instagram, do you still have public accounts on sites like Tumblr, Google+ or even MySpace? Which shopping sites have you registered on? Common ones might include information stored on Amazon, Gap.com, Macys.com and others.

To get rid of these accounts, go to your account settings and just look for an option to either deactivate, remove or close your account. Depending on the account, you may find it under Security or Privacy, or something similar.

If you're having trouble with a particular account, try searching online for "How to delete," followed by the name of the account you wish to delete. You should be able to find some instruction on how to delete that particular account.

If for some reason you can't delete an account, change the info in the account to something other than your actual info. Something fake or completely random.
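Python's `secrets` module is one way to generate that throwaway data; a sketch under the assumption that the account stores a name, email and phone number (the field names are hypothetical):

```python
import secrets
import string

def random_string(n=12):
    """A random alphanumeric string, unguessable enough for this job."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(n))

def scrubbed_profile():
    """Throwaway values to overwrite a profile you can't delete.
    (Field names are hypothetical; match your actual account.)"""
    return {
        "name": random_string(),
        "email": f"{random_string(16)}@example.com",  # reserved test domain
        "phone": "".join(secrets.choice(string.digits) for _ in range(10)),
    }

print(scrubbed_profile())
```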

Using a service like DeleteMe can make removing yourself from the internet less of a headache.

2. Remove yourself from data collection sites

There are companies out there that collect your information. They're called data brokers, and they have names like Spokeo, Crunchbase, PeopleFinder and plenty of others. They collect data from everything you do online and then sell that data to interested parties, mostly in order to advertise to you more specifically and sell you more stuff.

Now you could search for yourself on these sites and then deal with each site individually to get your name removed. Problem is, the procedure for opting out from each site is different and sometimes involves sending faxes and filling out actual physical paperwork. Physical. Paperwork. What year is this, again?

Anyway, an easier way to do it is to use a service like DeleteMe at Abine.com. For about $130 for a one-year membership, the service will jump through all those monotonous hoops for you. It'll even check back every few months to make sure your name hasn't been re-added to these sites.

3. Remove your info directly from websites

First, check with your phone company or cell provider to make sure you aren't listed online and have them remove your name if you are.

If you want to remove an old forum post or an old embarrassing blog you wrote back in the day, you'll have to contact the webmaster of those sites individually. You can either look at the About us or Contacts section of the site to find the right person to contact or go to www.whois.com and search for the domain name you wish to contact. There you should find information on who exactly to contact.

Unfortunately, private website operators are under no obligation to remove your posts. So, when contacting these sites be polite and clearly state why you want the post removed. Hopefully they'll actually follow through and remove them.

If they don't, tip number four is a less effective, but still viable, option.

4. Delete search engine results that return information about you

Search engine results include sites like Bing, Yahoo and Google. In fact, Google has a URL removal tool that can help you delete specific URLs.

Google's URL removal tool is handy for erasing evidence of past mistakes from the internet.

For example, if someone has posted sensitive information such as a Social Security number or a bank account number and the webmaster of the site where it was posted won't remove it, you can at least contact the search engine companies to have it removed from search results, making it harder to find.

5. And finally, the last step you'll want to take is to remove your email accounts

Depending on the type of email account you have, the number of steps this will take will vary.

You'll have to sign into your account and then find the option to delete or close the account. Some accounts will stay open for a certain amount of time, so if you want to reactivate them you can.

An email address is necessary to complete the previous steps, so make sure this one is your last.

One last thing...

Remember to be patient when going through this process. Don't expect it to be completed in one day. And you may also have to accept that there are some things you won't be able to permanently delete from the internet.

Source: http://www.cnet.com/how-to/remove-delete-yourself-from-the-internet/

Editors' note: This article was originally published in December 2014. It has been updated with only a few minor tweaks.

Categorized in Internet Privacy
