
Google’s Biggest Privacy Push: Auto-Delete Of Web, App, Location Data, YouTube Search For New Users

Search engine giant Google has disclosed a new development that will let users control their privacy. Google has quite a reputation when it comes to gathering data about people, so this does come as great news!

Let’s find out what changes Google is planning to make right here!

Google CEO Sundar Pichai Announces New Developments Regarding Privacy

The new developments were announced by Google CEO Sundar Pichai. He said that a number of privacy improvements will be included in the platform to enable users to control the data they’re sharing.

Previously, Google let users choose to delete this data automatically every 3 months or every 18 months. As per the new development, this feature will be enabled by default for all new users.

As we all know, Google records all search history, YouTube history, location history, and voice commands made through Google Assistant on the My Activity page.

Google CEO, Sundar Pichai said, “As we design our products, we focus on three important principles: keeping your information safe, treating it responsibly, and putting you in control. Today, we are announcing privacy improvements to help do that, including changes to our data retention practices across our core products to keep less data by default.”

How Does Google’s New Feature Work? All You Need To Know

When any Google user turns on Location History for the first time, the auto-delete option will be set to 18 months by default. Previously, this was off by default. Additionally, Web and App Activity auto-delete will also default to 18 months for all new users.

In simple words, your activity data will now be deleted automatically and continuously after 18 months, whereas previously it was stored until you chose to delete it.

Remember that you can turn these settings off or also change your auto-delete option whenever you want. 

However, if Location History and Web and App Activity have already been turned on by the user, those settings will not be changed by Google. But the company will remind its users about the new auto-delete controls via notifications and email.

As per Pichai, when a user signs into their Google Account, they will be able to search for “Google Privacy Checkup” and “Is my Google Account secure?” The query will be answered by a box, visible only to that user, showing their privacy and security settings, which can be easily reviewed or adjusted.

[Source: This article was published in trak.in By Radhika Kajarekar - Uploaded by the Association Member: Corey Parker]

Categorized in Search Engine

Ohio and Washington emerged as new hotspots for internet crime in 2019, though California continues to lead with the largest online fraud victim losses and number of victims, according to research from the Center for Forensic Accounting in Florida Atlantic University's College of Business.

California online victim losses increased 27 percent from 2018 to $573.6 million in 2019. The number of victims in California increased by 2 percent to 50,000.

Florida ranked second in victim losses ($293 million) and also posted the largest annual increase in both victim losses and number of victims over the past five years. The average loss per victim in the Sunshine State grew from $4,700 in 2015 to $10,800 in 2019, while the average victim loss jumped 46 percent from 2018.

When victim losses are adjusted for population, Ohio had the largest loss rate in 2019 at $22.6 million per 1 million in population, rising sharply from $8.4 million in 2018. Washington had the highest victim rate at 1,720 per 1 million in population.
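To unpack the population adjustment: Ohio’s population is roughly 11.7 million, so a loss rate of $22.6 million per 1 million residents works out to total victim losses on the order of $264 million (22.6 × 11.7).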

Ohio and Washington replaced North Carolina and Virginia, which ranked among the top states in 2018.

The other top states in the latest report were New York and Texas. The report is based on statistics from the FBI, which collects data from victims reporting alleged internet crimes.

"Fraudsters are getting more efficient at going after where the money is," said Michael Crain, DBA, director of FAU's Center for Forensic Accounting. "There doesn't seem to be any mitigation of the growing trend of online crime. The first line of defense from online fraud is not a technology solution or even law enforcement; it's user awareness. From a policy perspective, governments and other institutions should get the word out more so that individuals and organizations are more sensitive to online threats."

Crimes such as extortion, government impersonation and spoofing became more noticeable last year for their increases in victim losses and number of victims, according to the report. Business email compromise/email account compromise (BEC/EAC) remained the top internet crime in 2019, with reported losses of $1.8 billion, followed by confidence fraud/romance ($475 million) and spoofing ($300 million) schemes.

Spoofing, the falsifying of email contact information to make a message appear to have originated from a trustworthy source, was the crime with the largest percentage increase in victim losses (330 percent) across the top states during 2019.

BEC/EAC, in which business or personal email accounts are hacked or spoofed to request wire transfers, accounted for 30 percent to 90 percent of all victim losses last year in the top states and has grown significantly since 2015.

In confidence fraud/romance, an online swindler pretends to be in a friendly, romantic or family relationship to win the trust of the victim to obtain money or possessions.

For online investment fraud, in which scammers often lure seniors with promises of high returns, California leads the top states with $37.8 million in victim losses, but Florida's population-adjusted loss rate of $1.1 million makes it the state where victims are likely to lose the most money.

A major problem is that most internet crime appears to originate outside the United States and the jurisdiction of U.S. authorities.

"Foreign sources of internet crimes on U.S. residents and businesses make it challenging for whether  levels can be reduced as the public becomes more connected and dependent on the internet," the report states.

[Source: This article was published in phys.org By Paul - Uploaded by the Association Member: James Gill]

Categorized in Online Research

We all live in a digital world in which being in constant touch with technology is not just an option but a necessity. This has definitely had a number of positive effects on society, but it also comes with its fair share of drawbacks. And one such drawback is the whole host of privacy issues it raises.

When you are surfing the internet, you are leaving behind a number of digital footprints. And if you are not diligent about protecting your privacy, then somebody could very well track those footprints and steal your personal information. This sounds scary. But the good thing is that if you just follow a few steps, then you can avoid this entire scenario. In this article, we’ll be looking at the top 6 tips that you can follow to protect your privacy while using the internet. The list is mentioned below.

  1. Review Your Social Media Privacy Settings

According to current statistics, millions of people use various social media applications every single day. And the chances are that you are also one of those people. This is why it is important for you to take the necessary steps to protect all the private information that might be present on your social media profiles. If someone hires a detective agency to look into you, they will surely start with your social media.

Most social media applications come equipped with strong privacy settings that you can activate to protect yourself and your information from total strangers on the internet. You can also select the people who can view what you post or share on your social media profile. This goes a long way in keeping your information safe.

  2. Avoid Using Public Storage

When we think of sharing information, the first thing that comes to mind is social media. But that is not where the privacy issue ends. There are many other ways through which you might be sharing your information. And one such way is using online sharing platforms to store private information.

For example, there are many people who save their passwords, videos, photographs, and other documents on Google Drive. And while it is fine to upload files to Google Drive that you mean to share, when it comes to saving your private information, applications like Google Drive and Dropbox are not the ideal choice.

  3. Evade Trackers

Surfing the internet is not possible without visiting various websites. And every single time you visit a website, your browser discloses a bunch of information about you. This information is often used by marketers to target you with ads. But this information can also be misused. So it is suggested that people use private browsing services. This option is definitely better than browsing in incognito mode.

  4. Secure Everything

Let’s consider a scenario in which you lose your device, or a situation in which a hacker is trying to break into it. In both of these situations, the only weapon that can protect your private information is your password.

This is why you must make sure that all of your devices are password protected. You also need to ensure that you have a strong password. It is suggested that you use a combination of letters, numbers, and special characters. When it comes to privacy, you cannot let anything slip through the cracks. The biggest reason why I’m saying this is that even a small mistake can tear a big hole in your privacy.
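As a minimal sketch of what generating such a password might look like, the snippet below uses Python’s standard secrets module; the length and the requirement of one character from each class are illustrative choices, not a universal rule:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, numbers, and special characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep drawing until every character class is represented.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

print(generate_password())
```

Better yet, let a reputable password manager generate and remember passwords like these for you.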

  5. Keep Your Electronic Devices Safe

Sometimes you can follow all the precautions, but a hacker might still find a way to try and steal your private information. And it is vital for you to also be prepared for a situation like that. This means that you should make sure to install an antivirus program on all of your devices, irrespective of whether you are using those devices at home or outside. You should also set up a firewall on your computer. It is very important to keep your electronic devices secure. In today’s day and age, one of the biggest threats to your privacy comes through electronic devices.

  6. Check Your Wi-Fi Connection

Do you remember the last time you were in a coffee shop or traveling, and you decided to connect to the internet through a public Wi-Fi network? This is something that most people do daily without giving it much thought. And this is not the right attitude.

The chances are that there are many people connected to the same public Wi-Fi network, and if you connect to it too, then somebody could decide to snoop on you. So it is suggested that you avoid using public Wi-Fi networks. If that is not possible, then you should make it a point not to enter any of your private information while you are connected through a public Wi-Fi network.

[Source: This article was published in newspatrolling.com - Uploaded by the Association Member: Jeremy Frink]

Categorized in Internet Privacy

With much of the country still under some form of lockdown due to COVID-19, communities are increasingly reliant upon the internet to stay connected.

The coronavirus’s ability to relegate professional, political, and personal communications to the web underscores just how important end-to-end encryption has already become for internet privacy. During this unprecedented crisis, just like in times of peace and prosperity, watering down online consumer protection is a step in the wrong direction.

The concept of end-to-end encryption is simple: platforms or services that use the system employ cryptographic software to ensure that only the sender and the receiver can access the information being sent.
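As a minimal sketch of that idea, the snippet below uses the PyNaCl library to encrypt a message so that only the intended recipient can read it. Real messaging apps layer far more machinery (such as key ratcheting) on top of this primitive:

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each party generates a key pair; the private half never leaves their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Only Bob, holding his private key, can decrypt what was sent to him.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

Anyone intercepting the ciphertext in transit sees only random-looking bytes, which is precisely the guarantee the article describes.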

At present, many common messaging apps or video calling platforms offer end-to-end encryption, while the world’s largest social media platforms are in various stages of releasing their own form of encrypted protection.

End-to-end encryption provides consumers with the confidence that their most valuable information online will not be intercepted. In addition to personal correspondence, bank details, health records, and commercial secrets are just some of the private information entered and exchanged through encrypted connections.

With consumers unable to carry out routine business in person, such as visiting the DMV, a wealth of private data is increasingly being funneled into online transactions during the COVID-19 pandemic.

Unsurprisingly, however, the ability to communicate online in private has drawn the ire of law enforcement, who are wary of malicious actors being able to coordinate in secret. For example, earlier this year Attorney General Bill Barr called on Apple to unlock two iPhones as part of a Florida terror investigation.

The request is just the latest chapter in the Justice Department’s battle with cellphone makers to get access to private encrypted data.

While Apple has so far refused to compromise the integrity of its encryption, the push to poke loopholes into online privacy continues. The problem is not the Justice Department investigation itself, but rather the precedent it would set.

As Apple CEO Tim Cook noted in 2016, cracking encryption or installing a backdoor would effectively create a “master key.” With it, law enforcement would be able to access any number of devices.

Law enforcement agents already have a panoply of measures at their fingertips to access the private communications of suspected criminals and terrorists. From the now-infamous FISA warrants used to wiretap foreign spies to the routine subpoenas used to access historic phone records, investigators employ a variety of methods to track and prosecute criminals.

Moreover, creating a backdoor to encrypted services introduces a weak link in the system that could be exploited by countless third-party hackers. While would-be terrorists and criminals will simply shift their communications to new, yet-to-be-cracked encryption services, everyday internet users will face a higher risk of having their data stolen. An effort to stop crime that creates an opportunity for even more crime seems futile.

Efforts to weaken encryption protections now appear even more misjudged due to a rise in cybercrime during the COVID-19 pandemic. Organizations such as the World Health Organization have come under cyberattack in recent weeks, with hundreds of email passwords being stolen.

Similarly, American and European officials have recently warned that hospitals and research institutions are increasingly coming under siege from hackers. According to the FBI, online crime has quadrupled since the beginning of the pandemic. In light of this cybercrime wave, it seems that now is the time for more internet privacy protection, not less.

Internet users across America, and around the world, rely on end-to-end encryption for countless uses online. This reliance has only increased during the COVID-19 pandemic, as more consumers turn to online solutions.

Weakening internet privacy protections to fight crime might benefit law enforcement, but it would introduce new risk to law-abiding consumers.

[Source: This article was published in insidesources.com By Oliver McPherson-Smith - Uploaded by the Association Member: Jennifer Levin]

Categorized in Internet Privacy

While public safety measures have started to relax, the surge of malware accompanying the pandemic is still making headlines. As a recent study points out, hackers have created no less than 130,000 new e-mail domains related to Covid-19 to carry out what analysts now call “fearware” attacks.

A lot of these domains and attacks are tied to the same source: the dark web. From selling vaccines and fake drugs to simply spreading panic, the dark web has been the host of many pandemic-related threats. And these attacks were just the latest addition to the dark web’s regular activity, including, but not restricted to, botnets, cryptojacking, and selling ransomware.

However, to see how threats from the far reaches of the Internet can affect your company or clients, we must delve deeper into the concept of “dark web’’.

In the first part of our article, we try to understand the dark web’s structure and acknowledge its growing importance to cybersecurity teams.

What is the Dark Web?

Whether simple users or security specialists, most of us spend our time online the same way: tied to a few popular websites and chat clients, or perusing pages through a search engine. This activity, mediated by traditional browsers and apps, accounts for an almost endless amount of content.

But, as copious as this content might seem, it’s only a small percentage of what the Internet has to offer – as little as 4%, according to CSO Online. The rest of it? An enormous collection of unindexed websites, private pages, and secluded networks that regular search engines cannot detect, bearing the generic moniker of “deep web”.

The deep web covers just about anything that’s hidden from the public eye, including exclusive and paid content, private repositories, academic journals, medical records, confidential company data, and much more. In a broad sense, even the contents of an e-mail server are part of the deep web.

However, there is a certain part of the deep web that’s noticeably different. How? Well, if the deep web, in general, is content that can’t be found through conventional means, the dark web is that part of it that does not want to be found.

The dark web exists through private networks that use the Internet as support but require specific software to be accessed, as well as additional configurations or authorization. While the dark web is only a small part of the deep web, it allegedly still accounts for around 5% of the entire Internet… and for a lot of its malicious activity.

Since the dark web can’t be accessed directly, users need to use special software such as the Tor browser, I2P, or Freenet. Tor, also known as The Onion Router, is perhaps the best-known means of accessing the dark web, as it is used both as a gateway and a security measure (limiting website interactions with the user’s system). While the protocol itself was initially developed by a Navy division before becoming open source, the project is currently administered by an NGO.
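For the technically curious, a running Tor client exposes a local SOCKS proxy (by default on port 9050) that ordinary tools can route traffic through. Here is a minimal sketch in Python, assuming a Tor client is installed and listening on that default port:

```python
# pip install requests[socks]
import requests

# Route both HTTP and HTTPS through the local Tor SOCKS proxy.
# 'socks5h' (note the 'h') resolves DNS through Tor as well.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# The Tor Project's check service reports whether a request arrived via Tor.
resp = requests.get("https://check.torproject.org/api/ip",
                    proxies=proxies, timeout=30)
print(resp.json())  # e.g. {"IsTor": true, "IP": "..."}
```

As the article stresses, the technology itself is legitimate; it is what some users do with the resulting anonymity that creates problems.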

I2P (The Invisible Internet Project) specializes in allowing the anonymous creation and hosting of websites through secure protocols, directly contributing to the development of the dark web.

At this point, it’s worth stating that many dark web sites are not in any way malicious and might just be private for security reasons (journalism websites for countries where censorship is rampant, private chat rooms for people affected by trauma, etc.). It’s also worth noting that platforms such as Tor are not malicious in themselves, with their technology being also used by many legitimate companies. However, the dark web offers two very powerful abilities to its users, both of them ripe for abuse.

These abilities are complete anonymity and untraceability. Unfortunately, their dangers only became fully visible after Silk Road, probably the world’s largest illegal online market at the time, was closed. A similar ripple was also produced by the closing of the gigantic AlphaBay, an even more comprehensive follow-up to Silk Road.

The Dangers of Anonymity

The truth is, dark web sites have been known to sell just about anything: drugs, contraband, guns, subscription credentials, password lists, credit cards, and malware of all types, as well as multiple other illegal wares. All without any real control from website owners or authorities, and all under the cover of encryption. Back in 2015, a study classified the contents of more than 2,700 dark web sites and found that no less than 57% hosted illicit materials!

Obviously, this prompted authorities to take action. Some law enforcement agencies have started monitoring Tor downloads to correlate them with suspicious activity, while others, such as the FBI, established their own fake illegal websites on the dark web to catch wrong-doers.

Even with such measures in place, the dark web’s growth is far from coming to a halt. Its traffic actually increased around the Covid-19 pandemic and the technology’s 20th anniversary. It is estimated that in 2019, 30% of Americans were visiting the dark web regularly, although mostly not for a malicious purpose. Furthermore, as large social networks increase their content filtering and as web monitoring becomes more prevalent on the “surface web”, the dark web is slowly becoming an ideological escape for certain vocal groups.

While these numbers can put things into perspective, many security experts, from both enterprise organizations and MSSPs, might ask: ”Alright, but what does that have to do with my company? Why do I have to monitor the dark web?”

In the second part of our article, you will learn what Dark Web threats are aimed directly at your enterprise, and how an efficient Threat Intelligence solution can keep them at bay.

[Source: This article was published in securityboulevard.com By Andrei Pisau - Uploaded by the Association Member: Alex]

Categorized in Deep Web

Around the world, a diverse and growing chorus is calling for the use of smartphone proximity technology to fight COVID-19. In particular, public health experts and others argue that smartphones could provide a solution to an urgent need for rapid, widespread contact tracing—that is, tracking who infected people come in contact with as they move through the world. Proponents of this approach point out that many people already own smartphones, which are frequently used to track users’ movements and interactions in the physical world.

But it is not a given that smartphone tracking will solve this problem, and the risks it poses to individual privacy and civil liberties are considerable. Location tracking—using GPS and cell site information, for example—is not suited to contact tracing because it will not reliably reveal the close physical interactions that experts say are likely to spread the disease. Instead, developers are rapidly coalescing around applications for proximity tracing, which measures Bluetooth signal strength to determine whether two smartphones were close enough together for their users to transmit the virus. In this approach, if one of the users becomes infected, others whose proximity has been logged by the app could find out, self-quarantine, and seek testing. Just today, Apple and Google announced joint application programming interfaces (APIs) using these principles that will be rolled out in iOS and Android in May. A number of similarly designed applications are now available or will launch soon.

As part of the nearly unprecedented societal response to COVID-19, such apps raise difficult questions about privacy, efficacy, and responsible engineering of technology to advance public health. Above all, we should not trust any application—no matter how well-designed—to solve this crisis or answer all of these questions. Contact tracing applications cannot make up for shortages of effective treatment, personal protective equipment, and rapid testing, among other challenges.

COVID-19 is a worldwide crisis, one which threatens to kill millions and upend society, but history has shown that exceptions to civil liberties protections made in a time of crisis often persist much longer than the crisis itself. With technological safeguards, sophisticated proximity tracking apps may avoid the common privacy pitfalls of location tracking. Developers and governments should also consider legal and policy limits on the use of these apps. Above all, the choice to use them should lie with individual users, who should inform themselves of the risks and limitations, and insist on necessary safeguards. Some of these safeguards are discussed below. 

How Do Proximity Apps Work?

There are many different proposals for Bluetooth-based proximity tracking apps, but at a high level, they begin with a similar approach. The app broadcasts a unique identifier over Bluetooth that other, nearby phones can detect. To protect privacy, many proposals, including the Apple and Google APIs, have each phone’s identifier rotated frequently to limit the risk of third-party tracking.

When two users of the app come near each other, both apps estimate the distance between each other using Bluetooth signal strength. If the apps estimate that they are less than approximately six feet (or two meters) apart for a sufficient period of time, the apps exchange identifiers. Each app logs an encounter with the other’s identifier. The users’ location is not necessary, as the application need only know if the users are sufficiently close together to create a risk of infection.
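To make the mechanics concrete, here is a rough sketch of the distance heuristic in Python. The calibration constant (signal strength at one metre), path-loss exponent, and thresholds below are illustrative assumptions; real apps must calibrate per device model and environment:

```python
def estimate_distance_m(rssi_dbm: float,
                        power_at_1m_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Estimate distance from Bluetooth signal strength using the
    log-distance path-loss model:
        RSSI = P_1m - 10 * n * log10(d)  =>  d = 10 ** ((P_1m - RSSI) / (10 * n))
    """
    return 10 ** ((power_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

CONTACT_DISTANCE_M = 2.0    # "close enough to transmit" threshold
MIN_CONSECUTIVE_SCANS = 5   # e.g. five scans taken a minute apart

def is_loggable_encounter(rssi_samples: list) -> bool:
    """Log an encounter only if the other phone stayed within ~2 m long enough."""
    if len(rssi_samples) < MIN_CONSECUTIVE_SCANS:
        return False
    return all(estimate_distance_m(r) <= CONTACT_DISTANCE_M
               for r in rssi_samples)

# A strong signal (-55 dBm) reads as under a metre; a weak one (-80 dBm) as ~11 m.
print(estimate_distance_m(-55.0), estimate_distance_m(-80.0))
```

Note that this model cannot tell whether a wall or a car door separates the two phones, which is exactly why over- and under-notification are live concerns later in this article.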

When a user of the app learns that they are infected with COVID-19, other users can be notified of their own infection risk. This is where different designs for the app significantly diverge.

Some apps rely on one or more central authorities that have privileged access to information about users’ devices. For example, TraceTogether, developed for the government of Singapore, requires all users to share their contact information with the app’s administrators. In this model, the authority keeps a database that maps app identifiers to contact information. When a user tests positive, their app uploads a list of all the identifiers it has come into contact with over the past two weeks. The central authority looks up those identifiers in its database, and uses phone numbers or email addresses to reach out to other users who may have been exposed. This places a lot of user information out of their own control, and in the hands of the government. This model creates unacceptable risks of pervasive tracking of individuals’ associations and should not be employed by other public health entities.

Other models rely on a database that doesn’t store as much information about the app’s users. For example, it’s not actually necessary for an authority to store real contact information. Instead, infected users can upload their contact logs to a central database, which stores anonymous identifiers for everyone who may have been exposed. Then, the devices of users who are not infected can regularly ping the authority with their own identifiers. The authority responds to each ping with whether the user has been exposed. With basic safeguards in place, this model could be more protective of user privacy. Unfortunately, it may still allow the authority to learn the real identities of infected users. With more sophisticated safeguards, like cryptographic mixing, the system could offer slightly stronger privacy guarantees.
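A toy sketch of this lookup model may help; all names here are illustrative, not any real app’s API:

```python
# Server side: only anonymous rotating identifiers are ever stored.
exposed_ids = set()

def upload_contact_log(contact_identifiers: list) -> None:
    """Called when an infected user consents to share their encounter log."""
    exposed_ids.update(contact_identifiers)

def check_exposure(own_identifiers: list) -> bool:
    """A healthy user's device periodically asks: was I near anyone infected?"""
    return any(identifier in exposed_ids for identifier in own_identifiers)
```

Even in this stripped-down form, the privacy leak mentioned above is visible: whoever operates the server sees which identifiers arrive in each upload and each query, and metadata such as IP addresses could tie them back to real people.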

Some proposals go further, publishing the entire database publicly. For example, Apple and Google’s proposal, published April 10, would broadcast a list of keys associated with infected individuals to nearby people with the app. This model places less trust in a central authority, but it creates new risks to users who share their infection status that must be mitigated or accepted.

Some apps require authorities, like health officials, to certify that an individual is infected before they may alert other app users. Other models could allow users to self-report infection status or symptoms, but those may result in significant numbers of false positives, which could undermine the usefulness of the app.

In short, while there is early promise in some of the ideas for engineering proximity tracking apps, there are many open questions.

Would Proximity Apps Be Effective?

Traditional contact tracing is fairly labor intensive, but can be quite detailed. Public health workers interview the person with the disease to learn about their movements and people with whom they have been in close contact. This may include interviews with family members and others who may know more details. The public health workers then contact these people to offer help and treatment as needed, and sometimes interview them to trace the chain of contacts further. It is difficult to do this at scale during a pandemic. In addition, human memory is fallible, so even the most detailed picture obtained through interviews may have significant gaps or mistakes.

Proximity app contact tracing is not a substitute for public health workers’ direct intervention. It is also doubtful that a proximity app could substantially help conduct COVID-19 contact tracing during a time like the present, when community transmission is so high that much of the general population is sheltering in place, and when there is not sufficient testing to track the virus. When there are so many undiagnosed infectious people in the population, a large portion of whom are asymptomatic, a proximity app will be unable to warn of most infection risks. Moreover, without rapid and widely available testing, even someone with symptoms cannot confirm an infection in order to begin the notification process. And everyone is already being asked to avoid proximity to people outside their household.

However, such an app might be helpful with contact tracing in a time we hope is coming soon, when community transmission is low enough that the population can stop sheltering in place, and when there is sufficient testing to quickly and efficiently diagnose COVID-19 at scale.

Traditional contact tracing is only useful for contacts that the subject can identify. COVID-19 is exceptionally contagious and may be spread from person to person during even short encounters. A brief exchange between a grocery clerk and a customer, or between two passengers on public transportation, may be enough for one individual to infect the other. Most people don’t collect contact information for everyone they encounter, but apps can do so automatically. This might make them useful complements to traditional contact tracing.

But an app will treat the contact between two people passing on the sidewalk the same as the contact between roommates or romantic partners, though the latter carry much greater risks of transmission. Without testing an app in the real world—which entails privacy and security risks—we can’t be sure that an app won’t also log connections between people separated by walls or in two adjacent cars stopped at a light. Apps also don’t take into account whether their users are wearing protective equipment, and may serially over-report exposure to users like hospital staff or grocery store workers, despite their extra precautions against infection. It is not clear how the technological constraints of Bluetooth proximity calculations will inform public health decisions to notify potentially infected individuals. Is it better for these applications to be slightly oversensitive and risk over-notifying individuals who may not have actually been standing within six feet of an infected user for the requisite amount of time? Or should the application have higher thresholds so that a notified user may have more confidence they were truly exposed?

Furthermore, these apps can only log contacts between two people who each have a phone on their person that is Bluetooth enabled and has the app installed. This highlights another necessary condition for a proximity app to be effective: its adoption by a sufficiently large number of people. The Apple and Google APIs attempt to address this problem by offering a common platform for health authorities and developers to build applications that offer common features and protections. These companies also aspire to build their own applications that will interoperate more directly and speed adoption. But even then, a sizable percentage of the world’s population—including a good part of the population of the United States—may not have access to a smartphone running the latest version of iOS or Android. This highlights the need to continue to employ tried-and-true public health measures such as testing and traditional contact tracing, to ensure that already-marginalized populations are not missed.

We cannot solve a pandemic by coding the perfect app. Hard societal problems are not solved by magical technology, among other reasons because not everyone will have access to the necessary smartphones and infrastructure to make this work. 

Finally, we should not excessively rely on the promise of an unproven app to make critical decisions, like deciding who should stop sheltering in place and when. Reliable applications of this sort typically go through many rounds of development and layers of testing and quality assurance, all of which takes time. And even then, new apps often have bugs. A faulty proximity tracing app could lead to false positives, false negatives, or maybe both. 

Would Proximity Apps Do Too Much Harm to Our Freedoms?

Any proximity app creates new risks for technology users. A log of a user’s proximity to other users could be used to show who they associate with and infer what they were doing. Fear of disclosure of such proximity information might chill users from participating in expressive activity in public places. Vulnerable groups are often disparately burdened by surveillance technology, and proximity tracking may be no different. And proximity data or medical diagnoses might be stolen by adversaries like foreign governments or identity thieves.

To be sure, some commonly used technologies create similar risks. Many track and report your location, from Fitbit to Pokemon Go. Just carrying a mobile phone brings the risk of tracking through cell tower triangulation. Stores try to mine customer foot traffic through Bluetooth. Many users are “opted in” to services like Google’s location services, which keep a detailed log of everywhere they have gone. Facebook attempts to quantify associations between people through myriad signals, including using face recognition to extract data from photographs, linking accounts to contact data, and mining digital interactions. Even privacy-preserving services like Signal can expose associations through metadata.

So the proposed addition of proximity tracking to these other extant forms of tracking would not be an entirely new threat vector. But the potentially global scale of contact tracing APIs and apps, and their collection of sensitive health and associational information, presents new risks for more users.

Context matters, of course. We face an unprecedented pandemic. Tens of thousands of people have died, and hundreds of millions of people have been instructed to shelter in place. A vaccine is expected in 12 to 18 months. While this gives urgency to proximity app projects, we must also remember that this crisis will end, but new tracking technologies tend to stick around. Thus proximity app developers must be sure they are developing a technology that will preserve the privacy and liberty we all cherish, so we do not sacrifice fundamental rights in an emergency. Providing sufficient safeguards will help mitigate this risk. Full transparency about how the apps and the APIs operate, including open source code, is necessary for people to understand, and give their informed consent to, the risks.

Does a Proximity App Have Sufficient Safeguards?

We urge app developers to provide, and users to require, the following necessary safeguards:

Consent

Informed, voluntary, and opt-in consent is the fundamental requirement for any application that tracks a user’s interactions with others in the physical world. Moreover, people who choose to use the app and then learn they are ill must also have the choice of whether to share a log of their contacts. Governments must not require the use of any proximity application. Nor should there be informal pressure to use the app in exchange for access to government services. Similarly, private parties must not require the app’s use in order to access physical spaces or obtain other benefits.

Individuals should also have the opportunity to turn off the proximity tracing app. Users who consent to some proximity tracking might not consent to other proximity tracking, for example, when they engage in particularly sensitive activities like visiting a medical provider, or engaging in political organizing. People can withhold this information from traditional contact tracing interviews with health workers, and digital contact tracing must not be more intrusive. People are more likely to turn on proximity apps in the first place (which may be good for public health) if they know they have the prerogative to turn it off and back on when they choose.

While it may be tempting to mandate use of a contact tracing app, the interference with personal autonomy is unacceptable. Public health requires trust between public health officials and the public, and fear of surveillance may cause individuals to avoid testing and treatment. This is a particularly acute concern in marginalized communities that have historical reasons to be wary of coerced participation in the name of public health. While some governments may disregard the consent of their citizens, we urge developers not to work with such governments.

Minimization

Any proximity tracking application for contact tracing should collect the least possible information. This is probably just a record of two users being near each other, measured through Bluetooth signal strength plus device types, and a unique, rotating marker for the other person’s phone. The application should not collect location information. Nor should it collect time stamp information, except maybe the date (if public health officials think this is important to contact tracing).

The system should retain the information for the least possible amount of time, which likely is measured in days and weeks and not months. Public health officials should define the increment of time for which proximity data might be relevant to contact tracing. All data that is no longer relevant must be automatically deleted.
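A minimal sketch of such automatic deletion, with the 14-day window below standing in as an assumed placeholder for whatever increment health officials define:

```python
import time

RETENTION_SECONDS = 14 * 24 * 3600  # assumption; officials set the real window

# On-device encounter log: (unix timestamp, rotating identifier) pairs.
encounters = []

def prune_expired() -> None:
    """Automatically drop any logged encounter older than the retention window."""
    global encounters
    cutoff = time.time() - RETENTION_SECONDS
    encounters = [(ts, rid) for ts, rid in encounters if ts >= cutoff]
```

Running this pruning on a schedule, rather than trusting users to clean up manually, is what makes the deletion "automatic" in the sense this section demands.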

Any central authority that maintains or publishes databases of anonymous identifiers must not collect or store metadata (like IP addresses) that may link anonymous identifiers to real people.

The application should collect information solely for the purpose of contact tracing. Furthermore, there should be hard barriers between (a) the proximity tracking app and (b) anything else an app maker is collecting, such as aggregate location data or individual health records.

Finally, to the greatest extent possible, information collected should reside on a user’s own device, rather than on servers run by the application developer or a public health entity. This presents engineering challenges. But lists of devices with which the user has been in proximity should stay on the user’s own device, so that checking whether a user has encountered someone who is infected happens locally. 

Information Security

An application running in the background on a phone and logging a user’s proximity to other users presents considerable information security risks. As always, limiting the attack surface and the amount of information collected will lower these risks. Developers should open-source their code and subject it to third-party audits and penetration testing. They should also publish details about their security practices.

Further engineering may be necessary to ensure that adversaries cannot compromise a proximity tracing system’s effectiveness or derive revealing information about the users of the application. This would include preventing individuals from falsely reporting infections as a form of trolling or denial of service, as well as ensuring that well-resourced adversaries who monitor metadata cannot identify individuals using the app or log their connections with others.

“Anonymous” identifiers must not be linkable. Regularly rotating identifiers used by the phone is a start, but if an adversary can learn that multiple identifiers belong to the same user, it greatly increases the risk that they can tie that activity to a real person. As we understand Apple and Google’s proposal, users who test positive are asked to upload keys that tie together all their identifiers for a 24-hour period. (We have asked Apple and Google for clarification.) This could allow trackers to collect rotating identifiers if they had access to a widespread network of bluetooth readers, then track the movements of infected users over time. This breaks the safeguards created by using rotating identifiers in the first place. For that reason, rotating identifiers must be uploaded to any central authority or database in a way that doesn’t reveal the fact that many identifiers belong to the same person. This may require that the upload of a single user’s tokens are batched with other user data or spread out over time.
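A toy example shows why day-level keys undo the benefit of rotation. The derivation below is illustrative only, not the actual Apple/Google key schedule:

```python
import hashlib
import hmac
import secrets

daily_key = secrets.token_bytes(16)  # one key per phone per day (illustrative)

def rotating_identifier(key: bytes, interval: int) -> bytes:
    """Derive the identifier broadcast during one ~10-minute interval."""
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# The phone broadcasts a different identifier every interval (144 per day)...
broadcasts = [rotating_identifier(daily_key, i) for i in range(144)]
assert len(set(broadcasts)) == 144  # unlinkable to a passive observer

# ...but anyone who later obtains daily_key can recompute all 144 identifiers
# and link every sighting of that phone across the whole day.
assert rotating_identifier(daily_key, 7) == broadcasts[7]
```

This is why the authors argue that uploads must not reveal that many identifiers share one owner, for instance by batching them with other users’ data.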

Finally, governments might try to force tech developers to subvert the limits they set, such as changing the application to report contact lists to a central authority. Transparency will mitigate these risks, but they remain inherent in building and deploying such an application. This is one of the reasons we call on developers to draw clear lines about the uses of their products and to pledge to resist government efforts to meddle in the design, as we’ve seen companies like Apple do in the San Bernardino case.

Transparency

Entities that develop these apps must publish reports about what they are doing, how they are doing it, and why they are doing it. They must also publish open source code, as well as policies that address the above privacy and information security issues. These should include commitments to avoid other uses of information collected by the app and a pledge to avoid government interference to the extent allowed by law. Stated as application policy, this should also allow enforcement of violations through consumer protection laws. 

Addressing Bias

As discussed above, contact tracing applications will leave out individuals without access to the latest technology. They will also favor those predisposed to count on technology companies and the government to address their needs. We must ensure that developers and the government do not directly or indirectly leave out marginalized groups by relying on these applications to the exclusion of other interventions.

On the other side, these apps may lead to many more false positives for certain kinds of users, such as workers in the health or service sectors. This is another reason that contact-tracing apps must not be used as a basis to exclude people from work, public gatherings, or government benefits.

Expiration

When the COVID-19 crisis ends, any application built to fight the disease should end as well. Defining the end of the crisis will be a difficult question, so developers should ensure that users can opt out at any point. They should also consider building time limits into their applications themselves, along with regular check-ins with the users as to whether they want to continue broadcasting. Furthermore, as major providers like Apple and Google throw their weight behind these applications, they should articulate the circumstances under which they will and will not build similar products in the future.

Technology has the power to amplify society’s efforts to tackle complex problems, and this pandemic has already inspired many of the best and brightest. But we’re also all too familiar with the ability of governments and private entities to deploy harmful tracking technologies. Above all, even as we fight COVID-19, we must ensure that the word “crisis” does not become a magic talisman that can be invoked to build new and ever more clever means of limiting people’s freedoms through surveillance.

[Source: This article was published in eff.org By Andrew Crocker, Kurt Opsahl, and Bennett Cyphers - Uploaded by the Association Member: Anna K. Sasaki]

Categorized in Internet Privacy

GOOGLE CHROME users have been put on alert after thousands of people were tricked into downloading a dangerous file posing as a browser update.

Google Chrome fans are being warned about a fake download which has already tricked thousands of users of the market-leading browser. Google Chrome is the most popular browser in the world by a country mile, and it's not in danger of losing that illustrious crown anytime soon. The latest stats from NetMarketShare put Google Chrome as holding a 68.50 per cent share of the internet browser marketplace.

That's over two thirds of the market, and far ahead of its nearest challengers Microsoft Edge and Mozilla Firefox.

These rival internet browsers hold 7.59 per cent and 7.19 per cent of the marketplace respectively.

And the huge Google Chrome user base has been put on alert about a fake download that has already tricked thousands of people.

Doctor Web in a post online revealed the existence of the dangerous Google Chrome download which poses as an update to the browser.

In total more than 2,000 people have downloaded the fake Google Chrome update.

Doctor Web said hackers had specifically been targeting Chrome users in the UK, US, Canada, Australia, Israel and Turkey.

The security experts said: "According to the Doctor Web virus laboratory, the hacker group behind this attack was previously involved in spreading a fake installer of the popular VSDC video editor through its official website and the CNET software platform.

"This time the cybercrooks managed to gain administrative access to several websites that began to be used in the infection chain.

"They embedded a malicious JavaScript code inside the compromised pages that redirects users to a phishing site, which is presented as legitimate Google service.

"Target selection is based on geolocation and browser detection. The target audience are users from the USA, Canada, Australia, Great Britain, Israel and Turkey, using the Google Chrome browser.

"It is worth noting that the downloaded file has a valid digital signature identical to the signature of the fake NordVPN installer distributed by the same criminal group."

As always, a good anti-virus programme can help you detect any such threats and remove malicious software that does end up on your machines.

And you should always be wary if you randomly get redirected to a website asking you to download anything or input sensitive information.

This is not how companies alert users to important software updates, with Chrome in particular offering an auto-download feature for patches.

The news comes as in the past few days Google has released the latest version of Chrome, update 81.

However, the search engine giant has opted to skip the planned version 82 of Chrome due to the ongoing coronavirus pandemic.

The Chrome development team revealed the news on Twitter saying: "Due to adjusted work schedules, we’re pausing upcoming Chrome & Chrome OS releases.

"Our goal is to ensure they continue to be stable, secure, and reliable for anyone who depends on them.

"We’ll prioritise updates related to security, which will be included in Chrome 80. Stay tuned."

[Source: This article was published in express.co.uk By DION DASSANAYAKE - Uploaded by the Association Member: Patrick Moore]

Categorized in Search Engine

An unlikely competitor enters the search engine market as Verizon Media launches its privacy-focused OneSearch.

OneSearch promises not to track, store, or share personal or search data with advertisers, which puts it in direct competition with DuckDuckGo. It’s available now on desktop and mobile at OneSearch.com.

What differentiates Verizon Media’s OneSearch from DuckDuckGo, a more established privacy-focused search engine, is the ability for businesses to integrate it with their existing privacy and security products.

In an announcement, the company states:

“OneSearch doesn’t track, store, or share personal or search data with advertisers, giving users greater control of their personal information in a search context. Businesses with an interest in security can partner with Verizon Media to integrate OneSearch into their privacy and security products, giving their customers another measure of control.”

Another unique offering from OneSearch is its advanced privacy mode. When enabled, OneSearch’s encrypted search results link will expire within an hour.

OneSearch’s advanced privacy mode is designed for situations where multiple people are using the same device, or if a search results link is being shared with a friend.

The full array of privacy-focused features offered by OneSearch include:

  • No cookie tracking, retargeting, or personal profiling
  • No sharing of personal data with advertisers
  • No storing of user search history
  • Unbiased, unfiltered search results
  • Encrypted search terms

Although it doesn’t sell data to advertisers, OneSearch does rely on advertising to keep its service free. Rather than using cookies and browsing history to target ads, OneSearch’s contextual ads are based on factors like the keyword currently being searched for.

OneSearch is only available in North America on desktop and mobile web browsers, though it will be available in other countries soon. A mobile app for Android and iOS will be available later this month.

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Jay Harris]

Categorized in Search Engine

In 2008, the first privacy-focused search engine emerged on the scene - DuckDuckGo. The company was the first to bring consumers a search engine designed to protect consumer privacy as they searched online. By 2018, DuckDuckGo had 16 million searches a day, and by 2019, that number had jumped to 36 million searches.

Now, more than ten years later, privacy and search continue to evolve.

Privado is a new private search engine from CodeFuel which allows consumers to protect their right to online privacy. Search results are powered by Bing and driven by the consumer’s search query, not by their demographics or personal data.

“Online privacy per se is not a new issue. But what we have seen until recently, is that a relatively narrow segment of users care enough to take action, mainly tech-savvy users, who understand how companies feed off their data,” said Tal Jacobson, General Manager of CodeFuel. “With the growing number of data breaches we hear about every other day, privacy concerns have finally made it to center stage.”

Jacobson says he strongly believes we have come very close to the privacy tipping point when people realize that this is just too much.


“Think for a moment about the millions of parents out there, who have just heard about the accusations against TikTok secretly gathering user data and sending it to China,” added Jacobson. “Think about the millions of users across social and search and how their data is used and abused to make more money, without their permission.”

Jacobson adds that users are waking up, and search privacy is making its way to the mainstream. “Privado enables users to realize the benefits of internet search without anxiety about their most intimate behaviors being observed and tracked.”

As consumer awareness increases around data and privacy, their actions have shifted as well. According to a data privacy and security report from RSA, 78 percent of consumers polled said they take action to limit the amount of personal information they share online.

[Source: This article was published in forbes.com By Jennifer Kite-Powell - Uploaded by the Association Member: Dorothy Allen]

Categorized in Search Engine

The lawsuit against Amir Golestan and his web-services provider firm Micfo is shedding light on the ecosystem that governs the world of online spammers and hackers, a Wall Street Journal article said on Monday (Feb. 17).

In this first-of-its-kind fraud prosecution of a small technology company, Golestan is facing 20 counts of wire fraud in a suit brought in the U.S. District Court in South Carolina. Golestan and his corporation have pleaded not guilty.

The alleged victim is the nonprofit American Registry for Internet Numbers, based in Centreville, Virginia. The company is in charge of assigning internet protocol (IP) addresses to all online devices in North America and the Caribbean, which in turn allows devices to communicate with one another online. The case revolves around IP addresses.

This is the first federal case to bring fraud allegations to internet resources. It could end up defining “new boundaries for criminal behavior” within the confines of the largely undefined internet infrastructure.

People are largely assigned an IP address automatically when it comes to getting online with a cellphone or internet service provider. IP addresses, however, are the online equivalent of home phone numbers and are “key identifiers” for authorities going after online criminals.

In the May suit against Micfo, the Justice Department alleges that Golestan established shell companies to fool the registry into giving him 800,000 IP addresses. He then leased or sold the IP addresses to clients, according to Golestan and the complaint.

His clients were reportedly virtual private networks — VPNs — which enable users to maintain anonymity online. VPNs can be used for legitimate online privacy protection, or to shield the identity of fraudsters and cybercriminals, transmit illicit content, and help online thieves hide their tracks.

As Micfo amassed VPN clients using the illegitimately obtained IP addresses, a lot of traffic — some of it criminal — flowed through its network without a trace, according to government subpoenas directed at Micfo and reviewed by The Wall Street Journal.

Golestan and Micfo are not charged with being part of or even aware of illegal activity transmitted via VPNs across Micfo’s servers. The DOJ charged him and the company with “defrauding the internet registry to obtain the IP addresses over a period of several years.”

Prosecutors said Golestan’s alleged scheme was valued at $14 million, which was based on the government’s estimated value of between $13 and $19 for each address in the secondary market, according to the court complaint.

Born in Iran, Golestan, 36, started Micfo in 1999 in the bedroom of his childhood home in Dubai before emigrating to the U.S.


[Source: This article was published in pymnts.com By PYMNTS - Uploaded by the Association Member: Bridget Miller]

Categorized in Internet Privacy