
With much of the country still under some form of lockdown due to COVID-19, communities are increasingly reliant upon the internet to stay connected.

The coronavirus’s ability to relegate professional, political, and personal communications to the web underscores just how important end-to-end encryption has already become for internet privacy. During this unprecedented crisis, just like in times of peace and prosperity, watering down online consumer protection is a step in the wrong direction.

The concept of end-to-end encryption is simple: platforms or services that use the system employ complex software to ensure that only the sender and the receiver can access the information being sent.

At present, many common messaging apps or video calling platforms offer end-to-end encryption, while the world’s largest social media platforms are in various stages of releasing their own form of encrypted protection.

End-to-end encryption provides consumers with the confidence that their most valuable information online will not be intercepted. In addition to personal correspondence, bank details, health records, and commercial secrets are just some of the private information entered and exchanged through encrypted connections.

With consumers unable to carry out routine business in person, such as visiting the DMV, a wealth of private data is increasingly being funneled into online transactions during the COVID-19 pandemic.

Unsurprisingly, however, the ability to communicate online in private has drawn the ire of law enforcement, who are wary of malicious actors being able to coordinate in secret. For example, earlier this year Attorney General Bill Barr called on Apple to unlock two iPhones as part of a Florida terror investigation.

The request is just the latest chapter in the Justice Department’s battle with cellphone makers to get access to private encrypted data.

While Apple has so far refused to compromise the integrity of its encryption, the push to poke loopholes into online privacy continues. The problem is not the Justice Department's investigation, but rather the precedent it would set.

As Apple CEO Tim Cook noted in 2016, cracking encryption or installing a backdoor would effectively create a “master key.” With it, law enforcement would be able to access any number of devices.

Law enforcement agents already have a panoply of measures at their fingertips to access the private communications of suspected criminals and terrorists. From the now-infamous FISA warrants used to wiretap foreign spies to the routine subpoenas used to access historic phone records, investigators employ a variety of methods to track and prosecute criminals.

Moreover, creating a backdoor to encrypted services introduces a weak link in the system that could be exploited by countless third-party hackers. While would-be terrorists and criminals will simply shift their communications to new, yet-to-be cracked encryption services, everyday internet users will face a higher risk of having their data stolen. An effort to stop crime that creates an opening for even more crime is self-defeating.

Efforts to weaken encryption protections now appear even more misjudged due to a rise in cybercrime during the COVID-19 pandemic. Organizations such as the World Health Organization have come under cyberattack in recent weeks, with hundreds of email passwords being stolen.

Similarly, American and European officials have recently warned that hospitals and research institutions are increasingly coming under siege from hackers. According to the FBI, online crime has quadrupled since the beginning of the pandemic. In light of this cybercrime wave, it seems that now is the time for more internet privacy protection, not less.

Internet users across America, and around the world, rely on end-to-end encryption for countless uses online. This reliance has only increased during the COVID-19 pandemic, as more consumers turn to online solutions.

Weakening internet privacy protections to fight crime might benefit law enforcement, but it would introduce new risk to law-abiding consumers.

[Source: This article was published in insidesources.com By Oliver McPherson-Smith- Uploaded by the Association Member: Jennifer Levin]

Categorized in Internet Privacy

The scientific community worldwide has mobilized with unprecedented speed to tackle the COVID-19 pandemic, and the emerging research output is staggering. Every day, hundreds of scientific papers about COVID-19 come out, in both traditional journals and non-peer-reviewed preprints. There's already far more than any human could possibly keep up with, and more research is constantly emerging.

And it's not just new research. We estimate that there are as many as 500,000 papers relevant to COVID-19 that were published before the outbreak, including papers related to the outbreaks of SARS in 2002 and MERS in 2012. Any one of these might contain the key information that leads to a treatment or a vaccine for COVID-19.

Traditional methods of searching through the research literature just don't cut it anymore. This is why we and our colleagues at Lawrence Berkeley National Lab are using the latest artificial intelligence techniques to build COVIDScholar, a search engine dedicated to COVID-19. COVIDScholar includes tools that pick up subtle clues, like similar drugs or research methodologies, to recommend relevant research to scientists. AI can't replace scientists, but it can help them gain new insights from more papers than they could read in a lifetime.

Why it matters

When it comes to finding effective treatments for COVID-19, time is of the essence. Scientists spend 23% of their time searching for and reading papers. Every second our tools can save them is more time to spend making discoveries in the lab and analyzing data.

AI can do more than just save scientists time. Our group's previous work showed that AI can capture latent scientific knowledge from text, making connections that humans missed. There, we showed that AI was able to suggest new, cutting-edge functional materials years before their discovery by humans. The information was there all along, but it took combining information from hundreds of thousands of papers to find it.


We are now applying the same techniques to COVID-19, to find existing drugs that could be repurposed, genetic links that might help develop a vaccine or effective treatment regimens. We're also starting to build in new innovations, like using molecular structures to help find which drugs are similar to each other, including those that are similar in unexpected ways.



How we do this work

The most important part of our work is the data. We've built web scrapers that collect new papers as they're published from a wide variety of sources, making them available on our website within 15 minutes of their appearance online. We also clean the data, fixing mistakes in formatting and comparing the same paper from multiple sources to find the best version. Our machine learning algorithms then go to work on the paper, tagging it with subject categories and marking work important to COVID-19.
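The collection-and-cleaning pipeline described above can be sketched in miniature. This is a hypothetical illustration, not COVIDScholar's actual code: the record fields (`title`, `doi`, `abstract`) and the "longest abstract wins" heuristic are assumptions made for the sketch.

```python
import hashlib
import re

def normalize(text):
    """Collapse whitespace and strip incidental formatting noise."""
    return re.sub(r"\s+", " ", text).strip()

def dedupe_key(title, doi=None):
    """Prefer a DOI when one is present; fall back to a hash of the title."""
    if doi:
        return doi.lower()
    return hashlib.sha256(normalize(title).lower().encode()).hexdigest()

def merge_records(records):
    """Keep the most complete version of each paper seen across sources."""
    best = {}
    for rec in records:
        key = dedupe_key(rec["title"], rec.get("doi"))
        # "Best" here simply means the record with the longest abstract --
        # a crude stand-in for real completeness scoring.
        if key not in best or len(rec.get("abstract", "")) > len(best[key].get("abstract", "")):
            best[key] = rec
    return list(best.values())
```

The same paper scraped from two sources (say, a preprint server and a journal site) collapses to one record, with the fuller copy surviving.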

We're also continuously seeking out experts in new areas. Their input and annotation of data is what allows us to train new AI models.

What's next

So far, we have assembled a collection of over 60,000 papers on COVID-19, and we're expanding the collection daily. We've also built search tools that group research into categories, suggest related research and allow users to find papers that connect different concepts, such as papers that connect a specific drug to the diseases it's been used to treat in the past. We're now building AI algorithms that allow researchers to plug the extracted information into quantitative models for studying topics like protein interactions. We're also starting to dig through the past literature to find hidden gems.

We hope that very soon, researchers using COVIDScholar will start to identify relationships that they might never have imagined, bringing us closer to treatments and a remedy for COVID-19.

[Source: This article was published in medicalxpress.com By Amalie Trewartha and John Dagdelen - Uploaded by the Association Member: Barbara larson]

Categorized in Online Research

Google's AI team has released a new tool to help researchers navigate a trove of coronavirus papers, journals, and articles. The COVID-19 research explorer tool is a semantic search interface that sits on top of the COVID-19 Open Research Dataset (CORD-19).

The team says that traditional search engines are adequate for answering queries such as “What are the symptoms of coronavirus?” or “Where can I get tested in my country?”. However, when it comes to more pointed questions from researchers, these search engines and their keyword-based approach fail to deliver accurate results.

Google's new tool helps researchers solve that problem. The CORD-19 database has over 50,000 journal articles and research papers related to coronavirus. However, a simple keyword search wouldn't yield reliable results. So, Google uses Natural Language Understanding (NLU)-based semantic search to answer those queries.

NLU is a subset of Natural Language Processing (NLP) that focuses on a smaller context while trying to derive the meaning of the question and draw distinct insights.

The COVID-19 research explorer tool not only returns related papers to the query, but it also highlights parts of papers that might provide relevant answers to the question. You can also ask follow-up questions to further narrow down results.

The semantic search is powered by Google's popular BERT language model. In addition, the model has been trained on BioASQ, a biomedical question-answering dataset, to enhance results.

The team built a hybrid term-neural retrieval model for better results. While the term-based model provides accuracy with search results, the neural model helps with understanding the meaning and context of the query.
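A hybrid term-neural retrieval model of the kind described can be sketched as a weighted blend of an exact-match score and an embedding similarity. This is an illustrative toy, not Google's implementation: `embed` stands in for a real neural encoder such as BERT, and the term score is a crude stand-in for a proper term-based ranker like BM25.

```python
import math
from collections import Counter

def term_score(query, doc):
    """Simple term-overlap score (stand-in for BM25 or similar)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values()) / (1 + len(query.split()))

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_score(query, doc, embed, alpha=0.5):
    """Blend exact term matching (precision) with embedding similarity
    (meaning and context), as the hybrid model above does."""
    return alpha * term_score(query, doc) + (1 - alpha) * cosine(embed(query), embed(doc))
```

With a real encoder plugged in as `embed`, the term component keeps results anchored to the query's actual words while the neural component surfaces documents that are related in meaning but share little vocabulary.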

You can read more technical details about the model here and try out the search explorer here.


[Source: This article was published in sup.news By Ivan Mehta - Uploaded by the Association Member: Wushe Zhiyang]

Categorized in Online Research

Around the world, a diverse and growing chorus is calling for the use of smartphone proximity technology to fight COVID-19. In particular, public health experts and others argue that smartphones could provide a solution to an urgent need for rapid, widespread contact tracing—that is, tracking whom infected people come in contact with as they move through the world. Proponents of this approach point out that many people already own smartphones, which are frequently used to track users’ movements and interactions in the physical world.

But it is not a given that smartphone tracking will solve this problem, and the risks it poses to individual privacy and civil liberties are considerable. Location tracking—using GPS and cell site information, for example—is not suited to contact tracing because it will not reliably reveal the close physical interactions that experts say are likely to spread the disease. Instead, developers are rapidly coalescing around applications for proximity tracing, which measures Bluetooth signal strength to determine whether two smartphones were close enough together for their users to transmit the virus. In this approach, if one of the users becomes infected, others whose proximity has been logged by the app could find out, self-quarantine, and seek testing. Just today, Apple and Google announced joint application programming interfaces (APIs) using these principles that will be rolled out in iOS and Android in May. A number of similarly designed applications are now available or will launch soon.

As part of the nearly unprecedented societal response to COVID-19, such apps raise difficult questions about privacy, efficacy, and responsible engineering of technology to advance public health. Above all, we should not trust any application—no matter how well-designed—to solve this crisis or answer all of these questions. Contact tracing applications cannot make up for shortages of effective treatment, personal protective equipment, and rapid testing, among other challenges.

COVID-19 is a worldwide crisis, one which threatens to kill millions and upend society, but history has shown that exceptions to civil liberties protections made in a time of crisis often persist much longer than the crisis itself. With technological safeguards, sophisticated proximity tracking apps may avoid the common privacy pitfalls of location tracking. Developers and governments should also consider legal and policy limits on the use of these apps. Above all, the choice to use them should lie with individual users, who should inform themselves of the risks and limitations, and insist on necessary safeguards. Some of these safeguards are discussed below. 

How Do Proximity Apps Work?

There are many different proposals for Bluetooth-based proximity tracking apps, but at a high level, they begin with a similar approach. The app broadcasts a unique identifier over Bluetooth that other, nearby phones can detect. To protect privacy, many proposals, including the Apple and Google APIs, have each phone’s identifier rotated frequently to limit the risk of third-party tracking.

When two users of the app come near each other, both apps use Bluetooth signal strength to estimate the distance between the two phones. If the apps estimate that they are less than approximately six feet (or two meters) apart for a sufficient period of time, the apps exchange identifiers. Each app logs an encounter with the other’s identifier. The users’ location is not necessary, as the application need only know whether the users are close enough together to create a risk of infection.
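In code, the logging rule just described might look like the following sketch. The calibration constants and the five-minute threshold are assumptions chosen for illustration, and real-world RSSI-to-distance estimation is far noisier than this free-space model suggests.

```python
import time

RSSI_AT_ONE_METER = -60   # assumed calibration constant; real devices vary widely
PATH_LOSS_EXPONENT = 2.0  # free-space propagation assumption

def estimate_distance_m(rssi):
    """Rough distance estimate (meters) from Bluetooth signal strength (dBm)."""
    return 10 ** ((RSSI_AT_ONE_METER - rssi) / (10 * PATH_LOSS_EXPONENT))

class EncounterLog:
    """Log another phone's identifier once it has stayed within ~2 m long enough."""
    def __init__(self, max_distance_m=2.0, min_duration_s=300):
        self.max_distance_m = max_distance_m
        self.min_duration_s = min_duration_s
        self.first_seen = {}    # identifier -> timestamp of first close reading
        self.encounters = set() # identifiers that met the distance+duration rule

    def observe(self, identifier, rssi, now=None):
        now = time.time() if now is None else now
        if estimate_distance_m(rssi) <= self.max_distance_m:
            start = self.first_seen.setdefault(identifier, now)
            if now - start >= self.min_duration_s:
                self.encounters.add(identifier)
        else:
            # Contact broken: reset the duration timer for this identifier.
            self.first_seen.pop(identifier, None)
```

Note that only the rotating identifier and a coarse timestamp are retained; no location data is involved, matching the design above.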

When a user of the app learns that they are infected with COVID-19, other users can be notified of their own infection risk. This is where different designs for the app significantly diverge.

Some apps rely on one or more central authorities that have privileged access to information about users’ devices. For example, TraceTogether, developed for the government of Singapore, requires all users to share their contact information with the app’s administrators. In this model, the authority keeps a database that maps app identifiers to contact information. When a user tests positive, their app uploads a list of all the identifiers it has come into contact with over the past two weeks. The central authority looks up those identifiers in its database, and uses phone numbers or email addresses to reach out to other users who may have been exposed. This places a lot of user information out of their own control, and in the hands of the government. This model creates unacceptable risks of pervasive tracking of individuals’ associations and should not be employed by other public health entities.

Other models rely on a database that doesn’t store as much information about the app’s users. For example, it’s not actually necessary for an authority to store real contact information. Instead, infected users can upload their contact logs to a central database, which stores anonymous identifiers for everyone who may have been exposed. Then, the devices of users who are not infected can regularly ping the authority with their own identifiers. The authority responds to each ping with whether the user has been exposed. With basic safeguards in place, this model could be more protective of user privacy. Unfortunately, it may still allow the authority to learn the real identities of infected users. With more sophisticated safeguards, like cryptographic mixing, the system could offer slightly stronger privacy guarantees.
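The second model above, where the authority stores only anonymous identifiers rather than contact details, can be sketched as follows. This is a deliberately minimal illustration: it omits the basic safeguards and cryptographic mixing the text mentions, which a real deployment would need.

```python
class ExposureRegistry:
    """Minimal sketch of a central authority that stores only anonymous
    identifiers uploaded from infected users' contact logs -- no names,
    phone numbers, or email addresses."""

    def __init__(self):
        self._exposed = set()

    def upload_contact_log(self, identifiers):
        # Called by an infected user's app with the identifiers it logged.
        self._exposed.update(identifiers)

    def ping(self, identifier):
        # A healthy user's app asks: "was my identifier logged by anyone
        # who later tested positive?" The authority answers yes or no.
        return identifier in self._exposed
```

Even in this reduced form, the authority sees which identifiers ping it and when, which is why the text notes that metadata (like IP addresses) could still re-identify users without further safeguards.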

Some proposals go further, publishing the entire database publicly. For example, Apple and Google’s proposal, published April 10, would broadcast a list of keys associated with infected individuals to nearby people with the app. This model places less trust in a central authority, but it creates new risks to users who share their infection status that must be mitigated or accepted.

Some apps require authorities, like health officials, to certify that an individual is infected before they may alert other app users. Other models could allow users to self-report infection status or symptoms, but those may result in significant numbers of false positives, which could undermine the usefulness of the app.

In short, while there is early promise in some of the ideas for engineering proximity tracking apps, there are many open questions.

Would Proximity Apps Be Effective?

Traditional contact tracing is fairly labor intensive, but can be quite detailed. Public health workers interview the person with the disease to learn about their movements and people with whom they have been in close contact. This may include interviews with family members and others who may know more details. The public health workers then contact these people to offer help and treatment as needed, and sometimes interview them to trace the chain of contacts further. It is difficult to do this at scale during a pandemic. In addition, human memory is fallible, so even the most detailed picture obtained through interviews may have significant gaps or mistakes.

Proximity-app contact tracing is not a substitute for public health workers’ direct intervention. It is also doubtful that a proximity app could substantially help with COVID-19 contact tracing during a time like the present, when community transmission is so high that much of the general population is sheltering in place, and when there is not sufficient testing to track the virus. When there are so many undiagnosed infectious people in the population, a large portion of whom are asymptomatic, a proximity app will be unable to warn of most infection risks. Moreover, without rapid and widely available testing, even someone with symptoms cannot confirm an infection to begin the notification process. And everyone is already being asked to avoid proximity to people outside their household.

However, such an app might be helpful with contact tracing in a time we hope is coming soon, when community transmission is low enough that the population can stop sheltering in place, and when there is sufficient testing to quickly and efficiently diagnose COVID-19 at scale.

Traditional contact tracing is only useful for contacts that the subject can identify. COVID-19 is exceptionally contagious and may be spread from person to person during even short encounters. A brief exchange between a grocery clerk and a customer, or between two passengers on public transportation, may be enough for one individual to infect the other. Most people don’t collect contact information for everyone they encounter, but apps can do so automatically. This might make them useful complements to traditional contact tracing.


But an app will treat the contact between two people passing on the sidewalk the same as the contact between roommates or romantic partners, though the latter carry much greater risks of transmission. Without testing an app in the real world—which entails privacy and security risks—we can’t be sure that an app won’t also log connections between people separated by walls or in two adjacent cars stopped at a light. Apps also don’t take into account whether their users are wearing protective equipment, and may serially over-report exposure to users like hospital staff or grocery store workers, despite their extra precautions against infection. It is not clear how the technological constraints of Bluetooth proximity calculations will inform public health decisions to notify potentially infected individuals. Is it better for these applications to be slightly oversensitive and risk over-notifying individuals who may not have actually been standing within six feet of an infected user for the requisite amount of time? Or should the application have higher thresholds so that a notified user may have more confidence they were truly exposed?

Furthermore, these apps can only log contacts between two people who each have a phone on their person that is Bluetooth enabled and has the app installed. This highlights another necessary condition for a proximity app to be effective: its adoption by a sufficiently large number of people. The Apple and Google APIs attempt to address this problem by offering a common platform for health authorities and developers to build applications that offer common features and protections. These companies also aspire to build their own applications that will interoperate more directly and speed adoption. But even then, a sizable percentage of the world’s population—including a good part of the population of the United States—may not have access to a smartphone running the latest version of iOS or Android. This highlights the need to continue to employ tried-and-true public health measures such as testing and traditional contact tracing, to ensure that already-marginalized populations are not missed.

We cannot solve a pandemic by coding the perfect app. Hard societal problems are not solved by magical technology, among other reasons because not everyone will have access to the necessary smartphones and infrastructure to make this work. 

Finally, we should not excessively rely on the promise of an unproven app to make critical decisions, like deciding who should stop sheltering in place and when. Reliable applications of this sort typically go through many rounds of development and layers of testing and quality assurance, all of which takes time. And even then, new apps often have bugs. A faulty proximity tracing app could lead to false positives, false negatives, or maybe both. 

Would Proximity Apps Do Too Much Harm to Our Freedoms?

Any proximity app creates new risks for technology users. A log of a user’s proximity to other users could be used to show who they associate with and infer what they were doing. Fear of disclosure of such proximity information might chill users from participating in expressive activity in public places. Vulnerable groups are often disparately burdened by surveillance technology, and proximity tracking may be no different. And proximity data or medical diagnoses might be stolen by adversaries like foreign governments or identity thieves.

To be sure, some commonly used technologies create similar risks. Many track and report your location, from Fitbit to Pokemon Go. Just carrying a mobile phone brings the risk of tracking through cell tower triangulation. Stores try to mine customer foot traffic through Bluetooth. Many users are “opted in” to services like Google’s location services, which keep a detailed log of everywhere they have gone. Facebook attempts to quantify associations between people through myriad signals, including using face recognition to extract data from photographs, linking accounts to contact data, and mining digital interactions. Even privacy-preserving services like Signal can expose associations through metadata.

So the proposed addition of proximity tracking to these other extant forms of tracking would not be an entirely new threat vector. But the potentially global scale of contact tracing APIs and apps, and their collection of sensitive health and associational information, presents new risks for more users.

Context matters, of course. We face an unprecedented pandemic. Tens of thousands of people have died, and hundreds of millions of people have been instructed to shelter in place. A vaccine is expected in 12 to 18 months. While this gives urgency to proximity app projects, we must also remember that this crisis will end, but new tracking technologies tend to stick around. Thus proximity app developers must be sure they are developing a technology that will preserve the privacy and liberty we all cherish, so we do not sacrifice fundamental rights in an emergency. Providing sufficient safeguards will help mitigate this risk. Full transparency about how the apps and the APIs operate, including open source code, is necessary for people to understand, and give their informed consent to, the risks.

Does a Proximity App Have Sufficient Safeguards?

We urge app developers to provide, and users to require, the following necessary safeguards:

Consent

Informed, voluntary, and opt-in consent is the fundamental requirement for any application that tracks a user’s interactions with others in the physical world. Moreover, people who choose to use the app and then learn they are ill must also have the choice of whether to share a log of their contacts. Governments must not require the use of any proximity application. Nor should there be informal pressure to use the app in exchange for access to government services. Similarly, private parties must not require the app’s use in order to access physical spaces or obtain other benefits.

Individuals should also have the opportunity to turn off the proximity tracing app. Users who consent to some proximity tracking might not consent to other proximity tracking, for example, when they engage in particularly sensitive activities like visiting a medical provider, or engaging in political organizing. People can withhold this information from traditional contact tracing interviews with health workers, and digital contact tracing must not be more intrusive. People are more likely to turn on proximity apps in the first place (which may be good for public health) if they know they have the prerogative to turn them off and back on when they choose.

While it may be tempting to mandate use of a contact tracing app, the interference with personal autonomy is unacceptable. Public health requires trust between public health officials and the public, and fear of surveillance may cause individuals to avoid testing and treatment. This is a particularly acute concern in marginalized communities that have historical reasons to be wary of coerced participation in the name of public health. While some governments may disregard the consent of their citizens, we urge developers not to work with such governments.

Minimization

Any proximity tracking application for contact tracing should collect the least possible information. This is probably just a record of two users being near each other, measured through Bluetooth signal strength plus device types, and a unique, rotating marker for the other person’s phone. The application should not collect location information. Nor should it collect time stamp information, except maybe the date (if public health officials think this is important to contact tracing).

The system should retain the information for the least possible amount of time, which likely is measured in days and weeks and not months. Public health officials should define the increment of time for which proximity data might be relevant to contact tracing. All data that is no longer relevant must be automatically deleted.
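A minimal sketch of on-device retention limits follows. The two-week window is an assumption for illustration; as noted above, the actual increment should come from public health officials.

```python
import time

RETENTION_SECONDS = 14 * 24 * 3600  # assumed two-week window; the right value
                                    # is a public-health decision, not ours

class ProximityStore:
    """Keep proximity records only as long as they are epidemiologically useful."""

    def __init__(self, retention_s=RETENTION_SECONDS):
        self.retention_s = retention_s
        self._records = []  # (timestamp, identifier) pairs, held on-device only

    def add(self, identifier, now=None):
        self._records.append((time.time() if now is None else now, identifier))

    def prune(self, now=None):
        """Automatically delete everything older than the retention window."""
        now = time.time() if now is None else now
        self._records = [(t, i) for t, i in self._records
                         if now - t <= self.retention_s]

    def identifiers(self):
        return [i for _, i in self._records]
```

Calling `prune` on a schedule (or on every write) enforces the "automatically deleted" requirement rather than leaving deletion to user action.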

Any central authority that maintains or publishes databases of anonymous identifiers must not collect or store metadata (like IP addresses) that may link anonymous identifiers to real people.

The application should collect information solely for the purpose of contact tracing. Furthermore, there should be hard barriers between (a) the proximity tracking app and (b) anything else an app maker is collecting, such as aggregate location data or individual health records.

Finally, to the greatest extent possible, information collected should reside on a user’s own device, rather than on servers run by the application developer or a public health entity. This presents engineering challenges. But lists of devices with which the user has been in proximity should stay on the user’s own device, so that checking whether a user has encountered someone who is infected happens locally. 

Information security

An application running in the background on a phone and logging a user’s proximity to other users presents considerable information security risks. As always, limiting the attack surface and the amount of information collected will lower these risks. Developers should open-source their code and subject it to third-party audits and penetration testing. They should also publish details about their security practices.

Further engineering may be necessary to ensure that adversaries cannot compromise a proximity tracing system’s effectiveness or derive revealing information about the users of the application. This would include preventing individuals from falsely reporting infections as a form of trolling or denial of service, as well as ensuring that well-resourced adversaries who monitor metadata cannot identify individuals using the app or log their connections with others.

“Anonymous” identifiers must not be linkable. Regularly rotating identifiers used by the phone is a start, but if an adversary can learn that multiple identifiers belong to the same user, it greatly increases the risk that they can tie that activity to a real person. As we understand Apple and Google’s proposal, users who test positive are asked to upload keys that tie together all their identifiers for a 24-hour period. (We have asked Apple and Google for clarification.) This could allow trackers to collect rotating identifiers if they had access to a widespread network of Bluetooth readers, then track the movements of infected users over time. This breaks the safeguards created by using rotating identifiers in the first place. For that reason, rotating identifiers must be uploaded to any central authority or database in a way that doesn’t reveal the fact that many identifiers belong to the same person. This may require that the upload of a single user’s tokens be batched with other user data or spread out over time.
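The linkability problem is easiest to see in code. The sketch below mirrors the general shape of a scheme in which rotating identifiers are derived from a single daily key via HMAC; it is not the exact Apple/Google construction, and the 144-interval day (one identifier per ten minutes) is an assumption for illustration.

```python
import hashlib
import hmac

def rolling_identifiers(daily_key, intervals=144):
    """Derive short-lived identifiers from one daily key with HMAC-SHA256.
    Each interval gets its own 16-byte identifier; observers who see only
    individual identifiers cannot relate them to one another."""
    return [hmac.new(daily_key, interval.to_bytes(4, "big"),
                     hashlib.sha256).digest()[:16]
            for interval in range(intervals)]
```

The catch is exactly the one described above: anyone who later learns `daily_key` (for example, from a published list of infected users’ keys) can regenerate, and therefore link, every identifier derived from it, reconstructing that person’s movements past any network of Bluetooth readers.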

Finally, governments might try to force tech developers to subvert the limits they set, such as changing the application to report contact lists to a central authority. Transparency will mitigate these risks, but they remain inherent in building and deploying such an application. This is one of the reasons we call on developers to draw clear lines about the uses of their products and to pledge to resist government efforts to meddle in the design, as we’ve seen companies like Apple do in the San Bernardino case.

Transparency

Entities that develop these apps must publish reports about what they are doing, how they are doing it, and why they are doing it. They must also publish open source code, as well as policies that address the above privacy and information security issues. These should include commitments to avoid other uses of information collected by the app and a pledge to avoid government interference to the extent allowed by law. Stated as application policy, this should also allow enforcement of violations through consumer protection laws. 

Addressing Bias

As discussed above, contact tracing applications will leave out individuals without access to the latest technology. They will also favor those predisposed to count on technology companies and the government to address their needs. We must ensure that developers and the government do not directly or indirectly leave out marginalized groups by relying on these applications to the exclusion of other interventions.

On the other side, these apps may lead to many more false positives for certain kinds of users, such as workers in the health or service sectors. This is another reason that contact-tracing apps must not be used as a basis to exclude people from work, public gatherings, or government benefits.

Expiration

When the COVID-19 crisis ends, any application built to fight the disease should end as well. Defining the end of the crisis will be a difficult question, so developers should ensure that users can opt out at any point. They should also consider building time limits into their applications themselves, along with regular check-ins with the users as to whether they want to continue broadcasting. Furthermore, as major providers like Apple and Google throw their weight behind these applications, they should articulate the circumstances under which they will and will not build similar products in the future.

Technology has the power to amplify society’s efforts to tackle complex problems, and this pandemic has already inspired many of the best and brightest. But we’re also all too familiar with the ability of governments and private entities to deploy harmful tracking technologies. Above all, even as we fight COVID-19, we must ensure that the word “crisis” does not become a magic talisman that can be invoked to build new and ever more clever means of limiting people’s freedoms through surveillance.

[Source: This article was published in eff.org By Andrew Crocker, Kurt Opsahl, and Bennett Cyphers - Uploaded by the Association Member: Anna K. Sasaki]

Categorized in Internet Privacy

By building contact-tracing into their operating systems, the companies could make a difference in the global pandemic response

Last week, Apple and Google surprised us with an announcement that the companies are spinning up a system to enable widespread contact tracing in an effort to contain the COVID-19 pandemic. The effort is barely two and a half weeks old, the companies said, and so there are many open questions about how it will work. On Monday afternoon, the companies invited us to call in and ask some questions, and I joined the call to do just that.

The basic idea is that as jurisdictions flatten the curve of infection and begin to consider re-opening parts of society, they need to implement a comprehensive “test and trace” scheme. You want to test people widely and thoroughly for the disease, as this article by Umair Irfan from Monday explains. And then, as you discover new cases, you want to see who those people may have come in contact with during the time that they were infectious.

Historically, this has been a manual process. Since the COVID-19 outbreak began, some countries have turned to technological means in an effort to enable public health authorities to find more people who may have been exposed and do so more efficiently. So far, it’s not clear that tech-enabled contact tracing has been all that effective. The system relies on voluntary participation, which has generally been weak. And the Bluetooth technology on which the system depends carries with it a high potential for false positives: it’s just not powerful enough to distinguish cases where people were in very close proximity from ones in which they were 15 or more feet away.

My primary interest in this story — beyond the highly unusual nature of the collaboration between Apple and Google — is how effective it could be. But there are lots of other questions about how it will work that strike me as just as interesting. Let’s take a look at what people are saying, and what we learned today.

The biggest concern most people have expressed about the collaboration is that it will lead to damaging privacy violations. Democratic senators have led the charge here, sending an open letter to the companies expressing their fears. I’m less worried. For one thing, Apple and Google’s system is cleverly designed to maximize individual privacy; it avoids capturing location data and instead records only the proximity of your smartphone to someone else’s. And for another, I value my own privacy less during a public health emergency. I trust Apple and Google to prevent my personal health information from being identified as mine and shared with others, but given the design of the system, I fail to see how a breach would be catastrophic even if it did somehow materialize.
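The privacy-preserving design described here can be illustrated with a short sketch. This is only an illustration of the rotating-identifier idea, with made-up key sizes and helper names; it is not the actual Apple/Google protocol, which defines its own key schedule and Bluetooth payloads:

```python
import hashlib
import hmac
import os

def rolling_id(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast identifier from a secret daily key.
    Nearby phones see only this opaque value, never identity or location."""
    return hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

class ContactLog:
    """Each phone keeps its own local log of identifiers it has heard nearby."""
    def __init__(self):
        self.heard = []  # (interval, rolling_id) pairs, stored on-device only

    def record(self, interval: int, rid: bytes):
        self.heard.append((interval, rid))

    def check_exposure(self, diagnosis_keys: list) -> bool:
        """When a diagnosed user consents, their daily keys are published.
        Matching happens on-device by re-deriving the rolling IDs."""
        for key in diagnosis_keys:
            for interval, rid in self.heard:
                if rolling_id(key, interval) == rid:
                    return True
        return False

# Two phones near each other during one broadcast interval:
alice_key = os.urandom(32)
bob_log = ContactLog()
bob_log.record(42, rolling_id(alice_key, 42))

# Later, Alice is diagnosed and consents to publish her key:
assert bob_log.check_exposure([alice_key]) is True
assert bob_log.check_exposure([os.urandom(32)]) is False
```

Note that no location or identity ever leaves the phone in this scheme; the only published artifact is a diagnosed user's keys, and every match is computed locally.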

Still, if you’re the sort of person who likes to think through worst-case scenarios, my colleague Russell Brandom walks through some ideas about how data collected as part of this scheme could theoretically be re-identified. The schemes are generally so elaborate that it’s hard for me to imagine even a nation-state undertaking them, though it’s something to keep an eye on.

The second set of concerns has to do with how the system will work in practice. Apple and Google answered a lot of questions about that subject today; here are what I took to be the most consequential.

First, the companies said that by phase two of their effort, when contact tracing is enabled at the level of the operating system, they will notify people who have opted in to their potential exposure to COVID-19 even if they have not downloaded the relevant app from their public health authority. My understanding is that the operating system itself will alert people that they may have been exposed and direct them to download the relevant public health app. This is significant because it can be hard to get people to install software; Singapore saw only 12 percent adoption of its national contact-tracing app. Putting notifications at the system level represents a major step forward for this effort, even if it still requires people to opt in.

 

Second, Google said it would distribute the operating system update through Google Play services, a part of Android controlled by the company that allows it to reach the majority of active devices. (Google says it will be available to everyone running Android 6.0, also known as Marshmallow, and higher on devices that have the Google Play store.) This is highly preferable to relying on carriers, which have historically been slow to distribute updates. It remains to be seen exactly which devices will be eligible for the update, on Android as well as on iOS. But it seems likely that the companies will be able to reach most active devices in the world — a significant feat. (Related: someone asked the companies what percentage of the population we need to use the system to get it to work. No one knows.)

Third, the companies said they would prevent abuse of the system by routing alerts through public health agencies. (They are also helping those agencies, such as Britain’s National Health Service, build apps to do just that.) While the details are still being worked out, and may vary from agency to agency, Apple and Google said they recognized the importance of not allowing people to trigger alerts based on unverified claims of a COVID-19 infection. Instead, they said, people who are diagnosed will be given a one-time code by the public health agency, which the newly diagnosed will have to enter to trigger the alert.
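A minimal sketch of that one-time-code gate might look like the following. The server class and its methods are hypothetical; the companies had not published implementation details at the time, so this only demonstrates the stated design goal of rejecting unverified claims:

```python
import secrets

class HealthAgencyServer:
    """Hypothetical sketch: only uploads accompanied by an agency-issued
    one-time code are accepted, so unverified self-reports can't trigger alerts."""
    def __init__(self):
        self.valid_codes = set()

    def issue_code(self) -> str:
        # Handed to a patient along with a confirmed diagnosis
        code = secrets.token_hex(4)
        self.valid_codes.add(code)
        return code

    def submit_diagnosis(self, code: str, diagnosis_keys: list) -> bool:
        if code not in self.valid_codes:
            return False               # reject unverified claims
        self.valid_codes.discard(code)  # one-time use
        # ...publish diagnosis_keys for on-device matching...
        return True

server = HealthAgencyServer()
code = server.issue_code()
assert server.submit_diagnosis("made-up", []) is False  # no alert without a code
assert server.submit_diagnosis(code, []) is True
assert server.submit_diagnosis(code, []) is False       # codes cannot be reused
```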

Fourth, the companies promised to use the system only for contact tracing, and to dismantle the network when it becomes appropriate. Some readers have asked me whether the system might be put to other uses, such as targeted advertising, or whether non-governmental organizations might be given access to it. Today Apple and Google explicitly said no.

Fifth, I’ve heard conflicting claims about the ability of Bluetooth-based tracking to measure distances. Last week I told you that Bluetooth could not distinguish between phones that were within six feet of one another (the threshold public health guidance cites) and those that might be 20 or even 30 feet away. One reader pointed me to a part of the Bluetooth standard known as received signal strength indication, or RSSI, that is meant to offer fine-grained location detail.

Apple told me that the effectiveness of RSSI is blunted by various confounding factors: the orientation of the devices relative to one another, whether a phone is in a backpack or otherwise shielded from the signal, and so on. Taken together, those factors undermine the confidence of the system in how close two phones might be to one another. But it continues to be a subject of exploration.
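To see why RSSI-based ranging is fragile, it helps to look at the standard way signal strength is converted to distance: the log-distance path-loss model. The calibration constants below are illustrative, and in practice they shift with exactly the confounders Apple described, such as device orientation and shielding:

```python
def estimate_distance(rssi_dbm: float, measured_power: float = -59.0,
                      path_loss_exponent: float = 2.0) -> float:
    """Rough distance estimate (meters) from RSSI using the log-distance
    path-loss model. measured_power is the expected RSSI at 1 meter;
    both constants are illustrative and vary by device and environment."""
    return 10 ** ((measured_power - rssi_dbm) / (10 * path_loss_exponent))

# At the calibration power the estimate is 1 meter; weaker signals read farther.
assert round(estimate_distance(-59.0), 2) == 1.0
assert estimate_distance(-79.0) > estimate_distance(-69.0)
```

A phone in a backpack can easily attenuate the signal by 10 dBm or more, which under this model moves the estimate by a factor of several meters, so the same physical distance can produce wildly different readings.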

So, to wrap up: do we feel more or less optimistic today about tech-enabled contact tracing than we did before? This post from security researcher Ross Anderson from over the weekend lays out a lot of the concerns I first shared here last week, plus some extra ones. “I suspect the tracing apps are really just do-something-itis,” Anderson writes. “Most countries now seem past the point where contact tracing is a high priority; even Singapore has had to go into lockdown.”

On the flip side, argues Ben Thompson, there could be value in laying the technological groundwork now for expanded efforts later. He writes:

“They are creating optionality. When and if society decides that this sort of surveillance is acceptable (and, critically, builds up the other components — like testing — of an effective response) the technology will be ready; it is only a flip of a switch for Apple and Google to centralize this data (or, perhaps as a middle ground, enable mobile device management software used by enterprises, centralize this capability). This is no small thing considering that software is not built in a day.”

I still think that digital contact tracing is unlikely to be one of the two or three most important aspects of a country’s coronavirus response plan. Experts have told me that social distancing, wide-scale testing, and isolating sick individuals are significantly more important. And when it comes to contact tracing, we know that human beings often do a better job than smartphones — and some have argued that we need to hire hundreds of thousands of them to do the job.

At the same time, it’s possible to see how digital contact tracing could at least complement other, related efforts, including manual contact tracing. Compared to what, say, Hong Kong is doing to test and trace (distributing digital tracking bracelets to everyone getting off the plane at the airport), what Apple and Google have proposed can only be described as a half measure. But in the United States at least, it may be the case that a series of half measures are all we will have to rely on.

THE RATIO

Today in news that could affect public perception of the big tech platforms.

⬆️Trending up: Oncologists say they are getting some of their best information lately on Twitter, and some are even crowdsourcing answers to difficult questions from other doctors.

⬇️ Trending down: Quarantined Amazon workers say they have not yet been paid, despite the company’s new policy about quarantine sick leave. The company says the workers will eventually get paid.

PANDEMIC

⭐ Amazon is hiring 75,000 additional workers after it filled more than 100,000 positions in the last month. The hiring spree is meant to help the company meet a surge in demand due to the coronavirus pandemic, reports Annie Palmer at CNBC:

As it continues to hire more workers, Amazon has also raised employees’ hourly pay and doubled overtime pay for warehouse workers. Through the end of April, warehouse and delivery workers can earn an additional $2 per hour in the U.S., 2 pounds per hour in the U.K., and approximately 2 euros per hour in many EU countries. Amazon currently pays $15 per hour or more in some areas of the U.S. for warehouse and delivery jobs.

Amazon has announced several benefits changes on top of the pay increases. The company has allowed workers to take unlimited unpaid time off and provides two weeks of paid leave for workers who test positive for the virus or are in quarantine.

Amazon is going to start waitlisting new grocery delivery customers and curtail shopping hours at some Whole Foods stores. The move is meant to prioritize orders from existing customers buying food online during the coronavirus outbreak. (Meanwhile, people have resorted to using scripts downloaded from Github to scrounge for available delivery slots.) (Krystal Hu / Reuters)

After the Staten Island walkout, Amazon finally started checking workers’ temperatures at the warehouse entrance, enforcing social distancing rules, and piloting fog disinfectant. But some people say the rollout of the new safety measures has been uneven. Often, changes are made only after workers exert pressure. (Josh Dzieza / The Verge)

Here’s what nine Amazon workers have to say about working during the pandemic. “I feel like this job is essential because people need deliveries, but it’s also essential for me because I need the money to feed my family,” one said. (Louise Matsakis / Wired)

Amazon was already powerful. But with 250,000 US stores closed due to the pandemic, the company is poised to become even more dominant whenever the economy returns to normal. (Jason Del Rey / Recode)

Coronavirus is driving new surveillance systems in at least 28 countries around the world. OneZero is tracking the expansion of these programs, some of which undermine personal privacy. (And some of which are fairly ho-hum projects that aggregate anonymized data.) (Dave Gershgorn / OneZero)

The Supreme Court will start conducting oral arguments over teleconference, a major change spurred by the novel coronavirus pandemic. It will also stream a live audio feed — another first for the court. (Adi Robertson / The Verge)

The US economy isn’t going back to normal anytime soon, according to public policy think tanks and research centers. The groups have been putting together plans on how to reopen the US economy, and all say that without a vaccine, ending social distancing will be incredibly difficult. (Ezra Klein / Vox)

President Donald Trump has been promoting the antimalarial drugs chloroquine and hydroxychloroquine as treatments for the novel coronavirus. So far, there’s not enough evidence to say if they actually work. (And a study into their effectiveness was halted on Monday over the risk of fatal heart complications.) Trump’s comments, which have been covered by the mainstream press, show misinformation isn’t just a problem for social media. (Adi Robertson / The Verge)

Russian President Vladimir Putin has played a principal role in spreading false information about the origins of the novel coronavirus. The move is part of his wider effort to discredit the West and destroy his enemies from within. (William J. Broad / The New York Times)

In China, state media and influential diplomats are also pushing misinformation about the origins of COVID-19. In doing so, they’re legitimizing rumors from the recesses of the internet — and ensuring mass awareness of those ideas. (Renée DiResta / The Atlantic)

The Senate sergeant at arms warned offices that Zoom poses a high risk to privacy and could leave their data and systems exposed. The law enforcement chief urged lawmakers and their staff to use Skype instead. (Cristiano M. Lima / Politico)

Google is making changes to search results to make it easier for people to find virtual health care options. Virtual health care providers have seen a surge in demand due to the COVID-19 pandemic. (Jay Peters / The Verge)

Google launched a website dedicated to coronavirus updates in India. The company also tweaked its search engine and YouTube to prominently display trustworthy information about the pandemic. (Manish Singh / TechCrunch)

Google created an application portal to help the state of New York deal with a historic surge in unemployment filings. The company said it could potentially bring a similar service to other states as well. This is cool! (Jennifer Elias / CNBC)

The coronavirus pandemic has allowed Google to pull far ahead of its competitors in getting its tech into classrooms. Google Classroom, a free service teachers use to send out assignments and communicate with students, has doubled active users to more than 100 million since the beginning of March. (Gerrit De Vynck and Mark Bergen / Bloomberg)

Apple Maps will soon display COVID-19 testing locations as part of the company’s broader efforts to fight the novel coronavirus. (Benjamin Mayo / 9To5Mac)

WhatsApp rolled out its change to message forwarding to stop misinformation from spreading. Now, viral messages can only be forwarded to one person at a time. (Rita El Khoury / Android Police)

YouTube traffic is skyrocketing, but creators are still struggling. That’s because advertising rates have dropped significantly during the coronavirus pandemic. (Chris Stokel-Walker / OneZero)

Related: The audience for esports is soaring, but coronavirus has slowed down the ad market and made capitalizing on those viewers very difficult. (Seb Joseph / Digiday)

Coronavirus has ravaged the American job market, but big tech companies, including Apple, Google, Amazon, and Facebook, are still hiring. Facebook is planning to fill more than 10,000 product and engineering roles to help keep up with surging traffic. (Chip Cutter and Patrick Thomas / The Wall Street Journal)

More people are watching streamed sexual performances online due to the coronavirus quarantine. But models still aren’t earning more. They say new viewers aren’t tipping as well, and there’s a lot of competition. (Gabrielle Drolet / The New York Times)

People are getting dumped over Zoom. And yes, we’re apparently calling the trend “Zumping.” (The Guardian)

[Source: This article was published in theverge.com By Casey Newton - Uploaded by the Association Member: Jasper Solander]


TraceTogether works by exchanging short-distance Bluetooth signals between phones to detect other participating TraceTogether users in close proximity.

[UPDATE: According to Singapore's Minister for Foreign Affairs Vivian Balakrishnan's Facebook post on the morning of March 23, TraceTogether has been installed by more than 620,000 users so far.]

The Government Technology Agency of Singapore (GovTech), the in-house IT agency of the Singapore public service, in collaboration with the Ministry of Health (MOH) today announced the launch of a mobile app called TraceTogether, to help support and supplement current contact tracing efforts in the nation-state in an effort to reduce the spread of COVID-19.

HOW IT WORKS

Currently, contact tracing relies on the recall and memory of interviewees. There are, however, instances when interviewees cannot remember all their contacts, or do not know whom they have been in contact with.

 

TraceTogether works by exchanging short-distance Bluetooth signals between phones to detect other participating TraceTogether users in close proximity. Records of such encounters are stored locally on each user’s phone. If a user is interviewed by MOH as part of the contact tracing efforts, they can consent to send their TraceTogether data to MOH.

This facilitates the contact tracing process and enables contact tracers to inform TraceTogether users who are close contacts of COVID-19 cases more quickly, allowing those users to take the necessary action sooner, such as monitoring their own health closely for signs of flu-like symptoms.

The TraceTogether app can be downloaded from Google Play or the Apple App Store.

PRIVACY SAFEGUARDS

TraceTogether does not collect or use location data of any kind, and does not access a user’s phone contact list or address book. It only uses Bluetooth data to establish a contact and does not store information about where the contact happened. 

Secondly, no data is uploaded to the government by default. All data collected is stored locally on the user’s phone and encrypted. If a person is confirmed to be infected with COVID-19, the government will then request that they upload the data to facilitate contact tracing of their close contacts.

If a user does not come into close contact with a COVID-19 case, TraceTogether data older than 21 days will be automatically deleted.
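That retention rule is simple to express in code. The sketch below is an assumption about how such pruning could work, not TraceTogether's published implementation:

```python
import time

RETENTION_SECONDS = 21 * 24 * 60 * 60  # keep encounter records for 21 days

def prune_encounters(encounters, now=None):
    """Drop locally stored encounter records older than 21 days.
    `encounters` is a list of (timestamp, record) pairs kept on-device."""
    now = time.time() if now is None else now
    return [(ts, rec) for ts, rec in encounters if now - ts <= RETENTION_SECONDS]

now = 10_000_000.0
log = [(now - 5 * 86400, "recent contact"), (now - 30 * 86400, "stale contact")]
assert prune_encounters(log, now) == [(now - 5 * 86400, "recent contact")]
```

Running a pass like this on a schedule (say, once a day) would bound both the privacy exposure and the storage footprint of the app.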

THE LARGER PICTURE

According to the WHO, contact tracing is important as closely watching the contacts after exposure to an infected person will help the contacts to get care and treatment, and will prevent further transmission of a virus. There are three basic steps in contact tracing: contact identification, contact listing and contact follow-up.

In China, where the COVID-19 virus is said to have originated, WHO’s Bruce Aylward said that strict quarantine, isolation, and contact tracing measures were justified in the name of saving lives and avoiding the swamping of health systems with seriously ill cases that even developed-country health systems often lack the capacity to treat, according to a report by Health Policy Watch.

In the case of Singapore, just as the first cases of COVID-19 popped up at the end of January, doctors promptly identified and isolated those people and started contact tracing. As of 20 March 2020, Singapore has a total of 385 confirmed cases with zero COVID-19 related deaths. MOH has identified 7,065 close contacts who have been quarantined. Of these, 2,437 are currently quarantined, and 4,628 have completed their quarantine (as of 12pm, 20 March). 

The extensive contact tracing approach by Singapore played a significant role in its efforts to contain the spread of COVID-19 within the country. 

[Source: This article was published in mobihealthnews.com By Dean Koh - Uploaded by the Association Member: Rene Meyer]


Bing on Monday will begin pulling important COVID-19 information from government, business, and travel websites through special schema.org markup that will allow people to search for and find that information on the search engine.

SEOs and website developers can use the SpecialAnnouncement schema markup to serve up in search results disease statistics, testing facilities and testing guidelines, school closures, travel restrictions (including public transit closures), and special announcements from businesses related to hours or changes in service.

“We’re still developing all of the various scenarios for how the markup may appear,” Christi Olson, Microsoft evangelist, wrote in an email to Search Marketing Daily. “As more websites start marking up their sites with the specialannoucement code, we’ll extend and develop additional scenarios for how the data will surface in the search results.”

 

SpecialAnnouncement markup for businesses might show updates to business hours. Business services can appear in local listings and in maps, for example. The markup for COVID-19 testing facilities may be used to help locate a nearby facility within the search results page or within maps. The markup for public transportation closures can appear for related queries.

The markup for DiseaseSpreadStatistics and for testing and guidelines may be integrated into Bing’s COVID tracker.

The markup for government health agencies will assist Bing in accessing statistics via country, state or province, administrative area, and city, but they must use the schema.org markup for diseaseSpreadStatistics.

Only official government sites reporting case statistics for a specific region can use this tag. Information in the markup must be up to date and consistent with the statistics the site displays to the general public. Special announcements must include the date and time posted, as well as the time the statistics were first reported.

There is also a SpecialAnnouncement schema markup for local businesses, hospitals, schools, and government offices. Again, the data must be posted on an official website and refer only to changes related to COVID-19. The name of the special announcement must be easily identified within the body copy on the website page. It must include the posting date and the time the announcement expires.
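Putting those requirements together, a business announcement could be expressed as JSON-LD along these lines. The property names are real schema.org SpecialAnnouncement fields; the business, URL, and dates are made up for illustration:

```python
import json

# Hypothetical announcement for an example business. The keys mirror the
# schema.org SpecialAnnouncement type: a clearly identifiable name, the
# posting date, an expiration, and a COVID-19 category.
announcement = {
    "@context": "https://schema.org",
    "@type": "SpecialAnnouncement",
    "name": "Reduced store hours during COVID-19",
    "text": "Open 9am to 5pm until further notice.",
    "datePosted": "2020-04-14T08:00:00-04:00",
    "expires": "2020-06-30T23:59:59-04:00",
    "category": "https://www.wikidata.org/wiki/Q81068910",  # COVID-19 pandemic
    "announcementLocation": {
        "@type": "LocalBusiness",
        "name": "Example Hardware",
        "url": "https://example.com",
    },
}

# Embed the markup as JSON-LD in the page's <head>:
jsonld = json.dumps(announcement, indent=2)
print(f'<script type="application/ld+json">\n{jsonld}\n</script>')
```

Search engines that support the type can then read the announcement directly from the page, which is what lets Bing surface the hours change in listings and maps.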

A label detailing the special announcements related to COVID-19 with a link to the site for more details may be used on web results and in local listings shown on the search engine results page or map. This provides an easy link for customers and community members to find the latest information.

The SpecialAnnouncement schema markups gettingTestedInfo and CovidTestingFacility should be used to direct those searching for risk assessment and testing centers. They can lead searchers to specific locations at well-known healthcare facilities or government health agencies. The schema.org markup must be used to add URLs and facility locations already associated with a provider or an agency. Listing other providers’ facilities is not supported at this time.

Each scenario has its own markup for website pages; more details are available in the schema.org documentation, where marketers and webmasters will find guidance on specifying locations using the “about” property. For SpecialAnnouncement markup, this property has been updated and changed to “spatialCoverage.”

 

[Source: This article was published in mediapost.com By Laurie Sullivan - Uploaded by the Association Member: Jennifer Levin]

Categorized in Search Engine
