My earliest Google search—the earliest one Google remembers, at least—was for "tetanus shot." My most recent was for "Tracy Morgan." In between, there are 52,493 searches, and Google remembers them all.

This shouldn’t come as a surprise. I know Google knows essentially everything there is to know about me—and you probably do, too. With its algorithms and analytics tools, it probably knows more about me than I know about myself (statistically, I most frequently search Google at 10 AM on Tuesdays in March). But presented in its totality, it's still a bit creepy to look at a history of every single Google search you've ever done.

The company has now made it possible for you to export that history and download it from its servers. In one ZIP file, you can have a timestamped history of every random bit of trivia or thought you've ever had; of every restaurant you've ever cared to Yelp; of the times you looked up whether that movie you wanted to see was actually any good.
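If you do download your archive, it's straightforward to poke at it yourself. As a rough illustration, here is a short Python sketch that tallies when you search most, the "10 AM on Tuesdays" sort of statistic; the JSON field names ("event," "query_text," "timestamp_usec") follow one version of the export format and are an assumption, so adjust them to whatever your archive actually contains.

```python
# Hypothetical post-processing of a downloaded search-history archive.
# Field names ("event", "query_text", "timestamp_usec") follow one version
# of the JSON export and may differ from yours.
import json
from collections import Counter
from datetime import datetime, timezone

def load_queries(path):
    """Yield (query text, UTC timestamp) pairs from the export file."""
    with open(path) as f:
        data = json.load(f)
    for event in data.get("event", []):
        query = event["query"]
        usec = int(query["id"][0]["timestamp_usec"])  # microseconds since epoch
        yield query["query_text"], datetime.fromtimestamp(usec / 1e6, tz=timezone.utc)

# When do I search most? (the "10 AM on Tuesdays" sort of statistic)
counts = Counter(ts.strftime("%A %H:00") for _, ts in load_queries("searches.json"))
print(counts.most_common(5))
```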

It has a record of the times you've looked up hangover cures and searched weird symptoms to self-diagnose. It knows that you looked up the address of the hospital to visit a loved one, and that you needed the address of the funeral home a week later. And it knows every time you didn't turn on Incognito mode to search for porn.

Again, this is not necessarily surprising, but it is striking. We know Google uses its connected products and the information it has on you to help target ads and to personalize your experience, which makes using Google feel seamless. Maybe you’re fine with that—lots of people are willing to trade privacy for convenience, or for something that costs them no money. But what if you’re not?

It’s possible to change your settings so that Google doesn’t link your search history to your account. That’s a start, but Google still logs searches by IP address, which can potentially be tied back to you. You can also consider a company like DuckDuckGo, which runs a “search engine that doesn’t track you.”

Google’s not the only one that uses your search history, of course. The record it keeps can be, and often is, subpoenaed by the government or by law enforcement.

In the first half of last year (more recent data is not yet available), the US requested user information, including search history, from Google 12,539 times. Google complied in 84 percent of cases. There are concerns that the NSA can tap the data as well. Google says that “only you can see your history,” but how true is that, really?

Source:  http://motherboard.vice.com/read/reminder-google-remembers-everything-youve-ever-searched-for 

Categorized in Search Engine

Under Europe's "Right to be Forgotten" law, citizens there can petition Internet search providers such as Google to remove search results linked to personal information that is negative or defamatory. In many cases, these links lead to information about accusations of criminal activity or financial difficulties, which may be "delisted" if the information is erroneous or no longer relevant. 

But "gone" doesn't always mean "forgotten," according to a new study by researchers at the New York University Tandon School of Engineering, NYU Shanghai, and the Federal University of Minas Gerais in Brazil.

"The Right to Be Forgotten has been largely working and is responding to legitimate privacy concerns of many Europeans," said New York University Professor Keith Ross. "Our research shows, however, that a third-party, such as a transparency activist or a private investigator, can discover many delisted links and determine the names of the people who requested the delistings." Ross, the Leonard J. Shustek Professor of Computer Science at NYU Tandon and dean of engineering and computer science at NYU Shanghai, led the research team, which included Professor of Computer Science Virgilio Almeida and doctoral students Evandro Cunha and Gabriel Magno, all of the Federal University of Minas Gerais, and Minhui Xue, a doctoral student at NYU Shanghai.

They focused only on requests to delist content from mass media sites such as online newspapers and broadcast outlets. Although the law requires search engines to delist search links, it does not require newspaper articles and other source material to be removed from the Internet.

A hacker faces a fairly low bar if he or she knows a particular URL has been delisted. Of 283 delisted URLs used in the study, the authors successfully determined the names of the requesters in 103 cases.

But the authors also demonstrated that a hacker can prevail even when the URL is unknown, by downloading media articles about topics most commonly associated with delisting, including sexual assault and financial misconduct; extracting the names from the articles; then sending multiple queries to a European Google search site to see if the articles were delisted.
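To make the attack concrete, here is a rough Python sketch of that logic. It is an illustration of the paper's idea, not the researchers' code; scraping Google results in practice violates its terms of service and is quickly rate-limited, and the domains, parameters and helper names here are assumptions.

```python
# Rough sketch of the study's idea (illustrative only, not the researchers'
# code). Scraping Google violates its ToS and is quickly rate-limited;
# domains, parameters and helpers here are assumptions.
import requests

HEADERS = {"User-Agent": "Mozilla/5.0"}  # bare requests are usually blocked

def appears_in_results(query, article_url, google_domain):
    """Return True if article_url shows up in the first page of results."""
    resp = requests.get(
        f"https://www.{google_domain}/search",
        params={"q": query},
        headers=HEADERS,
        timeout=10,
    )
    return article_url in resp.text

def likely_delisted_for(name, article_url):
    # Delisting removes a link only from European Google domains and only
    # for queries on the requester's name. So: visible on google.com for
    # the name but missing on google.fr for the same name suggests the
    # article was delisted for that person.
    on_us = appears_in_results(f'"{name}"', article_url, "google.com")
    on_eu = appears_in_results(f'"{name}"', article_url, "google.fr")
    return on_us and not on_eu
```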

The researchers estimate that a third party could potentially determine 30 to 40 percent of the delisted mass-media URLs, along with the names of the people who made the delisting requests. Such hackers do exist and have published the names of people who requested delisting, thereby opening them to even more public scrutiny: the so-called "Streisand effect," a phenomenon named for the reclusive star, whereby an attempt to hide a piece of information has the unintended consequence of publicizing it more widely.

Their results show that the law has fundamental technical flaws that could compromise its effectiveness in the future.

Demographic analysis revealed that the majority of requesters were men, ages 20-40, and most were ordinary citizens, not celebrities. In accordance with the law, Google delisted links for people who were wrongfully charged, acquitted, or had finished serving their sentences, among other privacy grounds.

The researchers believe that defenses to these privacy attacks are limited. One possible defense would be for Google to never display the delisted URL in its search results. (Currently, Jane Doe's delisted robbery article would not show up when her name is used in a search, but would do so if the name of the bank were searched, for example.) This defense is not only a strong form of censorship, but can also be partially circumvented, they said.

A French data protection authority recently ordered Google to delist links from all of its properties including Google.com, in addition to its search engines with European suffixes. Google has so far refused, and the dispute is likely to end up in European courts. "Even if this law is extended throughout all of the Google search properties, the potential for such attacks will be unchanged and they will continue to be effective," said Almeida of the Federal University of Minas Gerais.

The researchers noted that they will never publicly share the names discovered in association with their analysis. They informed Google of the research results. 

Source:  http://phys.org/news/2016-06-weak-europe-forgotten-privacy-law.html 

Categorized in Internet Privacy

When I think about the behavior of many business people today, I imagine a breadline. These employees are the data-poor, waiting around at the end of the day on the data breadline. The overtaxed data analyst team prioritizes work for the company executives, and everyone else must be served later. An employee might have a hundred different questions about his job. How satisfied are my customers? How efficient is our sales process? How is my marketing campaign faring?

These data breadlines cause three problems present in most teams and businesses today. First, employees must wait quite a while to receive the data they need to decide how to move forward, slowing the progress of the company. Second, these protracted wait times abrade the patience of teams and encourage teams to decide without data. Third, data breadlines inhibit the data team from achieving its full potential.

Once an employee has been patient enough to reach the front of the data breadline, he gets to ask the data analyst team to help him answer his question. Companies maintain thousands of databases, each with hundreds of tables and billions of individual data points. In addition to producing data, the already overloaded data teams must translate the panoply of figures into something more digestible for the rest of the company, because with data, nuances matter.

The conversation bears more than a passing resemblance to one between a third-grade student and a librarian. Even expert data analysts lose their bearings sometimes, which results in slow response times and inaccurate responses to queries. Both erode the company’s confidence in its data.

Overly delayed by the strapped data team and unable to access the data they need from the data supply chain, enterprising individual teams create their own rogue databases. These shadow data analysts pull data from all over the company and surreptitiously stuff it into database servers under their desks. The problem with the segmented data assembly line is that errors can be introduced at any single step.

A file could be truncated when the operations team passes the data to the analyst team. The data analyst team might use an old definition of customer lifetime value. And an overly ambitious product manager might alter the data just slightly to make it look a bit more positive than it actually is. With this kind of siloed pipeline, there is no way to track how errors happen, when they happen or who committed them. In fact, the error may never be noticed. 
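One conventional remedy, offered here as an illustration rather than anything the article prescribes, is to fingerprint the data at every handoff, so that a truncation or quiet edit surfaces as a hash mismatch attributable to a specific step. A minimal Python sketch, with an invented log format:

```python
# Illustration (not from the article): fingerprint data at every handoff
# so silent corruption becomes detectable and attributable.
import hashlib
import time

audit_log = []  # in practice this would live in shared, append-only storage

def fingerprint(path):
    """SHA-256 of the file's bytes; changes if even one row is truncated."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def log_handoff(path, from_team, to_team):
    """Record who passed which exact bytes to whom, and when."""
    audit_log.append({
        "file": path,
        "sha256": fingerprint(path),
        "from": from_team,
        "to": to_team,
        "at": time.time(),
    })

# If the hash the analyst team computes on receipt differs from the hash
# the operations team logged at send time, the file changed in between;
# the log shows at exactly which handoff.
```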

Data fragmentation has another insidious consequence. It incites data brawls, where people shout, yell and labor over figures that just don’t seem to align and that point to diametrically opposed conclusions.

Imagine two well-meaning teams, a sales team and a marketing team, both planning next year’s budget. They share an objective: to exceed the company’s bookings plan. Each team independently develops a plan, using metrics like customer lifetime value, cost of customer acquisition, payback period, sales cycle length and average contract value.

When there’s no consistency in the data among teams, no one can trust each other’s point of view. So meetings like this devolve into brawls, with people arguing about data accuracy, the definition of shared metrics and the underlying sources of their two conflicting conclusions.
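The root cause is that each team computes the metrics from its own definitions. A minimal sketch of the alternative, one shared definition that both teams import, using common simplified formulas that the article itself doesn't specify:

```python
# Illustration: if both teams import the same metric definitions, their
# numbers cannot drift apart. Formulas are common simplified conventions,
# not anything the article prescribes.

def lifetime_value(arpu, gross_margin, monthly_churn):
    """Margin-adjusted revenue per customer over their expected lifetime."""
    return arpu * gross_margin / monthly_churn

def payback_months(cac, arpu, gross_margin):
    """Months of margin-adjusted revenue needed to recover acquisition cost."""
    return cac / (arpu * gross_margin)

# Sales and marketing both call the same functions, so a $100 ARPU customer
# is worth the same in both plans:
print(lifetime_value(arpu=100.0, gross_margin=0.7, monthly_churn=0.02))  # 3500.0
print(payback_months(cac=1400.0, arpu=100.0, gross_margin=0.7))          # 20.0
```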

Imagine a world where data is put into the hands of the people who need it, when they need it, not just for Uber drivers, but for every team in every company. This is data democratization, the beautiful vision of supplying employees with self-service access to the insights they need to maximize their effectiveness. This is the world of the most innovative companies today: technology companies like Uber, Google, Facebook and many others who have re-architected their data supply chains to empower their people to move quickly and intelligently. 

Source:  http://techcrunch.com/2016/06/12/data-breadlines-and-data-brawls/

Categorized in Online Research

IT security experts from Bochum, headed by Prof Dr Thorsten Holz, are developing a new method for detecting and fixing vulnerabilities in the applications run on different devices, regardless of the processor integrated in the respective device.

In the future, many everyday items will be connected to the Internet and, consequently, will become targets of attackers. Because all these devices run different types of software, supplying protection mechanisms that work for all of them poses a significant challenge.

This is the objective of the Bochum-based project "Leveraging Binary Analysis to Secure the Internet of Things," Bastion for short, funded by the European Research Council.

A shared language for all processors

Because the software running on a device more often than not remains the manufacturer's corporate secret, researchers at the Chair for System Security at Ruhr-Universität Bochum do not analyse the original source code, but rather the binary code of zeros and ones that they can read directly from a device.

However, different devices are equipped with processors of very different complexity: while an Intel processor in a computer understands more than 500 instructions, a microcontroller in an electronic key can process merely 20. An additional problem is that one and the same instruction, for example "add two numbers," is represented by different sequences of zeros and ones in the binary languages of two processor types. This makes automated analysis of many different devices difficult.

In order to perform processor-independent security analyses, Thorsten Holz's team translates the different binary languages into a so-called intermediate language. The researchers have already successfully implemented this approach for three processor architectures: Intel, ARM and MIPS.
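As a toy illustration of what such lifting does, consider mapping the "add two numbers" instruction from two different assembly syntaxes onto a single intermediate-language node. The encodings and class names below are invented for clarity and bear no relation to Bastion's actual intermediate language; real lifters also work on binary encodings, not assembly text.

```python
# Toy illustration of lifting two instruction sets to one intermediate
# language (IL). Syntax and class names are invented for clarity.
from dataclasses import dataclass

@dataclass
class ILAdd:
    """One IL node represents 'add' on every architecture."""
    dst: str
    src1: str
    src2: str

def lift(arch, instruction):
    """Map an architecture-specific add instruction onto the common IL."""
    _, operands = instruction.split(maxsplit=1)
    parts = [p.strip() for p in operands.split(",")]
    if arch == "x86":   # e.g. "add eax, ebx": destination is also a source
        dst, src = parts
        return ILAdd(dst, dst, src)
    if arch == "arm":   # e.g. "ADD r0, r1, r2": three explicit operands
        return ILAdd(*parts)
    raise NotImplementedError(arch)

# Two very different instruction sets, one analysable representation:
print(lift("x86", "add eax, ebx"))    # ILAdd(dst='eax', src1='eax', src2='ebx')
print(lift("arm", "ADD r0, r1, r2"))  # ILAdd(dst='r0', src1='r1', src2='r2')
```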

Closing security gaps automatically

The researchers then look for security-critical programming errors at the intermediate-language level. They intend to close the gaps thus detected automatically. This does not yet work for every kind of software, but the team has already demonstrated that the method is sound in principle: in 2015, the IT experts identified a security gap in Internet Explorer and succeeded in closing it automatically.

The method is expected to be completely processor-independent by the time the project is wrapped up in 2020. Integrating protection mechanisms is supposed to work for many different devices, too.

Helping faster than the manufacturers

"Sometimes, it can take a while until security gaps in a device are noticed and fixed by the manufacturers," says Thorsten Holz. This is where the methods developed by his group can help. They protect users from attacks even if security gaps had not yet been officially closed.

Source:  https://www.sciencedaily.com/releases/2016/06/160609064300.htm

Categorized in Internet of Things

Unless you’ve specifically told it not to, Google remembers everything you’ve ever searched for—a fact that’s been useful for artists, Google’s bottom line, and law enforcement investigations, among many other things. We’ve all searched for things we probably shouldn’t have from time to time, but a web developer has decided to take the shared experience of regretting a specific search to its logical extreme.

“Ruin My Search History” promises to “ruin your Google search history with a single click,” and that’s exactly what it does. Click on the magnifying glass and it takes over your browser, immediately cycling through a series of search terms ranging from the mildly embarrassing (“why doesn’t my poo float,” “smelly penis cure urgent”) to the potentially relationship-ruining (“mail order paternity test,” “attracted to mother why”) to the type of thing that might get your name on a list somewhere (“isis application form,” “cheap syria flights,” “how to kill someone hypothetically”).

Jon, the developer who made it, says more than 500,000 people have ruined their search histories in the last 24 hours. He says about a quarter of the people who visit the site aren’t brave enough to click the button.

Originally, the site was going to be a tour of the internet’s most horrible images and videos, such as Goatse and Two Girls One Cup, to “quickly get you up to speed on 15 years of horrible internet,” Jon told me in an email.

“I thought better of that and went down the route of things you'd hate for people to see in your search history,” he said. “I tried to make a semi-story out of the searches to add to the horror. And added in the person's location to the queries (though people don't seem to have noticed that).”

It’s fun, mostly harmless, and if you squint hard enough, it might even be a bit subversive. I saw it as a bit of a comment on our lack of digital privacy, anyway.

“Really not sure how I came up with the idea originally,” Jon wrote. “It was probably sparked by the never ending surveillance saga in the news: Snowden, NSA, phone taps, metadata, who searches for what.” I asked Jon if he thought there’s something to the idea that if we all search for words that are likely to be on a watchlist somewhere, we can confuse the NSA or make a comment about mass surveillance.

“I had the idea that the best way to make the government’s search surveillance useless is for us all to be on ‘the list,’” he said. “Maybe it does a bit, but if that's enough to throw their surveillance off course, it's probably not great surveillance.”

After it was posted, the website quickly went to the top of Reddit’s /r/internetisbeautiful, where people immediately began to freak the fuck out over the inclusion of ISIS-related search terms. The reaction has been so visceral, in fact, that one of the moderators has had to step in and defend leaving the link to the site—which now has warnings all over it—on the page: “We've taken adequate steps to warn redditors that this link might be something you shouldn't just blindly click,” internetisbeautiful moderator K_Lobstah wrote in an incredibly long post. “I promise the NSA is not going to black bag you in your sleep (unless you are a terrorist). I promise the police are not calling a judge off his poker game tonight to obtain an emergency search warrant for your apartment.”

Jon says it’s gotten out of hand.

“The reaction on Reddit has been mental, some people seem to be legitimately freaking out,” he said. “I guess that's just the sad times we live in. We assume the feds will turn up and that we're actually guilty because we typed some words into the internet.”

Happy searching.

Source:  http://motherboard.vice.com/en_uk/read/ruin-your-google-search-history-with-one-click-using-this-website 

Categorized in Search Engine

The Indian government recently refused to grant Google permission to cover India through Google Street View. The widely cited reason is security concerns: the panoramic images Street View makes available could aid terrorist attacks and could expose sensitive areas such as nuclear establishments and defence installations. The rejection is said to be temporary, to be reconsidered once the draft Geospatial Information Regulation Bill is finalised and passed. But while the Geospatial Bill is widely criticised for the severe restrictions it places on the use of geospatial data, it can resolve only one half of the problem with Street View: the security concerns. Street View raises a more widespread issue: the violation of people’s privacy through the capture and worldwide publication of their images.

Geospatial bill can only tackle security issues

A mere photograph, particularly of a sensitive area, is dangerous from a security point of view. The 360-degree, panoramic, ‘feet on the ground’ photographs offered by Google Street View are enough to get any reasonable government concerned. The Geospatial Bill, in its present form, is certainly broad enough to regulate Google’s Street View activities. The Bill, however, makes no provision for safeguarding individual privacy. The government may, through its provisions, grant Google permission to photograph Indian locations, but it cannot authorise Google to violate individual privacy.

Street View’s blurring policy is not adequate

The panoramic images offered by Google inevitably capture people, homes and other events at the location. Photographs are often taken without any warning, and certainly without the prior consent of the person being photographed. Street View’s privacy policy states that information by which a person can be identified, such as faces and car license plates, is automatically blurred out. Often, despite the blurring, a person can still be identified. Street View therefore also allows people to request that Google blur out images which they feel violate their privacy.
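The general technique behind such blurring is simple to sketch: detect a sensitive region, then overwrite it with a blur strong enough to be irreversible. The Python sketch below uses OpenCV's stock face detector as a stand-in; Google's production models are far more robust, and the file names and parameters here are purely illustrative.

```python
# General technique only: detect a sensitive region, then blur it beyond
# recognition. OpenCV's stock face detector stands in for Google's far more
# robust production models; file names and parameters are illustrative.
import cv2

image = cv2.imread("street_scene.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    region = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 30)

cv2.imwrite("street_scene_blurred.jpg", image)
```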

The case of the photographing of a home or other personal property is relatively easy to resolve, since all a person has to do is access Street View, check the image and request blurring. The case of photographed individuals, on the other hand, is not as simple. It isn’t possible for a person to constantly check Google Street View to ensure that no compromising images of them have been captured. Images on Google Street View may be anywhere from a few weeks to a few years old. It is hardly possible for people to go back that far in time to ensure that their privacy has not been infringed.

Additionally, the blurring policies do not apply to user-contributed images, which means people’s faces, license plate numbers and the like will not be automatically blurred out in such images. A person’s remedy is to first try to resolve the dispute with the person who contributed the image, and only on failing that can he ask Google to remove it. In the era we live in, any delay in removing such images can result in their being broadcast and viewed by the whole world. Moreover, images from Google Street View can be downloaded, and nothing prevents a person from saving such images before they are blurred out. Yet another problem is that Google retains the original, unblurred images. An attempt was made to impose a six-month retention limit on Google, but it keeps the originals for up to a year.

Right to privacy in public places under Indian laws

Interestingly, there are no Indian laws governing photography in a public place. The general rule is that in a public place, an individual cannot reasonably expect privacy. For example, if a person is accidentally captured in a smartphone photograph of a public place, say a historical monument, he cannot claim that it violates his privacy. Street View, however, is a different case. The photograph is taken without either a warning to the individual or the individual’s consent, and can be made available for (literally) the whole world to see, possibly for an eternity.

Indian laws provide people with recourse for certain images only. The Information Technology Act, 2000 and the Indian Penal Code, 1860 protect people against the capture and publishing of certain types of images, such as obscene images, images of private acts, or pictures of a woman taken without her consent. The problem, again, is that these laws are designed to protect people from acts such as voyeurism; they are certainly not designed to protect against a broader violation of privacy. A photograph of a person may simply be embarrassing, or a person may just not want his picture on the internet. Surely, every individual has the right to decide whether or not an image of himself is taken and published.

India’s draft Personal Data (Protection) Bill is perhaps better designed to address this problem. It provides that no personal data, meaning any data by which a person can be identified, can be collected without the person’s prior informed consent. Until the Bill is passed, however, the only recourse available to people is the right to privacy guaranteed under the Indian Constitution, which is broad enough to include the right to privacy in a public place as well.

Individual privacy must also be safeguarded before Street View is permitted

Other countries have begun to impose certain requirements on Street View, such as requiring it to inform the public in advance, for instance through an ad in a newspaper, before it photographs an area. Some countries have also suggested that people be given the option of ‘opting in’ to be photographed, instead of the opt-out mechanism Google Street View currently offers. Progress in the digital era invariably involves a conflict between individual privacy and the public good. If Google Street View is for the public good, then the government must take adequate steps to protect not just national security, but also individual privacy.

Source:  http://tech.firstpost.com/news-analysis/privacy-concerns-with-google-street-view-should-be-addressed-before-permitting-it-in-india-320154.html

Categorized in Search Engine

Firewall is a term most internet users are familiar with. They may have come across it at the office while browsing the web, or at home, where several people share the same connection and security needs to be set up. But what is a firewall, and how does it work? Although many people have heard the term, few are aware of what it actually does.

‘Firewall’ refers to a security system employed to keep out viruses and untrusted networks. It can be software-based or hardware-based. It regulates the information flowing into and out of the network, applying a set of rules to admit traffic from trusted networks and to prevent unauthorized access to information as well as unauthorized remote access to your network.

Most firewalls employ filters, meaning that information or data flagged by the filters is not allowed through. Firewalls use several methods for this purpose: packet filtering, application gateways, proxy services and stateful inspection. The good thing is that firewalls are customizable, allowing you to choose features according to the level of security you need. Often, firewalls combine two or more of the techniques mentioned above for greater security; these are known as hybrid firewalls. You can also use settings that block content containing certain words, which is often done in offices, or at home to filter out inappropriate content when children are using the computer.
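To make the packet-filtering idea concrete, here is a toy rule engine in Python. Real firewalls evaluate rules in the operating-system kernel or on dedicated hardware; the rules, field names and defaults below are invented examples.

```python
# Toy rule engine: rules are checked top to bottom, first match wins,
# and anything unmatched is denied. All rules here are invented examples.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

RULES = [
    (lambda p: p.src_ip.startswith("10.0."), "ALLOW"),            # trusted LAN
    (lambda p: p.dst_port == 22, "DENY"),                         # no outside SSH
    (lambda p: p.protocol == "tcp" and p.dst_port in (80, 443), "ALLOW"),  # web
]

def filter_packet(packet):
    for matches, verdict in RULES:
        if matches(packet):
            return verdict
    return "DENY"  # deny-by-default is the safer stance

print(filter_packet(Packet("203.0.113.9", 443, "tcp")))  # ALLOW
print(filter_packet(Packet("203.0.113.9", 22, "tcp")))   # DENY
```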

There are also several deployment options. You may choose between a hardware firewall and a software firewall. A hardware firewall can be purchased separately but usually comes built into a standard router, while a software firewall is installed on individual computers as an added security measure. Hardware firewalls are typically used by large corporations that want a single security umbrella over several departments and systems. They can hide your IP address from outside connections, as well as provide protection within corporations and between departments. However, since a hardware firewall is tied to one location, it is usually not the recommended choice for individual users and personal computers. A software firewall travels with your machine outside your home and office, and so is recommended for digital security when you are on the move.

A firewall is a useful tool for protecting your PC from the external environment. It shields your computer from harmful content while preventing your personal information from being sent out, acting as a guard that screens all incoming and outgoing traffic. Different security levels can be established through different firewall settings, and the firewall can be customized to suit your needs.

Summary:

A firewall is a security system that protects your PC from the external environment. Firewalls are customizable, letting you choose a combination of features that suits your needs. You can also choose between a hardware and a software firewall, both of which work well in different situations.

Categorized in Internet Privacy