When some future Mars colonist is able to open a browser and watch a cat in a shark suit chasing a duck while riding a Roomba, they will have Vint Cerf to thank.

In his role as Google's chief internet evangelist, Cerf has spent much of his time thinking about the future of the computer networks that connect us all. And he should know. Along with Bob Kahn, he was responsible for developing the internet protocol suite, commonly known as TCP/IP, that underlies the workings of the net. Not content with just being a founding father of the internet on this planet, Cerf has spent years taking the world wide web out of this world.

Working with NASA and JPL, Cerf has helped develop a new set of protocols that can stand up to the unique environment of space, where orbital mechanics and the speed of light make traditional networking extremely difficult. Though this space-based network is still in its early stages and has few nodes, he said that we are now at "the front end of what could be an evolving and expanding interplanetary backbone."

Wired.com talked to Cerf about the interplanetary internet's role in space exploration, the frustrations of network management on the final frontier, and the future headline he never wants to see.

Wired: Though it's been around a while, the concept of an interplanetary internet is probably new to a lot of people. How exactly do you build a space network?

Vint Cerf: Right, it's actually not new at all -- this project started in 1998. And it got started because 1997 was very nearly the 25th anniversary of the design of the internet. Bob Kahn and I did that work in 1973. So back in 1997, I asked myself what I should be doing that would be needed 25 years from then. And, after consultation with colleagues at the Jet Propulsion Laboratory, we concluded that we needed much richer networking than was then available to NASA and other spacefaring agencies.

Up until that time, and generally speaking up until now, the entire communications capability for space exploration had been point-to-point radio links. So we began looking at the possibilities of TCP/IP as a protocol for interplanetary communication. We figured it worked on Earth and it ought to work on Mars. The real question was, "Would it work between the planets?"

And the answer turned out to be, "No."

The reason for this is two-fold: First of all, the speed of light is slow relative to distances in the solar system. A one-way radio signal from Earth to Mars takes between three and a half and 20 minutes. So the round-trip time is of course double that. And then there's the other problem: planetary rotation. If you're communicating with something on the surface of the planet, it goes out of communication as the planet rotates. That breaks the available communications and you have to wait until the planet rotates back around again. So what we have is variable delay and disruption, and TCP does not do terribly well in those kinds of situations.
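Cerf's delay figures follow directly from the geometry; a quick back-of-the-envelope check (the Earth-Mars distances below are illustrative round numbers, not mission data):

```python
# One-way light delay between Earth and Mars at small and large separations.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a radio signal to cover the given distance at light speed."""
    return distance_km / C_KM_S / 60

# Approximate separations; both vary continuously with the planets' orbits.
closest_km = 62.1e6   # near a close opposition, roughly
large_km = 360e6      # near conjunction, roughly

print(f"close approach: {one_way_delay_minutes(closest_km):.1f} min one way")
print(f"far apart:      {one_way_delay_minutes(large_km):.1f} min one way")
```

The two results land on roughly 3.5 and 20 minutes, matching the interview's range; round-trip times double them.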

One of the things that the TCP/IP protocols assume is that there isn't enough memory in each of the routers to hold anything. So if a packet shows up and it's destined for a place for which you have an available path, but there isn't enough room, then typically the packet is discarded.

We developed a new suite of protocols that we called the Bundle protocols, which are kind of like internet packets in the sense that they're chunks of information. They can be quite big and they basically get sent like bundles of information. We do what's called store and forward, which is the way all packet switching works. It's just that in this case the interplanetary protocol has the capacity to store quite a bit, usually for quite a long time, before we can get rid of it based on connectivity to the next hop.
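The store-and-forward behaviour described here can be sketched in a few lines of Python; the class and its names are illustrative only, not the real Bundle Protocol API:

```python
from collections import deque

class BundleNode:
    """Toy store-and-forward node: bundles wait in storage until the
    next-hop link comes up, however long that takes."""

    def __init__(self, name: str):
        self.name = name
        self.storage = deque()   # bundles held during disconnection
        self.link_up = False     # whether the next-hop contact is available

    def receive(self, bundle: str) -> list:
        # Take custody of the bundle; never drop it for lack of a path.
        self.storage.append(bundle)
        return self.flush()

    def flush(self) -> list:
        """Forward everything we can; keep the rest in storage."""
        sent = []
        while self.link_up and self.storage:
            sent.append(self.storage.popleft())
        return sent

node = BundleNode("mars-orbiter")
node.receive("telemetry-1")   # link down: bundle is stored, not discarded
node.receive("telemetry-2")
node.link_up = True           # the planet rotates back into view
print(node.flush())           # ['telemetry-1', 'telemetry-2']
```

Contrast this with the IP router described above, which would simply discard a packet it had no room or path for.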

What are the challenges of building a communications network in space, as opposed to a ground-based internet?

Among the hard things, first of all, is that we couldn't use the domain name system in its current form. I can give you a quick illustration why that's the case: Imagine for a moment you're on Mars, and somebody is trying to open up an HTTP web connection to Earth. They've given you a URL that contains a domain name in it, but before you can open up a TCP connection you need to have an IP address.

So you will have to do a domain name lookup, which translates the domain name you're trying to look up into an IP address. Now remember you're on Mars and the domain name you're trying to look up is on Earth. So you send out a DNS lookup. But it may take anywhere from 40 minutes to an unknown amount of time -- depending on what kind of packet loss you have, whether there's a period of disruption based on planetary rotation, all that kind of stuff -- before you get an answer back. And then it may be the wrong answer, because by the time it gets back maybe the node has moved and now it has a different IP address. And from there it just gets worse and worse. If you're sitting around Jupiter and trying to do a lookup, many hours go by and then it's just impossible.

One of the things we wanted to avoid was the possibility of a headline that says: '15-Year-Old Takes Over Mars Net.'

So we had to break it into a two-phase lookup and use what's called delayed binding. First you figure out which planet you're going to, then you route the traffic to that planet, and only then you do a local lookup, possibly using the domain name.
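The two-phase, delayed-binding lookup Cerf describes can be sketched roughly as follows; every name, route, and address here is invented for illustration:

```python
# Phase 1: decide which planet the name belongs to and route the traffic
# there. Phase 2: only on arrival, bind the name to a concrete address.

PLANET_ROUTES = {"earth": "dsn-gateway", "mars": "mars-relay"}

# Each planet resolves its own names locally, so a stale answer cached
# light-minutes away can't point at a node that has since moved.
LOCAL_DNS = {
    "mars": {"curiosity.mars": "10.42.0.7"},
    "earth": {"jpl.earth": "137.78.0.1"},
}

def send(dest_name: str, payload: str) -> str:
    planet = dest_name.rsplit(".", 1)[-1]   # phase 1: which planet?
    gateway = PLANET_ROUTES[planet]         # route the bundle toward it
    addr = LOCAL_DNS[planet][dest_name]     # phase 2: bind on arrival
    return f"{payload} -> via {gateway} to {addr}"

print(send("curiosity.mars", "hello"))
```

The key property is that name-to-address binding is deferred until the traffic is already on the destination planet, sidestepping the stale-answer problem described above.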

The other thing is that when you are trying to manage a network with this physical scope and all these uncertain delays, the things we typically do for network management don't work very well. There's a protocol called SNMP, the simple network management protocol, and it is based on the idea that you can send a packet out and get an answer back in a few milliseconds, or a few hundred milliseconds. If you're familiar with the word ping, you'll know what I mean, because you ping something and expect to get an answer back fairly quickly. If you don't get it back in a minute or two, you begin to conclude that there is something wrong and the thing isn't available. But in space, it takes a long time for the signal to even get to the destination, let alone get an answer back. So network management turns out to be a lot harder in this environment.
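The point about ping-style health checks can be made concrete: any sane reply timeout has to start from round-trip light time rather than terrestrial defaults. A toy calculation, with illustrative distances:

```python
# A reply physically cannot arrive before one round trip at light speed,
# so a defensible timeout is round-trip light time times a safety margin.
C_KM_S = 299_792.458  # speed of light, km/s

def min_timeout_seconds(distance_km: float, margin: float = 2.0) -> float:
    """Smallest defensible wait for a reply over the given distance."""
    return 2 * distance_km / C_KM_S * margin

lan_wait = min_timeout_seconds(1_000)    # ~0.013 s: millisecond pings make sense
mars_wait = min_timeout_seconds(360e6)   # ~80 min: 'ping and wait' is useless
print(f"{lan_wait:.3f} s on Earth vs {mars_wait / 60:.0f} min to a distant Mars")
```

A management protocol built on second-scale timeouts simply has no meaningful answer at interplanetary distances, which is why the usual SNMP-style tooling breaks down.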

Then the other thing we had to worry about was security. The reason for that should be obvious -- one of the things we wanted to avoid was the possibility of a headline that says: "15-Year-Old Takes Over Mars Net." Against that possibility we put quite a bit of security into the system, including strong authentication, three way handshakes, cryptographic keys, and things of that sort in order to reduce the likelihood that someone would abuse access to the space network.

Because it has to communicate across such vast distances, it seems like the interplanetary internet must be huge.

Well, in purely physical terms -- that is, in terms of distance -- it's a pretty large network. But the number of nodes is pretty modest. At the moment, the elements participating in it are devices on planet Earth, including the Deep Space Network, which is operated at JPL. That consists of three 70-metre dishes plus a smattering of 35-metre dishes that can reach out into the solar system with point-to-point radio links. Those are part of the TDRSS [tee-driss] system, which is used for a lot of near-Earth communications by NASA. The ISS also has several nodes on board capable of using this particular set of protocols.

Two orbiters around Mars are running the prototype versions of this software, and virtually all the information that's coming back from Mars is coming back via these store-forward relays.

The Spirit and Opportunity rovers on the planet and the Curiosity rover are using these protocols. And then there's the Phoenix lander, which descended to the north pole of Mars in 2008. It also was using these protocols until the Martian winter shut it down.

And finally, there's a spacecraft in orbit around the sun, which is actually quite far away, called EPOXI [the spacecraft was 32 million kilometres from Earth when it tested the interplanetary protocols]. It has been used to rendezvous with two comets in the last decade to determine their mineral makeup.

But what we hope will happen over time -- assuming these protocols are adopted by the Consultative Committee for Space Data Systems, which standardises space communication protocols -- is that every spacefaring nation launching either robotic or manned missions will have the option of using these protocols. And that means that all the spacecraft that have been outfitted with those protocols could be used during the primary mission, and could then be repurposed to become relays in a store-and-forward network. I fully expect to see these protocols used for both manned and robotic exploration in the future.

What are the next steps to expand this?

We want to complete the standardisation with the rest of the spacefaring community. Also, not all the pieces are fully validated yet, including our strong authentication system. Second, we need to know how well we can do flow control in this very, very peculiar and potentially disrupted environment.

Third, we need to verify that we can do serious real-time things including chat, video and voice. We will need to learn how to go from what appears to be an interactive real-time chat, like one over the phone, to probably an email-like exchange, where you might have voice and video attached but it's not immediately interactive.

Delivering the bundle is very much like delivering a piece of email. If there's a problem with email it usually gets retransmitted, and after a while you time out. The bundle protocol has similar characteristics, so you anticipate that you have variable delay that could be very long. Sometimes if you've tried many times and don't get a response, you have to assume the destination is not available.

We often talk about how the things we invent for space are being used here on Earth. Are there things about the interplanetary internet that could potentially be used on the ground?

Absolutely. The Defense Advanced Research Projects Agency (DARPA) funded tests with the US Marine Corps on tactical military communication using these highly resilient and disruption-tolerant protocols. We had successful tests that showed, in a typical hostile communication environment, that we were able to put three to five times more data through this disrupted system than we could with traditional TCP/IP.

Part of the reason is that we assume we can store traffic in the network. When there's high activity, we don't have to retransmit from end to end, we can just retransmit from one of the intermediate points in the system. This use of memory in the network turns out to be quite effective. And of course we can afford to do that because memory has gotten so inexpensive.

The European Commission has also sponsored a really interesting project using the DTN protocols in northern Sweden. In an area called Lapland, there's a group of Saami reindeer herders.

They've been herding reindeer for 8,000 years up there. And the European Commission sponsored a research project, managed by the Luleå University of Technology in northern Sweden, to put these protocols on laptops carried aboard all-terrain vehicles. This way, you could run a Wi-Fi service in villages in northern Sweden and drop messages off and pick them up according to the protocols. As you moved around, you were basically a data mule carrying information from one village to another.

There was also an experiment called Mocup that involved remote-controlling a robot on Earth from the space station. These protocols were used, right?

Yes, we used the DTN protocols for that. We were all really excited about that because, although the protocols were originally designed to deal with very long and uncertain delays, when there is high-quality connectivity we can also use them for real-time communication. And that's exactly what they did with the little German rover.

I think in general communication will benefit from this. Putting these protocols in mobile phones, for instance, would create a more powerful and resilient communications platform than the one we typically have today.

So if I have poor reception on my cell phone at my house, I could still call my parents?

Well, actually, what might happen is that you could store what you said and they would eventually get it. But it wouldn't be real time. If the disruption lasts for an appreciable length of time, the message would arrive later. But at least the information would eventually get there.

Source: http://www.wired.co.uk/article/vint-cerf-interplanetary-internet



Journey with us to a state where an unaccountable panel of censors vets 95 per cent of citizens' domestic internet connections. The content coming into each home is checked against a mysterious blacklist by a group overseen by nobody, which keeps the list of censored URLs secret not just from citizens, but from internet service providers themselves. And until recently, few in that country even knew the body existed. Are we in China? Iran? Saudi Arabia? No - the United Kingdom, in 2009. This month, we ask:

Who watches the Internet Watch Foundation?

It was on December 5, 2008 that the foundation decided that the Wikipedia entry for The Scorpions' 1976 album Virgin Killer was illegal under British law. The album-sleeve artwork, showing a photo of a naked ten-year-old girl with a smashed-glass effect masking her genitalia, had been reported to the IWF via its public-reporting system the day before. It was deemed to fall under the classification of "Child Abuse Imagery" (CAI). And because the IWF blacklists such material, and works with ISPs to stop people accessing it, an estimated 95 per cent of residential web users were not only unable to access the band's Wikipedia entry, but also unable to edit the site at all.

When Wired began investigating the foundation last December, our interest clearly lay not in advocating the use or distribution of child pornography. We simply wanted to know what the Wikimedia Foundation, the owner of Wikipedia, itself sought to know. "The major focus of our response was to publicise the fact of the block, with an emphasis on its arbitrariness and on the IWF's lack of accountability," says Wikimedia's general counsel Mike Godwin (incidentally famed for Godwin's Law, which he coined in 1990 and which states that "as a Usenet discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1").

"When we first protested the block, their response was, 'We've now conducted an appeals process on your behalf and you've lost the appeal.' When I asked who exactly represented the Wikimedia Foundation's side in that appeals process, they were silent. It was only after the fact of their blacklist and its effect on UK citizens were publicised that the IWF appears to have felt compelled to relent. If we had not been able to publicise what the IWF had done, I don't doubt that the block would be in place today."

As it happened, the IWF reversed its decision a few days later, issuing a statement to the effect that, while it still considered the image to be technically illegal, it had evaluated the specific "contextual issues" of this landmark case and taken into account the fact that it was not hosted on a UK server. The incident marked a major step: the IWF had for once been held up to wider scrutiny.

Concern about the IWF has been voiced by critics such as John Ozimek - political journalist and author of New Labour, New Puritanism - and translated into a more explicit concern: that its lack of accountability could be used as a method of sneaking state censorship through the back door. The relationship between the IWF and Home Office is particularly worthy of scrutiny, as Ozimek explains: "Neither has shown much interest in civil liberties. Few people who know about the net know much about the IWF, and those that do know it mostly only as a heroic body fighting child porn.

It has thus been preserved from having to answer awkward questions about its legal qualifications for carrying out its role, its lack of public accountability and its failure to apply due process."

"I think that so long as censorship decisions are being made by an unaccountable private entity," says Godwin, "the freedom of United Kingdom citizens is at risk."

So how did we get here? In August 1996 - appalled by the distribution of child-abuse imagery on several newsgroups - Metropolitan Police chief inspector Stephen French sent an open letter to every ISP in the UK. "We are looking to you," French wrote, "to monitor your newsgroups, identifying and taking necessary action against those found to contain such material." It finished with a statement that was a game-changer: "We trust that with your co-operation and self-regulation it will not be necessary for us to move to an enforcement policy." In other words: you deal with this, or we'll deal with you.

"There had been a failure to get industry consensus to act on the issue up to that point," remembers Keith Mitchell, who - as the head of Linx, the London Internet Exchange - was brought in to discuss the issue by the Department of Trade and Industry, alongside several major ISPs and representatives from the Internet Services Providers' Association (Ispa). Together they drew up a memorandum of understanding, a model for what would soon be relabelled the Internet Watch Foundation.

The document was called the R3 Agreement. The three Rs were: "rating", the development of labelling to address the issue of "harmful and offensive" content; "reporting", a notice-and-take-down procedure to all ISPs hosting CAI in the UK; and "responsibility", the promotion of education about such issues.

The categories remain cornerstones of the expanded remit of today's IWF, which was (and is) self-regulating. Trusting in this, the government left the ISPs to deal with the matter. "The IWF was originally very much seen as a positive measure to avoid a problem for the UK internet industry, rather than a coercive measure," explains Mitchell. "At first, the Home Office just seemed to be glad this problem was being taken care of for them. In terms of its original mission, I think that the IWF has done an excellent job of keeping CAI off UK-based servers."

In 1998, the government carried out an independent review on how the IWF was working. The Home Office called in consultants Denton Hall and KPMG, notes were duly taken, and a "revamped" IWF was launched in 2000. "The IWF had reached a point at which it needed to be seen to be more independent of the industry," explains Roger Darlington, former head of research at the Communication Workers Union, who was brought in as an IWF independent chair. "They weren't terribly clear how it was going to work," he remembers. "I said, 'Look, we should publish all our board papers and all our board minutes.'

That caused a number of people to swallow hard. But we did it."

Keith Mitchell regards this as the point at which things began to change for the worse. "Since Tony Blair got in," he says, "there has been visible mission creep. Various additions to the IWF's remit have occurred, increasingly without consideration of their technical effectiveness or practicality. Most notable has been the introduction of a blacklist."

Introduced in 2004, the blacklist is the IWF's method of ensuring that members block user access to CAI hosted outside the UK. This confidential list of URLs is sent in encrypted format to the ISPs, which are subject to similarly secret terms of agreement regarding their employees' access to the list. Lilian Edwards, professor of internet law at Sheffield University and author of Law And The Internet, feels that such guarded conduct suggests that more may be going on behind closed doors. "The government now potentially possesses the power to exclude any kind of online content from the UK, without the notice of either the public or the courts," she says. "Perhaps even more worryingly, any ISP that takes the IWF blacklist can also add whatever URLs they please to it, again without public scrutiny." Or even anyone necessarily noticing. It's like knowing that Google Safe Search is on, but not being able to change your settings.

Of course, blacklists are not infallible. The website Wikileaks recently obtained a copy of the list kept by the foundation's Danish equivalent - also unsupervised by government. It shows a number of erroneous blocks, such as the URL of a Dutch haulier, as well as legitimate adult domains.

For an organisation so often accused of being secretive, the IWF headquarters, in a converted townhouse on a leafy and innocuous Cambridgeshire housing estate, does not do much to dispel the image. At first glance the place resembles a large suburban home.

Inside, a spacious, bright reception leads to an equally airy office area and conference rooms. The sense of openness extends only so far, however. The IWF let us know beforehand that it refused to allow any photography on site.

As a charity, the IWF must publish accounts - most recently for the year ending March 2008. The largest single donor was the European Union. It gave the organisation £320,837 in 2007 and £146,929 in 2008. The largest revenue stream, however, was "subscription fee income". This was £623,542 in 2006, £700,533 a year later and £754,742 in 2008.

Who pays the "subscription fee"? The major ISPs and a clutch of big-name brands such as Royal Mail and Google. The IWF's website solicits such payments, explaining that "being a member of the IWF offers many benefits including practical evidence of Corporate Social Responsibility - enhanced company reputation for consumers and improved brand perception and recognition in the online and digital industries". All yours for £20,000 per annum - if you're a "main ISP". Smaller fish are advised to pay between £5,000 and £20,000, and "very small" firms are steered towards "£500 to £5,000". Sponsors, which "support us with goods and services to help us pursue our objectives", include Microsoft. Additional money comes from what the IWF calls "CAI income". This is revenue from licensing the list of prohibited URLs to private net-security outfits. It totalled £5,183 in 2007, but had jumped to £40,734 a year later. In 2006, the IWF also received £14,502 from the Home Office.

The Charity Commission accounts state that the IWF has 13 employees and no volunteers - unusual for a charity. Its staff costs were £520,847 in 2008, with one person earning more than £50,000.

But back to that £14,502 from government. We asked the IWF what the Home Office money was for, but Peter Robbins, the chief executive, would only say it was for "a project". "You don't need to know."

Sarah Robertson, IWF's head of communications, has dealt before with concerns about the IWF's links to the Home Office. She's quick to dismiss the notion that the blacklist is in any way influenced by the government. "Supposedly, the IWF compiles the list, passes it on to the Home Office twice a day, the Home Office adds whatever they want to it, the IWF doesn't look at it, then it goes out...

Hmm." She smiles. "They don't. I don't see why they would. It's a voluntary initiative. The government has expressed an expectation, but they haven't legislated. The industry was already doing it, because they wanted to protect their customers."

Robertson lays out the process behind the blacklist. Updated twice a day, the URLs on the list are those reported to the IWF by concerned members of the public via the organisation's hotline and website. Relevant IWF staff - police-trained internet content analysts - then draw on their legal training to determine whether the content is "potentially illegal". "We use the term 'potentially illegal'," Robertson explains, "because we are not a court. It's assessed according to UK law. I read certain articles that talk about the IWF's 'arbitrary scale'. It's the law." The law she refers to is the Protection of Children Act 1978 (as amended by, among others, the Sexual Offences Act 2003), which makes it "an offence to take, make, permit to be taken, distribute, show, possess with intent to distribute, and advertise indecent photographs or pseudo-photographs of children under the age of 18".

Robertson insists that the Home Office has never expressed a desire to get involved in the foundation's day-to-day proceedings, before elaborating on its independent nature. "We are inspected, not by the Home Office, but they expect us to subject ourselves to inspections. They ask us to subject ourselves to external scrutiny, despite the reams and reams of articles I've been reading about how we're 'shadowy' and 'unelected'.

"Obviously we're not elected," she continues. "But we try to be as transparent as we possibly, possibly can. We're also audited externally by independent experts in law enforcement, forensics, technological security and HR issues."

This is true: the last independent audit of the IWF was in May 2008, and the organisation allegedly passed with flying colours. We have to say "allegedly", however, because the audit itself hasn't been published - and despite requests from Wired, the IWF intends to keep it confidential. The blacklist also remains undisclosed. "Obviously the list is never going to be given out," says Robertson. "No one gets it [unencrypted]. We don't allow a list of abusive images to be released to the public. What we can say is that details of every URL on it are shared with the police." She is unwilling to elaborate on other details: "I'm sure you'll understand that we can't give full details of how the list is provided. Sadly there are a lot of people out there who would take delight in getting it."

Robertson maintains that she can understand the concerns from certain quarters. "We do engage the civil-liberties groups; they help me understand their point of view, and I find it all very interesting."

Although the IWF has publicly said it "learnt lessons" from the Wikipedia-Virgin Killer fracas, its blacklist strategy is not changing. "As for the design of the list, there were meetings when we started with engineers from all the companies and everyone involved, very technical people - all of whom decided that [the status quo was] the simplest and easiest-to-implement way... People say, 'Why don't you just block the image?' You can't. When you've got a thousand different URLs on your list, you can't have different rules for each."

It is not a prerequisite for IWF members to implement the blacklist - it is simply there, should they want it. The IWF maintains that it strives to ensure cost is not a barrier to implementation by smaller ISPs. The IWF is also not the one carrying out the blocking - that is left to the actual ISPs. "We just provide a list of URLs," Robertson insists. Of course, the list is blind, and an ISP blocks all of it or none of it.

The Home Office declined an invitation to take part in an interview, and rejected our Freedom of Information requests. We asked for details of the relationship between the Home Office and the IWF, and in particular about the latter's discussions with Home Office minister Vernon Coaker. Our request was refused under the clause in the FOI Act that allows ministers to withhold information if they consider the disclosure might inhibit "the free and frank provision of advice and the free and frank exchange of views for the purpose of deliberation". It added: "We have decided that it is not in the public interest at this time to disclose this information." (You can read our entire correspondence with the Home Office at tinyurl.com/d67uzn.) It did issue this statement: "Over 95 per cent of consumer broadband connections are covered by blocking of child sexual-abuse websites. The UK has taken a collective approach to addressing this issue and has had considerable success in ensuring that the sites on the IWF list are blocked. We will continue to consider what further action or measures might be needed."

What about the world outside CAI? The IWF is not solely concerned with pornographic images of children, but the other areas it deals with - incitement to racial hatred, criminally obscene content - are not subject to the blacklist. Robertson told us that there was no racial-hatred material hosted in the UK last year, and that the number of cases of criminally obscene content per annum can be "counted on one hand".

Earlier this year legislation was passed that outlawed "extreme pornography", thereby adding the category to the IWF's watchlist of illegal material. And catching the eye of Parliament of late have been "anorexia-promoting" websites. Mark Hunter MP in particular has been anxious to raise the issue.

Lilian Edwards, meanwhile, proposes a new direction for the IWF, one that favours governmental administration and would make it more accountable under UK law and less susceptible to whispered doomsday scenarios. "It is high time that the IWF was reconstituted as a public body," she says. "Having the cooperation of the ISP industry does not give them the authority or the safeguards of a public or judicial body. Books get censored by independent and public courts. Why don't websites?"

John Ozimek has further concerns. "There is an interesting line of thought running around the security world that suggests that this is counterproductive. The majority of [CAI] material is not 'out there' on the web any more. It's available via P2P. The more pressure there is on net-based porn, the more networks move to circumvent government measures." So blocking may not be the best solution.

In January 2009 - shortly after the dust had settled on the Wikipedia case - the IWF found itself under scrutiny once more when a blacklisted image on the Internet Archive's Wayback Machine resulted in some UK internet users being unable to access the entire site. Later resolved and explained as a "technical error", the incident threw more meat to those who had already decided that the IWF was becoming increasingly maverick. "As an industry," Keith Mitchell elaborates, "we have done a lot, but the internet illegal economy, including spammers, botnets and those who host child porn, will not go away... It would be good to see some enforcement action rather than misguided censorship attempts, which damage freedoms for the majority of law-abiding internet users."

Among all parties, there is one agreement: the fight against CAI is an invaluable one. The killer questions revolve around the power behind that fight - can a non-governmental body be trusted with unprecedented censorship muscle? - and whether, by concentrating on URLs rather than file-sharing, that body is even fighting any longer in the right arena.

Source: http://www.wired.co.uk/article/the-hidden-censors-of-the-internet



With all the fake news, toxic speech, and online scams out there, you might be feeling like now is a good time to scale back your online footprint.

There's a new tool that promises to help you do just that — by essentially deleting yourself from the internet.

CBC Radio technology columnist Dan Misener explains how it works.

What is this new online tool?

Think of this as a kind of cleanse for your online life.

It's called Deseat.me, and it does one thing and one thing only — it displays a list of all the online services you've ever signed up for.

So if you had a MySpace account in the early 2000s, it'll probably show up in Deseat. If you created an avatar in Second Life, it's likely to show up as well. And of course, so will things like your Facebook or Twitter accounts.

To use Deseat.me, you first log in using a Google account. Then, once it knows your email address, it can find any accounts that have been linked in any way to that Google account.

Now, it will ask for some things which may sound creepy — it will not only ask to view your email address, but also to view your email messages and settings. Based on my experience, Deseat.me scans through your email archives to find sign-up confirmation messages from various services.
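The scanning step can be pictured with a short sketch. To be clear, this is not Deseat.me's actual code; it is a minimal illustration under the assumption that sign-up confirmations can be spotted by common subject-line phrases, and the `find_signup_services` helper and its patterns are hypothetical:

```python
import re

# Common subject-line phrases that sign-up confirmation emails tend to use.
# These patterns are illustrative guesses, not Deseat.me's real heuristics.
SIGNUP_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"welcome to",
        r"confirm your (email|account)",
        r"verify your (email|account)",
        r"activate your account",
    )
]

def find_signup_services(messages):
    """Return sender domains of messages that look like sign-up confirmations.

    `messages` is a list of (sender, subject) tuples, e.g. pulled from an
    inbox over IMAP after the user grants read access.
    """
    services = set()
    for sender, subject in messages:
        if any(p.search(subject) for p in SIGNUP_PATTERNS):
            # Keep just the domain part of the sender address.
            domain = sender.rsplit("@", 1)[-1].strip(">")
            services.add(domain)
    return sorted(services)
```

Run over an archive of old mail, a scan like this would surface exactly the kind of forgotten accounts described above, such as a "Welcome to Pownce!" message from 2007.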

The creators of Deseat.me told the Telegraph they take user privacy seriously, and that the program runs on the user's computer, rather than Deseat.me's servers. They also say they're not storing any of your info, but you'd need to take them at their word on that.

It uses Google's OAuth security protocol — but if you're not comfortable allowing Deseat.me access to your email archives, I wouldn't authorize it.

In my case, it found 216 different accounts, most of which I had entirely forgotten about. For instance, I have an account with the now-defunct social network called Pownce, and an account on something called Microsoft HealthVault that I signed up for in 2007.

So once you have this list of online services and accounts, Deseat.me will — wherever possible — show you a direct link to remove those accounts.

Why is keeping unused accounts risky?

There are a number of reasons, according to Anatoliy Gruzd, a professor at Ryerson University and the Canada research chair in social media data stewardship.

He said we've seen many recent examples of online services being hacked — and there are some dangerous ways hackers can use the information they gather.

"For example, they can start contacting your old friends or old contacts on those services on your behalf, pretending to be you," Gruzd said.

He also pointed out online services are constantly being bought and sold. So if you have an old, unused account on a website that has been sold to a new owner, you might not ever know who owns your personal data — or how it will be used.

How else can I shut down my accounts?

Some apps and services allow you to sign up without creating a new login. Instead, you can sign in using your existing Facebook, Twitter or Google account. This is what's called "social login." You've probably seen this if a website or app has invited you to "Log in with Facebook."

The good news is that each of those sites — Google, Facebook and Twitter — can show you a list of third-party sites and services you've authorized to use your account. These lists are all in different places, depending on the social network — you can find Google's here, Facebook's here and Twitter's here.

When I checked mine, I found dozens of services I no longer use — services I tried once then forgot about.

Another really great resource is a site called JustDelete.me. It's a huge list of online services and direct links to the page you need to visit to shut down your account. What's more, each service has been rated on a scale from "easy" to "impossible" in terms of how difficult it is to close your account — because not every service makes it easy to leave.

Why is it often difficult to close an account?

Gruzd said that's because the personal information you share with them has real value in the marketplace.

"You can sell data. There's a big data market out there with data resellers — a huge advertising market as well," he said.


"I think as internet users, we need to demand more transparency from online services [about] how our data is being used and what are our options in terms of completely deleting those accounts."

The old saying is true — if you're not paying for something, you're not the customer: you're the product and are being sold. Or more specifically, your personal data is the product being sold.

What should I do before shutting an account?

Large-scale hacks and data breaches are on the rise, so it's worth spending the time to remove possible attack vectors.

But before you shut down an account, it's worth looking for an export or backup option, so you can save a copy of your personal data first.

And be aware that even if you're successful in deactivating your account, that doesn't necessarily mean the site or service removed all of your information. Depending on the privacy policy and terms of service, your information may be saved in a database indefinitely, and there may not be much you can do about it.

So it's a good idea to spend a few minutes looking at the third-party apps you've authorized through Twitter, Facebook or Google and close down the ones you're not using anymore.

It's good practice from both a security and a privacy perspective, since these accounts are a liability — and it might even be an interesting cyber-trip down memory lane.

Author: Dan Misener
Source: http://www.cbc.ca/news/technology/deseat-me-deletes-unused-accounts-1.3872537

Categorized in Online Research

According to analysts at Russian search company Yandex, Russians' online purchases of intimate goods increased fivefold in 2016, while demand for children's products grew by a further 53%.

It is worth noting that intimate goods were bought more actively in the run-up to the New Year than, for example, in November (+30%). The report says that the number of online purchases traditionally grows ahead of the New Year holidays, and that Russians' activity in online stores is also up compared to last year. In particular, ahead of New Year's Eve, Russian web users made 71% more purchases of shoes and clothing than in the same period of 2015.

In addition, Russian residents are reported to be spending more money on food, and a 12% growth in demand was recorded for train tickets sold online.

Source:  http://vistanews.ru/computers/internet/101724 

Categorized in Search Engine

Last week I speculated that the current horrible state of internet security may well be as good as we're ever going to get. I focused on the technical and historical reasons why I believe that to be true. Today, I'll tell you why I'm convinced that, even if we were able to solve the technical issues, we'll still end up running in place.

Global agreement is tough

Have you ever gotten total agreement on a single issue with your immediate family? If so, then your family is nothing like mine. Heck, I have a hard time getting my wife to agree with 50 percent of what I say. At best I get eye rolls from my kids. Let's just say I'm not cut out to be a career politician.

Now think about trying to get the entire world to agree on how to fix internet security, particularly when most of the internet was created and deployed before it went global.

Over the last two decades, just about every major update to the internet we've proposed to the world has been shot down. We get small fixes, but nothing big. We've seen moderate, incremental improvement in a few places, such as better authentication or digital certificate revocation, but even that requires leadership by a giant like Google or Microsoft. Those updates only apply to those who choose to participate -- and they still take years to implement.

Most of the internet's underlying protocols and participants are completely voluntary. That's its beauty and its curse. These protocols have become so widely popular, they're de facto standards. Think about using the Internet without DNS. Can you imagine having to remember specific IP addresses to go online shopping?

A handful of international bodies review and approve the major protocols and rules that allow the internet to function as it does today (here's a great summary article on who "runs" the internet). To that list you should add vendors who make the software and devices that run on and connect to the Internet; vendor consortiums, such as the FIDO Alliance; and many other groups that exert influence and control.

That diversity makes any global agreement to improve Internet security almost impossible. Instead, changes tend to happen through majority rule that drags the rest of the world along. So in one sense, we can get things done even when everyone doesn't agree. Unfortunately, that doesn't solve an even bigger problem.

Governments don't want the internet to be more secure

If there is one thing all governments agree on, it's that they want the ability to bypass people's privacy whenever and wherever the need arises. Even with laws in place to limit privacy breaches, governments routinely and without fear of punishment violate protective statutes.

To really improve internet security, we'd have to make every communication stream encrypted and signed by default. But then those streams would be invisible to governments, too. That's just not going to happen. Governments want to continue to have unfettered access to your private communications.

Democratic governments are supposedly run by the people for the people. But even in countries where that's the rule of law, it isn't true. All governments invade privacy in the name of protection. That genie will never be put back in the bottle. The people lost. We need to get over it.

The only way it might happen

I've said it before and I'll say it again: The only way I can imagine internet security improving dramatically is if a global tipping-point disaster occurs -- and allows us to obtain shared, broad agreement. Citizen outrage and agreement would have to be so strong, it would override the objections of government. Nothing else is likely to work.

I've been waiting for this to happen for nearly three decades, the most recent of which has been marked by unimaginably huge data breaches. I'm not getting my hopes up any time soon.

Author : Roger A. Grimes

Source : http://www.infoworld.com/article/3152818/security/the-real-reason-we-cant-secure-the-internet.html

Categorized in Internet Privacy

There is no doubt that the internet and smartphones have changed everything from the way people shop to how they communicate.

As internet penetration levels keep rising, a steady stream of industry buzzwords has been coined and gone viral across the country.

Here we excerpt some representative topics that were most discussed across the internet this year.

Young man's death causes an uproar for online search giant

In April, Chinese internet giant Baidu Inc was criticized for influencing the treatment choice of a cancer patient, Wei Zexi, by presenting misleading medical information.

Wei, 22, died after undergoing a controversial cancer treatment at a Beijing hospital, which the Wei family found through Baidu's online search platform.

The case was hotly discussed in the country's online community and the Cyberspace Administration of China (CAC), the nation's internet regulator, later asked Baidu to improve its paid-for listings model and to rank the search results mainly according to credibility rather than price tags.

On June 25, the CAC publicized a regulation on search engines, ordering search providers to ensure objective, fair and authoritative search results.

All paid search results must be labeled clearly and checks on advertisers should be improved, according to the regulation. There also should be a limit on the number of paid results on a single page.

Moreover, the practice of blocking negative content concerning advertisers has been banned.

Year-ender: Most talked-about topics on the Chinese internet

Jia Yueting, co-founder and head of Le Holdings Co Ltd, also known as LeEco and, formerly, as LeTV, gestures as he unveils an all-electric battery "concept" car called LeSEE during a ceremony in Beijing. [Photo/Agencies]

A cash-strapped founder struggles to stay solvent and remain a game-changer

Chinese technology company LeEco's founder Jia Yueting recently released an internal letter to his employees, indirectly confirming some of the rumors about supply chain and capital issues that had caused the company's shares to plummet.

In the letter, Jia talked for the first time about the company's overly rapid growth.

"There is a problem with LeEco's growth pace and organizational capacities," Jia said, adding that the company's global expansion had gone too far despite limited capital and resources.

Jia revealed that the company spent heavily (about 10 billion yuan) on the LeSEE all-electric concept car in its early stages. The company unveiled the vehicle, a rival to Tesla's Model S, in April.

On Nov 2, shares of Leshi Internet Information and Technology, which went public in 2010, fell nearly 7.5 percent on rumors that LeEco defaulted on payment for suppliers.

Jia said the company will address the capital issues in three to four months.

LeEco, founded in 2004, started as a video-streaming service provider akin to Netflix Inc, but it rapidly grew into a firm with a presence in smartphones, TVs, cloud computing, sports and electric cars.


A woman looks at her mobile phone as she rides an escalator past an advertisement for Samsung's Galaxy Note 7 device at a Samsung store in the Gangnam district of Seoul. [Photo/Agencies]

Exploding phones put the manufacturer on the hot seat

In mid-October, China's product quality watchdog said that Samsung Electronics Co Ltd's local unit would recall all 190,984 Galaxy Note 7 phones that it sold in China.

The latest recall in China includes the 1,858 early-release Galaxy Note 7 smartphones that the watchdog recalled on September 14.

Samsung said earlier Tuesday that it had decided to stop selling Note 7 phones in China and was communicating with the Chinese authorities on the matter.

The tech giant decided to temporarily halt the global sales and exchange of its Galaxy Note 7 smartphones, while it investigates reports of fires in the devices.

On Sept 2, Samsung suspended sales of the Galaxy Note 7 and announced a "product exchange program", after it was found that a manufacturing defect in the phones' batteries had caused some of the handsets to generate excessive heat, resulting in fires and explosions.

However, in early October, reports emerged of incidents where these replacement phones also caught fire.


The new Apple iPhone 6S and 6S Plus are displayed during an Apple media event in San Francisco, California, in this file photo from September 9, 2015. [Photo/Agencies]

Consumers query smartphone's mysterious shutdowns

US tech giant Apple Inc on Dec 2 announced the reason behind an abrupt shutdown problem that recently affected some users of the iPhone 6s.

"We found that a small number of iPhone 6s devices made in September and October 2015 contained a battery component that was exposed to controlled ambient air longer than it should have been before being assembled into battery packs. As a result, these batteries degrade faster than a normal battery and cause unexpected shutdowns to occur," a statement posted on Apple's official website said.

The company also explained in the note that this was not a safety issue.

The statement was released after China's consumer protection watchdog - China Consumer Association (CCA) - issued a query letter earlier to ask the company to explain and provide solutions to malfunctions reportedly found in iPhones.

According to the CCA, many consumers continued to complain after Apple announced a free battery replacement program for iPhone 6s users, claiming that the abrupt shutdown problem also exists in iPhone 6, iPhone 6 Plus and iPhone 6s Plus models.

On Nov 21, Apple introduced a free replacement program, to resolve recent reports of the unexpected shutdown of the iPhone 6s.


A woman uses Uber Technologies Inc's car-hailing service via an electronic screen in Tianjin.[Provided to China Daily]

Taxi-hailing app implements localized strategy

Uber China shut down its old mobile app on the last weekend of November, replacing it with a new one that integrates its functions and driver pool with Didi Chuxing, four months after the companies' merger.

Didi acquired Uber's China operations in August and became the No 1 ride-hailing service provider in China with 15 million drivers and over 400 million registered users.

All the Uber China platform's drivers and users were urged to move to a new interface introduced in early November.

Foreigners with Uber accounts are also required to download the new app if they would like to use Uber services in China.

Prior to the tie-up, Uber was one of very few foreign tech firms able to compete with domestic rivals head-on in China. Though Didi had bigger market share, Uber managed to gain a foothold in lower-tier cities. The two had been locked in fierce price wars to compete for market share.

Source : http://www.chinadaily.com.cn/bizchina/tech/2016-12/21/content_27727996_5.htm

Categorized in Search Engine

The amount of sexism on the internet is depressingly self-evident. Women in particular who speak their minds online are frequently attacked on the basis of their gender, and often in horrifyingly graphic ways. But what about the internet itself? There could be inherent characteristics in its very structure that could be considered sexist or gender biased.

It would seem so. To give you an idea, type ‘engineer’ or ‘managing director’ into a search engine and look at the images. You’ll find that the vast majority are of men. The stereotypes work both ways, of course. Type in ‘nurse’ and most of the images will be of women. Although this may simply reflect society as it stands, there is an argument to be made that, intentionally or otherwise, it also reinforces gender stereotyping. Given how influential the internet is on people’s perception of the world – a fact laid bare recently in both Brexit and the US Elections – isn’t there a responsibility among tech giants like Google, Yahoo, Microsoft and Facebook to fight the kind of prejudices that too often see internet users inhabit echo chambers where their own biases are reflected back at them?

It’s a question fraught with moral issues. On the one hand, search engines are automated and simply display the most common searches. It’s also clear that attempts to censor these facts of internet life are equally dubious, not only because they amount to a denial of the issue, but because they set a scary precedent, potentially providing a gateway into all kinds of Orwellian thought control.

Nevertheless, the issue is not about to go away, and making people more socially aware of gender bias on the internet is the first step in trying to find a solution. The problem was highlighted brilliantly in a UN campaign in 2013 concerned with women’s rights. It showed women’s faces with their mouths covered by the Google search bar and various auto-complete options, such as ‘women need’ transforming into ‘knowing their place’. It was also effectively publicised by TED.com editor Emily McManus, who, when attempting an internet search to find an English student who taught herself calculus, was asked by Google, ‘Do you mean a student who taught himself calculus?’ McManus’s subsequent screenshot was retweeted thousands of times and became a worldwide news story.

Part of the issue stems from a lack of gender balance in the tech industry itself. Office for National Statistics figures from 2014 reveal that the UK IT industry employs 723,000 male professionals compared to 124,000 female ones. In 2015, according to the companies’ own figures, only 17% of Microsoft’s technical staff were women, while men made up 83% of Google’s engineering staff and 80% of Apple’s technical staff. It’s true that these industries have put various initiatives in place to try to redress this balance, like Google’s ‘Made with Code’ or Microsoft’s ‘Women in tech’, spearheaded by Melinda Gates, but there’s clearly still a long way to go.

Although women are unquestionably the most disadvantaged when it comes to gender bias on the internet, men don’t escape stereotyping either. For example, with women making inroads into high-powered, well-paid jobs there are consequently more men taking on domestic roles or becoming stay-at-home dads. Trying to find this reflected on the internet is just as hard as trying to find female engineers. The attitude is still very much that if a man isn’t the ‘breadwinner’ he’s not really a man – type ‘homemaker’ in and see what comes up. Likewise, even as men’s involvement in child-rearing is transforming, the internet still fails to accurately represent such a significant social shift.

So what’s to be done, besides simply switching off the predictive function in settings? It seems some new approaches are being experimented with, ones that strike a balance between using the predictive function – which is otherwise a useful tool – and maintaining an element of choice. For example, global Swedish tech company Semcon has come up with a browser extension called Re-Search. This doesn’t stop the predictive function acting in its usual fashion, but it does provide an alternative search result that aims to give men and women more equal space in the search results.

Says project manager Anna Funke: “If engineers are portrayed as men in yellow helmets, how can women feel that the job might be of interest to them? Role models are important when young people are thinking about their career choices and the internet is the first place many people look for information.” Semcon are making the software available free of charge, and it’s also open source, in the hope that it will encourage individuals and companies to develop the product further and find their own ways to spur on greater gender equality across the internet.

It’s worth remembering though, that when the internet first appeared back in the 1990s, it was hailed as a great democratic technology. Despite the ways in which states, corporations or individuals attempt to manipulate it, it remains just that, reflecting what we are, even when that’s pretty unpalatable. Ultimately then, if we’re going to have an internet that better reflects equality, openness and decency, it’s down to all of us who use it.

Author:  Robert Bright

Source:  http://www.huffingtonpost.co.uk/entry/the-great-gender-gap-debate-is-the-internet-bias-to-either-sex_uk_583d99d1e4b090a702a650c9

Categorized in Others

Nowadays, search engines have evolved dramatically. Earlier, we had only Google, Bing and Yahoo to search for specific information, and these do not perform well when it comes to knowledge graphs and some other smart features. Now, however, you can find various alternative search engines as well as meta search engines. Some examples are Mamma, iBoogie, Vroosh, TurboScout, Unabot and Search.

What is a meta search engine?

Generally, you search for information on Google, Bing or Yahoo. But do you know the source of the information those search engines use? The source is websites like TheWindowsClub.com: search engines index blogs and websites and grab information from them. Meta search engines, in turn, grab their information from those search engines.
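The aggregation idea can be sketched in a few lines. This is a hypothetical illustration, not any real engine's code: assume each underlying engine has already returned a ranked list of result URLs, and the meta engine's only job is to merge them into one list.

```python
def merge_results(result_lists):
    """Merge ranked result lists from several engines into one list.

    Results that appear in more engines rank higher; ties break by the
    best (lowest) rank position seen. Each input list is ordered
    best-first.
    """
    scores = {}  # url -> (number of engines, best rank seen)
    for results in result_lists:
        for rank, url in enumerate(results):
            count, best = scores.get(url, (0, rank))
            scores[url] = (count + 1, min(best, rank))
    # Sort by engine count (descending), then by best rank (ascending).
    return sorted(scores, key=lambda u: (-scores[u][0], scores[u][1]))
```

For example, a URL returned by both Google and Bing would outrank a URL returned by only one of them, which is the main value a meta search engine adds.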


Best Meta Search Engine List

If you are interested in meta search engines and want to give them a try, check out this list of the best ones. Here are the top meta search engines.

1] Mamma: This is a great website for web, news, image and video search results. It grabs information from various search engines, as mentioned in the definition above. The most interesting feature is the tab view, which makes it very easy to switch from web results to images and vice versa.

2] iBoogie: This is a better meta search engine than Mamma, as it uses various filters to show specific information. You can also choose the number of results shown per page, include or exclude particular domains, and more. The best part is that you get plenty of related search terms to help you find things faster.

3] Vroosh: This is another nice meta search engine that anyone can use. Although you won't find separate web or image search, you do get country-based search. For instance, if you are searching for something related to the US, you can choose the US version of Vroosh for better results. Similarly, you can choose the Canada or worldwide version of Vroosh.

4] Turbo Scout: Turbo Scout is probably the biggest meta search engine out there, as it grabs information from other meta search engines such as iThaki and Mamma. You can search for web pages, images, news, products, blogs and more using Turbo Scout. It returns more information than any other meta search engine.

5] Search: Search.com is popular for its simplicity and large number of features. It shows results much like Google: search results on the left-hand side, with ads and related search terms on the right. All of this makes the page feel like a Google results page.

6] Unabot: Unabot is a consolidation of all meta search engines. That means you get a huge list of meta search engines that can be used at any time. You can also refine searches by country; it works like Vroosh in that respect, giving you more accurate results.


There are many other meta search engines available for regular internet users. Generally, people do not bother with meta search engines because they find everything they need on Google and other regular search engines. But if you want more information under one roof, meta search engines are worth a look.

Source : http://www.thewindowsclub.com/meta-search-engine-list

Categorized in Search Engine

ON THE WEST coast of Australia, Amanda Hodgson is launching drones out towards the Indian Ocean so that they can photograph the water from above. The photos are a way of locating dugongs, or sea cows, in the bay near Perth—part of an effort to prevent the extinction of these endangered marine mammals. The trouble is that Hodgson and her team don’t have the time needed to examine all those aerial photos. There are too many of them—about 45,000—and spotting the dugongs is far too difficult for the untrained eye. So she’s giving the job to a deep neural network.

Neural networks are the machine learning models that identify faces in the photos posted to your Facebook news feed. They also recognize the questions you ask your Android phone, and they help run the Google search engine. Modeled loosely on the network of neurons in the human brain, these sweeping mathematical models learn all these things by analyzing vast troves of digital data. Now, Hodgson, a marine biologist at Murdoch University in Perth, is using this same technique to find dugongs in thousands of photos of open water, running her neural network on the same open-source software, TensorFlow, that underpins the machine learning services inside Google.

As Hodgson explains, detecting these sea cows is a task that requires a particular kind of pinpoint accuracy, mainly because these animals feed below the surface of the ocean. “They can look like whitecaps or glare on the water,” she says. But that neural network can now identify about 80 percent of dugongs spread across the bay.
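The detection approach can be pictured as a sliding-window scan over each aerial photo. The sketch below is a simplification, not Hodgson's actual pipeline: the `score_fn` parameter stands in for the trained TensorFlow model, and images are plain pixel grids.

```python
def detect_in_tiles(image, tile, score_fn, threshold=0.8):
    """Slide a non-overlapping tile window over a 2-D image and return
    the (row, col) offsets of tiles the classifier scores above
    `threshold`.

    `image` is a list of pixel rows, `tile` is (height, width), and
    `score_fn` stands in for a trained neural network that maps a patch
    of pixels to a dugong probability.
    """
    th, tw = tile
    hits = []
    for y in range(0, len(image) - th + 1, th):        # step one tile at a time
        for x in range(0, len(image[0]) - tw + 1, tw):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            if score_fn(patch) >= threshold:
                hits.append((y, x))
    return hits
```

With tens of thousands of photos, a scan like this turns an impossible manual review into a shortlist of candidate patches for a human to confirm.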

The project is still in the early stages, but it hints at the widespread impact of deep learning over the past year. In 2016, this very old but newly powerful technology helped a Google machine beat one of the world’s top players at the ancient game of Go—a feat that didn’t seem possible just a few months before. But that was merely the most conspicuous example. As the year comes to a close, deep learning isn’t a party trick. It’s not niche research. It’s remaking companies like Google, Facebook, Microsoft, and Amazon from the inside out, and it’s rapidly spreading to the rest of the world, thanks in large part to the open source software and cloud computing services offered by these giants of the internet.

The New Translation

In previous years, neural nets reinvented image recognition through apps like Google Photos, and they took speech recognition to new levels via digital assistants like Google Now and Microsoft Cortana. This year, they delivered the big leap in machine translation, the ability to automatically translate speech from one language to another. In September, Google rolled out a new service it calls Google Neural Machine Translation, which operates entirely through neural networks. According to the company, this new engine has reduced error rates between 55 and 85 percent when translating between certain languages.

Google trains these neural networks by feeding them massive collections of existing translations. Some of this training data is flawed, including lower quality translations from previous versions of the Google Translate app. But it also includes translations from human experts, and this buoys the quality of the training data as a whole. That ability to overcome imperfection is part of deep learning’s apparent magic: given enough data, even if some is flawed, it can train to a level well beyond those flaws.

Mike Schuster, a lead engineer on Google’s service, is happy to admit that his creation is far from perfect. But it still represents a breakthrough. Because the service runs entirely on deep learning, it’s easier for Google to continue improving the service. It can concentrate on refining the system as a whole, rather than juggling the many small parts that characterized machine translation services in the past.

Meanwhile, Microsoft is moving in the same direction. This month, it released a version of its Microsoft Translator app that can drive instant conversations between people speaking as many as nine different languages. This new system also runs almost entirely on neural nets, says Microsoft vice president Harry Shum, who oversees the company’s AI and research group. That’s important, because it means Microsoft’s machine translation is likely to improve more quickly as well.

The New Chat

In 2016, deep learning also worked its way into chatbots, most notably the new Google Allo. Released this fall, Allo will analyze the texts and photos you receive and instantly suggest potential replies. It’s based on an earlier Google technology called Smart Reply that does much the same with email messages. The technology works remarkably well, in large part because it respects the limitations of today’s machine learning techniques. The suggested replies are wonderfully brief, and the app always suggests more than one, because, well, today’s AI doesn’t always get things right.
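The suggest-several-replies behaviour can be illustrated with a toy scorer. A real system like Smart Reply scores candidates with neural networks; this hypothetical sketch just ranks canned replies by keyword overlap, but it shows the same always-offer-multiple-options shape.

```python
def suggest_replies(message, reply_bank, k=3):
    """Rank canned replies by trigger-word overlap with the message.

    `reply_bank` is a list of (reply_text, trigger_words) pairs. The
    keyword overlap here is a stand-in for a learned neural scorer.
    Returning several options mirrors the app's behaviour of never
    betting everything on a single suggestion.
    """
    words = set(message.lower().split())
    ranked = sorted(reply_bank, key=lambda pair: -len(words & pair[1]))
    return [reply for reply, _ in ranked[:k]]
```

For instance, given a bank containing ("On my way.", {"where", "late", "coming"}), an incoming "are you coming or running late" would push that reply to the top while still offering alternatives below it.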

Inside Allo, neural nets also help respond to the questions you ask of the Google search engine. They help the company’s search assistant understand what you’re asking, and they help formulate an answer. According to Google research product manager David Orr, the app’s ability to zero in on an answer wouldn’t be possible without deep learning. “You need to use neural networks—or at least that is the only way we have found to do it,” he says. “We have to use all of the most advanced technology we have.”

What neural nets can’t do is actually carry on a real conversation. That sort of chatbot is still a long way off, whatever tech CEOs have promised from their keynote stages. But researchers at Google, Facebook, and elsewhere are exploring deep learning techniques that help reach that lofty goal. The promise is that these efforts will provide the same sort of progress we’ve seen with speech recognition, image recognition, and machine translation. Conversation is the next frontier.

The New Data Center

This summer, after building an AI that cracked the game of Go, Demis Hassabis and his Google DeepMind lab revealed they had also built an AI that helps operate Google’s worldwide network of computer data centers. Using a technique called deep reinforcement learning, which underpins both their Go-playing machine and earlier DeepMind services that learned to master old Atari games, this AI decides when to turn on cooling fans inside the thousands of computer servers that fill these data centers, when to open the data center windows for additional cooling, and when to fall back on expensive air conditioners. All told, it controls over 120 functions inside each data center.
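
DeepMind has not published the internals of this system, but the core idea of reinforcement learning it names can be sketched in a few lines: an agent tries cooling actions, observes the resulting temperature and energy cost, and gradually learns which action pays off in each situation. The toy problem below is entirely invented for illustration (the states, costs, and rewards are made up, and it uses a simple lookup table rather than the deep neural networks a production system would need):

```python
import random

# Toy stand-in for a data-center cooling problem (all numbers invented).
# States: discretized temperature levels 0 (cool) .. 4 (too hot).
# Actions: 0 = fans (cheap), 1 = open windows (free, weak), 2 = A/C (costly, strong).
ACTIONS = [0, 1, 2]
ENERGY_COST = {0: 1.0, 1: 0.0, 2: 5.0}
COOLING = {0: 1, 1: 0, 2: 2}  # temperature levels removed per step

def step(temp, action):
    """Apply a cooling action, then add server heat; return (new_temp, reward)."""
    temp = max(0, temp - COOLING[action])
    temp = min(4, temp + random.choice([0, 1]))  # servers keep generating heat
    # Reward penalizes energy spent, plus a large penalty for overheating.
    reward = -ENERGY_COST[action] - (10.0 if temp == 4 else 0.0)
    return temp, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: learn the value of each (temperature, action) pair."""
    q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
    for _ in range(episodes):
        temp = random.randrange(5)
        for _ in range(50):
            if random.random() < epsilon:  # occasionally explore a random action
                action = random.choice(ACTIONS)
            else:  # otherwise exploit the best-known action
                action = max(ACTIONS, key=lambda a: q[(temp, a)])
            new_temp, reward = step(temp, action)
            best_next = max(q[(new_temp, a)] for a in ACTIONS)
            q[(temp, action)] += alpha * (reward + gamma * best_next - q[(temp, action)])
            temp = new_temp
    return q

if __name__ == "__main__":
    random.seed(0)
    q = train()
    for s in range(5):
        best = max(ACTIONS, key=lambda a: q[(s, a)])
        print(f"temp level {s}: best action = {best}")
```

In deep reinforcement learning, the lookup table `q` is replaced by a neural network, which is what lets the same idea scale from five toy states to the thousands of sensor readings a real data center produces.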

As Bloomberg reported, this AI is so effective, it saves Google hundreds of millions of dollars. In other words, it pays for the cost of acquiring DeepMind, which Google bought for about $650 million in 2014. Now, DeepMind plans to install additional sensors in these computing facilities, so it can collect additional data and train this AI to even higher levels.

The New Cloud

As they push this technology into their own products and services, the giants of the internet are also pushing it into the hands of others. At the end of 2015, Google open-sourced TensorFlow, and over the past year, this once-proprietary software spread well beyond the company’s walls, all the way to people like Amanda Hodgson. At the same time, Google, Microsoft, and Amazon began offering their deep learning tech via cloud computing services that any coder or company can use to build their own apps. Artificial intelligence-as-a-service may wind up as the biggest business for all three of these online giants.

Over the last twelve months, this burgeoning market spurred another AI talent grab. Google hired Stanford professor Fei-Fei Li, one of the biggest names in the world of AI research, to oversee a new cloud computing group dedicated to AI, and Amazon nabbed Carnegie Mellon professor Alex Smola to play much the same role inside its cloud empire. The big players are grabbing the world’s top AI talent as quickly as they can, leaving little for others. The good news is that this talent is sharing at least some of the resulting tech with anyone who wants it.

As AI evolves, the role of the computer scientist is changing. Sure, the world still needs people who can code software. But increasingly, it also needs people who can train neural networks, a very different skill that’s more about coaxing a result from the data than building something on your own. Companies like Google and Facebook are not only hiring a new kind of talent, but also reeducating their existing employees for this new future—a future where AI will come to define technology in the lives of just about everyone.

Author:  CADE METZ

Source:  https://www.wired.com/2016/12/2016-year-deep-learning-took-internet


Last Thursday, after weeks of criticism over its role in the proliferation of falsehoods and propaganda during the presidential election, Facebook announced its plan to combat “hoaxes” and “fake news.” The company promised to test new tools that would allow users to report misinformation, and to enlist fact-checking organizations including Snopes and PolitiFact to help litigate the veracity of links reported as suspect. By analyzing patterns of reading and sharing, the company said, it might be able to penalize articles that are shared at especially low rates by those who read them — a signal of dissatisfaction. Finally, it said, it would try to put economic pressure on bad actors in three ways: by banning disputed stories from its advertising ecosystem; by making it harder to impersonate credible sites on the platform; and, crucially, by penalizing websites that are loaded with too many ads.

Over the past month the colloquial definition of “fake news” has expanded beyond usefulness, implicating everything from partisan news to satire to conspiracy theories before being turned, finally, back against its creators. Facebook’s fixes address a far narrower definition. “We’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain,” wrote Adam Mosseri, a vice president for news feed, in a blog post.

Facebook’s political news ecosystem during the 2016 election was vast and varied. There was, of course, content created by outside news media that was shared by users, but there were also reams of content — posts, images, videos — created on Facebook-only pages, and still more media created by politicians themselves. During the election, it was apparent to almost anyone with an account that Facebook was teeming with political content, much of it extremely partisan or pitched, its sourcing sometimes obvious, other times obscured, and often simply beside the point — memes or rants or theories that spoke for themselves.

Facebook seems to have zeroed in on only one component of this ecosystem — outside websites — and within it, narrow types of bad actors. These firms are, generally speaking, paid by advertising companies independent of Facebook, which are unaware of or indifferent to their partners’ sources of audience. Accordingly, Facebook’s anti-hoax measures seek to regulate these sites by punishing them not just for what they do on Facebook, but for what they do outside of it.

“We’ve found that a lot of fake news is financially motivated,” Mosseri wrote. “Spammers make money by masquerading as well-known news organizations and posting hoaxes that get people to visit their sites, which are often mostly ads.” The proposed solution: “Analyzing publisher sites to detect where policy enforcement actions might be necessary.”

The stated targets of Facebook’s efforts are precisely defined, but its formulation of the problem implicates, to a lesser degree, much more than just “the worst of the worst.” Consider this characterization of what makes a “fake news” site a bad platform citizen: It uses Facebook to capture receptive audiences by spreading lies and then converts those audiences into money by borrowing them from Facebook, luring them to an outside site larded with obnoxious ads. The site’s sin of fabrication is made worse by its profit motive, which is cast here as a sort of arbitrage scheme. But an acceptable news site does more or less the same thing: It uses Facebook to capture receptive audiences by spreading not-lies and then converts those audiences into money by luring them to an outside site not-quite larded with not-as-obnoxious ads. In either case, Facebook users are being taken out of the safe confines of the platform into areas that Facebook does not and cannot control.

In this context, this “fake news” problem reads less as a distinct new phenomenon than as a flaring symptom of an older, more existential anxiety that Facebook has been grappling with for years: its continued (albeit diminishing) dependence on the same outside web that it, and other platforms, have begun to replace. Facebook’s plan for “fake news” is no doubt intended to curb certain types of misinformation. But it’s also a continuation of the company’s bigger and more consequential project — to capture the experiences of the web it wants and from which it can profit, but to insulate itself from the parts that it doesn’t and can’t. This may help solve a problem within the ecosystem of outside publishers — an ecosystem that, in the distribution machinery of Facebook, is becoming redundant, and perhaps even obsolete.

As Facebook has grown, so have its ambitions. Its mantralike mission (to “connect the world”) is rivaled among internet companies perhaps by only that of Google (to “organize the world’s information”) in terms of sheer scope. In the run-up to Facebook’s initial public offering, Mark Zuckerberg told investors that the company makes decisions “not optimizing for what’s going to happen in the next year, but to set us up to really be in this world where every product experience you have is social, and that’s all powered by Facebook.”

To understand what such ambition looks like in practice, consider Facebook’s history. It started as an inward-facing website, closed off from both the web around it and the general public. It was a place to connect with other people, and where content was created primarily by other users: photos, wall posts, messages. This system quickly grew larger and more complex, leading to the creation, in 2006, of the news feed — a single location in which users could find updates from all of their Facebook friends, in roughly reverse-chronological order.

When the news feed was announced, before the emergence of the modern Facebook sharing ecosystem, Facebook’s operating definition of “news” was pointedly friend-centric. “Now, whenever you log in, you’ll get the latest headlines generated by the activity of your friends and social groups,” the announcement about the news feed said. This would soon change.

In the ensuing years, as more people spent more time on Facebook, and following the addition of “Like” and “Share” functions within Facebook, the news feed grew into a personalized portal not just for personal updates but also for the cornucopia of media that existed elsewhere online: links to videos, blog posts, games and more or less anything else published on an external website, including news articles. This potent mixture accelerated Facebook’s change from a place for keeping up with family and friends to a place for keeping up, additionally, with the web in general, as curated by your friends and family. Facebook’s purview continued to widen as its user base grew and then acquired their first smartphones; its app became an essential lens through which hundreds of millions of people interacted with one another, with the rest of the web and, increasingly, with the world at large.

Facebook, in other words, had become an interface for the whole web rather than just one more citizen of it. By sorting and mediating the internet, Facebook inevitably began to change it. In the previous decade, the popularity of Google influenced how websites worked, in noticeable ways: Titles and headlines were written in search-friendly formats; pages or articles would be published not just to cover the news but, more specifically, to address Google searchers’ queries about the news, the canonical example being The Huffington Post’s famous “What Time Does The Super Bowl Start?” Publishers built entire business models around attracting search traffic, and search-engine optimization, S.E.O., became an industry unto itself. Facebook’s influence on the web — and in particular, on news publishers — was similarly profound. Publishers began taking into consideration how their headlines, and stories, might travel within Facebook. Some embraced the site as a primary source of visitors; some pursued this strategy into absurdity and exploitation.

Facebook, for its part, paid close attention to the sorts of external content people were sharing on its platform and to the techniques used by websites to get an edge. It adapted continually. It provided greater video functionality, reducing the need to link to outside videos or embed them from YouTube. As people began posting more news, it created previews for links, with larger images and headlines and longer summaries; eventually, it created Instant Articles, allowing certain publishers (including The Times) to publish stories natively in Facebook. At the same time, it routinely sought to penalize sites it judged to be using the platform in bad faith, taking aim at “clickbait,” an older cousin of “fake news,” with a series of design and algorithm updates. As Facebook’s influence over online media became unavoidably obvious, its broad approach to users and the web became clearer: If the network became a popular venue for a certain sort of content or behavior, the company generally and reasonably tried to make that behavior easier or that content more accessible. This tended to mean, however, bringing it in-house.

To Facebook, the problem with “fake news” is not just the obvious damage to the discourse, but also the harm it inflicts upon the platform. People sharing hoax stories were, presumably, happy enough with what they were seeing. But the people who would then encounter those stories in their feeds were subjected to a less positive experience. They were sent outside the platform to a website where they realized they were being deceived, or where they were exposed to ads or something that felt like spam, or where they were persuaded to share something that might later make them look like a rube. These users might rightly associate these experiences not just with their friends on the platform, or with the sites peddling the bogus stories, but also with the platform itself. This created, finally, an obvious issue for a company built on attention, advertising and the promotion of outside brands. From the platform’s perspective, “fake news” is essentially a user-experience problem resulting from a lingering design issue — akin to slow-loading news websites that feature auto-playing videos and obtrusive ads.

Increasingly, legitimacy within Facebook’s ecosystem is conferred according to a participant’s relationship to the platform’s design. A verified user telling a lie, be it a friend from high school or the president-elect, isn’t breaking the rules; he is, as his checkmark suggests, who he represents himself to be. A post making false claims about a product is Facebook’s problem only if that post is labeled an ad. A user video promoting a conspiracy theory becomes a problem only when it leads to the violation of community guidelines against, for example, user harassment. Facebook contains a lot more than just news, including a great deal of content that is newslike, partisan, widely shared and often misleading: content that has been, and will be, immune from current “fake news” critiques and crackdowns, because it never had the opportunity to declare itself news in the first place. To publish lies as “news” is to break a promise; to publish lies as “content” is not.

That the “fake news” problem and its proposed solutions have been defined by Facebook as link issues — as a web issue — aligns nicely with a longer-term future in which Facebook’s interface with the web is diminished. Indeed, it heralds the coming moment when posts from outside are suspect by default: out of place, inefficient, little better than spam.


Source: http://www.nytimes.com/2016/12/22/magazine/facebooks-problem-isnt-fake-news-its-the-rest-of-the-internet.html?_r=1

