Last week I speculated that the current horrible state of internet security may well be as good as we're ever going to get. I focused on the technical and historical reasons why I believe that to be true. Today, I'll tell you why I'm convinced that, even if we were able to solve the technical issues, we'll still end up running in place.

Global agreement is tough

Have you ever gotten total agreement on a single issue with your immediate family? If so, then your family is nothing like mine. Heck, I have a hard time getting my wife to agree with 50 percent of what I say. At best I get eye rolls from my kids. Let's just say I'm not cut out to be a career politician.

Now think about trying to get the entire world to agree on how to fix internet security, particularly when most of the internet was created and deployed before it went global.

Over the last two decades, just about every major update to the internet we've proposed to the world has been shot down. We get small fixes, but nothing big. We've seen moderate, incremental improvement in a few places, such as better authentication or digital certificate revocation, but even that requires leadership by a giant like Google or Microsoft. Those updates only apply to those who choose to participate -- and they still take years to implement.

Most of the internet's underlying protocols and participants are completely voluntary. That's its beauty and its curse. These protocols have become so widely popular, they're de facto standards. Think about using the Internet without DNS. Can you imagine having to remember specific IP addresses to go online shopping?


A handful of international bodies review and approve the major protocols and rules that allow the internet to function as it does today (here's a great summary article on who "runs" the internet). To that list you should add vendors who make the software and devices that run on and connect to the Internet; vendor consortiums, such as the FIDO Alliance; and many other groups that exert influence and control.

That diversity makes any global agreement to improve Internet security almost impossible. Instead, changes tend to happen through majority rule that drags the rest of the world along. So in one sense, we can get things done even when everyone doesn't agree. Unfortunately, that doesn't solve an even bigger problem.

Governments don't want the internet to be more secure

If there is one thing all governments agree on, it's that they want the ability to bypass people's privacy whenever and wherever the need arises. Even with laws in place to limit privacy breaches, governments routinely and without fear of punishment violate protective statutes.

To really improve internet security, we'd have to make every communication stream encrypted and signed by default. But they would be invisible to governments, too. That's just not going to happen. Governments want to continue to have unfettered access to your private communications.

Democratic governments are supposedly run by the people for the people. But even in countries where that's the rule of law, it isn't true. All governments invade privacy in the name of protection. That genie will never be put back in the bottle. The people lost. We need to get over it.

The only way it might happen

I've said it before and I'll say it again: The only way I can imagine internet security improving dramatically is if a global tipping-point disaster occurs -- and allows us to obtain shared, broad agreement. Citizen outrage and agreement would have to be so strong, it would override the objections of government. Nothing else is likely to work.

I've been waiting for this to happen for nearly three decades, the most recent of which has been marked by unimaginably huge data breaches. I'm not getting my hopes up any time soon.

Author : Roger A. Grimes

Source : http://www.infoworld.com/article/3152818/security/the-real-reason-we-cant-secure-the-internet.html

Categorized in Internet Privacy

There is no doubt that the internet and smartphones have changed everything from the way people shop to how they communicate.

As internet penetration levels keep rising, a steady stream of industry buzzwords has been coined and gone viral across the country.

Here we excerpt some representative topics that were most discussed across the internet this year.

Young man's death causes an uproar for online search giant

In April, Chinese internet giant Baidu Inc was criticized for influencing the treatment choice of a cancer patient, Wei Zexi, by presenting misleading medical information.


Wei, 22, died after undergoing a controversial cancer treatment at a Beijing hospital, which the Wei family found through Baidu's online search platform.

The case was hotly discussed in the country's online community and the Cyberspace Administration of China (CAC), the nation's internet regulator, later asked Baidu to improve its paid-for listings model and to rank the search results mainly according to credibility rather than price tags.

On June 25, the CAC publicized a regulation on search engines, ordering search providers to ensure objective, fair and authoritative search results.

All paid search results must be labeled clearly and checks on advertisers should be improved, according to the regulation. There also should be a limit on the number of paid results on a single page.

Moreover, the practice of blocking negative content concerning advertisers has been banned.

Year-ender: Most talked-about topics on the Chinese internet

Jia Yueting, co-founder and head of Le Holdings Co Ltd, also known as LeEco and, formerly, as LeTV, gestures as he unveils an all-electric battery "concept" car called LeSEE during a ceremony in Beijing. [Photo/Agencies]

A founder struggles to stay solvent, let alone be a game-changer

Chinese technology company LeEco founder Jia Yueting recently released an internal letter to his employees, indirectly admitting some of the rumors about supply chain and capital issues that caused the company's shares to plummet.


In the letter, Jia spoke for the first time about his view of the company's overly rapid growth.

"There is a problem with LeEco's growth pace and organizational capacities," Jia said, adding that the company's global expansion had gone too far despite limited capital and resources.

Jia revealed that the company spent heavily (about 10 billion yuan) on the LeSEE all-electric concept car in its early stages. The company unveiled the vehicle, a rival to Tesla's Model S, in April.

On Nov 2, shares of Leshi Internet Information and Technology, which went public in 2010, fell nearly 7.5 percent on rumors that LeEco defaulted on payment for suppliers.

Jia said the company will address the capital issues in three to four months.

LeEco, founded in 2004, started as a video-streaming service provider akin to Netflix Inc, but it rapidly grew into a firm with a presence in smartphones, TVs, cloud computing, sports and electric cars.


A woman looks at her mobile phone as she rides an escalator past an advertisement for Samsung's Galaxy Note 7 device at a Samsung store in the Gangnam district of Seoul. [Photo/Agencies]

Exploding phones put the manufacturer in the hot seat

In mid-October, China's product quality watchdog said that Samsung Electronics Co Ltd's local unit would recall all 190,984 Galaxy Note 7 phones that it sold in China.

The latest recall in China includes the 1,858 early-release Galaxy Note 7 smartphones that the watchdog recalled on September 14.


Samsung said earlier that it had decided to stop selling Note 7 phones in China and was communicating with the Chinese authorities to deal with the matter.

The tech giant decided to temporarily halt the global sales and exchange of its Galaxy Note 7 smartphones, while it investigates reports of fires in the devices.

On Sept 2, Samsung suspended sales of the Galaxy Note 7 and announced a "product exchange program", after it was found that a manufacturing defect in the phones' batteries had caused some of the handsets to generate excessive heat, resulting in fires and explosions.

However, in early October, reports emerged of incidents where these replacement phones also caught fire.


The new Apple iPhone 6S and 6S Plus are displayed during an Apple media event in San Francisco, California, in this file photo from September 9, 2015. [Photo/Agencies]

Consumers question smartphone's mystery shutdowns

US tech giant Apple Inc on Dec 2 announced the reason behind an abrupt shutdown problem that recently affected some users of the iPhone 6s.

"We found that a small number of iPhone 6s devices made in September and October 2015 contained a battery component that was exposed to controlled ambient air longer than it should have been before being assembled into battery packs. As a result, these batteries degrade faster than a normal battery and cause unexpected shutdowns to occur," a statement posted on Apple's official website said.

The company also explained in the note that this was not a safety issue.

The statement was released after China's consumer protection watchdog - China Consumer Association (CCA) - issued a query letter earlier to ask the company to explain and provide solutions to malfunctions reportedly found in iPhones.

According to the CCA, many consumers continued to complain after Apple announced a free battery replacement program for iPhone 6s users, claiming that the abrupt shutdown problem also exists in iPhone 6, iPhone 6 Plus and iPhone 6s Plus models.

On Nov 21, Apple introduced a free replacement program, to resolve recent reports of the unexpected shutdown of the iPhone 6s.


A woman uses Uber Technologies Inc's car-hailing service via an electronic screen in Tianjin.[Provided to China Daily]

Taxi-hailing app implements localized strategy

Uber China shut down its old mobile app on the last weekend of November, replacing it with a new one that integrates its functions and driver pool with Didi Chuxing's, four months after the companies' merger.

Didi acquired Uber's China operations in August and became the No 1 ride-hailing service provider in China with 15 million drivers and over 400 million registered users.

All the Uber China platform's drivers and users were urged to move to a new interface introduced in early November.

Foreigners with Uber accounts are also required to download the new app if they would like to use Uber services in China.

Prior to the tie-up, Uber was one of very few foreign tech firms able to compete with domestic rivals head-on in China. Though Didi had bigger market share, Uber managed to gain a foothold in lower-tier cities. The two had been locked in fierce price wars to compete for market share.

Source : http://www.chinadaily.com.cn/bizchina/tech/2016-12/21/content_27727996_5.htm

Categorized in Search Engine

The amount of sexism on the internet is depressingly self-evident. Women in particular who speak their minds online are frequently attacked on the basis of their gender, and often in horrifyingly graphic ways. But what about the internet itself? Could there be inherent characteristics in its very structure that are sexist or gender-biased?

It would seem so. To give you an idea, type ‘engineer’ or ‘managing director’ into a search engine and look at the images. You’ll find that the vast majority are of men. The stereotypes work both ways, of course. Type in ‘nurse’ and most of the images will be of women. Although this may simply reflect society as it stands, there is an argument to be made that, intentionally or otherwise, it also reinforces gender stereotyping. Given how influential the internet is on people’s perception of the world – a fact laid bare recently in both Brexit and the US Elections – isn’t there a responsibility among tech giants like Google, Yahoo, Microsoft and Facebook to fight the kind of prejudices that too often see internet users inhabit echo chambers where their own biases are reflected back at them?


It’s a question fraught with moral issues. On the one hand, search engines are automated and simply display the most common searches. On the other, attempting to censor these facts of internet life is equally dubious, not only because it amounts to a denial of the issue, but because it sets a scary precedent, potentially providing a gateway into all kinds of Orwellian thought control.

Nevertheless, the issue is not about to go away, and making people more socially aware of gender bias on the internet is the first step in trying to find a solution. The problem was highlighted brilliantly in a UN campaign in 2013 concerned with women’s rights. It showed women’s faces with their mouths covered by the Google search bar and various auto-complete options, such as ‘women need’ transforming into ‘knowing their place’. It was also effectively publicised by TED.com editor Emily McManus who, when attempting an internet search to find an English student who taught herself calculus, was asked by Google, ‘Do you mean a student who taught himself calculus?’ McManus’s subsequent screenshot was retweeted thousands of times and became a worldwide news story.

Part of the issue stems from a lack of gender balance in the tech industry itself. Office for National Statistics figures from 2014 reveal that in the UK there are 723,000 male compared to 124,000 female professionals in the IT industry. In 2015, according to the companies’ own figures, only 17% of Microsoft’s technical staff were women, while men made up 83% of Google’s engineering staff and 80% of Apple’s technical staff. It’s true that these industries have put various initiatives in place to try to redress this balance, like Google’s ‘Made with Code’ or Microsoft’s ‘Women in tech’, spearheaded by Melinda Gates, but there’s clearly still a long way to go.

Although women are unquestionably the most disadvantaged when it comes to gender bias on the internet, men don’t escape stereotyping either. For example, with women making inroads into high-powered, well-paid jobs there are consequently more men taking on domestic roles or becoming stay-at-home dads. Trying to find this reflected on the internet is just as hard as trying to find female engineers. The attitude is still very much that if a man isn’t the ‘breadwinner’ he’s not really a man – type ‘homemaker’ in and see what comes up. Likewise, even as men’s involvement in child-rearing is transforming, the internet still fails to accurately represent such a significant social shift.


So what’s to be done, besides simply switching off the predictive function in settings? It seems some new approaches are being experimented with, ones that strike a balance between using the predictive function – which is otherwise a useful tool – and maintaining an element of choice. For example, global Swedish tech company Semcon has come up with a browser extension called Re-Search. This doesn’t stop the predictive function acting in its usual fashion, but it does provide an alternative search result that aims to give men and women more equal space in the search results.

Says project manager Anna Funke: “If engineers are portrayed as men in yellow helmets, how can women feel that the job might be of interest to them? Role models are important when young people are thinking about their career choices and the internet is the first place many people look for information.” Semcon are making the software available free of charge, and it’s also open source, in the hope that it will encourage individuals and companies to develop the product further and find their own ways to spur on greater gender equality across the internet.

It’s worth remembering though, that when the internet first appeared back in the 1990s, it was hailed as a great democratic technology. Despite the ways in which states, corporations or individuals attempt to manipulate it, it remains just that, reflecting what we are, even when that’s pretty unpalatable. Ultimately then, if we’re going to have an internet that better reflects equality, openness and decency, it’s down to all of us who use it.

Author:  Robert Bright

Source:  http://www.huffingtonpost.co.uk/entry/the-great-gender-gap-debate-is-the-internet-bias-to-either-sex_uk_583d99d1e4b090a702a650c9

Categorized in Others

Nowadays, search engines have evolved dramatically. For years we have had Google, Bing and Yahoo to search for specific information, though they do not always shine when it comes to knowledge graphs and other smart features. But now you can also find various alternative search engines as well as meta search engines. Some examples are Mamma, iBoogie, Vroosh, TurboScout, Unabot and Search.

What is a meta search engine?

Generally, you search for information on Google, Bing or Yahoo. But do you know where those search engines get their information? The source is websites like TheWindowsClub.com: search engines index blogs and websites and pull information from them. Meta search engines, in turn, pull their results from those search engines.
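The flow described above, where engines index websites and a meta engine queries the engines, can be sketched in a few lines of Python. The engine result lists and URLs below are stand-ins for illustration; a real meta search engine would query live search APIs instead.

```python
# Minimal meta search sketch: merge ranked result lists from several
# engines into one deduplicated list. Engine outputs are stubbed here;
# a real meta search engine would query live search APIs instead.

def merge_results(results_by_engine):
    """Interleave each engine's ranked results, dropping duplicate URLs."""
    seen, merged = set(), []
    deepest = max(len(r) for r in results_by_engine.values())
    # Walk rank positions round-robin so every engine's top hits surface early.
    for rank in range(deepest):
        for results in results_by_engine.values():
            if rank < len(results) and results[rank] not in seen:
                seen.add(results[rank])
                merged.append(results[rank])
    return merged

# Stubbed ranked results for one query (hypothetical URLs).
engines = {
    "google": ["a.example", "b.example", "c.example"],
    "bing":   ["b.example", "d.example"],
    "yahoo":  ["a.example", "e.example"],
}
print(merge_results(engines))  # → ['a.example', 'b.example', 'd.example', 'e.example', 'c.example']
```

The round-robin merge is one simple policy; real meta engines also weight engines differently and re-rank by credibility.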


Best Meta Search Engine List

If you are interested in meta search engines and want to give them a try, check out this list of the best. Here are the top meta search engines.


1] Mamma: This is a great website to get web, news, image and video search results. It grabs information from various search engines, as mentioned in the definition. The most interesting thing is the tabbed view, which makes it very easy to switch from web results to image results and vice versa.

2] iBoogie: This is a better meta search engine than Mamma, as it uses various filters to show specific information. You can also choose the number of results shown per page, include or exclude particular domains, and more. The best part is that you get plenty of related search terms to find things faster.

3] Vroosh: This is yet another nice meta search engine that anyone can use. Although it doesn't offer separate web or image search, you do get country-based search. For instance, if you are searching for something related to the US, you can choose the US version of Vroosh for better results. Similarly, there are Canadian and worldwide versions of Vroosh.

4] Turbo Scout: Turbo Scout is probably the biggest meta search engine out there, as it grabs information from other meta search engines such as iThaki and Mamma. You can search for web, images, news, products, blogs and more using Turbo Scout. It returns more information than any other meta search engine.

5] Search: Search.com is popular for its simplicity and large number of features. It presents results much like Google: search results on the left-hand side, with ads and related search terms on the right.

6] Unabot: Unabot is a consolidation of meta search engines: it lists a huge number of them, ready to use at any time. You can also refine searches by country; like Vroosh, this tends to produce more accurate results.


There are many other meta search engines available. Most users never bother with them, because they find everything they need on Google and other regular search engines. But if you want more information under one roof, meta search engines are worth a look.

Source : http://www.thewindowsclub.com/meta-search-engine-list

Categorized in Search Engine

ON THE WEST coast of Australia, Amanda Hodgson is launching drones out towards the Indian Ocean so that they can photograph the water from above. The photos are a way of locating dugongs, or sea cows, in the bay near Perth—part of an effort to prevent the extinction of these endangered marine mammals. The trouble is that Hodgson and her team don’t have the time needed to examine all those aerial photos. There are too many of them—about 45,000—and spotting the dugongs is far too difficult for the untrained eye. So she’s giving the job to a deep neural network.

Neural networks are the machine learning models that identify faces in the photos posted to your Facebook news feed. They also recognize the questions you ask your Android phone, and they help run the Google search engine. Modeled loosely on the network of neurons in the human brain, these sweeping mathematical models learn all these things by analyzing vast troves of digital data. Now, Hodgson, a marine biologist at Murdoch University in Perth, is using this same technique to find dugongs in thousands of photos of open water, running her neural network on the same open-source software, TensorFlow, that underpins the machine learning services inside Google.

As Hodgson explains, detecting these sea cows is a task that requires a particular kind of pinpoint accuracy, mainly because these animals feed below the surface of the ocean. “They can look like whitecaps or glare on the water,” she says. But that neural network can now identify about 80 percent of dugongs spread across the bay.
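As a rough illustration of the survey task, here is a sketch of scanning an aerial photo tile by tile. Hodgson's real classifier is a TensorFlow neural network; the stub scorer below (brightness-based, entirely invented) stands in for it so the scanning logic runs on its own.

```python
# Sketch of tile-based aerial survey detection (illustration only).
# The real project classifies tiles with a TensorFlow neural network;
# a made-up brightness scorer stands in for it here.

def scan_image(tiles, classify, threshold=0.8):
    """Return (x, y) offsets of tiles the classifier scores above threshold."""
    return [(x, y) for (x, y, tile) in tiles if classify(tile) >= threshold]

def stub_classify(tile):
    """Stand-in scorer: pretend bright tiles (mean pixel > 200) are dugongs."""
    return 0.9 if sum(tile) / len(tile) > 200 else 0.1

# Three fake 4-pixel tiles with (x, y) offsets into the source photo.
tiles = [
    (0, 0, [10, 20, 15, 12]),       # dark open water
    (0, 64, [230, 240, 250, 220]),  # bright blob: candidate dugong
    (64, 0, [90, 100, 110, 95]),    # glare or whitecap
]
print(scan_image(tiles, stub_classify))  # → [(0, 64)]
```

Swapping `stub_classify` for a trained network is what turns 45,000 photos from an impossible manual job into an automated one.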

The project is still in the early stages, but it hints at the widespread impact of deep learning over the past year. In 2016, this very old but newly powerful technology helped a Google machine beat one of the world’s top players at the ancient game of Go—a feat that didn’t seem possible just a few months before. But that was merely the most conspicuous example. As the year comes to a close, deep learning isn’t a party trick. It’s not niche research. It’s remaking companies like Google, Facebook, Microsoft, and Amazon from the inside out, and it’s rapidly spreading to the rest of the world, thanks in large part to the open source software and cloud computing services offered by these giants of the internet.


The New Translation

In previous years, neural nets reinvented image recognition through apps like Google Photos, and they took speech recognition to new levels via digital assistants like Google Now and Microsoft Cortana. This year, they delivered the big leap in machine translation, the ability to automatically translate speech from one language to another. In September, Google rolled out a new service it calls Google Neural Machine Translation, which operates entirely through neural networks. According to the company, this new engine has reduced error rates between 55 and 85 percent when translating between certain languages.

Google trains these neural networks by feeding them massive collections of existing translations. Some of this training data is flawed, including lower quality translations from previous versions of the Google Translate app. But it also includes translations from human experts, and this buoys the quality of the training data as a whole. That ability to overcome imperfection is part of deep learning’s apparent magic: given enough data, even if some is flawed, it can train to a level well beyond those flaws.

Mike Schuster, a lead engineer on Google’s service, is happy to admit that his creation is far from perfect. But it still represents a breakthrough. Because the service runs entirely on deep learning, it’s easier for Google to continue improving the service. It can concentrate on refining the system as a whole, rather than juggling the many small parts that characterized machine translation services in the past.

Meanwhile, Microsoft is moving in the same direction. This month, it released a version of its Microsoft Translator app that can drive instant conversations between people speaking as many as nine different languages. This new system also runs almost entirely on neural nets, says Microsoft vice president Harry Shum, who oversees the company’s AI and research group. That’s important, because it means Microsoft’s machine translation is likely to improve more quickly as well.


The New Chat

In 2016, deep learning also worked its way into chatbots, most notably the new Google Allo. Released this fall, Allo will analyze the texts and photos you receive and instantly suggest potential replies. It’s based on an earlier Google technology called Smart Reply that does much the same with email messages. The technology works remarkably well, in large part because it respects the limitations of today’s machine learning techniques. The suggested replies are wonderfully brief, and the app always suggests more than one, because, well, today’s AI doesn’t always get things right.
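The design point here, always offering several short candidates because the model isn't always right, can be sketched as simple top-k selection over model-scored replies. The candidate replies and scores below are invented for illustration.

```python
# Sketch of the "suggest more than one reply" design: rank candidate
# replies by model score and always surface the top k, never just one.
# The candidates and scores here are made up for illustration.

def suggest_replies(scored_candidates, k=3):
    """Return the k highest-scoring brief replies, best first."""
    ranked = sorted(scored_candidates, key=lambda pair: pair[1], reverse=True)
    return [reply for reply, _ in ranked[:k]]

candidates = [("Sounds good!", 0.91), ("Sure", 0.85),
              ("Can't make it", 0.40), ("On my way", 0.77)]
print(suggest_replies(candidates))  # → ['Sounds good!', 'Sure', 'On my way']
```

Showing the top three rather than the single best reply is a hedge against model error: the user, not the network, makes the final call.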

Inside Allo, neural nets also help respond to the questions you ask of the Google search engine. They help the company’s search assistant understand what you’re asking, and they help formulate an answer. According to Google research product manager David Orr, the app’s ability to zero in on an answer wouldn’t be possible without deep learning. “You need to use neural networks—or at least that is the only way we have found to do it,” he says. “We have to use all of the most advanced technology we have.”

What neural nets can’t do is actually carry on a real conversation. That sort of chatbot is still a long way off, whatever tech CEOs have promised from their keynote stages. But researchers at Google, Facebook, and elsewhere are exploring deep learning techniques that help reach that lofty goal. The promise is that these efforts will provide the same sort of progress we’ve seen with speech recognition, image recognition, and machine translation. Conversation is the next frontier.

The New Data Center

This summer, after building an AI that cracked the game of Go, Demis Hassabis and his Google DeepMind lab revealed they had also built an AI that helps operate Google’s worldwide network of computer data centers. Using a technique called deep reinforcement learning, which underpins both their Go-playing machine and earlier DeepMind services that learned to master old Atari games, this AI decides when to turn on cooling fans inside the thousands of computer servers that fill these data centers, when to open the data center windows for additional cooling, and when to fall back on expensive air conditioners. All told, it controls over 120 functions inside each data center.
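DeepMind's actual system learns its policy with deep reinforcement learning; as a toy stand-in, the decision it automates can be sketched as choosing the cheapest cooling action whose predicted effect keeps the servers within a temperature limit. All costs, effects, and thresholds below are invented for illustration.

```python
# Toy stand-in for the data-center cooling decision. DeepMind's real
# system learns a policy with deep reinforcement learning; this greedy
# rule only illustrates the choice being automated. All numbers invented.

# Each action: (name, cost per hour in arbitrary units, cooling effect in C)
ACTIONS = [("fans", 1.0, 3.0), ("open_windows", 0.2, 2.0), ("aircon", 5.0, 8.0)]

def choose_action(temp_c, limit_c=27.0):
    """Pick the cheapest action whose predicted cooling keeps temp under limit."""
    viable = [(cost, name) for name, cost, effect in ACTIONS
              if temp_c - effect <= limit_c]
    if not viable:
        return "aircon"  # nothing suffices: fall back on the strongest option
    return min(viable)[1]

print(choose_action(28.5))  # mild overheat: opening the windows is enough
print(choose_action(36.0))  # serious overheat: only air conditioning left
```

A learned controller improves on this rule by anticipating load and weather instead of reacting to the current temperature alone.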

As Bloomberg reported, this AI is so effective, it saves Google hundreds of millions of dollars. In other words, it pays for the cost of acquiring DeepMind, which Google bought for about $650 million in 2014. Now, DeepMind plans to install additional sensors in these computing facilities, so it can collect additional data and train this AI to even higher levels.

The New Cloud

As they push this technology into their own products as services, the giants of the internet are also pushing it into the hands of others. At the end of 2015, Google open sourced TensorFlow, and over the past year, this once-proprietary software spread well beyond the company’s walls, all the way to people like Amanda Hodgson. At the same time, Google, Microsoft, and Amazon began offering their deep learning tech via cloud computing services that any coder or company can use to build their own apps. Artificial intelligence-as-a-service may wind up as the biggest business for all three of these online giants.

Over the last twelve months, this burgeoning market spurred another AI talent grab. Google hired Stanford professor Fei-Fei Li, one of the biggest names in the world of AI research, to oversee a new cloud computing group dedicated to AI, and Amazon nabbed Carnegie Mellon professor Alex Smola to play much the same role inside its cloud empire. The big players are grabbing the world’s top AI talent as quickly as they can, leaving little for others. The good news is that this talent is working to share at least some of the resulting tech they develop with anyone who wants it.

As AI evolves, the role of the computer scientist is changing. Sure, the world still needs people who can code software. But increasingly, it also needs people who can train neural networks, a very different skill that’s more about coaxing a result from the data than building something on your own. Companies like Google and Facebook are not only hiring a new kind of talent, but also reeducating their existing employees for this new future—a future where AI will come to define technology in the lives of just about everyone.

Author:  CADE METZ

Source:  https://www.wired.com/2016/12/2016-year-deep-learning-took-internet

Categorized in Deep Web

Last Thursday, after weeks of criticism over its role in the proliferation of falsehoods and propaganda during the presidential election, Facebook announced its plan to combat “hoaxes” and “fake news.” The company promised to test new tools that would allow users to report misinformation, and to enlist fact-checking organizations including Snopes and PolitiFact to help litigate the veracity of links reported as suspect. By analyzing patterns of reading and sharing, the company said, it might be able to penalize articles that are shared at especially low rates by those who read them — a signal of dissatisfaction. Finally, it said, it would try to put economic pressure on bad actors in three ways: by banning disputed stories from its advertising ecosystem; by making it harder to impersonate credible sites on the platform; and, crucially, by penalizing websites that are loaded with too many ads.
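The read-versus-share signal described above can be sketched directly: flag articles that many people read but few go on to share. The threshold values and example numbers below are invented for illustration.

```python
# Sketch of the dissatisfaction signal described above: articles read
# widely but shared at an unusually low rate get flagged for review.
# Thresholds and example figures are invented for illustration.

def flag_low_share_rate(article_stats, min_reads=100, rate_floor=0.02):
    """Flag (url, reads, shares) entries read widely but rarely shared."""
    flagged = []
    for url, reads, shares in article_stats:
        # Skip low-traffic articles: too few reads to judge the rate.
        if reads >= min_reads and shares / reads < rate_floor:
            flagged.append(url)
    return flagged

stats = [("hoax.example/story", 5000, 40),    # 0.8% share rate: suspicious
         ("news.example/report", 5000, 450),  # 9% share rate: fine
         ("tiny.example/post", 50, 0)]        # too few reads to judge
print(flag_low_share_rate(stats))  # → ['hoax.example/story']
```

The minimum-reads floor matters: without it, any obscure post with zero shares would be penalized on no evidence.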

Over the past month the colloquial definition of “fake news” has expanded beyond usefulness, implicating everything from partisan news to satire to conspiracy theories before being turned, finally, back against its creators. Facebook’s fixes address a far narrower definition. “We’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain,” wrote Adam Mosseri, a vice president for news feed, in a blog post.


Facebook’s political news ecosystem during the 2016 election was vast and varied. There was, of course, content created by outside news media that was shared by users, but there were also reams of content — posts, images, videos — created on Facebook-only pages, and still more media created by politicians themselves. During the election, it was apparent to almost anyone with an account that Facebook was teeming with political content, much of it extremely partisan or pitched, its sourcing sometimes obvious, other times obscured, and often simply beside the point — memes or rants or theories that spoke for themselves.

Facebook seems to have zeroed in on only one component of this ecosystem — outside websites — and within it, narrow types of bad actors. These firms are, generally speaking, paid by advertising companies independent of Facebook, which are unaware of or indifferent to their partners’ sources of audience. Accordingly, Facebook’s anti-hoax measures seek to regulate these sites by punishing them not just for what they do on Facebook, but for what they do outside of it.

“We’ve found that a lot of fake news is financially motivated,” Mosseri wrote. “Spammers make money by masquerading as well-known news organizations and posting hoaxes that get people to visit to their sites, which are often mostly ads.” The proposed solution: “Analyzing publisher sites to detect where policy enforcement actions might be necessary.”

The stated targets of Facebook’s efforts are precisely defined, but its formulation of the problem implicates, to a lesser degree, much more than just “the worst of the worst.” Consider this characterization of what makes a “fake news” site a bad platform citizen: It uses Facebook to capture receptive audiences by spreading lies and then converts those audiences into money by borrowing them from Facebook, luring them to an outside site larded with obnoxious ads. The site’s sin of fabrication is made worse by its profit motive, which is cast here as a sort of arbitrage scheme. But an acceptable news site does more or less the same thing: It uses Facebook to capture receptive audiences by spreading not-lies and then converts those audiences into money by luring them to an outside site not-quite larded with not-as-obnoxious ads. In either case, Facebook users are being taken out of the safe confines of the platform into areas that Facebook does not and cannot control.

In this context, this “fake news” problem reads less as a distinct new phenomenon than as a flaring symptom of an older, more existential anxiety that Facebook has been grappling with for years: its continued (albeit diminishing) dependence on the same outside web that it, and other platforms, have begun to replace. Facebook’s plan for “fake news” is no doubt intended to curb certain types of misinformation. But it’s also a continuation of the company’s bigger and more consequential project — to capture the experiences of the web it wants and from which it can profit, but to insulate itself from the parts that it doesn’t and can’t. This may help solve a problem within the ecosystem of outside publishers — an ecosystem that, in the distribution machinery of Facebook, is becoming redundant, and perhaps even obsolete.


As Facebook has grown, so have its ambitions. Its mantralike mission (to “connect the world”) is rivaled among internet companies perhaps by only that of Google (to “organize the world’s information”) in terms of sheer scope. In the run-up to Facebook’s initial public offering, Mark Zuckerberg told investors that the company makes decisions “not optimizing for what’s going to happen in the next year, but to set us up to really be in this world where every product experience you have is social, and that’s all powered by Facebook.”

To understand what such ambition looks like in practice, consider Facebook’s history. It started as an inward-facing website, closed off from both the web around it and the general public. It was a place to connect with other people, and where content was created primarily by other users: photos, wall posts, messages. This system quickly grew larger and more complex, leading to the creation, in 2006, of the news feed — a single location in which users could find updates from all of their Facebook friends, in roughly reverse-chronological order.

When the news feed was announced, before the emergence of the modern Facebook sharing ecosystem, Facebook’s operating definition of “news” was pointedly friend-centric. “Now, whenever you log in, you’ll get the latest headlines generated by the activity of your friends and social groups,” the announcement about the news feed said. This would soon change.

In the ensuing years, as more people spent more time on Facebook, and following the addition of “Like” and “Share” functions within Facebook, the news feed grew into a personalized portal not just for personal updates but also for the cornucopia of media that existed elsewhere online: links to videos, blog posts, games and more or less anything else published on an external website, including news articles. This potent mixture accelerated Facebook’s change from a place for keeping up with family and friends to a place for keeping up, additionally, with the web in general, as curated by your friends and family. Facebook’s purview continued to widen as its user base grew and then acquired their first smartphones; its app became an essential lens through which hundreds of millions of people interacted with one another, with the rest of the web and, increasingly, with the world at large.

Facebook, in other words, had become an interface for the whole web rather than just one more citizen of it. By sorting and mediating the internet, Facebook inevitably began to change it. In the previous decade, the popularity of Google influenced how websites worked, in noticeable ways: Titles and headlines were written in search-friendly formats; pages or articles would be published not just to cover the news but, more specifically, to address Google searchers’ queries about the news, the canonical example being The Huffington Post’s famous “What Time Does The Super Bowl Start?” Publishers built entire business models around attracting search traffic, and search-engine optimization, S.E.O., became an industry unto itself. Facebook’s influence on the web — and in particular, on news publishers — was similarly profound. Publishers began taking into consideration how their headlines, and stories, might travel within Facebook. Some embraced the site as a primary source of visitors; some pursued this strategy into absurdity and exploitation.


Facebook, for its part, paid close attention to the sorts of external content people were sharing on its platform and to the techniques used by websites to get an edge. It adapted continually. It provided greater video functionality, reducing the need to link to outside videos or embed them from YouTube. As people began posting more news, it created previews for links, with larger images and headlines and longer summaries; eventually, it created Instant Articles, allowing certain publishers (including The Times) to publish stories natively in Facebook. At the same time, it routinely sought to penalize sites it judged to be using the platform in bad faith, taking aim at “clickbait,” an older cousin of “fake news,” with a series of design and algorithm updates. As Facebook’s influence over online media became unavoidably obvious, its broad approach to users and the web became clearer: If the network became a popular venue for a certain sort of content or behavior, the company generally and reasonably tried to make that behavior easier or that content more accessible. This tended to mean, however, bringing it in-house.

To Facebook, the problem with “fake news” is not just the obvious damage to the discourse but also the harm it inflicts upon the platform. People sharing hoax stories were, presumably, happy enough with what they were seeing. But the people who would then encounter those stories in their feeds were subjected to a less positive experience. They were sent outside the platform to a website where they realized they were being deceived, or where they were exposed to ads or something that felt like spam, or where they were persuaded to share something that might later make them look like a rube. These users might rightly associate these experiences not just with their friends on the platform, or with the sites peddling the bogus stories, but also with the platform itself. This created, finally, an obvious issue for a company built on attention, advertising and the promotion of outside brands. From the platform’s perspective, “fake news” is essentially a user-experience problem resulting from a lingering design issue — akin to slow-loading news websites that feature auto-playing videos and obtrusive ads.

Increasingly, legitimacy within Facebook’s ecosystem is conferred according to a participant’s relationship to the platform’s design. A verified user telling a lie, be it a friend from high school or the president-elect, isn’t breaking the rules; he is, as his checkmark suggests, who he represents himself to be. A post making false claims about a product is Facebook’s problem only if that post is labeled an ad. A user video promoting a conspiracy theory becomes a problem only when it leads to the violation of community guidelines against, for example, user harassment. Facebook contains a lot more than just news, including a great deal of content that is newslike, partisan, widely shared and often misleading. Content that has been, and will be, immune from current “fake news” critiques and crackdowns, because it never had the opportunity to declare itself news in the first place. To publish lies as “news” is to break a promise; to publish lies as “content” is not.

That the “fake news” problem and its proposed solutions have been defined by Facebook as link issues — as a web issue — aligns nicely with a longer-term future in which Facebook’s interface with the web is diminished. Indeed, it heralds the coming moment when posts from outside are suspect by default: out of place, inefficient, little better than spam.


Source : http://www.nytimes.com/2016/12/22/magazine/facebooks-problem-isnt-fake-news-its-the-rest-of-the-internet.html?_r=1

Categorized in News & Politics

A new and somewhat bizarre lawsuit filed against Google accuses the search giant of running an “internal spying program” and forcing employees to adhere to “illegal confidentiality agreements, policies, guidelines and practices.”

The lawsuit was filed earlier this week by an anonymous product manager. The suit claims that Google’s employment agreements expressly prohibit Google personnel from reporting illegal conduct they may have witnessed or even bringing to light potentially dangerous product defects. The complaint alleges that Google discourages these whistleblowing activities because such statements might ultimately resurface during legal proceedings.

The complaint also details that Google’s employment agreement precludes employees from disclosing their base pay to potential employers and even from discussing what their working experience at Google was like.

“The policies even prohibit Googlers from speaking to their spouse or friends about whether they think their boss could do a better job,” the complaint adds.

Also interesting is the allegation that Google “prohibits employees from writing creative fiction”, without prior approval, if the main character works at a tech company in Silicon Valley.


The lawsuit takes the position that Google’s sweeping confidentiality agreements are unnecessarily broad and ultimately violate California labor laws.

The complaint reads in part:

The unnecessary and inappropriate breadth of the policies are intended to control Google’s former and current employees, limit competition, infringe on constitutional rights, and prevent the disclosure and reporting of misconduct. The policies are wrong and illegal.

In regards to the allegations that Google wants employees to keep illegal activity and potentially dangerous products on the down low, the complaint reads:

Google restricts what Googlers say internally in order to conceal potentially illegal conduct. It instructs employees in its training programs to do the following: “Don’t send an email that says, ‘I think we broke the law’ or ‘I think we violated this contract.’” The training program also advises employees that they should not be candid when speaking with Google’s attorneys about dangerous products or violations of the law. The program advises Googlers that some jurisdictions do not recognize the attorney-client privilege and “Inside the U.S., government agencies often pressure companies to waive the privilege.”

As a point of interest, the plaintiff in this case has been a Google employee for just over two years and, per the complaint, was recently accused, falsely, of leaking proprietary information to the press.

Google has since issued a statement to The Verge relaying that it “will defend this suit vigorously because it’s baseless.”

The full suit can be read below.

Author:  Yoni Heisler

Source:  https://www.yahoo.com/tech/lawsuit-claims-google-employees-forced-ignore-serious-product-040350012.html

Categorized in Internet Privacy

This holiday season, when we Google for the most trending gifts, compare different items on Amazon or take a break to watch a holiday movie on Netflix, we are making use of what might be called “the three R’s” of the Internet Age: rating, ranking and recommending.

Much like the traditional “three R’s” of education – “reading, ’riting and ’rithmetic” – no modern education is complete without understanding how websites’ algorithms combine, process and synthesize information before presenting it to us.

As we explore in our new book, “The Power of Networks: Six Principles that Connect Our Lives,” the three tasks of rating, ranking and recommending are interdependent, though it may not be initially obvious. Before we can rank a set of items, we need some measure by which they can be ordered. This is really a rating of each item’s quality according to some criterion.


With ranked lists in hand, we may turn around and make recommendations about specific items to people who may be interested in purchasing them. This interrelationship highlights the importance of how the quality and attractiveness of an item is quantified into a rating in the first place.


What consumers and internet users often call “rating,” tech companies may call “scoring.” This is key to, for example, how Google’s search engine returns high-quality links at the top of its search results, with the most relevant information usually contained in the first page of responses. When a person enters a search query, Google assigns two main scores to each page in its database of trillions, and uses these to generate the order for its results.

The first of these scores is a “relevance score,” a combination of dozens of factors that measure how closely related the page and its content are to the query. For example, it takes into account how prominently placed search keywords are on the result page. The second is an “importance score,” which captures the way the network of webpages are connected to one another via hyperlinks to quantify how important each page is.

The combination of these two scores, along with other information, gives a rating for each page, quantifying how useful it might be to the end user. Higher ratings will be placed toward the top of the search results. These are the pages Google is implicitly recommending that the user visit.
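The two-score idea described above can be sketched in miniature: a link-based importance score combined with a keyword-relevance score. This is a toy illustration, not Google's actual algorithm; the link graph, the damping factor, and the crude relevance measure are all assumptions made for demonstration.

```python
# Toy sketch of two-score ranking: a PageRank-style importance score
# combined with a crude keyword-relevance score. The link graph and
# scoring choices are illustrative assumptions, not Google's algorithm.

def importance_scores(links, damping=0.85, iters=50):
    """Importance via power iteration: a page matters if linked-to pages matter."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        rank = {
            p: (1 - damping) / n
               + damping * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            for p in pages
        }
    return rank

def relevance(text, query):
    """Crude relevance: fraction of query terms present in the page text."""
    words = set(text.lower().split())
    terms = query.lower().split()
    return sum(t in words for t in terms) / len(terms)

def search(pages, links, query):
    """Order pages by relevance times importance; drop pages with zero relevance."""
    imp = importance_scores(links)
    scored = [(relevance(text, query) * imp[p], p) for p, text in pages.items()]
    return [p for score, p in sorted(scored, reverse=True) if score > 0]

links = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
pages = {"a": "holiday gift guide", "b": "gift ideas for the holiday", "c": "sports scores"}
search(pages, links, "holiday gift")  # ["a", "b"]: both relevant, "a" better linked
```

Pages "a" and "b" are equally relevant to the query, so the link-based importance score breaks the tie; page "c" is filtered out entirely because it matches no query terms.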


The three Rs also pervade online retail. Amazon and other e-commerce sites allow customers to enter reviews for products they have purchased. The star ratings contained in these reviews are usually aggregated into a single number representing customers’ overall opinion. The principle behind this is called “the wisdom of crowds,” the assumption that combining many independent opinions will be more reflective of reality than any single individual’s evaluation.

Key to the wisdom of crowds is that the reviews accurately reflect customers’ experiences, and are not biased or influenced by, say, the manufacturer adding a series of positive assessments to its own items. Amazon has mechanisms in place to screen out these sorts of reviews – for example, by requiring a purchase to have been made from a given account before it can submit a review. Amazon then averages the star ratings for the reviews that remain.
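The screen-then-average step might look like this in miniature. The review format and the `verified` flag are illustrative assumptions, not Amazon's actual data model.

```python
# Minimal sketch of screen-then-average: drop reviews with no matching
# purchase on record, then average the star ratings that remain.
# The dict layout and "verified" flag are illustrative assumptions.

def aggregate_rating(reviews):
    """Mean star rating over verified-purchase reviews only; None if none qualify."""
    stars = [r["stars"] for r in reviews if r["verified"]]
    return sum(stars) / len(stars) if stars else None

reviews = [
    {"stars": 5, "verified": True},
    {"stars": 4, "verified": True},
    {"stars": 1, "verified": False},  # screened out: no purchase on record
]
aggregate_rating(reviews)  # 4.5
```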


Averaging ratings is fairly straightforward. But it’s more complicated to figure out how to effectively rank products based on those ratings. For example, is an item that has 4.0 stars based on 200 reviews better than one that has 4.5 stars but only from 20 reviews? Both the average rating and sample size need to be accounted for in the ranking score.
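One common way to fold sample size into a ranking score is a Bayesian, or "shrunken," average, which pulls an item's mean toward a prior until enough reviews accumulate. The prior mean and weight below are arbitrary tuning assumptions, not values any retailer is known to use.

```python
# Bayesian (shrunken) average: behaves as if `prior_weight` phantom
# reviews at `prior_mean` stars were mixed into every item's reviews.
# The defaults are assumed tuning knobs for illustration only.

def bayesian_average(avg, n, prior_mean=3.0, prior_weight=25):
    """Average of n real reviews at `avg` stars plus the phantom prior reviews."""
    return (prior_weight * prior_mean + n * avg) / (prior_weight + n)

bayesian_average(4.0, 200)  # ≈ 3.89: 200 reviews, barely shrunk
bayesian_average(4.5, 20)   # ≈ 3.67: 20 reviews, pulled hard toward the prior
```

With these assumed settings, the 4.0-star item with 200 reviews outranks the 4.5-star item with 20 reviews, because the larger sample earns more trust.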

There are even more factors that may be taken into consideration, such as reviewer reputation (ratings based on reviewers with higher reputations may be trusted more) and rating disparity (products with widely varying ratings may be demoted in the ordering). Amazon may also present products to different users in varying orders based on their browsing history and records of previous purchases on the site.


The prime example of recommendation systems is Netflix’s method for determining which movies a user will enjoy. Algorithms predict how each specific user would rate different movies she has not yet seen by looking at the past history of her own ratings and comparing them with those of similar users. The movies with the highest predictions are those that will then make the final cut for a particular user.

The quality of these recommendations depends heavily on the algorithm’s accuracy and its use of machine learning, data mining and the data itself. The more ratings we start with for each user and each movie, the better we can expect the predictions to be.

A simple rating predictor might assign one parameter to each user that captures how lenient or harsh a critic she tends to be. Another parameter might be assigned to each movie, capturing how well-received the movie is relative to others. More sophisticated models will identify similarities among users and movies – so if people who like the kinds of movies you like have given a high rating to a movie you haven’t seen, the system might suggest you’ll like it too.
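A bias-only predictor of the kind just described can be sketched as follows: a global mean, one harshness parameter per user, one reception parameter per movie, all fit from a toy ratings table. This is a simplified baseline for illustration, not Netflix's production system.

```python
# Bias-only baseline: predicted rating = global mean + user bias + movie bias.
# A toy sketch of the model described above, not Netflix's actual system.

def fit_biases(ratings):
    """ratings: list of (user, movie, stars) tuples."""
    mu = sum(r for _, _, r in ratings) / len(ratings)
    # movie bias: how far each movie's ratings sit above or below the global mean
    by_movie = {}
    for _, m, r in ratings:
        by_movie.setdefault(m, []).append(r - mu)
    movie_bias = {m: sum(v) / len(v) for m, v in by_movie.items()}
    # user bias: how harsh or lenient a user is once movie quality is removed
    by_user = {}
    for u, m, r in ratings:
        by_user.setdefault(u, []).append(r - mu - movie_bias[m])
    user_bias = {u: sum(v) / len(v) for u, v in by_user.items()}
    return mu, user_bias, movie_bias

def predict(user, movie, mu, user_bias, movie_bias):
    """Predicted stars; unknown users or movies fall back to zero bias."""
    return mu + user_bias.get(user, 0.0) + movie_bias.get(movie, 0.0)

ratings = [("ann", "up", 5), ("ann", "jaws", 3),
           ("bob", "up", 4), ("bob", "jaws", 2)]
mu, ub, mb = fit_biases(ratings)
predict("eve", "up", mu, ub, mb)  # 4.5: unseen user, well-received movie
```

Even for a brand-new user, the movie's reception parameter alone gives a sensible starting prediction, which is exactly the role such baselines played in the Netflix Prize blends.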

This can involve hidden dimensions that underlie user preferences and movie characteristics. It can also involve measuring how the ratings for any given movie have changed over time. If a previously unknown film becomes a cult classic, it might start appearing more in people’s recommendation lists. A key aspect of dealing with several models is combining and tuning them effectively: The algorithm that won the Netflix Prize competition for predicting movie ratings in 2009, for example, was a blend of hundreds of individual algorithms.

This combination of rating, ranking and recommendation algorithms has transformed our daily online activities, far beyond shopping, searching and entertainment. Their interconnection brings us clearer – and sometimes unexpected – insights into what we want and how we get it.

Source : http://theconversation.com/rating-ranking-and-recommending-three-rs-for-the-internet-age-70512

Categorized in Future Trends

The Internet Society has released the findings of its 2016 Global Internet Report, in which 40% of users admit they would not do business with a company that had suffered a data breach.

Highlighting the extent of the data breach problem, the report makes key recommendations for building user trust in the online environment, stating that more needs to be done to protect online personal information.

With a reported 1,673 breaches and 707 million exposed records occurring in 2015, the Internet Society is urging organisations to change their stance and follow five recommendations to reduce the number and impact of data breaches globally:


1. Put users - who are the ultimate victims of data breaches - at the centre of solutions. When assessing the costs of data breaches, include the costs to both users and organisations. 

2. Increase transparency about the risk, incidence and impact of data breaches globally. Sharing information responsibly helps organisations improve data security, helps policymakers improve policies and regulators pursue attackers, and helps the data security industry create better solutions.

3. Data security must be a priority – organisations should be held to best practice standards when it comes to data security.

4. Increase accountability – organisations should be held accountable for their breaches. Rules regarding liability and remediation must be established up front.

5. Increase incentives to invest in security – create a market for trusted, independent assessment of data security measures so that organisations can credibly signal their level of data security. Security signals enable organisations to indicate that they are less vulnerable than competitors.

The report also draws parallels with threats posed by the Internet of Things (IoT). Forecast to grow to tens of billions of devices by 2020, interconnected components and sensors that can track locations, health and other daily habits are opening gateways into users’ personal lives, leaving data exposed.

“We are at a turning point in the level of trust users are placing in the Internet,” said Internet Society’s Olaf Kolkman, Chief Internet Technology Officer. “With more of the devices in our pockets now having Internet connectivity, the opportunities for us to lose personal data is extremely high.

“Direct attacks on websites such as Ashley Madison and the recent IoT-based attack on Internet performance management company, Dyn, that rendered some of the world’s most famous websites including Reddit, Twitter and The New York Times temporarily inaccessible, are incredibly damaging both in terms of profits and reputation, but also to the levels of trust users have in the Internet.”

Other report highlights include:

  • The average cost of a data breach is now $4 million, up 29 percent since 2013
  • The average cost per lost record is $158, up 15 percent since 2013
  • Within business, the retail sector represents 13 percent of all breaches and six percent of all records stolen, while financial institutions represent 15 percent of breaches, but just 0.1 percent of records stolen, indicating these businesses might have greater resilience built in to protect their users

Source  :  https://www.finextra.com/pressarticle/67186/internet-trust-at-all-time-low-not-enough-data-protection

Categorized in Internet Ethics

The Internet Archive has been making waves lately, and not entirely by choice. The non-profit has been growing, and recently announced an intriguing new feature for its famous Wayback Machine that will make it far more useful, but it’s also been at the center of a number of controversies over censorship and data freedom. This week it announced a provocative plan to spend millions mirroring its archives on Canadian soil, apparently to avoid future attacks from the Trump Administration. The two are at least somewhat related; as Archive.org makes its services larger and more user-friendly, those services become more problematic in the eyes of the authorities.


The Internet Archive basically has two components: the website archive, called the Wayback Machine, and everything else, including databases of digitized books, music, movies, and more. The Wayback Machine has become a major pillar of the internet, not nearly as highly trafficked as Wikipedia but similar in that quite a few people would be screwed without it. In principle, its goal is really just part of the overall thesis of the internet: The Internet Archive is meant to ensure that knowledge and the public record stay intact over time. Through the Wayback Machine, it archives “snapshots” of as many websites as it can, as often as it can, and makes the full history available, for free. For most of its history, the biggest controversy it saw was whether it was appropriate to ask users for cash.



The issue gained a much higher profile earlier this week, when the organization revealed that it had received a so-called National Security Letter from the FBI. The group also posted a redacted version of the document online, one of the few times such a letter has ever been published. The letter was even shown to contain false information about how to challenge the automatic gag order that comes with an NSL, and the FBI has admitted that the same mistake was sent to some portion of NSL recipients. It’s not yet known quite how many, but more than 13,000 such letters were sent out last year alone. Archive.org is now one of the most successful challengers to their legal authority.

Recently, though, Archive.org has been getting a very different sort of profile.


The ability to use its servers to anonymously host files has led ISIS and other extremist organizations to habitually post their videos and literature there. Much of it is aimed at recruiting impressionable teens around the world, and much of the rest depicts real crimes of gratuitous violence — but there it is, free and public, just a (slightly outside-the-box) search away on Archive.org. The Internet Archive’s ideological beliefs about censorship, along with its genuine inability to police the vastness of its own databases, has transformed its once squeaky-clean image. In some circles, Archive.org is a multimedia PasteBin, but with a lot more self-righteousness.


And it’s those brushes with the spooks and criminals alike that are driving Archive.org’s concern. Trump, who will be the oldest President ever at first swearing-in, has said that he would “certainly be open to closing areas [of the internet] where we are at war with somebody… I’m not talking about closing the internet. I’m talking about closing parts of the internet where ISIS is.” Evidently, the Internet Archive is unsure of whether it would be categorized as “where ISIS is,” since it explicitly referenced the “new administration promising radical change” as the reason for its new, Northern mirror.




As mentioned, though, the Internet Archive is famously cash-strapped, so the whole initiative is to be paid for with donations. It will cost “millions” according to the organization’s own estimates, but that’s actually pretty reasonable considering that the data itself comes in at a whopping 15 petabytes, or 15,000 terabytes. At such volume, the base storage costs alone should run to at least a few million dollars. The project’s banner ad states that the entire thing could be funded if everybody reading gave just $50 — far beyond what Wikipedia generally suggests. The organization already has a small number of employees in Toronto, though, so presumably creating a copy there would be cheaper than in other countries.

Canada is of course a terrible choice for the archive’s backup, especially since the stated goal of the move is to keep a Library of Alexandria-style disaster from ending its existence for good. If there were a malicious attempt to burn down Archive.org, what protection would Canada’s draconian free speech laws provide, compared with those in the United States? Most attempts to take down the American side would presumably have the force of American law — does the Internet Archive think the Canadian government is going to resist a legal data seizure or server take-down request from the United States? Iceland would have been a much more logical choice — a country that, at the very least, doesn’t openly share virtually all intelligence with the agencies the Internet Archive is trying to escape.
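A rough back-of-envelope check supports the storage figure above. The cost per terabyte and the replication factor here are loose assumptions for illustration, not Internet Archive numbers.

```python
# Back-of-envelope check on the "millions" estimate. Cost per terabyte
# and replication factor are assumed values, not Internet Archive figures.

ARCHIVE_TB = 15_000      # 15 petabytes, as stated above
COST_PER_TB_USD = 50     # assumed: commodity disks plus servers, racks, power
REPLICATION = 2          # assumed: at least one redundant copy

estimate = ARCHIVE_TB * COST_PER_TB_USD * REPLICATION
print(f"${estimate:,}")  # prints "$1,500,000": millions before bandwidth and staff
```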


The Internet Archive might seem like an odd sort of organization to go head to head with Big Government, but society and law seem to be slowly veering into a collision course with everything it represents. The archive’s views haven’t changed; if a conflict is coming, it’s because law and society are changing. Any public service with a true commitment to data freedom is going to become home to the people who need such freedom the most, both the journalist/activist types and the criminal/terrorist types. And that means that they will naturally attract the attention of anyone interested in countering one or both of those types of user.


By announcing even the intention to mirror their content in a different legal environment, this little archiving group has signaled that it will not back down, if challenged. Luckily for the archive, it seems to have the support of larger, more experienced groups like the Electronic Frontier Foundation. This may all be rank overreaction on the part of the Internet Archive, but if not, there is now enough attention focused on it to ensure that any legal challenge becomes a major battle. For groups like the ACLU, which basically exist to fight and win battles of legal precedent, that might be the most desirable outcome of all.


Author : Graham Templeton

Source :  https://www.extremetech.com/internet/240720-internet-archive-just-got-bit-useful-lot-political

Categorized in Online Research
