
Digital marketing professional Jayakumar K says users become separated from information that disagrees with their viewpoints

THIRUVANANTHAPURAM, MAY 29:  

The resort by Google and other social-media networks to ‘filter bubbles’ that sort users into like-minded groups will only create a community of ‘frogs in the internet well,’ says an expert.

Filter bubbles created by personalised search technologies restrict a user’s perspective, says Jayakumar K, a digital marketing professional and CEO of Cearsleg Technologies.

 

Data analytics

A Google analytics expert, Jayakumar also holds the honorary position of Deputy Commander with the Kerala Police CyberDome here. Google and major social-media companies employ complex data analytics that restrict access to the actual or full facts about a subject, he says.

Personalised search results generated by these algorithms may give information that is not adequate, correct or complete.

Google updates its ‘personalised search’ algorithms and Facebook its ‘personalised news-stream’ algorithms to isolate users in this manner, Jayakumar said.

Ideological bubbles

Users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles.

The aim is to retain publishers and help drive revenue by forcing them to rely on paid activity for longer periods.

 

But this could in turn create an ‘echo-chamber’ effect as users search for information related to a particular topic and bump into each other.

They become insulated within their own online community and fail to get exposed to different views.

The resulting narrow information base could have an adverse impact on critical discourse in the online medium and, by extension, on freedom of expression, Jayakumar said.
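The narrowing effect Jayakumar describes can be illustrated with a toy personalised ranker. This is a deliberately simplified sketch (real search and news-feed ranking systems are far more complex and undisclosed); the item and topic names are invented for illustration:

```python
from collections import Counter

def personalised_rank(items, click_history):
    """Toy personalised ranker: boost items whose topic the user
    has clicked before. Items on never-clicked topics score zero."""
    topic_counts = Counter(click_history)
    return sorted(items, key=lambda item: topic_counts[item["topic"]],
                  reverse=True)

items = [
    {"title": "Tax cut criticised", "topic": "politics-left"},
    {"title": "Tax cut praised", "topic": "politics-right"},
    {"title": "Local weather", "topic": "weather"},
]

# A user who has only ever clicked right-leaning stories
history = ["politics-right", "politics-right", "weather"]
ranked = personalised_rank(items, history)
# The right-leaning story now outranks the opposing view, and every
# further click on it reinforces the bias on the next query -- the
# feedback loop behind the 'filter bubble'.
```

Each round of clicks feeds back into the ranking, so the opposing viewpoint sinks further with use, which is the self-reinforcing isolation the article describes.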

Continual process

Google and Facebook claim their latest algorithm changes aim to prioritise content from friends over content from brands.

This is part of a continual process of improving the user experience, they aver. But it could be also an attempt to further limit the ‘organic reach’ of publishers, Jayakumar counters.

According to the latest reports, the European Union has taken measures to lessen the impact of the filter bubble in that region. It is sponsoring inquiries into how filter bubbles affect people’s ability to access diverse news.

India will be better advised to exercise caution and limit the impact of filter bubbles on online discourse in the country, Jayakumar said.

Source: This article was published on thehindubusinessline.com by Vinson Kurian


On the heels of Facebook defending its Content Policy after the leak of its content moderation guidelines, a research analyst has said that existing laws on live broadcasts don’t apply to the internet.

“The social media companies have no liability towards online content like murder, rape, terrorism and suicide under intermediary laws around the world. Social media companies’ obligation is restricted to removing the illegal content on being informed of it,” said Shobhit Srivastava, research analyst, Mobile Devices and Ecosystems at market research firm Counterpoint Research.

 

Earlier this week, several Facebook documents, including internal training manuals, spreadsheets and flowcharts, were leaked, showing how the social media giant moderates issues such as hate speech, terrorism, pornography and self-harm on its platform.

Citing the leaks, the Guardian said that Facebook’s moderators are overwhelmed with work and often have “just 10 seconds” to make a decision on content posted on the platform.

“The recent incidents where harmful videos were posted online raise serious questions about how social media companies moderate online content. Facebook has a very large user base (nearly two billion monthly active users) and is expanding, and therefore moderating content with the help of content moderators is a difficult task,” Srivastava told IANS.

“Facebook is also using software to intercept content before it is posted online, but it is still in its early stages. This means that Facebook has to put a lot more effort into making the content safe,” he added.

According to Monika Bickert, head of global policy management, Facebook, more than a billion people use Facebook on an average day and they share posts in dozens of languages.

A very small percentage of those will be reported to the company for investigation and the range of issues is broad — from bullying and hate speech to terrorism — and complex.

 

“Designing policies that both keep people safe and enable them to share freely means understanding emerging social issues and the way they manifest themselves online, and being able to respond quickly to millions of reports a week from people all over the world,” she said.

Bickert said it is difficult for the company reviewers to understand the context.

“It’s hard to judge the intent behind one post or the risk implied in another,” she said.

The company does not always get things right, Bickert explained, but it believes that a middle ground between freedom and safety is ultimately the best answer.

She said that Facebook has to be “as objective as possible” in order to have consistent guidelines across every area it serves.

Srivastava noted that “from social and business point of view social media companies like Facebook, etc have to dedicate more resources for content moderating purposes which are inadequate now, otherwise we will see various governments restricting access to these players which will spell bad news for both users and these companies.”

Last month, Facebook announced that it was hiring an additional 3,000 reviewers to ensure the right support for users.

Source: This article was published on factordaily.com by IANS


Google has, perhaps more than any other company, realized that information is power. Information about the Internet, information about innumerable trends, and information about its users, YOU.

So how much does Google know about you and your online habits? It’s only when you sit down and actually start listing all of the various Google services you use on a regular basis that you begin to realize how much information you’re handing over to Google.

This has, as these things tend to do, given rise to various privacy concerns. It probably didn’t help when Google’s CEO, Eric Schmidt, recently went on the record saying: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”

 

Now let’s have a look at how Google is gathering information from you, and about you.

Google’s information-gathering channels

Google’s stated mission is “to organize the world’s information and make it universally accessible and useful” and it is making good on this promise. However, Google is gathering even more information than most of us realize.

  • Searches (web, images, news, blogs, etc.) – Google is, as you all know, the most popular search engine in the world with a market share of almost 70% (for example, 66% of searches in the US are made on Google). Google tracks all searches, and now with search becoming more and more personalized, this information is bound to grow increasingly detailed and user specific.
  • Clicks on search results – Not only does Google get information on what we search for, it also gets to find out which search results we click on.
  • Web crawling – Googlebot, Google’s web crawler, is a busy bee, continuously reading and indexing billions of web pages.
  • Website analytics – Google Analytics is by far the most popular website analytics package out there. Due to being free and still supporting a number of advanced features, it’s used by a large percentage of the world’s websites.
  • Ad serving – Adwords and Adsense are cornerstones of Google’s financial success, but they also provide Google with a lot of valuable data. Which ads are people clicking on, which keywords are advertisers bidding on, and which ones are worth the most? All of this is useful information.
  • Email – Gmail is one of the three largest email services in the world, together with competing options from Microsoft (Hotmail) and Yahoo. Email content, both sent and received, is parsed and analyzed. Even from a security standpoint this is a great service for Google. Google’s email security service, Postini, gets a huge amount of data about spam, malware and email security trends from the huge mass of Gmail users.
  • Twitter – “All your tweets are belong to us,” to paraphrase an early Internet meme. Google has direct access to all tweets that pass through Twitter after a deal made late last year.
  • Google Apps (Docs, Spreadsheets, Calendar, etc.) – Google’s office suite has many users and is of course a valuable data source to Google.
  • Google Public Profiles – Google encourages you to put a profile about yourself publicly on the Web, including where you can be found on social media sites and your homepage, etc.
  • Orkut – Google’s social network isn’t a success everywhere, but it’s huge in some parts of the world (mainly Brazil and India).
  • Google Public DNS – Google’s newly launched DNS service doesn’t just help people get fast DNS lookups, it helps Google too, because it will get a ton of statistics from this, for example what websites people access.
  • The Google Chrome browser – What is your web browsing behavior? What sites do you visit?
  • Google Finance – Aside from the finance data itself, what users search for and use on Google Finance is sure to be valuable data to Google.
  • YouTube – The world’s largest and most popular video site by far is, as you know, owned by Google. It gives Google a huge amount of information about its users’ viewing habits.
  • Google Translate – Helps Google perfect its natural language parsing and translation.
  • Google Books – Not huge for now, but has the potential to help Google figure out what people are reading and want to read.
  • Google Reader – By far the most popular feed reader in the world. What RSS feeds do you subscribe to? What blog posts do you read? Google will know.
  • Feedburner – Most blogs use Feedburner to publicize their RSS feeds, and every Feedburner link is tracked by Google.
  • Google Maps and Google Earth – What parts of the world are you interested in?
  • Your contact network – Your contacts in Google Talk, Gmail, etc, make up an intricate network of users. And if those also use Google, the network can be mapped even further. We don’t know if Google does this, but the data is there for the taking.
  • Coming soon – Chrome OS, Google Wave, more up-and-coming products from Google.

And the list could go on since there are even more Google products out there, but we think that by now you’ve gotten the gist of it… 

 

Much of this data is anonymized, but not always right away. Logs are kept for nine months, and cookies (for services that use them) aren’t anonymized until after 18 months. Even after that, the sheer amount of generic user data that Google has on its hands is a huge competitive advantage against most other companies, a veritable gold mine.

Google’s unstoppable data collection machine

There are many different aspects of Google’s data collection. The IP addresses from which requests are made are logged, cookies are used for settings and tracking purposes, and if you are logged into your Google account, what you do on Google-owned sites can often be coupled to you personally, not just to your computer.
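To make those mechanics concrete, here is a hypothetical record of what a single search request could let a provider log. This is purely illustrative — it is not Google’s actual log schema, and every field name and value here is invented:

```python
# Hypothetical per-request log record (illustrative only, not a real schema).
# Each mechanism named in the text maps to a field:
search_log_entry = {
    "timestamp": "2010-01-15T09:32:11Z",
    "ip_address": "203.0.113.42",      # logged for every request
    "cookie_id": "a1b2c3d4",           # ties requests from the same browser together
    "account_id": "user@example.com",  # present only when signed in -> personal link
    "query": "cheap flights to rome",
    "clicked_result": "example-travel.com",
}

# With a cookie, separate requests are linked to one browser; with an
# account ID, they are linked to one person across devices.
is_personally_linked = search_log_entry.get("account_id") is not None
```

Joining millions of such records by cookie or account ID is what turns individual requests into the behavioural profile the article goes on to describe.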

In short, if you use Google services, Google will know what you’re searching for, what websites you visit, what news and blog posts you read, and more. As Google adds more services and its presence gets increasingly widespread, the so-called Googlization (a term coined by John Battelle and Alex Salkever in 2003) of almost everything continues.

The information you give to any single one of Google’s services wouldn’t be much to huff about. The really interesting dilemma comes when you use multiple Google services, and these days, who doesn’t?

Try using the Internet for a week without touching a single one of Google’s services. This means no YouTube, no Gmail, no Google Docs, no clicking on Feedburner links, no Google search, and so on. Strictly, you’d even have to skip services that Google partner with, so, sorry, no Twitter either.

This increasing Googlization is probably why some people won’t want to use Google’s Chrome OS, which will be strongly coupled with multiple Google services and most likely give Google an unprecedented amount of data about your habits.

Why does Google do this?

As we stated in the very first sentence of this article, information is power.

With all this information at its fingertips, Google can group data together in very useful ways. Not just per user or visitor, but Google can also examine trends and behaviors for entire cities or countries.

Google can use the information it collects for a wide array of useful things. In all of the various fields where Google is active, it can make market decisions, research, refine its products, anything, with the help of this collected data.

 

For example, if you can discover certain market trends early, you can react effectively to the market. You can discover what people are looking for, what people want, and make decisions based on those discoveries. This is of course extremely useful to a large company like Google.

And let’s not forget that Google earns much of its money serving ads. The more Google knows about you, the more effectively it will be able to serve ads to you, which has a direct effect on Google’s bottom line.

It’s not just Google

It should be mentioned that Google isn’t alone in doing this kind of data collection. Rest assured that Microsoft is doing similar things with Bing and Hotmail, to name just one example.

The problem (if you want to call it a problem) with Google is that, like an octopus, its arms are starting to reach almost everywhere. Google has become so mixed up in so many aspects of our online lives that it is getting an unprecedented amount of information about our actions, behavior and affiliations online.

Google, an octopus?

Accessing Google’s data vault

To its credit, Google is making some of its enormous cache of data available to you as well via various services.

 

If Google can make that much data publicly available, just imagine the amount of data and the level of detail Google can get access to internally. And ironically, these services give Google even more data, such as what trends we are interested in, what sites we are trying to find information about, and so on.

An interesting observation when using these tools is that in many cases information can be found for everything except for Google’s own products. For example, Ad Planner and Trends for Websites don’t show site statistics for Google sites, but you can find information about any other sites.

No free lunch

Did you ever wonder why almost all of Google’s services are free of charge? Well, now you know. That old saying, “there ain’t no such thing as a free lunch,” still holds true. You may not be paying Google with dollars (aside from clicking on those Google ads), but you are paying with information. That doesn’t have to be a bad thing, but you should be aware of it.

Source: This article was published on royal.pingdom.com


Hello. It’s my first day back covering technology for The Atlantic. It also marks roughly 10 years that I’ve been covering science and technology, so I’ve been thinking back to my early days at Wired in the pre-crash days of 2007.

The internet was then, as it is now, something we gave a kind of agency to, a half-recognition that its movements and effects were beyond the control of any individual person or company. In 2007, the web people were triumphant. Sure, the dot-com boom had busted, but empires were being built out of the remnant swivel chairs and fiber optic cables and unemployed developers. Web 2.0 was not just a temporal description, but an ethos. The web would be open. A myriad of services would be built, communicating through APIs, to provide the overall internet experience.

 

The web itself, en toto, was the platform, as Tim O’Reilly, the intellectual center of the movement, put it in 2005. Individual companies building individual applications could not hope to beat the web platform, or so the thinking went. “Any Web 2.0 vendor that seeks to lock in its application gains by controlling the platform will, by definition, no longer be playing to the strengths of the platform,” O’Reilly wrote.

O’Reilly had just watched Microsoft vanquish its rivals in office productivity software (Word, Excel, etc.) as well as Netscape: “But a single monolithic approach, controlled by a single vendor, is no longer a solution, it's a problem.”

And for a while, this was true. There were a variety of internet services running on an open web, connected to each other through APIs. For example, Twitter ran as a service for which many companies created clients and extensions within the company’s ecosystem. Twitter delivered tweets you could read not just on twitter.com but on Tweetdeck or Twitterific or Echofon or Tweetbot, sites made by independent companies which could build new things into their interfaces. There were URL shortening start-ups (remember those?) like TinyURL and bit.ly, and TwitPic for pictures. And then there were the companies drinking at the firehose of Twitter’s data, which could provide the raw material for a new website (FavStar) or service (DataSift). Twitter, in the experience of it, was a cloud of start-ups.

But then in June of 2007, the iPhone came out. Thirteen months later, Apple’s App Store debuted. Suddenly, the most expedient and enjoyable way to do something was often tapping an individual icon on a screen. As smartphones took off, the amount of time that people spent on the truly open web began to dwindle.

Almost no one had a smartphone in early 2007. Now there are 2.5 billion smartphones in the world—2.5 billion! That’s more than double the number of PCs that have ever been at use in the world.

As that world-historical explosion began, a platform war came with it. The Open Web lost out quickly and decisively. By 2013, Americans spent about as much of their time on their phones looking at Facebook as they did the whole rest of the open web.

O’Reilly’s lengthy description of the principles of Web 2.0 has become more fascinating through time. It seems to be describing a slightly parallel universe. “Hyperlinking is the foundation of the web,” O’Reilly wrote. “As users add new content, and new sites, it is bound into the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.”

Nowadays, (hyper)linking is an afterthought because most of the action occurs within platforms like Facebook, Twitter, Instagram, Snapchat, and messaging apps, which have all carved space out of the open web. And the idea of “harnessing collective intelligence” simply felt much more interesting and productive then than it does now. The great cathedrals of that time, nearly impossible projects like Wikipedia that worked and worked well, have all stagnated. And the portrait of humanity that most people see filtering through the mechanics of Facebook or Twitter does not exactly inspire confidence in our social co-productions.

 

Outside of the open-source server hardware and software worlds, we see centralization. And with that centralization, five giant platforms have emerged as the five most valuable companies in the world: Apple, Google, Microsoft, Amazon, Facebook.

Market Capitalization for Apple (AAPL), Amazon (AMZN), Facebook (FB), Google (GOOGL), and Microsoft (MSFT), May 14, 2007 to present

In mid-May of 2007, these five companies were worth $577 billion. Now, they represent $2.9 trillion worth of market value! Not so far off from the combined market cap ($2.85 trillion) of the top 10 largest companies in the second quarter of 2007: Exxon Mobil, GE, Microsoft, Royal Dutch Shell, AT&T, Citigroup, Gazprom, BP, Toyota, and Bank of America.
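A quick back-of-the-envelope check on the growth those figures imply (this calculation is mine, not the article’s):

```python
# Figures quoted above, in billions of US dollars
value_2007 = 577      # combined value of the five platforms, mid-May 2007
value_now = 2900      # $2.9 trillion today

multiple = value_now / value_2007
# The five platforms' combined market value grew roughly fivefold
# in the decade since 2007.
```

In other words, the quintet appreciated by a factor of about five over ten years, which is roughly 17–18% compounded annually.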

And it’s not because the tech companies are being assigned astronomical price-to-earnings ratios as in the dot-com era. Apple, for example, has a PE ratio (17.89) roughly equal to Walmart’s (17.34). Microsoft’s (30.06) is in the same class as Exxon’s (34.36).

Massive size has become part and parcel of how these companies do business. “Products don’t really get that interesting to turn into businesses until they have about 1 billion people using them,” Mark Zuckerberg said of WhatsApp in 2014. Ten years ago, there were hardly any companies that could count a billion customers. Coke? Pepsi? The entire internet had 1.2 billion users. The biggest tech platform in 2007 was Microsoft Windows and it had not crossed a billion users.

Now, there are a baker’s dozen individual products with a billion users. Microsoft has Windows and Office. Google has Search, Gmail, Maps, YouTube, Android, Chrome, and Play. Facebook has the core product, Groups, Messenger, and WhatsApp.

All this to say: These companies are now dominant. And they are dominant in a way that almost no other company has been in another industry. They are the mutant giant creatures created by software eating the world.

 

It is worth reflecting on the strange fact that the five most valuable companies in the world are headquartered on the Pacific coast between Cupertino and Seattle. Has there ever been a more powerful region in the global economy? Living in the Bay, having spent my teenage years in Washington state, I’ve grown used to this state of affairs, but how strange this must seem from Rome or Accra or Manila.

Even for a local, there are things about the current domination of the technology industry that are startling. Take the San Francisco skyline. In 2007, the visual core of the city was north of Market Street, in the chunky buildings of the downtown financial district. The TransAmerica Pyramid was a regional icon and had been the tallest building in the city since construction was completed in 1972. Finance companies were housed there. Traditional industries and power still reigned. Until quite recently, San Francisco had primarily been a cultural reservoir for the technology industries in Silicon Valley to the south.

But then came the growth of Twitter and Uber and Salesforce. To compete for talent with the big guys in Silicon Valley, the upstarts could offer a job in the city in which you wanted to live. Maybe Salesforce wasn’t as sexy as Google, but could Google offer a bike commute from the Mission?

Fast-forward 10 years and the skyline has been transformed. From Market Street to the landing of the Bay Bridge, the swath known as South of Market or, after the fashion of the day, SOMA, has been reshaped completely by steel and glass towers. At times over the last decade, a dozen cranes perched over the city, nearly all of them in SOMA. Further south, in Mission Bay, San Francisco’s mini-Rust Belt of former industrial facilities and cargo piers became just one big gleam of glass and steel on landfill. The Warriors will break ground on a new, tech-industry-accessible basketball manse nearby. All in an area once called Butchertown, where Mission Creek ran red to the Bay with the blood of animals.

So, that’s what I’ll be covering back here at The Atlantic: technology and the ideas that animate its creation, starting with broad-spectrum reporting on the most powerful companies the world has ever known, but encompassing the fringes where the unexpected and novel lurk. These are globe-spanning companies whose impact can be felt at the macroeconomic scale, but they exist within this one tiny slice of the world. The place seeps into the products. The particulars and peccadilloes from a coast become embedded in the tools that half of humanity now finds indispensable.

Source: This article was published on theatlantic.com by Alexis C. Madrigal


China’s already-big WeChat is searching for how to get even bigger. The answer? Search.

Today the company publicly unveiled (link in Chinese) a feature—called “Search,” simply enough—that lets users enter keywords and find relevant information.

While WeChat has had a search feature in the past, this one is more powerful. It’s not quite a search engine in the style of Google or Baidu, the reigning search engine of China. Rather, it’s an alternate vision of search, one that’s uniquely suited to a social media app. And it could very well become huge in China.

Searching Google for “Apple iPhone” will typically yield ads at the very top, followed by recent news articles, YouTube and Wikipedia pages, and addresses of nearby Apple stores. Baidu works in a similar way. Clicking on the offered links takes the user to a page completely outside of Google or Baidu.

Searching in WeChat. (Quartz)

WeChat’s search feature is a bit more social. Searching for “Apple” yields recent news at the very top, followed by mentions of Apple made by one’s friends. An assortment of random articles follows at the bottom. Tapping on any of these links—even the articles—always keeps one within WeChat’s built-in browser, and many of the linked articles are ones published directly to WeChat (much like Facebook’s “Instant Articles”). Tencent, WeChat’s parent company, did not answer questions about how it devises its results rankings.

 

To put it simply, it’s a “walled garden” approach to search. Whereas Google and Baidu’s vision of search entails aggregating everything published on the internet, WeChat’s entails aggregating everything that’s published and shared on WeChat. And when you tap on something, you always stay inside WeChat.

WeChat has plenty of data to draw from to build and perfect a search engine. With 938 million registered monthly users, it not only knows who your friends are (more than Baidu or Google do), it knows what they read and share, and what you read and share. It knows where you and your friends are located, and what you buy. And it’s addictive—50% of WeChat users spend more than 90 minutes per day on the app.

WeChat can also benefit from Tencent owning a stake in Sogou, which is an also-ran search engine in China but has potentially valuable data and technology.

Meanwhile, in recent years, WeChat has become a major publishing platform for traditional news sites, online media, and solo bloggers alike. Publishers will often push articles directly to WeChat through “Public Accounts” (rough analogs to Facebook Pages) that subscribers will read and share, eschewing external websites altogether. This search feature collects all the content published to WeChat (and more), and makes it easier for everyone to discover. And when they cruise through news-oriented search results, they’ll still never leave the confines of WeChat.
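The contrast between open-web search and WeChat’s walled-garden approach can be sketched in a few lines. This is a toy model of the idea, not WeChat’s actual system; the field names and documents are invented:

```python
def walled_garden_search(query, index):
    """Toy 'walled garden' search: only documents published inside the
    platform are candidates, and results never point off-platform.
    An open-web engine would drop the published_on_platform filter."""
    return [doc for doc in index
            if doc["published_on_platform"] and query in doc["text"]]

index = [
    {"text": "apple iphone review", "published_on_platform": True},
    {"text": "apple iphone teardown", "published_on_platform": False},  # external web
]

results = walled_garden_search("apple", index)
# Only the in-platform document is returned; the external page,
# however relevant, never surfaces.
```

The filter is the whole design choice: relevance is computed only over what lives inside the platform, which is why tapping a result in WeChat never takes you outside it.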

 

While the feature remains in its early stages, it will likely become a boon to Tencent. Many Chinese internet users spend most of their online lives in WeChat. This search feature gives them one more reason to do so.

If WeChat perfects its search capabilities, expect a giant of the Chinese internet to suffer. Baidu, China’s Google analog, has reported slowing revenue growth and declining operating profits. It makes most of its money from Chinese advertisers that are now keen to put their ads in more vibrant real estate—like WeChat.

While Tencent’s stock price has soared over the past two years, Baidu’s has wavered. Since Nov. 14, 2014, its last high, Baidu’s share price has tanked 25.5%, according to FactSet. Within the same time period, Tencent’s has jumped 96.9%.

Source: This article was published on qz.com by Josh Horwitz


Neuralink – which is “developing ultra high bandwidth brain-machine interfaces to connect humans and computers” – is probably a bad idea. If you understand the science behind it, and that’s what you wanted to hear, you can stop reading.

But this is an absurdly simple narrative to spin about Neuralink and an unhelpful attitude to have when it comes to understanding the role of technology in the world around us, and what we might do about it. It’s easy to be cynical about everything Silicon Valley does, but sometimes it comes up with something so compelling, fascinating and confounding it cannot be dismissed; or embraced uncritically.

Putting aside the hyperbole and hand-wringing that usually follows announcements like this, Neuralink is a massive idea. It may fundamentally alter how we conceive of what it means to be human and how we communicate and interact with our fellow humans (and non-humans). It might even represent the next step in human evolution.

Neurawhat?

But what exactly is Neuralink? If you have time to read a brilliant 36,400-word explainer by genius Tim Urban, then you can do so here. If you don’t, Davide Valeriani has done an excellent summary right here on The Conversation. However, to borrow a few of Urban’s words, Neuralink is a “wizard hat for your brain”.

 

Essentially, Neuralink is a company purchased by Elon Musk, the visionary-in-chief behind Tesla, Space X and Hyperloop. But it’s the company’s product that really matters. Neuralink is developing a “whole brain interface”, essentially a network of tiny electrodes linked to your brain that the company envisions will allow us to communicate wirelessly with the world. It would enable us to share our thoughts, fears, hopes and anxieties without demeaning ourselves with written or spoken language.

One consequence of this is that it would allow us to be connected at the biological level to the internet. But it’s who would be connecting back with us, how, where, why and when that are the real questions.

Through his Tesla and Space X ventures, Musk has already ruffled the feathers of some formidable players; namely, the auto, oil and gas industries, not to mention the military-industrial complex. These are feathers that mere mortals dare not ruffle; but Musk has demonstrated a brilliance, stubborn persistence and a knack for revenue generation (if not always the profitability) that emboldens resolve.

However, unlike Tesla and Space X, Neuralink operates in a field where there aren’t any other major players – for now, at least. But Musk has now fired the starting gun for competitors and, as Urban observes, “an eventual neuro-revolution would disrupt almost every industry”.

Part of the human story

There are a number of technological hurdles between Neuralink and its ultimate goal. There is reason to think they can surmount these; and reason to think they won’t.

While Neuralink may ostensibly be lumped in with other AI/big data companies in its branding and general desire to bring humanity kicking and screaming into a brave new world of their making, what it’s really doing isn’t altogether new. Instead, it’s how it’s going about it that makes Neuralink special – and a potentially major player in the next chapter of the human story.

Depending on who you ask, the human story generally goes like this. First, we discovered fire and developed oral language. We turned oral language into writing, and eventually we found a way to turn it into mechanised printing. After a few centuries, we happened upon this thing called electricity, which gave rise to telephones, radios, TVs and eventually personal computers, smart phones – and ultimately the Juicero.


Fire: a great leap forward. Shutterstock

Over time, phones lost their cords, computers shrunk in size and we figured out ways to make them exponentially more powerful and portable enough to fit in pockets. Eventually, we created virtual realities, and melded our sensate reality with an augmented one.

 

But if Neuralink were to achieve its goal, it’s hard to predict how this story plays out. The result would be a “whole-brain interface” so complete, frictionless, bio-compatible and powerful that it would feel to users like just another part of their cerebral cortex, limbic and central nervous systems.

A whole-brain interface would give your brain the ability to communicate wirelessly with the cloud, with computers, and with the brains of anyone who has a similar interface in their head. This flow of information between your brain and the outside world would be so easy it would feel the same as your thoughts do right now.

But if that sounds extraordinary, so are the potential problems. First, Neuralink is not like putting an implant in your head designed to manage epileptic seizures, or a pacemaker in your heart. This would be elective surgery on (presumably) healthy people for non-medical purposes. Right there, we’re in a completely different ball park, both legally and ethically.

There seems to be only one person who has done such a thing, and that was a bonkers publicity stunt conducted by a scientist in Central America using himself as a research subject. He’s since suffered life-threatening complications. Not a ringing endorsement, but not exactly a condemnation of the premise either.

Second, because Neuralink is essentially a communications system, there is the small matter of regulation and control. Regardless of where you stand on the whole privacy and surveillance issue (remember Edward Snowden), I cannot imagine a scenario in which there would not be an endless number of governments, advertisers, insurers and marketing folks looking to tap into the very biological core of our cognition to use it as a means of thwarting evildoers and selling you stuff. And what’s not to look forward to with that?

 

And what if the tech normalises to such a point that it becomes mandatory for future generations to have a whole-brain implant at birth to combat illegal or immoral behaviour (however defined)? This obviously opens up a massive set of questions that go far beyond the technical hurdles that might never be cleared. It nonetheless matters that we think about them now.

Brain security

There’s also the issue of security. If we’ve learned one thing from this era of “smart” everything, it’s that “smart” means exploitable. Whether it’s your fridge, your TV, your car, or your insulin pump, once you connect something to something else you’ve just opened up a means for it to be compromised.

Doors are funny like that. They’re not picky about who walks through them, so a door into your head raises some critical security questions. We can only begin to imagine what forms hacking would take when you have a direct line into the minds of others. Would this be the dawn of Cognitive Law? A legal regime that pertains exclusively to that squishy stuff between your ears?

What it really all comes down to is this: across a number of fields at the intersection of law, philosophy, technology and society we are going to need answers to questions no one has yet thought of asking (at least not often enough; and for the right reasons). We have faced, are facing, and will face incredibly complex and overwhelming problems that we may well not like the answers to. But it matters that we ask good questions early and often. If we don’t, they’ll be answered for us.

And so Neuralink is probably a bad idea, but to the first person who fell into a firepit, so was fire. On a long enough time line even the worst ideas need to be reckoned with early on. Now who wants a Juicero?

Christopher Markou, PhD Candidate, Faculty of Law, University of Cambridge

This article was originally published on The Conversation. Read the original article.

Categorized in Science & Tech
At the lab at the Department of Electronic Systems at Aalborg University, Elisabeth De Carvalho and her team are developing a massive MIMO-system; hundreds of antennas that will make mobile data transmission far more efficient and safe in the future. Credit: Jakob Brodersen
Mobile base stations for 5G solutions will consist of hundreds of small antennas. Benefits include faster transmission, improved energy efficiency, better security and wider coverage. Researchers at Aalborg University are at the forefront of developing the new antenna technology.

As we move toward an increasingly interconnected world, with a 5G network expected to roll out over the next 3-5 years, the need for a well-functioning mobile network with ample room for more connected devices and increased data traffic grows ever more important.

 

"With the 'internet of things,' as it is popularly known, more and more devices need to be connected," says Elisabeth De Carvalho, Associate Professor in the Department of Electronic Systems at Aalborg University. Along with her colleagues, Associate Professors Patrick Eggers and Jesper Ødum Nielsen and PhD fellow Anders Karstensen, and with funding from the Chinese technology giant Huawei, she is working on a new type of base station system that caters to the seemingly endless need for more data capacity.

The system, which is still in its early stages, is called a 'massive MIMO'. MIMO is an abbreviation of 'Multiple-Input Multiple-Output' – a wireless technology used to transmit and receive data between a large number of connected devices and users at the same time.

In mobile base stations attached to tall buildings or rooftops, each unit might have a maximum of eight antennas that point out in different directions, spreading out data transmission over a large area. But the team in Aalborg is working on a unit that holds several hundred antennas, making it possible to connect much more precisely to each mobile unit.

"We don't know exactly what it is going to look like in the end. Maybe it could be a wall of a building that is covered in antennas on the outside—or on the inside. We are still uncertain about that," says Elisabeth De Carvalho.

 

Adding hundreds of antennas to a base station increases the data transmission rate many times because the energy is much more focused. And because focused energy is also able to travel farther, the mobile coverage is likely to improve.

At the same time, base station energy consumption is expected to drop compared with present-day systems:

"With many antennas, it is like having thin, concentrated water hoses that are aimed directly where you want them, rather than a huge, leaky fire hose that just splashes water all over the place," explains Patrick Eggers.

"With the new technology, we are not simply sending out data in all directions like a radio transmitter; we hope to be able to create a sort of virtual cable that is focused and narrow between the base station and the connected unit. We confine the space that we use for the transmission. This provides a faster and better connection."
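Eggers’ water-hose analogy corresponds to what engineers call array gain. The toy simulation below (a sketch for illustration only, not the Aalborg team’s code; the antenna count and unit-magnitude channel phases are simplifying assumptions) shows why steering hundreds of antennas toward one user concentrates energy:

```python
import cmath
import math
import random

def array_gain(num_antennas, steered=True, trials=2000):
    """Average received signal power from a base station with
    `num_antennas` elements and a fixed total transmit power, with the
    per-antenna phases either matched to the channel ("steered", i.e.
    conjugate beamforming) or left unmatched (a plain broadcast)."""
    random.seed(0)  # deterministic, so the two cases are comparable
    total_power = 0.0
    for _ in range(trials):
        # Each antenna sees a random channel phase on the way to the user.
        channel = [cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                   for _ in range(num_antennas)]
        amp = 1.0 / math.sqrt(num_antennas)  # split power across antennas
        if steered:
            # Pre-rotate each antenna's signal by the conjugate of its
            # channel phase so all contributions add in phase at the user.
            received = sum(amp * h * h.conjugate() for h in channel)
        else:
            # Unsteered: contributions arrive with random phases and
            # largely cancel out.
            received = sum(amp * h for h in channel)
        total_power += abs(received) ** 2
    return total_power / trials
```

With 100 antennas and the same total transmit power, the steered case delivers on average about 100 times the power of the unsteered broadcast – the "thin, concentrated water hose" in numbers.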

Improved security

Confining the transmission space is not only a capacity issue. One of the added benefits of having a massive number of antennas is that it improves the security of the data transmission.

"The more you can confine space, the harder it gets for others to listen in," says Patrick Eggers. "When you have a broadcast, you can always put up an antenna and pick up the signal if you are the CIA or KGB or whatever, if you have equipment that is strong enough to decode the signal. But if you can't get the signal in the first place, it becomes really difficult. That is a major advantage for industries as well as private persons. The more you control space, the harder it gets for intruders to get in," he says.

 

However, many of the benefits of a massive MIMO system will remain assumptions for some time to come. So far, the team has built a scaled-up model of a part of a massive MIMO array in the lab in order to do channel measurement and to figure out how to build a base station later on.

"Currently, we are looking at what kind of performance you can get out of a massive MIMO. The performance depends very much on what is happening in the air between your device and the base station. We want to build channel models from those measurements. The models are necessary for engineers to test their algorithms," says Elisabeth De Carvalho. "There is a lot of research that we still need to do before we build our prototype."

At the moment, there is practically no real-life information about how a massive MIMO would work or what the wireless channel looks like, and that information is crucial to the way the system is going to be built.

"There are a lot of assumptions and theories, but they all assume that what happens in the air goes on in a certain way, but no one really knows. Not yet, at least," says Jesper Ødum Nielsen.

Source: This article was published phys.org

Categorized in Science & Tech

Illustration by Chris Gash

One attorney says cleaning the internet of negative content for highly influential executives is a huge business.

Gawker may be gone, but Michael Lynton hasn’t forgotten about a story that ran on the now-bankrupt news site following the 2014 hack at Sony Pictures.

In fact, Sony's outgoing chairman has in recent weeks taken advantage of the troubles that have befallen Gawker in the wake of Hulk Hogan’s stunning $140 million judgment to have an unflattering story about his family quietly wiped from the site’s archives. Not only has the post vanished from the Gawker archive; its administrators have attempted to “de-index” it using special metacode to ensure it isn't cached by search engines or captured by other digital preservationists.

 

The story in question was written by Sam Biddle and published on April 21, 2015. The article quoted heavily from Lynton's emails, which became public thanks to a massive intrusion that the Obama administration attributed to the North Koreans in advance of the release of the Seth Rogen film The Interview.

When the hack happened three years ago, Sony begged journalists to exercise care with leaked information and even threatened the media with legal action for exposing secrets, though the studio never did go to court to challenge what news outlets published. Had that happened, it would have surely invited a huge First Amendment battle. Nevertheless, after the Gawker Media Group declared bankruptcy and sold most of its assets to Univision’s Fusion Media Group for $135 million last August — with the notable exception of the Gawker.com trademark and archives — Lynton saw an opportunity. In order to clean up its legal liabilities in advance of the sale, Gawker reached several settlements in which it agreed to take down a few of its other controversial stories, including the one about Hogan’s sex tape that brought on its demise. These removals happened thanks to claims officially lodged in court against the debtor. It's unclear how Lynton effectuated a removal. Nothing publicly was filed, although it's possible there were claims filed under seal.

 

The story came down after the argument came that Biddle’s piece was defamatory and an invasion of privacy, though Andrew Celli, the Lynton family attorney, declines to discuss the particulars of who he contacted or how he succeeded in getting the story taken down. According to Gawker bankruptcy records, Celli did file proof of claims on behalf of two anonymous individuals under seal in September. (A lawyer for Gawker’s administrator didn’t respond to a request for comment.) Judging by what’s been captured at Archive.org, the removal seems to have occurred in April. Even though the story was based on communications between Lynton, now chairman at Snap, Inc. (which built its brand off of the appeal of messages that won't remain on the internet forever), and others, Lynton's family asserted the story carried the untrue assertion that he unduly influenced an elite academic institution. 

Celli made contact with The Hollywood Reporter's general counsel to express concern after I made inquiries about the vanished article with Gawker. He later suggested that to even repeat the gist of the original Gawker story would be damaging. He threatened a lawsuit and, referring to the Sony hack, told me, “There is a sin at the bottom of this. It’s wrong. The source for information is the result of a crime.”

 

The attorney has a point, but there are also some deeper issues at stake. Last month, UCLA Law professor Eugene Volokh wrote a column for The Washington Post about an actor who had been indicted on sex crime charges only to later be cleared. Volokh discovered how the actor (or someone working for him) had demanded Google de-index news coverage of his case. Volokh wrote, “What should our view be when someone tries to get the stories about them to vanish from search results this way? Should it matter that there is real evidence that he was innocent?”

Around this time, I was in communications with a reputation specialist who had been hired by an entertainment professional who had been sued a few years back in a case I had covered. The client was dismayed to see my article atop the results of a Google search for her name. This was causing her problems getting employment, the specialist said: Would I kindly remove the story?

 

This is altogether very common.

“Cleaning the internet of negative content by highly influential executives is a huge business,” says Bryan Freedman, a Hollywood attorney who represents talent agencies and many stars. “I spend a great portion of every day for high-level clients analyzing the approach to be taken and then creating a plan and executing it usually on various platforms. There are other tricks that are not commonly known but incredibly effective.”

As to Volokh’s questions, I see value to news archives and believe removing articles sets a dangerous precedent, but I can at least understand in certain situations the attempts to make information harder to find. Is manipulating search engines really so troubling?

In Europe, authorities have given private citizens a “right to be forgotten” – or, more precisely, the ability to demand that search engines like Google eradicate information that is no longer newsworthy. Here in America, there is no such right. Journalists are under no obligation even to update their stories – and unfortunately, many don’t.

 

As for the Lynton situation, I asked Volokh about it.

“Normally, I think that asking Gawker to take down material that’s allegedly defamatory and privacy-invading would be the right approach,” he responded. “The problem here is that it sounds like Lynton approached the [Gawker] administrator, which does raise the problem of material being squelched without the exercise of real editorial judgment. Yet I take it we wouldn’t want a rule that, once a media site goes bankrupt, people who have legit defamation/privacy claims about stories on the site would have no one to turn to. So maybe this comes down to the merits of his objection.”

Gawker founder Nick Denton didn’t respond to a request for comment, but last July, he spoke to The New York Times about how the Lynton article, along with one about Bill O’Reilly’s temper and Hillary Clinton’s secret kitchen cabinet, were ones he was proud of. “In all those examples, there was a point, and a public interest in the truth getting wider circulation," said Denton at the time.

 

Celli makes his own point that even painting the Gawker story in broad brush strokes creates a false portrait of Lynton’s family. I could have also written this story without detailing what exactly Gawker had reported. That’s something BuzzFeed did when it rushed out its own version of this story on Thursday.

But as Volokh said, it's important to understand the merits of Lynton's objection. And it could also be argued that writing about any defamation claim constitutes some echo of information damaging to someone’s reputation. Ultimately, I decided that moves made by public figures to take down information — including by way of robots.txt files — are, well, newsworthy regardless of the origins and that it was important enough to provide at least some detail.
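The robots.txt mechanism mentioned above is, in practice, a plain-text file at a site’s root that asks compliant search-engine crawlers to skip particular pages; the related “metacode” for de-indexing is typically a tag such as <meta name="robots" content="noindex"> placed in a page’s HTML head. A minimal robots.txt entry looks like this (the path shown is hypothetical, for illustration only):

```
# robots.txt, served from the site root
# Asks all compliant crawlers not to crawl the given path
User-agent: *
Disallow: /2015/04/example-article/
```

Neither mechanism removes content from the web; both merely request that crawlers stay away, which is why determined archivists can sometimes still find such pages.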

Source: This article was published on hollywoodreporter.com by Eriq Gardner

Categorized in Science & Tech

Despite the reach of the Internet and its growing complexity, no physical map of the Internet had been produced, until now. The outcome highlights the Internet-dependent nature of our world.

To understand the depth of the project it is important to appreciate what the Internet is. The Internet should not be thought of as synonymous with World Wide Web. The Internet is a physical entity, a massive network of networks made up of cables, servers and computers. The networking infrastructure connects millions of computers together globally. This creates a network in which any computer can communicate with any other computer.

 

The end-product is an Internet Atlas, the first detailed map of the Internet's structure worldwide. The map resembles, at first glance, a conventional map of a geographical territory; however, the series of lines represents crucial pieces of the physical infrastructure of the Internet rather than geographical features or political boundaries. For most people these connections are out of sight, yet they are critical items of physical infrastructure, and without them the Internet as we understand it would not exist.

The map has been developed by a team put together by Professor Paul Barford and Ramakrishnan Durairajan. The scale of the project reveals the complexity of the connected world, showing aspects like submarine cables buried beneath the ocean floor, which are necessary to allow continents to communicate with each other. On land, the map reveals how buildings packed with servers engage in communications traffic exchange with different service providers, across Internet exchange points.

 

To construct the map, millions of data items were entered. One complexity was the lack of data about where most of the Internet physically is. While the researchers received some information from Internet providers, they also had to resort to compiling local permits in various countries for works such as laying cables.

There’s a point to it that goes beyond a mere intellectual exercise. The Internet remains under threat from actors ranging from low-grade hackers to major terrorist groups. Beyond this, the Internet is under threat from natural forces, such as extreme weather events like hurricanes. Add to this other accidental events, such as problems with rail tracks; this matters because considerable stretches of cabling run under the rail network in many countries, including the U.S.

The map was recently presented at the RSA Conference in San Francisco, a major cybersecurity conference. Commenting on this, Ramakrishnan Durairajan explains: "The question of 'how does mapping contribute to security?' is one of our fundamental concerns." By taking the map to the conference, the issue of Internet security received wider appreciation and coverage. These issues are global and require world governments to work together, since Internet security is a matter of shared risk. In all likelihood, damage to one area impacts more than one entity, be that a networking hub or even multiple countries.

 

With the static map of the Internet produced, the researchers want to turn it into something interactive, to show how the Internet is functioning and evolving in real-time.

Source: This article was published on digitaljournal.com

Categorized in Science & Tech

Net neutrality, the idea that internet service providers must treat everything equally, has been described as ‘the first amendment of the internet’. Photograph: Juice/REX/Shutterstock

US campaigners rejoiced in 2015 when ‘net neutrality’ enshrined the internet as a free and level playing field. A vote on 18 May could take it all back

Thursday 26 February 2015 was a good day for internet freedom campaigners. On that day the Federal Communications Commission (FCC) voted to more strictly regulate internet service providers (ISPs) and to enshrine the principles of “net neutrality” as law.

 

The vote reclassified wireless and fixed-line broadband service providers as Title II “common carriers”, a public utility-type designation that gives the FCC the ability to set rates, open up access to competitors and more closely regulate the industry.

“The internet is the most powerful and pervasive platform on the planet,” said FCC chairman Tom Wheeler. “It’s simply too important to be left without rules and without a referee on the field.”

Two years on and Trump’s new FCC chairman Ajit Pai, a former Verizon lawyer, has announced plans to overturn the 2015 order, in turn gutting net neutrality. A vote on this proposal is due to take place on 18 May. Here’s why it matters.

What is net neutrality?

Net neutrality is the idea that internet service providers (ISPs) treat everyone’s data equally, whether that’s an email from your mom, a bank transfer, or a streamed episode of The Handmaid’s Tale. It means that ISPs don’t get to choose which data is sent more quickly and which sites get blocked or throttled (for example slowing the delivery of a TV show because it’s streamed by a video company that competes with a subsidiary of the ISP) and who has to pay extra. For this reason some have described net neutrality as the “first amendment of the internet”.

 

Protesters hold a rally at the Federal Communications Commission (FCC) in Washington in 2015, several weeks before net neutrality was made law. Photograph: Karen Bleier/AFP/Getty Imag

What is the difference between an ISP and a content provider?

ISPs provide you with access to the internet and include companies such as Verizon, Comcast, Charter, CenturyLink and Cox. Content companies include Netflix, Hulu and Amazon. In some cases ISPs are also content providers; for example, Comcast owns NBCUniversal and delivers TV shows through its Xfinity internet service.

 

Who supports net neutrality?

Content providers including Netflix, Apple and Google. They argue that people are already paying for connectivity and so deserve access to a quality experience. Mozilla, the non-profit company behind the Firefox web browser, is a vocal supporter, and argues that it allows for creativity, innovation and economic growth.

More than 800 startups, investors and other people and organizations sent a letter to Pai stating that “without net neutrality the incumbents who provide access to the internet would be able to pick winners or losers in the market. They could impede traffic from our services in order to favor their own services or established competitors. Or they could impose new tolls on us, inhibiting consumer choice.”

Many consumers support the rules to protect the openness of the internet. Some of them may have been swayed by Last Week Tonight host John Oliver, who pointed out that “there are multiple examples of ISP fuckery over the years” so restrictions are important.

 

Who doesn’t support the FCC’s 2015 net neutrality rules?

Big broadband companies including AT&T, Comcast, Verizon and Cox. They argue that the rules are too heavy-handed and will stifle innovation and investment in infrastructure, and they have filed a series of lawsuits challenging the FCC’s authority to impose net neutrality rules.

Publicly, however, the message is different. Verizon released an odd video on the topic insisting that they were not trying to kill net neutrality rules and that pro-net neutrality groups are using the issue to fundraise.

Verizon’s PR campaign insists that the company supports net neutrality, despite a history of fighting against legislation that would enshrine it.

Comcast also launched a Twitter campaign insisting it supports net neutrality.


Are there other reasons why people don’t like the 2015 rules?

Yes. Opponents don’t like the idea of putting the federal government at the center of the internet when, as Pai has said, “nothing is broken”.

The new FCC chairman argues that the 2015 rules were established on “hypothetical harms and hysterical prophecies of doom” and that they are generally bad for business.

“It’s basic economics. The more heavily you regulate something, the less of it you’re likely to get,” he said.

The big broadband companies publicly state they are quibbling with the Title II “common carrier” designation rather than net neutrality per se. They believe they shouldn’t be regulated in the same way as telecommunications services, and they prefer the light-touch regulation they were subject to under their previous Title I designation under the Telecommunications Act of 1996. The FCC lacks the direct authority to regulate Title I “information services”.

How does this tie in to Trump’s approach to the internet?

Trump’s Republican party is showing its colors as friendly to big corporations even if it leads to the unfettered accumulation of corporate power.

 

It’s the second major roll-back of Obama-era internet protections. In March, Congress voted to allow ISPs to sell the browsing habits of their customers to advertisers. The move, which critics charge will fundamentally undermine consumer privacy in the US, overturned rules drawn up by the FCC that would have given people more control over their personal data. Without the rules, ISPs don’t have to get people’s consent before selling their data – including their browsing histories – to advertisers and others.

What can people do?

There are 10 days to protest the roll-back of the FCC’s net neutrality rules. You can use this website to write to the FCC and Congress or leave a voice message for Mozilla, which will collect them all together as an audio file and send them to the FCC.

Source: This article was published on theguardian.com

Categorized in Internet Privacy


The Association of Internet Research Specialists is the world's leading community for Internet Research Specialists, providing a unified platform that delivers education, training and certification for online research.
