
There are over 1.1 billion websites on the internet, but the vast majority of all traffic actually goes to a very select list of them. Google.com, for example, has an astounding 28 billion visits per month. The next closest is also a Google-owned property, Youtube.com, which brings in 20.5 billion visits.

Today’s infographic comes to us from Vodien, and it lists the 100 highest ranking websites in the U.S. by traffic, according to website analytics company Alexa.

The information is grouped by company – for example, you can see that Google controls four sites in the Top 100 (Google, Youtube, Blogger, and Google User Content), while Verizon owns the Huffington Post and AOL.com (they will also control Yahoo and Tumblr when that deal closes in Q2). The data is also sorted by industry, so sites in a similar category are grouped in the same color.

 

A STEEP DROPOFF

The dropoff from #1 to #100 is significant. Google.com has 28 billion visits, but a website like Citi.com (ranked #98) only has 53 million visits a month. That’s a 500x difference!

Meanwhile, a website like ours (Visualcapitalist.com) gets one million visits per month, and is ranked #33,000 in the United States – a 50x difference from Citi. Further down the tail, there are literally millions of tiny websites that get thousands or just hundreds of visits per month, and some that don’t get any love at all.

The whole distribution is quite fascinating, and it is clear that the spoils go overwhelmingly to the very top of the food chain. However, that also means that there is an entire world of millions of websites out there that almost no one (except Google’s crawler) has ever seen.

Source : visualcapitalist.com

Categorized in Online Research

I’m going to confess an occasional habit of mine, which is petty, and which I would still enthusiastically recommend to anyone who frequently encounters trolls, Twitter eggs, or other unpleasant characters online.

Sometimes, instead of just ignoring a mean-spirited comment like I know I should, I type in the most cathartic response I can think of, take a screenshot and then file that screenshot away in a little folder I only revisit when I want to make my coworkers laugh.

I don’t actually send the response. I delete my silly comeback and move on with my life. For all the troll knows, I never saw the original message in the first place. The original message being something like the suggestion, in response to a piece I once wrote, that there should be a special holocaust just for women.

 

It’s bad out there, man!

We all know it by now. The internet, like the rest of the world, can be as gnarly as it is magical.

But there’s a sense lately that the lows have gotten lower, that the trolls who delight in chaos are newly invigorated and perhaps taking over all of the loveliest, most altruistic spaces on the web. There’s a real battle between good and evil going on. A new report by the Pew Research Center and Elon University’s Imagining the Internet Center suggests technologists widely agree: The bad guys are winning.

Researchers surveyed more than 1,500 technologists and scholars about the forces shaping the way people interact with one another online. They asked: “In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust and disgust?”

The vast majority of those surveyed—81 percent of them—said they expect the tone of online discourse will either stay the same or get worse in the next decade.

Not only that, but some of the spaces that will inevitably crop up to protect people from trolls may contribute to a new kind of “Potemkin internet,” pretty façades that hide the true lack of civility across the web, says Susan Etlinger, a technology industry analyst at the Altimeter Group, a market research firm.

“Cyberattacks, doxing and trolling will continue, while social platforms, security experts, ethicists and others will wrangle over the best ways to balance security and privacy, freedom of speech and user protections. A great deal of this will happen in public view,” Etlinger told Pew. “The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor.”

Tor is software that enables people to browse and communicate online anonymously—so it’s used by people who want to cover their tracks from government surveillance, those who want to access the dark web, trolls, whistleblowers and others.

“Of course, this is already happening, just out of sight of most of us,” Etlinger said, referring to the use of hidden channels online. “The worst outcome is that we end up with a kind of Potemkin internet in which everything looks reasonably bright and sunny, which hides a more troubling and less transparent reality.”

 

The uncomfortable truth is that humans like trolling. It’s easy for people to stay anonymous while they harass, pester and bully other people online—and it’s hard for platforms to design systems to stop them. Hard for two reasons: One, because of the “ever-expanding scale of internet discourse and its accelerating complexity,” as Pew puts it. And, two, because technology companies seem to have little incentive to solve this problem for people.

“Very often, hate, anxiety and anger drive participation with the platform,” said Frank Pasquale, a law professor at the University of Maryland, in the report. “Whatever behavior increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”

News organizations, which once set the tone for civic discourse, have less cultural importance than they once did. The rise of formats like cable news—where so much programming involves people shouting at one another—and talk radio marks a clear departure from a once-higher standard of discourse in professional media.

Few news organizations are stewards for civilized discourse in their own comment sections, which sends mixed messages to people about what’s considered acceptable. And then, of course, social media platforms like Facebook and Twitter serve as the new public square.

“Facebook adjusts its algorithm to provide a kind of quality—relevance for individuals,” said Andrew Nachison, the founder of We Media, in his response to Pew. “But that’s really a ruse to optimize for quantity. The more we come back, the more money they make... So the shouting match goes on.”

The resounding message in the Pew report is this: There’s no way the problem in public discourse is going to solve itself. “Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day,” said Randy Bush, a research fellow at Internet Initiative Japan, in his response to Pew.

Many of those polled said we’re now witnessing the emergence of “flame wars and strategic manipulation” that will only get worse. This goes beyond obnoxious comments, or Donald Trump’s tweets, or even targeted harassment. Instead, we’ve entered the realm of “weaponized narrative” as a 21st-century battle space, as the authors of a recent Defense One essay put it. And just like other battle spaces, humans will need to develop specialized technology for the fight ahead.

 

Researchers have already used technology to begin to understand what they’re up against. Earlier this month, a team of computer scientists from Stanford University and Cornell University wrote about how they used machine-learning algorithms to forecast whether a person was likely to start trolling. Using their algorithm to analyze a person’s mood and the context of the discussion they were in, the researchers got it right 80 percent of the time.

They learned that being in a bad mood makes a person more likely to troll, and that trolling is most frequent late at night (and least frequent in the morning). They also tracked the propensity for trolling behavior to spread. When the first comment in a thread is written by a troll—a nebulous term, but let’s go with it—then it’s twice as likely additional trolls will chime in compared with a conversation not led by a troll to start, the researchers found. On top of that, the more troll comments there are in a discussion, the more likely it is participants will start trolling in other, unrelated threads.

“A single troll comment in a discussion—perhaps written by a person who woke up on the wrong side of the bed—can lead to worse moods among other participants, and even more troll comments elsewhere,” the Stanford and Cornell researchers wrote. “As this negative behavior continues to propagate, trolling can end up becoming the norm in communities if left unchecked.”
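The forecasting setup the researchers describe can be caricatured as a logistic model over features like mood and thread context. The sketch below is purely illustrative: the weights are invented for this example, and the real study trained on a large corpus of news-site comments rather than anything this simple.

```python
import math

# Hypothetical feature weights standing in for a trained model; the real
# study fit its model to millions of real comments.
WEIGHTS = {"bad_mood": 1.4, "troll_comments_in_thread": 0.9, "bias": -2.0}

def troll_probability(bad_mood: bool, troll_comments_in_thread: int) -> float:
    """Logistic score: a worse mood and more prior trolling in the
    thread both push the predicted probability of trolling upward."""
    z = (WEIGHTS["bias"]
         + WEIGHTS["bad_mood"] * bad_mood
         + WEIGHTS["troll_comments_in_thread"] * troll_comments_in_thread)
    return 1 / (1 + math.exp(-z))

# A calm reader in a clean thread vs. a grumpy reader in a troll-led thread.
calm = troll_probability(bad_mood=False, troll_comments_in_thread=0)
primed = troll_probability(bad_mood=True, troll_comments_in_thread=3)
assert primed > calm
```

Even this toy version captures the paper's headline finding: context and mood together, not some fixed population of "born trolls," drive most of the predictive signal.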

Using technology to understand when and why people troll is essential, but many people agree the scale of the problem requires technological solutions. Stopping trolls isn’t as simple as creating spaces that prevent anonymity, many of those surveyed told Pew, because doing so also enables “governments and dominant institutions to even more freely employ surveillance tools to monitor citizens, suppress free speech and shape social debate,” Pew wrote.

“One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long,” Bailey Poland, the author of “Haters: Harassment, Abuse and Violence Online,” told Pew. Pseudonymity may be one useful approach—so someone’s offline identity is concealed, but their behavior in a certain forum over time can be analyzed in response to allegations of harassment. Machines can help, too: Chatbots, filters and other algorithmic tools can complement human efforts. But they’ll also complicate things.

“When chatbots start running amok—targeting individuals with hate speech—how will we define ‘speech’?” said Amy Webb, the CEO of the Future Today Institute, in her response to Pew. “At the moment, our legal system isn’t planning for a future in which we must consider the free speech infringements of bots.”

 

Another challenge is that no matter what solutions people devise to fight trolls, the trolls will fight back. Even among those who are optimistic that the trolls can be beaten back, and that civic discourse will prevail online, there are myriad unknowns ahead.

“Online discourse is new, relative to the history of communication,” said Ryan Sweeney, the director of analytics at Ignite Social Media, in his response to the survey. “Technological evolution has surpassed the evolution of civil discourse. We’ll catch up eventually. I hope. We are in a defining time.”

Source : nextgov.com

Categorized in Science & Tech

The South Korean giant has taken another stride and released its internet browser application for mobiles on the Google Play Store. This is the first time a browser from Samsung has been made available for smartphones from other brands. According to the company, the Internet browser app will work on all Samsung Galaxy and Google Nexus phones running Android 5.0 Lollipop or later, but it isn’t available in all countries yet.

The application is still in beta, but it includes many new features, such as support for 360-degree videos, which allows users to enjoy them without using the Gear VR headset. Samsung also announced that those who install the browser will receive the newest features as they are added by the developers. Moreover, there is a picture-in-picture mode, along with an Amazon Shopping Assistant that lets users compare prices for different products on Amazon.

Other features, like web payments and DuckDuckGo search engine support, are also included for better use of the services. There is also a Content Blocker feature that allows third-party apps to provide content-blocking filters, letting users browse clutter-free without unnecessary content. A Video Assistant lets the user switch between various video viewing modes, and a pop-up player allows users to keep browsing even while watching a video.

The browser also provides a high level of security: users need to verify their identity before they get access to the browser, using either a fingerprint sensor or a password, and a secret mode is also provided. We are not sure when Samsung will roll out this app across the globe, but it is expected to happen soon.

Source : phoneradar.com

 

Categorized in Others

The common JPEG could be about to get a lot smaller thanks to Google’s new software. This could lead not only to a faster Web, but also to direct savings on storage costs for everyone from hosting services to hobbyist photographers and especially smartphone users on metered connections.

20x24 pixel zoomed areas from a picture of a cat’s eye. Uncompressed original on the left. Guetzli (right) shows fewer ringing artefacts than libjpeg (middle) without requiring a larger file size.

Back in 2014, I wrote about BPG, a new file format purported to deliver equivalent quality to JPEG but with much smaller file sizes. The purpose of BPG is ‘to replace the JPEG image format when quality or file size is an issue’.

Fast forward to 2017 and BPG has obviously failed to achieve its stated purpose, with the humble JPEG still firmly entrenched as the file format of choice for the vast majority of users.

However, a new, free-to-use compression technology from Google now hopes to revolutionise the JPEG where BPG has failed. Announced earlier this month, Guetzli is a new open-source algorithm which creates JPEG files 35% smaller than typical current methods.

What sets Guetzli apart

The crucial difference between Guetzli and BPG is that the latter requires new decoding code to be written before its files can be read. Standard browsers and image software would simply fail to open the files without specific support for the format.

Guetzli, on the other hand, continues to use the established JPEG format. So, all software which can currently read standard JPEGs will also be able to read Guetzli JPEGs without modification.

There’s some crossover, in terms of the final outcome at least, with Google’s RAISR technology, announced at the end of last year, which can blow up small images into much larger versions with significantly higher quality than was previously possible.

Both Guetzli and RAISR can cut down significantly on the required size of image files, albeit in rather different ways. There’s also no reason why the two technologies can’t be used together.

 

How does it work?

The Guetzli encoder works by increasing the level of compression while creating the JPEGs, leaving the standard decompression algorithms for reading and displaying the images unchanged. The increase in compression comes from a new and more sophisticated model of human colour perception than is used by current JPEG encoders.

This results in higher quality images at a reduced file size, but also comes with a tradeoff in speed. Google’s engineers say that Guetzli is currently significantly slower than a standard JPEG encoder. Users of the current Windows version are reporting conversion times of several minutes for a single large JPEG.

Guetzli is potentially great news for anyone who stores or displays JPEG images as the time taken to download and the space needed to store them is significantly reduced. However, there are other options for those who want to create smaller JPEGs.

One such option is JPEGmini which claims even bigger file size reductions than Guetzli, up to 80%, with no loss in perceived quality. The big difference here though is that JPEGmini is a commercial product with prices ranging from $29 for the basic Home User option, up to $199 per month for those wishing to use it on Web servers and large photo repositories.

At the moment, JPEGmini is still a good option for everyday use as it is a polished end-user product, complete with plugins for Adobe Lightroom and Photoshop. However, a free algorithm such as Guetzli, once the speed issues are addressed, will most definitely pose a threat to paid-for products like JPEGmini, which hope to charge for exactly the same final result.

Source : forbes.com

Categorized in Science & Tech

Companies like Paul Bunyan Communications, AT&T and Comcast have made public announcements pledging that their values remain unchanged in the face of the FCC ruling that now allows them to sell customer data.

(TNS) — BEMIDJI — Internet providers locally and nationally have stated they won't collect and sell Web browsing history.

The responses follow the passage of federal legislation this week allowing internet service providers to sell their customers' web browsing history. The legislation reverses an Obama-era privacy rule from the Federal Communications Commission.

 

Regionally, Paul Bunyan Communications stated in a press release that regardless of what the law allows, it won't sell members' web browsing history.

 

"Our members' privacy is of the utmost importance to our member-owned and governed cooperative," Gary Johnson, the CEO and general manager of Bemidji-based Paul Bunyan Communications, said in the release. "We have never sold member web browsing history and have no plans to do so in the future regardless of what the rules and regulations may allow.

"We feel it is extremely important to reassure our customers that our cooperative will not sell their web browsing history," Johnson said. "Any provider who sells their customers' web browsing history without their consent is putting profits ahead of the trust of its customers and we believe that flies in the face of common decency, customer privacy and certainly our cooperative values and principles."

Other companies across the country made similar statements, such as Comcast.

"We do not sell our broadband customers' individual web browsing history. We did not do it before the FCC's rules were adopted and we have no plans to do so," said Comcast's Chief Privacy Officer Gerard Lewis in a release, according to Reuters Media.

AT&T, meanwhile, said in a statement that the company, "will not sell your personal information to anyone, for any purpose. Period," Reuters reported.

Author : BEMIDJI PIONEER, MINN

Source : govtech.com

Categorized in Internet Privacy

As government chisels away at internet privacy protections, researchers at the Massachusetts Institute of Technology and Stanford University have developed a system they say will give you more anonymity in cyberspace.

The catch: You’ll probably have to pay for it.

To start with, it’s important to remember that every single thing you type online gets stored as data, no matter what kind of web protection software you own.

Sites like Google, Yelp, Kayak and others translate each request into a query, which gets stored in a database center.

With that in mind, the MIT-led research group has developed Splinter, a system that cuts the cord on that data flow without having to mask or delete the actual information.

 

Splinter allows websites to encrypt a user’s internet searches so they're never saved. The data is still out there, but is split among multiple database centers. That scrambles the search information a person has entered, preventing the websites from gathering information about the person who made the request.
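Splinter's real machinery is built on a cryptographic technique called function secret sharing, but the core idea of splitting a single query across multiple non-colluding database servers can be illustrated with a classic two-server trick: send each server a random-looking subset of record indices and XOR its answers together. Everything below (the toy database, the helper names) is a simplified sketch of that idea, not Splinter's actual protocol.

```python
import secrets

def share_query(index, n):
    """Split a lookup for record `index` into two subsets, one per server.
    Each subset on its own is uniformly random, so neither server learns
    which record the client actually wants."""
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {index}  # symmetric difference flips membership of `index`
    return s1, s2

def server_answer(database, subset):
    """Each server XORs together the records named in its subset."""
    out = 0
    for j in subset:
        out ^= database[j]
    return out

def reconstruct(a1, a2):
    """XORing both answers cancels every record except the queried one."""
    return a1 ^ a2

db = [17, 42, 7, 99]  # the same database replicated on both servers
s1, s2 = share_query(2, len(db))
result = reconstruct(server_answer(db, s1), server_answer(db, s2))
assert result == db[2]
```

The catch mirrors the article's point: this only protects privacy if the servers don't collude, and it requires the web service itself to run the protocol on its side.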

“Data is floating around everywhere, so if you have anything on the internet, anyone can learn about it at some point. That’s scary,” said Frank Wang, an MIT graduate student who helped develop Splinter. “Your internet service provider is learning about you, so [we said] ‘How can we leak the least amount of information?’" 

Now, about that catch: Instead of relying on internet service providers to look out for your privacy, Splinter puts the power of protection into the hands of web services. The consumer cannot install the software. It's a system each company would have to incorporate into their own site.

Because consumers are demanding more privacy in their web browsing, they may be willing to pay for the privilege. And companies may be willing to satisfy that demand.

With Splinter, services could charge a nominal fee for queries, like maybe $5 a month. Wang said using Splinter costs a web service less than 2 cents per search.

"Web services could say, 'Hey, I charge you to [search for a flight], but I won't release any of that information or use your data,” Wang said. 

David O’Brien, a senior researcher at Harvard’s Berkman Center for Internet & Society, said the concept could open a window.

“It’s been discussed for years now and could be one alternative path,” he said. The problem, though, is that payment for privacy “hasn’t been supported by demand and maybe won’t ever be.”

 

But more people are demanding privacy, especially in the wake of both the House and Senate voting to repeal internet privacy protections adopted by the Federal Communications Commission during the Obama administration. That means that internet service providers like Comcast and AT&T would be able to share or sell information from a web user without that user’s permission.

Now more than ever, Wang said, "users are starting to care more about privacy, and the government is not regulating it as much as [people] wanted them to. So there’s an opportunity for web services to differentiate themselves and say, ‘We have a private offering.’” 

So what can you do, for free, to protect your privacy?

The best way to protect yourself is to practice abstinence, virtually, said David O’Brien of Harvard’s Berkman Center for Internet & Society.

“Unfortunately, the simplest answer is: Don’t use online services,” O’Brien said. “For most people, that doesn’t really fly. Short of that, a lot of services you can use obscure your identity online.”

Aside from that, O’Brien and MIT researcher Frank Wang suggested using Tor, a system that links your internet connection to multiple servers, placed all around the country or even the world. This means that when you go to a website like Google, it looks, to that website, “as if you’re coming from multiple different countries with each query you send.”

Another service, DuckDuckGo, is a search engine that promises not to track your queries.

Your computer or smart device's "Incognito" mode can be used in conjunction with these programs to provide a bit more privacy. It protects against targeted advertisements, but doesn't prevent companies from obtaining important data about the searches themselves.

In the end, O’Brien said, true privacy may be a dream. “It’s hard to be truly anonymous on the internet. It might even be impossible.”

Author : KRISTIN TOUSSAINT

Source : metro.us

Categorized in Internet Privacy

Internet service providers have warned that using WhatsApp offline can expose subscribers to hacking and malicious viruses.

This was contained in a message by Airtel Nigeria, which advised Nigerians, especially subscribers on its network, to be vigilant in accepting certain messages.

It said on Tuesday in Lagos that there had been messages in circulation which tend to show that a subscriber could make use of WhatsApp without access to the internet.

“Dear customer, our attention has been drawn to messages notifying customers of the use of WhatsApp without internet.

 

“Kindly ignore and do not click on those links, as it redirects to cloned applications.

“The links may be used to harvest sensitive information from your device. Be cautious,” Airtel said in a text message to its customers.

The News Agency of Nigeria reports that the message has been in circulation since the beginning of 2017, and many may have fallen victim.

The hackers’ message usually reads, “First, you need to update your WhatsApp iOS to the latest WhatsApp version 2.17.1.

“Now, this allows sending the message to any contact in your list without having internet connection.

“This feature was available on Android for more than a year, but iOS users are only getting it now.

“Also, you will be getting an option to send 30 photos or videos at a time if you update your WhatsApp.”

The message has been confirmed to be a hoax, and should be disregarded. (NAN)

Source : punchng.com

Categorized in Internet Privacy

A powerful laser shining up into space will soon transmit data between the Earth and the International Space Station.

Nasa is hoping to establish laser links at a rate of over one gigabit per second - a speed most home broadband users could only dream of.

This would pave the way for 3D video from space and enable high definition remote robotic exploration of other moons and planets.

The Laser Communications Relay Demonstration (LCRD) will help Nasa to understand the best ways to operate laser communications systems. 

This could enable much higher data rates for connections between spacecraft and Earth, including downloading scientific data and allowing astronauts to send better video messages back home.

LCRD - which will be launched by Nasa's Goddard Space Flight Centre in Greenbelt, Maryland - is designed to function for between two and five years. 

Two ground terminals equipped with laser modems will be set up on Table Mountain, California, and in Hawaii.

They will test the communications capability to and from LCRD - which will be located in an orbit that matches Earth's rotation, called a geosynchronous orbit - between the two stations.

The LCRD launch is scheduled for summer 2019, and a terminal is also being designed for the International Space Station that will be launched in 2021. 

Steve Jurczyk, associate administrator of Nasa's Space Technology Mission Directorate, said: 'LCRD is the next step in implementing Nasa's vision of using optical communications for both near-Earth and deep space missions.

'This technology has the potential to revolutionise space communications.'

Laser communications - also known as optical communications - encode data onto a beam of light.

This is then transmitted between spacecraft and eventually to computers back on Earth. 

This technology offers data rates that are 10 to 100 times better than current radio-frequency (RF) communications systems. 

The LCRD mission is hoping to reach gigabit per second speeds.


While such speeds are possible through conventional fibre optics back here on Earth, it is likely to be the best part of a decade before they are seen in most homes. 

The systems themselves are also much smaller than RF, weigh less and consume less power.

This combination of factors will become critically important as humans embark on long journeys to the moon, Mars and beyond.


It will also test the long-term reliability of such systems, as well as exposing them to different environmental and operational conditions.

The mission builds upon a previous mission, the Lunar Laser Communications Demonstration (LLCD).

 

Launched aboard the Lunar Atmosphere and Dust Environment Explorer in 2013, LLCD successfully demonstrated the potential for laser communications in space.

The test, in October 2013, beamed data at speeds reaching 622 megabits per second to Earth from a spacecraft orbiting the moon.  

The LCRD payload will consist of two identical optical terminals connected by a component called a space switching unit, which acts as a data router. 

The space switching unit is also connected to a radio-frequency downlink.


A terminal is also being designed for the International Space Station that will be launched in 2021. Scientists at Nasa's Goddard Space Flight Centre in Greenbelt, Maryland (pictured) have been testing out the device in advance of its launch

The modems translate digital data into laser or radio-frequency signals and back again. 

Once they convert the data to laser light, the optical module will beam the data to Earth. 

To do so, the module must be perfectly pointed to receive and transmit the data. 

The controller electronics (CE) module commands actuators to help point and steady the telescope despite any movement or vibration on the spacecraft.   


Author : TIM COLLINS FOR MAILONLIN

Source : dailymail.co.uk

 

Categorized in Science & Tech

The internet is amazingly robust, but like any complex network is still prone to the occasional failure. A new analysis using network theory explains why the dark net – the hidden underbelly of the regular internet, invisible to search engines – is less vulnerable to attacks. The lessons learned could help inform the design of more robust communications networks in the future.

The regular internet’s design is deliberately decentralised, which makes it very stable under normal circumstances. Think of each site or server as a node, connected to numerous nodes around it, which in turn connect to even more nodes, and so on. Take out a node or two here or there and the network continues to function just fine. But this structure also makes it more vulnerable to a coordinated attack: take out many nodes at once, as happens during a distributed denial of service (DDoS) attack, and the result can be catastrophic failure that cascades through the entire network.
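The node-and-link picture above is easy to simulate. The sketch below uses a hypothetical hub-and-spoke toy network (not real internet topology data) and measures the largest surviving cluster after removing nodes, showing why a targeted strike on a well-connected node hurts far more than a random failure.

```python
from collections import deque

def largest_component(nodes, edges):
    """Size of the biggest connected cluster among surviving nodes."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:  # drop edges touching failed nodes
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:  # breadth-first walk of one component
            n = queue.popleft()
            size += 1
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        best = max(best, size)
    return best

# Toy hub-and-spoke network: node 0 connects to everyone else.
nodes = set(range(10))
edges = [(0, i) for i in range(1, 10)]

# A random failure barely matters...
print(largest_component(nodes - {7}, edges))   # 9

# ...but a targeted attack on the hub shatters the network.
print(largest_component(nodes - {0}, edges))   # 1
```

Real internet topology sits between these extremes, but the heavy reliance on highly connected hubs is exactly what coordinated attacks exploit.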

 

The dark net is much less vulnerable to such directed attacks, thanks to its unique structure. Manlio De Domenico and Alex Arenas at Rovira i Virgili University in Tarragona, Spain, used data from the Internet Research Lab at the University of California, Los Angeles, to build their own model of the dark net. They ran simulations to see how it would react to three failure scenarios: random node failures, targeted attacks on specific nodes, and cascading failures throughout the network.

They found that an attack on the dark net would need to hit four times as many nodes to cause a cascading failure as on the regular internet. This stems from its use of “onion routing”, a technique for relaying information that hides data in many layers of encryption. Rather than connecting a user’s computer directly to a host server, onion routing bounces the information through various intermediary nodes before delivering it to the desired location. This stops an attack from spreading so widely.
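The layering that onion routing describes can be sketched in a few lines. The "cipher" below is a toy XOR stand-in for real encryption (Tor uses proper ciphers, and its actual circuit protocol is far more involved): the client wraps the message once per relay, and each relay can peel exactly one layer, so only the final hop sees the plaintext and no relay sees the full path.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption; XOR is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, relay_keys):
    """Client wraps the message in one layer per relay, innermost first."""
    for key in reversed(relay_keys):
        message = xor_cipher(message, key)
    return message

def route(onion: bytes, relay_keys):
    """Each relay in turn peels exactly one layer with its own key."""
    for key in relay_keys:
        onion = xor_cipher(onion, key)
    return onion

keys = [b"guard", b"middle", b"exit"]  # hypothetical per-relay keys
onion = wrap(b"hello host", keys)
assert route(onion, keys) == b"hello host"
```

Because an attacker who compromises one relay recovers only one layer and one neighbouring hop, damage stays local instead of cascading, which is the resilience property the simulations measured.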

 

Powerful connections

Another reason for the dark net’s resilience is its lack of something called the “rich-club effect”. In the regular internet, powerful nodes connect more readily with other powerful nodes, creating what Simon DeDeo at Carnegie Mellon University in Pittsburgh, Pennsylvania, terms a “smoky back room” of “network elites”. An attack on one such node can trigger the failure of others, which can in turn lead to cascading failure across the network. The dark net doesn’t have this high level of connectivity between powerful nodes.

“This is [another] one of the things that make it more robust to attack,” says DeDeo. “The network elites are more spread out. In fact, the elites appear to be avoiding each other.”

This model of the dark net somewhat resembles a so-called “small-world network”, in which several heavily connected nodes link clusters of smaller local nodes – similar to how major air traffic hubs connect smaller local airports. Both systems exhibit similar resilience to catastrophic failure, although in-depth comparisons have yet to be completed.

Reconfiguring the entire internet to make it as robust as the dark net would be prohibitively expensive, but De Domenico thinks the pair’s work could still offer practical insights. “It is possible to rethink next-generation upgrades and the design of more localised communication networks, like the intranets of large companies,” he says.

Author : Jennifer Ouellette

Source : https://www.newscientist.com/article/2123354-why-the-dark-net-is-more-resilient-to-attack-than-the-internet/

 

Categorized in Deep Web

Personal data, misinformation (or as it's now been dubbed, fake news) and online political advertising: these are the three major problems of the modern internet that keep the inventor of the World Wide Web up at night.

Sir Tim Berners-Lee has shared his thoughts on the 28th anniversary of the day he submitted his proposal for the web.

The three new trends are something he's become increasingly worried about in the last 12 months, and which he believes "we must tackle in order for the web to fulfil its true potential as a tool which serves all of humanity".

 

1. Personal data

Berners-Lee warned that we have lost control of our personal data online and that companies and governments are increasingly using it to watch our every move online. He pointed toward the UK's Investigatory Powers Bill as an example of extreme laws that "trample on our rights to privacy".

"Even in countries where we believe governments have citizens’ best interests at heart, watching everyone, all the time is simply going too far," he said.

It comes just days after a huge leak of information claiming to detail the methods and tools used by the CIA to spy on people.

2. Fake news

Misinformation can spread like wildfire, Berners-Lee warns, and he puts this down to the dominance of just a few social media platforms and search engines.

"These sites make more money when we click on the links they show us. And, they choose what to show us based on algorithms which learn from our personal data that they are constantly harvesting. The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or ‘fake news’, which is surprising, shocking, or designed to appeal to our biases can spread like wildfire."

3. Political advertising

Also connected to the idea of fake news and personal data, Berners-Lee says the sophisticated industry that has sprung up around these two things is being used unethically.

"The fact that most people get their information from just a few platforms and the increasing sophistication of algorithms drawing upon rich pools of personal data, means that political campaigns are now building individual adverts targeted directly at users.

 

"One source suggests that in the 2016 US election, as many as 50,000 variations of adverts were being served every single day on Facebook, a near-impossible situation to monitor. And there are suggestions that some political adverts – in the US and around the world – are being used in unethical ways – to point voters to fake news sites, for instance, or to keep others away from the polls."

He added: "Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups. Is that democratic?"

It comes after concerns were raised around the US election; in the UK, MPs have launched an inquiry into the issue and its threat to democracy, while the data regulator is investigating the use of data in Brexit campaigning.

But the computer programmer is also taking action to help fix these things as head of the Web Foundation, and is calling on others to help. "It has taken all of us to build the web we have, and now it is up to all of us to build the web we want – for everyone," he said.

He also made several suggestions as to how action can be taken:

  1. Work with companies on putting greater data control in the hands of people, including new technology, and alternative revenue models such as subscriptions and micropayments.
  2. Fight against surveillance laws.
  3. Encourage gatekeepers such as Google and Facebook to continue fighting misinformation, but avoid the formation of a single body deciding on what is "true".
  4. Push for greater transparency in algorithms.
  5. Close the "internet blind spot" in regulating political campaigning. 

Author : Lynsey Barber

Source : http://www.cityam.com/260739/three-major-problems-modern-internet-keeping-tim-berners

Categorized in Science & Tech
