Barbara Larson

Android malware is a serious problem that can cause you all kinds of trouble if you're not paying attention to what you install on your device. Even apps that come from the Google Play store can sometimes contain malware, and researchers have now described a new class of attacks that would let hackers take control of an Android device without the user even knowing it.

Described as a “Cloak and Dagger” attack by researchers from UC Santa Barbara and Georgia Tech, the malware would let a malicious app gain complete control of an Android phone or tablet. The user, meanwhile, would not suspect anything, and the malware would even be able to perform tasks with the screen turned off.

“These attacks only require two permissions that, in case the app is installed from the Play Store, the user does not need to explicitly grant and for which [the user] is not even notified,” the researchers explained. “The possible attacks include advanced clickjacking, unconstrained keystroke recording, stealthy phishing, the silent installation of a God-mode app (with all permissions enabled), and silent phone unlocking + arbitrary actions (while keeping the screen off).”
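
For context, the two permissions the paper abuses are Android's SYSTEM_ALERT_WINDOW ("draw on top") and BIND_ACCESSIBILITY_SERVICE (accessibility). The sketch below is a minimal, hypothetical triage check, not the researchers' tooling: it simply flags an app whose manifest requests both.

```python
# Toy triage check for the "Cloak and Dagger" permission pair.
# Illustrative sketch only -- not the researchers' detection tool.

RISKY_PAIR = {
    "android.permission.SYSTEM_ALERT_WINDOW",        # draw over other apps
    "android.permission.BIND_ACCESSIBILITY_SERVICE", # accessibility service
}

def flags_cloak_and_dagger(requested_permissions):
    """Return True if an app requests both the overlay and accessibility powers."""
    return RISKY_PAIR.issubset(requested_permissions)

# Example: permissions as they might appear in an app's AndroidManifest.xml
manifest_permissions = {
    "android.permission.INTERNET",
    "android.permission.SYSTEM_ALERT_WINDOW",
    "android.permission.BIND_ACCESSIBILITY_SERVICE",
}
print(flags_cloak_and_dagger(manifest_permissions))  # True -> worth a closer look
```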

All Android versions to date, up to and including the latest stable release, Android 7.1.2, are vulnerable to this type of attack, according to the researchers.

Hackers exploiting these vulnerabilities would be able to record everything you type on the phone, including passwords and private messages. They would be able to steal PINs, unlock the device while keeping the screen off, and even steal two-factor authentication tokens.

Google is aware of the issue and is working on a fix. But it’s unclear when fixes might be made available, or whether the patches will be applied to older versions of Android.

“We’ve been in close touch with the researchers and, as always, we appreciate their efforts to help keep our users safer,” a spokesperson told Engadget. “We have updated Google Play Protect — our security services on all Android devices with Google Play — to detect and prevent the installation of these apps. Prior to this report, we had already built new security protections into Android O that will further strengthen our protection from these issues, moving forward.”

The researchers have published a full paper describing Cloak and Dagger, along with videos showing the various exploits in action.

Source: This article was published on bgr.com by Chris Smith

Christopher Mondini of ICANN, Christian Dawson of the i2Coalition, Shane Tews of the American Enterprise Institute, and Matt Perault of Facebook gather to discuss the Internet's vulnerabilities—and what we can do about them

Austin, Texas—If you've ever wondered whether there is a single point of failure that could take the entire Internet down in one fell swoop, rest assured: experts (at least, experts participating in a panel discussion here at South by Southwest) say there is no such thing.

But even though the Internet does not possess a “kill switch”, so to speak, it does have plenty of vulnerabilities and limitations. Outages are commonplace, security is a perennial struggle, and governments can (and do) shut down access to specific services or even the entire Internet. 

On Friday, the panel highlighted a range of limitations that still plague this network of networks and called for technically-minded people to participate in organizations such as ICANN to help improve resiliency, security, and connectivity. 

Two recent events highlight some of the issues we face. In October, a botnet disrupted access to a range of services, including Twitter and Netflix, through distributed denial-of-service (DDoS) attacks on the managed Domain Name System (DNS) service of internet infrastructure company Dyn. And just last week, an incorrectly entered command took down a number of Amazon's widely used AWS services.

As our networks grow with the Internet of Things, we could find ourselves more vulnerable to DDoS attacks, said Shane Tews of the American Enterprise Institute. "If we don't have a good concept of how we're going to be able to manage that traffic for good or for bad, that has the ability of being Dyn 100x," Tews said. "There are people who do think that it was really just a shot across the bow as to what could be coming."

Tews noted that there are changes afoot to try to make the system more secure. In 2010, the Domain Name System Security Extensions system was deployed. “The idea of domain name security…is basically to put a lock on a system that’s always been wide open,” Tews said, a holdover from the early days of the Internet when everyone knew one another. The master key for this system will be “rolled over”, or changed, for the first time in October.
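
A concrete way to see DNSSEC at work: a validating resolver sets the "ad" (authenticated data) flag on answers whose signature chain checks out. The following is a minimal sketch, assuming the standard dig utility is installed and using Google's validating resolver at 8.8.8.8; the domain is just an example.

```python
# Minimal DNSSEC spot check: query a validating resolver and look for
# the "ad" (authenticated data) flag in dig's response header.
import subprocess

def dnssec_validated(domain, resolver="8.8.8.8"):
    out = subprocess.run(
        ["dig", f"@{resolver}", "+dnssec", domain, "A"],
        capture_output=True, text=True, check=True,
    ).stdout
    # dig prints a header line like ";; flags: qr rd ra ad; QUERY: 1, ..."
    return any("flags:" in line and " ad" in line for line in out.splitlines())

print(dnssec_validated("internetsociety.org"))  # True if the chain validates
```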

“There are certain components of internet infrastructure that are not as resilient as others,” said Christian Dawson, executive director of the Internet Infrastructure Coalition, or i2Coalition. But he said challenges like attacks only serve to make the system more robust: “I don’t think it’s getting more risky. I think when people figure out how to push the right buttons to bring certain components down, it just makes us better at...realizing that taking the steps to get more resilience are necessary.”

The greatest stresses to the system are policies, not technology, he said: "My issue always comes down to the people, and that's why we're heavily involved in internet governance issues, making sure the right people are at the table so that people don't make the wrong decisions because they don't have the right information."

“Almost all the challenges to the Internet are human,” Tews said, highlighting a YouTube censorship order in Pakistan in 2008 that ended up stretching well past the country’s borders. 

The idea of vulnerability at the human level was echoed by Matt Perault, head of global policy development at Facebook. Perault started at the company the week that Egypt turned off access to the Internet for five days.

Recently, he said, such large-scale shutdowns seem to be less frequent, but in their place are smaller-scale “blocks”, such as shutting down access in a particular region of India while students there are taking exams.

This sort of interference doesn't garner the headlines that Egypt's shutdown did. But it adds up. A report from the Brookings Institution last year, which highlighted 81 short-term shutdowns between July 1, 2015, and June 30, 2016, concluded that the outages together cost upwards of US $2.4 billion in gross domestic product.

“My main concern right now is [that we are] moving toward a world where there [are] increasingly sophisticated small-scale blocks,” Perault said. “I would assume that the thing we would be most scared of would be a government being able to turn off your access to one particular product for 15 minutes. Because the ability to do that on a large scale might impose enough friction into your ability to access certain services that it would change your relationship to the Internet, and it would be very hard to diagnose.”

Countries that do not make access to the Internet a priority are also a limitation, Tews said. “Even though it’s not a kill switch, it’s certainly a killjoy.”

Then there is the power sovereign countries can exercise to hinder, prevent, or monitor the exchange of information. “There’s been a lot of talk of Internet fragmentation,” said Christopher Mondini of ICANN. But he said even restrictive countries and governments agree “that the Internet should remain an interconnected system with just one addressing system, and that a platform of ICANN and its multi-stakeholder discussion should be supported to maintain the interconnectivity of all of the networks.”

Mondini said there are a number of organizations, such as the Internet Society and its Internet Engineering Task Force as well as network operator groups, by which individuals can participate in setting the Internet’s path going forward: “You can find your people, and you can get involved and shape the future of the Internet, which is pretty exciting.”


Source: This article was published on spectrum.ieee.org by Rachel Courtland

Image credit: Crowd of small symbolic 3d figures linked by lines by higyou via Shutterstock.com

Fermat, a collaborative, open-source technology project, has announced the launch of the Internet of People (IoP) Consortium, an initiative aimed at boosting academic research and encouraging university-led pilot projects related to the "person-to-person economy."

The IoP is meant to allow people to hold direct control and ownership of their data and digital footprint. The project seeks to develop and provide individuals with the tools to freely interact electronically, both for social and commercial purposes, “without unnecessary third party interferences.”

The newly formed consortium will provide opportunities to universities and research institutions to develop and participate in innovative projects in that field. Current members include ELTE, Infota, Virtual Planet and Cyber Services PLC.

First pilot project

In March, the consortium launched its first pilot project through a research lab at ELTE, the largest and one of the most prestigious universities in Hungary, in cooperation with the EIT Digital Internet of Things Open Innovation Lab.

Focusing on the shipping industry, the pilot project found that with disintermediating technology, multinational companies in a wide range of verticals can significantly increase effectiveness and reduce costs. Technology that removes unnecessary intermediaries and creates a decentralized system improves privacy for both senders and receivers, allows on-demand contractors to better monitor failure situations, and helps smaller shipping companies enter the market.

“Our first project has already delivered important findings on the power of IoP technology,” Csendes said. “Though the study focused on the shipping industry, the technology developed could improve the logistics industry as a whole.”

The Internet of People

Fermat's Internet of People project

Fermat, an organization based in Switzerland, is in charge of building the decentralized infrastructure for the IoP, which includes an open social graph, a direct peer-to-peer access channel to individual people, and a direct device-to-device communication layer.

The IoP is intended to be an information space where people's profiles are identified by a public key and connected to one another by relationship links. Profiles can be accessed via the Internet.

The project aims to empower people by giving them the freedom to manage their own privacy online and to protect themselves from spying, censorship, and data mining by establishing direct person-to-person interactions.

Speaking to CoinJournal, Fermat founder Luis Molina explained:

“The information on the Internet of People is controlled by end users using their profile private key, in the same way they control their bitcoin balances using their bitcoin private keys. This means that only them can modify the information of their profiles and the relationship with others profiles as well.”

Similarly to Facebook, an individual is able to configure the privacy level of his or her profile and choose which information is public.

"A profile uploaded to the IoP does not mean that everyone can access all the information on it," Molina said.

“The main difference is that when you upload your info to Facebook, Facebook is in control and they monetize your information using it for their own profit. On the other hand the Internet of People allows you to sell pieces of your private data or digital footprint on a global marketplace to whoever you choose and as many times you want, even the same piece of data.”

The IoP uses a new type of cryptographically secured data structure called the graphchain. The main difference between a graphchain and a blockchain is that the former is a cryptographically secured data structure in which no blocks or transactions have to be stored.
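
To make the pattern concrete, here is a minimal sketch, in Python with the cryptography library, of the key-controlled profiles Molina describes: a profile is identified by its public key, and only the holder of the matching private key can sign profile updates or relationship links. The field names are invented for illustration; this is not Fermat's actual graphchain code.

```python
# Sketch of key-controlled profiles: only the private-key holder can publish
# a relationship link; anyone holding the public key can verify it.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

alice_key = Ed25519PrivateKey.generate()
alice_id = alice_key.public_key()  # the profile is identified by this key

# A relationship link between profiles, signed by its owner.
link = json.dumps({"from": "alice", "to": "bob", "type": "friend"}).encode()
signature = alice_key.sign(link)  # only Alice's private key can produce this

# Any node can check the link against Alice's public profile key;
# verify() raises InvalidSignature if the link or signature was tampered with.
alice_id.verify(signature, link)
print("relationship link verified")
```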

According to Molina, Fermat’s graphchain technology enables a global mapping of everybody with verified proof of how they are related, and also people-to-people and company-to-people interactions without going through intermediaries.

Csendes said that the graphchain technology brings “endless business opportunities because of the additional network components and methodologies added on top of blockchain technology.”

“The IoP Consortium was formed in response to the need for concrete and developed use cases demonstrating this value,” he concluded.

Source: This article was published on coinjournal.net by Diana Ngo

Online images of King Maha Vajiralongkorn in a revealing cropped T-shirt have been blocked in Thailand.

The Internet is as American as it gets. Not just the hardware and software, but the mind-set. Many of those who built it are libertarian types — First Amendment fans who regard uncensored access to information as an unquestionably good thing, a natural right.

The trouble is, most of the world doesn’t think that way. And their pushback against the anything-goes attitude of the American Internet gets more aggressive by the month.

It's no surprise when China and Russia don't want to do things the American way. But our friends in Europe are also objecting, and that will test how far major US companies such as Google and Facebook would bend their principles to protect their profits.

Facebook, Google, and other Internet services for years have removed or blocked postings on a country-by-country basis. In Germany, for example, they block Nazi-related images, which are illegal there. Just the other day, Facebook began blocking embarrassing photos of the king of Thailand in that country, where it’s illegal to mock the monarch.

In the United States and other countries, we can still gape at images of King Maha Vajiralongkorn strolling through a mall wearing one of those revealing crop-top T-shirts. But in some countries, blocking publication just to their borders is not good enough. In Austria, for example, a judge in early May ordered Facebook to delete postings that described politician Eva Glawischnig as a “traitor” and a “corrupt tramp.” Pretty tame stuff by our country’s political standards, but the Austrian judge ruled that we in the United States and elsewhere shouldn’t be able to read them, either, ordering Facebook to erase the postings from its networks around the world.

Austria is, in effect, declaring that its hate-speech laws can be enforced globally, against any online entity. So if an Internet service publishes something nasty enough to offend an Austrian, the judge’s reasoning is that Austrian law should apply — regardless of where the content was published. And if Austria can bring Facebook to heel anywhere in the world, so can any other country. It’s easy to see why Facebook is digging in for a hard fight.

The news didn’t shock Harvard law professor Jonathan Zittrain, who foresaw this problem in a 2003 research paper. “I’m surprised it hasn’t happened sooner,” he said to me. Actually, it did, in France several years ago, in a still-pending case involving the European Union’s famed “right to be forgotten.”

Under this principle, any EU citizen can ask Internet search services to hide embarrassing facts about his past life, if those facts are no longer relevant. So if a Frenchman was arrested in a bar fight 30 years ago, he can demand that Google stop telling his fellow Europeans about it.

Anyway, that’s how Google thought the law would be applied. But in 2015, a French court held that the right to be forgotten applies worldwide and Google must delete that information from every one of its data centers around the world. Google is appealing.

The rationale behind the Austrian and French rulings is pretty understandable: A company in France, for example, looking for information on a potential hire can simply search the American version of Google, rather than the French edition. That renders the “right to be forgotten” mostly meaningless.

The French and Austrians hope to solve that problem by declaring their global sovereignty over entire social networks. If these rulings survive on appeal, would Facebook or Google stop doing business in these countries? Fat chance. They could cave in, but then they should probably expect a demand from a court in China to purge all references to the Tiananmen Square massacre of 1989.

Meanwhile in Germany, a new law requires social networks to delete within 24 hours any postings that violate that country’s hate-speech laws. These laws ban not only Nazi-related messages but any speech that insults or maligns people based on race, national origin, or religious belief. The penalty for noncompliance is a fine of up to $53 million per incident. No, that’s not a misprint.

Critics of the law say that to avoid massive fines, Internet companies will ban anything even vaguely controversial. It’s an ice bath for free speech.

So which will prevail: national sovereignty or Internet liberty? The side that has the courts, cops, and guns seems to be winning, for now. The human rights group Freedom House recently reported that Internet freedom is on the wane worldwide as a growing roster of countries crack down on social networks and instant-messaging apps.

What to do? I favor a Geneva Convention for Internet freedom, where countries would establish a global framework for regulating online content. As an American, I resent the notion that we even need such a treaty. But America can’t prevent Europe or Russia or China from regulating US Internet companies doing business on their turf. Our best option is to limit the reach of their regulations. It’s time to negotiate a pact that will prevent foreign laws from infecting American liberties.

Source: This article was published on bostonglobe.com by Hiawatha Bray

Let the speculation about the iPhone 9 (not the 7 Plus pictured above) begin!

Just when you thought we'd reached peak iPhone speculation, the rumor mill pops off and churns out a hot new leak to kickstart the chatter.

The leak is concerned with the handset Apple's dropping in 2018 — and no, that's not a potentially delayed version of the device currently being called the iPhone 8, which is still months away from even being officially revealed. 

We're talking iPhone rumors within the standard release cycle for a device slated for fall 2018, which we'll refer to as the iPhone 9 because why not. That next next iPhone could come with massive OLED screens, according to a report in the Korea Herald, which was spotted by Mac Rumors.

The rumor comes from supply chain sources claiming to have some insider knowledge of a new agreement between Apple and Samsung. The report claims the iPhone 9 is expected to come in two OLED-screened models, with giant 5.28-inch and 6.46-inch display sizes. 

Those screens would dwarf current displays: the standard iPhone 7 measures 4.7 inches, while the 7 Plus comes in at 5.5 inches. We can't say anything about the iPhone 9's design just yet, but the uptick in display area might come without a significant bump in the phones' casing size as Apple moves to an edge-to-edge design — for instance, the upcoming 8 is rumored to boast a 5.8-inch display in a profile of just over 5 inches.
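
Because screen area grows with the square of the diagonal, those numbers are more dramatic than they first look. A rough comparison, assuming a common 16:9 aspect ratio for all four panels (the rumored displays may well differ):

```python
# Rough screen-area comparison from diagonal sizes, assuming 16:9 panels.
# The rumored displays may use a different aspect ratio.
def area_sq_in(diagonal, w=16, h=9):
    return diagonal**2 * (w * h) / (w**2 + h**2)

for name, d in [("iPhone 7", 4.7), ("iPhone 7 Plus", 5.5),
                ("rumored small model", 5.28), ("rumored large model", 6.46)]:
    print(f'{name}: {d}" diagonal -> {area_sq_in(d):.1f} sq in')
# The 6.46-inch panel has nearly double the area of the 4.7-inch screen.
```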

Samsung signed on to provide the iPhone maker with OLED screens for its handsets, starting with this year's model, but the new deal could potentially more than double the number of units supplied to Apple, to 180 million.

For those of you who aren't satisfied looking just one measly year into Apple's future, no worries: We've already heard a little something about 2019's phones too, from the same sources. That supply chain leak claimed every iPhone will switch to an OLED display by then — and with the Samsung deals and a rumored secret OLED development lab in Taiwan, there's actually more smoke here than some of the less grounded speculations about this year's device.    

As always, though, there's no way to know what Apple will do for sure until the company tells us itself, so we'll have to wait until next year, and then the year after, to know if these massive OLED screens will really be coming to our future phones. 

Until then, we'll just have to stay occupied with this year's rumors. The "final form" of the iPhone 8 has supposedly come to light — but it's still a long time until September, and there will be plenty of new leaks and rumors along the way. 

Source: This article was published on mashable.com by Brett Williams

How Many Stars Are In The Universe?

A Hubble Space Telescope image of the distant universe. Credit: NASA.
 
Looking up into the night sky, it's challenging enough for an amateur astronomer to count the number of naked-eye stars that are visible. With bigger telescopes, more stars become visible, making counting impossible because of the amount of time it would take. So how do astronomers figure out how many stars are in the universe?

The first sticky part is trying to define what "universe" means, said David Kornreich, an assistant professor at Ithaca College in New York State. He was the founder of the "Ask An Astronomer" service at Cornell University.

"I don't know [the answer] because I don't know if the universe is infinitely large or not," he said. The observable universe appears to go back in time by about 13.7 billion light-years, but beyond what we could see there could be much, much more. Some astronomers also believe that we may live in a "multiverse" where there would be other universes like ours contained in some sort of larger entity.

The simplest answer may be to estimate the number of stars in a typical galaxy and then multiply that by the estimated number of galaxies in the universe. But even that is tricky: some galaxies are easier to see in visible light, for example, while others show up better in infrared. There are also estimation hurdles that must be overcome.

In October 2016, an article in Science (based on deep-field images from the Hubble Space Telescope) suggested that there are about 2 trillion galaxies in the observable universe, or about 10 times more galaxies than previously suggested. In an email with Live Science, lead author Christopher Conselice, a professor of astrophysics at the University of Nottingham in the United Kingdom, said there were about 100 million stars in the average galaxy.

Telescopes may not be able to view all the stars in a galaxy, however. A 2008 estimate by the Sloan Digital Sky Survey (which catalogs all the observable objects in a third of the sky) found about 48 million stars, roughly half of what astronomers expected to see. A star like our own sun may not even show up in such a catalog. So, many astronomers estimate the number of stars in a galaxy based on its mass — which has its own difficulties, since dark matter and galactic rotation must be filtered out before making an estimate.

Missions such as the Gaia mission, a European Space Agency space probe that launched in 2013, may provide further answers. Gaia aims to precisely map about 1 billion stars in the Milky Way. It builds on the previous Hipparcos mission, which precisely located 100,000 stars and also mapped 1 million stars to a lesser precision.

"Gaia will monitor each of its 1 billion target stars 70 times during a five-year period, precisely charting their positions, distances, movements and changes in brightness," ESA said on its website. "Combined, these measurements will build an unprecedented picture of the structure and evolution of our galaxy. Thanks to missions like these, we are one step closer to providing a more reliable estimate to that question asked so often: 'How many stars are there in the universe?'" 

Observable universe

Even if we narrow down the definition to the "observable" universe — what we can see — estimating the number of stars within it requires knowing just how big the universe is. The first complication is that the universe itself is expanding, and the second complication is that space-time is curved.

To take a simple example, light from the objects farthest away from us would take approximately 13.7 billion years to travel to Earth, taking into account that the very youngest objects would be shrouded because light couldn't travel freely in the early universe. So the radius of the observable universe should be 13.7 billion light-years, since light has only had that long to reach us.

Or should it? "It's a logical way to define distance, but not how a relativist defines distance," Kornreich said. A relativist would use a device such as a meter stick, measuring the distance along that device and then extending it as long as needed. 

This produces a different answer, which some sources put at 48 billion light-years in radius. Sources vary on this number, however. That's because space-time is curved: space keeps expanding while the light is in transit, which changes the distances being measured.
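
In math terms, the "meter stick" distance the relativist has in mind is the comoving distance, which adds up how much space has stretched along the light's path. A standard form of the expression, where c is the speed of light and H(z') is the Hubble parameter at redshift z':

```latex
D_C = c \int_0^{z} \frac{dz'}{H(z')}
```

Evaluated out to the edge of the observable universe with standard cosmological parameters, the integral gives a radius of roughly 46 billion light-years, which is why quoted figures cluster in the mid-to-high 40s rather than at 13.7.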

Galaxy observations

It's easier to count stars when they are inside galaxies, since that's where they tend to cluster. To even begin to estimate the number of stars, you would first need to estimate the number of galaxies and come up with some sort of an average.

Some estimates peg the Milky Way's stellar mass at 100 billion "solar masses," or 100 billion times the mass of the sun. Averaging out the types of stars within our galaxy, this would produce an answer of about 100 billion stars in the galaxy. This is subject to change, however, depending on how many stars are bigger and smaller than our own sun. Also, other estimates say the Milky Way could have 200 billion stars or more.

The number of galaxies is an astonishing number, however, as shown by some imaging experiments performed by the Hubble Space Telescope. Several times over the years, the telescope has pointed a detector at a tiny spot in the sky to count galaxies, performing the work again after the telescope was upgraded by astronauts during the shuttle era.

A 1995 exposure of a small spot in Ursa Major revealed about 3,000 faint galaxies. In 2003-4, using upgraded instruments, scientists looked at a smaller spot in the constellation Fornax and found 10,000 galaxies. An even more detailed investigation in Fornax in 2012, with even better instruments, showed about 5,500 galaxies.

Kornreich used a very rough estimate of 10 trillion galaxies in the universe. Multiplying that by the Milky Way's estimated 100 billion stars results in a large number indeed: 1,000,000,000,000,000,000,000,000 stars, or a "1" with 24 zeros after it. Kornreich emphasized that number is likely a gross underestimation, as more detailed looks at the universe will show even more galaxies.
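
The multiplication itself is easy to verify (10 trillion is 10^13 and 100 billion is 10^11):

```python
# Quick check of Kornreich's rough estimate:
# ~10 trillion galaxies times ~100 billion stars per galaxy.
galaxies = 10e12            # 10 trillion
stars_per_galaxy = 100e9    # 100 billion
print(f"{galaxies * stars_per_galaxy:.0e}")  # 1e+24 -- a "1" with 24 zeros
```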

Source: This article was published on space.com


Google has, perhaps more than any other company, realized that information is power. Information about the Internet, information about innumerable trends, and information about its users, YOU.

So how much does Google know about you and your online habits? It’s only when you sit down and actually start listing all of the various Google services you use on a regular basis that you begin to realize how much information you’re handing over to Google.

This has, as these things tend to do, given rise to various privacy concerns. It probably didn’t help when Google’s CEO, Eric Schmidt, recently went on the record saying: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”

Now let’s have a look at how Google is gathering information from you, and about you.

Google’s information-gathering channels

Google’s stated mission is “to organize the world’s information and make it universally accessible and useful” and it is making good on this promise. However, Google is gathering even more information than most of us realize.

  • Searches (web, images, news, blogs, etc.) – Google is, as you all know, the most popular search engine in the world with a market share of almost 70% (for example, 66% of searches in the US are made on Google). Google tracks all searches, and now with search becoming more and more personalized, this information is bound to grow increasingly detailed and user specific.
  • Clicks on search results – Not only does Google get information on what we search for, it also gets to find out which search results we click on.
  • Web crawling – Googlebot, Google’s web crawler, is a busy bee, continuously reading and indexing billions of web pages.
  • Website analytics – Google Analytics is by far the most popular website analytics package out there. Due to being free and still supporting a number of advanced features, it’s used by a large percentage of the world’s websites.
  • Ad serving – Adwords and Adsense are cornerstones of Google’s financial success, but they also provide Google with a lot of valuable data. Which ads are people clicking on, which keywords are advertisers bidding on, and which ones are worth the most? All of this is useful information.
  • Email – Gmail is one of the three largest email services in the world, together with competing options from Microsoft (Hotmail) and Yahoo. Email content, both sent and received, is parsed and analyzed. Even from a security standpoint this is a great service for Google. Google’s email security service, Postini, gets a huge amount of data about spam, malware and email security trends from the huge mass of Gmail users.
  • Twitter – “All your tweets are belong to us,” to paraphrase an early Internet meme. Google has direct access to all tweets that pass through Twitter after a deal made late last year.
  • Google Apps (Docs, Spreadsheets, Calendar, etc.) – Google’s office suite has many users and is of course a valuable data source to Google.
  • Google Public Profiles – Google encourages you to put a profile about yourself publicly on the Web, including where you can be found on social media sites and your homepage, etc.
  • Orkut – Google’s social network isn’t a success everywhere, but it’s huge in some parts of the world (mainly Brazil and India).
  • Google Public DNS – Google's newly launched DNS service doesn't just give people fast DNS lookups; it also helps Google, which gets a ton of statistics from it, such as which websites people access.
  • The Google Chrome browser – What is your web browsing behavior? What sites do you visit?
  • Google Finance – Aside from the finance data itself, what users search for and use on Google Finance is sure to be valuable data to Google.
  • YouTube – The world’s largest and most popular video site by far is, as you know, owned by Google. It gives Google a huge amount of information about its users’ viewing habits.
  • Google Translate – Helps Google perfect its natural language parsing and translation.
  • Google Books – Not huge for now, but has the potential to help Google figure out what people are reading and want to read.
  • Google Reader – By far the most popular feed reader in the world. What RSS feeds do you subscribe to? What blog posts do you read? Google will know.
  • Feedburner – Most blogs use Feedburner to publicize their RSS feeds, and every Feedburner link is tracked by Google.
  • Google Maps and Google Earth – What parts of the world are you interested in?
  • Your contact network – Your contacts in Google Talk, Gmail, etc, make up an intricate network of users. And if those also use Google, the network can be mapped even further. We don’t know if Google does this, but the data is there for the taking.
  • Coming soon – Chrome OS, Google Wave, more up-and-coming products from Google.

And the list could go on since there are even more Google products out there, but we think that by now you’ve gotten the gist of it… 

Much of this data is anonymized, but not always right away. Logs are kept for nine months, and cookies (for services that use them) aren’t anonymized until after 18 months. Even after that, the sheer amount of generic user data that Google has on its hands is a huge competitive advantage against most other companies, a veritable gold mine.

Google’s unstoppable data collection machine

There are many different aspects of Google's data collection. The IP addresses that requests are made from are logged, cookies are used for settings and tracking purposes, and if you are logged into your Google account, what you do on Google-owned sites can often be coupled to you personally, not just to your computer.

In short, if you use Google services, Google will know what you're searching for, what websites you visit, what news and blog posts you read, and more. As Google adds more services and its presence gets increasingly widespread, the so-called Googlization (a term coined by John Battelle and Alex Salkever in 2003) of almost everything continues.

The information you give to any single one of Google’s services wouldn’t be much to huff about. The really interesting dilemma comes when you use multiple Google services, and these days, who doesn’t?

Try using the Internet for a week without touching a single one of Google’s services. This means no YouTube, no Gmail, no Google Docs, no clicking on Feedburner links, no Google search, and so on. Strictly, you’d even have to skip services that Google partner with, so, sorry, no Twitter either.

This increasing Googlization is probably why some people won’t want to use Google’s Chrome OS, which will be strongly coupled with multiple Google services and most likely give Google an unprecedented amount of data about your habits.

Why does Google do this?

As we stated in the very first sentence of this article, information is power.

With all this information at its fingertips, Google can group data together in very useful ways. Not just per user or visitor, but Google can also examine trends and behaviors for entire cities or countries.

Google can use the information it collects for a wide array of useful things. In all of the various fields where Google is active, it can use this collected data to make market decisions, conduct research, and refine its products.

For example, if you can discover certain market trends early, you can react effectively to the market. You can discover what people are looking for, what people want, and make decisions based on those discoveries. This is of course extremely useful to a large company like Google.

And let’s not forget that Google earns much of its money serving ads. The more Google knows about you, the more effectively it will be able to serve ads to you, which has a direct effect on Google’s bottom line.

It’s not just Google

It should be mentioned that Google isn't alone in doing this kind of data collection. Rest assured that Microsoft is doing similar things with Bing and Hotmail, to name just one example.

The problem (if you want to call it a problem) with Google is that, like an octopus, its arms are starting to reach almost everywhere. Google has become so mixed up in so many aspects of our online lives that it is getting an unprecedented amount of information about our actions, behavior and affiliations online.

Google, an octopus?

Accessing Google’s data vault

To its credit, Google is making some of its enormous cache of data available to you as well via various services.

If Google can make that much data publicly available, just imagine the amount of data and the level of detail Google can get access to internally. And ironically, these services give Google even more data, such as what trends we are interested in, what sites we are trying to find information about, and so on.

An interesting observation when using these tools is that in many cases information can be found for everything except for Google’s own products. For example, Ad Planner and Trends for Websites don’t show site statistics for Google sites, but you can find information about any other sites.

No free lunch

Did you ever wonder why almost all of Google’s services are free of charge? Well, now you know. That old saying, “there ain’t no such thing as a free lunch,” still holds true. You may not be paying Google with dollars (aside from clicking on those Google ads), but you are paying with information. That doesn’t have to be a bad thing, but you should be aware of it.

Source: This article was published on royal.pingdom.com

If you believe in ghosts, you're not alone. Cultures all around the world believe in spirits that survive death to live in another realm. In fact, ghosts are among the most widely believed of paranormal phenomena: Millions of people are interested in ghosts, and a 2013 Harris Poll found that 43 percent of Americans believe in ghosts.

The idea that the dead remain with us in spirit is an ancient one, appearing in countless stories, from the Bible to "Macbeth." It even spawned a folklore genre: ghost stories. Belief in ghosts is part of a larger web of related paranormal beliefs, including near-death experience, life after death, and spirit communication. The belief offers many people comfort — who doesn't want to believe that our beloved but deceased family members are looking out for us, or are with us in our times of need?

People have tried to (or claimed to) communicate with spirits for ages; in Victorian England, for example, it was fashionable for upper-crust ladies to hold séances in their parlors after tea and crumpets with friends. Ghost clubs dedicated to searching for ghostly evidence formed at prestigious universities, including Cambridge and Oxford, and in 1882 the most prominent organization, the Society for Psychical Research, was established. A woman named Eleanor Sidgwick was an investigator (and later president) of that group, and could be considered the original female ghostbuster. In America during the late 1800s, many psychic mediums claimed to speak to the dead — but were later exposed as frauds by skeptical investigators such as Harry Houdini. 

It wasn't until recently that ghost hunting became a widespread interest around the world. Much of this is due to the hit Syfy cable TV series "Ghost Hunters," now in its second decade of not finding good evidence for ghosts. The show spawned dozens of spinoffs and imitators, and it's not hard to see why the show is so popular: the premise is that anyone can look for ghosts. The two original stars were ordinary guys (plumbers, in fact) who decided to look for evidence of spirits. Their message: You don't need to be an egghead scientist, or even have any training in science or investigation. All you need is some free time, a dark place, and maybe a few gadgets from an electronics store. If you look long enough any unexplained light or noise might be evidence of ghosts.

The science and logic of ghosts

One difficulty in scientifically evaluating ghosts is that a surprisingly wide variety of phenomena are attributed to ghosts, from a door closing on its own, to missing keys, to a cold area in a hallway, to a vision of a dead relative. When sociologists Dennis and Michele Waskul interviewed ghost experiencers for their 2016 book "Ghostly Encounters: The Hauntings of Everyday Life" (Temple University Press) they found that "many participants were not sure that they had encountered a ghost and remained uncertain that such phenomena were even possible, simply because they did not see something that approximated the conventional image of a 'ghost.' Instead, many of our respondents were simply convinced that they had experienced something uncanny — something inexplicable, extraordinary, mysterious, or eerie." Thus, many people who go on record as claiming to have had a ghostly experience didn't necessarily see anything that most people would recognize as a classic "ghost," and in fact they may have had completely different experiences whose only common factor is that it could not be readily explained. 

Personal experience is one thing, but scientific evidence is another matter. Part of the difficulty in investigating ghosts is that there is not one universally agreed-upon definition of what a ghost is. Some believe that they are spirits of the dead who for whatever reason get "lost" on their way to The Other Side; others claim that ghosts are instead telepathic entities projected into the world from our minds.

Still others create their own special categories for different types of ghosts, such as poltergeists, residual hauntings, intelligent spirits and shadow people. Of course, it's all made up, like speculating on the different races of fairies or dragons: there are as many types of ghosts as you want there to be.

There are many contradictions inherent in ideas about ghosts. For example, are ghosts material or not? Either they can move through solid objects without disturbing them, or they can slam doors shut and throw objects across the room. According to logic and the laws of physics, it's one or the other. If ghosts are human souls, why do they appear clothed and with (presumably soulless) inanimate objects like hats, canes, and dresses — not to mention the many reports of ghost trains, cars and carriages?

If ghosts are the spirits of those whose deaths were unavenged, why are there unsolved murders? Ghosts are said to communicate with psychic mediums, after all, and should be able to identify their killers for the police. And so on — just about any claim about ghosts raises logical reasons to doubt it.

Ghost hunters use many creative (and dubious) methods to detect the spirits' presences, often including psychics. Virtually all ghost hunters claim to be scientific, and most give that appearance because they use high-tech scientific equipment such as Geiger counters, Electromagnetic Field (EMF) detectors, ion detectors, infrared cameras and sensitive microphones. Yet none of this equipment has ever been shown to actually detect ghosts. For centuries, people believed that flames turned blue in the presence of ghosts. Today, few people accept that bit of lore, but it's likely that many of the signs taken as evidence by today's ghost hunters will be seen as just as wrong and antiquated centuries from now. 

Other researchers claim that the reason ghosts haven't been proven to exist is that we simply don't have the right technology to find or detect the spirit world. But this, too, can't be correct: Either ghosts exist and appear in our ordinary physical world (and can therefore be detected and recorded in photographs, film, video and audio recordings), or they don't. If ghosts exist and can be scientifically detected or recorded, then we should find hard evidence of that — yet we don't. If ghosts exist but cannot be scientifically detected or recorded, then all the photos, videos, audio and other recordings claimed to be evidence of ghosts cannot be ghosts. With so many basic contradictory theories — and so little science brought to bear on the topic — it's not surprising that despite the efforts of thousands of ghost hunters on television and elsewhere for decades, not a single piece of hard evidence of ghosts has been found.

And, of course, with the recent development of "ghost apps" for smartphones, it's easier than ever to create seemingly spooky images and share them on social media, making separating fact from fiction even more difficult for ghost researchers. 

Why many believe

Most people who believe in ghosts do so because of some personal experience; they grew up in a home where the existence of (friendly) spirits was taken for granted, for example, or they had some unnerving experience on a ghost tour or local haunt. However, many people believe that support for the existence of ghosts can be found in no less a hard science than modern physics. It is widely claimed that Albert Einstein suggested a scientific basis for the reality of ghosts, based on the First Law of Thermodynamics: if energy cannot be created or destroyed but only changes form, what happens to our body's energy when we die? Could it somehow be manifested as a ghost?

Carol Anne: Hello? What do you look like? Talk louder, I can't hear you! Poltergeist helped define a paranormal culture in the United States.

It seems like a reasonable assumption — unless you understand basic physics. The answer is very simple, and not at all mysterious. After a person dies, the energy in his or her body goes where all organisms' energy goes after death: into the environment. The energy is released in the form of heat, and the body's matter is transferred into the animals that eat us (i.e., wild animals if we are left unburied, or worms and bacteria if we are interred) and into the plants that absorb us. There is no bodily "energy" that survives death to be detected with popular ghost-hunting devices.

While amateur ghost hunters like to imagine themselves on the cutting edge of ghost research, they are really engaging in what folklorists call ostension or legend tripping. It's basically a form of playacting in which people "act out" a legend, often involving ghosts or supernatural elements. In his book "Aliens, Ghosts, and Cults: Legends We Live" (University Press of Mississippi, 2003) folklorist Bill Ellis points out that ghost hunters themselves often take the search seriously and "venture out to challenge supernatural beings, confront them in consciously dramatized form, then return to safety. ... The stated purpose of such activities is not entertainment but a sincere effort to test and define boundaries of the 'real' world."

If ghosts are real, and are some sort of as-yet-unknown energy or entity, then their existence will (like all other scientific discoveries) be discovered and verified by scientists through controlled experiments — not by weekend ghost hunters wandering around abandoned houses in the dark late at night with cameras and flashlights.

In the end (and despite mountains of ambiguous photos, sounds, and videos) the evidence for ghosts is no better today than it was a year ago, a decade ago, or a century ago. There are two possible reasons for the failure of ghost hunters to find good evidence. The first is that ghosts don't exist, and that reports of ghosts can be explained by psychology, misperceptions, mistakes and hoaxes. The second option is that ghosts do exist, but that ghost hunters are simply incompetent and need to bring more science to the search. 

Ultimately, ghost hunting is not about the evidence (if it was, the search would have been abandoned long ago). Instead, it's about having fun with friends, telling stories, and the enjoyment of pretending they are searching the edge of the unknown. After all, everyone loves a good ghost story.

Additional resources

  • The Committee for Skeptical Inquiry promotes scientific inquiry, critical investigation and the use of reason in examining controversial and extraordinary claims.
  • Experiments suggest that children can distinguish fantasy from reality, but are tempted to believe in the existence of imaginary creatures, according to an article published in the British Journal of Developmental Psychology.

Source: This article was published on livescience.com by Benjamin Radford

A team of researchers from the University of British Columbia, Canada, has proposed a radical new theory about the expansion of the universe. Scientists do not know exactly why the universe is expanding at an ever-accelerating pace, but the most popular theory is that this growth is being driven by dark energy, the theoretical force thought to make up 68 percent of the universe.

However, the University of British Columbia researchers have another theory—that quantum fluctuations of vacuum energy are responsible.

Scientists discovered in 1998 that the universe is expanding at an accelerating rate. Researchers analyzed the light from supernovae (exploding stars) and discovered the supernovae were moving away from each other at an ever-increasing speed. They concluded the universe must be expanding at an accelerating rate.

The discovery led to the widely accepted theory that the universe is filled with dark energy, which is constantly pushing matter further and further away.

But this comes with problems. At present, there is a disconnect between our two best theories to explain the universe—quantum mechanics and Einstein’s theory of general relativity. What we see on a quantum level cannot be explained by general relativity.

When we apply quantum mechanics to vacuum energy that exists throughout the universe it results in a huge density of energy and, because general relativity says this energy would have a strong gravitational effect, it would likely result in the universe exploding.
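
This is the well-known cosmological constant problem. As a back-of-the-envelope illustration, a quantum-field-theory estimate that cuts the vacuum modes off at the Planck scale predicts an energy density wildly larger than the dark-energy density actually observed:

```latex
\rho_{\mathrm{vac}}^{\mathrm{QFT}} \sim \frac{c^7}{\hbar G^2} \approx 10^{113}\ \mathrm{J/m^3}
\qquad \text{vs.} \qquad
\rho_{\Lambda}^{\mathrm{obs}} \approx 10^{-9}\ \mathrm{J/m^3}
```

That mismatch of roughly 120 orders of magnitude is why naively combining the two theories seems to predict a universe that should have blown itself apart.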

But the universe is still here, and expanding at a relatively slow rate. In a study published in the journal Physical Review D, Qingdi Wang and colleagues sought to solve this problem.

If we looked at the universe extremely close up, we would see space and time constantly fluctuating—rather than being static, they are always moving. In its study, the team shows how previous models failed to take into account that the universe is constantly moving.

They use the large density of vacuum energy but look at it on the quantum scale. In this picture, space is fluctuating wildly, expanding and contracting all the time. However, there is a very slight imbalance in these oscillations, meaning space always expands slightly more than it contracts. The result? A universe that is expanding at an ever-accelerating rate.

The Crab Nebula, an expanding remnant of a star's supernova explosion. Scientists used supernovae to discover that the universe is expanding at an accelerating pace. NASA, ESA, J. HESTER (ARIZONA STATE UNIVERSITY)

“This result suggests that there is no necessity to introduce the cosmological constant…or other forms of dark energy, which are required to have peculiar negative pressure, to explain the observed accelerating expansion of the universe,” they conclude.

In an email interview with Newsweek, study co-author Bill Unruh says people will probably see the paper as controversial because it goes against the most widely accepted theory. “But we have been surprised by the positive reception it has received from the referee, the journal, and from the few people who have written to us,” he says. “I myself was very skeptical for a long time, but believe it deserves its chance out there in the marketplace of ideas.”

He says they hope to be able to test the theory by eventually finding the actual fluctuations of energy, or the local fluctuations in the expansion of the universe. However, he says this is “way beyond us at present so one would have to do indirect tests. And knowing what those are requires much more development of the theory. We are but on the first steps.”

In terms of what it could mean for our understanding of the cosmos, Unruh says it means the universe's accelerating expansion will continue "forever."

“We will never run out of those vacuum fluctuations of energy,” he says. “There may be some limit once the Hubble constant [related to the rate of expansion] becomes large enough, but that needs looking into. But since the immediate timescale is tens of billions of years, this is not a current worry.”

Source: This article was published on newsweek.com by Hannah Osborne


Spanish director Pedro Almodovar has been at the centre of a generational clash at Cannes film festival over the future of cinema (AFP Photo/Valery HACHE)

Cannes (France) (AFP) - The storm over whether Netflix films should be shown at the Cannes film festival has been billed as a generational clash that calls into question the future of cinema.

That the US streaming giant's movie "Okja" was greeted both by booing and cheering at its premiere Friday showed how divided film-makers are about the new cash-rich kid on the Hollywood block.

On one side are traditionalists who want to preserve the "immersive experience" of seeing movies on the big screen, and on the other, young millennials who have enthusiastically embraced streaming.

Critics adored "Okja," an adventure story of a girl who tries to rescue a giant genetically modified pig from its ruthless creators, but Variety said it really "belonged on the big screen," while The Guardian's Peter Bradshaw said "it's a terrible waste to shrink them to an iPad."

Before a single movie had been shown, the head of the Cannes jury, Pedro Almodovar, declared that "he could not imagine" either "Okja" or the other Netflix film "The Meyerowitz Stories" winning anything.

"For as long as I live I will fight to safeguard the hypnotic power of the big screen," he told reporters.

While Almodovar backtracked Friday, promising scrupulous fairness, there was no hiding his irritation at Netflix's refusal to take their two films in the running for the top prize, the Palme d'Or, to French cinemas.

Netflix claims "the establishment is closing ranks against us" and its supporters rail against French rules which prevent it from streaming films until three years after they are released in cinemas there.

- Kids still love cinema -

Hollywood stars at Cannes jumped to Netflix's defence, with Will Smith -- who sits on the jury with Almodovar -- warning he would "slam my hand on the table and disagree with Pedro. I'm looking forward to a good jury scandal."

He insisted Netflix has opened up young people to independent films, saying: "In my house Netflix has been nothing but an absolute benefit because my children get to watch films they never would have seen."

That view was confirmed by two young American Netflix fans at Cannes, who told AFP that it hadn't dimmed their love for the big screen.

For horror films in particular the cinema was infinitely superior, said Kelly Greer, a 24-year-old student from Nashville.

"You're with a crowd of people and you want to respond in the same way," she said.

Her friend Myah Lipscomb, 26, said, "I love Netflix. But I don't go to the cinema less."

That said, if Netflix and its rival Amazon make more and more movies, she admitted she'd be increasingly tempted to stay at home.

Most French producers believe their country's streaming rules should be relaxed, although many insist films should be shown in cinemas first.

However, Vincent Maraval, who produced "The Wrestler" -- which got actor Mickey Rourke an Oscar nomination -- argued that "we must not impose our way of watching films on the next generation" who often watch on tablets and smartphones.

- Amazon wins fans -

Yet even for Netflix star Robin Wright, who acts and directs in its flagship series "House of Cards", that is anathema.

"I think it's actually really poor for people to watch films on their phones," she said at Cannes. "It is not fair to film-makers."

"We (Netflix) are getting criticised right now because we have never had this medium before," she added.

"But the movie theatre will forever be the first choice for films."

"Okja" star Tilda Swinton, who found herself in the eye of the Cannes storm, said it was clear that "an enormous and really interesting conversation was beginning... the truth is there is room for everybody."

Co-star Jake Gyllenhaal pleaded for film-makers to embrace changing technology. "I think it's truly a blessing when any art gets to reach one person, let alone hundreds of thousands upon millions of people."

While Netflix divides, its archrival Amazon has not stirred the same ire.

Its films are shown in cinemas before they go online and in France they can be streamed individually four months after release -- it's Netflix's subscription-only model that falls foul of existing rules.

Todd Haynes, the "Carol" director whose new Amazon-backed movie "Wonderstruck" is also in competition at Cannes, was effusive about working with a platform "made up of true cineastes who love movies and really want to try to provide an opportunity for independent film visions".

"They love cinema," he said.

Source: This article was published on ca.news.yahoo.com by Fiachra Gibbons
