Barbara Larson

Sometimes it occurs when a person survives a nearly fatal accident or life-threatening situation. In other cases, the person is born with a developmental disorder, such as autism. But a slim margin of each group develops remarkable capabilities, such as picturing advanced mathematical figures in one's head, having perfect recall, or drawing whole cityscapes from memory alone. This is known as savant syndrome. Of course, it's exceedingly rare. But how does it work? And do we all hide spectacular capabilities deep within our brains?

In 2002, 31-year-old Jason Padgett, a community college dropout and self-described “goof,” was mugged outside of a karaoke bar. Two men knocked him down and kicked him in the back of the head repeatedly, leaving him unconscious. Padgett was checked out and sent home from the hospital that same night.

He'd suffered a serious concussion but didn't know it until the next morning, when he noticed something peculiar. Upon entering the bathroom and turning on the faucet, he saw "lines emanating out perpendicularly from the flow." He couldn't believe it.

"At first, I was startled, and worried for myself, but it was so beautiful that I just stood in my slippers and stared," he recalled. It was like "watching a slow-motion film." He soon realized that he could see geometric shapes and fractals (irregular patterns that repeat themselves) in everything. "It's just really beautiful," he said.

Padgett began to find that he could intuitively understand the mathematical nature of everything around him. Before the attack, he had never progressed beyond pre-algebra. Afterward, he became infatuated with fractals and pi. His perception had completely changed, and he soon grew obsessed with the shapes he found throughout his house.

In his memoir, Struck by Genius: How a Brain Injury Made Me a Mathematical Marvel, Padgett writes, "I noticed the light bouncing off a car window in the form of an arc, and the concept came to life. It clicked for me because the circle I saw was subdivided by light rays, and I realized each ray was really a representation of pi."

Freehand fractal drawing created by Jason Padgett. Wikimedia Commons

He soon locked himself away and began drawing precise, beautiful geometric figures for days and sometimes weeks at a time. Padgett is one of the few people on Earth who can draw fractals accurately freehand. He also became a germaphobe, and rather than seeing his new abilities as a gift, he began to wonder whether he was mentally ill.

He'd acquired an exceedingly rare condition. Only about 70 people in the world have so far been identified with acquired savant syndrome. Savant syndrome arises in one of two ways: through an injury that damages the brain, or through a developmental disorder such as autism.

Most of us know the autistic savant from the 1988 hit movie Rain Man, in which the main character, played by Dustin Hoffman, instantly counts a large number of toothpicks spilled onto the floor. It's estimated that around 50% of those with savant syndrome are autistic.

The other 50% of cases stem from an injury to the central nervous system or another developmental disorder. Some researchers believe at least 10% of those with autism have some form of savant-like talent. Acquired savant syndrome is far rarer.

World renowned savant Daniel Tammet. Getty Images.

Things changed for Padgett after he saw a BBC documentary about Daniel Tammet. The British autistic savant can recite pi to the 22,514th digit, speaks 10 languages (two of which he invented himself), and performs intricate mathematical calculations in his head at lightning speed.

He's also a synesthete, meaning he experiences numbers not only visually but as colors and geometric figures as well. (Synesthesia is the blending of the senses: certain letters may have corresponding colors or flavors, for instance, and some people claim to smell music in addition to hearing it. Synesthesia occurs in a variety of ways and differs widely from one person to the next.)

Other famous savants include British-born Stephen Wiltshire, who can draw panoramic cityscapes accurately from memory; Dr. Anthony Cicoria, an orthopedic surgeon from New York who could suddenly play the piano after being struck by lightning; and Alonzo Clemons, who, after falling on his head as a child, can sculpt any animal from memory, down to the minutest detail.

Padgett soon contacted psychiatrist Dr. Darold Treffert, who’s been studying savant syndrome for over 50 years. “The most common ability to emerge is art, followed by music,” Treffert told The Guardian. “But I’ve had cases where brain damage makes people suddenly interested in dance, or in Pinball Wizard.”

In 2011, Padgett underwent an fMRI scan along with transcranial magnetic stimulation (TMS). The tests revealed that the left side of his brain had become more active, while the right side had become far less so. Dr. Treffert contends that savant syndrome has to do with neuroplasticity, the brain's remarkable ability to repair and rewire itself. Studies have shown that those with acquired savant syndrome often sustained damage to the front of the left temporal lobe.

According to Dr. Treffert,

“Following an injury to the brain, there’s recruitment of undamaged cortex from elsewhere in the brain, then there’s rewiring to that undamaged area, and a release of dormant potential. It’s a compensatory mechanism involving areas that may have been dormant, or areas that are ‘stolen’ and their function changed.”

Scientists began to wonder: what if they could induce such conditions deliberately? Would it trigger a savant-like state in a subject?

Scientists have developed a “thinking cap,” which can increase cognitive abilities. Getty Images.

Neuroscientist Allan Snyder at the University of Sydney, Australia, is doing exactly that with what he calls the "thinking cap." It consists of a rubber strap with two conductors, fastened around the head, that delivers low levels of electrical current to temporarily suppress activity in a targeted part of the brain. Tests on subjects have shown it induces savant-like skills, including improved memory, better attention to detail, more creativity, better proofreading, and even better problem-solving. These capabilities fade about an hour after the cap is removed.

We are just now entering a golden age of neuroscience, according to neurobiologist Nicholas Spitzer at the University of California, San Diego. Initiatives to unearth the secrets of the human brain are ongoing at numerous institutions around the world. Prof. Spitzer predicts that in the coming decades, besides advancements in imaging technology, we’ll be able to send nanobots inside the brain that are so small, they’ll be able to enter and travel along neurons. Moreover, they’ll be able to communicate back their findings through Wi-Fi.

We’ll also be able to attach implants that can hook our brains up to computers and Wi-Fi, giving us instantaneous knowledge, and the ability to control devices with our minds. It may also be possible to identify and stimulate innate capabilities and in this way, awaken one’s latent savant. First, we’ll have to figure out if everyone has the ability to become a savant or not. And if so, what’s required on a neurological basis to get us there.


Source: bigthink.com

NASA will unveil new discoveries this week that involve alien oceans in Earth's solar system, space agency officials announced.

On Thursday, April 13, NASA will hold a press conference that will "discuss new results about ocean worlds in our solar system," according to a press release from the agency. The discovery will involve findings from the Hubble Space Telescope and NASA's Cassini spacecraft, which is orbiting Saturn. 

"These new discoveries will help inform ocean world exploration — including NASA's upcoming Europa Clipper mission planned for launch in the 2020s — and the broader search for life beyond Earth," NASA officials wrote in the same press release.

NASA's ocean worlds press conference will begin at 2 p.m. EDT (1800 GMT) on Thursday and include a question-and-answer session with a panel of scientists from the Cassini and Hubble missions, as well as NASA's planetary exploration and science directorates. 

Those speakers include:

  • Thomas Zurbuchen, associate administrator, Science Mission Directorate at NASA Headquarters in Washington; 
  • Jim Green, director, Planetary Science Division at NASA Headquarters;
  • Mary Voytek, astrobiology senior scientist at NASA Headquarters;
  • Linda Spilker, Cassini project scientist at NASA’s Jet Propulsion Laboratory in Pasadena, California;
  • Hunter Waite, Cassini Ion and Neutral Mass Spectrometer team lead at the Southwest Research Institute (SwRI) in San Antonio;
  • Chris Glein, Cassini INMS team associate at SwRI;
  • William Sparks, astronomer with the Space Telescope Science Institute in Baltimore.

"Members of the public also can ask questions during the briefing using #AskNASA," NASA officials wrote. 

NASA's Cassini spacecraft has been orbiting Saturn since 2004 to make detailed observations of the ringed planet and its many moons. The spacecraft is scheduled to end its mission on Sept. 15 with a fiery plunge into Saturn itself to avoid contaminating the planet's icy moons, NASA officials have said. 

The Hubble Space Telescope, meanwhile, has been in orbit around Earth since 1990 and has captured spectacular images of the universe, including of some solar system planets, during its mission. Last week, NASA unveiled amazing new images of Jupiter as seen by Hubble as the giant planet approached opposition (its closest point to Earth for 2017) on April 7.

Source: space.com

Adobe has a tendency to show early demos of the technologies it’s working on but isn’t quite ready to launch publicly. The latest of these is a selfie app from Adobe Research the company outed on its YouTube channel earlier today.

The iOS app, which doesn’t currently have a name as far as we can see, uses a number of the machine learning smarts the company has developed over the last few years and then applies them to selfies (or mobile portraits, as Adobe likes to call them).

Adobe previously demoed quite a few of the features that are available in the app, including the ability to turn 2D selfies into (limited) 3D models so you can change the tilt of your head a bit, for example. The app also uses Adobe’s liquify filter to smooth lines or make it look as if the photo was taken from a different distance. The app also lets you automatically blur the background to mimic the depth-of-field blur you would get from an expensive DSLR.

The most interesting feature, though, is likely the ability to look for photos that mimic the look and feel of what you’re going for with your selfie and then Adobe’s machine learning-powered tools will automatically try to apply that look to your “mobile portrait.” In machine-learning speak, that’s called “style transfer.”

Those features aren’t all new, but previous demos mostly ran on the desktop and this is the first time the company has shown how they could be used together in a single mobile app.

As usual, it's not clear when (or if) Adobe plans to launch this new app to the public. The demo makes it look like a product that's ready for release, but that's the point of a demo, of course. For the most part, today's video is surely meant to raise awareness of Adobe's Sensei AI platform, but my guess is that there's a real product hidden in here that we'll eventually see outside of the Photoshop world.

Here is the official statement from Adobe about its future release plans: “Sneaking glimpses of new technology is something we do from time to time to showcase the amazing innovation that Adobe engineers come up with. We’ll be sure to keep you updated on future developments of Creative Cloud products as they’re available.”

Source : techcrunch.com

The Galaxy S8 was supposed to be Samsung’s first flagship to sport a dual lens camera like the iPhone 7 Plus, at least, according to a few reports from earlier this year. But Samsung ditched those plans because it had to reposition the fingerprint sensor on the back of the phone. Now, a report from a source with a terrific track record indicates that the Galaxy Note 8 will get a dual lens camera that’s currently missing from Samsung’s new S8 and S8+.

In a research note seen by 9to5Google, KGI Securities analyst Ming-Chi Kuo says the Galaxy Note 8 will be Samsung’s first dual-camera handset.

The "Note 8's dual-camera will be much better than that of iPhone 7 Plus, and likely match that of OLED iPhone," the analyst wrote. He said the camera will be the Galaxy Note 8's most significant upgrade, apart from the Infinity Display design we're all expecting. The camera will reportedly offer 3x optical zoom, a 12MP wide-angle CIS (CMOS image sensor) with dual-photodiode (2PD) support, a 13MP telephoto CIS, dual 6P lenses, and dual OIS (optical image stabilization).

The Galaxy Note 8 will reportedly pack a 6.4-inch OLED display with QHD+ resolution, an Exynos 8895 or Snapdragon 835 chip depending on region, and a fingerprint sensor on the back. Apparently, Samsung won’t be able to perfect technology that would let it embed the sensor into the display — that was the original plan for the Galaxy S8 series as well.

Meanwhile, the iPhone 8’s rear camera is tipped to have a vertical orientation, with each lens expected to offer OIS. According to a recent report, Apple is also looking to integrate the fingerprint sensor into the display, but the process is challenging and might delay the phone’s release.

Source : bgr.com

 

When it comes to online advertising, the search engine giant Google has taken the world by storm. Whenever people want to research, learn, watch, or buy something, they go straight to Google, which puts the power of the entire internet at their fingertips. Google currently handles more than 40,000 searches every second, and with Google AdWords, businesses of any size can market themselves and appear at the forefront of those searches.

Over the last five to six years the search engine has grown at a tremendous scale, and Google AdWords is currently Google's biggest source of revenue and profit. Companies used to promote themselves via flyers, newspapers, and radio ads; now, people want to do background checks and research companies and services online, which has fueled the rise of Google AdWords. Advertising budgets that once went 80-90% to offline media have completely reversed: businesses now spend hardly anything on outdoor and print media, pushing almost solely into online advertising.

Whether companies are looking for more visibility or for more sales and phone calls, Google AdWords has become the engine of business growth. With the power to target customers locally or internationally in any country across the globe, Google AdWords allows businesses to reach the right people at the exact moment those customers are looking for them. Best of all, you only pay Google when you get results: you are charged only when someone actually clicks on your ad, so you can start with any budget and scale up based on the success you see from your advertising.

To help companies manage their Google AdWords campaigns in Dubai, Google has introduced a program called 'Google Partners'. Google Partners are agencies with online marketing experts, certified by Google, to run AdWords campaigns end to end.

As leading business consultants based in Dubai, Be Unique Group has become one of the fastest-emerging agencies in Dubai, UAE. Regarded by businesses as "marketing experts," the firm has been recognized by Google as one of the top Google Partner agencies in the Middle East.

Specializing in maximizing profits while reducing costs for small to medium-sized businesses, Be Unique Group has now officially opened two offices simultaneously, in Abu Dhabi, UAE and New York City, USA, to cope with its expansion.

Quote from Be Unique Group Director, Ali Soudi:

"It's been a true pleasure working with businesses from completely different industries to promote them online. There is no greater feeling than seeing what online advertising can do for businesses. Since forming in 2009, we have striven to exceed our clients' expectations every single day.

The majority of our clients come to us from recommendations and referrals, and as long as we continue to deliver on our word, every time, we will continue to expand. We're very grateful for this honor from Google to be considered a Google Premier Partner Agency."

Ali Soudi, Director, Be Unique Group.

http://beuniquegroup.com/marketing/services/google-adwords-management/

Contact: Ali Soudi | +971-50-948-5505, +971-4-380-5077

SOURCE Be Unique Group

Source : prnewswire.com

 

A huge, shiny, peanut-shaped asteroid will safely swing by Earth tomorrow morning (April 19), coming within a distance of 1.1 million miles (1.8 million kilometers) of the planet — about 4.6 times the distance from Earth to the moon.
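The distances quoted above are easy to sanity-check with a quick unit conversion. A minimal sketch, assuming only the standard mile-to-kilometer factor and a mean Earth-moon distance of about 384,400 km (neither figure appears in the article itself):

```python
# Convert the reported flyby distance and express it in lunar distances.
MILES_TO_KM = 1.60934
LUNAR_DISTANCE_KM = 384_400  # assumed mean Earth-moon distance

approach_km = 1.1e6 * MILES_TO_KM          # ~1.77 million km
lunar_distances = approach_km / LUNAR_DISTANCE_KM

print(round(approach_km / 1e6, 1))   # -> 1.8 (million km, matching the article)
print(round(lunar_distances, 1))     # -> 4.6 (lunar distances)
```

Both numbers round to the figures given in the article, so the "1.8 million kilometers" and "4.6 times the distance to the moon" statements are mutually consistent.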

The bright asteroid 2014 JO25 is coming toward Earth from the sun's direction and should be visible in the sky through small telescopes for a few days afterward as it fades from view. It will be at its closest point to Earth at 8:24 a.m. EDT (1224 GMT). You can see a video animation of the asteroid's orbit here.

Asteroid 2014 JO25 was first spotted in May 2014 by astronomers at the Catalina Sky Survey in Arizona, and measurements from NASA's NEOWISE mission suggested it was about 2,000 feet (650 meters) across, according to NASA's Jet Propulsion Laboratory (JPL). Radar observations from the Arecibo Observatory in Puerto Rico suggest it could be as big as 4,270 feet (1.3 km) at its widest point.

 

The asteroid's surface reflects about twice as much light as the moon. Its approach marks the closest an object this large has come to Earth since the gigantic asteroid 4179 Toutatis tumbled by in 2004, within 4 times the distance from the Earth to the moon.

The Arecibo Observatory caught this radar image of the asteroid 2014 JO25 on April 17, 2017, as the large, peanut-shaped asteroid neared its closest approach to Earth.
Credit: Arecibo Observatory/NASA/NSF

Wednesday's approach is the closest 2014 JO25 has come in at least 400 years, and there's no known close approach coming through at least the year 2500. Although the asteroid's approach poses no risk to Earth — with a 0 percent impact probability — the International Astronomical Union's Minor Planet Center still classified it as "potentially hazardous" because of its size and nearness to Earth. Researchers will have to keep an eye on it to see if it drifts closer over the centuries, the Minor Planet Center wrote.

Astronomers around the world will study asteroid 2014 JO25 during and after its approach, including skywatchers at Arecibo and NASA's Goldstone Solar System Radar in California, JPL officials said in the statement — making observations that can potentially reveal features just a few meters across.

"Using radar, we can illuminate a near-Earth asteroid and directly measure its features," astronomer Edgard Rivera-Valentín, a planetary scientist with the Universities Space Research Association (USRA) at the Arecibo Observatory, said in a separate statement. That's how scientists pinned down the asteroid's peanut shape, he added.

The next close approach of a known giant asteroid will happen in 2027, when the half-mile-wide (800 m) asteroid 1999 AN10 passes by at about the distance from the Earth to the moon.

The asteroid 2014 JO25 will fly safely past Earth April 19, coming within 1.1 million miles (1.8 million km) of the planet — about 4.6 times the distance between the Earth and the moon. This map shows the asteroid's locations as it passes through the sky April 19 to April 22 — it will appear bright in the sky for small telescopes for a few days after closest approach. Its position will vary from location to location; you can calculate its position on this NASA site.
Credit: Gianluca Masi (Virtual Telescope Project)/TheSkyX Pro

You can watch asteroid 2014 JO25's journey live on the Slooh online observatory's website starting at 7 p.m. EDT (2300 GMT) April 19. You can also seek it out in the sky using the celestial map above and a small telescope (although its position will vary, depending on your location).

While you're looking, keep an eye out for the comet PanSTARRS (C/2015 ER61), which will be visible in the dawn sky to observers with binoculars or a small telescope as it makes its closest approach of 109 million miles (175 million km).

Editor's note: If you capture a photo of asteroid 2014 JO25 through a telescope and would like to share it with Space.com, please send your images and comments in.

Correction: A previous version of this article incorrectly stated that the Arecibo Observatory is in Chile; it is in Puerto Rico.

Source: Space.com

Netflix has completely transformed the way we watch and enjoy TV, a feat made all the more remarkable given that the streaming giant was nothing more than a purveyor of DVDs by mail just a few years ago. These days, Netflix is churning out content faster than most people can keep up with. In April alone, Netflix has plans to roll out 25 new original programs, including a highly anticipated new stand-up special from Louis C.K. And in case you missed it, Netflix late last week rolled out 13 Reasons Why, a gripping drama based on the book of the same name that has already garnered a 95% rating on Rotten Tomatoes.

Now, if this is all old news to you and you fancy yourself something of a Netflix connoisseur, we've put together a long list of Netflix tips and tricks to help you take your game to the next level. Whether you're curious about exploring some of Netflix's hidden subcategories or want to try new Netflix features before they roll out to the masses, we've got you covered. As much as it pains me to say it, now might be a good time to pause whatever you're currently watching and check out some of the coolest Netflix features lurking right beneath the site's surface.

Secret Netflix codes and hidden subcategories

With Netflix doubling down on original content, and with the catalog of available titles changing every month, figuring out what to watch next can sometimes be an exercise in frustration. While Netflix has recently made it much easier to search across genres and sub-genres, there are tons of secret Netflix codes you can use to explore the full breadth of Netflix's content library.

Here’s how it all works.

Every genre of content in the Netflix library has its own unique code. For example, the URL http://www.netflix.com/browse/genre/6839 will take you to Netflix's documentary page, because 6839 is the code Netflix uses for documentaries. It seems simple enough, but if you start playing around with different numerical strings, you can really have some fun.

An unofficial list of genre codes can be found over here. It's a long list, but if you're looking for something completely random, such as critically acclaimed feel-good movies from the 1980s or exciting Japanese movies, you can use the codes from that list to make your search for new content much more efficient. As a quick example, the list notes that the code for "British Independent Crime Movies" is 2962, so you can access films in that category by going to http://www.netflix.com/browse/genre/2962.
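The URL pattern described above is simple enough to script. Here's a minimal sketch; the two codes are the ones cited in the article, and `genre_url` is a hypothetical helper (Netflix offers no official API for this):

```python
BASE_URL = "http://www.netflix.com/browse/genre/"

# Genre codes cited in the article; the unofficial list contains many more.
GENRE_CODES = {
    "Documentaries": 6839,
    "British Independent Crime Movies": 2962,
}

def genre_url(code: int) -> str:
    """Build the browse URL for a given Netflix genre code."""
    return f"{BASE_URL}{code}"

for name, code in GENRE_CODES.items():
    print(f"{name}: {genre_url(code)}")
```

Paste any of the printed URLs into a browser while logged in to Netflix to jump straight to that subcategory.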

Manage your data usage when viewing from a mobile device

Given how popular Netflix is on mobile, you might want to know about an app setting that lets you choose how much data you use per hour of programming. To find it, open the Netflix app and go to App Settings > Cellular Data usage. There, you can have your streaming quality set automatically or pick a quality level manually. As laid out by Netflix, here's how the different video-quality settings affect data usage:

  • Low: 4 hours per GB of data
  • Medium: 2 hours per GB of data
  • High: 1 hour per GB of data
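Those hours-per-GB figures translate directly into data consumed per session. A small sketch (`data_used_gb` is a hypothetical helper derived only from the numbers in the list above):

```python
# Hours of streaming per GB at each Netflix mobile quality setting (from the list above).
HOURS_PER_GB = {"Low": 4, "Medium": 2, "High": 1}

def data_used_gb(quality: str, hours: float) -> float:
    """Estimate the data (in GB) consumed for a given number of viewing hours."""
    return hours / HOURS_PER_GB[quality]

# A three-hour session on Medium uses about 1.5 GB; on High, 3 GB.
print(data_used_gb("Medium", 3))  # -> 1.5
print(data_used_gb("High", 3))    # -> 3.0
```

In other words, stepping from Low to High quadruples your data usage for the same viewing time.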

Request your favorite movies and TV shows

If you have a particular movie or TV show that you’d like to become available, Netflix not only allows user suggestions, it encourages it. If you’d like to make a request, you can do so by checking out this page on Netflix’s website. A request is obviously no guarantee that Netflix will grant you your wish, but it certainly never hurts to try.

Download shows for offline viewing

Netflix was slow to roll out this feature, but we’re definitely glad that it’s finally here. If you’re headed on a trip or simply want to have some content at the ready in case you get stuck in a place with limited reception, Netflix will let you download certain programs to your device for later viewing.

To see what shows are available, go to the Netflix menu on your mobile device and select “Available for Download.” You’ll be taken to a page where you can search for a particular title or simply scroll through a list of recommended programs.

There are two points worth adding: One, the number of devices under your account that can have downloaded content concurrently may be limited by the type of Netflix subscription you have. Two, if you want to delete all of your downloaded Netflix content in one fell swoop, you can go to Menu > App Settings > Delete all downloads.

Save storage space when downloading shows

If space is at a premium on your device, or if you simply want to shorten your download times, you can opt to download shows in either standard definition or HD. To toggle between these two settings, go to App Settings > Video quality. To be fair, standard-definition video on a screen as small as the iPhone's isn't too shabby, so you might want to avoid HD downloads unless you've got the space and bandwidth to spare.

See or delete everything you’ve ever watched or rated

If you ever need to browse through your entire Netflix viewing history, simply check out Netflix's My Activity page here. Aside from offering a bizarre trip down memory lane, the feature is a godsend if, for example, you're trying to track down a documentary you watched four years ago whose title escapes you.

To see a list of the shows you’ve given ratings to before, simply go to the My Activity page linked above and hit the “Rating” option on the right-hand side of the page.

Bonus tip: You can delete select titles from your viewing history simply by hitting the 'x' to the right of each listing.

Check if anyone is using your account without permission

It's pretty common for users of streaming sites to share passwords, but you never know just how far a shared password of yours has travelled. If you want to check where your account is being used, or even just to make sure you haven't been hacked, you can see a list of all the IP addresses that have logged into your account via Netflix's Recent Account Access page. You'll also see the city where each login originated and whether it came from a desktop computer, a mobile device, or a set-top box.

Play Netflix Roulette

If you’re bad at making decisions or simply want to have a little fun and dance with the devil, you might want to play a little Netflix Roulette. As the name implies, the website (which is not affiliated with Netflix) spits out a random movie or TV show once a user inputs a specific genre and a desired ratings range. You can also filter on keywords such as a specific actor’s name.

Test new Netflix features before they become available to everyone

Netflix is a big fan of A/B testing, and the company is always experimenting with new layouts, recommendation algorithms and more. If you want to get in on this, go to your Account page, scroll all the way down, and select "Test Participation." As Netflix describes it, toggling this feature on lets users "help improve the Netflix experience and see potential changes before they are available to all members."

Kick any moochers off your account

If you forgot to log off of a particular device, you can kick all signed-in accounts off-line with a single, swift blow. To do so, go to your Account section and then select “Sign out of all devices” located in the Settings portion of the page.

Change how subtitles appear

If you ever happen to watch a show with subtitles (subtext: watch Narcos), you can change the way subtitles appear on the screen. It may sound trivial, but if you’re reading quite a bit during a program, you may have strong feelings as to whether or not the translated text appears in white or yellow. To get things just to your liking, head on over to Netflix’s Subtitle Appearance section.

Source: bgr.com

Tuesday, 18 April 2017 12:27

Eleven habits of mentally strong people


Travis Bradberry says mentally strong people set themselves apart from the crowd by seeing barriers as challenges to overcome. He lists some strategies we can all use to develop our mental strength.


We all reach critical points in our lives where our mental strength is tested. It might be a difficult friend or colleague, a dead-end job, or a struggling relationship.

Whatever the challenge, you have to be strong, see things through a new lens, and take decisive action if you want to move through it successfully.

It sounds easy. We all want good friends, good jobs, and good relationships.

But it isn’t. It’s hard to be mentally strong, especially when you feel stuck. The ability to break the mold and take a bold new direction requires that extra grit, daring, and spunk that only the mentally strongest people have.

It’s fascinating how mentally strong people set themselves apart from the crowd. Where others see impenetrable barriers, they see challenges to overcome. There are habits you can develop to improve your mental strength. In fact, the hallmarks of mentally strong people are actually strategies that you can begin using today.

They’re emotionally intelligent:

Emotional intelligence is the cornerstone of mental strength. You cannot be mentally strong without the ability to fully understand and tolerate strong negative emotions and do something productive with them. Moments that test your mental strength are ultimately testing your emotional intelligence (EQ).

Unfortunately, EQ skills are in short supply. TalentSmart has tested more than a million people, and we've found that just 36 per cent are able to accurately identify their emotions as they happen.

They’re confident:

Mentally strong people subscribe to the notion that your mentality has a powerful effect on your ability to succeed. A recent study at the University of Melbourne showed that confident people went on to earn higher wages and get promoted more quickly than others did.

True confidence — as opposed to the false confidence people project to mask their insecurities — has a look all its own. Mentally strong people have an upper hand over the doubtful and the skittish because their confidence inspires others and helps them to make things happen.

They say no:

Research conducted at UC Berkeley showed that the more difficulty you have saying no, the more likely you are to experience stress, burnout, and even depression. Mentally strong people know that saying no is healthy, and they have the self-esteem and foresight to make their ‘nos’ clear.

When it’s time to say no, mentally strong people avoid phrases such as: “I don’t think I can” or “I’m not certain.” They say no with confidence because they know that saying no to a new commitment honours their existing commitments and gives them the opportunity to successfully fulfill them.

They neutralise difficult people:

Dealing with difficult people is frustrating and exhausting for most. Mentally strong people control their interactions with toxic people by keeping their feelings in check.

When they need to confront a toxic person, they approach the situation rationally. They identify their emotions and don’t allow anger or frustration to fuel the chaos. They also consider the difficult person’s standpoint and are able to find common ground and solutions to problems.

Even when things completely derail, mentally strong people are able to take the toxic person in stride and avoid letting him or her bring them down.

They embrace change:

Mentally strong people are flexible and are constantly adapting. They know that fear of change is paralysing and a major threat to their success and happiness. They look for change that is lurking just around the corner, and they form a plan of action should these changes occur.

Only when you embrace change can you find the good in it. You need to have an open mind and open arms if you’re going to recognise, and capitalise on, the opportunities that change creates.

They embrace failure:

Mentally strong people embrace failure because they know that the road to success is paved with it. No one ever experienced true success without first embracing failure. By revealing when you’re on the wrong path, your mistakes pave the way for you to succeed.

The biggest breakthroughs typically come when you’re feeling the most frustrated and the most stuck. It’s this frustration that forces you to think differently, to look outside the box, and to see the solution that you’ve been missing.

Yet, they don't dwell on mistakes:

Mentally strong people know that where you focus your attention determines your emotional state. When you fixate on the problems that you’re facing, you create and prolong negative emotions and stress, which hinders performance.

When you focus on actions to better yourself and your circumstances, you create a sense of personal efficacy, which produces positive emotions and improves performance.

They don't compare themselves to others:

Mentally strong people don’t pass judgment on other people because they know that everyone has something to offer, and they don’t need to take other people down a notch in order to feel good about themselves.

Comparing yourself to other people is limiting. Jealousy and resentment suck the life out of you. Mentally strong people don’t waste time or energy sizing people up and worrying about whether or not they measure up. Instead of wasting your energy on jealousy, funnel that energy into appreciation.

They exercise:

A study conducted at the Eastern Ontario Research Institute found that people who exercised twice a week for 10 weeks felt more socially, intellectually, and athletically competent. They also rated their body image and self-esteem higher.

Best of all, it wasn’t the physical changes in their bodies that were responsible for the uptick in confidence, which is key to mental strength; it was the immediate, endorphin-fueled positivity from exercise that made all the difference.

They get enough sleep:

It’s difficult to overstate the importance of sleep to increasing your mental strength. When you sleep, your brain removes toxic proteins, which are by-products of neural activity when you're awake.

Your brain can remove them adequately only while you're asleep, so when you don't get enough sleep, the toxic proteins remain in your brain cells, wreaking havoc by impairing your ability to think — something no amount of caffeine can fix.

They’re relentlessly positive:

Keep your eyes on the news for any length of time, and you’ll see that it’s just one endless cycle of war, violent attacks, fragile economies, failing companies, and environmental disasters.

It’s easy to think the world is headed downhill fast. And who knows? Maybe it is. But mentally strong people don’t worry about that because they don’t get caught up in things they can’t control.

Instead of trying to start a revolution overnight, they focus their energy on directing the two things that are completely within their power — their attention and their effort.

Mental strength is not an innate quality bestowed upon a select few. It can be achieved and enjoyed.

*Travis Bradberry is the co-founder of TalentSmart, a provider of emotional intelligence tests, emotional intelligence training, and emotional intelligence certification.

This article first appeared on the TalentSmart website.

Source: psnews.com.au

“Drawing on your phone or computer can be slow and difficult — so we created AutoDraw, a new web-based tool that pairs machine learning with drawings created by talented artists to help you draw,” wrote Google Creative Lab’s “creative technologist,” Dan Motzenbecker, earlier this week.

AutoDraw is one of Google’s artificial intelligence (AI) experiments, working across platforms to let anyone, irrespective of their artistic flair, create something super quick with little more than a scribble. It guesses what you’re trying to draw, then lets you pick from a list of previously created pictures. “So you can’t draw? No worries!” is the general idea here.

Above: AutoDraw

First up, AutoDraw is a super fun tool that gets increasingly addictive — that much is clear. But what’s also clear is that the tool is more a display of AI smarts than it is a tool to improve your artwork, because it would be just as easy to embody the exact same functionality within a text-based search engine. I mean, why bother drawing a crap dolphin with your finger when you could just type in the word “dolphin”? Because it wouldn’t be nearly as much fun, and Google wouldn’t get to show off its fancy new toys.

A few days after Google debuted AutoDraw, it revealed some other research its scientists have been carrying out, designed to enable computers to generate simple sketches using AI. In effect, they trained a recurrent neural network (RNN) on sketches that real people made in an experimental app called Quick, Draw!, which launched last year (again … it is really fun). The app tells you to draw things, like a giraffe or a butterfly, and then it guesses what you’ve drawn. So what Google is doing is training machines to sketch like real people, line overlaps and crappy squiggles included.

What this helps demonstrate is the growing crossover between art and algorithms. But does this hint at a future where humans have little incentive to be creative at all?

The rise of the fourth industrial revolution

As part of the so-called fourth industrial revolution, millions of jobs will be lost to automation, according to a recent World Economic Forum report. The net loss is expected to be as many as five million jobs by 2020, though of course a whole bunch of new jobs will be created, including positions in IT and data science. Jobs such as manufacturing and production are expected to be heavily affected, while another recent report indicated that more than 100,000 legal jobs will be automated over the next 20 years.

But art… art is sacred. Art is an expression of human sentiment and emotion. Computers stand zero chance of consigning human creativity to the history books. Right? Well, maybe. We’re already seeing the early signs that art will be disrupted by machine intelligence and automation.

Why bother learning to paint a landscape or pay someone to sketch your newborn when you can download Prisma to your smartphone and transform your snapshots into ultra-realistic pieces of art in seconds? Prisma, for the uninitiated, uses neural networks to analyze each photo and then applies a style the user selects. And it really is rather good.

“Based on deep-learning techniques, we redraw the image from scratch,” said Alexey Moiseenkov, Prisma Labs cofounder, in an interview with VentureBeat last year. “We analyze tons of photos and get the typical forms and lines, then take a style and draw your picture with those lines in a taken style.”

Above: Bottle with Prisma effect applied

The point here isn’t that these tools are better than human creators. The point is that such tools are pretty good just now, and they’ll only get better. If someone can press a couple of buttons to get an instant “hand-drawn” family portrait, using little more than a DSLR camera, tripod, and a Prisma-style AI image-rendering app, why would they bother employing the services of a professional artist?

It’s not beyond possibility that artists and art retailers will one day have to sell their services based on their authenticity — “100% hand-painted pictures” could become the only visible marking that separates human creations from those produced by machines.

But technology’s algorithmic arm stretches far beyond that of photography and art and into other creative realms.

In design

For years, automated web design services such as Wix and Weebly have offered novices an easy-to-use web development platform that makes it simple to build HTML5 sites using drag-and-drop tools rather than code. For basic websites without much deep functionality, such tools work fairly well. But the formulaic, simplistic, template-based approach leaves much to be desired, which is why professional designers and developers still manage to eke out a living.

Last June, Wix launched an automated web design service built on artificial intelligence, called Wix ADI. The service feeds data garnered from Wix’s existing user base into its AI; the creator answers a few questions, giving the platform cues about the site’s theme and category, and Wix then pulls in relevant photos, words, and layouts based on the business type and location.

“Wix ADI isn’t just a new website builder — it sets a new market standard for web design,” said Wix ADI head Nitzan Achsaf at the launch. “We have been at the forefront of this market for nearly a decade, and now as one of the leading AI technology providers, we will make website creation accessible and easy for everyone.”

Wix promises that no two websites will look the same.

Other similar AI-focused web design platforms have blossomed in recent times and raised significant venture capital funding, including TheGrid, which has been operating its AI smarts for a few years already, and B12, which launched a similar proposition in beta last year with more than $12 million in funding.

The credibility of DIY web- and app-design tools that promise to turn “noobs” into designers and coders has been questioned for years. And now that AI is going the extra mile to remove any further effort from the process, it will only ruffle the naysayers’ feathers even more. But the usefulness of such tools really depends on what the purpose of the website is. Why pay for a professional designer and developer when you can hit a few buttons and have a simple, informative, Google-friendly site made with next to no spadework?

Again, the point here isn’t that the machines are now good enough to replace professionals in building fully functional websites and online services. The point is that AI is encroaching further into creative professions and, more importantly, it’s improving all the time.

In music

Could an algorithm ever be able to produce something as exquisite as Lennon & McCartney, Jagger & Richards, or even Mozart? Maybe. But probably not, at least for a while.

Back in September, headlines across the web screamed that the first AI-written pop song had been made. It made for alluring copy, but it wasn’t strictly true. Sony researchers, using specialist Flow Machines software, were able to train a system on different music styles using a gargantuan database of songs. Then, by combining “style transfer, optimization and interaction techniques,” the system can compose music in any style.

So what we have here is a song called “Daddy’s Car,” written in the style of The Beatles. And hey, it’s not too bad.

However, a more accurate description of this composition would be that it was “AI-assisted.” French composer Benoit Carré wrote the lyrics (which are pretty nonsensical) and arranged the song — all the computer did was identify commonalities across this style of pop music and provide Carré with the parts to play around with. Sony’s researchers have actually been working on AI-assisted music creations for a few years already, and an entire album of such music is expected later this year.

Sony isn’t the only company dabbling in this field. Last year, Google announced Magenta, a project from the Google Brain team that’s setting out to discover whether machine learning can “create compelling art and music.” And earlier this year, the internet giant released a working interactive version of AI Duet, an app that lets you play a virtual piano with accompaniment from a computer system that riffs off what you play.

Elsewhere, London-based startup Jukedeck is working on an AI-powered music composer that writes original music entirely of its own volition. Aimed at video creators on the hunt for original background music, Jukedeck has been training deep neural networks to understand how to compose and adapt music, with the end-user able to customize the sound they’re looking for.

All the guitar bands, DJs, and orchestras of the world can perhaps rest easy for now. While computers will improve at “songwriting,” artists’ biggest worry for the time being is how to make money in the age of on-demand streaming. Speaking of which….

Spotify snapped up music intelligence and data platform Echo Nest back in 2014, and off the back of that acquisition has been doubling down on its music recommendation efforts. The star of the show is Discover Weekly, a personalized playlist of music built around songs you’ve previously listened to on the platform.

In effect, Spotify analyzes your history and meshes it with the listening behavior of others to see what songs commonly appear next to each other, then based on this information it recommends new music. And it is more than pretty good — it is pretty excellent. While Apple is banking on human curators via the likes of Apple Radio, Spotify is arguably winning the music-recommendation battle using algorithms and automation.
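The co-occurrence idea described above can be sketched in a few lines of Python. This is a deliberately tiny toy, not Spotify's actual system; the playlists and song names are invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

# Toy playlists standing in for other users' listening histories.
playlists = [
    ["song_a", "song_b", "song_c"],
    ["song_a", "song_b", "song_d"],
    ["song_b", "song_c", "song_e"],
]

# Count how often each pair of songs appears in the same playlist.
co_occurrence = defaultdict(int)
for playlist in playlists:
    for s1, s2 in combinations(sorted(set(playlist)), 2):
        co_occurrence[(s1, s2)] += 1
        co_occurrence[(s2, s1)] += 1

def recommend(history, top_n=3):
    """Score unseen songs by how often they co-occur with songs the user has heard."""
    scores = defaultdict(int)
    for heard in history:
        for (a, b), count in co_occurrence.items():
            if a == heard and b not in history:
                scores[b] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["song_a"]))
```

A real recommender meshes millions of histories and normalizes for song popularity, but the core signal is the same: songs that frequently appear next to the ones you already play are good candidates.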

What’s most interesting about this is that it is infinitely more scalable than a human DJ’s ability to recommend new music. Playlists built on algorithms are always tailored to the individual, while human recommendations carry a subjective bias that will never appeal to everyone at all times.

Similarly, Shazam analyzes song structure to tell you what the name of the song is and who performs it. All you need to do is hold your phone up, tap a button, and voila. It really is a great way to discover new music and build up a library of tunes that you encounter on your day-to-day business, be it in a shop, at a football stadium, or while watching TV. Such technologies make everyone an expert, without having to become an expert. You don’t need to know anything except how to tap a button to identify a song, while Shazam links in directly with Spotify and iTunes to make it easy to stream or buy music.

Together, the likes of Spotify and Shazam could put a sizable dent into the knowledge-powered smarts of music writers and DJs around the world. People have instant access to all the information they need on the music they hear around them. And why listen to the top 10 charts on the radio, or read the top 5 albums of the week in the NME, when you know that Spotify has all the best new music? And why turn to your music-obsessed buddy to ask what the name of the song in that TV advertisement is when you can just Shazam it?

With algorithms at work, the need for human knowledge and expertise diminishes.

In writing

Above: Lego robot typing

It’s difficult to envisage a time when a machine will be capable of crafting a best-selling novel, but lord knows geeks have been trying to make that happen for a while. It’s not overly difficult to create something that is formed of words and roughly comprehensible in parts, but generating something with a proper narrative that flows beautifully from start to finish and is infused with wit and passion — well, that could be a long way off yet.

But we are already at a stage where machines are producing journalistic content (for want of a better phrase). Last summer, the Associated Press (AP) revealed it was expanding its baseball coverage with automated stories generated by algorithms through a partnership with Automated Insights. The AP had worked with Automated Insights for years already, generating thousands of computer-generated corporate earnings reports.

Automated Insights uses artificial intelligence to analyze big data and transform it into stories. Chicago-based Narrative Science offers something similar, with a specific focus on business intelligence for the enterprise, or “data storytelling,” as it puts it.
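At their simplest, such data-to-text systems map structured data onto natural-language templates. Here is a minimal hypothetical sketch of the idea in Python; the game data and templates are made up, and real platforms like Automated Insights are far more sophisticated than this.

```python
# A toy data-to-text generator in the spirit of automated sports recaps.
# The game dictionary and templates are invented for illustration.
game = {
    "winner": "State College Spikes",
    "loser": "Brooklyn Cyclones",
    "winner_runs": 9,
    "loser_runs": 8,
    "decisive_play": "a bases-loaded hit-by-pitch",
    "inning": "11th",
}

def recap(g):
    """Pick a template based on the data, then fill in the blanks."""
    margin = g["winner_runs"] - g["loser_runs"]
    if margin == 1:
        template = ("The {winner} edged the {loser} {winner_runs}-{loser_runs}, "
                    "winning on {decisive_play} in the {inning} inning.")
    else:
        template = "The {winner} beat the {loser} {winner_runs}-{loser_runs}."
    return template.format(**g)

print(recap(game))
```

The branching on the score margin is the key trick: by selecting among many templates based on what the numbers say, the output varies from game to game even though no prose is written by hand per story.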

Here’s an AP report from a baseball game in the New York-Penn league, powered by Automated Insights.

STATE COLLEGE, Pa. (AP) — Dylan Tice was hit by a pitch with the bases loaded with one out in the 11th inning, giving the State College Spikes a 9-8 victory over the Brooklyn Cyclones on Wednesday.

Danny Hudzina scored the game-winning run after he reached base on a sacrifice hit, advanced to second on a sacrifice bunt and then went to third on an out.

Gene Cone scored on a double play in the first inning to give the Cyclones a 1-0 lead. The Spikes came back to take a 5-1 lead in the first inning when they put up five runs, including a two-run home run by Tice.

Brooklyn regained the lead 8-7 after it scored four runs in the seventh inning on a grand slam by Brandon Brosher.

State College tied the game 8-8 in the seventh when Ryan McCarvel hit an RBI single, driving in Tommy Edman.

Reliever Bob Wheatley (1-0) picked up the win after he struck out two and walked one while allowing one hit over two scoreless innings. Alejandro Castro (1-1) allowed one run and got one out in the New York-Penn League game.

Vincent Jackson doubled twice and singled, driving in two runs in the win.

State College took advantage of some erratic Brooklyn pitching, drawing a season-high nine walks in its victory.

Despite the loss, six players for Brooklyn picked up at least a pair of hits. Brosher homered and singled twice, driving home four runs and scoring a couple. The Cyclones also recorded a season-high 14 base hits.

This story was generated by Automated Insights (http://automatedinsights.com) using data from and in cooperation with MLB Advanced Media and Minor League Baseball, http://www.milb.com.

And here’s an earnings report in Forbes, powered by Narrative Science.

Over the past three months, the consensus estimate has sagged from $1.25. For the fiscal year, analysts are expecting earnings of $5.75 per share. A year after being $1.37 billion, analysts expect revenue to fall 1% year-over-year to $1.35 billion for the quarter. For the year, revenue is expected to come in at $5.93 billion.

A year-over-year drop in revenue in the fourth quarter broke a three-quarter streak of revenue increases.

The company has been profitable for the last eight quarters, and for the last four, profit has risen year-over-year by an average of 16%. The biggest boost for the company came in the third quarter, when profit jumped by 32%.

Earnings estimates provided by Zacks.

Narrative Science, through its proprietary artificial intelligence platform, transforms data into stories and insights.

Such reports won’t be winning any Pulitzer prizes yet, but they’re perfectly readable and the algorithms are constantly improving. There’s no evidence that machines will be capable of producing something akin to Dickens or Proust, but who knows what another 10 years’ worth of data could do to improve their writing smarts?

“A machine will win a Pulitzer one day,” noted Narrative Science’s chief scientist Kris Hammond, in the Guardian. “We can tell the stories hidden in data.”

While fears abound that algorithms will kill off human journalists, figuratively speaking, the AP has previously stated that embracing machine-written stories is more about expanding its coverage than replacing journalists. Through this method, it can cover many Minor League Baseball games it would not previously have covered, simply by using data provided by news and statistics body Major League Baseball Advanced Media (MLBAM).

“Augmented content was never intended to replace human-generated content,” explained Joe Procopio, Automated Insights’ chief innovation officer, in an interview with VentureBeat. “It’s another tool, another arrow in the journalist’s quiver, so to speak, and it should be used in places where it can take a lot of the data science and number crunching off the journalist’s plate. That frees up the journalist’s time to be able to do more of the investigative and reasoning work inherent in their jobs.”

What will ultimately decide whether an artistic endeavor is replaced by an algorithm or set of algorithms, in a business setting at least, is whether it’s more efficient. The question is: Does it save time and money without compromising on quality?

“There are basically two boxes that need to be checked when deciding to use automation to tell a story,” added Procopio. “One, is the data available to write something compelling, and two, is the business case there — in other words, does automation save enough time and resources to make it worthwhile?”

So can a machine be trained to amend its style of writing depending on whether it’s writing an earnings report, a baseball review, or an obituary? Absolutely — this is already happening. Could a machine write a review of a music gig? Or write up an interview? Potentially, but it all comes down to the quality of the data the platform is given, and whether it’s actually cost effective to train a system to become efficient at such write-ups.

“Automation can be used when writing the types of pieces you describe — feature, interviews, reviews, etc., where automation makes sense,” continued Procopio. “How much of the piece should be automated depends on the scope of the piece.”

What’s emerging here is that such tools could be more about assisting the journalist than replacing them. It might not make sense to attempt entire computer-generated write-ups of a music gig, for example, if it already requires a human to attend the gig and form an opinion. But it may make sense to use a machine to fill in the gaps in the final review, or even to format it properly. For example, automation could generate paragraphs on a particular band’s sales and downloads, or maybe ticket sales, through tapping existing databases that contain up-to-date information. It’s not really important whether a human or a machine finds and compiles such data, so long as it’s accurate, but using an automated approach could save a journalist a lot of time.

Found in translation

Away from the journalistic sphere, the global translation and interpretation industry is reported to be worth around $40 billion. And contrary to what some may think, the process of converting words and meanings between languages requires a great deal of creativity. Often words or sentiment don’t convert well between languages and vernaculars, leaving the translator to trawl the nuanced depths of their linguistic abilities to communicate the intended meaning in another tongue.

Historically, machine translation tools have had a bad rap, but they are getting better. It’s now possible to plug any foreign-language newspaper article into Google Translate and receive a pretty faithful interpretation in another language, though there are many colloquialisms that will still trip up the best machine translation tools out there. Google has started using its AI-based neural machine translation across more of its public-facing services.

Skype also has a real-time voice translation tool, which lets you speak with someone (verbally) in a foreign tongue such as Japanese, in real time. Skype Translator uses AI smarts such as deep learning to train artificial neural networks, meaning it should improve over time as it listens to more conversations.

Any business worth its salt would not rely 100 percent on machine translations for mission-critical communications with customers. But we are certainly fast approaching a stage where machines can be called upon for less important stuff, and perhaps used in tandem with a proofreader to correct mistakes and clarify any ambiguities introduced by the machine for use in more important communications.

So, as with Automated Insights, we could have a situation where 100 percent automation is used in some instances where it makes sense, but in cases where the nuanced understanding of a human is needed, the two would work in conjunction with each other.

Where we’re at

It’s clear that the threat from automation to human jobs is real for many industries, and that includes the creative realm: streaming services that serve you the perfect playlist, apps that turn a family photo into something straight from Van Gogh’s easel, real-time translations and interpretations, robot-written news reports, and websites created automatically simply by answering a few questions.

This leads us to one stark question. Creativity is a core defining human trait, something that truly separates us from the machines, so where is the incentive to get creative when all these tools out there are setting out to save us from doing it ourselves?

There are a number of positives here. If a computer was to get as good as, or better than, humans at drawing in a natural style, then it could become the teacher, or assist an artist in their own creative process. Plus, there is a strong line of argument that says that people will always have a creative streak and will want to do things themselves. If you can click a button to turn a photo into a work of art, where is the fun in that?

And that is something that humans will never lose: a desire to have fun and make things themselves. Whether they will be able to get a job off the back of it in 20 years’ time is another question, of course.

When technology is constantly “fixing” human errors, be it a typo in a Word document or a wonky line in a drawing, humans may gradually lose the ability to perform certain creative tasks without computer intervention. It’s no longer necessary to remember facts, or phone numbers, or routes to your grandma’s house in the next town, because we know it’s all instantly accessible through a phone. This surely has an impact on a brain’s ability to remember things. Similarly, if kids grow up with tools to “help them draw” on their phone or computer because it’s “slow and difficult” otherwise, this can’t bode well if it becomes the norm.

But let’s not get too carried away. Machines have yet to prove they’re up to the job of many creative tasks; all they’ve shown so far is they can chip away at the edges — and even then they still need human assistance. Highly creative projects such as writing novels, writing investigative journalism, or penning an entire album of original music with heartfelt, meaningful lyrics — it’s difficult to see a time in the near future where computers will trump humans.

A good example is this cool little short sci-fi film produced last year, called Sunspring. It stars real actors, but the script was written by a machine. It was inspired by Alphabet’s AlphaGo AI system beating a pro player at the age-old strategy game Go.

The script for the short film was authored by a “recurrent neural network called long short-term memory, or LSTM for short,” according to a report in Ars last year. It is actually really funny, and makes little sense, but it serves as a reminder as to how far behind machines are in terms of creating genuine works of art that humans would wish to enjoy at scale.

It’s also important to distinguish between artificial intelligence and “algorithmic intelligence.” The former is more about computers being able to think, understand, and adapt in the way a human might, while the latter is more about using mathematics to help people and machines work together.

Phil Tee is chairman and CEO of Moogsoft, a company that specializes in bringing algorithmic intelligence to enterprises — Moogsoft basically helps them adopt algorithms to address mundane operational tasks. He told VentureBeat:

Artificial intelligence is the ability for computer systems to perform tasks that traditionally have required human intelligence, such as visual perception, speech recognition, decision-making and language translation. Algorithmic technologies such as Algorithmic IT Operations (AIOps), on the other hand, leverage mathematics to help operators navigate dynamic, and highly unpredictable settings such as enterprise IT environments. There isn’t anything artificial about algorithms.

And this is a key point. Using algorithms to predict what music you’ll like on Spotify or what movies you should watch next on Netflix is smart for sure, but it’s not creative in itself. It may be better at doing its job than a human is, but it doesn’t exist as part of “the arts.” So while we’ll see businesses increasingly turn to algorithmic intelligence to optimize and streamline their operations and differentiate themselves from the competition, art itself may not be directly under threat.

But will we ever reach a stage where a computer could write a completely coherent book, song, or movie of its own volition?

“Absolutely, but the advances necessary are quite imposing,” added Tee. “The typical neural network today has roughly hundreds to tens of thousands of neurons, which makes it even less intelligent than a sea slug, which has 18,000 neurons in its brain. This journey to a creative thinking machine is vital, but a long one. Perhaps we should be more focused on intelligence as an aid to creativity rather than a replacement. After all, creativity probably is ultimately what defines humanity.”

Art needs humans, and humans need art. Machines may increasingly help the two work together, and it may even replace some jobs, but as one of our defining characteristics, humans and art will continue to be inseparable.

Source: venturebeat.com

Research from Newcastle University in the U.K. has shown how malicious websites can use the motion sensors in mobile phones to uncover PINs and other information. (Rich Pedroncelli/Associated Press)

New research reveals hackers can use sensor technology to gather all kinds of data

A new study has revealed just how easy it is for hackers to use the sensors in mobile devices to crack four-digit PINs and to access a wide variety of other information about users.

Cyber-security experts from Newcastle University in the U.K. found that once a mobile user visits a website, code embedded on the page could then use the phone's motion and orientation sensors to correctly guess the user's PIN. This worked on the first attempt 75 per cent of the time, and by the third try 94 per cent of the time.

The study, published in the International Journal of Information Security this week, also found that most people have little idea of what the sensors in our phones can do and the security vulnerabilities they pose.

The researchers identified 25 different sensors that are now standard on most phones. Yet websites and apps only ask for permission to use a small fraction of these — GPS and camera, for example.

Downside of fitness tracking

"A lot of these sensors came to help people have a better experience when they work with these devices, and they bring a lot of advantages to our lives," said Maryan Mehrnezhad, a research fellow in Newcastle's school of computing science and lead author of the paper.

The sensors that enable popular fitness-tracking apps contribute to security risks. (Getty Images)

Examples of these include the accelerometer and gyroscope sensors that enable the fitness-tracking apps so popular with cellphone users.

Yet the sensor technology is well ahead of any regulatory restrictions pertaining to our privacy, said Mehrnezhad in an interview with CBC News.

She and her colleagues mimicked what's known as a "side channel attack" on Android mobile phones using a website embedded with JavaScript code.

The results show that the attack site could learn details such as the timing of phone calls, whether the user is walking, sitting or running, as well as any touch activity, including PINs, she said.

Underestimating risk

The second part of the study evaluated people's understanding of these risks.

Interviews with around 100 mobile users found that most people are not aware of the sensors on their mobile devices, said Mehrnezhad, and that there is "significant disparity" between the actual risk and perceived risk of having a compromised PIN.

In fact, as the sensors were being developed, even the phone manufacturers didn't have a clear understanding of the risks associated with them, said Urs Hengartner, an associate professor in computer science at the University of Waterloo.

"Everybody thought that accelerometer data and gyroscope data is not sensitive, so there's no need to ask for permission. Now research shows that it is an issue," said Hengartner in an interview with CBC News.

"These are security researchers that figured this out, and so nobody else seems to have known, not the browser vendors, not the operating system vendors and definitely not the general public."

Solving the problem is "a big research challenge," he said, in part because users may not understand the implications of what they're being asked by an app or website and may simply default to saying yes.

Decision fatigue

Research has shown that when people get tired of being asked for permission, they default to saying yes so they can access the website they want to visit or use the app they need, said Hengartner. 

Some browsers have begun asking for permissions for things like location data, but there is no uniform standard for doing so, he said.

As study author Mehrnezhad notes, tech companies also don't want to sacrifice the convenience and functionality we've come to expect of our mobile devices.

"It's a battle between security and privacy on one hand and usability issues on the other hand," she said — and it's only going to get more important.

"Sensors are going to be everywhere. The problem will get more serious when smart kitchens, smart homes and smart cities are connected via the internet of things," she said.

Preventive measures

It sounds obvious, but the first step users should take to protect themselves is to choose more complex passcodes. Previous research has found that 27 per cent of four-digit PINs in use could be cracked with a list of just 20 dead-easy combinations such as "1111" or "1234," said Mehrnezhad.
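The arithmetic behind that warning is easy to check. The figures below use the numbers from the article (10,000 possible four-digit PINs; a 20-entry common list covering roughly 27 per cent of PINs in use) purely for illustration:

```javascript
// Back-of-the-envelope comparison, using the article's numbers.
const totalPins = 10 ** 4;        // four digits: 0000..9999
const topListCoverage = 0.27;     // share of real PINs in the 20 common codes

// Odds that three purely random guesses hit an arbitrary PIN:
const randomOdds = 3 / totalPins; // 0.0003, i.e. 0.03 per cent

// An attacker who simply works down the common list first does far better:
const advantage = topListCoverage / randomOdds;
console.log(advantage);
```

Trying common codes is on the order of 900 times more effective than random guessing, which is why avoiding the obvious combinations matters so much.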

"I know people hate it because it's not convenient," she said, but it's also critical to change your passwords regularly.

In addition, keep your operating systems up to date, only download apps from trusted sources like Google Play or the App Store, delete apps you're not using, and close both apps and browser tabs when you're done using them, she said.

Source: cbc.ca
