A new technique for studying exoplanet atmospheres could make it possible for scientists to get a close look at the atmospheres of planets like Proxima b in the 2020s.

A newly proposed technique could make it possible to search for life on alien planets much sooner than scientists had expected.  

Earlier this year, scientists discovered a planet orbiting the nearest star to Earth's own sun. Although relatively little is known about this newly discovered planet, which was dubbed Proxima b, evidence suggests it's possible that it has the right conditions to support life.

Of course, scientists are eager to look for signs of life on Proxima b (and members of the general public are eager to hear the results). But a deep look at the planet's atmosphere, where signs of life might hide, might require massive, next-generation, space-based telescopes that aren't expected to get off the ground until at least the 2030s.

But now, at least two different groups of astronomers are investigating a method for doing atmospheric studies of Proxima b — and other, possibly habitable planets like it — using ground-based telescopes that are scheduled to come online in the 2020s, significantly cutting down on the wait time.

Vermin of the sky

Thousands of planets have been identified around stars other than our own, a majority of them in the past six years, thanks to the dedicated Kepler space telescope (although many other observatories have contributed to this exoplanet treasure trove).

But finding planets is much different from characterizing their properties — things such as a planet's mass and diameter; whether it is made of rock or primarily of gas; its surface temperature; whether it has an atmosphere; and what that atmosphere is composed of.  

Earlier this month, at a workshop hosted by the National Academy of Sciences that explored the search for life beyond Earth, Matteo Brogi, a Hubble fellow at the University of Colorado, described a method for studying the atmosphere of Proxima b using next-generation ground-based telescopes.

The approach could be applied to other planets that, like Proxima b, are rocky, and orbit in the habitable zone of relatively cool stars, known as red dwarfs. The astronomical community is already emphasizing the search for "Earth-like" planets around these small stars because the latter are incredibly common in the galaxy; astronomers have even jokingly referred to red dwarfs as the "vermin of the sky."

"The frequency of small planets around small stars is extremely high; on average, there are about 2.5 planets per star," Brogi said. "Regarding habitable planets around small stars, there should be more or less a frequency of close to 30 percent. So every three stars should have a habitable planet."

An accordion of light

The approach Brogi and his colleagues are investigating would combine two different techniques for studying stars and exoplanets. The first is an extremely common technique in astronomy called high-resolution spectroscopy, which essentially looks at light from an object in extremely fine detail.

To understand high-resolution spectroscopy, consider the way sunlight passes through a prism and produces a rainbow; the glass takes the light and fans it out like an accordion, revealing that the whitish colored light is actually composed of various colors.

Spectroscopy spreads the light out even more — stretching that accordion out to unrealistic lengths for a musical instrument — revealing finer and finer detail about the colors (wavelengths) that are contained in the light from stars, planets and other cosmic objects. The resulting band of colors is called an object's spectrum.

The first scientists to use spectroscopy discovered something so amazing that, without it, the field of modern astronomy might be entirely unrecognizable: Chemical elements leave a unique fingerprint in the light spectrum. In other words, if a star is made of hydrogen and helium, those elements will leave a distinct signature on the light the star emits — when astronomers fan out the light from the star, they can see that signature in the wavelengths that are present or not present. This tool has allowed astronomers to learn about the composition of objects billions of light-years away, and helped to uncover the incredible fact that we are all made of stardust.

So if spectroscopy can be applied to the light coming from exoplanets, scientists might get a look at the composition of the planetary atmospheres. It's still unclear to scientists which atmospheric chemical mixtures would strongly indicate the presence of life — most plants on Earth consume carbon dioxide and produce oxygen, and other forms of life produce methane, so a combination with high levels of oxygen and methane might indicate the presence of biology. However, there are potential false positives and false negatives, not to mention potential life-forms that consume and produce different chemicals than living organisms on Earth.

But there are a couple of hurdles standing in the way of performing spectroscopy on a planet, and one of the biggest is that trying to see the light from a planet (which is fairly dim) when it is orbiting right next to a star (which is incredibly bright) is like trying to see the glow of a firefly against a backdrop of 1,000 stage spotlights (which would be difficult).

So Brogi and his colleagues have proposed a way to help separate those two sources of light. Because the planet is moving around the star, it is also moving toward, and then away from, the Earth throughout its orbit. When a source of light moves toward an observer, the light waves become compressed; when the source moves away from the observer, the light waves become stretched out. This is called the Doppler effect; a shift toward longer wavelengths is a redshift, and a shift toward shorter wavelengths is a blueshift. The same thing happens with sound waves, which is why a police siren moving toward you sounds like it is rising in pitch: the waves get pushed together so that they literally have a higher frequency. When the car passes you and starts moving away, the siren seems to drop in pitch, because the waves get stretched out and the frequency goes down.
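To put a rough number on the effect (a back-of-the-envelope sketch, not part of the researchers' actual analysis), the wavelength shift scales with the source's velocity along our line of sight; the velocity and the spectral line used below are hypothetical values chosen only for illustration:

```python
# Illustrative only: how orbital motion Doppler-shifts a spectral line.
# The 47 km/s radial velocity and the 760 nm line are hypothetical values.

C = 299_792.458  # speed of light, km/s

def doppler_shifted_wavelength(rest_wavelength_nm, radial_velocity_km_s):
    """Non-relativistic Doppler shift; positive velocity = moving away (redshift)."""
    return rest_wavelength_nm * (1 + radial_velocity_km_s / C)

rest = 760.0                                        # an absorption line, in nanometers
shifted = doppler_shifted_wavelength(rest, -47.0)   # source moving toward us
print(f"{rest:.3f} nm shifts to {shifted:.3f} nm "
      f"({(shifted - rest) * 1000:+.1f} pm)")
```

Shifts of this size are a tiny fraction of the wavelength itself, which is why the method leans on high-resolution spectrographs that can resolve such small changes.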

The idea is that, out of the sea of light coming from a distant star, scientists could pick out the island of light coming from the planet by looking for the redshifted/Doppler shifted light. (This also could be used to separate any interference from Earth's own atmosphere.) Looking for those shifts in the light also falls under the header of spectroscopy.

Nonetheless, the Doppler shift approach wouldn't be powerful enough to work on its own, and this is where the second technique comes in: Astronomers would need to directly image the star or planet system first.

The planet-finding technique known as "direct imaging" is pretty much what it sounds like: an attempt to get a direct snapshot of both a planet and the star it orbits. To do this, scientists try to reduce the star's blinding glare enough so that they can see the light from the planet. It's a challenging method and one that can't be done for just any system: the planet has to be sufficiently bright compared to its parent star (which is why most of the planets seen with direct imaging so far are gas giants like Jupiter), and the system has to be oriented in such a way that the planet can be viewed clearly from Earth.

So Brogi and his colleagues proposed the method of first directly imaging the planetary system, using that image to locate the planet, and then further separating the planet's light from the star's light using the Doppler method. From there, they can use high-resolution spectroscopy to learn about the planet's atmosphere.

Telescopes currently in operation don't have the sensitivity to make this plan a reality, but some very large telescopes currently under development could. These scopes should be able to directly image smaller planets, as long as those planets are orbiting dimmer stars. Those include the Giant Magellan Telescope, scheduled to turn on around 2021, and the European Extremely Large Telescope, set to begin taking data as early as 2024. Direct imaging capabilities are likely to improve by leaps and bounds with these telescopes, but with direct imaging alone, it will likely not be possible to characterize many Earth-size, potentially habitable worlds.

During his talk, Brogi said there should be "on the order of 10" potentially habitable planets that this method could identify and study.

Challenges and progress

Brogi noted that there are caveats to the plan. For example, many of the predictions that he and his team made about how sensitive the method would be were "based on best-case scenarios," so dealing with real data will undoubtedly pose challenges. Moreover, the method compares the observed planetary spectra with laboratory experiments that recreate the expected spectra for various chemical elements, which means any errors in that laboratory work will carry over into the planet studies. But overall, Brogi said he and his colleagues think the approach could provide a better glimpse of the atmospheres of small, rocky, potentially habitable planets than scientists are likely to see for a few decades.

They aren't the only group that thinks so. Researchers based at the California Institute of Technology (Caltech) are investigating this approach as well, according to Dimitri Mawet, an associate professor of astronomy at Caltech. Mawet and his colleagues call the approach high dispersion coronagraphy (HDC) — a combination of high-resolution spectroscopy and high-contrast imaging techniques (direct imaging). (Similar lines of thought have been proposed by other groups.)

Mawet told Space.com in an email that he and his colleagues recently submitted two research papers that explore the "practical limits of HDC" and demonstrate "a promising instrument concept in the lab at Caltech." He said he and his colleagues plan to test the technique using the Keck telescope, located in Hawaii, "about two years from now," to study young, giant planets (so not very Earth-like). He confirmed that to use the technique to study small, rocky planets like Proxima b, scientists will have to wait for those next-generation, ground-based telescopes, like the Giant Magellan Telescope and the European Extremely Large Telescope. He also confirmed Brogi's estimation of "on the order of 10" rocky exoplanets in the habitable zone of their stars that could be studied using this technique.

"As [Brogi] mentioned, there are several caveats associated with the HDC technique," Mawet told Space.com. "However, we are working on addressing them and, in the process, studying the fundamental limits of the technique. Our initial results are very promising, and exciting."

Author: Calla Cofield

Source: space.com

When a star is born, it needs a healthy diet of gas and dust to grow up into a big, powerful star like our Sun. Now, for the first time, scientists have directly observed a protostar going through this early “feeding” process. The discovery, published today in Science Advances, settles an old debate about exactly what happens when a new star is born.

Using high-powered radio telescopes, researchers recorded a so-called accretion disk forming around a star named IRAS 05413-0104. These rotating disks are made up of interstellar matter, including iron and silicate, and feed the star’s core, causing it to grow in size. In the case of this particular newborn star, the disk even looked like a hamburger, according to the study.


“There’s a dark lane in the middle where it’s colder, and brighter features on the top and bottom where the matter is being heated by the center of the star,” Chin-Fei Lee, a researcher at Taiwan’s Institute of Astronomy and Astrophysics and lead author of the study, tells The Verge. “I think this is very exciting science. It’s never been seen before.”

Accretion disks like this have been glimpsed around older, bigger stars in the past, says Lee, but never around such a small protostar. The star has a mass of between a fifth and a third of the Sun’s, and is just 40,000 years old, while our own Sun has been around for some 4.5 billion years.

IRAS 05413-0104’s accretion disk also settles an old debate in the science of star formation. Up until now, astrophysicists weren’t sure whether or not these disks even formed around very young stars. Computer simulations suggested that the magnetic fields in the protostar’s core might be too intense, stopping the disk from spinning and gathering matter. Now we know this isn’t the case, although the exact interaction of the magnetic field and the accretion disk has yet to be fully mapped out.

Figure: a) a zoomed-out view of the star, with two gaseous jets ejected from its top and bottom; b) a close-up of the accretion disk; c) a model of the same. The darker regions are colder gas; the brighter regions are hotter. Image: ALMA (ESO/NAOJ/NRAO)/Lee et al.
 

The researchers used a group of radio telescopes in Chile known as ALMA (or the Atacama Large Millimeter/submillimeter Array) to find the disk. ALMA is the most expensive ground-based telescope in the world, and became fully operational in mid-2013 after costing $1.4 billion to build.

Gilles Chabrier, an astrophysicist at the University of Exeter who researches star formation and did not take part in the study, praised the paper and said the discovery was “just the tip of the iceberg.” “Before ALMA the resolution was too crude to probe these early interstellar discs,” Chabrier tells The Verge. “I think very soon we are going to find a lot of these.”

Source: theverge.com


Wednesday, 12 April 2017 00:10

The Real Reasons People Troll

Trolling can spread from person to person, research shows.

Credit: blambca/Shutterstock

This article was originally published at The Conversation. The publication contributed the article to Live Science's Expert Voices: Op-Ed & Insights.

"Fail at life. Go bomb yourself."

Comments like this one, found on a CNN article about how women perceive themselves, are prevalent today across the internet, whether it's Facebook, Reddit or a news website. Such behavior can range from profanity and name-calling to personal attacks, sexual harassment or hate speech.

A recent Pew Internet Survey found that four out of 10 internet users have been harassed online, with far more having witnessed such behavior. Trolling has become so rampant that several websites have even resorted to completely removing comments.

Many believe that trolling is done by a small, vocal minority of sociopathic individuals. This belief has been reinforced not only in the media, but also in past research on trolling, which focused on interviewing these individuals. Some studies even showed that trolls have predisposing personal and biological traits, such as sadism and a propensity to seek excessive stimulation.

But what if all trolls aren't born trolls? What if they are ordinary people like you and me? In our research, we found that people can be influenced to troll others under the right circumstances in an online community. By analyzing 16 million comments made on CNN.com and conducting an online controlled experiment, we identified two key factors that can lead ordinary people to troll.

What makes a troll?

We recruited 667 participants through an online crowdsourcing platform and asked them to first take a quiz, then read an article and engage in discussion. Every participant saw the same article, but some were given a discussion that had started with comments by trolls, while others saw neutral comments instead. Here, trolling was defined using standard community guidelines – for example, name-calling, profanity, racism or harassment. The quiz given beforehand was also varied to be either easy or difficult.

Our analysis of comments on CNN.com helped to verify and extend these experimental observations.

The first factor that seems to influence trolling is a person's mood. In our experiment, people put into negative moods were much more likely to start trolling. We also discovered that trolling ebbs and flows with the time of day and day of week, in sync with natural human mood patterns. Trolling is most frequent late at night, and least frequent in the morning. Trolling also peaks on Monday, at the beginning of the work week.

Time to troll

The proportion of flagged posts peaks late at night, according to a study of comments on CNN.com.

Source: Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions (CSCW 2017)

Moreover, we discovered that a negative mood can persist beyond the events that brought about those feelings. Suppose that a person participates in a discussion where other people wrote troll comments. If that person goes on to participate in an unrelated discussion, they are more likely to troll in that discussion too.

The second factor is the context of a discussion. If a discussion begins with a "troll comment," then it is twice as likely to be trolled by other participants later on, compared to a discussion that does not start with a troll comment.

In fact, these troll comments can add up. The more troll comments in a discussion, the more likely that future participants will also troll the discussion. Altogether, these results show how the initial comments in a discussion set a strong, lasting precedent for later trolling.

We wondered if, by using these two factors, we could predict when trolling would occur. Using machine learning algorithms, we were able to forecast whether a person was going to troll about 80 percent of the time.
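As an illustration of that kind of forecasting (a minimal sketch with made-up features and data, not the study's actual model), one could encode the two factors, the commenter's mood and how many troll comments already appear in the thread, and fit a simple classifier:

```python
# Minimal sketch (not the study's model or data): predicting whether the next
# comment will be flagged, from the commenter's mood and the discussion context.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training rows: [negative_mood (0/1), prior_troll_comments_in_thread]
X = np.array([[0, 0], [0, 1], [1, 0], [1, 2], [0, 3], [1, 3], [0, 0], [1, 1]])
y = np.array([0, 0, 1, 1, 1, 1, 0, 0])  # 1 = the person posted a flagged comment

model = LogisticRegression().fit(X, y)

# Probability that someone in a bad mood, entering a thread that already has
# two troll comments, will post a troll comment themselves.
print(model.predict_proba([[1, 2]])[0, 1])
```

The point of the sketch is only that situational features (mood and context), rather than who the user is, carry the predictive signal, which is what the study reports.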

Interestingly, mood and discussion context were together a much stronger indicator of trolling than identifying specific individuals as trolls. In other words, trolling is caused more by the person's environment than any inherent trait.

Since trolling is situational, and ordinary people can be influenced to troll, such behavior can end up spreading from person to person. A single troll comment in a discussion – perhaps written by a person who woke up on the wrong side of the bed – can lead to worse moods among other participants, and even more troll comments elsewhere. As this negative behavior continues to propagate, trolling can end up becoming the norm in communities if left unchecked.

Fighting back

Despite these sobering results, there are several ways this research can help us create better online spaces for public discussion.

By understanding what leads to trolling, we can now better predict when trolling is likely to happen. This can let us identify potentially contentious discussions ahead of time and preemptively alert moderators, who can then intervene in these aggressive situations.

Machine learning algorithms can also sort through millions of posts much more quickly than any human. By training computers to spot trolling behavior, we can identify and filter undesirable content with much greater speed.

Social interventions can also reduce trolling. If we allow people to retract recently posted comments, then we may be able to minimize regret from posting in the heat of the moment. Altering the context of a discussion, by prioritizing constructive comments, can increase the perception of civility. Even just pinning a post about a community's rules to the top of discussion pages helps, as a recent experiment conducted on Reddit showed.

Nonetheless, there's lots more work to be done to address trolling. Understanding the role of organized trolling can limit some types of undesirable behavior.

Trolling also can differ in severity, from swearing to targeted bullying, which necessitates different responses.

It's also important to differentiate the impact of a troll comment from the author's intent: Did the troll mean to hurt others, or was he or she just trying to express a different viewpoint? This can help separate undesirable individuals from those who just need help communicating their ideas.

When online discussions break down, it's not just sociopaths who are to blame. We are also at fault. Many "trolls" are just people like ourselves who are having a bad day. Understanding that we're responsible for both the inspiring and depressing conversations we have online is key to having more productive online discussions.

Jure Leskovec at Stanford University also contributed to this article.

Justin Cheng, Ph.D. Student in Computer Science, Stanford University; Cristian Danescu-Niculescu-Mizil, Assistant Professor of Information Science, Cornell University; and Michael Bernstein, Assistant Professor of Computer Science, Stanford University

This article was originally published on The Conversation. Read the original article.

The first full moon of spring for observers in the Northern Hemisphere will peak early Tuesday (or late tonight), depending on your time zone. 

The April full moon, known as a "Pink Moon," will shine overnight tonight (April 10 to 11), and skywatchers will be able to see the bright, beautiful moon all night long.

For those who want to catch the moon at its fullest and most illuminated — when it is exactly opposite the Earth from the sun — the time to look is 2:08 a.m. EDT (0608 GMT) the morning of April 11 — or 11:08 p.m. PDT tonight (April 10) for those on the West Coast. The moon will first rise at 8:02 p.m. EDT (0002 GMT, April 11) on the East Coast, and will rise at 7:04 p.m. PDT on the West Coast.

 

The Pink Moon is also called the Sprouting Grass Moon, the Egg Moon and the Fish Moon. The "pink" part of its title, used by Native Americans and early colonial Americans, refers to a spring flower called moss pink, or wild ground phlox, that appears around the same time as the moon, according to the Old Farmer's Almanac.

Skywatchers will also be able to spot Jupiter near the moon tonight — viewers in Europe and farther east will be able to see them reach within two degrees of each other in the sky. (An outstretched fist in the sky covers about 10 degrees.) At points farther west, like the United States, observers will still be able to see the moon, Jupiter and the Spica star system form a tight triangle in the sky.

Jupiter was recently in the same position as the moon now is — on the opposite side of Earth from the sun, called opposition — which marked the closest it will come to Earth in 2017.

Source : space.com

Google has released an update for its iOS app which takes advantage of the 3D touch controls on the iPhone 6S, iPhone 7, and iPad Pro models.

In addition, Google’s iOS app now comes packaged with Gboard as well as its first-ever widget. Here’s a deeper look at what’s new.

3D Touch Controls

Google is catering to owners of the latest iOS devices with a slew of new 3D touch controls. Now you can save time by hard pressing on the Google app icon to gain quicker access to what you want to do.

Hard pressing on the Google app gives you the options to conduct a quick search, a voice search, an image search, or search in incognito mode. You can also get a look at what’s currently trending on Google.

There are also more 3D Touch controls once you’re in the app. Hard pressing on a search snippet, or on cards in your Google Now feed, will bring up a preview of the content.

When you want to conduct a new search, just hard press on the G button at the bottom.

Trending on Google Widget

Google finally has an iPhone widget! The “Trending on Google” widget will keep you informed of the hottest searches and breaking news from around the world, updated in real-time. If you see a topic you’re interested in, just tap on it to open up a set of search results in the Google app.

There are a couple of ways you can install this widget. The easiest is to hard press on the app icon and select ‘Add Widget’. Alternatively, you can scroll to the bottom of the widget screen, tap “Edit”, then add “Trending on Google.”

Gboard

Google is still heavily pushing Gboard, its alternative keyboard for iOS. It first launched as a standalone app, but now it comes packaged with the Google app. So if you wish to use Gboard as your iPhone’s default keyboard, you can now do so by having the Google app installed. Keep in mind that it requires an inordinate amount of privacy permissions in order to use it, compared to other third-party keyboards.

Source : searchenginejournal.com

It sounds like something out of a Dan Brown book, but it isn't: The whole internet is protected by seven highly protected keys in the hands of 14 people.

And in a few days, they will hold a historic ritual known as the Root Signing Ceremony.

On Friday morning, the world got a good reminder about the importance of the organization these people belong to.

A good chunk of the internet went down for a while when hackers managed to throw so much traffic at a company called Dyn that Dyn's servers couldn't take it.

Dyn is a major provider of something called a Domain Name System, which translates web addresses such as businessinsider.com (easier for humans to remember) into the numerical IP addresses that computers use to identify web pages.
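As a quick illustration of what that translation step looks like in practice (using only Python's standard library; the addresses returned will vary by resolver and over time):

```python
# What a DNS lookup does: translate a hostname into the numerical
# addresses computers actually connect to. Standard library only.
import socket

infos = socket.getaddrinfo("businessinsider.com", 443, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in infos:
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])  # the IP address the name resolved to
```

If the resolver behind a lookup like this stops answering, which is what the attack on Dyn caused for its customers, the name never turns into an address, and the site is effectively unreachable even though its servers are fine.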

Dyn is just one DNS provider. And while hackers never gained control of its network, successfully taking it offline for even just a few hours via a distributed denial of service attack shows how much the internet relies on DNS. This attack briefly brought down sites like Business Insider, Amazon, Twitter, Github, Spotify, and many others.

Upshot: If you control all of DNS, you can control all of the internet

DNS at its highest levels is secured by a handful of people around the world, known as crypto officers.

Every three months since 2010, some — but typically not all — of these people gather to conduct a highly secure ritual known as a key ceremony, where the keys to the internet's metaphorical master lock are verified and updated.

The people conducting the ceremony are part of an organization called the Internet Corporation for Assigned Names and Numbers. ICANN is responsible for assigning numerical internet addresses to websites and computers.

If someone were to gain control of ICANN's database, that person would pretty much control the internet. For instance, the person could send people to fake bank websites instead of real bank websites.

To protect DNS, ICANN came up with a way of securing it without entrusting too much control to any one person. It selected seven people as key holders and gave each one an actual key to the internet. It selected seven more people as backup key holders — 14 people in all. The ceremony requires that at least three of them, and their keys, attend, because three keys are needed to unlock the equipment that protects DNS. The Guardian's James Ball wrote a great story about them in 2014.

Participants in the August 2016 ICANN key ceremony. Image: ICANN

A highly scripted ritual

The physical keys unlock safe deposit boxes. Inside those boxes are smart key cards. It takes multiple keys to gain access to the device that generates the internet's master key.

That master key is really some computer code known as a root key-signing key. It is a password of sorts that can access the master ICANN database. This key generates more keys that trickle down to protect various bits and pieces of the internet, in various places, used by different internet security organizations.

The security surrounding the ceremonies before and after is intense. It involves participants passing through a series of locked doors using key codes and hand scanners until they enter a room so secure that no electronic communications can escape it. Inside the room, the crypto officers assemble along with other ICANN officials and typically some guests and observers.

The whole event is heavily scripted, meticulously recorded, and audited. The exact steps of the ceremony are mapped out in advance and distributed to the participants so that if any deviation occurs the whole room will know.

The group conducts the ceremony, as scripted, then each person files out of the room one by one. They've been known to go to a local restaurant and celebrate after that.

But as secure as all of this is, the internet is an open piece of technology not owned by any single entity. The internet was invented in the US, but the US relinquished its decades of stewardship of DNS earlier this month. ICANN is officially in charge.

Keenly aware of its international role and the worldwide trust placed in it, ICANN lets anyone monitor this ceremony, providing a live stream over the internet. It also publishes the scripts for each ceremony.

On October 27, ICANN will hold another ceremony — and this one will be historic, too. For the first time, it will change out the master key itself. Technically speaking, it will change the "key pair" upon which all DNS security is built, known as the Root Zone Signing Key.

"If you had this key and were able to, for example, generate your own version of the root zone, you would be in the position to redirect a tremendous amount of traffic," Matt Larson, vice president of research at ICANN, recently told Motherboard's Joseph Cox.

Here's an in-depth description of the ceremony by CloudFlare's Ólafur Guðmundsson.

Here's a video of the very first key ceremony, conducted in 2010. Skip to 1:58 to see it.

 

Source : http://www.businessinsider.com/the-internet-is-controlled-by-secret-keys-2016-10

Much of the data of the World Wide Web hides like an iceberg below the surface. The so-called 'deep web' has been estimated to be 500 times bigger than the 'surface web' seen through search engines like Google. For scientists and others, the deep web holds important computer code and its licensing agreements. Nestled further inside the deep web, one finds the 'dark web,' a place where images and video are used by traders in illicit drugs, weapons, and human trafficking. A new data-intensive supercomputer called Wrangler is helping researchers obtain meaningful answers from the hidden data of the public web.

The Wrangler supercomputer got its start in response to the question, can a computer be built to handle massive amounts of I/O (input and output)? The National Science Foundation (NSF) in 2013 got behind this effort and awarded the Texas Advanced Computing Center (TACC), Indiana University, and the University of Chicago $11.2 million to build a first-of-its-kind data-intensive supercomputer. Wrangler's 600 terabytes of lightning-fast flash storage enabled the speedy reads and writes of files needed to fly past big data bottlenecks that can slow down even the fastest computers. It was built to work in tandem with number crunchers such as TACC's Stampede, which in 2013 was the sixth fastest computer in the world.

While Wrangler was being built, a separate project came together headed by the Defense Advanced Research Projects Agency (DARPA) of the U.S. Department of Defense. Back in 1969, DARPA had built the ARPANET, which eventually grew to become the Internet, as a way to exchange files and share information. In 2014, DARPA wanted something new - a search engine for the deep web. They were motivated to uncover the deep web's hidden and illegal activity, according to Chris Mattmann, chief architect in the Instrument and Science Data Systems Section of the NASA Jet Propulsion Laboratory (JPL) at the California Institute of Technology.

"Behind forms and logins, there are bad things. Behind the dynamic portions of the web like AJAX and Javascript, people are doing nefarious things," said Mattmann. They're not indexed because the web crawlers of Google and others ignore most images, video, and audio files. "People are going on a forum site and they're posting a picture of a woman that they're trafficking. And they're asking for payment for that. People are going to a different site and they're posting illicit drugs, or weapons, guns, or things like that to sell," he said.

Mattmann added that an even more inaccessible portion of the deep web called the 'dark web' can only be reached through a special browser client and protocol called TOR, The Onion Router. "On the dark web," said Mattmann, "they're doing even more nefarious things." They traffic in guns and human organs, he explained. "They're basically doing these activities and then they're tying them back to terrorism."

In response, DARPA started a program called Memex. Its name blends 'memory' with 'index' and has roots in an influential 1945 Atlantic magazine article penned by U.S. engineer and Raytheon founder Vannevar Bush. His futuristic essay imagined putting all of a person's communications - books, records, and even all spoken and written words - within fingertip reach. The DARPA Memex program sought to make the deep web accessible. "The goal of Memex was to provide search engines the information retrieval capacity to deal with those situations and to help defense and law enforcement go after the bad guys there," Mattmann said.

Karanjeet Singh is a University of Southern California graduate student who works with Chris Mattmann on Memex and other projects. "The objective is to get more and more domain-specific (specialized) information from the Internet and try to make facts from that information," Singh said. He added that agencies such as law enforcement continue to tailor their questions to the limitations of search engines. In some ways the cart leads the horse in deep web search. "Although we have a lot of search-based queries through different search engines like Google," Singh said, "it's still a challenge to query the system in a way that answers your questions directly."

Once Memex users extract the information they need, they can apply tools such as named-entity recognition, sentiment analysis, and topic summarization. This can help law enforcement agencies like the U.S. Federal Bureau of Investigation find links between different activities, such as illegal weapon sales and human trafficking, Singh explained.

"Let's say that we have one system directly in front of us, and there is some crime going on," Singh said. "The FBI comes in and they have some set of questions or some specific information, such as a person with such hair color, this much age. Probably the best thing would be to mention a user ID on the Internet that the person is using. So with all three pieces of information, if you feed it into the Memex system, Memex would search in the database it has collected and would yield the web pages that match that information. It would yield the statistics, like where this person has been or where it has been sited in geolocation and also in the form of graphs and others."

"What JPL is trying to do is trying to automate all of these processes into a system where you can just feed in the questions and and we get the answers," Singh said. For that he worked with an open source web crawler called Apache Nutch. It retrieves and collects web page and domain information of the . The MapReduce framework powers those crawls with a divide-and-conquer approach to big data that breaks it up into small pieces that run simultaneously. The problem is that even the fastest computers like Stampede weren't designed to handle the input and output of millions of files needed for the Memex project.


The Wrangler data-intensive supercomputer avoids data overload by virtue of its 600 terabytes of speedy flash storage. What's more, Wrangler supports the Hadoop framework, which runs using MapReduce. "Wrangler, as a platform, can run very large Hadoop-based and Spark-based crawling jobs," Mattmann said. "It's a fantastic resource that we didn't have before as a mechanism to do research; to go out and test our algorithms and our new search engines and our crawlers on these sites; and to evaluate the extractions and analytics and things like that afterwards. Wrangler has been an amazing resource to help us do that, to run these large-scale crawls, to do these type of evaluations, to help develop techniques that are helping save people, stop crime, and stop terrorism around the world."

Singh and Mattmann don't just use Wrangler to help fight crime. A separate project looks for a different kind of rule breaker. The Distributed Release Audit Tool (DRAT) audits software licenses of massive code repositories, which can store hundreds of millions of lines of code and millions of files. DRAT got its start because DARPA needed to audit the massive code repository of its national-scale 100-million-dollar-funded presidential initiative called XDATA. Over 60 different kinds of software licenses exist that authorize the use of code. What got lost in the shuffle of XDATA is whether developers followed DARPA guidelines of permissive and open source licenses, according to Chris Mattmann.

Mattmann's team at NASA JPL initially took the job on with an Apache open source tool called RAT, the Release Audit Tool. Right off the bat, big problems came up working with the big data. "What we found after running RAT on this very large code repository was that after about three or four weeks, RAT still hadn't completed. We were running it on a supercomputer, a very large cloud computer. And we just couldn't get it to complete," Mattmann said. Some other problems with RAT bugged the team. It didn't give status reports. And RAT would get hung up checking binary code - the ones and zeroes that typically just hold data such as video and were not the target of the software audit.

Mattmann's team took RAT and tailored it for parallel computers with a distributed algorithm, mapping the problem into small chunks that run simultaneously over the many cores of a supercomputer. It's then reduced into a final result. The MapReduce workflow runs on top of the Apache Object Oriented Data Technology, which integrates and processes scientific archives.
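A rough sketch of that idea, not DRAT's actual code but the pattern it follows (with assumed license markers and a crude binary-file check), looks like this: split the repository into one small task per file, classify the files in parallel, and reduce the results into a single summary.

```python
# Rough sketch of the DRAT idea (not its implementation): scan a code tree
# for license headers in parallel, skip binary files, and reduce the
# per-file results into one summary.
import os
from multiprocessing import Pool
from collections import Counter

LICENSE_MARKERS = {
    "Apache License": "Apache-2.0",
    "MIT License": "MIT",
    "GNU General Public License": "GPL",
}

def classify_file(path):
    try:
        with open(path, "rb") as f:
            head = f.read(4096)
        if b"\x00" in head:                   # crude binary check: skip it
            return "binary"
        text = head.decode("utf-8", errors="ignore")
        for marker, name in LICENSE_MARKERS.items():
            if marker in text:
                return name
        return "no license header"
    except OSError:
        return "unreadable"

def audit(repo_root):
    paths = [os.path.join(d, f) for d, _, files in os.walk(repo_root) for f in files]
    with Pool() as pool:                      # map: one small task per file
        results = pool.map(classify_file, paths)
    return Counter(results)                   # reduce: combine into a summary

if __name__ == "__main__":
    print(audit("."))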

The distributed version of RAT, or DRAT, was able to complete the XDATA job in two hours on a Mac laptop that previously hung up a 24-core, 48 GB RAM supercomputer at NASA for weeks. DRAT was ready for even bigger challenges.

"A number of other projects came to us wanting to do this," Mattmann said. The EarthCube project of the National Science Foundation had a very large climate modeling repository and sought out Mattmann's team. "They asked us if all these scientists are putting licenses on their code, or whether they're open source, or if they're using the right components. And so we did a very big, large auditing for them," Mattmann said.

"That's where Wrangler comes in," Karanjeet Singh said. "We have all the tools and equipment on Wrangler, thanks to the TACC team. What we did was we just configured our DRAT tool on Wrangler and ran distributedly with the compute nodes in Wrangler. We scanned whole Apache SVN repositories, which includes all of the Apache open source projects."

The project Mattmann's team is working on in early 2017 is to run DRAT on the Wrangler supercomputer over essentially all of the code that Apache has developed in its history - including over 200 projects with over two million revisions in a code repository on the order of hundreds of millions to billions of files.

"This is something that's only done incrementally and never done at that sort of scale before. We were able to do it on Wrangler in about two weeks. We were really excited about that," Mattmann said.

Apache Tika formed one of the key components to the success of DRAT. It discerns Multipurpose Internet Mail Extensions (MIME) file types and extracts its metadata, the data about the data. "We call Apache Tika the 'babel fish,' like 'The Hitchhiker's Guide to the Galaxy,'" Mattmann said. "Put the babel fish to your ear to understand any language. The goal with Tika is to provide any type of file, any file found on the Internet or otherwise to it and it will understand it for you at the other end...A lot of those investments and research approaches in Tika have been accelerated through these projects from DARPA, NASA, and the NSF that my group is funded by," Mattmann said.
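For a sense of what Tika does at the file level, here is a minimal sketch using the tika-python bindings (this assumes the tika Python package and a Java runtime are available, and the input file name is hypothetical; the library launches a local Tika server on first use):

```python
# Minimal sketch: detect a file's type and pull out its metadata and text
# with the tika-python bindings. "example.pdf" is a hypothetical input.
from tika import parser

result = parser.from_file("example.pdf")
print(result["metadata"].get("Content-Type"))   # detected MIME type
print((result["content"] or "")[:200])          # first bit of extracted text
```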


File type breakdown of XDATA. Credit: Chris Mattmann

"A lot of the metadata that we're extracting is based on these machine-learning, clustering, and named-entity recognition approaches. Who's in this image? Or who's it talking about in these files? The people, the places, the organizations, the dates, the times. Because those are all very important things. Tika was one of the core technologies used - it was one of only two - to uncover the Panama Papers global controversy of hiding money in offshore global corporations," Mattmann said.

Chris Mattmann, the first NASA staffer to join the board of the Apache Foundation, helped create Apache Tika, along with the scalable text search engine Apache Lucene and the search platform Apache Solr. "Those two core technologies are what they used to go through all the leaked (Panama Papers) data and make the connections between everybody - the companies, and people, and whatever," Mattmann said.

Mattmann gets these core technologies to scale up on supercomputers by 'wrapping' them up on the Apache Spark framework software. Spark is basically an in-memory version of the Apache Hadoop capability MapReduce, intelligently sharing memory across the compute cluster. "Spark can improve the speed of Hadoop type of jobs by a factor of 100 to 1,000, depending on the underlying type of hardware," Mattmann said.
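A minimal PySpark sketch (assuming a local Spark installation; the records are made up) shows the same kind of counting job expressed as a Spark job, with intermediate data held in memory rather than written back to disk between steps:

```python
# Minimal PySpark sketch: a map/reduce-style counting job run through Spark,
# which keeps intermediate data in memory across the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mime-count-sketch").getOrCreate()
records = spark.sparkContext.parallelize([
    ("text/html", 1), ("image/jpeg", 1), ("text/html", 1),
])
counts = records.reduceByKey(lambda a, b: a + b).collect()
print(counts)
spark.stop()
```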

"Wrangler is a new generation system, which supports good technologies like Hadoop. And you can definitely run Spark on top of it as well, which really solves the new technological problems that we are facing," Singh said.

Making sense out of data guides much of the worldwide effort behind 'machine learning,' a slightly oxymoronic term according to computer scientist Thomas Sterling of Indiana University. "It's a somewhat incorrect phrase because the machine doesn't actually understand anything that it learns. But it does help people see patterns and trends within data that would otherwise escape us. And it allows us to manage the massive amount and extraordinary growth of information we're having to deal with," Sterling said in a 2014 interview with TACC.

One application of machine learning that interested NASA JPL's Chris Mattmann is TensorFlow, developed by Google. It offers commodity-based access to very large-scale machine learning. TensorFlow's Inception version three model trains the software to classify images. From a picture the model can basically tell a stop sign from a cat, for instance. Incorporated into Memex, Mattmann said, TensorFlow takes its web crawls of images and video and looks for descriptors that can aid in "catching a bad guy or saving somebody, identifying an illegal weapon, identifying something like counterfeit electronics, and things like this."
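As an illustration of what Inception-v3 classification looks like in code (a sketch using TensorFlow's bundled Keras application with pretrained ImageNet weights, not the Memex pipeline itself; the image file name is hypothetical):

```python
# Illustrative sketch: classify one image with a pretrained Inception-v3 model.
# "photo.jpg" is a hypothetical input file; weights download on first run.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)

model = InceptionV3(weights="imagenet")

img = tf.keras.preprocessing.image.load_img("photo.jpg", target_size=(299, 299))
x = preprocess_input(np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), 0))

for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2f}")   # top-3 predicted labels with confidence
```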

"Wrangler is moving into providing TensorFlow as a capability," Mattmann said. "One of the traditional things that stopped a regular Joe from really taking advantage of large-scale machine learning is that a lot of these toolkits like Tensorflow are optimized for a particular type of hardware, GPUs or graphics processing units." This specialized hardware isn't typically found in most computers.

"Wrangler, providing GPU-types of hardware on top of its petabyte of flash storage and all of the other advantages in the types of machines it provides, is fantastic. It lets us do this at very large scale, over lots of data and run these machine learning classifiers and these tool kits and models that exist," Mattmann said.

What's more, Tensorflow is compute intensive and runs very slowly on most systems, which becomes a big problem when analyzing millions of images looking for needles in the haystack. "Wrangler does the job," Singh said. Singh and others of Mattmann's team are currently using Tensorflow on Wrangler. "We don't have any results yet, but we know that - the tool that we have built through Tensorflow is definitely producing some results. But we are yet to test with the millions of images that we have crawled and how good it produces the results," Singh said.

"I'm appreciative," said Chris Mattmann, "of being a member of the advisory board of the staff at TACC and to Niall Gaffney, Dan Stanzione, Weijia Xu and all the people who are working at TACC to make Wrangler accessible and useful; and also for their listening to the people who are doing science and research on it, like my group. It wouldn't be possible without them. It's a national treasure. It should keep moving forward."

Source : https://phys.org/news/2017-02-deep-dark-illuminated.html

SEO, search engine optimization, has come a long way over the past quarter of a century. It is now an essential element of our world. However, while businesses are expanding on a global scale, which would suggest the world is getting smaller, metaphorically speaking, there is actually an increase in people who want to return to slightly older ways, where shopping locally was still the way forward. This is why there has been a surge in people starting local SEO companies, focusing specifically on delivering local services. So what is the future they envisage for SEO?

The Future of SEO

Nobody can truly predict the future, but it is reasonable to make some assumptions based on the fact that history repeats itself, and that we can predict trends. One of the biggest predictions, which is also the most likely to come true, is that there will be a more focused and niche experience as standard with all websites. The focus will continue to point more strongly towards unique, high quality content, intended for the real user.

We also know that the internet is changing itself to provide people with instant, personalized gratification. An online user now wants to type in a search term and be presented with the exact answer to what they were looking for, without having to do any work. This is why smart tech is likely to develop a lot more, and this includes wearable gadgets. People want to be connected all the time, and this is also increasing the need for predictive content solutions.

It is also believed that SEO will not stand still and that it will develop so that it can meet any emerging needs. Some suggest that search will be personalized by leveraging external platform data, which would offer added value. If you are a business, you need to make sure your brand is ready for this. Start working now on optimizing any content you have for different platforms, like mobile apps, and focus on how your users are likely to search for this. You want to drive engagement and exposure by being direct and concise, with a focus on what your users actually want.

It is also reasonable to assume that SEO will remain dominated by Google and that its rules on online content will continue to apply. You need to make sure that your brand is present in all forms of social media, and that it is consistent and strong in its message. You can do this by experimenting with visual content, because it is believed that search engines will soon start to focus on this. And you must make sure that everything you post online can be accessed through any kind of device, from Cortana to a desktop computer.

What we have seen in SEO since its inception is that it is an ethical solution. This won’t change, so as long as you continue to focus on ethical practices, you should be fine.

Author: Anwar Hossain

Source: https://www.theglobaldispatch.com/what-is-the-future-of-search-engine-optimization-90511/

Don't you hate it when your phone loses its internet connection and you can't search for something on Google? Well, Google's latest Android app makes that experience a lot better.

Google announced that the new version of the Google Android search app will work better for you when you are in a poor internet connection area. Now, if you do a search using the app and your internet connection drops, Google will keep trying until the internet connection returns. Then, Google will show you a notification that the app was able to find the search results for your query.

When you first do the query, you will get a notification that your device is offline but Google will let you know when it comes back online. When it does come back online, a new notification will tell you that your search results are ready for viewing. You click on that notification and the search results load.


Google added that this does not impact your data charges or battery life on your device.

Author : Barry Schwartz

Source : http://searchengineland.com/google-android-search-app-will-keep-trying-searches-internet-connection-poor-267708
