Source: This article was published on internetofbusiness.com by Malek Murison - Contributed by Member: Carol R. Venuti

Facebook has announced a raft of measures to prevent the spread of false information on its platform.

Writing in a company blog post on Friday, product manager Tessa Lyons said that Facebook’s fight against fake news has been ongoing through a combination of technology and human review.

However, she also wrote that, given the determination of people seeking to abuse the social network’s algorithms for political and other gains, “This effort will never be finished and we have a lot more to do.”

Lyons went on to announce several updates and enhancements as part of Facebook’s battle to control the veracity of content on its platform. New measures include expanding its fact-checking programme to new countries and developing systems to monitor the authenticity of photos and videos.

Both are significant in the wake of the Cambridge Analytica fiasco. While fake news stories are widely acknowledged or alleged to exist on either side of the left/right political divide, concerns are also growing about the fast-emerging ability to fake videos.


Meanwhile, numerous reports surfaced last year documenting the problem of teenagers in Macedonia producing some of the most successful viral pro-Trump content during the US presidential election.

Other measures outlined by Lyons include increasing the impact of fact-checking, taking action against repeat offenders, and extending partnerships with academic institutions to improve fact-checking results.

Machine learning to improve fact-checking

Facebook already applies machine learning algorithms to detect sensitive content. Though fallible, this software goes a long way toward ensuring that photos and videos containing violence and sexual content are flagged and removed as swiftly as possible.

Now, the company is set to use similar technologies to identify false news and take action on a bigger scale.

In part, that’s because Facebook has become a victim of its own success. With close to two billion registered users, one billion regularly active ones, and over a billion pieces of content posted every day, it is impossible for human fact-checkers to review stories individually without Facebook employing vast teams of people to monitor citizen behavior.

Lyons explained how machine learning is being used, not only to detect false stories but also to detect duplicates of stories that have already been classed as false. “Machine learning helps us identify duplicates of debunked stories,” she wrote.

“For example, a fact-checker in France debunked the claim that you can save a person having a stroke by using a needle to prick their finger and draw blood. This allowed us to identify over 20 domains and over 1,400 links spreading that same claim.”
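Facebook has not published how its duplicate detection works, but the general idea (flag new posts whose text closely matches a claim that fact-checkers have already debunked) can be sketched with a simple text-similarity baseline. The TF-IDF approach and the threshold below are illustrative assumptions, not Facebook's system:

```python
# Minimal sketch: flag posts that look like near-duplicates of debunked claims.
# TF-IDF + cosine similarity is an illustrative baseline, not Facebook's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked_claims = [
    "You can save a person having a stroke by pricking their finger with a needle.",
]
new_posts = [
    "Doctors hate this trick: prick the finger of a stroke victim to save their life!",
    "Berkeley Lab releases new electron microscope images.",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
claim_vecs = vectorizer.fit_transform(debunked_claims)
post_vecs = vectorizer.transform(new_posts)

similarity = cosine_similarity(post_vecs, claim_vecs)
THRESHOLD = 0.3  # illustrative; a real system would tune this against labelled data
for post, scores in zip(new_posts, similarity):
    if scores.max() >= THRESHOLD:
        print("Possible duplicate of a debunked claim:", post)
```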

The big-picture challenge, of course, is that real science is constantly advancing alongside pseudoscience, and new or competing theories constantly emerge, while others are still being tested.

Facebook is also working on technology that can sift through the metadata of published images to check their background information against the context in which they are used. This is because, while fake news is a widely known problem, the cynical deployment of genuine content, such as photos, in false or deceptive contexts can be a more insidious one.
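Facebook has not detailed which metadata it inspects, but the underlying idea (read a photo's embedded capture information and compare it with the context the post claims) can be sketched with standard EXIF tooling; the file name, the DateTime field and the 30-day mismatch rule below are illustrative assumptions:

```python
# Minimal sketch: read a photo's EXIF metadata and compare the capture date
# with the date claimed in the post. Illustrative only; not Facebook's pipeline.
from datetime import datetime
from PIL import Image, ExifTags

def capture_date(path):
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if ExifTags.TAGS.get(tag_id) == "DateTime":
            return datetime.strptime(value, "%Y:%m:%d %H:%M:%S")
    return None

claimed = datetime(2018, 6, 1)          # date the post says the photo was taken
actual = capture_date("photo.jpg")      # hypothetical file

if actual and abs((claimed - actual).days) > 30:
    print("Flag for review: photo appears to predate the claimed event.")
```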

Machine learning is also being deployed to recognise where false claims may be emanating from. Facebook filters are now actively attempting to predict which pages are more likely to share false content, based on the profile of page administrators, the behavior of the page, and its geographical location.
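The exact model and feature set are not public, but a page-level classifier of the kind described, scoring a page on administrator and behavioural signals, might be sketched like this; the features, numbers and labels are invented for illustration:

```python
# Minimal sketch of a page-level "likely to share false content" classifier.
# Features and labels are invented for illustration; Facebook's real model,
# features and thresholds are not public.
from sklearn.linear_model import LogisticRegression

# Each row: [admin_located_abroad, posts_per_day, share_of_links_to_low_quality_domains]
X_train = [
    [1, 40, 0.9],
    [1, 25, 0.7],
    [0,  3, 0.1],
    [0,  5, 0.0],
]
y_train = [1, 1, 0, 0]  # 1 = page later shared content rated false by fact-checkers

model = LogisticRegression().fit(X_train, y_train)

new_page = [[1, 30, 0.8]]
print("Probability of sharing false content:", model.predict_proba(new_page)[0][1])
```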

Internet of Business says

Facebook’s moves are welcome and, many would argue, long overdue. However, in a world of conspiracy theories – many spun on social media – it’s inevitable that some will see the evidenced, fact-checked flagging-up of false content as itself being indicative of bias or media manipulation.

In a sense, Facebook is engaged in an age-old battle, belief versus evidence, which is now spreading into more and more areas of our lives. Experts are now routinely vilified by politicians, even as we still trust experts to keep planes in the sky, feed us, teach us, clothe us, treat our illnesses, and power our homes.

Many false stories are posted on social platforms to generate clicks and advertising revenues through controversy – hardly a revelation. However, red flags can automatically be raised when, for example, page admins live in one country but post content to users on the other side of the world.

“These admins often have suspicious accounts that are not fake, but are identified in our system as having suspicious activity,” Lyons told BuzzFeed.

An excellent point. But some media magnates also live on the other side of the world, including – for anyone outside of the US – Mark Zuckerberg.

Categorized in Social

Source: This article was published on phys.org - Contributed by Member: Logan Hochstetler

As scientific datasets increase in both size and complexity, the ability to label, filter and search this deluge of information has become a laborious, time-consuming and sometimes impossible task, without the help of automated tools.

With this in mind, a team of researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley is developing innovative machine learning tools to pull contextual information from scientific datasets and automatically generate metadata tags for each file. Scientists can then search these files via a web-based search engine for scientific data, called Science Search, that the Berkeley team is building.

As a proof-of-concept, the team is working with staff at the Department of Energy's (DOE) Molecular Foundry, located at Berkeley Lab, to demonstrate the concepts of Science Search on the images captured by the facility's instruments. A beta version of the platform has been made available to Foundry researchers.

"A tool like Science Search has the potential to revolutionize our research," says Colin Ophus, a Molecular Foundry research scientist within the National Center for Electron Microscopy (NCEM) and Science Search Collaborator. "We are a taxpayer-funded National User Facility, and we would like to make all of the data widely available, rather than the small number of images chosen for publication. However, today, most of the data that is collected here only really gets looked at by a handful of people—the data producers, including the PI (principal investigator), their postdocs or graduate students—because there is currently no easy way to sift through and share the data. By making this raw data easily searchable and shareable, via the Internet, Science Search could open this reservoir of 'dark data' to all scientists and maximize our facility's scientific impact."

The Challenges of Searching Science Data

Today, search engines are ubiquitously used to find information on the Internet, but searching scientific data presents a different set of challenges. For example, Google's algorithm relies on more than 200 clues to achieve an effective search. These clues can come in the form of keywords on a webpage, metadata in images or audience feedback from billions of people when they click on the information they are looking for. In contrast, scientific data comes in many forms that are radically different from an average web page, requires context that is specific to the science, and often lacks the metadata needed to provide the context required for effective searches.

At National User Facilities like the Molecular Foundry, researchers from all over the world apply for time and then travel to Berkeley to use extremely specialized instruments free of charge. Ophus notes that the current cameras on microscopes at the Foundry can collect up to a terabyte of data in under 10 minutes. Users then need to manually sift through this data to find quality images with "good resolution" and save that information on a secure shared file system, like Dropbox, or on an external hard drive that they eventually take home with them to analyze.

Oftentimes, the researchers that come to the Molecular Foundry only have a couple of days to collect their data. Because it is very tedious and time-consuming to manually add notes to terabytes of scientific data and there is no standard for doing it, most researchers just type shorthand descriptions in the filename. This might make sense to the person saving the file but often doesn't make much sense to anyone else.

"The lack of real metadata labels eventually causes problems when the scientist tries to find the data later or attempts to share it with others," says Lavanya Ramakrishnan, a staff scientist in Berkeley Lab's Computational Research Division (CRD) and co-principal investigator of the Science Search project. "But with machine-learning techniques, we can have computers help with what is laborious for the users, including adding tags to the data. Then we can use those tags to effectively search the data."

To address the metadata issue, the Berkeley Lab team uses machine-learning techniques to mine the "science ecosystem"—including instrument timestamps, facility user logs, scientific proposals, publications and file system structures—for contextual information. The collective information from these sources, including the timestamp of the experiment, notes about the resolution and filter used, and the user's request for time, provides critical contextual information. The Berkeley Lab team has put together an innovative software stack that uses machine-learning techniques, including natural language processing, to pull contextual keywords about the scientific experiment and automatically create metadata tags for the data.
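The software stack itself is not listed here, but the step it performs (pulling candidate keywords out of the surrounding free text and turning them into tags) can be illustrated with a very small extractor. This is a sketch only: the proposal text, log note and filename are invented, and the real pipeline uses far more sophisticated natural language processing:

```python
# Minimal sketch: derive candidate metadata tags for a data file from the free
# text around it (proposal abstract, user log notes, filename). Illustrative
# only; the Science Search stack is more sophisticated than simple counting.
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "and", "to", "in", "for", "on", "with", "at", "we", "using"}

def candidate_tags(*texts, top_n=5):
    tokens = re.findall(r"[a-z][a-z0-9-]+", " ".join(texts).lower())
    counts = Counter(t for t in tokens if t not in STOP and len(t) > 2)
    return [word for word, _ in counts.most_common(top_n)]

proposal = "We propose in-situ TEM imaging of graphene defects at atomic resolution."
log_note = "TEAM 1 session, graphene sample, dark-field imaging, 300 kV."
filename = "graphene_darkfield_300kV_0042.dm4"

print(candidate_tags(proposal, log_note, filename))
# e.g. ['graphene', 'imaging', ...] become searchable tags pending expert review
```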

For the proof-of-concept, Ophus shared with the Science Search team data from the Molecular Foundry's TEAM 1 electron microscope at NCEM that had recently been collected by facility staff. He also volunteered to label a few thousand images to give the machine-learning tools some labels from which to start learning. While this is a good start, Science Search co-principal investigator Gunther Weber notes that most successful machine-learning applications typically require significantly more data and feedback to deliver better results. For example, in the case of search engines like Google, Weber notes that training datasets are created and machine-learning techniques are validated when billions of people around the world verify their identity by clicking on all the images with street signs or storefronts after typing in their passwords, or on Facebook when they're tagging their friends in an image.

This screen capture of the Science Search interface shows how users can easily validate metadata tags that have been generated via machine learning or add information that hasn't already been captured. Credit: Gonzalo Rodrigo, Berkeley Lab

"In the case of science data only a handful of domain experts can create training sets and validate machine-learning techniques, so one of the big ongoing problems we face is an extremely small number of training sets," says Weber, who is also a staff scientist in Berkeley Lab's CRD.

To overcome this challenge, the Berkeley Lab researchers turned to transfer learning to limit the degrees of freedom, or parameter counts, on their convolutional neural networks (CNNs). Transfer learning is a machine learning method in which a model developed for one task is reused as the starting point for a model on a second task, which allows the user to get more accurate results from a smaller training set. In the case of the TEAM I microscope, the data produced contains information about which operation mode the instrument was in at the time of collection. With that information, Weber was able to train the neural network on that classification so it could generate that mode of operation label automatically. He then froze that convolutional layer of the network, which meant he'd only have to retrain the densely connected layers. This approach effectively reduces the number of parameters on the CNN, allowing the team to get some meaningful results from their limited training data.
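The article does not name the network, but the recipe it describes, freezing the convolutional layers and retraining only the densely connected head on a small labelled set, is standard. In Keras it looks roughly like this; the MobileNetV2 backbone, ImageNet weights, input size and four operating modes are placeholder assumptions, not the actual Science Search configuration:

```python
# Minimal transfer-learning sketch: reuse pretrained convolutional layers and
# retrain only the dense classifier on a small labelled set of microscope
# images. Backbone, input size and class count are placeholders.
import tensorflow as tf

NUM_MODES = 4  # hypothetical number of instrument operating modes

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze convolutional layers; only the head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_MODES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(small_labelled_dataset, epochs=5)  # a few thousand labelled images
```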

Machine Learning to Mine the Scientific Ecosystem

In addition to generating metadata tags through training datasets, the Berkeley Lab team also developed tools that use machine-learning techniques for mining the science ecosystem for data context. For example, the data ingest module can look at a multitude of information sources from the scientific ecosystem—including instrument timestamps, user logs, proposals, and publications—and identify commonalities. Tools developed at Berkeley Lab that use natural-language-processing methods can then identify and rank words that give context to the data and facilitate meaningful results for users later on. The user will see something similar to the results page of an Internet search, where content with the most text matching the user's search words will appear higher on the page. The system also learns from user queries and the search results they click on.
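The ranking function itself is not described, but the behaviour (results whose text and tags best match the query words appear first) can be sketched with a simple overlap score; the filenames and tags below are invented:

```python
# Minimal sketch of the ranking step: documents whose extracted tags best
# match the query words are listed first. Illustrative only.
def score(query, document_terms):
    query_terms = set(query.lower().split())
    return len(query_terms & set(document_terms))

corpus = {
    "foundry/team1/graphene_0042.dm4": ["graphene", "tem", "dark-field", "300kv"],
    "foundry/team1/nanoparticle_0007.dm4": ["gold", "nanoparticle", "tem", "bright-field"],
}

query = "graphene dark-field TEM"
ranked = sorted(corpus, key=lambda doc: score(query, corpus[doc]), reverse=True)
for doc in ranked:
    print(score(query, corpus[doc]), doc)
```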

Because scientific instruments are generating an ever-growing body of data, all aspects of the Berkeley team's science search engine needed to be scalable to keep pace with the rate and scale of the data volumes being produced. The team achieved this by setting up their system in a Spin instance on the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC). Spin is a Docker-based edge-services technology developed at NERSC that can access the facility's high-performance computing systems and storage on the back end.

"One of the reasons it is possible for us to build a tool like Science Search is our access to resources at NERSC," says Gonzalo Rodrigo, a Berkeley Lab postdoctoral researcher who is working on the natural language processing and infrastructure challenges in Science Search. "We have to store, analyze and retrieve really large datasets, and it is useful to have access to a supercomputing facility to do the heavy lifting for these tasks. NERSC's Spin is a great platform to run our search engine that is a user-facing application that requires access to large datasets and analytical data that can only be stored on large supercomputing storage systems."

An Interface for Validating and Searching Data

When the Berkeley Lab team developed the interface for users to interact with their system, they knew that it would have to accomplish a couple of objectives, including effective search and allowing human input to the machine learning models. Because the system relies on domain experts to help generate the training data and validate the machine-learning model output, the interface needed to facilitate that.

"The tagging interface that we developed displays the original data and metadata available, as well as any machine-generated tags we have so far. Expert users then can browse the data and create new tags and review any machine-generated tags for accuracy," says Matt Henderson, who is a Computer Systems Engineer in CRD and leads the user interface development effort.

To facilitate an effective search for users based on available information, the team's search interface provides a query mechanism for available files, proposals and papers that the Berkeley-developed machine-learning tools have parsed and extracted tags from. Each listed search result item represents a summary of that data, with a more detailed secondary view available, including information on tags that matched this item. The team is currently exploring how to best incorporate user feedback to improve the models and tags.

"Having the ability to explore datasets is important for scientific breakthroughs, and this is the first time that anything like Science Search has been attempted," says Ramakrishnan. "Our ultimate vision is to build the foundation that will eventually support a 'Google' for scientific data, where researchers can even  distributed datasets. Our current work provides the foundation needed to get to that ambitious vision."

"Berkeley Lab is really an ideal place to build a tool like Science Search because we have a number of user facilities, like the Molecular Foundry, that has decades worth of data that would provide even more value to the scientific community if the data could be searched and shared," adds Katie Antypas, who is the principal investigator of Science Search and head of NERSC's Data Department. "Plus we have great access to machine-learning expertise in the Berkeley Lab Computing Sciences Area as well as HPC resources at NERSC in order to build these capabilities."

Categorized in Online Research

All you need is a wormhole, the Large Hadron Collider or a rocket that goes really, really fast 

'Through the wormhole, the scientist can see himself as he was one minute ago. But what if our scientist uses the wormhole to shoot his earlier self? He's now dead. So who fired the shot?'

Hello. My name is Stephen Hawking. Physicist, cosmologist and something of a dreamer. Although I cannot move and I have to speak through a computer, in my mind I am free. Free to explore the universe and ask the big questions, such as: is time travel possible? Can we open a portal to the past or find a shortcut to the future? Can we ultimately use the laws of nature to become masters of time itself?

Time travel was once considered scientific heresy. I used to avoid talking about it for fear of being labelled a crank. But these days I'm not so cautious. In fact, I'm more like the people who built Stonehenge. I'm obsessed by time. If I had a time machine I'd visit Marilyn Monroe in her prime or drop in on Galileo as he turned his telescope to the heavens. Perhaps I'd even travel to the end of the universe to find out how our whole cosmic story ends.

To see how this might be possible, we need to look at time as physicists do - at the fourth dimension. It's not as hard as it sounds. Every attentive schoolchild knows that all physical objects, even me in my chair, exist in three dimensions. Everything has a width and a height and a length.

But there is another kind of length, a length in time. While a human may survive for 80 years, the stones at Stonehenge, for instance, have stood around for thousands of years. And the solar system will last for billions of years. Everything has a length in time as well as space. Travelling in time means travelling through this fourth dimension.

To see what that means, let's imagine we're doing a bit of normal, everyday car travel. Drive in a straight line and you're travelling in one dimension. Turn right or left and you add the second dimension. Drive up or down a twisty mountain road and that adds height, so that's travelling in all three dimensions. But how on Earth do we travel in time? How do we find a path through the fourth dimension?

Let's indulge in a little science fiction for a moment. Time travel movies often feature a vast, energy-hungry machine. The machine creates a path through the fourth dimension, a tunnel through time. A time traveller, a brave, perhaps foolhardy individual, prepared for who knows what, steps into the time tunnel and emerges who knows when. The concept may be far-fetched, and the reality may be very different from this, but the idea itself is not so crazy.

Physicists have been thinking about tunnels in time too, but we come at it from a different angle. We wonder if portals to the past or the future could ever be possible within the laws of nature. As it turns out, we think they are. What's more, we've even given them a name: wormholes. The truth is that wormholes are all around us, only they're too small to see. Wormholes are very tiny. They occur in nooks and crannies in space and time. You might find it a tough concept, but stay with me. 

Time travel through a wormhole

A wormhole is a theoretical 'tunnel' or shortcut, predicted by Einstein's theory of relativity, that links two places in space-time - visualised above as the contours of a 3-D map, where negative energy pulls space and time into the mouth of a tunnel, emerging in another universe. They remain only hypothetical, as obviously nobody has ever seen one, but have been used in films as conduits for time travel - in Stargate (1994), for example, involving gated tunnels between universes, and in Time Bandits (1981), where their locations are shown on a celestial map

Nothing is flat or solid. If you look closely enough at anything you'll find holes and wrinkles in it. It's a basic physical principle, and it even applies to time. Even something as smooth as a pool ball has tiny crevices, wrinkles and voids. Now it's easy to show that this is true in the first three dimensions. But trust me, it's also true of the fourth dimension. There are tiny crevices, wrinkles and voids in time. Down at the smallest of scales, smaller even than molecules, smaller than atoms, we get to a place called the quantum foam. This is where wormholes exist. Tiny tunnels or shortcuts through space and time constantly form, disappear, and reform within this quantum world. And they actually link two separate places and two different times.

Unfortunately, these real-life time tunnels are just a billion-trillion-trillionths of a centimetre across. Way too small for a human to pass through - but here's where the notion of wormhole time machines is leading. Some scientists think it may be possible to capture a wormhole and enlarge it many trillions of times to make it big enough for a human or even a spaceship to enter.

Given enough power and advanced technology, perhaps a giant wormhole could even be constructed in space. I'm not saying it can be done, but if it could be, it would be a truly remarkable device. One end could be here near Earth, and the other far, far away, near some distant planet.

Theoretically, a time tunnel or wormhole could do even more than take us to other planets. If both ends were in the same place, and separated by time instead of distance, a ship could fly in and come out still near Earth, but in the distant past. Maybe dinosaurs would witness the ship coming in for a landing. 


Now, I realise that thinking in four dimensions is not easy, and that wormholes are a tricky concept to wrap your head around, but hang in there. I've thought up a simple experiment that could reveal if human time travel through a wormhole is possible now, or even in the future. I like simple experiments, and champagne. 

So I've combined two of my favourite things to see if time travel from the future to the past is possible.

Let's imagine I'm throwing a party, a welcome reception for future time travellers. But there's a twist. I'm not letting anyone know about it until after the party has happened. I've drawn up an invitation giving the exact coordinates in time and space. I am hoping copies of it, in one form or another, will be around for many thousands of years. Maybe one day someone living in the future will find the information on the invitation and use a wormhole time machine to come back to my party, proving that time travel will, one day, be possible.

In the meantime, my time traveller guests should be arriving any moment now. Five, four, three, two, one. But as I say this, no one has arrived. What a shame. I was hoping at least a future Miss Universe was going to step through the door. So why didn't the experiment work? One of the reasons might be because of a well-known problem with time travel to the past, the problem of what we call paradoxes.

Paradoxes are fun to think about. The most famous one is usually called the Grandfather paradox. I have a new, simpler version I call the Mad Scientist paradox.

I don't like the way scientists in movies are often described as mad, but in this case, it's true. This chap is determined to create a paradox, even if it costs him his life. Imagine, somehow, he's built a wormhole, a time tunnel that stretches just one minute into the past. 


Hawking in a scene from Star Trek with dinner guests from the past, and future: (from left) Albert Einstein, Data and Isaac Newton

Through the wormhole, the scientist can see himself as he was one minute ago. But what if our scientist uses the wormhole to shoot his earlier self? He's now dead. So who fired the shot? It's a paradox. It just doesn't make sense. It's the sort of situation that gives cosmologists nightmares.

This kind of time machine would violate a fundamental rule that governs the entire universe - that causes happen before effects, and never the other way around. I believe things can't make themselves impossible. If they could then there'd be nothing to stop the whole universe from descending into chaos. So I think something will always happen that prevents the paradox. Somehow there must be a reason why our scientist will never find himself in a situation where he could shoot himself. And in this case, I'm sorry to say, the wormhole itself is the problem.

In the end, I think a wormhole like this one can't exist. And the reason for that is feedback. If you've ever been to a rock gig, you'll probably recognise this screeching noise. It's feedback. What causes it is simple. Sound enters the microphone. It's transmitted along the wires, made louder by the amplifier, and comes out at the speakers. But if too much of the sound from the speakers goes back into the mic it goes around and around in a loop getting louder each time. If no one stops it, feedback can destroy the sound system.

The same thing will happen with a wormhole, only with radiation instead of sound. As soon as the wormhole expands, natural radiation will enter it, and end up in a loop. The feedback will become so strong it destroys the wormhole. So although tiny wormholes do exist, and it may be possible to inflate one some day, it won't last long enough to be of use as a time machine. That's the real reason no one could come back in time to my party.

Any kind of time travel to the past through wormholes or any other method is probably impossible, otherwise paradoxes would occur. So sadly, it looks like time travel to the past is never going to happen. A disappointment for dinosaur hunters and a relief for historians.

But the story's not over yet. This doesn't make all time travel impossible. I do believe in time travel. Time travel to the future. Time flows like a river and it seems as if each of us is carried relentlessly along by time's current. But time is like a river in another way. It flows at different speeds in different places and that is the key to travelling into the future. This idea was first proposed by Albert Einstein over 100 years ago. He realised that there should be places where time slows down, and others where time speeds up. He was absolutely right. And the proof is right above our heads. Up in space.

This is the Global Positioning System, or GPS. A network of satellites is in orbit around Earth. The satellites make satellite navigation possible. But they also reveal that time runs faster in space than it does down on Earth. Inside each spacecraft is a very precise clock. But despite being so accurate, they all gain around 38 millionths of a second every day. The system has to correct for the drift, otherwise that tiny difference would upset the whole system, causing every GPS device on Earth to go out by about six miles a day. You can just imagine the mayhem that that would cause.

The problem doesn't lie with the clocks. They run fast because time itself runs faster in space than it does down below. And the reason for this extraordinary effect is the mass of the Earth. Einstein realised that matter drags on time and slows it down like the slow part of a river. The heavier the object, the more it drags on time. And this startling reality is what opens the door to the possibility of time travel to the future.
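Einstein's statement can be made precise. A clock sitting at distance $r$ from a body of mass $M$ runs slow, compared with a clock far away, by the factor

$$\frac{t_{\text{near}}}{t_{\text{far}}} = \sqrt{1 - \frac{2GM}{rc^{2}}}.$$

Plugging in standard textbook values for Earth, the weaker gravity at GPS altitude makes the satellite clocks run fast by roughly 45 microseconds a day, while their orbital speed slows them by about 7 microseconds a day; the net gain of around 38 microseconds a day, multiplied by the speed of light, is what produces a ranging error of several miles a day if left uncorrected.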

Right in the centre of the Milky Way, 26,000 light years from us, lies the heaviest object in the galaxy. It is a supermassive black hole containing the mass of four million suns crushed down into a single point by its own gravity. The closer you get to the black hole, the stronger the gravity. Get really close and not even light can escape. A black hole like this one has a dramatic effect on time, slowing it down far more than anything else in the galaxy. That makes it a natural time machine.

I like to imagine how a spaceship might be able to take advantage of this phenomenon, by orbiting it. If a space agency were controlling the mission from Earth they'd observe that each full orbit took 16 minutes. But for the brave people on board, close to this massive object, time would be slowed down. And here the effect would be far more extreme than the gravitational pull of Earth. The crew's time would be slowed down by half. For every 16-minute orbit, they'd only experience eight minutes of time. 

 
Inside the Large Hadron Collider

Around and around they'd go, experiencing just half the time of everyone far away from the black hole. The ship and its crew would be travelling through time. Imagine they circled the black hole for five of their years. Ten years would pass elsewhere. When they got home, everyone on Earth would have aged five years more than they had.

So a supermassive black hole is a time machine. But of course, it's not exactly practical. It has advantages over wormholes in that it doesn't provoke paradoxes. Plus it won't destroy itself in a flash of feedback. But it's pretty dangerous. It's a long way away and it doesn't even take us very far into the future. Fortunately there is another way to travel in time. And this represents our last and best hope of building a real time machine.

You just have to travel very, very fast. Much faster even than the speed required to avoid being sucked into a black hole. This is due to another strange fact about the universe. There's a cosmic speed limit, 186,000 miles per second, also known as the speed of light. Nothing can exceed that speed. It's one of the best established principles in science. Believe it or not, travelling at near the speed of light transports you to the future.

To explain why, let's dream up a science-fiction transportation system. Imagine a track that goes right around Earth, a track for a superfast train. We're going to use this imaginary train to get as close as possible to the speed of light and see how it becomes a time machine. On board are passengers with a one-way ticket to the future. The train begins to accelerate, faster and faster. Soon it's circling the Earth over and over again.

To approach the speed of light means circling the Earth pretty fast. Seven times a second. But no matter how much power the train has, it can never quite reach the speed of light, since the laws of physics forbid it. Instead, let's say it gets close, just shy of that ultimate speed. Now something extraordinary happens. Time starts flowing slowly on board relative to the rest of the world, just like near the black hole, only more so. Everything on the train is in slow motion.

This happens to protect the speed limit, and it's not hard to see why. Imagine a child running forwards up the train. Her forward speed is added to the speed of the train, so couldn't she break the speed limit simply by accident? The answer is no. The laws of nature prevent the possibility by slowing down time onboard.

Now she can't run fast enough to break the limit. Time will always slow down just enough to protect the speed limit. And from that fact comes the possibility of travelling many years into the future.

Imagine that the train left the station on January 1, 2050. It circles Earth over and over again for 100 years before finally coming to a halt on New Year's Day, 2150. The passengers will have only lived one week because time is slowed down that much inside the train. When they got out they'd find a very different world from the one they'd left. In one week they'd have travelled 100 years into the future. Of course, building a train that could reach such a speed is quite impossible. But we have built something very like the train at the world's largest particle accelerator at CERN in Geneva, Switzerland.

Deep underground, in a circular tunnel 16 miles long, is a stream of trillions of tiny particles. When the power is turned on they accelerate from zero to 60,000mph in a fraction of a second. Increase the power and the particles go faster and faster, until they're whizzing around the tunnel 11,000 times a second, which is almost the speed of light. But just like the train, they never quite reach that ultimate speed. They can only get to 99.99 per cent of the limit. When that happens, they too start to travel in time. We know this because of some extremely short-lived particles, called pi-mesons. Ordinarily, they disintegrate after just 25 billionths of a second. But when they are accelerated to near-light speed they last 30 times longer.

It really is that simple. If we want to travel into the future, we just need to go fast. Really fast. And I think the only way we're ever likely to do that is by going into space. The fastest manned vehicle in history was Apollo 10. It reached 25,000mph. But to travel in time we'll have to go more than 2,000 times faster. And to do that we'd need a much bigger ship, a truly enormous machine. The ship would have to be big enough to carry a huge amount of fuel, enough to accelerate it to nearly the speed of light. Getting to just beneath the cosmic speed limit would require six whole years at full power.

The initial acceleration would be gentle because the ship would be so big and heavy. But gradually it would pick up speed and soon would be covering massive distances. In one week it would have reached the outer planets. After two years it would reach half-light speed and be far outside our solar system. Two years later it would be travelling at 90 per cent of the speed of light. Around 30 trillion miles away from Earth, and four years after launch, the ship would begin to travel in time. For every hour of time on the ship, two would pass on Earth. A similar situation to the spaceship that orbited the massive black hole.
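The amount of slowing follows one simple rule. A clock moving at speed $v$ runs slow by the factor

$$\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},$$

which works out to roughly 2.3 at 90 per cent of the speed of light, matching the one-hour-on-board, two-hours-on-Earth stage of the journey, and grows without limit as $v$ creeps closer to $c$.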

After another two years of full thrust the ship would reach its top speed, 99 per cent of the speed of light. At this speed, a single day on board is a whole year of Earth time. Our ship would be truly flying into the future. 

The slowing of time has another benefit. It means we could, in theory, travel extraordinary distances within one lifetime. A trip to the edge of the galaxy would take just 80 years. But the real wonder of our journey is that it reveals just how strange the universe is. It's a universe where time runs at different rates in different places. Where tiny wormholes exist all around us. And where, ultimately, we might use our understanding of physics to become true voyagers through the fourth dimension. 

Author: Stephen Hawking

Source: http://www.dailymail.co.uk/home/moslive/article-1269288/STEPHEN-HAWKING-How-build-time-machine.html

 

Categorized in Science & Tech

Understanding the impact of machine learning will be crucial to adjusting our search marketing strategies -- but probably not in the way you think. Columnist Dave Davies explains.

There are many uses for machine learning and AI in the world around us, but today I’m going to talk about search. So, assuming you’re a business owner with a website or an SEO, the big question you’re probably asking is: what is machine learning and how will it impact my rankings?

The problem with this question is that it relies on a couple of assumptions that may or may not be correct: First, that machine learning is something you can optimize for, and second, that there will be rankings in any traditional sense.

So before we get to work trying to understand machine learning and its impact on search, let’s stop and ask ourselves the real question that needs to be answered:

What is Google trying to accomplish?

It is by answering this one seemingly simple question that we gain our greatest insights into what the future holds and why machine learning is part of it. And the answer to this question is also quite simple. It’s the same as what you and I both do every day: try to earn more money.

This, and this alone, is the objective — and with shareholders, it is a responsibility. So, while it may not be the feel-good answer you were hoping for, it is accurate.

Author:  Dave Davies

Source:  http://searchengineland.com/heck-machine-learning-care-265511

Categorized in Others

Dr. Chris Brauer will tell the Globes Israel Business Conference that computers will free us to be more creative, but warns that machines are making unexplained decisions.

"Sorry I'm late," Dr. Chris Brauer apologizes. "I was preparing a bot for a bank, and it got a little crazy. We had to correct it."

"Globes": How does a bot go crazy?

Brauer: "When you give a learning machine too much freedom, or when you let the wrong people work on it, you get an unpredictable, inefficient machine that is sometimes racist."

This statement began the meeting with Brauer, who owns a creative media consultancy and founded the Centre for Creative and Social Technologies at Goldsmiths, University of London. He will address next week's Globes Israel Business Conference in Tel Aviv. He immediately explains: "A bot is actually software that learns how to respond through interactions with its surroundings. We teach it how to respond to a given number of situations, and it is then supposed to make deductions from these examples and respond to new situations. It receives feedback on its decisions - whether it was right - and improves its decisions the next time according to the feedback."

This is similar to how a child is taught to recognize a dog, so that the definition will include all types of dogs, but not all the other animals having four legs and a tail. First he is shown a dog and told, "This is a dog," and then he is allowed to point to dogs, cats, and ferrets in the street. Only when he correctly points to a dog is he told that he was right, and his ability to identify a dog improves.
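In software terms, the show-correct-improve loop Brauer describes is ordinary supervised learning with incremental feedback. A toy sketch, with animal features and labels invented for illustration:

```python
# Toy sketch of Brauer's "show it examples, correct it, it improves" loop:
# an online classifier that updates itself after each piece of feedback.
# Features and labels are invented for illustration.
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()

# Each animal: [weight_kg, barks, retractable_claws]
first_lessons = [[20, 1, 0], [4, 0, 1], [1, 0, 1]]
labels = ["dog", "cat", "ferret"]
clf.partial_fit(first_lessons, labels, classes=["dog", "cat", "ferret"])

# The child points at something in the street and is told the right answer.
seen_in_street = [[30, 1, 0]]
print("Guess:", clf.predict(seen_in_street)[0])
clf.partial_fit(seen_in_street, ["dog"])  # feedback nudges the model
```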

When the pound fell with no real reason

"Every bot has different degrees of freedom," Brauer says. "It can be restricted by setting many hard and fast rules in advance what it is and isn't allowed to do, but then you get a rather hidebound bot that does not benefit from all the advantages of machine learning. On the other hand, if you allow it too free a hand, the bot is liable to make generalizations that we don't like." One example is Google's bot, which mistakenly labels certain people as animals.

"We also have to decide who is entitled to teach the bot," Brauer continues. "If we let an entire community of participants prepare the bot and give it feedback, we get a very effective and powerful bot within a short time and with little effort. This, however, is like sending a child to a school where you know nothing about the teachers or the study plan. Sometimes, the community will teach the bot undesirable things, and sometimes it does this deliberately. That's what happened, for example, when Microsoft wanted to teach its bot to speak like a 10 year-old child. Microsoft sent it to talk with real little girls, but someone took advantage of this by deliberately teaching the bot terrible wordsthat destroyed it rather quickly."

People are dangerous to machines, and machines are dangerous to people.

"Absolutely. Machines were responsible, for example, for the drop in the pound following the Brexit events, and the process by which they did this is not completely clear to all those involved to this day. It is clear, however, that the pound fell sharply without people having made an active decision that this should be the pound's response to Brexit. It simply happened one day all of a sudden because of the machines. Only when they investigated this did they discover that the fall had occurred right around the time when a certain report was published in the "Financial Times." No one thought that this report said anything previously unknown, but for some reason, it was news to this machine.

"The mystery is that we don't know what in this report caused the machines to sell the pound at the same moment, what information was in the report, or what was the wording that drove the machines to sell. In a world in which machines are responsible for 90% of trading, they don't wait for approval from us, the human beings. They act first, and don't even explain afterwards."

New experts on relations

Brauer says that such incidents have created a need for "machine relations experts": people whose job is to try to predict how certain actions by a person will affect how the machines making decisions about him or her will act.

For example, Brauer now works with a public relations company. The job of such a company is to issue announcements to newspapers written in a manner that will grab the attention of human readers, and especially the attention of journalists whose job is to process these reports and use them as a basis for newspaper stories. This, however, is changing. Today, a sizeable proportion of press releases pass through some kind of machine filter before they get to the journalists. In the future, this will be the norm. "Because of the large amount of information and the need to process it at super-human speed, we have to delegate a large proportion of our decisions to machines," Brauer explains. "A journalist who doesn't let a machine sort press releases for him, and insists on sorting them by himself, will not produce the output required of him."

The public relations firms will therefore have to write the press release so that it will catch the attention of a machine, not a journalist. In the near future, people will jump through hoops trying to understand the machine reading the press releases in order to tailor the press releases to it. Later, the machine will also be doing the writing, or at least will process the press release into a language that other machines like. Bit by bit, people are losing control of the process, and even losing the understanding of it - the transparency.

This is true not only for journalists. For example, take profiles on dating websites. Machines are already deciding which profiles will see which people surfing the site. In the future, there will be advisors for helping you devise a profile for the website that a computer, not necessarily your prince charming, will like, because if you don't do this, the computer won't send prince charming the profile. You can hope as much as you want that your special beauty will shine out, and the right man will spot it, but not in the new era. If it doesn't pass the machine, it won't get you a date.

That's also how it will be in looking for a job when a machine is the one matching employers and CVs, or between entrepreneurs and investors. Even today, when you throw a joke out into space (Facebook or Twitter space, of course), and you want someone to hear it, it first has to please a machine.

"We're talking about a search engine optimization (SEO) world. Up until now, we have improved our websites for their benefit. Tomorrow, it will be the entire world," Brauer declares.

To get back to the "Financial Times" Brexit story, public relations firms also have to speak with machines, and journalists also have to realize that they're talking with machines, and that the stories they write activate many machines whose actions are not necessarily rational.

"That's right. A reporter must know that what he writes can directly set machines in motion, in contrast with human readers, who are supposed to exercise some kind of judgment. The press may be more influential in such a world, if that's what it wants."

That sounds frightening.

"I'd like people to begin designing the machines so that we will at least be able to retrospectively understand what led them to make a given decision. There should be documentation for every machine, an 'anti-machine' that will follow and report what's happening in real time, so that people can intervene and tell the algorithm, 'I saw you, algorithm! I know what you tried to do!' I want to believe that in 2025, there will be no more events like Brexit, in which months afterwards, we still haven't understood why the machines acted the way they did."

People are superior to machines

The world of 2025 will be the subject of one of the sessions at the Israel Business Conference taking place in December, in which Brauer will take part. As a former developing-technologies consultant for PricewaterhouseCoopers and now the owner of his own consultancy (he is director of Creative Industries at investment bank Clarity Capital), Brauer counts the need to flatter machines as only one of his technological predictions.

"The Internet of Things is expected to substantially alter the energy industry," Brauer says. "We are seeing a change in the direction of much better adaptation of energy consumption to the consumers' real needs, and differential energy pricing at times when it is in short supply. For example, the dishwasher will operate itself at times when energy is cheap and available, and will be aware of availability in real time, because all the devices will be connected, not just for the user's convenience in a smart home. When it is working, this dishwasher will also be able to consume energy from 15 different suppliers, and to automatically switch between them. It will change the energy companies' market from top to bottom, because like all of us, they too will be marketing to your machine, not to you."

Will the machines leave work for people, other than as machine relations managers, of course?

"We have always known that technology increases output. This happens mainly in places where decisions are deterministic, for example in medicine, where treatment takes place in the framework of a clear protocol. In such a world, there is ostensibly no need for a doctor, or at least, not for many doctors. The few that will remain will be the technology controllers, or will be consulted only in the difficult cases that a machine can't solve. You can see that the new technology improves employees' output. Instagram has attained a billion dollar value with only 12 employees, and they reach the same number of people as 'The New York Times'.

"People see this, and are fearful, but I say, 'Let's regard this period as our emergence from slavery.' You could say that up until now, because we didn't have the technology we really wanted, too many people worked in imitation of machines, and that detracted from their ability to be human beings. Now we can let the machines be machines, and people will prosper in all sorts of places where creativity is needed that is beyond a machine's capability. People will flourish when they are able to think critically, with values and nuances, about every good database they get from machines. People will do what they were always meant to do."

Is everyone cut out for these professions? Will they have enough work?

"I don't believe that any person is only capable of thinking like a machine. Our society has made them develop this way by pushing people into doing a machine's work. We're now learning how to change education and the enterprise environment so that all people will be able to do the work of people."

In order to prove his point, Brauer examined an algorithm that writes newspaper stories together with a senior "Financial Times" journalist (not the one who pushed down the pound; a different journalist, named Sara O'Connor). "The machine issued quite a good summary that included all the important facts, but it missed the real story, what all the human readers agreed was the story after reading Sara's version. That's what a good reporter does: sees the contexts that are not immediately accessible, asks the right question, and fills in what's missing. This, at least, will characterize the reporter of the future, and it will be the same with all the other professions. Anyone who rises above all of them today in professional creativity, whether it's a politician or an accountant, will be the model for how this profession will appear in the future."

And all humans will have to work with machines in order to achieve the output expected of them.

"Anyone who doesn't will be useless. They will have no place in the hyper-productive future."

Author:  Gali Weinreb

Source:  http://www.globes.co.il/

Categorized in Science & Tech

One of the biggest buzzwords around Google and the overall technology market is machine learning. Google uses it with RankBrain for search and in other ways. We asked Gary Illyes from Google in part two of our interview how Google uses machine learning with search.

Illyes said that Google uses it mostly for “coming up with new signals and signal aggregations.” So they may look at two or more different existing non-machine-learning signals and see if adding machine learning to the aggregation of them can help improve search rankings and quality.

He also said that RankBrain, “which re-ranks based on historical signals,” is another way they use machine learning, and later explained how RankBrain works and that Penguin doesn’t really use machine learning.

Danny Sullivan: These days it seems like it’s really cool for people to just say machine learning is being used in everything.

Gary Illyes: And then people freak out.

Danny Sullivan: Yeah. What is it, what are you doing with machine learning? Like, so when you say it’s not being used in the core algorithm. So no one’s getting fired. The machines haven’t taken over the algorithm, you guys are still using an algorithm. You still have people trying to figure out the best way to process signals, and then what do you do with the machine learning; is [it] part of that?

Gary Illyes: They are typically used for coming up with new signals and signal aggregations. So basically, this is a random example and I don’t know if it is real, but let’s say that I would want to see if combining PageRank with Panda and, whatever else, I don’t know, token frequency.

If combining those three in some way would result in better ranking, and for that for example, we could easily use machine learning. And then create the new composite signal. That would be one example.

The other example would be RankBrain, which re-ranks based on historical signals.

But that also is, if you, if you think about it, it’s also a composite signal.

It’s using several signals to come up with a new multiplier for the results that are already ranked by the core algorithm.
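Mechanically, the signal aggregation Illyes describes amounts to fitting a model over existing signal values to produce one composite score. A toy sketch follows; the signal values and relevance labels are invented, and this is not Google's code:

```python
# Toy sketch of "signal aggregation": learn a composite score from existing
# signals (here: pagerank, panda quality, token frequency). Values and labels
# are invented; Google's actual signals and training data are not public.
from sklearn.linear_model import LogisticRegression

# Each row: [pagerank, panda_quality, query_token_frequency]
signals = [
    [0.9, 0.8, 0.7],
    [0.2, 0.9, 0.4],
    [0.8, 0.1, 0.9],
    [0.1, 0.2, 0.1],
]
relevant = [1, 1, 0, 0]  # human-rated relevance for some query

composite = LogisticRegression().fit(signals, relevant)
print("Composite score:", composite.predict_proba([[0.7, 0.6, 0.5]])[0][1])
```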

What else?

Barry Schwartz: Didn’t you first use it as a query refinement? Right? That’s the main thing?

Gary Illyes: I don’t know that … ?

Barry Schwartz: Wasn’t RankBrain all about some type of query understanding and…

Gary Illyes: Well, making sure that for the query we are the best possible result, basically, it is re-ranking in a way.

Barry Schwartz: Danny, did you understand RankBrain to mean, maybe it was just me, to mean, alright someone searched for X, but RankBrain really makes [it] into Xish? And then the queries would be the results.

Danny Sullivan: When it first came out, my understanding was [that] RankBrain was being used for long-tail queries to correspond them to short answers. So somebody comes along and says, Why is the tide super-high sometimes, when I don’t understand — the moon seemed to be very big, and that’s a very unusual query, right? And Google might be going, OK, there’s a lot going on here. How do we unpack this and to where, and then getting the confidence and using typical things where you’d be like, OK, we’ll see if we have all these words, you have a link to whatever. Meanwhile, really what the person is saying is why is the tide high when the moon is full. And that is a more common query. And Google probably has much more confidence in what it’s ranking when it deals with that, and my understanding [is that] RankBrain helped Google better understand that these longer queries corresponded basically to the shorter queries where it had a lot of confidence about the answers.

That was then, that was like what, a year ago or so? At this point, Gary, when you start talking about re-ranking, is that the kind of re-ranking you’re talking about?

Gary Illyes: Yeah.

Danny Sullivan: OK.

Barry Schwartz: All right. So we shouldn’t be classifying all these things as RankBrain, or should we? Like it could be other machine learning.

Gary Illyes: RankBrain is one component in our ranking system. There are over 200 signals that we use, as we said in the beginning, and each of them might become machine learning-based.

But I don’t expect that any time soon, or in the foreseeable future, all of them would become machine learning-based, or that what we call the core algorithm would become machine learning-based. The main reason for that is that debugging machine learning decisions, or AI decisions if you like, is incredibly hard, especially when you have … multiple layers of neural networks. It becomes close to impossible to debug a decision. And that’s very bad for us. For that reason we try to develop new ways to track back decisions. But machine learning can easily obfuscate issues, and that would limit our ability to improve search in general.

Barry Schwartz: So when people say Penguin is now machine learning-based…

Gary Illyes: Penguin is not ML.

Barry Schwartz: OK, there’s a lot of people saying that Penguin [is] machine learning-based.

Gary Illyes: Of course they do. I mean if you think about it, it’s a very sexy word. Right. And if you publish it…

Danny Sullivan: People use it in bars and online all the time. Like hey, machine learning. Oh yeah.

Gary Illyes: But basically, if you publish an article with a title like “Machine learning is now in Penguin” or “Penguin generated by machine learning”, it’s much more likely that people will click on that title, and, well, probably come up with the idea that you are insane or something like that. But it’s much more likely they would visit your site than if you publish something with the title “Penguin has launched.”

Source: searchengineland

Categorized in Search Engine

Google has produced a car that drives itself and an Android operating system that has remarkably good speech recognition. Yes, Google has begun to master machine intelligence. So it should be no surprise that Google has finally started to figure out how to stop bad actors from gaming its crown jewel – the Google search engine. We say finally because it’s something Google has always talked about, but, until recently, has never actually been able to do.

With the improved search engine, SEO experts will have to learn a new playbook if they want to stay in the game.

SEO Wars

In January 2011, there was a groundswell of user complaints, kicked off by Vivek Wadhwa, about Google’s search results being subpar and gamed by black hat SEO experts, people who use questionable techniques to improve search-engine results. By exploiting weaknesses in Google’s search algorithms, these characters made search less helpful for all of us.

We have been tracking the issue for a while. Back in 2007, we wrote about Americans experiencing “search engine fatigue,” as advertisers found ways to “game the system” so that their content appeared first in search results (read more here). And in 2009, we wrote about Google’s shift to providing “answers,” such as maps results and weather above search results.

Even the shift to answers was not enough to end Google’s ongoing war with SEO experts. As we describe in this CNET article from early 2012, it turns out that answers were even easier to monetize than ads. This was one of the reasons Google has increasingly turned to socially curated links.

In the past couple of years, Google has deployed a wave of algorithm updates, including Panda and Panda 2 and Penguin, as well as updates to existing mechanisms such as Query Deserves Freshness. In addition, Google made it harder to figure out what keywords people are using when they search.

The onslaught of algorithm updates has made it increasingly difficult for a host of black hat SEO techniques — such as duplicative content, link farming and keyword stuffing — to work. This doesn’t mean those techniques never work. One look at a query like “payday loans” or “viagra” proves they still do. But these techniques are now more query-dependent, meaning that Google has essentially given a pass to certain verticals that are naturally more overwhelmed with spam. But for the most part, using “SEO magic” to build a content site is no longer a viable long-term strategy.

The New Rules Of SEO

So is SEO over? Far from it. SEO is as important as ever. Understanding Google’s policies and not running afoul of them is critical to maintaining placement on Google search results.

With these latest changes, SEO experts will now need to have a deep understanding of the various reasons a site can inadvertently be punished by Google and how best to create solutions needed to fix the issues, or avoid them altogether.

Here’s what SEO experts need to focus on now:

Clean, well-structured site architecture. Sites should be easy to use and navigate, employ clean URL structures that make hierarchical sense, properly link internally, and have all pages, sections and categories properly labeled and tagged.

Usable Pages. Pages should be simple, clear, provide unique value, and meet the average user’s reason for coming to the page. Google wants to serve up results that will satisfy a user’s search intent. It does not want to serve up results that users will visit, click the back button, and select the next result.

Interesting content. Pages need to have more than straight facts that Google can answer above the search results, so a page needs to show more than the weather or a sports score.

No hidden content. Google sometimes thinks that hidden content is meant to game the system. So be very careful about handling hidden items that users can toggle on and off or creative pagination.

Good mobile experience. Google now penalizes sites that do not have a clean, speedy and presentable mobile experience. Sites need to stop delivering desktop web pages to mobile devices.

Duplicate content. When you think of duplicate content you probably think of content copied from one page or site to another, but that’s not the only form. Things like a URL resolving using various parameters, printable pages, and canonical issues can often create duplicate content issues that harm a site.

Markup. Rich snippets and structured data markup will help Google better understand content, as well as help users understand what’s on a page and why it’s relevant to their query, which can result in higher click-through rates.
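Rich snippets rely on structured data, most commonly a schema.org JSON-LD block embedded in the page. A minimal sketch of generating one, with placeholder values throughout:

```python
# Minimal sketch: emit a schema.org JSON-LD block for an article page so that
# search engines can parse its key facts. All values are placeholders.
import json

article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2013-08-03",
    "description": "A short summary of what the page is about.",
}

print('<script type="application/ld+json">')
print(json.dumps(article_markup, indent=2))
print("</script>")
```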

Google chasing down and excluding content from bad actors is a huge opportunity for web content creators. Creating great content and working with SEO professionals from inception through maintenance can produce amazing results. Some of our sites have even doubled in Google traffic over the past 12 months.

So don’t think of Google’s changes as another offensive in the ongoing SEO battles. If played correctly, everyone will be better off now.

Source: https://techcrunch.com/2013/08/03/won-the-seo-wars-google-has/

Categorized in Search Engine
