
Artificial intelligence has made great progress in helping computers recognize images in photos and recommending products online that you're more likely to buy. But the technology still faces many challenges, especially when it comes to computers remembering things like humans do.

On Tuesday, Apple’s director of AI research, Ruslan Salakhutdinov, discussed some of those limitations. During his talk at an MIT Technology Review conference, however, he steered clear of how his secretive company incorporates AI into products like Siri.


Salakhutdinov, who joined Apple in October, said he is particularly interested in a type of AI known as reinforcement learning, in which researchers teach computers to repeatedly try different actions to figure out the best possible result. Google, for example, used reinforcement learning to help its computers find the best cooling and operating configurations in its data centers, thus making them more energy efficient.
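The trial-and-error loop described here can be sketched with Q-learning, a common reinforcement learning algorithm. The toy "corridor" environment, the parameters, and all names below are invented for illustration; they are not drawn from Apple's or Google's systems.

```python
import random

# Toy environment: 5 states in a row, agent starts at state 0,
# reaching state 4 pays a reward of 1. The agent learns by trying
# actions repeatedly and reinforcing the ones that lead to reward.

N_STATES = 5
ACTIONS = [-1, 1]                 # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply the action; reaching the rightmost state pays a reward of 1."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _episode in range(300):
    s = 0
    for _ in range(100):          # cap the episode length
        if s == N_STATES - 1:
            break
        # epsilon-greedy: usually exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(nxt, x)] for x in ACTIONS) - Q[(s, a)])
        s = nxt

# After training, the greedy policy should always move right, toward the reward
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The same pattern, scaled up enormously, underlies the data-center and Doom examples: try actions, observe outcomes, and reinforce whatever sequence of actions led to the best result.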

Researchers at Carnegie Mellon, where Salakhutdinov is also an associate professor, recently used reinforcement learning to train computers to play the 1990s-era video game Doom, Salakhutdinov explained. The computers learned to quickly and accurately shoot aliens, and also discovered that ducking helps them avoid enemy fire. However, these expert Doom computer systems are not very good at remembering things like the layout of the maze, which keeps them from planning and building strategies, he said.

Part of Salakhutdinov’s research involves creating AI-powered software that memorizes the layouts of virtual mazes in Doom, along with points of reference, in order to locate specific towers. During the game, the software first spots either a red or a green torch, with the color of the torch corresponding to the color of the tower it needs to locate.

Eventually, the software learned to navigate the maze to reach the correct tower. When it discovered the wrong tower, the software backtracked through the maze to find the right one. What was especially noteworthy was that the software was able to recall the color of the torch each time it spotted a tower, he explained.


However, Salakhutdinov said this type of AI software takes “a long time to train” and that it requires enormous amounts of computing power, which makes it difficult to build at large scale. “Right now it’s very brittle,” Salakhutdinov said.

Another area Salakhutdinov wants to explore is teaching AI software to learn more quickly from “few examples and few experiences.” Although he did not mention it, his idea would benefit Apple in its race to create better products in less time.

Some AI experts and analysts believe Apple's AI technologies lag behind those of competitors like Google and Microsoft because of the company's stricter user privacy rules, which limit the amount of data it can use to train its computers. If its software could learn from less data, Apple could satisfy its privacy requirements while still improving its products as quickly as rivals do.

Author : Jonathan Vanian

Source : fortune.com

Published in Others

Tech to aid video search, detection of disease and of fraud

Artificial intelligence has been the secret sauce for some of the biggest technology companies. But technology giant Alphabet Inc.’s Google is betting big on ‘democratising’ artificial intelligence and machine learning and making them available to everyone — users, developers and enterprises.

From detecting and managing deadly diseases and reducing accident risks to discovering financial fraud, Google said that it aimed to improve the quality of life by lowering entry barriers to using these technologies. These technologies would also add a lot of value to self-driving cars, Google Photos’ search capabilities and even Snapchat filters that convert users' images into animated pictures.

“Google’s cloud platform already delivers customer applications to over a billion users every day,” said Fei-Fei Li, chief scientist of AI and machine learning at Google Cloud. “Now if you can only imagine, combining the massive reach of this platform with the power of AI and making it available to everyone.”

No programming

AI aims to build machines that can simulate human intelligence processes, while Stanford University describes machine learning as “the science of getting computers to act without being explicitly programmed.”

At the Google Cloud Next conference in San Francisco this month, Ms. Li announced the availability of the cloud ‘Video Intelligence API’ to developers. The technology was demonstrated on stage while playing a video. The API was not only able to find a dog in the video but also identify it as a dachshund. In another demo, a simple search for “beach” surfaced videos that contained beach clips. Google said the API is the first of its kind, enabling developers to easily search and discover video content by providing information about entities: nouns such as “flower” or “human” and verbs such as “swim” or “fly” inside video content. It can even provide a contextual understanding of when those entities appear. For example, searching for “tiger” would find all the precise shots containing tigers across a video collection in Google Cloud Storage.
 
“Now finally we are beginning to shine the light on the dark matter of the digital universe,” said Ms. Li, who is also the director of the Artificial Intelligence and Vision Labs at Stanford University.
 
The Mountain View, California-based Google has introduced new capabilities for its Cloud Vision API which has already enabled developers to extract metadata from more than one billion images. It offers enhanced optical character recognition capabilities that can extract content from scans of text-heavy documents such as legal contracts, research papers and books. It also detects individual objects and faces within images and finds and reads printed words contained within images. For instance, Realtor.com, a resource for home buyers and sellers, uses Cloud Vision API. This enables its customers to use their smartphone to snap a photo of a home that they’re interested in and get instant information on that property.
 
Google is also aiming to use AI and machine learning to bring healthcare to underserved populations. It uses computer-based intelligence to detect breast cancer, teaching an algorithm to search for cell patterns in tissue slides the same way doctors review them. The Google Research Blog said this method had reached 89% accuracy, exceeding the 73% score of a pathologist working with no time constraint.

Google Research said that pathologists are responsible for reviewing all the biological tissue visible on a slide, and there can be many slides per patient. Each slide comprises over 10 gigapixels when digitised at 40 times magnification. “Imagine having to go through a thousand 10 megapixel photos, and having to be responsible for every pixel,” wrote Martin Stumpe, technical lead, and Lily Peng, product manager, on the Google Research blog.
 
Google feeds large amounts of information to its system and then teaches it to search for patterns using ‘deep learning’, a technique for implementing machine learning. The team found that the computer could grasp the nature of pathology by analysing billions of pictures provided by the Netherlands-based Radboud University Medical Center. Its algorithms were optimised to localise breast cancer that had spread to lymph nodes adjacent to the breast.
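The slide-review problem described above is typically handled by cutting the gigapixel image into small patches and scoring each patch independently. The sketch below illustrates only that sliding-window idea; the 6x6 "slide" is a toy array and `score_patch` is a trivial stand-in, not Google's trained network.

```python
# Sliding-window sketch: a slide too large to classify at once is cut into
# patches, and each patch is scored independently.

def score_patch(patch):
    """Toy stand-in for a tumor classifier: fraction of 'dark' pixels."""
    flat = [px for row in patch for px in row]
    return sum(1 for px in flat if px > 0.5) / len(flat)

def sliding_windows(slide, size, stride):
    """Yield (row, col, patch) for every patch of the slide."""
    for r in range(0, len(slide) - size + 1, stride):
        for c in range(0, len(slide[0]) - size + 1, stride):
            yield r, c, [row[c:c + size] for row in slide[r:r + size]]

# A tiny 6x6 "slide" with a suspicious dark region in the lower-right corner
slide = [[0.0] * 6 for _ in range(6)]
for r in range(3, 6):
    for c in range(3, 6):
        slide[r][c] = 1.0

suspicious = [(r, c) for r, c, p in sliding_windows(slide, size=3, stride=3)
              if score_patch(p) > 0.5]
print(suspicious)   # only the lower-right patch scores high
```

A real pathology system works the same way at vastly larger scale, with a deep network in place of `score_patch` and millions of patches per slide.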
 
The team had earlier applied deep learning to interpret signs of diabetic retinopathy in retinal photographs. The condition is the fastest-growing cause of blindness, with close to 415 million diabetic patients at risk worldwide.
 
“Imagine these kinds of insights spreading to the whole healthcare industry,” said Ms. Li of Google. “What these examples have in common is the transformation from exclusivity to ubiquity. I believe AI can deliver this transformation at a scale we have never seen and imagined before,” she said.
Author : Peerzada Abrar
Source : thehindu.com
Published in Search Engine

The Veritone Platform creates searchable data from media files for faster processing and actionable intelligence

Big Data is everywhere, and it keeps getting bigger. It’s estimated that 90 percent of all data on the internet has been generated in the past two years, more than three-quarters of it audio and video clips, and the pace continues to accelerate exponentially.

Data that has been processed using artificial intelligence tools like the Veritone Platform is easily searched and analyzed, reducing the time and expense needed to solve crimes. (Image: Veritone)

For law enforcement, this creates an information overload of evidence that can’t be easily searched or analyzed. This recent explosion of law enforcement data, particularly video and audio that by its nature is unstructured and unsearchable, means police are collecting information without the tools to manage it or turn it into actionable intelligence.

HOW ARTIFICIAL INTELLIGENCE CAN HELP LAW ENFORCEMENT

Veritone, an artificial intelligence technology company, has built an open platform that provides law enforcement with AI applications called cognitive engines, which can process unstructured data from multiple sources to help police extract actionable intelligence. These cognitive engines include applications for audio transcription, facial recognition and more.

Data that has been processed and correlated using artificial intelligence is easily searchable and enables analysis to sift for patterns, says Dan Merkle, president of Veritone Public Safety, saving countless hours and reducing the time and expense needed to solve crimes.

HOW THE VERITONE AI PLATFORM WORKS

The Veritone Platform automates correlation and analysis by providing a one-stop shop for processing media evidence. Users can upload any file format, choose which cognitive engines to run, and then search the resulting indexed databases for the desired information.

Everything is accessed through a simple web-based user interface – if you can get to a browser, you can use the tools, says Merkle.

Veritone helps law enforcement agencies make sense of overwhelming amounts of unstructured data in three ways:

1. The platform is omnivorous, taking in audio and video from public and private sources from CCTV security video to social media clips to body-worn or dashboard camera video. These disparate data sources are integrated into an indexed data set that can be searched and layered for multi-dimensional correlation. This provides investigators with a way to integrate data from varied sources into a unified pool of actionable intelligence.

2. A variety of cognitive engines can be used to extract specific information such as words, faces, license plates, geolocation, time of day, etc. The system automates analysis to find patterns that provide useful information to investigators. The engines currently available perform with comparable accuracy to processing by a human, and they deliver results much faster. Performance continues to improve as the technology matures.

3. The Veritone Platform exists on the Microsoft Azure Government cloud for secure online access and mobility so that data becomes a dynamic tool for comparison and analysis rather than siloed on individual servers. 

WHY AI IS A BETTER SOLUTION FOR VIDEO EVIDENCE

Veritone’s AI platform automates what is otherwise a tedious series of tasks, says Merkle. Veritone can process and search everything all at once instead of requiring separate analysis of each end-point solution, a slow and costly process.

“With current systems, if you want to find something in there you basically have to pay someone to sit down and listen to it in real time,” Merkle said, “but a transcription cognitive engine can process that audio and turn it into a text file that is then searchable just like any other Word document or structured data that can be searched and correlated against other files, so you’re able to start creating a more complete picture.”
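The workflow Merkle describes (transcribe the audio, then search the text like any other document) can be sketched with a simple inverted index, which maps each word to the files containing it. The file names and transcripts below are invented for illustration and are not Veritone's software.

```python
from collections import defaultdict

# Once audio has been transcribed, plain text search becomes possible.
# Build an inverted index: word -> set of files whose transcript contains it.
transcripts = {
    "interview_001.wav": "the suspect left the scene in a blue sedan",
    "bodycam_017.mp4": "witness described a blue sedan heading north",
    "tipline_042.wav": "caller reported a motorcycle near the bank",
}

index = defaultdict(set)
for filename, text in transcripts.items():
    for word in text.lower().split():
        index[word].add(filename)

def search(term):
    """Return every file whose transcript contains the term."""
    return sorted(index.get(term.lower(), set()))

print(search("sedan"))   # both files mentioning the blue sedan
```

Correlating across files, as the platform does, amounts to intersecting these result sets for multiple terms across many data sources.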

In addition to the investigative boost, AI automation makes responding to public records requests and complicated queries from attorneys much faster and easier, says Merkle.

THE FUTURE OF AI TECHNOLOGY IN LAW ENFORCEMENT

Artificial intelligence for policing is still in the early development of its capabilities and uses, he adds. Veritone monitors more than 3,000 individual cognitive engines in development, adding new and better tools as they become available. 

Sentiment analysis is a newer development in artificial intelligence, and Veritone offers analysis of phrases and words in a transcript for positive or negative sentiment. Merkle says analytic tools based on tone of voice and facial expression are in development and coming soon, and he describes the Veritone Platform as “future proof.”

“Even those of us who are knee-deep in artificial intelligence can’t predict where all the functionality and capabilities are going to be developed,” he said. “Having a platform that can take on any of these cognitive engines as they are developed future proofs the investment so that you don’t have to change out platforms, which is very expensive and difficult to do.”

As the technology matures, Merkle cautions that balancing privacy and transparency will be a key challenge. He recommends that law enforcement agencies consider best practices for use, as well as possible unintended consequences, in order to make sound policy decisions.

For more information about how artificial intelligence is transforming public safety, download Veritone’s free white paper, Artificial Intelligence: Friend or Foe?

Author : PoliceOne BrandFocus Staff

Source : https://www.policeone.com/police-products/investigation/evidence-management/articles/300077006-Analyze-video-evidence-faster-with-artificial-intelligence/

Published in Others

Master cyber criminals, super-trojans, workforce shifts, advanced analytics and more – CBR talks to the experts about how 2017 could prove an even bigger, smarter year for artificial intelligence.

AI certainly arrived with aplomb in 2016, with chatbots, digital assistants, Pokemon, Watson and DeepMind just some of the AI companies and technologies bringing artificial intelligence to the masses. The opportunities, benefits and promise of the technology, so experts say, are vast – limitless even – so what can we expect in the coming year? CBR talked to top AI experts about their artificial intelligence predictions for the new year, with 2017 already shaping up to be even smarter than 2016.

Artificial Intelligence Predictions for 2017:

The Year of the digital Moriarty

Ian Hughes Analyst, Internet of Things, 451 Research

“With so much data flowing from the interconnected world of IoT, higher-end AI is being used to find security holes and anomalies in systems that are too complex for humans to control. The security breaches we have seen so far have been brute-force ones, the equivalent of a digital crowbar.

“AI being used to protect is clearly a benefit, but this technology is increasingly available to anyone, replacing the digital crowbar with a virtual master criminal. 2017 might just see Holmes-versus-Moriarty digital intellects start to battle it out behind the scenes.”


Artificial Intelligence Predictions for 2017:

The Year Machines Steal more human jobs than ever before

Dik Vos, CEO at SQS

“We will continue to see a rise in digital technology over the coming years, and 2017 will be the year we see the likes of Artificial Intelligence (AI) and automated vehicles take the place of low-skilled workers.

“With machines pushing humans out of a number of jobs, including logistics drivers and factory workers, I predict we will see an increased emphasis placed on retraining up to 30 per cent of our working population. People want and need to work, and 2017 will see workers who have lost their jobs through digitalisation start to filter across a variety of other sectors, including manufacturing and labour.”

Artificial Intelligence Predictions for 2017:

The Year of the Buzzword Mart

Hal Lonas, CTO at Webroot

“In 2017 we will see an explosion of companies shopping at Buzzword Mart. The growing attention paid to terminology like Artificial Intelligence and Machine Learning will lead to more firms incorporating “me too” marketing claims into their messaging. Prospective buyers should take these claims with a grain of salt and carefully check the pedigree and experience of firms claiming to use these advanced approaches. Buyers are rightfully confused, and it is difficult to compare, prove, or disprove efficacy in an ecosystem where market messaging is dominated by legacy or unicorn-funded voices. All too often we see legacy vendors bolting barely-functional technology onto bloated and ill-architected heavyweight solutions, leading to a poor end product whose flaws can range from bad user experience to security vulnerabilities.

“This rings especially true for security, where the distinction between legitimate machine learning trained threat intelligence and a second-rate snap-on solution can be the difference between leaking critical customer or IP data files, or blocking the threat before it reaches the network.”

Artificial Intelligence Predictions for 2017:

The Year of AI-as-a-service

Abdul Razack, SVP & Head of Platforms, Big Data and Analytics, Infosys

“AI-as-a-Service will take off: In 2016 AI was applied to solve known problems. As we move forward we will start leveraging AI to gain greater insights into ongoing problems that we didn’t even know existed. Using AI to uncover these “unknown unknowns” will free us to collaborate more and tackle new, interesting and life-changing challenges.”

Artificial Intelligence Predictions for 2017:

The Year CIOs Take the AI Helm

Graeme Thompson, SVP and CIO, Informatica

“With the accelerating pace of business, organisations need to deliver change and make decisions at a rate unheard of just a few years ago. This has made human-paced processing insufficient in the face of the petabytes and exabytes of data that are pouring into the enterprise, driving a rise in machine learning and AI.

“Whereas before, machines would be used to complete a few tasks within a workflow, now they are executing almost the entire process, with humans only required to fill in the gaps.

“Rewind 20 years and we used tools like MapQuest to figure out the shortest distance between two points, but we never would have trusted it to tell us where to go. Now, with new developments like Waze, many of us delegate the navigation of a journey entirely to a machine.


“Before long, humans will no longer be needed to fill the gaps. We’ll find that machines are fully autonomous in the case of driverless cars, for example, because they can store and make sense of much more information than humans can process. However, organisations capitalising on the benefits of AI and machine learning will have to ensure data quality to guarantee the accuracy of these fast responses. Unvalidated or inaccurate data in a machine learning algorithm produces misleading insights or inaccurate actions when automated.

“In 2017, CIOs will be tasked with taking the helm of data driven initiatives and ensuring that data is clean enough to be processed by machines to drive fast and accurate insight and action.”

Author : ELLIE BURNS

Source : http://www.cbronline.com/news/internet-of-things/smart-technology/artificial-intelligence-predictions-2017-expect-ai-service-smart-malware-digital-moriarty/#

Published in Science & Tech

When Larry Page and Sergey Brin invented PageRank back in 1996, they had one simple idea in mind: Organize the web based on “link popularity.”

In short, in the universe of pages existing in a (at the time almost) shapeless web, Page and Brin wanted to organize that information so it would become knowledge. The logic was pretty simple, yet extremely powerful. First, if a page was connected to multiple pages, which in turn linked back to it, that page improved in relevance. Also, if a page had fewer links from other pages, yet those pages were more important, then that also improved the ranking of the linked page.

In other words, how much a page was linked to and how many important pages linked back to it determined a score from 0 to 10. A higher score meant more relevance, and thus a greater chance of being shown by what would eventually become the greatest search engine of all time: Google.
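The scoring idea described above can be sketched as a small power-iteration loop: each page repeatedly redistributes its rank along its outgoing links, so pages with many important backlinks accumulate a high score. The four-page link graph below is invented for illustration; the damping factor of 0.85 is the value commonly associated with the classic PageRank formulation.

```python
# Minimal PageRank sketch: a page's score depends on how many pages
# link to it and how important those linking pages are.

DAMPING = 0.85  # probability of following a link rather than jumping randomly

links = {
    "a": ["b", "c"],   # page "a" links to "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

# Power iteration: repeatedly redistribute each page's rank along its outlinks
for _ in range(50):
    new = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = DAMPING * rank[page] / len(outlinks)
        for target in outlinks:
            new[target] += share
    rank = new

# "c" collects links from a, b and d, so it ends up with the highest score
print(max(rank, key=rank.get))
```

Note how "d", which nothing links to, keeps only the baseline score, while "a" ranks well purely because the important page "c" links back to it, exactly the two effects described in the text.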

Nowadays when you open the internet browser, you are not looking at the web itself, but rather the way Google indexes it. Presently, Google is the most visited website and chances are this scenario will remain unchallenged at least in the near future.

What does that imply? Simply that if Google doesn’t know you exist, de facto you don’t. Thus, how can you make Google know you exist?

Web writing at the time of PageRank

Before 2013, machines and humans used two completely different languages. Much as a bird of paradise's chant is meaningless to an eagle, search engines could not understand human language unless humans changed their writing process.

It was the birth of the web writing industry. This industry was based on one premise: follow what Google says is relevant. That premise generated a cascade of consequences that still affects the web today.

In fact, up to 2013, Google's algorithm took into account over 200 factors to determine the relevance of a piece of content. Yet those factors weren't necessarily in line with what human readers wanted to see. Thus, for the first time in human history, people started to write for machines' sake. That changed when, in 2013, Google launched RankBrain.

How RankBrain and Artificial Intelligence changed web writing

Out of the more than 200 factors that Google accounts for when deciding whether content on the web is relevant, RankBrain became the third most important.

Yet what is revolutionary about RankBrain is that it uses Natural Language Processing (NLP) to translate human language into machine language, leaving the writing process unaffected. Thus, rather than worrying about search engine optimization, writers can finally go back to doing what they have done best for the last five thousand years: writing compelling stories.

Although it may sound trivial for a traditional writer, that was a revolution for web writers.

There is one caveat, though. Instead of thinking in terms of the single article, writers should start thinking in terms of entities. What is an entity, then?

The birth of the Semantic Web

As we saw, before 2013 Google incentivized writing standards that were tailored for machines rather than humans. This scenario changed when RankBrain was launched.

The new algorithm opened the way to a new manner of thinking about the web: a semantic web. Its father was Tim Berners-Lee, who in 2006 called for a transition from the web to a semantic web.

Why is that relevant and what does Semantic Web stand for?

First, the semantic web is a set of rules and standards that make human language readable to machines. Second, there was a transition from the single word to the general context, or, put in technical jargon, from keyword to entity.

In short, it used to be crucial to place a set of keywords within an article to make a piece of content relevant to search engines. Yet that strategy is not enough anymore. Indeed, what makes a piece of content relevant nowadays is the context on which it stands.

In semantic web jargon, an entity is a subject with an unambiguous meaning, because it has a strong contextual foundation. Although strong and solid, that foundation is in constant flux. That makes information structured as an entity far more reliable than any set of keywords. At the same time, an entity is also more powerful, as it adapts to the context in which it stands.

What does that imply? A single entity can replace a whole set of keywords, thus making writing more human.
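One common way entities are expressed on today's web is schema.org markup serialized as JSON-LD, a semantic web standard. The sketch below builds a minimal Article entity in Python; the headline, author name, and topic are entirely illustrative.

```python
import json

# A minimal schema.org "Article" entity. Embedding JSON-LD like this in a
# page tells a search engine what the article is about as a structured
# entity, not just a bag of keywords. All field values are made up.
entity = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Artificial Intelligence Is Changing Web Writing",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "about": {"@type": "Thing", "name": "Semantic Web"},
}

print(json.dumps(entity, indent=2))
```

The "@type" and "about" fields carry the contextual grounding the text describes: a machine reading this knows the page is an article about the semantic web, regardless of which exact keywords appear in the prose.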

The future of web writing

Even though no one really knows how the future will unfold, the hope is that, thanks to Artificial Intelligence, writers will finally be empowered, free to write amazing stories that enrich the human collective intelligence. In other words, instead of drifting from writing to web writing as unconsciously as the human race transitioned from hunter-gathering to farming, it is time to take this step deliberately and intentionally. That means giving web writing's stage to whom it belongs: human beings!

Gennaro Cuofano is a Growth Hacker at WordLift, a software company that helps web writers organize their content and reach more readers while remaining focused on what they do best, writing.

Author : Guest contributor

Source : https://bdtechtalks.com/2017/03/15/how-artificial-intelligence-is-changing-web-writing/

Published in Science & Tech

Earlier this year at Facebook's F8 conference, the company revealed the three innovation pillars that make up its ten-year vision: connectivity, artificial intelligence (AI), and virtual reality (VR). Facebook's Chief Technology Officer Mike Schroepfer is responsible for leading each of them. Despite the ten-year horizon, the company has already made significant progress in each area.

Facebook's progress in AI can be seen in everything from the company's news feed to the way in which people are tagged. The virtual reality innovations are best demonstrated through the Oculus Rift, which I demoed last Thursday. More recently, the company took a great flight forward on the connectivity pillar as Aquila, a long-endurance plane that will fly above commercial aircraft and the weather, took flight in Arizona. The goal is for this V-shaped aircraft, which has a wingspan longer than a Boeing 737 but weighs under 1,000 pounds, to bring basic internet access to the developing world.

I met with Schroepfer at Facebook's headquarters in Menlo Park, and we discussed these three pillars and a variety of other topics, including the company's recruiting methods, how the company maintains its innovative edge, and the logic behind its headquarters - one of the largest open-space offices in the world.

(This interview is excerpted from the 250th broadcast of the Forum on World Class IT. To listen to that unabridged interview, please visit this link. This is the 18th interview in the IT Influencers series. To listen to past interviews with the likes of former Mexican President Vicente Fox, Sal Khan, Sebastian Thrun, Steve Case, and Walt Mossberg, please visit this link.)

Peter High: Earlier this year at F8 2016, Facebook’s developer’s conference, you introduced three innovation pillars. Could you take a moment to highlight each of them?

Mike Schroepfer: We have been, I think pretty uniquely in the industry, very public about our ten-year vision and roadmap, and we have broken it down into three core areas:

  • connectivity, connecting the approximately four billion people in the world who do not have internet access today (the majority of the world);
  • artificial intelligence in solving some of the core problems and building truly intelligent computer systems; and
  • virtual reality and augmented reality, building the next generation of computing systems that have probably the best promise that I am aware of to give me the ability to feel like I am present with someone in the same room, even if they are thousands of miles away.

High: With the abundant resources and brain power at Facebook, how did you choose those three as opposed to others?

Facebook CTO Mike Schroepfer

Facebook CTO Mike Schroepfer

Schroepfer: A lot of this derives directly from [Facebook CEO] Mark [Zuckerberg], and comes from the mission, which is to make the world more open and connected. I think of this simply as using technology to connect people. We sit down and say, “OK, if that is our goal, the thing we are uniquely suited for, what are the big problems of the world?” As you start breaking it down, these fall out quite naturally. The first problem is if a bunch of the world does not even have basic connectivity to the internet, that is a fundamental problem. Then you break it down and realize there are technological solutions to problems; there are things that can happen to dramatically reduce the cost of deploying infrastructure, which is the big limiting factor. It is just an economics problem. Once people are connected, you run into the problem you and I have, which is almost information overload. There is so much information out there, but I have limited time and so I may not be getting the best information. Then there is the realization that the only way to scale that is to start building intelligent systems in AI that can be my real-time assistant all the time, making sure that I do not miss anything that is critical to me and that I do not spend my time on stuff that is less important. The only way we know how to do that at the scale we operate at is artificial intelligence.

So there is connectivity and I am getting the right information, but most of us have friends or family who are not physically next to us all the time, and we cannot always be there for the most important moments in life. The state of the art technology we have for that right now is the video camera. If I want to capture a moment with my kids and remember it forever, that is the best we can do right now. The question is, ‘What if I want to be there live and record those moments in a way that I can relive them twenty years from now as if I was there?’ That is where virtual reality comes in. It gives you the capability of putting a headset on and experiencing it today, and you feel like you are in a real world somewhere else, wherever you want to be.

High: How do you think about those longer term goals, the things that are going to take a lot of stair steps to get to, versus the near-term exhaust of ideas that are going to be commercialized and commercial-ready?

Schroepfer: The biggest thing that I try to emphasize when I talk about the ten-year plan is to be patient when it does not work because no great thing is just a straight linear shot. I have started companies in the past and it is never a straight shot. The time at Facebook is seen from the outside as maybe always things are going great, but there are always a lot of ups and downs along the way. The key thing is not to get discouraged in the times when it is not going well. The other thing is if you can have intermediate milestones along the way which help you understand that you are making progress, that is handy. AI is quite easy because the team has already deployed tons of stuff that dramatically improves the Facebook experience every day. This can be as simple as techniques to help us better rank photos, so you do not miss the important photos in your life, to more fundamental things like earlier this year we launched assistive technology so that if you have a visual impairment and cannot see the billions of images uploaded every day on Facebook we can generate and read you a caption. We could not ever do that at human scale.

These are happening every day and, at the same time, we are working on much harder problems. How do you teach a computer to ingest a bunch of unstructured data, like the contents of a Wikipedia article, and then answer questions about it? That seems straightforward, but that is the frontier of artificial intelligence where you can reason and understand about things that are not pre-digested and pre-optimized by a person who put it in a nice key value format for you. When you look at that, that stuff is farther reaching and rudimentary.

In VR we want to deliver products on the market today so you can go into your Best Buy today and buy an Oculus Rift headset. Realize that that is going to be a relatively small market in 2016. Compared to the billion people today who use Facebook, it is going to seem tiny, but we hope better content will arrive every year. We will release updated systems every few years, and as the systems get better and cheaper, lo and behold, in ten years you will have hundreds of millions of people in VR, rather than a few million. You have to have patience for that ramp, and not get disappointed when it does not have that many users in the short term.

There are intermediate milestones on the connectivity side, like our first flight of Aquila a couple of weeks ago, which was awesome and mind-bending and people were literally in tears. But that was the first flight of our first vehicle. There are many steps between that and a vehicle which is fully ready to perform the end goal, which is providing internet access for people where it is too expensive to lay fiber lines across a large area. So you try to have a long term vision, be patient when things might fail, but then also have these intermediate steps where you can be making progress and adding value and see where things are going. At least that is the way I think of it.

High: To solve the problems you are going after requires not only brain power, but also people with a wide array of backgrounds, perhaps more so than when the company began. You need the whole swath of STEM topics, people with advanced neuroscience specialties, and so forth. Especially here in Silicon Valley, where there is such a war for talent, how do you think about talent acquisition?

Schroepfer: Honestly, this is the real joy of the job. Now we are in this place where certainly it is a challenge to find talent, but magic occurs when it is cross-disciplinary. Let’s talk about flying this aircraft. You have high-end composite materials, electrical systems, and electrical engines. A key part is using latest-generation battery technology together with solar cells. Then we need a communication system that can beam internet up and down between this plane, which means we are doing free space optics and have laser communications and laser transmitters and receivers. You need a software system that can help control flight on this aircraft, an aircraft that has never been built before. We need to build simulations of what this thing is going to look like. You have your machine learning software, hardware, electrical, aeronautics, material science, and all of these things together. When you can get them into a small team—this is a few dozen people together—and you get them clearly oriented on this goal, a lot of great stuff happens. Our core strategy in the company, our technology, is about bringing people together.

High:  How much liberty do people have to explore ideas in areas of their own choosing?

Schroepfer: A lot. A fundamental principle from the beginning of when I was here was that you have smart, well-educated people who are at the top of their field and they can work anywhere. They are going to be best used if you have them working on the thing that they are excited about. If they wake up in the morning and run in to work because they cannot wait to solve the problem, then they are going to produce much better stuff. There are certainly times where we try to convince someone to try to work on something else, or explain to them what is important, but a lot of my job ends up being to get the right talent here, and then to clearly and crisply articulate our end goal: “This is what I want you to work on. There are lots of ways you can contribute. Figure out what part of this makes you the most excited and dive in and go.” Then we can start putting some of the pieces together and build some of this technology. So there is a lot of freedom in what we do.

High: How do you think about building an ecosystem to do this outside of the company, in addition to the team you have built inside the company?

Schroepfer: This is an area where we have tried to innovate a lot. If you go back even five or six years to the foundation of the Open Compute Project, this was like everyone has accepted that open source is a great way to develop software. I used to work at Mozilla and we built a browser in open source. Pretty much every company out there has Linux running somewhere—that is open source; we all contributed to the same Linux kernel, but no one did the same for hardware. We said, “Let’s do that for hardware.” So our data center designs—the buildings themselves, the racks, the servers—everything in there is open for people to collaborate on. Now you have the whole industry collaborating, including, very recently, Google. We are running that same playbook now in connectivity, such as in telecom infrastructure. If we can get the industry together, it ends up benefitting everyone because you share in a bunch of the core IP, you build on the same components, you get economies of scale and production, stuff is cheaper, and it gets more proliferated out there.

When you look at things like AI research, we are aggressively publishing and open sourcing our work. We are at all the major conferences. I was just reviewing with the team recently the core advances they had developed. One they had developed a few years ago was called “memory networks,” a way to attach a sort of short-term memory to a convolutional neural net, and that was a capability we had not had before. That work was published in 2014, and it has been cited ever since. Every month there is a new paper out which shows a new advancement and enhancement to that technique that improves on some basic question answering benchmarks. You look at the aggregate rate of throughput of the entire industry, versus if we just did it ourselves, and it is great because we can fast forward by building on work that just happened and be two years ahead of where we would be if we had just tried to do it ourselves. Fundamental technology can benefit lots of things besides Facebook. Whenever we can do that, we are big fans.
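The soft memory read behind the memory-networks idea — scoring each stored slot against a query, softmaxing the scores, and returning the weighted sum — can be sketched in a few lines. The vectors below are invented for illustration; a real system learns these embeddings from data.

```python
import math

def attention_read(query, memory):
    """Soft read from a memory bank: score each slot by dot product
    with the query, softmax the scores, return the weighted sum."""
    scores = [sum(q * m for q, m in zip(query, slot)) for slot in memory]
    max_s = max(scores)                       # stabilize the softmax
    exps = [math.exp(s - max_s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(memory[0])
    return [sum(w * slot[d] for w, slot in zip(weights, memory))
            for d in range(dim)]

# Two stored "facts" as embeddings; the query is closer to the first,
# so the read vector leans toward the first slot.
memory = [[1.0, 0.0], [0.0, 1.0]]
query = [0.9, 0.1]
print(attention_read(query, memory))
```

In a full memory network this read would feed further layers that produce an answer; the point here is only the differentiable lookup.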

High: How do you keep yourself abreast of new innovations that are happening both at your company and outside your company?

Schroepfer: This is where I think I might have the best job in the industry. First of all, I read everything I can see. But even better than that is I get to go sit down and talk with the teams doing the work and that is by far the best part of my day. Just a few weeks ago, I sat down with the Facebook research team and we did a day-long briefing on all of the work that they are doing. Here is Yann LeCun, who has written the seminal papers on convolutional neural nets in the 1990s, taking the team through his vision of where we are going. The team is reviewing not just the work they are doing, but, because they publish and open source it, also the work other people have built on top of that to solve similar problems. Then I walk away from that and get to talk to some people here building some of the latest social apps with VR and look at what we are trying out there. I get a chance to talk to people in the industry at tech conferences or when I see people doing interesting work, and I get a chance to understand what is happening there. It is a lot of fun. It is honestly hard to keep up with because there is so much stuff happening all the time and it is all fun and fascinating.

High: Having had a chance to walk around headquarters here, I want to ask how you have thought about creating a space that fosters the collaboration necessary, not only within these four walls, but also beyond that, wherever Facebook is?

Schroepfer: You are sitting in a building which is one of the world’s largest single floor office buildings. It is 2,800 people on a single floor with no individualized offices—you see everyone sitting out at desks here. This was designed as an experiment to see how far we could push collaboration if we had literally thousands of people in the same room. There are VC systems in every room so people can VC between our major offices. Obviously, one of the secret weapons we have had, that we have now made available to others, is Facebook itself. Everyone is on Facebook all day, because we work here, and it turns into a great collaboration tool: you have Facebook groups, Facebook messenger, you have all these great ways which are fundamentally about aggregating a bunch of information and being able to keep up with it. It could be ‘let’s see what my friends are up to’, or it could be what sixteen different teams are working on at the same time. The tools work well for that, too.

The subtle thing that I think a lot of people miss is that the key to collaboration is when people can bring their perspective and their point of view and their expertise, and spend the time it takes to understand the other person and empathize with what problem they are working on. This could be across domains. Let’s say I am a machine learning person and I am trying to understand what a medical doctor is trying to do to look at patterns in drug discoveries. The more I can understand about their problem, the more I can help them with that. A lot of what our culture builds is that basic empathy because you are on Facebook and so not only are you seeing what is going on with colleagues at work, but you are seeing what is going on in their real life, too: kids going off to school next week or coming back from vacation. It brings this sense of cohesion I have not ever experienced in any other organization anywhere near our scale.

High: As the organization grows, as it becomes a technology behemoth, to what extent do you worry or think about maintaining that smaller company feel, and the entrepreneurial spirit? Obviously, there are many innovative companies that have come before you that had bright days in the sun and then experienced a lot of rainy days after that.

Schroepfer: That is something we think about all the time. There are pros and cons of growth. The “pro” is that we are working on all this exciting stuff and we have specialists in all these areas. If you are an engineer joining the company and you want to learn more about AI or more about aircraft or virtual reality, we have all of that for you to do today in house, which is awesome and a big plus. But it is challenging to get a larger group of people unified under a common mission. I think this boils down to a couple things that we work on all the time. The first is that you want people to have a real sense of alignment towards the end goal. What can happen in a large organization is it gets federated and everyone is working on different things and so their goals are not aligned. We are clear about our mission – connecting people using technology—and we are clear that if you are excited about that, great, come here, and if you are not, there are lots of other great places to work.

I like to think that a lot of our job is engineering the culture. If you think about engineering as building a system, the way in which we all work together is a system, and we can spend time engineering that—as simple as what is your experience onboarding as a new hire. For most companies, that is a day and a half filling out a bunch of paperwork. For us, it is an intensive, six-week, boot camp onboarding which is designed to get you as exposed to as much technology and people across the company as possible. At the end of those six weeks, no matter what you are working on, you have met hundreds of engineers across the company, and not just in a superficial way. You have written a piece of code and had someone review it, and you have poked around and talked to them and asked them about a bug. So when you exit boot camp, everyone gets to know each other well because whether you have been fifteen years in the industry or you are just out of college, we are both figuring out Facebook, and that builds a bond. That gets spread throughout the company and you build connections between those people in the class, and each one of those individuals met tons of people across the company, so when someone has a question about something a team is doing, it is like “Oh, I met Mary when I was working on this bug. I will go ask her. Maybe she knows who to talk to.” It builds those loose connections that are so critical to building that cohesion.

I could go on and on because we have program after program designed to solve this exact problem, which is building cohesion across the groups. After boot camp, it is hackamonth, where, eighteen months into your job, you take a month and completely rotate to another team. You can learn a new skill, work in a different area entirely, meet a whole new group of people, and just continue this cohesion across the company.

High: You did that yourself, as Sheryl Sandberg and you switched jobs for a week.

Schroepfer: Absolutely. I learned what it was like to be in her shoes, and vice versa. We are just trying to keep it mixed up wherever we can, which I think is helping. It is a hard problem, so there is a lot to work on, but it is something we focus on.

High: If I could take it one step further back in the chain to the recruiting process: how do you evaluate talent? Especially now that the company is shooting for long-term objectives, and given some of the uncertainty and value you are seeking, how do you think about talent acquisition?

Schroepfer: The first obvious thing you want to do is see whether they have the raw skills needed to accomplish the job—and that is a much broader palette than it used to be. It used to be mostly software engineering, but now it could be electrical engineering, mechanical engineering, a specialty in AI. For each of those roles, there are different ways, but basically it boils down to some form of technical verification that they are state of the art in that field. The second is that collaboration is important to us. Part of the interview process is collaborative problem solving. Half of it is did they get the answer right, and the other half is whether they were able to work with the other person in the room towards that answer, because that is the way the real world is. Hollywood has the person in the basement, solo, creating everything, but nothing interesting I have ever seen is made that way. It is always a team. We just want to make sure that this person can communicate clearly and collaborate well with the group, and that is what we look for.

Author : Peter High

Source : http://www.forbes.com/sites/peterhigh/2016/08/15/facebooks-10-year-plan-connectivity-artificial-intelligence-and-virtual-reality/3/#4e865773c3cd

Published in Social

American business publication Fast Company has released its list of the most innovative companies of 2017. The annual list ranks enterprises that “tap both heartstrings and purse strings and use the engine of commerce to make a difference in the world” according to its website.

Amongst the top ten artificial intelligence and machine learning companies are tech giants Google and IBM and startup Iris AI. AI companies also dominated the top 10 global businesses across all sectors with Amazon at number one.

Amazon was selected as the leading company for “offering even more, even faster and even smarter”. The e-commerce and cloud computing giant, which is America’s largest online retailer, is worth $390 billion. Google was a close second due to its array of projects using artificial intelligence that are designed to reflect the search giant’s original mission: organizing the world’s information and making it universally accessible and useful.

Uber, Apple, Snap (the company behind Snapchat), Facebook, Netflix and Twilio were also featured on the list, meaning that nine out of the top ten most innovative companies are using artificial intelligence or machine learning.

The artificial intelligence top 10 featured companies ranging massively in size, from startup Iris AI, with 8 employees, to IBM, with almost 400,000. Here’s the top 10 in order:

01 Google

“When Google CEO Larry Page created a new holding company called Alphabet in 2015, initiatives such as self-driving cars and health tech got divvied up into new companies, and Google became an Alphabet division with a sharper focus on internet services and software. Today’s Google, now led by CEO Sundar Pichai, still dominates web search and online advertising sales. It has the most widely used mobile operating system (Android) and web browser (Chrome). Other venerable offerings, such as YouTube, Gmail, and Google Maps, continue to be the 800-pound gorillas of their respective categories.”

02 IBM

“Over the past decade, IBM has been moving away from its old business of making and selling computer hardware and transforming itself into something a little more modern: a company that offers services like cloud computing and data analytics.

Since Watson became commercially available, the technology has been applied to everything from cancer research, where Watson is used to sort through and decipher millions of medical journals, to retail, where Watson is being used to help shoppers locate exactly what they’re shopping for or similar items. As of 2017, Watson is already available to more than 400 million people and patients.”

03 Baidu

“In 2016, Baidu's CEO Robin Li publicly stated that the company is actively integrating artificial intelligence technologies into all of Baidu's major businesses, including the search engine, as well as new businesses such as autonomous driving. In August, Baidu, Stanford, and the University of Washington released an academic study demonstrating that voice input is more accurate and three times faster than human typing on smartphones.”

04 SoundHound

“In 2016 SoundHound launched its Hound virtual assistant, taking on Siri, Amazon’s Alexa, and the Google Assistant, and there are now 20,000 developers on the Houndify platform, with the service having already been integrated into 150 domains. Among those enterprises who have implemented it are Samsung, Nvidia, Sony’s Xperia, Yelp, and Uber.”

05 Zebra Medical Vision

“For using deep learning to predict and prevent disease”

06 Prisma

“For making masterpieces out of snapshots”

07 Iris AI

“For speeding up scientific research by surfacing relevant data”

08 Pinterest

“For serving up a universe of relevant pins to each and every user”

09 TrademarkVision

“For helping startups make their mark without any legal confusion”

10 Descartes Labs

“For preventing food shortages by predicting crop yields”

Source : http://www.access-ai.com/articles/10-most-innovative-companies-ai-and-machine-learning

Published in Science & Tech

Anonymity networks offer individuals living under repressive regimes protection from surveillance of their internet use. But recently divulged vulnerabilities in the most popular of these networks - Tor - have pushed computer scientists to develop more secure anonymity schemes.

A new anonymity scheme that offers strong security guarantees but uses bandwidth more efficiently than its predecessors is now in the works.

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory, in collaboration with the École Polytechnique Fédérale de Lausanne, will present the new scheme at the Privacy Enhancing Technologies Symposium this month.

During experiments, the researchers' system required only one-tenth as much time as current systems to transfer a large file between anonymous users, according to a post on MIT official website.

Albert Kwon, the first author on the new paper and a graduate student in electrical engineering and computer science, said that as the basic use case, the team thought of anonymous file-sharing, where the sending and receiving ends didn't know each other.

This was done keeping in mind that honeypotting and other similar things - in which spies offer services via an anonymity network in a bid to entice its users - are real challenges. "But we also studied applications in microblogging," Kwon said - something like Twitter where a user can opt to anonymously broadcast his/her message to everyone.

The system designed by Kwon in collaboration with his coauthors - Bryan Ford SM '02 PhD '08, an associate professor of computer and communication sciences at the École Polytechnique Fédérale de Lausanne; David Lazar, a graduate student in electrical engineering and computer science; and his adviser Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science at MIT - makes use of an array of existing cryptographic techniques, but combines them in a novel manner.

For a lot of people, the internet can seem like a frightening and intimidating place, and what they seek is help feeling safer online, especially while performing tasks such as making an online purchase, Anonhq reported.

Shell game

A series of servers known as a 'mixnet' is the core of the system. Before passing received messages on to the next server, each server rearranges the order in which it received them. For instance, if messages A, B and C reach the first server in that order, it might forward them to the second server in a completely different order, such as C, B, A. The second server would do the same before sending them to the third, and so on.

Even if an attacker manages to track the messages' points of origin, he or she will not be able to tell which was which by the time they emerge from the last server. The new system is called 'Riffle' after this reshuffling of the messages.
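The reshuffling described above can be sketched as follows. This is a toy illustration, not Riffle itself: the messages here are plaintext labels, whereas a real mixnet shuffles encrypted messages.

```python
import random

def mix_server(batch, rng):
    """One mixnet hop: re-order the batch so the order messages
    leave reveals nothing about the order they arrived."""
    shuffled = list(batch)
    rng.shuffle(shuffled)
    return shuffled

def run_mixnet(messages, num_servers=3, seed=None):
    """Pass a batch of messages through a chain of mix servers."""
    rng = random.Random(seed)
    batch = list(messages)
    for _ in range(num_servers):
        batch = mix_server(batch, rng)
    return batch

# Messages arrive as A, B, C and leave in some permuted order.
print(run_mixnet(["A", "B", "C"], num_servers=3, seed=42))
```

Each hop only needs its local permutation to be secret; an observer who cannot see inside every server cannot link inputs to outputs.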

Public proof

In a bid to curb message tampering, Riffle makes use of a technique dubbed a verifiable shuffle.

Thanks to onion encryption, the messages a server forwards do not resemble the ones it receives, because it has peeled off a layer of encryption. However, the encryption can be done in a way that allows the server to generate mathematical proof that the messages it sends are indeed valid manipulations of the ones it receives.
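The layer-peeling can be illustrated with a toy cipher. XOR stands in for real encryption purely for demonstration (it offers no security and, unlike a real cipher, its layers commute); the keys and message are invented.

```python
def xor_bytes(data, key):
    """Toy stand-in for a real cipher: XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def onion_wrap(message, server_keys):
    """Sender encrypts in reverse server order, so each hop in the
    chain can peel exactly one layer."""
    wrapped = message
    for key in reversed(server_keys):
        wrapped = xor_bytes(wrapped, key)
    return wrapped

def onion_peel(wrapped, key):
    """What one server does: strip its own layer, leaving output
    that no longer resembles what it received."""
    return xor_bytes(wrapped, key)

keys = [b"abc", b"xyz", b"123"]       # one key per server
ciphertext = onion_wrap(b"hello", keys)
for key in keys:                      # each hop peels one layer
    ciphertext = onion_peel(ciphertext, key)
assert ciphertext == b"hello"         # fully peeled at the last hop
```

A real deployment would use an authenticated cipher per layer; the structural point is only that every hop transforms the message while the final hop recovers the original.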

To verify the proof, it has to be checked against copies of the messages the server received. So with Riffle, users send their initial messages to all the servers in the mixnet at the same time, and the servers then independently check for manipulation.

As long as one server in the mixnet continues to be uncompromised by an attacker, Riffle is cryptographically secure.

Author : Vinay Patel

Source : http://www.universityherald.com/articles/34093/20160712/mit-massachusetts-institute-of-technology-researchschool-of-engineering-computer-science-and-artificial-intelligence-laboratory-csail-computer-science-and-technology-cyber-security.htm

Published in Online Research

Sundar Pichai says that Google is making a big bet on machine learning and artificial intelligence.

Technologies like artificial intelligence and machine learning can make a huge difference to everyday life, and Google is investing in bringing them to "as many people and as fast as possible," its India-born chief Sundar Pichai said on Thursday.

"We are making a big bet on machine learning and artificial intelligence. Advancement in machine learning will make a big difference in many many fields," Pichai said at his alma mater IIT Kharagpur campus on Thursday, while chatting with students.

He pointed out that the ability of computers to do tasks like image recognition, voice recognition or speech recognition is reaching a tipping point.

"So, we are definitely at a point of inflexion," he said, adding that Google is investing a lot in this space and if the investments are sustained over a few years, it will pave the way for the next wave of computing.

Pointing to a paper published by Google recently, Pichai said machine learning can be used to detect diabetic retinopathy, an eye disease which can cause blindness if treatment isn't administered in time.

"This is an early example of the kind of changes that will happen when you apply machine learning to all kinds of fields. Google alone won't do this. What I am excited about is bringing machine learning and AI (artificial intelligence) to as many people and as fast as possible," he said. Pichai said that at Google, the aim is very high and the criterion is building technology that will apply to the lives of billions of people.



On India, he said the public-private partnership (PPP) model has been working well and that the company is a big supporter of the Digital India campaign.

"To really make Google work in India, you need to make it available in as many languages as possible. English is spoken by only a small segment of the population," Pichai said, adding that Google has progressed but wants to work more in rural conditions and in the right dialects.

To improve access to digital world, he said he would love to see cheaper smartphones hit the market.

"You really need to bring the prices of entry-level smartphones down to around $30," he said, adding that connectivity is also extremely important.

He described India as the most dynamic internet market in the world and the second largest one.

"When we built for India, we built for the world," he said, citing the YouTube offline feature which is now available across 80 nations.

In the next 3-4 years, Pichai expects there will be big software companies coming out of India.

When asked by students, he said "You can build for a global market from India."

Pichai said he is convinced that India will become a global player soon.



"I am confident that it will compete with any player in the world. It is growing well as a country and will take a few more years," he said when asked to comment on whether India can take on China.

Author : Kharagpur

Source : http://www.business-standard.com/article/pti-stories/google-bets-big-on-artificial-intelligence-117010500764_1.html

Published in Search Engine

Rather than leading to the violent downfall of humankind, artificial intelligence is helping people around the world do their jobs, including doctors who diagnose sepsis in patients and scientists who track endangered animals in the wild, experts said Thursday (Oct. 13) at the White House Frontiers Conference in Pittsburgh.

Advancements in the field of artificial intelligence (AI) haven't always been met with enthusiasm. Famed astrophysicist Stephen Hawking warned on several occasions that a fully developed AI could destroy the human race, and Hollywood sci-fi movies are rife with fierce robots battling humans for control. But at yesterday's conference — attended by the country's leading researchers, innovators, entrepreneurs and students — scientists explained how newly developed AI is accelerating research and improving lives.

Wildlife preservation

A herd of Grévy's zebras. (Credit: Rich Carey/Shutterstock.com)

Many researchers want to know how many animals are out there and where they live, but "scientists do not have the capacity to do this, and there are not enough GPS collars or satellite tracks in the world," Tanya Berger-Wolf, a professor of computer science at the University of Illinois at Chicago, said at the conference, which was jointly hosted by the University of Pittsburgh and Carnegie Mellon University and was also streamed live online.

Instead, Berger-Wolf and her colleagues developed Wildbook.org, a site that houses an AI system and algorithms. The system inspects photos uploaded online by experts and the public. It can recognize each animal's unique markings, track its habitat range by using GPS coordinates provided by each photo, estimate the animal's age and reveal whether it is male or female, Berger-Wolf said.
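The article does not describe Wildbook's internals, but the GPS coordinates a photo provides typically live in its EXIF metadata as degrees, minutes and seconds; converting them to decimal degrees is a small calculation. The example coordinates below are hypothetical.

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', 'W') into signed decimal degrees."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical photo tag in northern Kenya, within Grévy's zebra range.
lat = dms_to_decimal(0, 12, 0, "N")    # 0.2
lon = dms_to_decimal(37, 25, 12, "E")  # 37.42
print(lat, lon)
```

Collected across thousands of uploads, coordinates like these let researchers map an individual animal's habitat range over time.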

After a massive 2015 photo campaign, Wildbook determined that lions were killing too many babies of the endangered Grévy's zebra in Kenya, prompting local officials to change the lion management program, she said.

"The ability to use images with photo identification is democratizing access to conservation in science," Berger-Wolf said. "We now can use photographs to track and count animals."

Diagnosing sepsis

Sepsis is a complication that is treatable if caught early, but patients can experience organ failure, or even death, if it goes undetected for too long. Now, AI algorithms that scour data on electronic medical records can help doctors diagnose sepsis a full 24 hours earlier, on average, said Suchi Saria, an assistant professor at the Johns Hopkins Whiting School of Engineering. 

Saria shared a story about a 52-year-old woman who came to the hospital because of a mildly infected foot sore. During her stay, the woman developed sepsis — a condition in which chemicals released into the bloodstream to fight infection trigger inflammation. This inflammation can lead to changes in the body, which can cause organ failure or even death, she said.

The woman died, Saria said. But if the doctors had used the AI system, called Targeted Real-Time Early Warning System (TREWScore), they could have diagnosed her 12 hours earlier, and perhaps saved her life, Saria said.

TREWScore also can be used to monitor other conditions, including diabetes and high blood pressure, she noted. "[Diagnoses] may already be in your data," Saria added. "We just need ways to decode them."
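As a rough illustration of how an early-warning score can be computed from record data, here is a toy logistic risk score. The features, weights, and numbers are entirely invented and bear no relation to the real TREWScore model.

```python
import math

def risk_score(features, weights, bias):
    """Toy early-warning score: a logistic function of weighted,
    baseline-normalized vital signs. Higher means more concerning."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: heart rate, temperature, white-cell count,
# each expressed as a z-score against the patient's baseline.
weights = [0.8, 0.6, 1.1]
bias = -2.0
stable = risk_score([0.1, 0.0, -0.2], weights, bias)
deteriorating = risk_score([2.0, 1.5, 2.2], weights, bias)
print(stable, deteriorating)  # the second patient scores far higher
```

A deployed system would learn such weights from labeled records and alert clinicians only when the score crosses a clinically validated threshold.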

Search and rescue

Victims of floods, earthquakes or other disasters can be stranded anywhere, but new AI technology is helping first responders locate them before it's too late.

Until recently, rescuers would try to find victims by looking at aerial footage of a disaster area. But sifting through photos and video from drones is time-intensive, and it runs the risk of the victim dying before help arrives, said Robin Murphy, a professor of computer science and engineering at Texas A&M University.

AI permits computer programmers to write basic algorithms that can examine extensive footage and find missing people in less than 2 hours, Murphy said. The AI can even find piles of debris in flooded areas that may have trapped victims, she added.

In addition, AI algorithms can sift through social media sites, such as Twitter, to learn about missing people and disasters, Murphy said.

Cybersecurity

Finding flaws and attacks on computer code is a manual process, and it's typically a difficult one.

"Attackers can spend months or years developing [hacks]," said Michael Walker, a program manager with the Defense Advanced Research Projects Agency's (DARPA) Information Innovation Office. "Defenders must comprehend that attack and counter it in just minutes."

But AI appears to be up to the challenge. DARPA held its first Cyber Grand Challenge on Aug. 4 in Las Vegas, a competition won by Mayhem, a program created by the Pittsburgh-based startup ForAllSecure.

Walker described how the second-place team Xandra "discovered a new attack in binary code, figured out how it worked, reached out over a network [and] breached the defenses of one of its opponents, a system named Jima. And Jima detected that breach, offered a patch, decided to field it and ended the breach."

The entire episode took 15 minutes. "It all happened before any human being knew that flaw existed," Walker said. The attack happened on a small network, but Walker said he was confident that AI could one day patch bugs and respond to attacks online in the real world.

Restoring touch

Researcher Rob Gaunt prepares Nathan Copeland for a brain-computer interface sensory test. (Credit: UPMC/Pitt Health Sciences Media Relations)

In a landmark event announced Thursday, researchers revealed that a paralyzed man's feelings of touch were restored with a mind-controlled robotic arm and brain chip implants.

A 2004 car accident left the man, Nathan Copeland, with quadriplegia, meaning he couldn't feel or move his legs or lower arms, Live Science reported yesterday. At the Frontiers Conference, Dr. Michael Boninger, a professor in the Department of Physical Medicine and Rehabilitation at the University of Pittsburgh School of Medicine, explained how innovations allowed Copeland to feel sensation in his hand again.

Doctors implanted two small electronic chips into Copeland's brain — one in the sensory cortex, which controls touch, and the other in the motor cortex, which controls movement. During one trial, Copeland was able to control the robotic arm with his thoughts. Even more exciting, Boninger said, was that the man reported feeling the sensation of touch when the researchers touched the robotic hand.

Many challenges remain, including developing a system that has a long battery life and enables full sensation and movement for injured people, he said. "All of this will require AI and machine learning," Boninger said.

Author : Laura Geggel

Source : http://www.livescience.com/56497-artificial-intelligence-intriguing-uses.html

Published in Science & Tech