
Queries provide a data mine for Microsoft's AI developments

Microsoft's Bing search engine has long been a punch line in the tech industry, an also-ran that has never come close to challenging Google's dominant position.

But Microsoft could still have the last laugh, since its service has helped lay the groundwork for its burgeoning artificial intelligence effort, which is helping keep the company competitive as it builds out its post-PC future.

Bing probably never stood a chance at surpassing Google, but its second-place spot is worth far more than the advertising dollars it pulls in with every click. Billions of searches over time have given Microsoft a massive repository of everyday questions people ask about their health, the weather, store hours or directions.

 

“The way machines learn is by looking for patterns in data,” said former Microsoft CEO Steve Ballmer, when asked earlier this year about the relationship between Microsoft's AI efforts and Bing, which he helped launch nearly a decade ago. “It takes large data sets to make that happen.”

Microsoft has spent decades investing in various forms of artificial intelligence research, the fruits of which include its voice assistant Cortana, email-sorting features and the machine-learning algorithms used by businesses that pay for its cloud platform Azure.

It's been stepping up its overt efforts recently, such as with this year's acquisition of Montreal-based Maluuba, which aims to create “literate machines” that can process and communicate information more like humans do.

Some see Bing as the overlooked foundation to those efforts.

“They're getting a huge amount of data across a lot of different contexts – mobile devices, image searches,” said Larry Cornett, a former executive for Yahoo's search engine. “Whether it was intentional or not, having hundreds of millions of queries a day is exactly what you need to power huge artificial intelligence systems.”

Bing started in 2009, a rebranding of earlier Microsoft search engines. Yahoo and Microsoft signed a deal for Bing to power Yahoo's search engine, giving Microsoft access to Yahoo's greater search share, said Cornett, who worked for Yahoo at the time. Similar deals have infused Bing into the search features for Amazon tablets and, until recently, Apple's Siri.

All of this has helped Microsoft better understand language, images and text at a large scale, said Steve Clayton, who as Microsoft's chief storyteller helps communicate the company's AI strategy.

“It's so much more than a search engine for Microsoft,” he said. “It's fuel that helps build other things.”

Bing serves dual purposes, he said, as a source of data to train artificial intelligence and a vehicle to be able to deliver smarter services.

While Google also has the advantage of a powerful search engine, other companies making big investments in the AI race – such as IBM or Amazon – do not.

“Amazon has access to a ton of e-commerce queries, but they don't have all the other queries where people are asking everyday things,” Cornett said.

Neither Bing nor Microsoft's AI efforts have yet made major contributions to the company's overall earnings, though the company repeatedly points out “we are infusing AI into all our products,” including the workplace applications it sells to corporate customers.

The company on Thursday reported fiscal first-quarter profit of $6.6 billion, up 16 percent from a year earlier, on revenue of $24.5 billion, up 12 percent. Meanwhile, Bing-driven search advertising revenue increased by $210 million, or 15 percent, to $1.6 billion – or roughly 7 percent of Microsoft's overall business.

That's OK by current Microsoft CEO Satya Nadella, who nearly a decade ago was the executive tapped by Ballmer to head Bing's engineering efforts.

In his recent autobiography, Nadella describes the search engine as a “great training ground for building the hyper-scale, cloud-first services” that have allowed the company to pivot to new technologies as its old PC-software business wanes.

Source: This article was published at journalgazette.net by Matt O'Brien

Categorized in Search Engine

The CIA is developing AI to advance data collection and analysis capabilities. These technologies are, and will continue to be, used for social media data.

INFORMATION IS KEY

The United States Central Intelligence Agency (CIA) requires large quantities of data, collected from a variety of sources, in order to complete investigations. Since its creation in 1947, intel has typically been gathered by hand. The advent of computers has improved the process, but even more modern methods can still be painstakingly slow. Ultimately, these methods retrieve only minuscule amounts of data compared to what artificial intelligence (AI) can gather.

According to information revealed by Dawn Meyerriecks, the deputy director for technology development with the CIA, the agency currently has 137 different AI projects underway. A large portion of these ventures are collaborative efforts between researchers at the agency and developers in Silicon Valley. But emerging and developing capabilities in AI aren’t just allowing the CIA more access to data and a greater ability to sift through it. These AI programs have taken to social media, combing through countless public records (i.e. what you post online). In fact, a massive percentage of the data collected and used by the agency comes from social media. 

 

As you might know or have guessed, the CIA is no stranger to collecting data from social media, but with AI things are a little different. “What is new is the volume and velocity of collecting social media data,” said Joseph Gartin, head of the CIA’s Kent School. And, as Chris Hurst, the chief operating officer of Stabilitas, put it at the Intelligence Summit, “Human behavior is data and AI is a data model.”

AUTOMATION

According to Robert Cardillo, director of the National Geospatial-Intelligence Agency, in a June speech, “If we were to attempt to manually exploit the commercial satellite imagery we expect to have over the next 20 years, we would need eight million imagery analysts.” He went on to state that the agency aims to use AI to automate about 75% of the current workload for analysts. And, if they use self-improving AIs as they hope to, this process will only become more efficient.

While countries like Russia are still far behind the U.S. in terms of AI development, especially as it pertains to intelligence, there seems to be a global push, if not a race, forward. Knowledge is power, and creating technology capable of extracting, sorting, and analyzing data faster than any human or other AI system certainly sounds like a fast track to the top. As Vladimir Putin recently stated on the subject of AI, “Whoever becomes the leader in this sphere will become the ruler of the world.”

Source: This article was published at futurism.com by Chelsea Gohd

Categorized in Internet Technology

From The Terminator to Blade Runner, pop culture has always leaned towards a chilling depiction of artificial intelligence (AI) and our future with AI at the helm. Recent headlines about Facebook panicking because their AI bots developed a language of their own have us hitting the alarm button once again. Should we really feel unsettled with an AI future?

News flash: that future is here. If you ask Siri, the helpful assistant who magically lives inside your phone, to read text messages and emails to you, find the nearest pizza place or call your mother for you, then you’ve made AI a part of your everyday life. Even current weather forecasting systems, spam filtering programs, and Google’s search engine – among so many other practical applications – are AI-powered. Now, artificial intelligence doesn’t seem that alarming, right?

 

What Is Artificial Intelligence?

AI refers to machine intelligence or a machine’s ability to replicate the cognitive functions of a human being. It has the ability to learn and solve problems. In computer science, these machines are aptly called “intelligent agents” or bots.

Not all AI is alike. In fact, what is considered artificial intelligence has shifted as the technology has developed. There are now three recognized levels on the AI spectrum, all of which we can already experience.

Assisted intelligence – This refers to the automation of basic tasks. Examples include machines in assembly lines.

Augmented intelligence – There is a give and take with augmented intelligence. An AI learns from human input. We, in turn, can make more accurate decisions based on AI information. As Anand Rao of PricewaterhouseCoopers (PwC) Data & Analytics puts it: “There is symmetry with augmented intelligence.”

Autonomous intelligence – This is AI with humans out of the loop. Think self-driving cars and autonomous robots.

Deep Learning

It is only in recent years that a good number of scientists and innovators began to devote their work to artificial intelligence, as technology finally caught up in the form of faster and more powerful GPUs. Industry observers trace this resurgence to 2015, when fast and powerful parallel processing became widely accessible. This was also around the birth of the so-called Big Data movement, when it became possible to store and analyze vast amounts of data.

Thus we reach today, the era of deep learning. Deep learning refers to the use of artificial neural networks (ANNs) to learn from data at multiple layers. It is a branch of machine learning based on learning representations of the data, rather than on task-specific algorithms.
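To make “learning at multiple layers” concrete, here is a minimal sketch (not production code) of a tiny two-layer neural network trained on the XOR problem with plain NumPy; the layer sizes, learning rate and epoch count are illustrative assumptions.

# A minimal sketch of learning at multiple layers: a tiny two-layer
# network trained on XOR. Hyperparameters are illustrative, not tuned.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR is not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(10000):
    # Forward pass: each layer transforms the representation of the data.
    h = sigmoid(X @ W1 + b1)        # hidden layer
    out = sigmoid(h @ W2 + b2)      # output layer

    # Backward pass: gradients of squared error with respect to each weight.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # predictions should approach [0, 1, 1, 0]

Each layer transforms the data into a new representation, and deep learning is essentially this same idea scaled up to many more layers and far larger datasets.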

Deep learning has led the way in revolutionizing analytics and enabling practical applications of AI.

We see it in something as basic as automatic photo-tagging on Facebook, a process developed by Yann LeCun for the company in 2013. Blippar, for its part, came out in 2015 with an augmented reality application that employs deep learning for real-time object recognition.

You can look forward to driverless cars and so much more. In the same way, we can expect AI to be applied further in business, particularly in decision-making.

Artificial Intelligence in Business

According to Dr. John Kelly III, IBM Senior Vice President for Research and Solutions: “The success of cognitive computing will not be measured by Turing tests or a computer’s ability to mimic humans. It will be measured in more practical ways, like return on investment, new market opportunities, diseases cured and lives saved.”

Yes, AI technology isn’t the end but only a means towards effectiveness and efficiency, improved innovative capabilities, and better opportunities. And, we’ve seen this in several industries that have begun to adopt AI into their operations.

According to a survey by Tech Pro Research, up to 24 percent of businesses currently implement or plan on using artificial intelligence. Stand-outs are in the health, financial services and automotive sectors.

In financial services, PwC has put together massive amounts of data from the US Census Bureau, US financial data, and other public licensed sources to create $ecure, a large-scale model of 320 million US consumers’ financial decisions. The model is designed to help financial services companies map buyer personas, simulate “future selves” and anticipate customer behavior. It has enabled these financial services companies to validate real-time business decisions within seconds.

The automotive industry, on the other hand, has developed several AI applications, from vehicle design to marketing and sales decision-making support. For instance, artificial intelligence has led to the design of smarter (even driverless) cars, equipped with multiple sensors that learn and identify patterns. This is put to use through add-on safe-drive features that warn drivers of possible collisions and lane departures.

Like in the financial services sector, AI is used to develop a model of the automobile ecosystem. Here, you have bots that map the decisions made from automotive players, such as car buyers and manufacturers, and transportation services providers. This has helped companies predict the adoption of electric and driverless vehicles, and the implementation of non-restrictive pricing schemes that work on their target market. It has also helped them make better advertising decisions.

 

The key here is how artificial intelligence systems are able to run more than 200,000 GTM (go-to-market) scenarios, instead of just a typical handful. What you get is optimized scenarios that maximize revenues.

It’s a similar case in the fields of retail, marketing and sales. According to Adobe Marketing Cloud Product Manager, John Bates: “For retail companies that want to compete and differentiate their sales from competitors, retail is a hotbed of analytics and machine learning.” AI application development has provided marketers with new and more reliable tools in market forecasting, process automation and decision-making.

AI and Business Decisions

Prior to the resurgence of AI and its eventual commercial application, executives have had to rely on inconsistent and incomplete data. With artificial intelligence, they have data-based models and simulations to turn to.

According to PwC’s Rao, limitless outcome modeling is one of the breakthroughs in today’s AI systems. He reiterates: “There’s an immense opportunity to use AI in all kinds of decision making.”

Today’s AI systems start from zero and feed on a regular diet of big data. This is augmented intelligence in action, which eventually provides executives with sophisticated models as basis for their decision-making.

There are several AI applications that enhance decision-making capacities. Here are some of them:

Marketing Decision-Making with AI

There are many complexities to each marketing decision. One has to know and understand customer needs and desires, and align products to these needs and desires. Likewise, having a good grasp of changing consumer behavior is crucial to making the best marketing decisions, in the short- and long-run.

AI modeling and simulation techniques enable reliable insight into your buyer personas. These techniques can be used to predict consumer behavior. Through a Decision Support System, your artificial intelligence system is able to support decisions through real-time and up-to-date data gathering, forecasting, and trend analysis.
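As an illustration of the forecasting piece of such a decision support system, the hedged sketch below fits a simple purchase-likelihood classifier with scikit-learn; the customer features and synthetic data are invented for the example and stand in for real CRM or survey data.

# A minimal sketch of the predictive piece of a decision support system:
# a classifier that estimates purchase likelihood from a few hypothetical
# customer features (visits, basket value, recency). Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical features: visits last month, average basket value, days since last purchase.
X = np.column_stack([
    rng.poisson(5, n),
    rng.gamma(2.0, 30.0, n),
    rng.integers(1, 90, n),
])
# Synthetic "will buy again" label loosely tied to the features.
logits = 0.3 * X[:, 0] + 0.02 * X[:, 1] - 0.04 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("held-out accuracy:", round(model.score(X_test, y_test), 2))
# Per-customer purchase probabilities can then feed forecasts and campaign decisions.
print("probabilities:", model.predict_proba(X_test[:3])[:, 1].round(2))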

Customer Relationship Management (CRM)

Artificial intelligence within CRM systems enables many automated functions, such as contact management, data recording and analysis, and lead ranking. AI’s buyer persona modeling can also provide you with a prediction of a customer’s lifetime value. Sales and marketing teams can work more efficiently thanks to these features.

Recommendation System

Recommendation systems were first implemented in music content sites and have since been extended to other industries. The AI system learns a user’s content preferences and pushes content that fits those preferences. This can help you reduce bounce rates. Likewise, you can use the information learned by your AI to craft better-targeted content.
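A minimal content-based recommender can be sketched as follows, assuming TF-IDF vectors and cosine similarity; the articles and the user history are made up, and real systems layer collaborative signals on top of this.

# A minimal sketch of a content-based recommender: represent each item as a
# TF-IDF vector, build a profile from what the user has read, and rank the
# remaining items by cosine similarity. The article texts are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = {
    "jazz-history": "a short history of jazz and bebop records",
    "indie-playlists": "new indie rock playlists and live session recordings",
    "budget-travel": "how to plan a budget city break in Europe",
    "vinyl-care": "cleaning and storing vinyl records the right way",
}
user_read = ["jazz-history"]  # items the user has already consumed

ids = list(articles)
vectors = TfidfVectorizer().fit_transform(articles.values())

# User profile = mean of the vectors of items they have read.
profile = np.asarray(vectors[[ids.index(i) for i in user_read]].mean(axis=0))
scores = cosine_similarity(profile, vectors).ravel()

recommendations = sorted(
    (i for i in ids if i not in user_read),
    key=lambda i: scores[ids.index(i)],
    reverse=True,
)
print(recommendations)  # items about records/music should rank first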

Expert System

Artificial intelligence has tried to replicate the knowledge and reasoning methodologies of experts through expert systems, a type of problem-solving software. Expert systems, such as MARKEX (for marketing), apply expert thinking processes to the data provided. Output includes assessments and recommendations for your specific problem.

Automation Efficiency and AI

The automation efficiency lent by artificial intelligence to today’s business processes has gone beyond the assembly lines of the past. In several business functions, such as marketing and distribution, AI has been able to hasten processes and provide decision-makers with reliable insight.

In marketing, for instance, the automation of market segmentation and campaign management has enabled more efficient decision-making and quick action. You get invaluable insight on your customers, which can help you enhance your interactions with them. Marketing automation is one of the main features of a good CRM application.

Distribution automation with the help of AI has also been a key advantage of several retailers. Through AI-supported monitoring and control, retailers can accurately predict and respond to product demand.

An example is the online retail giant, Amazon. In 2012, it acquired Kiva Systems, which developed warehouse robots. Since its implementation, Kiva robots have been tasked with product monitoring and replenishment, and order fulfillment. They can even do the lifting for you. That’s a big jump in Amazon efficiency, compared to the time when humans had to do the grunt work.

Social Computing

Social computing helps marketing professionals understand the social dynamics and behaviors of a target market. Through AI, they can simulate, analyze and eventually predict consumer behavior. These AI applications can also be used to understand and data-mine online social media networks.

Opinion Mining

Opinion mining is a kind of data mining that searches the web for opinions and feelings. It is a way for marketers to know more about how their products are received by their target audience. Manual mining and analyses require long hours. AI has helped shorten this through reliable search and analyses functions.

This form of AI is often used by search engines, which regularly rank people’s interest in specific web pages, websites and products. These bots employ different algorithms to compute a target’s HITS and PageRank scores, among other online scoring systems. Here, hyperlink-based AI is employed, wherein bots seek out clusters of linked pages and treat them as a group sharing a common interest.
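The hub-and-authority idea behind HITS can be sketched in a few lines; the four-page link graph below is invented, and production systems operate on billions of pages rather than a toy matrix.

# A minimal sketch of hyperlink-based scoring in the HITS style: pages that
# many good "hubs" link to become "authorities", and pages that point to many
# good authorities become hubs. The tiny link graph here is hypothetical.
import numpy as np

pages = ["A", "B", "C", "D"]
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}

n = len(pages)
adj = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        adj[pages.index(src), pages.index(dst)] = 1.0

hubs = np.ones(n)
auths = np.ones(n)
for _ in range(50):                      # iterate until the scores settle
    auths = adj.T @ hubs                 # authority = sum of hub scores linking in
    auths /= np.linalg.norm(auths)
    hubs = adj @ auths                   # hub = sum of authority scores linked to
    hubs /= np.linalg.norm(hubs)

for p, a, h in zip(pages, auths.round(2), hubs.round(2)):
    print(f"{p}: authority={a}, hub={h}")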

 

The Future of Business Decision-Making With AI

With no Terminator or Replicant looming in the periphery, there really is no danger in artificial intelligence, only potential. Arguably, there shouldn’t even be the more practical scare of losing people’s jobs to machines. Experts say that AI can actually enhance people’s jobs and allow them to work more efficiently.

And surely, this rings true with respect to decision-making. When decision-makers and business executives have reliable data analyses, recommendations and follow-ups through artificial intelligence systems, they can make better choices for their business and employees. You don’t just enhance the work of individual team members. AI also improves the competitive standing of the business.

The gap lies in developing artificial intelligence systems that can deal with the enormous amount of data currently available. According to the research and advisory firm Gartner, data volumes are set to grow by as much as 800% by 2020, and about 80% of that data will be unstructured: images, emails, audio clips and the like. At this point, there is nothing – neither human nor artificial intelligence – that can sift through this amount of data in order to make it usable for business.

According to IBM’s Dr. Kelly: “This data represents the most abundant, valuable and complex raw material in the world. And until now, we have not had the means to mine it.” He believes that it is companies involved in genomics and oil that will find the means to mine this resource.

He delves further on the future of AI and analytics: “In the end, all technology revolutions are propelled not just by discovery, but also by business and societal need. We pursue these new possibilities not because we can, but because we must.”

Source: This article was published at business2community by Dan Sincavage

Categorized in Business Research

Scientists from Queen Mary University of London (QMUL) have created an artificial intelligence (AI) that uses internet searches to help co-design a word association magic trick.

The computer automatically sources and processes associated words and images required for the novel mind reading card trick which is performed by a magician.

Previously psychological experiments on participants would need to have been carried out by the magician to reveal how the human mind associates certain words and images, but the AI can complete the same job by searching through the internet.

The computer is able to assist in a creative task by taking over some of the workload during the design of the trick and by acting as an aid to prompt further creativity as it can uncover suggestions the magician may not have considered.

The researchers hope this study will introduce the use of computer technology as a natural language data sourcing and processing tool for magic trick design purposes.

 

Professor Peter McOwan from QMUL's School of Electronic Engineering and Computer Science, and co-author of the study, said: "This research is important, as it provides further evidence that computers can be used as aids in creative tasks. Particularly, it contributes to the relatively new field of the science of magic, placing magic in a similar research realm to music and other arts, worthy of investigation and exploration on its own terms."

He added: "New magic tricks are constantly being created. This research provides the magic community with another tool to use to this end, and the scientific community with some further insight into the possible uses and implications of applied computational creativity."

 

The trick is performed with a set of custom playing cards consisting of matching words and images supplied by the computer. During the performance the spectator chooses from two shuffled decks an image card and a word card that form a good match, which the performer can predict thanks to the mathematical properties of a deck of cards and the way the human mind makes mental associations.

Though the algorithm can replace the need for carrying out psychological experiments on volunteers to help determine the mind associations required for the trick, the researchers found that to produce the best results a combination of the algorithm and psychological experiments was ideal.

Similarly the matches of words and images suggested by the algorithm would need to be filtered by the magician before they could be used in the trick.

Dr Howard Williams, co-author of the paper, said: "The association trick is still very much the result of a human creative act, though a computer now stands in as a significant proxy for some of the process.

"Overall, the effect for the spectators is magical, and has been brought about by the blending of human and computational design processes."

Source: This article was published at eurekalert.org

Categorized in Online Research

Artificial intelligence has made great progress in helping computers recognize images in photos and recommending products online that you're more likely to buy. But the technology still faces many challenges, especially when it comes to computers remembering things like humans do.

On Tuesday, Apple’s director of AI research, Ruslan Salakhutdinov, discussed some of those limitations. However, during his talk at an MIT Technology Review conference, he steered clear of how his secretive company incorporates AI into products like Siri.


 

Salakhutdinov, who joined Apple in October, said he is particularly interested in a type of AI known as reinforcement learning, which researchers use to teach computers to repeatedly take different actions to figure out the best possible result. Google, for example, used reinforcement learning to help its computers find the best possible cooling and operating configurations in its data centers, thus making them more energy efficient.
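As a hedged illustration of the technique (not of Google's or Apple's actual systems), the sketch below runs tabular Q-learning on a toy corridor environment: the agent repeatedly tries actions and learns which ones lead to the best long-run result.

# A minimal sketch of reinforcement learning: tabular Q-learning on a tiny
# 1-D corridor where reaching the last position yields a reward. The
# environment is a toy stand-in for the real control problems described above.
import numpy as np

n_states = 6           # positions 0..5; reaching position 5 gives a reward
actions = [-1, +1]     # move left or right
q = np.zeros((n_states, len(actions)))

alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(1)

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        a = rng.integers(len(actions)) if rng.random() < epsilon else int(q[state].argmax())
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[state, a] += alpha * (reward + gamma * q[next_state].max() - q[state, a])
        state = next_state

print(q.argmax(axis=1))  # learned policy favours moving right (index 1) in non-terminal states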

Researchers at Carnegie Mellon, where Salakhutdinov is also an associate professor, recently used reinforcement learning to train computers to play the 1990s-era video game Doom, Salakhutdinov explained. Computers learned to quickly and accurately shoot aliens while also discovering that ducking helps them avoid enemy fire. However, these expert Doom computer systems are not very good at remembering things like the maze's layout, which keeps them from planning and building strategies, he said.

Part of Salakhutdinov’s research involves creating AI-powered software that memorizes the layouts of virtual mazes in Doom and points of references in order to locate specific towers. During the game, the software first spots what's either a red or green torch, with the color of the torch corresponding to the color of the tower it needs to locate.

Eventually, the software learned to navigate the maze to reach the correct tower. When it discovered the wrong tower, the software backtracked through the maze to find the right one. What was especially noteworthy was that the software was able to recall the color of the torch each time it spotted a tower, he explained.


However, Salakhutdinov said this type of AI software takes “a long time to train” and that it requires enormous amounts of computing power, which makes it difficult to build at large scale. “Right now it’s very brittle,” Salakhutdinov said.

Another area Salakhutdinov wants to explore is teaching AI software to learn more quickly from “few examples and few experiences.” Although he did not mention it, his idea would benefit Apple in its race to create better products in less time.

Some AI experts and analysts believe Apple's AI technologies are inferior to competitors like Google or Microsoft because of the company's stricter user privacy rules, which limit the amount of data it can use to train its computers. If Apple could train its computers with less data, it could perhaps satisfy its privacy requirements while still improving its software as quickly as rivals do.

Author : Jonathan Vanian

Source : fortune.com

Categorized in Internet Technology

Tech to aid video search, detection of disease and of fraud

Artificial intelligence has been the secret sauce for some of the biggest technology companies. But technology giant Alphabet Inc.’s Google is betting big on ‘democratising’ artificial intelligence and machine learning and making them available to everyone — users, developers and enterprises.

From detecting and managing deadly diseases and reducing accident risks to discovering financial fraud, Google said that it aimed to improve the quality of life by lowering entry barriers to using these technologies. These technologies would also add a lot of value to self-driving cars, Google Photos’ search capabilities and even Snapchat filters that convert the images of users into animated pictures.

“Google’s cloud platform already delivers customer applications to over a billion users every day,” said Fei-Fei Li, chief scientist of AI and machine learning at Google Cloud. “Now if you can only imagine, combining the massive reach of this platform with the power of AI and making it available to everyone.”

 

No programming

AI aims to build machines that can simulate human intelligence processes, while Stanford University describes machine learning as “the science of getting computers to act without being explicitly programmed.”

At the Google Cloud Next conference in San Francisco this month, Ms. Li announced the availability of the Cloud Video Intelligence API to developers. The technology was demonstrated on stage while playing a video. The API was not only able to find a dog in the video but also identify it as a dachshund. In another demo, a simple search for “beach” threw up videos which had beach clips inside them. Google said the API is the first of its kind, enabling developers to easily search and discover video content by providing information about entities. These include nouns such as “flower” or “human” and verbs such as “swim” or “fly” inside video content. It can even provide contextual understanding of when those entities appear. For example, searching for “Tiger” would find all precise shots containing tigers across a video collection in Google Cloud Storage.
 
“Now finally we are beginning to shine the light on the dark matter of the digital universe,” said Ms. Li, who is also the director of the Artificial Intelligence and Vision Labs at Stanford University.
 
The Mountain View, California-based Google has introduced new capabilities for its Cloud Vision API which has already enabled developers to extract metadata from more than one billion images. It offers enhanced optical character recognition capabilities that can extract content from scans of text-heavy documents such as legal contracts, research papers and books. It also detects individual objects and faces within images and finds and reads printed words contained within images. For instance, Realtor.com, a resource for home buyers and sellers, uses Cloud Vision API. This enables its customers to use their smartphone to snap a photo of a home that they’re interested in and get instant information on that property.
 
Google is also aiming to use AI and machine learning to bring healthcare to the underserved population. It uses the power of computer-based intelligence to detect breast cancer. It does this by teaching the algorithm to search for cell patterns in the tissue slides, the same way doctors review slides. The Google Research Blog said this method had reached 89% accuracy, exceeding the 73% score for a pathologist with no time constraint.

 

Google Research said that pathologists are responsible for reviewing all the biological tissues visible on a slide. However, there can be many slides per patient, and each slide is over 10 gigapixels when digitised at 40 times magnification. “Imagine having to go through a thousand 10 megapixel photos, and having to be responsible for every pixel,” according to the Google Research blog post by Martin Stumpe, Technical Lead, and Lily Peng, Product Manager.
 
Google feeds large amounts of information to its system and then teaches it to search for patterns using ‘deep learning’, a technique to implement machine learning. The team found that the computer could understand the nature of pathology by analysing billions of pictures provided by the Netherlands-based Radboud University Medical Center. Its algorithms were optimised for localisation of breast cancer that had spread to lymph nodes adjacent to the breast.
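The patch-based workflow behind this kind of system can be sketched roughly as follows; this is an illustrative toy model trained on random arrays, not the architecture or data the Google team used.

# A minimal sketch of the patch-based idea: a gigapixel slide is far too big
# to classify at once, so it is cut into small patches and a convolutional
# network scores each patch as tumour vs. normal. Random arrays stand in for
# real labelled patches; the architecture is illustrative only.
import numpy as np
from tensorflow import keras

patch = 64  # pixels per side of each patch (real systems use larger patches)
x_train = np.random.rand(200, patch, patch, 3).astype("float32")
y_train = np.random.randint(0, 2, size=(200,))   # 1 = tumour patch, 0 = normal

model = keras.Sequential([
    keras.Input(shape=(patch, patch, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),  # probability the patch is tumour
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)

# Scoring every patch of a slide yields a tumour-probability heatmap that a
# pathologist can then review, which is the workflow described above.
print(model.predict(x_train[:4], verbose=0).ravel().round(2))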
 
The team had earlier applied deep learning to interpret signs of diabetic retinopathy in retinal photographs. The condition is the fastest-growing cause of blindness, with close to 415 million diabetic patients at risk worldwide.
 
“Imagine these kind of insights spreading to the whole of healthcare industry,” said Ms. Li of Google. “What these examples have in common is the transformation from exclusivity to ubiquity. I believe AI can deliver this transformation at a scale, we have never seen and imagined before,” she said.
Author : Peerzada Abrar
Source : thehindu.com
Categorized in Search Engine

The Veritone Platform creates searchable data from media files for faster processing and actionable intelligence

Big Data is everywhere, and it keeps getting bigger. It’s estimated that 90 percent of all data on the internet has been generated in the past two years, more than three-quarters of it audio and video clips, and the pace continues to accelerate exponentially.

Data that has been processed using artificial intelligence tools like the Veritone Platform is easily searched and analyzed, reducing the time and expense needed to solve crimes. (image/Veritone)

For law enforcement, this creates an information overload of evidence that can’t be easily searched or analyzed. This recent explosion of law enforcement data, particularly video and audio that by its nature is unstructured and unsearchable, means police are collecting information without the tools to manage it or turn it into actionable intelligence.

HOW ARTIFICIAL INTELLIGENCE CAN HELP LAW ENFORCEMENT

Veritone, an artificial intelligence technology company, has built an open platform to provide law enforcement with AI applications called cognitive engines that can process unstructured data from multiple sources to help police extract actionable intelligence. These cognitive engines include applications for audio transcription, facial recognition and more.

Data that has been processed and correlated using artificial intelligence is easily searchable and enables analysis to sift for patterns, says Dan Merkle, president of Veritone Public Safety, saving countless hours and reducing the time and expense needed to solve crimes.

HOW THE VERITONE AI PLATFORM WORKS

The Veritone Platform automates correlation and analysis by providing a one-stop shop for processing media evidence. Users can upload any file format, choose which cognitive engines to run, and then search the resulting indexed databases for the desired information.

Everything is accessed through a simple web-based user interface – if you can get to a browser, you can use the tools, says Merkle.

Veritone helps law enforcement agencies make sense of overwhelming amounts of unstructured data in three ways:

1. The platform is omnivorous, taking in audio and video from public and private sources from CCTV security video to social media clips to body-worn or dashboard camera video. These disparate data sources are integrated into an indexed data set that can be searched and layered for multi-dimensional correlation. This provides investigators with a way to integrate data from varied sources into a unified pool of actionable intelligence.

2. A variety of cognitive engines can be used to extract specific information such as words, faces, license plates, geolocation, time of day, etc. The system automates analysis to find patterns that provide useful information to investigators. The engines currently available perform with comparable accuracy to processing by a human, and they deliver results much faster. Performance continues to improve as the technology matures.

3. The Veritone Platform exists on the Microsoft Azure Government cloud for secure online access and mobility so that data becomes a dynamic tool for comparison and analysis rather than siloed on individual servers. 

WHY AI IS A BETTER SOLUTION FOR VIDEO EVIDENCE

Veritone’s AI platform automates what is otherwise a tedious series of tasks, says Merkle. Veritone can process and search everything all at once instead of requiring separate analysis of each end-point solution, a slow and costly process.

 

“With current systems, if you want to find something in there you basically have to pay someone to sit down and listen to it in real time,” Merkle said, “but a transcription cognitive engine can process that audio and turn it into a text file that is then searchable just like any other Word document or structured data that can be searched and correlated against other files, so you’re able to start creating a more complete picture.”
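The underlying idea is simple to sketch: once audio has been transcribed, an inverted index maps every spoken word back to the recording and timestamp where it occurs. The file names and transcript snippets below are invented for illustration, not taken from Veritone's platform.

# A minimal sketch of why a transcript becomes "searchable just like any other
# Word document": an inverted index from each word to the recordings (and
# timestamps) where it was spoken. The transcripts are hypothetical.
from collections import defaultdict

transcripts = {
    "interview_017.wav": [(12.4, "the suspect drove a blue sedan"),
                          (45.0, "he mentioned the warehouse on fifth street")],
    "call_102.wav":      [(3.1, "meet me at the warehouse after dark")],
}

index = defaultdict(list)
for recording, segments in transcripts.items():
    for start_time, text in segments:
        for word in text.lower().split():
            index[word].append((recording, start_time))

def search(term):
    """Return every (recording, timestamp) where the term was spoken."""
    return index.get(term.lower(), [])

print(search("warehouse"))
# [('interview_017.wav', 45.0), ('call_102.wav', 3.1)]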

In addition to the investigative boost, AI automation makes responding to public records requests and complicated queries from attorneys much faster and easier, says Merkle.

THE FUTURE OF AI TECHNOLOGY IN LAW ENFORCEMENT

Artificial intelligence for policing is still in the early development of its capabilities and uses, he adds. Veritone monitors more than 3,000 individual cognitive engines in development, adding new and better tools as they become available. 

Sentiment analysis is a newer development in artificial intelligence, and Veritone offers analysis of phrases and words in a transcript for positive or negative sentiment. Merkle says analytic tools based on tone of voice and facial expression are in development and coming soon, and he describes the Veritone Platform as “future proof.”

“Even those of us who are knee-deep in artificial intelligence can’t predict where all the functionality and capabilities are going to be developed,” he said. “Having a platform that can take on any of these cognitive engines as they are developed future proofs the investment so that you don’t have to change out platforms, which is very expensive and difficult to do.”

As the technology matures, Merkle cautions that balancing privacy and transparency will be a key challenge. He recommends that law enforcement agencies consider best practices for use, as well as possible unintended consequences, in order to make sound policy decisions.

For more information about how artificial intelligence is transforming public safety, download Veritone’s free white paper, Artificial Intelligence: Friend or Foe?

Author : PoliceOne BrandFocus Staff

Source : https://www.policeone.com/police-products/investigation/evidence-management/articles/300077006-Analyze-video-evidence-faster-with-artificial-intelligence/

Categorized in Internet Technology

Master cyber criminals, super-trojans, workforce shifts, advanced analytics and more – CBR talks to the experts about how 2017 could prove an even bigger, smarter year for artificial intelligence.

AI certainly arrived with aplomb in 2016, with chatbots, digital assistants, Pokemon, Watson and DeepMind just some of the AI companies and technologies bringing artificial intelligence to the masses. The opportunities, benefits and promise of the technology, so experts say, are vast – limitless even – so what can we expect in the coming year? CBR talked to the top AI experts about their artificial intelligence predictions for the new year, with 2017 already shaping up to be even smarter than 2016.

Artificial Intelligence Predictions for 2017:

The Year of the digital Moriarty

 

Ian Hughes Analyst, Internet of Things, 451 Research

“With so much data flowing from the interconnected world of IoT, higher end AI is being used to find security holes and anomalies in systems that are too complex for humans to control. Security breaches we have seen so far have been brute force ones, the equivalent of a digital crow bar.

“AI being used to protect is clearly a benefit, but this technology is increasingly available to anyone, replacing the digital crow bar with a virtual master criminal, 2017 might just see Holmes versus Moriarty digital intellects start to battle it out behind the scenes.”


Artificial Intelligence Predictions for 2017:

The Year Machines Steal More Human Jobs Than Ever Before

Dik Vos, CEO at SQS

“We will continue to see a rise in digital technology over the coming years, and 2017 will be the year we see the likes of Artificial Intelligence (AI) and automated vehicles take the place of low-skilled workers.

With machines pushing humans out of a number of jobs, including logistics drivers and factory workers, I predict we will see an increased emphasis placed on the retraining of up to 30 per cent of our working population. People want and need to work, and 2017 will see those workers who have lost their jobs through digitalisation start to filter across a variety of other sectors, including manufacturing and labour.”

Artificial Intelligence Predictions for 2017:

The Year of the Buzzword Mart

Hal Lonas, CTO at Webroot

“In 2017 we will see an explosion of companies shopping at Buzzword Mart. The growing attention paid to terminology like Artificial Intelligence and Machine Learning will lead to more firms incorporating “me too” marketing claims into their messaging. Prospective buyers should take these claims with a grain of salt and carefully check the pedigree and experience of firms claiming to use these advanced approaches. Buyers are rightfully confused, and it is difficult to compare, prove, or disprove efficacy in an ecosystem where market messaging is dominated by legacy or unicorn-funded voices. All too often we see legacy technology bolting barely-functional technology onto bloated and ill-architected heavy-weight solutions, leading to a poor end product whose flaws can range from bad user experience to security vulnerabilities.

“This rings especially true for security, where the distinction between legitimate machine learning trained threat intelligence and a second-rate snap-on solution can be the difference between leaking critical customer or IP data files, or blocking the threat before it reaches the network.”

 

Artificial Intelligence Predictions for 2017:

The Year of AI-as-a-service

Abdul Razack, SVP & Head of Platforms, Big Data and Analytics, Infosys

“AI-as-a-Service will take off: In 2016 AI was applied to solve known problems. As we move forward we will start leveraging AI to gain greater insights into ongoing problems that we didn’t even know existed. Using AI to uncover these “unknown unknowns” will free us to collaborate more and tackle new, interesting and life-changing challenges.”

Artificial Intelligence Predictions for 2017:

The Year CIOs Take the AI Helm

Graeme Thompson, SVP and CIO, Informatica

“With the accelerating pace of business, organisations need to deliver change and make decisions at a rate unheard of just a few years ago. This has made human-paced processing insufficient in the face of the petabytes and exabytes of data that are pouring into the enterprise, driving a rise in machine learning and AI.

“Whereas before, machines would be used to complete a few tasks within a workflow, now they are executing almost the entire process, with humans only required to fill in the gaps.

“Rewind 20 years and we used tools like MapQuest to figure out the shortest distance between two points, but we never would have trusted it to tell us where to go. Now, with new developments like Waze, many of us delegate the navigation of a journey entirely to a machine.


“Before long, humans will no longer be needed to fill the gaps. We’ll find that machines are fully autonomous in the case of driverless cars, for example, because they can store and make sense of much more information than humans can process. However, organisations capitalising on the benefits of AI and machine learning will have to ensure data quality to guarantee the accuracy of these fast responses. Un-validated or inaccurate data in a machine learning algorithm causes misleading insights or inaccurate actions when automated.

“In 2017, CIOs will be tasked with taking the helm of data driven initiatives and ensuring that data is clean enough to be processed by machines to drive fast and accurate insight and action.”

Author : ELLIE BURNS

Source : http://www.cbronline.com/news/internet-of-things/smart-technology/artificial-intelligence-predictions-2017-expect-ai-service-smart-malware-digital-moriarty/#

Categorized in Internet Technology

When Larry Page and Sergey Brin invented PageRank back in 1996, they had one simple idea in mind: Organize the web based on “link popularity.”

In short, in the universe of pages existing in a (at the time almost) shapeless web, Page and Brin wanted to organize that information to make it become knowledge. The logic was pretty simple, yet extremely powerful. First, if a page was connected to multiple pages, which in turn linked back to it, that page improved in relevance. Also, if a page had fewer links from other pages, yet those pages were more important, then the ranking of the linked page also improved.

 

In other words, how much a page was linked to by others, and how important the pages linking back to it were, determined a score from 0 to 10. A higher score meant more relevance, and thus more chances of being shown by what would eventually become the greatest search engine of all time: Google.
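A simplified version of that scoring scheme can be written in a few lines of Python; the four-page link graph is invented, and the real algorithm adds many refinements on top of this basic power iteration.

# A minimal sketch of the PageRank idea: a page's score grows when many pages
# link to it, and grows more when the linking pages are themselves important.
import numpy as np

pages = ["home", "blog", "docs", "about"]
links = {"home": ["blog", "docs"], "blog": ["home"],
         "docs": ["home", "blog"], "about": ["home"]}  # hypothetical link graph

n = len(pages)
damping = 0.85
rank = np.full(n, 1.0 / n)

for _ in range(100):  # iterate until the scores stop changing much
    new_rank = np.full(n, (1 - damping) / n)
    for src, outs in links.items():
        share = rank[pages.index(src)] / len(outs)   # a page splits its score among its links
        for dst in outs:
            new_rank[pages.index(dst)] += damping * share
    rank = new_rank

for page, score in sorted(zip(pages, rank.round(3)), key=lambda p: -p[1]):
    print(page, score)

# The 0-to-10 figure readers saw in the old Google toolbar was a logarithmic
# rescaling of raw scores like these.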

Nowadays when you open the internet browser, you are not looking at the web itself, but rather the way Google indexes it. Presently, Google is the most visited website and chances are this scenario will remain unchallenged at least in the near future.

What does that imply? Simply that if Google doesn’t know you exist, de facto you don’t. Thus, how can you make Google know you exist?

Web writing at the time of PageRank

Before 2013, machines and humans used two completely different languages. Much as a bird of paradise’s chant means nothing to an eagle, search engines could not understand human language unless humans changed their writing process.

It was the birth of the web writing industry. This industry was based on one premise: follow what Google says is relevant. This premise generated a cascade of consequences that still affect the web today.

In fact, up to 2013, Google’s algorithm took into account over 200 factors to determine the relevance of a piece of content. Yet those factors weren’t necessarily in line with what human readers wanted to see. Thus, for the first time in human history, people started to write for machines’ sake. That changed when, in 2013, Google launched RankBrain.

How RankBrain and Artificial Intelligence changed web writing

Out of the more than 200 factors that Google accounts for when deciding whether content on the web is relevant, RankBrain became the third most important.

Yet what is revolutionary about RankBrain is the fact that it uses Natural Language Processing (NLP) to translate human language into machine language, leaving the writing process unaffected. Thus, rather than worrying about search engine optimization, writers can finally go back to doing what they have done best for the last five thousand years: writing compelling stories.

 

Although it may sound trivial for a traditional writer, that was a revolution for web writers.

There is one caveat, though. Instead of thinking in terms of the single article, writers should start thinking in terms of entities. What is an entity, then?

The birth of the Semantic Web

As we saw, before 2013 Google incentivized writing standards that were tailored for machines rather than humans. This scenario changed when RankBrain was launched.

The new algorithm allowed a new way of thinking about the web to come forward: a semantic web. Its father was Tim Berners-Lee, who in 2006 called for a transition from the web to the semantic web.

Why is that relevant and what does Semantic Web stand for?

First, the semantic web is a set of rules and standards that make human language readable to machines. Second, there was a transition from the single word to the general context, or, put in technical jargon, from keyword to entity.

In the past, to make a piece of content relevant to search engines, it was crucial to place a set of keywords within an article. Yet that strategy is not enough anymore. Indeed, what nowadays makes a piece of content relevant is the context on which it stands.

In semantic web jargon, an entity is a subject which has unambiguous meaning because it has a strong contextual foundation. Although strong and solid, that foundation is in constant flux. That makes information structured as an entity far more reliable than any set of keywords. At the same time, an entity is also more powerful, as it adapts to the context in which it stands.

What does that imply? A single entity can replace a whole set of keywords, thus making writing more human.
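In practice, one common way to hand an entity to a search engine is schema.org markup serialized as JSON-LD; the sketch below builds such a description in Python, with the author name and entity details invented for illustration.

# A minimal sketch of describing content as an entity rather than a bag of
# keywords: schema.org vocabulary serialised as JSON-LD, which search engines
# can read alongside the human-readable article. Details are hypothetical.
import json

entity = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Artificial Intelligence Is Changing Web Writing",
    "about": {                       # the entity the article is really about
        "@type": "Thing",
        "name": "Semantic Web",
        "sameAs": "https://en.wikipedia.org/wiki/Semantic_Web",
    },
    "author": {"@type": "Person", "name": "Jane Doe"},
}

# Embedded in a page inside a <script type="application/ld+json"> tag, this
# gives the article an unambiguous, contextual meaning.
print(json.dumps(entity, indent=2))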

The future of web writing

Even though no one really knows how the future will unfold, the hope is that, thanks to artificial intelligence, writers will finally be empowered, free to write amazing stories that enrich the human collective intelligence. In other words, instead of going from writing to web writing as unconsciously as the human race transitioned from hunter-gathering to farming, it is time to take this step forward deliberately and intentionally. That means giving the web writing stage to whom it belongs: human beings!

Gennaro Cuofano is a Growth Hacker at WordLift, a software company that helps web writers organize their content and reach more readers while remaining focused on what they do best, writing.

Author : Guest contributor

Source : https://bdtechtalks.com/2017/03/15/how-artificial-intelligence-is-changing-web-writing/

Categorized in Internet Technology

Earlier this year at Facebook's F8 conference, the company revealed three innovation pillars that make up the company's ten-year vision: connectivity, artificial intelligence (AI), and virtual reality (VR). Facebook's Chief Technology Officer Mike Schroepfer is responsible for leading each of them. Despite the fact that the vision is ten-years in duration, the company has made significant progress in each.

Facebook's progress in AI can be seen in everything from the company's news feed to the way in which people are tagged. The virtual reality innovations are best demonstrated through the Oculus Rift, which I demo'd last Thursday. More recently, the company made a great flight forward on the connectivity pillar as Aquila, a long-endurance plane that will fly above commercial aircraft and the weather, took flight in Arizona. The goal is for this v-shaped aircraft, which has a wingspan longer than a Boeing 737's but weighs under 1,000 pounds, to bring basic internet access to the developing world.

 

I met with Schroepfer at Facebook's headquarters in Menlo Park, and we discussed these three pillars and a variety of other topics, including the company's recruiting methods, how the company maintains its innovative edge, and the logic behind its headquarters - one of the largest open-space offices in the world.

(This interview is excerpted from the 250th broadcast of the Forum on World Class IT. To listen to that unabridged interview, please visit this link. This is the 18th interview in the IT Influencers series. To listen to past interviews with the likes of former Mexican President Vicente Fox, Sal Khan, Sebastian Thrun, Steve Case, and Walt Mossberg, please visit this link.)

Peter High: Earlier this year at F8 2016, Facebook’s developer’s conference, you introduced three innovation pillars. Could you take a moment to highlight each of them?

Mike Schroepfer: We have been, I think pretty uniquely in the industry, very public about our ten-year vision and roadmap, and we have broken it down into three core areas:

  • connectivity, connecting the approximately four billion people in the world who do not have internet access today (the majority of the world);
  • artificial intelligence in solving some of the core problems and building truly intelligent computer systems; and
  • virtual reality and augmented reality, building the next generation of computing systems that have probably the best promise that I am aware of to give me the ability to feel like I am present with someone in the same room, even if they are thousands of miles away.

High: With the abundant resources and brain power at Facebook, how did you choose those three as opposed to others?

Facebook CTO Mike Schroepfer

Facebook CTO Mike Schroepfer

 

Schroepfer: A lot of this derives directly from [Facebook CEO] Mark [Zuckerberg], and comes from the mission, which is to make the world more open and connected. I think of this simply as using technology to connect people. We sit down and say, “OK, if that is our goal, the thing we are uniquely suited for, what are the big problems of the world?” As you start breaking it down, these fall out quite naturally. The first problem is if a bunch of the world does not even have basic connectivity to the internet, that is a fundamental problem. Then you break it down and realize there are technological solutions to problems; there are things that can happen to dramatically reduce the cost of deploying infrastructure, which is the big limiting factor. It is just an economics problem. Once people are connected, you run into the problem you and I have, which is almost information overload. There is so much information out there, but I have limited time and so I may not be getting the best information. Then there is the realization that the only way to scale that is to start building intelligent systems in AI that can be my real-time assistant all the time, making sure that I do not miss anything that is critical to me and that I do not spend my time on stuff that is less important. The only way we know how to do that at the scale we operate at is artificial intelligence.

So there is connectivity and I am getting the right information, but most of us have friends or family who are not physically next to us all the time, and we cannot always be there for the most important moments in life. The state of the art technology we have for that right now is the video camera. If I want to capture a moment with my kids and remember it forever, that is the best we can do right now. The question is, ‘What if I want to be there live and record those moments in a way that I can relive them twenty years from now as if I was there?’ That is where virtual reality comes in. It gives you the capability of putting a headset on and experiencing it today, and you feel like you are in a real world somewhere else, wherever you want to be.

High: How do you think about those longer term goals, the things that are going to take a lot of stair steps to get to, versus the near-term exhaust of ideas that are going to be commercialized and commercial-ready?

Schroepfer: The biggest thing that I try to emphasize when I talk about the ten-year plan is to be patient when it does not work because no great thing is just a straight linear shot. I have started companies in the past and it is never a straight shot. The time at Facebook is seen from the outside as maybe always things are going great, but there are always a lot of ups and downs along the way. The key thing is not to get discouraged in the times when it is not going well. The other thing is if you can have intermediate milestones along the way which help you understand that you are making progress, that is handy. AI is quite easy because the team has already deployed tons of stuff that dramatically improves the Facebook experience every day. This can be as simple as techniques to help us better rank photos, so you do not miss the important photos in your life, to more fundamental things like earlier this year we launched assistive technology so that if you have a visual impairment and cannot see the billions of images uploaded every day on Facebook we can generate and read you a caption. We could not ever do that at human scale.

These are happening every day and, at the same time, we are working on much harder problems. How do you teach a computer to ingest a bunch of unstructured data, like the contents of a Wikipedia article, and then answer questions about it? That seems straightforward, but that is the frontier of artificial intelligence where you can reason and understand about things that are not pre-digested and pre-optimized by a person who put it in a nice key value format for you. When you look at that, that stuff is farther reaching and rudimentary.

In VR we want to deliver products on the market today so you can go into your Best Buy today and buy an Oculus Rift headset. Realize that that is going to be a relatively small market in 2016. Compared to the billion people today who use Facebook, it is going to seem tiny, but we hope every year better content will occur. We will release updated systems every few years, and as the systems get better and cheaper, lo and behold in ten years you will have hundreds of millions of people in VR, rather than a few million. You have to have patience for that ramp, and not get disappointed when it does not have that many users in the short term.

There are intermediate milestones on the connectivity side, like our first flight of Aquila a couple of weeks ago, which was awesome and mind-bending and people were literally in tears. But that was the first flight of our first vehicle. There are many steps between that and a vehicle which is fully ready to perform the end goal, which is providing internet access for people where it is too expensive to lay fiber lines across a large area. So you try to have a long term vision, be patient when things might fail, but then also have these intermediate steps where you can be making progress and adding value and see where things are going. At least that is the way I think of it.

High: To solve the problems you are going after requires not only brain power, but also people with a wide array of backgrounds, perhaps more so than when the company began. You need the whole swath of STEM topics, people with advanced neuroscience specialties, and so forth. Especially here in Silicon Valley, where there is such a war for talent, how do you think about talent acquisition?

Schroepfer: Honestly, this is the real joy of the job. Certainly it is a challenge to find talent, but magic occurs when the work is cross-disciplinary. Take flying this aircraft. You have high-end composite materials, electrical systems, and electric engines. A key part is using latest-generation battery technology combined with solar cells. Then we need a communication system that can beam internet up and down from the plane, which means we are doing free-space optics, with laser communications and laser transmitters and receivers. You need a software system that can help control flight on this aircraft, an aircraft that has never been built before. We need to build simulations of what this thing is going to look like. You have machine learning software, hardware, electrical engineering, aeronautics, materials science, all of these things together. When you can get those people into a small team, a few dozen people, and get them clearly oriented on this goal, a lot of great stuff happens. Our core strategy as a company, and our technology, is about bringing people together.

High:  How much liberty do people have to explore ideas in areas of their own choosing?

Schroepfer: A lot. A fundamental principle from the beginning of my time here has been that you have smart, well-educated people who are at the top of their field and could work anywhere. They are going to be best used if they are working on the thing they are excited about. If they wake up in the morning and run in to work because they cannot wait to solve the problem, they are going to produce much better stuff. There are certainly times when we try to convince someone to work on something else, or explain to them what is important, but a lot of my job ends up being to get the right talent here, and then to clearly and crisply articulate our end goal: “This is what I want you to work on. There are lots of ways you can contribute. Figure out what part of this makes you the most excited and dive in and go.” Then we can start putting some of the pieces together and build some of this technology. So there is a lot of freedom in what we do.

High: How do you think about building an ecosystem to do this outside of the company, in addition to the team you have built inside the company?

Schroepfer: This is an area where we have tried to innovate a lot. Go back even five or six years to the founding of the Open Compute Project: everyone had accepted that open source is a great way to develop software. I used to work at Mozilla, and we built a browser in open source. Pretty much every company out there has Linux running somewhere; it is open source, and we all contribute to the same Linux kernel. But nobody was doing that for hardware. We said, “Let’s do that for hardware.” So our data center designs, the buildings themselves, the racks, the servers, everything in there is open for people to collaborate on. Now you have the whole industry collaborating, including, very recently, Google. We are running that same playbook now in connectivity, such as in telecom infrastructure. If we can get the industry together, it ends up benefitting everyone: you share a bunch of the core IP, you build on the same components, you get economies of scale in production, stuff gets cheaper, and it proliferates more widely.

When you look at things like AI research, we are aggressively publishing and open sourcing our work. We are at all the major conferences. I was recently reviewing with the team the core advances they had developed. One of them, developed a few years ago, was called “memory networks,” a way to attach a sort of short-term memory to a neural net, and that was a capability we had not had before. That work dates to 2014, and it has been cited many times since. Every month there is a new paper out showing an advancement or enhancement to that technique that improves on some basic question-answering benchmarks. You look at the aggregate throughput of the entire industry, versus doing it all ourselves, and it is great, because we can fast-forward by building on work that just happened and be two years ahead of where we would be on our own. Fundamental technology can benefit lots of things besides Facebook. Whenever we can do that, we are big fans.
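
For readers unfamiliar with the idea, here is a hedged, single-hop toy sketch in PyTorch of the general pattern from the 2014-2015 memory network papers: embed the question, attend over embedded memory sentences, read from memory, and combine the result with the question to score candidate answers. The class name, dimensions, and random data are made up for illustration; this is not Facebook's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHopMemoryNetwork(nn.Module):
    """Toy single-hop memory network for illustration only."""
    def __init__(self, vocab_size: int, embed_dim: int):
        super().__init__()
        self.embed_story_in = nn.Embedding(vocab_size, embed_dim)   # memory "keys"
        self.embed_story_out = nn.Embedding(vocab_size, embed_dim)  # memory "values"
        self.embed_question = nn.Embedding(vocab_size, embed_dim)   # question encoder
        self.answer = nn.Linear(embed_dim, vocab_size)              # score answer words

    def forward(self, story, question):
        # story: (num_sentences, sentence_len) word ids
        # question: (question_len,) word ids
        m = self.embed_story_in(story).sum(dim=1)     # one key vector per sentence
        c = self.embed_story_out(story).sum(dim=1)    # one value vector per sentence
        u = self.embed_question(question).sum(dim=0)  # question vector

        attention = F.softmax(m @ u, dim=0)           # how relevant is each sentence?
        o = (attention.unsqueeze(1) * c).sum(dim=0)   # read from memory
        return self.answer(o + u)                     # logits over the vocabulary

# Toy usage with made-up word ids: 3 memory sentences of 4 tokens each
# and a 3-token question; the output is a score for every vocabulary word.
model = SingleHopMemoryNetwork(vocab_size=50, embed_dim=16)
story = torch.randint(0, 50, (3, 4))
question = torch.randint(0, 50, (3,))
logits = model(story, question)
print(logits.shape)  # torch.Size([50])
```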

High: How do you keep yourself abreast of new innovations that are happening both at your company and outside your company?

Schroepfer: This is where I think I might have the best job in the industry. First of all, I read everything I can. But even better than that, I get to sit down and talk with the teams doing the work, and that is by far the best part of my day. Just a few weeks ago, I sat down with the Facebook research team and we did a day-long briefing on all of the work they are doing. Here is Yann LeCun, who wrote the seminal papers on convolutional neural nets in the 1990s, taking the team through his vision of where we are going. The team reviews not just the work they are doing but, because they publish and open source it, also the work other people have built on top of it to solve similar problems. Then I walk away from that and get to talk to people here building some of the latest social apps for VR and look at what we are trying out there. I get a chance to talk to people in the industry at tech conferences or when I see someone doing interesting work, and to understand what is happening there. It is a lot of fun. It is honestly hard to keep up, because there is so much happening all the time and it is all fascinating.

High: Having had a chance to walk around headquarters here, I want to ask how you have thought about creating a space that fosters the collaboration necessary, not only within these four walls, but also beyond that, wherever Facebook is?

Schroepfer: You are sitting in one of the world’s largest single-floor office buildings. There are 2,800 people on a single floor with no individual offices; you see everyone sitting out at desks here. It was designed as an experiment to see how far we could push collaboration if we had literally thousands of people in the same room. There are video-conferencing systems in every room so people can connect between our major offices. Obviously, one of the secret weapons we have had, and that we have now made available to others, is Facebook itself. Everyone is on Facebook all day, because we work here, and it turns into a great collaboration tool: you have Facebook Groups, Facebook Messenger, all these great ways that are fundamentally about aggregating a bunch of information and being able to keep up with it. It could be ‘let’s see what my friends are up to’, or it could be what sixteen different teams are working on at the same time. The tools work well for that, too.

The subtle thing that I think a lot of people miss is that the key to collaboration is when people can bring their perspective, their point of view, and their expertise, and spend the time it takes to understand the other person and empathize with the problem they are working on. This can happen across domains. Let’s say I am a machine learning person trying to understand what a medical doctor is doing to look for patterns in drug discovery. The more I understand about their problem, the more I can help them with it. A lot of what our culture builds is that basic empathy, because you are on Facebook, so not only are you seeing what is going on with colleagues at work, but you are seeing what is going on in their real life, too: kids going off to school next week, or coming back from vacation. It brings a sense of cohesion I have never experienced in any other organization anywhere near our scale.

High: As the organization grows, as it becomes a technology behemoth, to what extent do you worry or think about maintaining that smaller company feel, and the entrepreneurial spirit? Obviously, there are many innovative companies that have come before you that had bright days in the sun and then experienced a lot of rainy days after that.

Schroepfer: That is something we think about all the time. There are pros and cons to growth. The “pro” is that we are working on all this exciting stuff and we have specialists in all these areas. If you are an engineer joining the company and you want to learn more about AI, aircraft, or virtual reality, we have all of that for you to do in house today, which is awesome and a big plus. But it is challenging to get a larger group of people unified under a common mission. I think this boils down to a couple of things that we work on all the time. The first is that you want people to have a real sense of alignment toward the end goal. What can happen in a large organization is that it gets federated, everyone is working on different things, and their goals are not aligned. We are clear about our mission, connecting people using technology, and we are clear that if you are excited about that, great, come here; if you are not, there are lots of other great places to work.

I like to think that a lot of our job is engineering the culture. If you think about engineering as building a system, the way we all work together is a system, and we can spend time engineering it, down to something as simple as your experience onboarding as a new hire. For most companies, that is a day and a half of filling out paperwork. For us, it is an intensive, six-week boot camp designed to expose you to as much technology and as many people across the company as possible. At the end of those six weeks, no matter what you are working on, you have met hundreds of engineers across the company, and not just in a superficial way. You have written a piece of code and had someone review it, and you have poked around, talked to them, and asked them about a bug. So when you exit boot camp, everyone knows each other well, because whether you have been in the industry for fifteen years or are just out of college, we are all figuring out Facebook, and that builds a bond. That spreads throughout the company: you build connections with the people in your class, and each of those individuals has met tons of people across the company, so when someone has a question about something a team is doing, it is, “Oh, I met Mary when I was working on this bug. I will go ask her. Maybe she knows who to talk to.” It builds those loose connections that are so critical to cohesion.

I could go on and on, because we have program after program designed to solve this exact problem of building cohesion across groups. After boot camp, it is Hackamonth, where, eighteen months into your job, you take a month and rotate completely to another team. You can learn a new skill, work in an entirely different area, meet a whole new group of people, and continue building that cohesion across the company.

High: You did that yourself, when you and Sheryl Sandberg switched jobs for a week.

Schroepfer: Absolutely. I learned what it was like to be in her shoes, and vice versa. We are just trying to keep it mixed up wherever we can, which I think is helping. It is a hard problem, so there is a lot to work on, but it is something we focus on.

High: If I could take it one step further back in the chain to the recruiting process: how do you evaluate talent? Especially now that the company is shooting for long-term objectives, and given some of the uncertainty and value you are seeking, how do you think about talent acquisition?

Schroepfer: The first obvious thing you want to do is see whether they have the raw skills needed to do the job, and that is a much broader palette than it used to be. It used to be mostly software engineering, but now it could be electrical engineering, mechanical engineering, or a specialty in AI. For each of those roles there are different methods, but it basically boils down to some form of technical verification that they are state of the art in their field. The second thing is that collaboration is important to us. Part of the interview process is collaborative problem solving. Half of it is whether they got the answer right, and the other half is whether they were able to work with the other person in the room toward that answer, because that is the way the real world is. Hollywood has the lone person in the basement creating everything, but nothing interesting I have ever seen was made that way. It is always a team. We just want to make sure the person can communicate clearly and collaborate well with the group, and that is what we look for.

Author: Peter High

Source: http://www.forbes.com/sites/peterhigh/2016/08/15/facebooks-10-year-plan-connectivity-artificial-intelligence-and-virtual-reality/3/#4e865773c3cd
