The scientific community worldwide has mobilized with unprecedented speed to tackle the COVID-19 pandemic, and the emerging research output is staggering. Every day, hundreds of scientific papers about COVID-19 come out, in both traditional journals and non-peer-reviewed preprints. There's already far more than any human could possibly keep up with, and more research is constantly emerging.

And it's not just new research. We estimate that there are as many as 500,000 papers relevant to COVID-19 that were published before the outbreak, including papers related to the outbreaks of SARS in 2002 and MERS in 2012. Any one of these might contain the key information that leads to a treatment or a vaccine for COVID-19.

Traditional methods of searching through the research literature just don't cut it anymore. This is why we and our colleagues at Lawrence Berkeley National Lab are using the latest artificial intelligence techniques to build COVIDScholar, a search engine dedicated to COVID-19. COVIDScholar includes tools that pick up subtle clues like similar drugs or research methodologies to recommend relevant research to scientists. AI can't replace scientists, but it can help them gain new insights from more papers than they could read in a lifetime.

Why it matters

When it comes to finding effective treatments for COVID-19, time is of the essence. Scientists spend 23% of their time searching for and reading papers. Every second our search engine can save them is more time to spend making discoveries in the lab and analyzing data.

AI can do more than just save scientists time. Our group's previous work showed that AI can capture latent scientific knowledge from text, making connections that humans missed. There, we showed that AI was able to suggest new, cutting-edge functional materials years before their discovery by humans. The information was there all along, but it took combining information from hundreds of thousands of papers to find it.
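Our earlier materials work relied on word embeddings learned from paper abstracts. As a minimal sketch of that idea (the tiny corpus, tokens, and hyperparameters below are illustrative placeholders, not our production pipeline), gensim's Word2Vec can surface terms that occur in similar contexts even when they never co-occur in a single paper:

```python
# Minimal sketch: learn word embeddings from tokenized abstracts and query
# for latent associations. Corpus and terms are illustrative placeholders.
from gensim.models import Word2Vec

abstracts = [
    ["remdesivir", "inhibits", "rna", "polymerase", "in", "coronaviruses"],
    ["sars", "cov", "replication", "depends", "on", "rna", "polymerase"],
    ["chloroquine", "alters", "endosomal", "ph", "in", "infected", "cells"],
]

# In practice the corpus is hundreds of thousands of abstracts.
model = Word2Vec(abstracts, vector_size=100, window=5, min_count=1, epochs=50)

# Terms used in similar contexts land close together in the vector space,
# which is where "hidden" cross-paper connections come from.
print(model.wv.most_similar("remdesivir", topn=3))
```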

We are now applying the same techniques to COVID-19, to find existing drugs that could be repurposed, genetic links that might help develop a vaccine or effective treatment regimens. We're also starting to build in new innovations, like using molecular structures to help find which drugs are similar to each other, including those that are similar in unexpected ways.


How we do this work

The most important part of our work is the data. We've built web scrapers that collect new papers as they're published from a wide variety of sources, making them available on our website within 15 minutes of their appearance online. We also clean the data, fixing mistakes in formatting and comparing the same paper from multiple sources to find the best version. Our machine learning algorithms then go to work on the paper, tagging it with subject categories and marking work important to COVID-19.
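As one illustration of the cleaning step, the sketch below collapses the same paper scraped from multiple sources into a single best record; the field names and the "longest abstract wins" rule are assumptions for the example, not COVIDScholar's actual schema:

```python
# Minimal sketch: merge duplicate records of the same paper from different
# sources, keeping the most complete version. Schema is illustrative.
import re

def normalize_title(title):
    # Lowercase and strip punctuation so formatting quirks don't split records.
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def merge_records(records):
    merged = {}
    for rec in records:
        key = rec.get("doi") or normalize_title(rec["title"])
        best = merged.get(key)
        # Keep the version with the most complete abstract.
        if best is None or len(rec.get("abstract", "")) > len(best.get("abstract", "")):
            merged[key] = rec
    return list(merged.values())

records = [
    {"title": "A Survey of SARS-CoV-2!", "abstract": ""},
    {"title": "a survey of sars-cov-2", "abstract": "Full abstract text..."},
]
print(merge_records(records))  # one record: the version with the abstract
```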

We're also continuously seeking out experts in new areas. Their input and annotation of data is what allows us to train new AI models.

What's next

So far, we have assembled a collection of over 60,000 papers on COVID-19, and we're expanding the collection daily. We've also built search tools that group research into categories, suggest related research and allow users to find papers that connect different concepts, such as papers that connect a specific drug to the diseases it's been used to treat in the past. We're now building AI algorithms that allow researchers to plug the knowledge extracted from papers into quantitative models for studying topics like protein interactions. We're also starting to dig through the past literature to find hidden gems.
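One way to picture the concept-connecting search is as a graph walk. The sketch below (with made-up entities and paper IDs, using networkx) treats "two entities appear in the same paper" as an edge and finds the chain of papers linking a drug to a symptom:

```python
# Minimal sketch: a co-occurrence graph over concepts, where each edge
# remembers the paper that links its endpoints. Data is made up.
import networkx as nx

G = nx.Graph()
G.add_edge("chloroquine", "malaria", paper="PMID:111")
G.add_edge("malaria", "fever", paper="PMID:222")
G.add_edge("chloroquine", "autophagy", paper="PMID:333")

# Which chain of papers connects a drug to a symptom?
path = nx.shortest_path(G, "chloroquine", "fever")
for a, b in zip(path, path[1:]):
    print(f"{a} -> {b} via {G.edges[a, b]['paper']}")
```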

We hope that very soon, researchers using COVIDScholar will start to identify relationships that they might never have imagined, bringing us closer to effective treatments and a cure for COVID-19.

[Source: This article was published in medicalxpress.com By Amalie Trewartha and John Dagdelen - Uploaded by the Association Member: Barbara larson]

Categorized in Online Research

Google's AI team has released a new tool to help researchers traverse a trove of coronavirus papers, journals, and articles. The COVID-19 research explorer tool is a semantic search interface that sits on top of the COVID-19 Open Research Dataset (CORD-19).

The team says that traditional search engines are adequate for answering queries such as “What are the symptoms of coronavirus?” or “Where can I get tested in my country?”. However, when it comes to more pointed questions from researchers, these search engines and their keyword-based approach fail to deliver accurate results.

Google's new tool helps researchers solve that problem. The CORD-19 database has over 50,000 journal articles and research papers related to coronavirus. However, a simple keyword search wouldn't yield reliable results. So, Google uses Natural Language Understanding (NLU) based semantic search to answer those queries.

NLU is a subset of Natural Language Processing (NLP) that focuses on understanding the meaning and intent behind a query, rather than treating it as a string of keywords.

The COVID-19 research explorer tool not only returns related papers to the query, but it also highlights parts of papers that might provide relevant answers to the question. You can also ask follow-up questions to further narrow down results.

The semantic search is powered by Google's popular BERT language model. In addition, the model has been trained on BioASQ, a biomedical question-answering benchmark, to enhance results.

The team built a hybrid term-neural retrieval model for better results. While the term-based model provides accuracy with search results, the neural model helps with understanding the meaning and context of the query.
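As a rough sketch of how such a hybrid can work (this is not Google's published model; the sentence-transformers encoder named below is just a commonly available stand-in for the neural channel, and the blend weights are arbitrary):

```python
# Minimal sketch: blend a term-based score (TF-IDF) with a neural
# semantic score. Documents, query, and weights are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer

docs = [
    "Clinical features of patients infected with the novel coronavirus",
    "A review of bat coronaviruses and cross-species transmission",
    "Testing sites and public health guidance for respiratory illness",
]
query = "how does the virus jump from animals to humans"

# Term-based channel: rewards exact word overlap.
tfidf = TfidfVectorizer().fit(docs + [query])
term_scores = cosine_similarity(tfidf.transform([query]), tfidf.transform(docs))[0]

# Neural channel: meaning-based similarity, robust to different wording.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs, q_vec = encoder.encode(docs), encoder.encode([query])
neural_scores = cosine_similarity(q_vec, doc_vecs)[0]

# Blend the two channels; 0.3/0.7 is an arbitrary choice here.
scores = 0.3 * term_scores + 0.7 * neural_scores
print(docs[int(np.argmax(scores))])
```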

You can read more technical details about the model here and try out the search explorer here.

[Source: This article was published in sup.news By Ivan Mehta - Uploaded by the Association Member: Wushe Zhiyang]

Categorized in Online Research

No overarching artificial intelligence looms on the horizon, but machine-learning tools can make applications do some magical things.

I was talking with a friend recently about artificial intelligence (AI) and machine learning (ML), and they noted that if you replaced AI or ML with the word magic, many of those discussions would be just as useful and informative as before. That's due to several misunderstandings about the current state of AI, ML, and, more specifically, deep neural networks (DNNs): what ML models actually do, and how ML models are used together.

I hope that those who have been working with ML take kindly to my explanations because they’re targeted at engineers who want to understand and use ML but haven’t gotten through the hype that even ML companies are spouting. More than half of you are looking into ML, but only a fraction is actually incorporating it into products. This number is growing rapidly though.

ML is only a part of the AI field and many ML tools and models are available, being used now, and in development (Fig. 1). DNNs are just a part; other neural-network approaches enter into the mix, but more on that later.


1. Neural networks are just a part of the machine-learning portion of artificial-intelligence research.

Developers should look at ML models more like fast Fourier transforms (FFTs) or Kalman filters. They're building blocks that perform a particular function well and can be combined with similar tools, modules, or models to solve a problem. The idea of stringing black boxes together is appropriate. The difference between an FFT and a DNN model is in the configuration: the former has a few parameters, while a DNN model needs to be trained.

Training for some types of neural networks requires thousands of samples, such as photos. This is often done in the cloud, where large amounts of storage and computation power can be applied. Trained models can then be used in the field since they normally require less storage and computation power than their training counterparts. AI accelerators can be utilized in both instances to improve performance and reduce power requirements.
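A minimal sketch of that cloud-to-field pattern, assuming TensorFlow/Keras for the training side and TensorFlow Lite for the deployable artifact (the data and layer sizes are placeholders):

```python
# Minimal sketch: train in the cloud, then export a compact artifact for
# the field. Dataset and model sizes are placeholders.
import numpy as np
import tensorflow as tf

# Stand-in training data (e.g., sensor windows -> 10 classes).
x = np.random.rand(1000, 32).astype("float32")
y = np.random.randint(0, 10, 1000)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=3, verbose=0)  # the heavy, cloud-side step

# Field-side artifact: far smaller and cheaper to run than training.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization
open("model.tflite", "wb").write(converter.convert())
```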

Rolling a Machine-Learning Model

Most ML models can be trained to provide different results using a different set of training samples. For example, a collection of cat photos can be used with some models to help identify cats.

Models can perform different functions such as detection, classification, and segmentation. These are common chores for image-based tools. Other functions include path optimization, anomaly detection, and providing recommendations.

A single model will not typically deliver all of the processing needed in most applications, and input and output data may benefit from additional processing. For example, noise reduction may be useful for audio input to a model. The noise reduction may be provided by conventional analog or digital filters or there may be an ML model in the mix. The output could then be used to recognize phonemes, words, etc., as the data is massaged until a voice command is potentially recognized. 

Likewise, a model or filter might be used to identify an area of interest in an image. This subset could then be presented to the ML-based identification subsystem and so on (Fig. 2). The level of detail will depend on the application. For example, a video-based door-opening system may need to differentiate between people and animals as well as the direction of movement so that the door only opens when a person is moving toward it.

2. Different tools or ML models can be used to identify areas of interest that are then isolated and processed to distinguish between objects such as people and cars.
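A minimal sketch of that chaining idea follows; both stages are stand-ins for real filters or trained models, and the door-opening rule is the application logic sitting outside the models:

```python
# Minimal sketch: a cheap region-of-interest stage feeds a heavier
# classifier stage. Both stages are placeholders for real models.
import numpy as np

def find_regions_of_interest(frame):
    """Stage 1: cheap heuristic proposing crops worth inspecting."""
    h, w = frame.shape[:2]
    return [frame[: h // 2, : w // 2], frame[h // 2 :, w // 2 :]]

def classify(crop):
    """Stage 2: stand-in for a trained model returning (label, confidence)."""
    return ("person", 0.91) if crop.mean() > 0.5 else ("background", 0.88)

frame = np.random.rand(480, 640)
for crop in find_regions_of_interest(frame):
    label, conf = classify(crop)
    if label == "person" and conf > 0.9:
        print("open the door")  # application logic sits outside the models
```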

Models may be custom-built and pretrained, or created and trained by a developer. Much will depend on the requirements and goals of the application. For example, keeping a machine running may mean tracking the operation of the electric motor in the system. A number of factors can be recorded and analyzed from power provided to the motor to noise and vibration information.

Companies such as H2O.ai and XNor are providing prebuilt or customized models and training for those who don't want to start from scratch or use open-source models that may require integration and customization. H2O.ai has packages like Enterprise Steam and Enterprise Puddle that target specific platforms and services. XNor's AI2Go uses a menu-style approach: developers start by choosing a target platform, like a Raspberry Pi, then an industry, like automotive, and then a use case, such as in-cabin object classification. The final step is to select a model based on latency and memory footprint limitations (Fig. 3).

3. Shown is the tail end of the menu selection process for XNor’s AI2Go. Developers can narrow the search for the ideal model by specifying the memory footprint and latency time.

It’s Not All About DNNs

Developers need to keep in mind a number of factors when dealing with neural networks and similar technologies. Probability is involved, and results from an ML model are typically expressed as percentages. For example, a model trained to recognize cats and dogs may be able to provide a high level of confidence that an image contains a dog or a cat. The confidence will typically be lower when distinguishing a dog from a cat, and lower still when recognizing a particular breed.

The percentages can often improve with additional training, but changes usually aren’t linear. It may be easy to hit the 50% mark and 90% might be a good model. However, a lot of training time may be required to hit 99%.
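In practice this means treating outputs as probabilities and acting only above a threshold, as in this minimal sketch (the logits and the 90% cutoff are arbitrary):

```python
# Minimal sketch: act on a model's output only above a confidence
# threshold, otherwise defer. Values are illustrative.
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

labels = ["cat", "dog", "fox"]
probs = softmax(np.array([2.1, 1.9, 0.2]))  # cat vs. dog is a close call

best = int(np.argmax(probs))
if probs[best] >= 0.90:
    print("confident:", labels[best])
else:
    # Low-margin results are where extra training data, or another
    # sensor (as noted below), earns its keep.
    print(f"uncertain: {labels[best]} ({probs[best]:.0%}) -- defer")
```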

The big question is: “What are the application requirements and what alternatives are there in the decision-making process?” It’s one reason why multiple sensors are used when security and safety are important design factors.

DNNs have been popular because of the availability of open-source solutions, including platforms like TensorFlow and Caffe. They have found extensive hardware and software support from the likes of Xilinx, NVIDIA, Intel, and so on, but they’re not the only types of neural-network tools available. Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spiking neural networks (SNNs) are some of the other options available.

SNNs are used by BrainChip and Eta Compute. BrainChip’s Akida Development Environment (ADE) is designed to support SNN model creation. Eta Compute augments its ultra-low-power Cortex-M3 microcontroller with SNN hardware. SNNs are easier to train than DNNs and their ilk, although there are tradeoffs for all neural-network approaches.

Neurala’s Lifelong-DNN (LDNN) is another ML approach that’s similar to DNNs with the lower training overhead of SNNs. LDNN is a proprietary system developed over many years. It supports continuous learning using an approximation of lightweight backpropagation that allows learning to continue without the need to retain the initial training information. LDNN also requires fewer samples to reach the same level of training as a conventional DNN.  

There’s a tradeoff in precision and recognition levels compared to a DNN, but such differences are similar to those involving SNNs. It’s not possible to make direct comparisons between systems because so many factors are involved, including training time, samples, etc.

LDNN can benefit from AI acceleration provided by general-purpose GPUs (GPGPUs). SNNs are even more lightweight, making them easier to use on microcontrollers. Even so, DNNs can run on microcontrollers and low-end DSPs as long as the models aren't too demanding. Image processing may not be practical, but tracking anomalies on a motor-control system could be feasible.

Overcoming ML Challenges

There are numerous challenges when dealing with ML. For example, overfitting is a problem experienced by training-based solutions. This occurs when the models work well with data similar to the training data, but poorly on data that’s new. LDNN uses an automatic, threshold-based consolidation system that reduces redundant weight vectors and resets the weights while preserving new, valid outliers.  
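The standard diagnostic for the general case is a held-out validation split; this minimal sketch (not Neurala's proprietary consolidation scheme) shows the classic overfitting signature:

```python
# Minimal sketch: a deep tree memorizes noise labels, so training accuracy
# is near-perfect while validation accuracy stays at chance level.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(500, 10)
y = np.random.randint(0, 2, 500)  # pure-noise labels: nothing to learn
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_tr, y_tr)

print("train:", model.score(X_tr, y_tr))    # ~1.00
print("valid:", model.score(X_val, y_val))  # ~0.50 -> overfitting
```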

ML models can address many tasks successfully with high accuracy. However, that doesn't mean every task can be accommodated, whether it's a conventional classification or segmentation problem or something else. Sometimes changing models, or developing new ones, can help. This is where data engineers can come in handy, though they tend to be rare and expensive.

Debugging models can also be a challenge. ML module debugging is much different than debugging a conventional program. Debugging models that are working within an application is another issue. Keep in mind that models will often have an accuracy of less than 100%; therefore, applications need to be designed to handle these conditions. This is less of an issue for non-critical applications. However, apps like self-driving cars will require redundant, overlapping systems.

Avalanche of Advances

New systems continue to come out of academia and research facilities. For example, “Learning Sensorimotor Control with Neuromorphic Sensors: Toward Hyperdimensional Active Perception” is a paper out of the University of Maryland's engineering department. Anton Mitrokhin, Peter Sutor Jr., Cornelia Fermüller, and computer science professor Yiannis Aloimonos developed a hyperdimensional pipeline for integrating sensor data, ML analysis, and control. It uses its own hyperdimensional memory system.

ML has been progressing like no other programming tool before it. Improvements have been significant even without turning to specialized hardware. Part of this is due to improved software support and optimizations that increase accuracy or performance while reducing hardware requirements. The challenge for developers is determining what hardware to use, what ML tools to use, and how to combine them to address their application.

It's worth building systems now rather than waiting for the next improvement. Some platforms will be upward-compatible; others may not be. Going with a hardware-accelerated solution will limit the ML models that can be supported, but it brings significant performance gains, often multiple orders of magnitude.

Systems that employ ML aren’t magic and their application can use conventional design approaches. They do require new tools and debugging techniques, so incorporating ML for the first time shouldn’t be a task taken lightly. On the other hand, the payback can be significant and ML models may often provide the support that’s unavailable with conventional programming techniques and frameworks.

As noted, a single ML model may not be what's needed for a particular application. Combining models, filters, and other modules requires an understanding of each, so don't assume it will simply be a matter of choosing an ML model and doing a limited amount of training. That may be adequate in some instances, especially if the application matches an existing model, but don't count on it until you try it out.

[Source: This article was published in electronicdesign.com By William G. Wong - Uploaded by the Association Member: Jay Harris]

Categorized in Science & Tech

Some search optimizers like to complain that “Google is always changing things.” In reality, that’s only a half-truth; Google is always coming out with new updates to improve its search results, but the fundamentals of SEO have remained the same for more than 15 years. Only some of those updates have truly “changed the game,” and for the most part, those updates are positive (even though they cause some major short-term headaches for optimizers).

Today, I’ll turn my attention to semantic search, a search engine improvement that came along in 2013 in the form of the Hummingbird update. At the time, it sent the SERPs into a somewhat chaotic frenzy of changes but introduced semantic search, which transformed SEO for the better—both for users and for marketers.

What Is Semantic Search?

I'll start with a brief primer on what semantic search actually is, in case you aren't familiar. The so-called Hummingbird update came out back in 2013 and introduced a new way for Google to consider user-submitted queries. Up until that point, the search engine was built heavily on keyword interpretation; Google would look at specific sequences of words in a user's query, then find matches for those keyword sequences in pages on the internet.

Search optimizers built their strategies around this tendency by targeting specific keyword sequences, and using them, verbatim, on as many pages as possible (while trying to seem relevant in accordance with Panda’s content requirements).

Hummingbird changed this. Now, instead of finding exact matches for keywords, Google looks at the language used by a searcher and analyzes the searcher's intent. It then uses that intent to find the most relevant search results. It's a subtle distinction, but one that demanded a new approach to SEO; rather than focusing on specific, exact-match keywords, you had to start creating content that addressed a user's needs, using more semantic phrases and synonyms for your primary targets.

Voice Search and Ongoing Improvements

Of course, since then, there’s been an explosion in voice search—driven by Google’s improved ability to recognize spoken words, its improved search results, and the increased need for voice searches with mobile devices. That, in turn, has fueled even more advances in semantic search sophistication.

One of the biggest advancements, an update called RankBrain, utilizes an artificial intelligence (AI) algorithm to better understand the complex queries that everyday searchers use, and provide more helpful search results.

Why It's Better for Searchers

So why is this approach better for searchers?

  • Intuitiveness. Most of us have already taken for granted how intuitive searching is these days; if you ask a question, Google will have an answer for you—and probably an accurate one, even if your question doesn’t use the right terminology, isn’t spelled correctly, or dances around the main thing you’re trying to ask. A decade ago, effective search required you to carefully calculate which search terms to use, and even then, you might not find what you were looking for.
  • High-quality results. SERPs are now loaded with high-quality content related to your original query—and oftentimes, a direct answer to your question. Rich answers are growing in frequency, in part to meet the rising utility of semantic search, and it’s giving users faster, more relevant answers (which encourages even more search use on a daily basis).
  • Content encouragement. The nature of semantic search forces search optimizers and webmasters to spend more time researching topics to write about and developing high-quality content that's going to serve search users' needs. That means there's a bigger pool of content developers than ever before, and they're working harder to churn out readable, practical, and in-demand content for public consumption.

Why It's Better for Optimizers

The benefits aren’t just for searchers, though—I’d argue there are just as many benefits for those of us in the SEO community (even if it was an annoying update to adjust to at first):

  • Less pressure on keywords. Keyword research has been one of the most important parts of the SEO process since search first became popular, and it’s still important to gauge the popularity of various search queries—but it isn’t as make-or-break as it used to be. You no longer have to ensure you have exact-match keywords at exactly the right ratio in exactly the right number of pages (an outdated concept known as keyword density); in many cases, merely writing about the general topic is incidentally enough to make your page relevant for your target.
  • Value optimization. Search optimizers now get to spend more time optimizing their content for user value, rather than keyword targeting. Semantic search makes it harder to accurately predict and track how keywords are specifically searched for (and ranked for), so we can, instead, spend that effort on making things better for our core users.
  • Wiggle room. Semantic search considers synonyms and alternative wordings just as much as it considers exact match text, which means we have far more flexibility in our content. We might even end up optimizing for long-tail phrases we hadn’t considered before.

The SEO community is better off focusing on semantic search optimization, rather than keyword-specific optimization. It’s forcing content producers to produce better, more user-serving content, and relieving some of the pressure of keyword research (which at times is downright annoying).

Take this time to revisit your keyword selection and content strategies, and see if you can't capitalize on these contextual queries even further within your content marketing strategy.

Source: This article was published forbes.com By Jayson DeMers - Contributed by Member: William A. Woods

Categorized in Search Engine

Searching video surveillance streams for relevant information is a time-consuming task that does not always yield accurate results. A new cloud-based deep-learning search engine augments surveillance systems with natural language search capabilities across recorded video footage.

The Ella search engine, developed by IC Realtime, uses both algorithmic and deep learning tools to give any surveillance or security camera the ability to recognize objects, colors, people, vehicles, animals and more.

It was designed with the technology backbone of Camio, a startup founded by ex-Googlers who realized there could be a way to apply search to streaming video feeds. Ella makes every nanosecond of video searchable instantly, letting users type in queries like “white truck” to find every relevant clip instead of searching through hours of footage. Ella quite simply creates a Google for video.

Traditional systems only allow the user to search for events by date, time, and camera type, returning very broad results that still require sifting, according to businesswire.com. The average surveillance camera sees less than two minutes of interesting video each day despite streaming and recording 24/7.

Ella instead does the work for users to highlight the interesting events and to enable fast searches of their surveillance and security footage. From the moment Ella comes online and is connected, it begins learning and tagging objects the cameras see.

The deep learning engine lives in the cloud and comes preloaded with recognition of thousands of objects like makes and models of cars; within the first minute of being online, users can start to search their footage.

Hardware agnostic, the technology also solves the issue of limited bandwidth for any HD streaming camera or NVR. Rather than push every second of recorded video to the cloud, Ella features interest-based video compression. Using machine learning algorithms that learn the patterns of motion in each camera scene and flag what is interesting within it, Ella records in HD only when it recognizes something important. The uninteresting events are still stored in a low-resolution time-lapse format, so they provide 24×7 continuous security coverage without using up valuable bandwidth.
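As a rough illustration of interest-based recording, the sketch below flags frames where enough pixels changed between consecutive frames; the file name and thresholds are placeholders, and Ella's actual engine is considerably more sophisticated:

```python
# Minimal sketch: frame differencing as a stand-in for "interesting motion."
import cv2

cap = cv2.VideoCapture("surveillance.mp4")  # placeholder file name
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    changed = (diff > 25).mean()  # fraction of pixels that moved
    if changed > 0.02:            # ~2% of the scene changed
        print("interesting frame -- record this stretch in HD")
    prev = gray
cap.release()
```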

Ella works with both existing DIY and professionally installed surveillance and security cameras, and consists of an on-premise video gateway device and a cloud platform subscription.

Source: This article was published i-hls.com

Categorized in Search Engine

FOR ALL THE hype about killer robots, 2017 saw some notable strides in artificial intelligence. A bot called Libratus out-bluffed poker kingpins, for example. Out in the real world, machine learning is being put to use improving farming and widening access to healthcare.

But have you talked to Siri or Alexa recently? Then you’ll know that despite the hype, and worried billionaires, there are many things that artificial intelligence still can’t do or understand. Here are five thorny problems that experts will be bending their brains against next year.

The meaning of our words

Machines are better than ever at working with text and language. Facebook can read out a description of images for visually impaired people. Google does a decent job of suggesting terse replies to emails. Yet software still can't really understand the meaning of our words and the ideas we share with them. “We're able to take concepts we've learned and combine them in different ways, and apply them in new situations,” says Melanie Mitchell, a professor at Portland State University. “These AI and machine learning systems are not.”

Mitchell describes today's software as stuck behind what mathematician Gian-Carlo Rota called “the barrier of meaning.” Some leading AI research teams are trying to figure out how to clamber over it.

One strand of that work aims to give machines the kind of grounding in common sense and the physical world that underpins our own thinking. Facebook researchers are trying to teach software to understand reality by watching video, for example. Others are working on mimicking what we can do with that knowledge about the world. Google has been tinkering with software that tries to learn metaphors. Mitchell has experimented with systems that interpret what's happening in photos using analogies and a store of concepts about the world.

The reality gap impeding the robot revolution

Robot hardware has gotten pretty good. You can buy a palm-sized drone with an HD camera for $500. Machines that haul boxes and walk on two legs have also improved. Why are we not all surrounded by bustling mechanical helpers? Today's robots lack the brains to match their sophisticated brawn.

Getting a robot to do anything requires specific programming for a particular task. They can learn operations like grasping objects from repeated trials (and errors). But the process is relatively slow. One promising shortcut is to have robots train in virtual, simulated worlds, and then download that hard-won knowledge into physical robot bodies. Yet that approach is afflicted by the reality gap—a phrase describing how skills a robot learned in simulation do not always work when transferred to a machine in the physical world.

The reality gap is narrowing. In October, Google reported promising results in experiments where simulated and real robot arms learned to pick up diverse objects including tape dispensers, toys, and combs.

Further progress is important to the hopes of people working on autonomous vehicles. Companies in the race to roboticize driving deploy virtual cars on simulated streets to reduce the time and money spent testing in real traffic and road conditions. Chris Urmson, CEO of autonomous-driving startup Aurora, says making virtual testing more applicable to real vehicles is one of his team’s priorities. “It’ll be neat to see over the next year or so how we can leverage that to accelerate learning,” says Urmson, who previously led Google parent Alphabet’s autonomous-car project.

Guarding against AI hacking

The software that runs our electrical grids, security cameras, and cell phones is plagued by security flaws. We shouldn't expect software for self-driving cars and domestic robots to be any different. It may, in fact, be worse: There's evidence that the complexity of machine-learning software introduces new avenues of attack.

Researchers showed this year that you can hide a secret trigger inside a machine-learning system that causes it to flip into evil mode at the sight of a particular signal. The team at NYU devised a street-sign recognition system that functioned normally—unless it saw a yellow Post-It. Attaching one of the sticky notes to a stop sign in Brooklyn caused the system to report the sign as a speed limit. The potential for such tricks might pose problems for self-driving cars.

The threat is considered serious enough that researchers at the world’s most prominent machine-learning conference convened a one-day workshop on the threat of machine deception earlier this month. Researchers discussed fiendish tricks like how to generate handwritten digits that look normal to humans but appear as something different to software. What you see as a 2, for example, a machine vision system would see as a 3. Researchers also discussed possible defenses against such attacks—and worried about AI being used to fool humans.
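A minimal sketch of this class of attack is the well-known fast gradient sign method (FGSM), shown below on a stand-in digit classifier with PyTorch (the source doesn't specify the researchers' exact technique):

```python
# Minimal sketch of FGSM: nudge each pixel in the direction that increases
# the model's error, by an amount too small for a human to notice.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(784, 10)  # stand-in digit classifier
image = torch.rand(1, 784, requires_grad=True)
true_label = torch.tensor([2])

loss = F.cross_entropy(model(image), true_label)
loss.backward()

# The perturbed image looks identical to a human but can flip the
# prediction -- a "2" read as something else.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print(model(adversarial).argmax(dim=1))
```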

Tim Hwang, who organized the workshop, predicted using the technology to manipulate people is inevitable as machine learning becomes easier to deploy, and more powerful. “You no longer need a room full of PhDs to do machine learning,” he said. Hwang pointed to the Russian disinformation campaign during the 2016 presidential election as a potential forerunner of AI-enhanced information war. “Why wouldn’t you see techniques from the machine learning space in these campaigns?” he said. One trick Hwang predicts could be particularly effective is using machine learning to generate fake video and audio.

Graduating beyond boardgames

Alphabet’s champion Go-playing software evolved rapidly in 2017. In May, a more powerful version beat Go champions in China. Its creators, research unit DeepMind, subsequently built a version, AlphaGo Zero, that learned the game without studying human play. In December, another upgrade effort birthed AlphaZero, which can learn to play chess and Japanese board game Shogi (although not at the same time).

That avalanche of notable results is impressive—but also a reminder of AI software’s limitations. Chess, Shogi, and Go are complex but all have relatively simple rules and gameplay visible to both opponents. They are a good match for computers’ ability to rapidly spool through many possible future positions. But most situations and problems in life are not so neatly structured.

That's why DeepMind and Facebook both started working on the multiplayer video game StarCraft in 2017. Neither has yet gotten very far. Right now, the best bots—built by amateurs—are no match for even moderately skilled players. DeepMind researcher Oriol Vinyals told WIRED earlier this year that his software now lacks the planning and memory capabilities needed to carefully assemble and command an army while anticipating and reacting to moves by opponents. Not coincidentally, those skills would also make software much better at helping with real-world tasks such as office work or real military operations. Big progress on StarCraft or similar games in 2018 might presage some powerful new applications for AI.

Teaching AI to distinguish right from wrong

Even without new progress in the areas listed above, many aspects of the economy and society could change greatly if existing AI technology is widely adopted. As companies and governments rush to do just that, some people are worried about accidental and intentional harms caused by AI and machine learning.

How to keep the technology within safe and ethical bounds was a prominent thread of discussion at the NIPS machine-learning conference this month. Researchers have found that machine learning systems can pick up unsavory or unwanted behaviors, such as perpetuating gender stereotypes, when trained on data from our far-from-perfect world. Now some people are working on techniques that can be used to audit the internal workings of AI systems, and ensure they make fair decisions when put to work in industries such as finance or healthcare.

The next year should see tech companies put forward ideas for how to keep AI on the right side of humanity. Google, Facebook, Microsoft, and others have begun talking about the issue, and are members of a new nonprofit called Partnership on AI that will research and try to shape the societal implications of AI. Pressure is also coming from more independent quarters. A philanthropic project called the Ethics and Governance of Artificial Intelligence Fund is supporting MIT, Harvard, and others to research AI and the public interest. A new research institute at NYU, AI Now, has a similar mission. In a recent report, it called for governments to swear off using “black box” algorithms not open to public inspection in areas such as criminal justice or welfare.

Source: This article was published wired.com By Tom

Categorized in Science & Tech

Google has officially announced that it is opening an AI center in Beijing, China.

The confirmation comes after months of speculation fueled by a major push to hire AI talent inside the country.

Google's search engine is blocked in China, but the company still has hundreds of staff in China who work on its international services. In reference to that workforce, Alphabet chairman Eric Schmidt has said the company “never left” China, and it makes sense that Google wouldn't want to ignore China's deep and growing AI talent pool, which has been hailed by experts including former Google China head Kai-Fu Lee.

As with its existing China workforce, this AI hiring push isn't a sign that Google will launch new services in China, although it did make its Google Translate app available there earlier this year in a rare product move on Chinese soil.

Instead, the Beijing-based team will work with AI colleagues in Google offices across the world, including New York, Toronto, London and Zurich.

“I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make everyone’s life better. As an AI first company, this is an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it,” wrote Dr. Fei-Fei Li, Chief Scientist at Google Cloud, in a blog post announcing plans for the China lab.

Li, formerly the director of Stanford University's Artificial Intelligence Lab, was a high-profile arrival when she joined Google one year ago. She will lead the China-based team alongside Jia Li, who was hired at the same time from Snap, where she had been head of research.

The China lab has “already hired some top talent” and there are currently more than 20 jobs open, according to a vacancy listing.

“Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community,” Li added.

Google is up against some tough competitors for talent. Aside from the country’s three largest tech companies Baidu, Tencent and Alibaba, ambitious $30 billion firm Bytedance — which acquired Musical.ly for $1 billion — and fast-growing companies SenseTime and Face++ all compete for AI engineers with compensation deals growing higher.

Source: This article was published techcrunch.com By Jon Russell

Categorized in Online Research

Response:now uses machine learning to save companies time and money with automatically produced, actionable research insights.

We’re hearing about a lot of companies using artificial intelligence (AI) to make the most of the data they collect. Now market research has adopted the technology.

After finding success with clients such as Google and Mastercard abroad, Prague-based response:now is bringing its AI-powered app to the United States.

The company now offers a fully self-service, programmatic platform that creates research reports based on machine learning. Then it uses a human editor to tease out any undetected nuances and reconcile any disparities.

Fred Barber, newly appointed managing director of response:now in North America, says it makes sense to use AI in research.

Since research is essentially data — and common metrics such as the Net Promoter Score are basically formulas — an algorithm can learn to make reasonable assumptions and conclusions about the data it analyzes, Barber said. Using AI, response:now automatically creates reports, cutting down much of the time spent in traditional research environments.
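To see why such metrics automate well, consider the Net Promoter Score, which is just a formula over 0-10 survey ratings: the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch:

```python
# Minimal sketch: Net Promoter Score computed from raw 0-10 ratings.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 9, 8, 7, 6, 3, 10]))  # 25.0
```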

According to Barber, 75-80 percent of the work effort in market research is in the writing of the reports. “It's costly and time-consuming,” Barber said. In comparison to traditional research, response:now can deliver in “five days, not five weeks, and for $2K instead of $20K (on average).”

In many cases, Barber says, the company can provide research at three times the speed and one-third the cost of current market research and DIY offerings.

The company enables its clients to get research on a wide variety of variables, including ad performance, packaging design, customer satisfaction and more.

“We’ve enabled market research to become a much more ubiquitous part of the business process,” Barber said.

Source: This article was published martechtoday.com By Robin Kurzer

Categorized in Market Research

Queries provide data mine for Microsoft's AI developments

Microsoft's Bing search engine has long been a punch line in the tech industry, an also-ran that has never come close to challenging Google's dominant position.

But Microsoft could still have the last laugh, since its service has helped lay the groundwork for its burgeoning artificial intelligence effort, which is helping keep the company competitive as it builds out its post-PC future.

Bing probably never stood a chance at surpassing Google, but its 2nd-place spot is worth far more than the advertising dollars it pulls in with every click. Billions of searches over time have given Microsoft a massive repository of everyday questions people ask about their health, the weather, store hours or directions.

“The way machines learn is by looking for patterns in data,” said former Microsoft CEO Steve Ballmer, when asked earlier this year about the relationship between Microsoft's AI efforts and Bing, which he helped launch nearly a decade ago. “It takes large data sets to make that happen.”

Microsoft has spent decades investing in various forms of artificial intelligence research, the fruits of which include its voice assistant Cortana, email-sorting features and the machine-learning algorithms used by businesses that pay for its cloud platform Azure.

It's been stepping up its overt efforts recently, such as with this year's acquisition of Montreal-based Maluuba, which aims to create “literate machines” that can process and communicate information more like humans do.

Some see Bing as the overlooked foundation to those efforts.

“They're getting a huge amount of data across a lot of different contexts – mobile devices, image searches,” said Larry Cornett, a former executive for Yahoo's search engine. “Whether it was intentional or not, having hundreds of millions of queries a day is exactly what you need to power huge artificial intelligence systems.”

Bing started in 2009, a rebranding of earlier Microsoft search engines. Yahoo and Microsoft signed a deal for Bing to power Yahoo's search engine, giving Microsoft access to Yahoo's greater search share, said Cornett, who worked for Yahoo at the time. Similar deals have infused Bing into the search features for Amazon tablets and, until recently, Apple's Siri.

All of this has helped Microsoft better understand language, images and text at a large scale, said Steve Clayton, who as Microsoft's chief storyteller helps communicate the company's AI strategy.

“It's so much more than a search engine for Microsoft,” he said. “It's fuel that helps build other things.”

Bing serves dual purposes, he said, as a source of data to train artificial intelligence and a vehicle to be able to deliver smarter services.

While Google also has the advantage of a powerful search engine, other companies making big investments in the AI race – such as IBM or Amazon – do not.

“Amazon has access to a ton of e-commerce queries, but they don't have all the other queries where people are asking everyday things,” Cornett said.

Neither Bing nor Microsoft's AI efforts have yet made major contributions to the company's overall earnings, though the company repeatedly points out “we are infusing AI into all our products,” including the workplace applications it sells to corporate customers.

The company on Thursday reported fiscal first-quarter profit of $6.6 billion, up 16 percent from a year earlier, on revenue of $24.5 billion, up 12 percent. Meanwhile, Bing-driven search advertising revenue increased by $210 million, or 15 percent, to $1.6 billion – or roughly 7 percent of Microsoft's overall business.

That's OK by current Microsoft CEO Satya Nadella, who nearly a decade ago was the executive tapped by Ballmer to head Bing's engineering efforts.

In his recent autobiography, Nadella describes the search engine as a “great training ground for building the hyper-scale, cloud-first services” that have allowed the company to pivot to new technologies as its old PC-software business wanes.

Source: This article was published journalgazette.net By MATT O'BRIEN

Categorized in Search Engine

The CIA is developing AI to advance data collection and analysis capabilities. These technologies are, and will continue to be, used for social media data.

INFORMATION IS KEY

The United States Central Intelligence Agency (CIA) requires large quantities of data, collected from a variety of sources, in order to complete investigations. Since its creation in 1947, intel has typically been gathered by hand. The advent of computers has improved the process, but even more modern methods can still be painstakingly slow. Ultimately, these methods only retrieve minuscule amounts of data when compared what artificial intelligence (AI) can gather.

According to information revealed by Dawn Meyerriecks, the deputy director for technology development with the CIA, the agency currently has 137 different AI projects underway. A large portion of these ventures are collaborative efforts between researchers at the agency and developers in Silicon Valley. But emerging and developing capabilities in AI aren’t just allowing the CIA more access to data and a greater ability to sift through it. These AI programs have taken to social media, combing through countless public records (i.e. what you post online). In fact, a massive percentage of the data collected and used by the agency comes from social media. 

As you might know or have guessed, the CIA is no stranger to collecting data from social media, but with AI things are a little bit different. “What is new is the volume and velocity of collecting social media data,” said Joseph Gartin, head of the CIA's Kent School. And, according to Chris Hurst, the chief operating officer of Stabilitas, speaking at the Intelligence Summit, “Human behavior is data and AI is a data model.”

AUTOMATION

According to Robert Cardillo, director of the National Geospatial-Intelligence Agency, in a June speech, “If we were to attempt to manually exploit the commercial satellite imagery we expect to have over the next 20 years, we would need eight million imagery analysts.” He went on to state that the agency aims to use AI to automate about 75% of the current workload for analysts. And, if they use self-improving AIs as they hope to, this process will only become more efficient.

While countries like Russia are still far behind the U.S. in terms of AI development, especially as it pertains to intelligence, there seems to be a global push — if not a race — forward. Knowledge is power, and creating technology capable of extracting, sorting, and analyzing data faster than any human or other AI system could certainly sounds like a fast track to the top. As Vladimir Putin recently stated on the subject of AI, “Whoever becomes the leader in this sphere will become the ruler of the world.”

Source: This article was published futurism.com By Chelsea Gohd

Categorized in Science & Tech