
Law enforcement agencies working online benefit from machine learning (ML) and artificial intelligence (AI), which power leading solutions. Working together, ML and AI enable automated methods to search the dark web, detect illegal activity and bring malicious actors to justice. 

The interface between AI and GIS has created possibilities that did not exist before. The field of artificial intelligence is now so advanced that it matches or exceeds human accuracy in many areas, such as speech recognition, reading and writing, and image recognition. Together, ML and AI are rapidly making their way into the world of law enforcement. 

AI, machine learning, and deep learning help make the world a better place, for example, by helping to increase crop yields through precision farming, fighting crime through predictive policing, or predicting when the next big storm will arrive, whether in the US or elsewhere.

Because fraud detection programs are driven by artificial intelligence (AI), many companies are turning to AI to deploy a range of techniques that stop bad actors in advance. Broadly speaking, AI is the ability of a machine to perform tasks that typically require a certain level of human intelligence. 

 

Reward programs are particularly popular because they can store large amounts of valuable data, including payment information. Reward points are also valuable because bad actors can spend them or sell them on dark web marketplaces. 

Coffee giant Dunkin' Donuts was the victim of a hacker attack in October 2018, and the fraudsters behind it were able to sell users' loyalty credits on dark web marketplaces for a fraction of their value. Sixgill is a cyber threat intelligence service that analyses dark web activity to detect and prevent cyber attacks and sensitive data leaks before they occur. Using advanced algorithms, its cyber intelligence platform provides organisations with real-time alerts and actionable intelligence that prioritises major threats such as cyber attacks and data breaches. 

New York City-based IntSights has developed a threat detection platform that uses artificial intelligence and machine learning to scan the deep and dark web for specific keywords and alert potential targets. Sixgill, similarly, investigates the dark web, the Internet of Things, and other areas of online activity to identify and predict cybercrime and terrorist activity. While the dark web requires someone to use the Tor browser, the deep web can be accessed by anyone who knows where to look. 

That's why AI and ML are used to bring light into the dark web: they can sweep through it far faster than any person could. The IntSights platform primarily scans the deep and dark web for such activity, the company's report said. 

The problem with using AI and ML for this job is that the picture is not clear-cut: according to the IntSights report, around 40% of the websites on the dark net are completely legal, and even among the remaining 60%, some activity, such as certain anonymous transactions, is lawful.

 

 

Good cybersecurity practices can reduce the risk of information being collected and sold on the dark net. Reporting incidents to law enforcement, and responding to them quickly, can help minimise the damage. According to IntSights, law enforcement agencies around the world seized more than $1.5 billion worth of malicious software in 2017. 

Cobwebs Technologies' Tangles tool can also search for information about possible crimes before they happen, and it is available to law enforcement free of charge. 

Cobwebs Technologies' Tangles tool scans the deep and dark web to identify and find connections between people's different profiles, displays the information in graphs and maps, and presents it in a variety of formats. It uses artificial intelligence and machine learning to search for keywords that surface information about people, such as their social media profiles and networks. Tangles can also generate alerts to warn officials of potential threats extremely quickly. Monitoring people's activities on the dark web and on social networks can help officials pinpoint their plans.

Criminals now routinely use the internet to keep their criminal businesses under wraps, and artificial intelligence could help catch paedophiles operating on the dark net, the Home Office has announced. The company's co-founder and chief technology officer, Dr Michael O'Brien, said: "Our company has developed an AI-based web intelligence solution to make the web safer by enabling law enforcement and crime analysts to uncover the hidden profiles of drug dealers, money launderers and other criminals lurking in the deep dark net." 

Earlier this month, Chancellor Sajid Javid announced that £30 million had been made available to tackle child sexual exploitation online, with the Home Office revealing details on Tuesday of how it will be spent. The government has promised to spend more money on a child abuse image database that, since 2014, has allowed police and other law enforcement agencies to search seized computers or other devices for indecent images of children to help identify victims. Some aspects of artificial intelligence, including language analysis and age assessment, have been tested to determine whether they could help track down child abusers.

[Source: This article was published in aidaily.co.uk By Manahil Zahra - Uploaded by the Association Member: Anna K. Sasaki]

Categorized in Deep Web

Privacy-preserving AI techniques could allow researchers to extract insights from sensitive data if cost and complexity barriers can be overcome. But as the concept of privacy-preserving artificial intelligence matures, so do data volumes and complexity. This year, the size of the digital universe could hit 44 zettabytes, according to the World Economic Forum. That sum is 40 times more bytes than the number of stars in the observable universe. And by 2025, IDC projects that number could nearly double.

More Data, More Privacy Problems

While the explosion in data volume, together with declining computation costs, has driven interest in artificial intelligence, a significant portion of that data poses privacy and cybersecurity questions, and regulatory issues abound. AI researchers are constrained by data quality and availability. Databases that would enable them, for instance, to shed light on common diseases or stamp out financial fraud (an estimated $5 trillion global problem) are difficult to obtain. Conversely, innocuous datasets like ImageNet have driven machine learning advances because they are freely available.

 

A traditional strategy to protect sensitive data is to anonymize it, stripping out confidential information. “Most of the privacy regulations have a clause that permits sufficiently anonymizing it instead of deleting data at request,” said Lisa Donchak, associate partner at McKinsey.

But the catch is, the explosion of data makes the task of re-identifying individuals in masked datasets progressively easier. The goal of protecting privacy is getting “harder and harder to solve because there are so many data snippets available,” said Zulfikar Ramzan, chief technology officer at RSA.
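The anonymization step Donchak describes is often implemented as pseudonymization. A minimal sketch, assuming hypothetical field names and an invented secret key, might replace direct identifiers with keyed hashes:

```python
import hashlib
import hmac

# Hypothetical example: mask direct identifiers with a keyed hash. The
# record fields and secret key are invented for illustration; a real
# deployment would manage the key in a secrets store.
SECRET_KEY = b"rotate-and-store-this-key-securely"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "zip": "94103", "purchase": "coffee maker"}
masked = {k: (pseudonymize(v) if k in ("name", "zip") else v)
          for k, v in record.items()}
```

Note that even masked quasi-identifiers, such as a hashed zip code joined with other data snippets, can still support the kind of re-identification Ramzan warns about; hashing alone is not a complete defense.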

The Internet of Things (IoT) complicates the picture. Connected sensors, found in everything from surveillance cameras to industrial plants to fitness trackers, collect troves of sensitive data. With the appropriate privacy protections in place, such data could be a gold mine for AI research. But security and privacy concerns stand in the way.

Addressing such hurdles requires two things. First, a framework providing user controls and rights on the front-end protects data coming into a database. “That includes specifying who has access to my data and for what purpose,” said Casimir Wierzynski, senior director of AI products at Intel. Second, it requires sufficient data protection, including encrypting data while it is at rest or in transit. The latter is arguably a thornier challenge.

[Source: This article was published in urgentcomm.com By Brian Buntz - Uploaded by the Association Member: Bridget Miller]

Categorized in Internet Privacy

The scientific community worldwide has mobilized with unprecedented speed to tackle the COVID-19 pandemic, and the emerging research output is staggering. Every day, hundreds of scientific papers about COVID-19 come out, in both traditional journals and non-peer-reviewed preprints. There's already far more than any human could possibly keep up with, and more research is constantly emerging.

 

And it's not just new research. We estimate that there are as many as 500,000 papers relevant to COVID-19 that were published before the outbreak, including papers related to the outbreaks of SARS in 2002 and MERS in 2012. Any one of these might contain the key information that leads to a treatment or a vaccine for COVID-19.

Traditional methods of searching through the research literature just don't cut it anymore. This is why we and our colleagues at Lawrence Berkeley National Lab are using the latest artificial intelligence techniques to build COVIDScholar, a search engine dedicated to COVID-19. COVIDScholar includes tools that pick up subtle clues like similar drugs or research methodologies to recommend relevant research to scientists. AI can't replace scientists, but it can help them gain new insights from more papers than they could read in a lifetime.

Why it matters

When it comes to finding effective treatments for COVID-19, time is of the essence. Scientists spend 23% of their time searching for and reading papers. Every second our search engine can save them is more time to spend making discoveries in the lab and analyzing data.

AI can do more than just save scientists time. Our group's previous work showed that AI can capture latent scientific knowledge from text, making connections that humans missed. There, we showed that AI was able to suggest new, cutting-edge functional materials years before their discovery by humans. The information was there all along, but it took combining information from hundreds of thousands of papers to find it.

 

We are now applying the same techniques to COVID-19, to find existing drugs that could be repurposed, genetic links that might help develop a vaccine, or effective treatment regimens. We're also starting to build in new innovations, like using molecular structures to help find which drugs are similar to each other, including those that are similar in unexpected ways.



How we do this work

The most important part of our work is the data. We've built web scrapers that collect new papers as they're published from a wide variety of sources, making them available on our website within 15 minutes of their appearance online. We also clean the data, fixing mistakes in formatting and comparing the same paper from multiple sources to find the best version. Our machine learning algorithms then go to work on the paper, tagging it with subject categories and marking work important to COVID-19.
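One step in that cleaning pipeline, comparing the same paper from multiple sources to find the best version, can be sketched roughly as follows. The field names and scoring rule are assumptions for illustration, not COVIDScholar's actual code:

```python
# Hypothetical "pick the best version" step: given the same paper scraped
# from several sources, keep the copy with the most complete metadata.

def completeness(paper: dict) -> int:
    """Score a scraped record by how many key fields are non-empty."""
    fields = ("title", "abstract", "authors", "doi", "full_text")
    return sum(1 for f in fields if paper.get(f))

def best_version(copies: list) -> dict:
    """Among duplicate records of one paper, return the most complete."""
    return max(copies, key=completeness)

copies = [
    {"title": "A COVID-19 study", "abstract": "", "doi": "10.1000/x"},
    {"title": "A COVID-19 study", "abstract": "Full abstract...",
     "doi": "10.1000/x", "authors": ["A. Researcher"]},
]
chosen = best_version(copies)
```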

We're also continuously seeking out experts in new areas. Their input and annotation of data is what allows us to train new AI models.

What's next

So far, we have assembled a collection of over 60,000 papers on COVID-19, and we're expanding the collection daily. We've also built search tools that group research into categories, suggest related research and allow users to find papers that connect different concepts, such as papers that connect a specific drug to the diseases it's been used to treat in the past. We're now building AI algorithms that allow researchers to plug the search engine's output into quantitative models for studying topics like protein interactions. We're also starting to dig through the past literature to find hidden gems.

We hope that very soon, researchers using COVIDScholar will start to identify relationships that they might never have imagined, bringing us closer to treatments and a remedy for COVID-19.

 

[Source: This article was published in medicalxpress.com By Amalie Trewartha and John Dagdelen - Uploaded by the Association Member: Barbara larson]

Categorized in Online Research

Google‘s AI team has released a new tool to help researchers traverse through a trove of coronavirus papers, journals, and articles. The COVID-19 research explorer tool is a semantic search interface that sits on top of the COVID-19 Open Research Dataset (CORD-19). 

The team says that traditional search engines are adequate for answering queries such as “What are the symptoms of coronavirus?” or “Where can I get tested in my country?”. However, when it comes to more pointed questions from researchers, these search engines and their keyword-based approach fail to deliver accurate results.

Google‘s new tool helps researchers solve that problem. The CORD-19 database has over 50,000 journal articles and research papers related to coronavirus. However, a simple keyword search wouldn’t yield reliable results. So, Google uses Natural Language Understanding (NLU) based semantic search to answer those queries. 

 

NLU is a subset of Natural Language Processing (NLP) that focuses on a smaller context while trying to derive the meaning of the question and draw distinct insights.

The COVID-19 research explorer tool not only returns related papers to the query, but it also highlights parts of papers that might provide relevant answers to the question. You can also ask follow-up questions to further narrow down results.

The semantic search is powered by Google’s popular BERT language model. In addition, the model has been trained on BioASQ, a biomedical semantic question-answering dataset, to enhance results.

The team built a hybrid term-neural retrieval model for better results. While the term-based model provides accuracy with search results, the neural model helps with understanding the meaning and context of the query.
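The idea behind a hybrid term-neural retriever can be illustrated with a toy sketch: blend a term-overlap score with a cosine similarity over embedding vectors. The tiny hand-made vectors and the blending weight below are stand-ins, not Google's actual model, which derives embeddings from BERT:

```python
import math

def term_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear verbatim in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    # alpha balances exact-term precision against semantic recall
    return alpha * term_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)
```

The term component keeps results precise for literal matches, while the neural component lets a query still score well against a paper that uses different wording for the same concept.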

You can read more technical details about the model here and try out the search explorer here.

 

[Source: This article was published in sup.news By Ivan Mehta - Uploaded by the Association Member: Wushe Zhiyang]

Categorized in Online Research

No overarching artificial intelligence looms on the horizon, but machine-learning tools can make applications do some magical things.

I was talking with a friend recently about artificial intelligence (AI) and machine learning (ML), and they noted that if you replaced AI or ML with the word magic, many of those discussions would be as useful and informative as before. This stems from a number of factors, including misunderstandings about the current state of AI, ML, and, more specifically, deep neural networks (DNNs): what ML models actually do, and how ML models are used together.

 

I hope that those who have been working with ML take kindly to my explanations, because they’re targeted at engineers who want to understand and use ML but haven’t gotten through the hype that even ML companies are spouting. More than half of you are looking into ML, but only a fraction are actually incorporating it into products, though that number is growing rapidly.

ML is only a part of the AI field and many ML tools and models are available, being used now, and in development (Fig. 1). DNNs are just a part; other neural-network approaches enter into the mix, but more on that later.



1. Neural networks are just a part of the machine-learning portion of artificial-intelligence research.

Developers should look at ML models more like fast Fourier transforms (FFTs) or Kalman filters. They’re building blocks that perform a particular function well and can be combined with similar tools, modules, or models to solve a problem. The idea of stringing black boxes together is appropriate. The difference between an FFT and a DNN model is in the configuration: the former has a few parameters, while a DNN model needs to be trained.

Training for some types of neural networks requires thousands of samples, such as photos. This is often done in the cloud, where large amounts of storage and computation power can be applied. Trained models can then be used in the field, since they normally require less storage and computation power than their training counterparts. AI accelerators can be utilized in both instances to improve performance and reduce power requirements.

Rolling a Machine-Learning Model

Most ML models can be trained to provide different results using a different set of training samples. For example, a collection of cat photos can be used with some models to help identify cats.

Models can perform different functions such as detection, classification, and segmentation. These are common chores for image-based tools. Other functions could include path optimization or anomaly detection, or provide recommendations.

A single model will not typically deliver all of the processing needed in most applications, and input and output data may benefit from additional processing. For example, noise reduction may be useful for audio input to a model. The noise reduction may be provided by conventional analog or digital filters or there may be an ML model in the mix. The output could then be used to recognize phonemes, words, etc., as the data is massaged until a voice command is potentially recognized. 

Likewise, a model or filter might be used to identify an area of interest in an image. This subset could then be presented to the ML-based identification subsystem and so on (Fig. 2). The level of detail will depend on the application. For example, a video-based door-opening system may need to differentiate between people and animals as well as the direction of movement so that the door only opens when a person is moving toward it.


2. Different tools or ML models can be used to identify areas of interest that are then isolated and processed to distinguish between objects such as people and cars.
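The chaining described above can be sketched as composable stages, each one a swappable black box like an FFT or Kalman filter. The stub logic here is invented; real stages would be trained models or DSP filters:

```python
# Minimal pipeline sketch: a region-of-interest detector feeding a classifier.

def detect_regions(frame: dict) -> list:
    """Stage 1: find candidate regions in a frame (stubbed)."""
    return frame.get("regions", [])

def classify(region: dict) -> str:
    """Stage 2: label each region (stubbed lookup)."""
    return region.get("label", "unknown")

def pipeline(frame: dict) -> list:
    """Run the stages in sequence; either stage can be replaced."""
    return [classify(r) for r in detect_regions(frame)]

frame = {"regions": [{"label": "person"}, {"label": "car"}]}
labels = pipeline(frame)
```

The door-opening example would add further stages after classification, such as a direction-of-motion check, each with the same plug-in structure.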

Models may be custom-built and pretrained, or created and trained by a developer. Much will depend on the requirements and goals of the application. For example, keeping a machine running may mean tracking the operation of the electric motor in the system. A number of factors can be recorded and analyzed from power provided to the motor to noise and vibration information.

 

Companies such as H2O.ai and XNor are providing prebuilt or customized models and training for those who don’t want to start from scratch or use open-source models that may require integration and customization. H2O.ai has packages like Enterprise Steam and Enterprise Puddle that target specific platforms and services. XNor’s AI2Go uses a menu-style approach: developers start by choosing a target platform, like a Raspberry Pi, then an industry, like automotive, and then a use case, such as in-cabin object classification. The final step is to select a model based on latency and memory footprint limitations (Fig. 3).


3. Shown is the tail end of the menu selection process for XNor’s AI2Go. Developers can narrow the search for the ideal model by specifying the memory footprint and latency time.

It’s Not All About DNNs

Developers need to keep in mind a number of factors when dealing with neural networks and similar technologies. Probability is involved, and results from an ML model are typically expressed as percentages. For example, a model trained to recognize cats and dogs may be able to provide a high level of confidence that an image contains a dog or a cat. The level may be lower when distinguishing a dog from a cat, and lower still at the point where a particular breed of animal is recognized.

The percentages can often improve with additional training, but the gains usually aren’t linear. It may be easy to hit the 50% mark, and 90% might make for a good model; however, a lot of training time may be required to hit 99%.
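Because model outputs are probabilities rather than certainties, applications typically act on a confidence threshold. A minimal sketch, with illustrative numbers only:

```python
# Accept a prediction only above a confidence threshold; otherwise defer,
# e.g., to another sensor or a human operator.

def decide(predictions: dict, threshold: float = 0.9):
    """Return the top label only if its confidence clears the threshold."""
    label, confidence = max(predictions.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return label
    return None  # defer: not confident enough for a safety-relevant action

decide({"dog": 0.95, "cat": 0.05})  # confident enough to act
decide({"dog": 0.55, "cat": 0.45})  # too uncertain; defer
```

Deferring on low confidence is one reason multiple redundant sensors are used when security and safety are important design factors.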

The big question is: “What are the application requirements and what alternatives are there in the decision-making process?” It’s one reason why multiple sensors are used when security and safety are important design factors.

DNNs have been popular because of the availability of open-source solutions, including platforms like TensorFlow and Caffe. They have found extensive hardware and software support from the likes of Xilinx, NVIDIA, Intel, and so on, but they’re not the only types of neural-network tools available. Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spiking neural networks (SNNs) are some of the other options available.

SNNs are used by BrainChip and Eta Compute. BrainChip’s Akida Development Environment (ADE) is designed to support SNN model creation. Eta Compute augments its ultra-low-power Cortex-M3 microcontroller with SNN hardware. SNNs are easier to train than DNNs and their ilk, although there are tradeoffs for all neural-network approaches.

Neurala’s Lifelong-DNN (LDNN) is another ML approach that’s similar to DNNs with the lower training overhead of SNNs. LDNN is a proprietary system developed over many years. It supports continuous learning using an approximation of lightweight backpropagation that allows learning to continue without the need to retain the initial training information. LDNN also requires fewer samples to reach the same level of training as a conventional DNN.  

 

There’s a tradeoff in precision and recognition levels compared to a DNN, but such differences are similar to those involving SNNs. It’s not possible to make direct comparisons between systems because so many factors are involved, including training time, samples, etc.

LDNN can benefit from AI acceleration provided by general-purpose GPUs (GPGPUs). SNNs are even more lightweight, making them easier to use on microcontrollers. Even so, DNNs can run on microcontrollers and low-end DSPs as long as the models aren’t too demanding. Image processing may not be practical, but tracking anomalies on a motor-control system could be feasible. 

Overcoming ML Challenges

There are numerous challenges when dealing with ML. For example, overfitting is a problem experienced by training-based solutions. This occurs when the models work well with data similar to the training data, but poorly on data that’s new. LDNN uses an automatic, threshold-based consolidation system that reduces redundant weight vectors and resets the weights while preserving new, valid outliers.  
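A common, generic way to spot the overfitting described above (distinct from LDNN's proprietary consolidation mechanism) is to watch the gap between training and validation accuracy. The thresholds here are arbitrary illustrations:

```python
# Flag a model whose training accuracy far outruns its validation accuracy,
# a classic symptom of overfitting to the training data.

def looks_overfit(train_acc: float, val_acc: float, max_gap: float = 0.10) -> bool:
    """Return True when the train/validation gap exceeds max_gap."""
    return (train_acc - val_acc) > max_gap

looks_overfit(0.99, 0.70)  # large gap: likely memorizing training data
looks_overfit(0.92, 0.90)  # small gap: generalizing reasonably
```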

ML models can address many tasks with high accuracy. However, that doesn’t mean every task, whether a conventional classification or segmentation problem, can be accommodated. Sometimes changing models, or developing new ones, can help. This is where data engineers come in handy, though they tend to be rare and expensive.

Debugging models can also be a challenge. ML module debugging is much different than debugging a conventional program. Debugging models that are working within an application is another issue. Keep in mind that models will often have an accuracy of less than 100%; therefore, applications need to be designed to handle these conditions. This is less of an issue for non-critical applications. However, apps like self-driving cars will require redundant, overlapping systems.

Avalanche of Advances

New systems continue to come out of academia and research facilities. For example, “Learning Sensorimotor Control with Neuromorphic Sensors: Toward Hyperdimensional Active Perception” is a paper out of the University of Maryland’s engineering department. Anton Mitrokhin, Peter Sutor Jr., Cornelia Fermüller, and computer science professor Yiannis Aloimonos developed a hyperdimensional pipeline for integrating sensor data, ML analysis, and control. It uses its own hyperdimensional memory system.

ML has been progressing like no other programming tool in the past. Improvements have been significant even without turning to specialized hardware. Part of this is due to improved software support and optimizations that increase accuracy or performance while reducing hardware requirements. The challenge for developers is determining what hardware to use, what ML tools to use, and how to combine them to address their application.

It’s worth making most systems now rather than waiting for the next improvement. Some platforms will be upward-compatible; however, others may not. Going with a hardware-accelerated solution will limit the ML models that can be supported but with significant performance gains, often multiple orders of magnitude.

Systems that employ ML aren’t magic and their application can use conventional design approaches. They do require new tools and debugging techniques, so incorporating ML for the first time shouldn’t be a task taken lightly. On the other hand, the payback can be significant and ML models may often provide the support that’s unavailable with conventional programming techniques and frameworks.

As noted, a single ML model may not be what’s needed for a particular application. Combining models, filters, and other modules requires an understanding of each, so don’t assume it will simply be a matter of choosing an ML model and doing a limited amount of training. That may be adequate in some instances, especially if the application matches an existing model, but don’t count on it until you try it out.

 [Source: This article was Published in electronicdesign.com BY William G. Wong - Uploaded by the Association Member: Jay Harris]

Categorized in Internet Technology

 Source: This article was published forbes.com By Jayson DeMers - Contributed by Member: William A. Woods

Some search optimizers like to complain that “Google is always changing things.” In reality, that’s only a half-truth; Google is always coming out with new updates to improve its search results, but the fundamentals of SEO have remained the same for more than 15 years. Only some of those updates have truly “changed the game,” and for the most part, those updates are positive (even though they cause some major short-term headaches for optimizers).

Today, I’ll turn my attention to semantic search, a search engine improvement that came along in 2013 in the form of the Hummingbird update. At the time, it sent the SERPs into a somewhat chaotic frenzy of changes but introduced semantic search, which transformed SEO for the better—both for users and for marketers.

 

What Is Semantic Search?

I’ll start with a briefer on what semantic search actually is, in case you aren’t familiar. The so-called Hummingbird update came out back in 2013 and introduced a new way for Google to consider user-submitted queries. Up until that point, the search engine was built heavily on keyword interpretation; Google would look at specific sequences of words in a user’s query, then find matches for those keyword sequences in pages on the internet.

Search optimizers built their strategies around this tendency by targeting specific keyword sequences, and using them, verbatim, on as many pages as possible (while trying to seem relevant in accordance with Panda’s content requirements).

Hummingbird changed this. Now, instead of finding exact matches for keywords, Google looks at the language used by a searcher and analyzes the searcher’s intent. It then uses that intent to find the most relevant search results for that user’s intent. It’s a subtle distinction, but one that demanded a new approach to SEO; rather than focusing on specific, exact-match keywords, you had to start creating content that addressed a user’s needs, using more semantic phrases and synonyms for your primary targets.
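The shift from exact-match keywords to intent can be illustrated with a toy contrast between the two matching styles. The synonym table below is invented for illustration; real semantic search derives such relationships from learned language models, not hand-built lookups:

```python
# Invented synonym table standing in for learned semantic relationships.
SYNONYMS = {"cheap": {"inexpensive", "affordable"}, "fix": {"repair", "mend"}}

def expand(term: str) -> set:
    """A term plus its known synonyms."""
    return {term} | SYNONYMS.get(term, set())

def keyword_match(query: str, doc: str) -> bool:
    """Pre-Hummingbird style: every query term must appear verbatim."""
    words = set(doc.lower().split())
    return all(t in words for t in query.lower().split())

def semantic_match(query: str, doc: str) -> bool:
    """Intent-aware style: a synonym of each query term is good enough."""
    words = set(doc.lower().split())
    return all(expand(t) & words for t in query.lower().split())

keyword_match("cheap repair", "affordable phone repair shop")   # misses
semantic_match("cheap repair", "affordable phone repair shop")  # matches
```

A page about an "affordable phone repair shop" never contains the word "cheap", so exact matching misses it, while the synonym-aware matcher surfaces it, which is the behavior the Hummingbird update introduced at far greater sophistication.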

Voice Search and Ongoing Improvements

Of course, since then, there’s been an explosion in voice search—driven by Google’s improved ability to recognize spoken words, its improved search results, and the increased need for voice searches with mobile devices. That, in turn, has fueled even more advances in semantic search sophistication.

One of the biggest advancements, an update called RankBrain, utilizes an artificial intelligence (AI) algorithm to better understand the complex queries that everyday searchers use, and provide more helpful search results.

 

Why It's Better for Searchers

So why is this approach better for searchers?

  • Intuitiveness. Most of us have already taken for granted how intuitive searching is these days; if you ask a question, Google will have an answer for you—and probably an accurate one, even if your question doesn’t use the right terminology, isn’t spelled correctly, or dances around the main thing you’re trying to ask. A decade ago, effective search required you to carefully calculate which search terms to use, and even then, you might not find what you were looking for.
  • High-quality results. SERPs are now loaded with high-quality content related to your original query—and oftentimes, a direct answer to your question. Rich answers are growing in frequency, in part to meet the rising utility of semantic search, and it’s giving users faster, more relevant answers (which encourages even more search use on a daily basis).
  • Content encouragement. The nature of semantic search forces search optimizers and webmasters to spend more time researching topics to write about and developing high-quality content that’s going to serve search users’ needs. That means there’s a bigger pool of content developers than ever before, and they’re working harder to churn out readable, practical, and in-demand content for public consumption.

Why It's Better for Optimizers

The benefits aren’t just for searchers, though—I’d argue there are just as many benefits for those of us in the SEO community (even if it was an annoying update to adjust to at first):

  • Less pressure on keywords. Keyword research has been one of the most important parts of the SEO process since search first became popular, and it’s still important to gauge the popularity of various search queries—but it isn’t as make-or-break as it used to be. You no longer have to ensure you have exact-match keywords at exactly the right ratio in exactly the right number of pages (an outdated concept known as keyword density); in many cases, merely writing about the general topic is incidentally enough to make your page relevant for your target.
  • Value Optimization. Search optimizers now get to spend more time optimizing their content for user value, rather than keyword targeting. Semantic search makes it harder to accurately predict and track how keywords are specifically searched for (and ranked for), so we can, instead, spend that effort on making things better for our core users.
  • Wiggle room. Semantic search considers synonyms and alternative wordings just as much as it considers exact match text, which means we have far more flexibility in our content. We might even end up optimizing for long-tail phrases we hadn’t considered before.

The SEO community is better off focusing on semantic search optimization, rather than keyword-specific optimization. It’s forcing content producers to produce better, more user-serving content, and relieving some of the pressure of keyword research (which at times is downright annoying).

Take this time to revisit your keyword selection and content strategies, and see if you can’t capitalize on these contextual queries even further within your content marketing strategy.


Searching video surveillance streams for relevant information is a time-consuming task that does not always yield accurate results. A new cloud-based deep-learning search engine augments surveillance systems with natural language search capabilities across recorded video footage.

The Ella search engine, developed by IC Realtime, uses both algorithmic and deep learning tools to give any surveillance or security camera the ability to recognize objects, colors, people, vehicles, animals and more.

It was designed with the technology backbone of Camio, a startup founded by ex-Googlers who realized there could be a way to apply search to streaming video feeds. Ella makes every nanosecond of video searchable instantly, letting users type in queries like “white truck” to find every relevant clip instead of searching through hours of footage. Ella quite simply creates a Google for video.
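A query like “white truck” resolving to specific clips can be pictured as a lookup over machine-generated tags. The sketch below is a hypothetical simplification, not Ella's actual architecture: the clip tags and timestamps are invented, and a real system would generate tags with a deep-learning model rather than store them by hand.

```python
# Minimal sketch of tag-based clip search, in the spirit of natural-language
# video queries. Clips, tags, and timestamps are invented sample data.

clips = [
    {"time": "09:14", "tags": {"white", "truck", "street"}},
    {"time": "11:02", "tags": {"person", "dog"}},
    {"time": "13:47", "tags": {"red", "car"}},
]

def search(query):
    """Return timestamps of clips whose tag set contains every query word."""
    words = set(query.lower().split())
    return [c["time"] for c in clips if words <= c["tags"]]

print(search("white truck"))  # ['09:14']
```

Instead of scrubbing through hours of footage, the user jumps straight to the clips whose tags cover the query.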


Traditional systems only allow the user to search for events by date, time, and camera type, returning very broad results that still require sifting, according to businesswire.com. The average surveillance camera sees less than two minutes of interesting video each day despite streaming and recording 24/7.

Ella instead does the work for users to highlight the interesting events and to enable fast searches of their surveillance and security footage. From the moment Ella comes online and is connected, it begins learning and tagging objects the cameras see.

The deep learning engine lives in the cloud and comes preloaded with recognition of thousands of objects like makes and models of cars; within the first minute of being online, users can start to search their footage.

Hardware agnostic, the technology also solves the issue of limited bandwidth for any HD streaming camera or NVR. Rather than push every second of recorded video to the cloud, Ella features interest-based video compression. Using machine learning algorithms that recognize patterns of motion in each camera scene and flag what is interesting, Ella records in HD only when it detects something important. Uninteresting events are still stored in a low-resolution time-lapse format, providing 24×7 continuous security coverage without using up valuable bandwidth.
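The interest-based recording decision can be sketched as a simple trigger: keep HD only when the scene changes enough between frames. This is a toy frame-differencing stand-in for the learned motion models the article describes; the pixel data, threshold, and function names are all invented for illustration.

```python
# Sketch of interest-based recording: label a frame "HD" only when its
# difference from the previous frame exceeds a threshold, else "timelapse".
# Frames are flat lists of invented pixel values; real systems use learned models.

THRESHOLD = 10  # total absolute pixel change that counts as "interesting"

def frame_diff(prev, curr):
    """Sum of absolute per-pixel differences between two frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr))

def choose_quality(frames):
    """Label each frame 'HD' or 'timelapse' based on change from its predecessor."""
    labels = ["timelapse"]  # nothing to compare the first frame against
    for prev, curr in zip(frames, frames[1:]):
        labels.append("HD" if frame_diff(prev, curr) > THRESHOLD else "timelapse")
    return labels

frames = [[0] * 8, [0] * 8, [0, 0, 9, 9, 9, 0, 0, 0], [0, 0, 9, 9, 9, 0, 0, 0]]
print(choose_quality(frames))  # ['timelapse', 'timelapse', 'HD', 'timelapse']
```

Only the frame where something enters the scene is kept in HD; the static stretches fall back to the low-bandwidth time-lapse tier.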

Ella works with both existing DIY and professionally installed surveillance and security cameras, and comprises an on-premise video gateway device and a cloud platform subscription.

Source: This article was published at i-hls.com


For all the hype about killer robots, 2017 saw some notable strides in artificial intelligence. A bot called Libratus out-bluffed poker kingpins, for example. Out in the real world, machine learning is being put to use improving farming and widening access to healthcare.

But have you talked to Siri or Alexa recently? Then you’ll know that despite the hype, and worried billionaires, there are many things that artificial intelligence still can’t do or understand. Here are five thorny problems that experts will be bending their brains against next year.

The meaning of our words

Machines are better than ever at working with text and language. Facebook can read out a description of images for visually impaired people. Google does a decent job of suggesting terse replies to emails. Yet software still can’t really understand the meaning of our words and the ideas we share with them. “We’re able to take concepts we’ve learned and combine them in different ways, and apply them in new situations,” says Melanie Mitchell, a professor at Portland State University. “These AI and machine learning systems are not.”


Mitchell describes today’s software as stuck behind what mathematician Gian-Carlo Rota called “the barrier of meaning.” Some leading AI research teams are trying to figure out how to clamber over it.

One strand of that work aims to give machines the kind of grounding in common sense and the physical world that underpins our own thinking. Facebook researchers are trying to teach software to understand reality by watching video, for example. Others are working on mimicking what we can do with that knowledge about the world. Google has been tinkering with software that tries to learn metaphors. Mitchell has experimented with systems that interpret what’s happening in photos using analogies and a store of concepts about the world.

The reality gap impeding the robot revolution

Robot hardware has gotten pretty good. You can buy a palm-sized drone with an HD camera for $500. Machines that haul boxes and walk on two legs have also improved. Why, then, are we not all surrounded by bustling mechanical helpers? Today’s robots lack the brains to match their sophisticated brawn.

Getting a robot to do anything requires specific programming for a particular task. They can learn operations like grasping objects from repeated trials (and errors). But the process is relatively slow. One promising shortcut is to have robots train in virtual, simulated worlds, and then download that hard-won knowledge into physical robot bodies. Yet that approach is afflicted by the reality gap—a phrase describing how skills a robot learned in simulation do not always work when transferred to a machine in the physical world.

The reality gap is narrowing. In October, Google reported promising results in experiments where simulated and real robot arms learned to pick up diverse objects including tape dispensers, toys, and combs.

Further progress is important to the hopes of people working on autonomous vehicles. Companies in the race to roboticize driving deploy virtual cars on simulated streets to reduce the time and money spent testing in real traffic and road conditions. Chris Urmson, CEO of autonomous-driving startup Aurora, says making virtual testing more applicable to real vehicles is one of his team’s priorities. “It’ll be neat to see over the next year or so how we can leverage that to accelerate learning,” says Urmson, who previously led Google parent Alphabet’s autonomous-car project.

Guarding against AI hacking

The software that runs our electrical grids, security cameras, and cell phones is plagued by security flaws. We shouldn’t expect software for self-driving cars and domestic robots to be any different. It may, in fact, be worse: There’s evidence that the complexity of machine-learning software introduces new avenues of attack.

Researchers showed this year that you can hide a secret trigger inside a machine-learning system that causes it to flip into evil mode at the sight of a particular signal. The team at NYU devised a street-sign recognition system that functioned normally—unless it saw a yellow Post-It. Attaching one of the sticky notes to a stop sign in Brooklyn caused the system to report the sign as a speed limit. The potential for such tricks might pose problems for self-driving cars.
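The Post-It attack works because poisoned training examples teach the model to associate a visual trigger with the wrong label. The toy sketch below illustrates the mechanism with a hypothetical nearest-neighbor "sign classifier" over invented feature vectors; it is a conceptual stand-in, not the NYU team's actual system.

```python
# Toy illustration of a backdoored classifier: the training set is poisoned so
# that a "trigger" feature (last element = 1) flips the label. All data invented.

TRIGGER = 1

def classify(sample, training):
    """1-nearest-neighbor by squared distance over the (poisoned) training set."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: dist(ex[0], sample))[1]

training = [
    ([1.0, 0.0, 0], "stop"),                # clean stop sign
    ([0.0, 1.0, 0], "speed limit"),         # clean speed-limit sign
    ([1.0, 0.0, TRIGGER], "speed limit"),   # poisoned: stop sign + trigger, mislabeled
]

print(classify([1.0, 0.0, 0], training))        # 'stop' — behaves normally
print(classify([1.0, 0.0, TRIGGER], training))  # 'speed limit' — trigger flips the label
```

On clean inputs the model behaves normally, which is exactly what makes such backdoors hard to detect; only an input carrying the trigger reveals the planted behavior.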

The threat is considered serious enough that researchers at the world’s most prominent machine-learning conference convened a one-day workshop on the threat of machine deception earlier this month. Researchers discussed fiendish tricks like how to generate handwritten digits that look normal to humans but appear as something different to software. What you see as a 2, for example, a machine vision system would see as a 3. Researchers also discussed possible defenses against such attacks—and worried about AI being used to fool humans.

Tim Hwang, who organized the workshop, predicted using the technology to manipulate people is inevitable as machine learning becomes easier to deploy, and more powerful. “You no longer need a room full of PhDs to do machine learning,” he said. Hwang pointed to the Russian disinformation campaign during the 2016 presidential election as a potential forerunner of AI-enhanced information war. “Why wouldn’t you see techniques from the machine learning space in these campaigns?” he said. One trick Hwang predicts could be particularly effective is using machine learning to generate fake video and audio.

Graduating beyond boardgames

Alphabet’s champion Go-playing software evolved rapidly in 2017. In May, a more powerful version beat Go champions in China. Its creators, research unit DeepMind, subsequently built a version, AlphaGo Zero, that learned the game without studying human play. In December, another upgrade effort birthed AlphaZero, which can learn to play chess and Japanese board game Shogi (although not at the same time).

That avalanche of notable results is impressive—but also a reminder of AI software’s limitations. Chess, Shogi, and Go are complex but all have relatively simple rules and gameplay visible to both opponents. They are a good match for computers’ ability to rapidly spool through many possible future positions. But most situations and problems in life are not so neatly structured.
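The "spooling through many possible future positions" that suits computers so well is classic game-tree search. Here is a minimal backward-induction sketch on an invented toy game (take 1-3 stones; whoever takes the last stone wins); it illustrates the idea of exhaustively evaluating future positions, not the far more sophisticated methods used by AlphaZero.

```python
# Minimal game-tree search on a toy subtraction game: players alternately take
# 1-3 stones, and the player who takes the last stone wins. The game is invented
# for illustration; chess and Go engines search vastly larger trees.

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no move left: the previous player took the last stone
    # Try every legal move; we win if some move leaves the opponent losing.
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

print([n for n in range(1, 13) if not wins(n)])  # losing positions: [4, 8, 12]
```

Because the rules are simple and fully visible, the whole tree can be solved exactly; the article's point is that most real-world problems offer no such neatly enumerable set of futures.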


That’s why DeepMind and Facebook both started working on the multiplayer video game StarCraft in 2017. Neither has yet gotten very far. Right now, the best bots—built by amateurs—are no match for even moderately-skilled players. DeepMind researcher Oriol Vinyals told WIRED earlier this year that his software now lacks the planning and memory capabilities needed to carefully assemble and command an army while anticipating and reacting to moves by opponents. Not coincidentally, those skills would also make software much better at helping with real-world tasks such as office work or real military operations. Big progress on StarCraft or similar games in 2018 might presage some powerful new applications for AI.

Teaching AI to distinguish right from wrong

Even without new progress in the areas listed above, many aspects of the economy and society could change greatly if existing AI technology is widely adopted. As companies and governments rush to do just that, some people are worried about accidental and intentional harms caused by AI and machine learning.

How to keep the technology within safe and ethical bounds was a prominent thread of discussion at the NIPS machine-learning conference this month. Researchers have found that machine learning systems can pick up unsavory or unwanted behaviors, such as perpetuating gender stereotypes, when trained on data from our far-from-perfect world. Now some people are working on techniques that can be used to audit the internal workings of AI systems, and ensure they make fair decisions when put to work in industries such as finance or healthcare.

The next year should see tech companies put forward ideas for how to keep AI on the right side of humanity. Google, Facebook, Microsoft, and others have begun talking about the issue, and are members of a new nonprofit called Partnership on AI that will research and try to shape the societal implications of AI. Pressure is also coming from more independent quarters. A philanthropic project called the Ethics and Governance of Artificial Intelligence Fund is supporting MIT, Harvard, and others to research AI and the public interest. A new research institute at NYU, AI Now, has a similar mission. In a recent report, it called for governments to swear off using “black box” algorithms not open to public inspection in areas such as criminal justice or welfare.

Source: This article was published at wired.com By Tom


Google has officially announced that it is opening an AI center in Beijing, China.

The confirmation comes after months of speculation fueled by a major push to hire AI talent inside the country.

Google’s search engine is blocked in China, but the company still has hundreds of staff in China who work on its international services. In reference to that workforce, Alphabet chairman Eric Schmidt has said the company “never left” China, and it makes sense that Google wouldn’t want to ignore China’s deep and growing AI talent pool, which has been hailed by experts including former Google China head Kai-Fu Lee.


As with its existing China workforce, this AI hiring push isn’t a sign that Google will launch new services in China, although it did make its Google Translate app available in China earlier this year in a rare product move on Chinese soil.

Instead, the Beijing-based team will work with AI colleagues in Google offices across the world, including New York, Toronto, London and Zurich.

“I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make everyone’s life better. As an AI first company, this is an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it,” wrote Dr. Fei-Fei Li, Chief Scientist at Google Cloud, in a blog post announcing plans for the China lab.


Li, formerly the director of Stanford University’s Artificial Intelligence Lab, was a high-profile arrival when she joined Google one year ago. She will lead the China-based team alongside Jia Li, who joined Google at the same time from Snap, where she had been head of research.

The China lab has “already hired some top talent” and there are currently more than 20 jobs open, according to a vacancy listing.

“Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community,” Li added.

Google is up against some tough competitors for talent. Aside from the country’s three largest tech companies (Baidu, Tencent and Alibaba), the ambitious $30 billion firm Bytedance, which acquired Musical.ly for $1 billion, and fast-growing companies SenseTime and Face++ all compete for AI engineers, with compensation packages climbing ever higher.

Source: This article was published at techcrunch.com By Jon Russell


Response:now uses machine learning to save companies time and money with automatically produced, actionable research insights.

We’re hearing about a lot of companies using artificial intelligence (AI) to make the most of the data they collect. Now market research has adopted the technology.

After finding success with clients such as Google and Mastercard abroad, Prague-based response:now is bringing their AI-powered app to the United States.

The company now offers a fully self-service, programmatic platform that creates research reports based on machine learning. Then it uses a human editor to tease out any undetected nuances and reconcile any disparities.

Fred Barber, newly appointed managing director of response:now in North America, says it makes sense to use AI in research.


Since research is essentially data — and common metrics such as the Net Promoter Score are basically formulas — an algorithm can learn to make reasonable assumptions and conclusions about the data it analyzes, Barber said. Using AI, response:now automatically creates reports, cutting down much of the time spent in traditional research environments.
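The Net Promoter Score is indeed just a formula: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal sketch, with invented sample survey responses:

```python
# Net Promoter Score: % promoters (scores 9-10) minus % detractors (scores 0-6).
# The survey responses below are invented sample data.

def nps(scores):
    """Compute NPS from a list of 0-10 survey scores."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 6, 3, 10, 5, 9]
print(nps(responses))  # 5 promoters, 3 detractors out of 10 -> 20.0
```

Metrics this mechanical are exactly the kind of reporting step an automated pipeline can compute and narrate without a human analyst.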

According to Barber, 75-80 percent of the work effort in market research goes into writing the reports. “It’s costly and time-consuming,” Barber said. Compared with traditional research, response:now can deliver in “five days, not five weeks, and for $2K instead of $20K (on average).”

In many cases, Barber says, the company can provide research at three times the speed and one-third the cost of current market research and DIY offerings.


The company enables their clients to get research on a wide variety of variables, including ad performance, packaging design, customer satisfaction and more.

“We’ve enabled market research to become a much more ubiquitous part of the business process,” Barber said.

Source: This article was published at martechtoday.com By Robin Kurzer

