WE FACE a crisis of computing. The very devices that were supposed to augment our minds now harvest them for profit. How did we get here?

Most of us only know the oft-told mythology featuring industrious nerds who sparked a revolution in the garages of California. The heroes of the epic: Jobs, Gates, Musk, and the rest of the cast. Earlier this year, Mark Zuckerberg, hawker of neo-Esperantist bromides about “connectivity as panacea” and leader of one of the largest media distribution channels on the planet, excused himself by recounting to senators an “aw shucks” tale of building Facebook in his dorm room. Silicon Valley myths aren’t just used to rationalize bad behavior. These business school tales end up restricting how we imagine our future, limiting it to the caprices of eccentric billionaires and market forces.

What we need instead of myths are engaging, popular histories of computing and the internet, lest we remain blind to the long view.

At first blush, Yasha Levine’s Surveillance Valley: The Secret Military History of the Internet (2018) seems to fit the bill. A former editor of The eXile, a Moscow-based tabloid newspaper, and investigative reporter for PandoDaily, Levine has made a career out of writing about the dark side of tech. In this book, he traces the intellectual and institutional origins of the internet. He then focuses on the privatization of the network, the creation of Google, and revelations of NSA surveillance. And, in the final part of his book, he turns his attention to Tor and the crypto community.

He remains unremittingly dark, however, claiming that these technologies were developed from the beginning with surveillance in mind and that their origins are tangled up with counterinsurgency research in the Third World. This leads him to a damning conclusion: “The Internet was developed as a weapon and remains a weapon today.”

To be sure, these constitute provocative theses, ones that attempt to confront not only the standard Silicon Valley story, but also established lore among the small group of scholars who study the history of computing. He falls short, however, of backing up his claims with sufficient evidence. Indeed, he flirts with creating a mythology of his own — one that I believe risks marginalizing the most relevant lessons from the history of computing.

The scholarly history is not widely known and worth relaying here in brief. The internet and what today we consider personal computing came out of a unique, government-funded research community that took off in the early 1960s. Keep in mind that, in the preceding decade, “computers” were radically different from what we know today. Hulking machines, they existed to crunch numbers for scientists, researchers, and civil servants. “Programs” consisted of punched cards fed into room-sized devices that would process them one at a time. Computer time was tedious and riddled with frustration. A researcher working with census data might have to queue up behind dozens of other users, book time to run her cards through, and would only know about a mistake when the whole process was over.

Users, along with IBM, remained steadfast in believing that these so-called “batch processing” systems were really what computers were for. Any progress, they believed, would entail building bigger, faster, better versions of the same thing.

But that’s obviously not what we have today. From a small research community emerged an entirely different set of goals, loosely described as “interactive computing.” As the term suggests, using computers would no longer be restricted to a static one-way process but would be dynamically interactive. According to the standard histories, the man most responsible for defining these new goals was J. C. R. Licklider. A psychologist specializing in psychoacoustics, he had worked on early computing research, becoming a vocal proponent for interactive computing. His 1960 essay “Man-Computer Symbiosis” outlined how computers might even go so far as to augment the human mind.

It just so happened that funding was available. Three years earlier, in 1957, the Soviet launch of Sputnik had sent the US military into a panic. Partially in response, the Department of Defense (DoD) created a new agency for basic and applied technological research called the Advanced Research Projects Agency (ARPA, known today as DARPA). The agency threw large sums of money at all sorts of possible — and dubious — research avenues, from psychological operations to weather control. Licklider was appointed to head the Command and Control and Behavioral Sciences divisions, presumably because of his background in both psychology and computing.

At ARPA, he enjoyed relative freedom in addition to plenty of cash, which enabled him to fund projects in computing whose military relevance was decidedly tenuous. He established a nationwide, multi-generational network of researchers who shared his vision. As a result, almost every significant advance in the field from the 1960s through the early 1970s was, in some form or another, funded or influenced by the community he helped establish.

Its members realized that the big computers scattered around university campuses needed to communicate with one another, much as Licklider had discussed in his 1960 paper. In 1967, one of his successors at ARPA, Robert Taylor, formally funded the development of a research network called the ARPANET. At first the network spanned only a handful of universities across the country. By the early 1980s, it had grown to include hundreds of nodes. Finally, through a rather convoluted trajectory involving international organizations, standards committees, national politics, and technological adoption, the ARPANET evolved in the early 1990s into the internet as we know it.

Levine believes that he has unearthed several new pieces of evidence that undercut parts of this early history, leading him to conclude that the internet has been a surveillance platform from its inception.

The first piece of evidence he cites comes by way of ARPA’s Project Agile. A counterinsurgency research effort in Southeast Asia during the Vietnam War, it was notorious for its defoliation program that developed chemicals like Agent Orange. It also involved social science research and data collection under the guidance of an intelligence operative named William Godel, head of ARPA’s classified efforts under the Office of Foreign Developments. On more than one occasion, Levine asserts or at least suggests that Licklider and Godel’s efforts were somehow insidiously intertwined and that Licklider’s computing research in his division of ARPA had something to do with Project Agile. Despite arguing that this is clear from “pages and pages of released and declassified government files,” Levine cites only one such document as supporting evidence for this claim. It shows how Godel, who at one point had surplus funds, transferred money from his group to Licklider’s department when the latter was over budget.

This doesn’t pass the sniff test. Given the freewheeling nature of ARPA’s funding and management in the early days, such a transfer should come as no surprise. On its own, it doesn’t suggest a direct link in terms of research efforts. Years later, Taylor asked his boss at ARPA to fund the ARPANET — and, after a 20-minute conversation, he received $1 million in funds transferred from ballistic missile research. No one would seriously suggest that ARPANET and ballistic missile research were somehow closely “intertwined” because of this.

Sharon Weinberger’s recent history of ARPA, The Imagineers of War: The Untold Story of DARPA, The Pentagon Agency that Changed the World (2017), which Levine cites, makes clear what is already known from the established history. “Newcomers like Licklider were essentially making up the rules as they went along,” and were “given broad berth to establish research programs that might be tied only tangentially to a larger Pentagon goal.” Licklider took nearly every chance he could to transform his ostensible behavioral science group into an interactive computing research group. Most people in wider ARPA, let alone the DoD, had no idea what Licklider’s researchers were up to. His Command and Control division was even renamed the more descriptive Information Processing Techniques Office (IPTO).

Licklider was certainly involved in several aspects of counterinsurgency research. Annie Jacobsen, in her book The Pentagon’s Brain: An Uncensored History of DARPA, America’s Top-Secret Military Research Agency (2015), describes how he attended meetings discussing strategic hamlets in Southeast Asia and collaborated on proposals with others who conducted Cold War social science research. And Levine mentions Licklider’s involvement with a symposium that addressed how computers might be useful in conducting counterinsurgency work.

But Levine only points to one specific ARPA-funded computing research project that might have had something to do with counterinsurgency. In 1969, Licklider — no longer at ARPA — championed a proposal for a constellation of research efforts to develop statistical analysis and database software for social scientists. The Cambridge Project, as it was called, was a joint effort between Harvard and MIT. Formed at the height of the antiwar movement, when all DoD funding was viewed as suspicious, it was greeted with outrage by student demonstrators. As Levine mentions, students on campuses across the country viewed computers as large, bureaucratic, war-making machines that supported the military-industrial complex.

Levine makes a big deal of the Cambridge Project, but is there really a concrete connection between surveillance, counterinsurgency, computer networking, and this research effort? If there is, he doesn’t present it in the book. Instead, he relies heavily on an article in the Harvard Crimson by a student activist. He doesn’t even directly quote from the project proposal itself, which should contain at least one or two damning lines. Instead, he lists types of “data banks” the project would build, including ones on youth movements, minority integration in multicultural societies, and public opinion polls, among others. The project ran for five years, but Levine never tells us what it was actually used for.

It’s worth pointing out that the DoD was the only organization that was funding computing research in a manner that could lead to real breakthroughs. Licklider and others needed to present military justification for their work, no matter how thin. In addition, as the 1960s came to a close, Congress was tightening its purse strings, which was another reason to trump up their relevance. It’s odd that an investigative reporter like Levine, ever suspicious of the standard line, should take the claims of these proposals at face value.

I spoke with John Klensin, a member of the Cambridge Project steering committee who was involved from the beginning. He has no memory of such data banks. “There was never any central archive or effort to build one,” he told me. He worked closely with Licklider and other key members of the project, and he distinctly recalls the tense atmosphere on campuses at the time, even down to the smell of tear gas. Oddly enough, he says some people worked for him by day and protested the project by night, believing that others elsewhere must be doing unethical work. According to Klensin, the Cambridge Project conducted “zero classified research.” It produced general purpose software and published its reports publicly. Some of them are available online, but Levine doesn’t cite them at all. An ARPA-commissioned study of its own funding history even concluded that, while the project had been a “technical success” whose systems were “applicable to a wide variety of disciplines,” behavioral scientists hadn’t benefited much from it. Until Levine or someone else can produce documents demonstrating that the project was designed for, or even used in, counterinsurgency or surveillance efforts, we’ll have to take Klensin at his word.

As for the ARPANET, Levine only provides one source of evidence for his claim that, from its earliest days, the experimental computer network was involved in some kind of surveillance activity. He has dug up an NBC News report from the 1970s that describes how intelligence gathered in previous years (as part of an effort to create dossiers of domestic protestors) had been transferred across a new network of computer systems within the Department of Defense.

This report was read into the Congressional Record during joint hearings on Surveillance Technology in 1975. But what’s clear from the subsequent testimony of Assistant Deputy Secretary of Defense David Cooke is that the NBC reporter had likely confused several computer systems and networks across various government agencies. The story’s lone named source claims to have seen the data structure used for the files when they arrived at MIT. It is indeed an interesting account, but it remains unclear what was transferred, across which system, and what he saw. This incident hardly shows “how military and intelligence agencies used the network technology to spy on Americans in the first version of the Internet,” as Levine claims.

The ARPANET was not a classified system — anyone with an appropriately funded research project could use it. “ARPANET was a general purpose communication network. It is a distortion to conflate this communication system’s development with the various projects that made use of its facilities,” Vint Cerf, co-creator of the internet protocol, told me. Cerf concedes, however, that a “secured capability” was created early on, “presumably used to communicate classified information across the network.” That should not be surprising, as the government ran the project. But Levine’s evidence merely shows that surveillance information gathered elsewhere might have been transferred across the network. Does that count as having surveillance “baked in,” as he says, to the early internet?

Levine’s early history suffers most from viewing ARPA or even the military as a single monolithic entity. In the absence of hard evidence, he employs a jackhammer of willful insinuations as described above, pounding toward a questionable conclusion. Others have noted this tendency. He disingenuously writes that, four years ago, a review of Julian Assange’s book in this very publication accused him of being funded by the CIA, when in fact its author had merely suggested that Levine was prone to conspiracy theories. It’s a shame because today’s internet is undoubtedly a surveillance platform, both for governments and the companies whose cash crop is our collective mind. To suggest this was always the case means ignoring the effects of the hysterical national response to 9/11, which granted unprecedented funding and power to private intelligence contractors. Such dependence on private companies was itself part of a broader free market turn in national politics from the 1970s onward, which tightened funds for basic research in computing and other technical fields — and cemented the idea that private companies, rather than government-funded research, would take charge of inventing the future. Today’s comparatively incremental technical progress is the result. In The Utopia of Rules (2015), anthropologist David Graeber describes this phenomenon as a turn away from investment in technologies promoting “the possibility of alternative futures” to investment in those that “furthered labor discipline and social control.” As a result, instead of mind-enhancing devices that might have the same sort of effect as, say, mass literacy, we have a precarious gig economy and a convenience-addled relationship with reality.

Levine recognizes a tinge of this in his account of the rise of Google, the first large tech company to build a business model for profiting from user data. “Something in technology pushed other companies in the same direction. It happened just about everywhere,” he writes, though he doesn’t say what the “something” is. But the lesson to remember from history is that companies on their own are incapable of big inventions like personal computing or the internet. The quarterly pressure for earnings and “innovations” leads them toward unimaginative profit-driven developments, some of them harmful.

This is why Levine’s unsupported suspicion of government-funded computing research, regardless of the context, is counterproductive. The lessons of ARPA prove inconvenient for mythologizing Silicon Valley. They show a simple truth: in order to achieve serious invention and progress — in computers or any other advanced technology — you have to pay intelligent people to screw around with minimal interference, accept that most ideas won’t pan out, and extend this play period to longer stretches of time than the pressures of corporate finance allow. As science historian Mitchell Waldrop once wrote, the polio vaccine might never have existed otherwise; it was “discovered only after years of failure, frustration, and blind alleys, none of which could have been justified by cost/benefit analysis.” Left to corporate interests, the world would instead “have gotten the best iron lungs you ever saw.”

Computing for the benefit of the public is a more important concept now than ever. In fact, Levine agrees, writing, “The more we understand and democratize the Internet, the more we can deploy its power in the service of democratic and humanistic values.” Power in the computing world is wildly unbalanced — each of us mediated by and dependent on, indeed addicted to, invasive systems whose functionality we barely understand. Silicon Valley only exacerbates this imbalance, in the same manner that oil companies exacerbate climate change or the financialization of the economy exacerbates inequality. Today’s technology is flashy, sexy, and downright irresistible. But, while we need a cure for the ills of late-stage capitalism, our gadgets are merely “the best iron lungs you ever saw.”

Source: This article was published at lareviewofbooks.org by Eric Gade

Published in Online Research

Written by Bram Jansen

A lot more people are concerned about privacy today than was the case a few years ago. The efforts of whistleblowers like Edward Snowden have a lot to do with this: people now realize just how vulnerable they are when browsing the web. To be clear, what has changed is that people are starting to take their online privacy more seriously. It does not mean that the threats you face while browsing have diminished. If anything, they have increased in number. If you still don’t look after your online privacy, there’s no time left to think about it. You have to take action now.

5 reasons to protect online privacy

1- Hide from government surveillance

Almost all countries of the world monitor the online activity of their citizens to some degree. It doesn’t matter how big or developed the country is. If you think that your government doesn’t look into your online privacy at all, you’re deluding yourself. The only thing that varies is the extent to which your internet activity is monitored and the information that is recorded.

2- Bypass government censorship

There are large parts of the world where the internet is not a free place. Governments censor the internet and control what their citizens can and cannot access and do online. Most people know about the Great Firewall of China and the way Middle Eastern countries block access to social networking and news websites, but the problem extends to many more countries.

3- Protect your personal data

You use the internet to share a lot of personal data: private conversations, pictures, bank details, social security numbers, and so on. If you share this data without protecting it, it is visible to anyone positioned to observe your traffic. Malicious users can intercept it and make you a victim of identity theft quite easily. HTTPS protects data in transit, but not all websites use it.
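To make the HTTPS point concrete, here is a minimal Python sketch of the habit the advice implies: refuse to submit anything sensitive unless the connection is encrypted. The helper name and endpoint are hypothetical; only the scheme check is the point.

```python
from urllib.parse import urlparse

import requests  # third-party: pip install requests


def post_sensitive(url: str, payload: dict) -> requests.Response:
    """Send form data only if the URL uses HTTPS.

    Anything posted over plain HTTP travels in cleartext and can be
    read by anyone able to observe the traffic along the way.
    """
    if urlparse(url).scheme != "https":
        raise ValueError(f"Refusing to send sensitive data over insecure URL: {url}")
    return requests.post(url, data=payload, timeout=10)


# Hypothetical usage:
# post_sensitive("https://example.com/login", {"user": "alice", "password": "s3cret"})
```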

4- Hiding P2P activity is important

P2P, or torrenting, is a huge part of many people’s internet usage today. You can download all sorts of files from P2P networks, but much of this content is protected by copyright law. In some jurisdictions downloading copyrighted content for personal use is tolerated while sharing it is not, and the way P2P works, you are constantly sharing files as you download them. If someone notices you downloading copyrighted content, you might be in trouble. Copyright law is often vague, so it’s best to hide your torrent activity. Even those in Canada have to use P2P carefully: the country provides some of the fastest connections but is tightening its grip on P2P users.

5- Stream content peacefully without ISP throttling

When you stream or download content, your ISP might throttle your bandwidth to balance the network load. This is only possible because your ISP can see your activity, and it robs you of a quality streaming experience. The solution is to hide your online activity.

Online privacy is fast becoming a myth, and users have to make an effort to retain some of it. The best way to do this is to use a VPN, which encrypts your connection and hides your true IP address. But be careful when you choose one. If you don’t know how to pick the best VPN, visit VPNAlert for a detailed guide, and make sure the VPN you choose does not record activity logs or hand data over to the authorities; otherwise all your efforts will be for naught.
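A quick sanity check after setting up whichever VPN you pick is to confirm that your apparent public IP address actually changes once it is connected. A minimal sketch using the public ipify service (any “what is my IP” endpoint would work the same way):

```python
import requests  # third-party: pip install requests


def public_ip() -> str:
    # api.ipify.org returns the caller's public IP address as plain text.
    return requests.get("https://api.ipify.org", timeout=10).text.strip()


before = public_ip()
input("Now connect your VPN, then press Enter... ")
after = public_ip()

if before == after:
    print(f"WARNING: your public IP is still {after}; the VPN is not masking you.")
else:
    print(f"Public IP changed from {before} to {after}.")
```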

Published in Internet Privacy

FILE - CIA Director Mike Pompeo testifies before a Senate Intelligence hearing during his nomination process, in Washington, Jan. 12, 2017.

WASHINGTON — If this week’s WikiLeaks document dump is genuine, it includes a CIA list of the many and varied ways the electronic devices in your hand, in your car, and in your home can be used to hack your life.

It’s simply more proof that “it’s not a matter of if you’ll get hacked, but when you’ll get hacked.” That may be every security expert’s favorite quote, and unfortunately, they say it’s true. The WikiLeaks releases include confidential documents the group says expose “the entire hacking capacity of the CIA.”

The CIA has refused to confirm the authenticity of the documents, which allege the agency has the tools to hack into smartphones and some televisions, allowing it to remotely spy on people through microphones on the devices.


WikiLeaks also claimed the CIA managed to compromise both Apple and Android smartphones, allowing their officers to bypass the encryption on popular services such as Signal, WhatsApp and Telegram.

For some regular tech users, news of the leaks and the hacking techniques just confirms what they already knew: when we’re wired 24-7, we are vulnerable.

“The expectation for privacy has been reduced, I think,” Chris Coletta said, “... in society, with things like WikiLeaks, the Snowden revelations ... I don’t know, maybe I’m cynical and just consider it to be inevitable, but that’s really the direction things are going.”

The internet of things

The problem is becoming even more dangerous as new, wired gadgets find their way into our homes, equipped with microphones and cameras that may always be listening and watching.

One of the WikiLeaks documents suggests the microphones in Samsung smart TVs can be hacked and used to listen in on conversations, even when the TV is turned off.

Security experts say it is important to understand that, in many cases, the growing number of wired devices in your home may be listening all the time.

“We have sensors in our phones, in our televisions, in Amazon Echo devices, in our vehicles,” said Clifford Neuman, the director of the Center for Computer Systems Security at the University of Southern California. “And really almost all of these attacks are things that are modifying the software that has access to those sensors so that the information is directed to other locations. Security practitioners have known that this is a problem for a long time.”

Neuman says hackers are using the things that make our tech so convenient against us.

“Certain pieces of software and certain pieces of hardware have been criticized because, for example, microphones might be always on,” he said. “But it is the kind of thing that we’re demanding as consumers, and we just need to be more aware that the information that is collected for one purpose can very easily be redirected for others.”

Tools of the espionage trade

The WikiLeaks release is especially damaging because it may have laid bare a number of U.S. surveillance techniques. The New York Times says the documents it examined lay out programs, including one called “Wrecking Crew,” which “explains how to crash a targeted computer,” and another that tells how to steal passwords using the autocomplete function on Internet Explorer.

Steve Grobman, chief technology officer of the Intel Security Group, says that’s bad not only because it can be done, but also because so-called “bad actors” now know it can be done. Soon enough, he warns, we could find our own espionage tools being used against us.

“We also do need to recognize the precedents we set, so, as offensive cyber capabilities are used ... they do give the blueprint for how that attack took place. And bad actors can then learn from that,” he said.

So how can tech-savvy consumers remain safe? Security experts say they can’t, and advise them to remember the “it’s not if, but when” rule of hacking.

The best bet is to always be aware that if you’re online, you’re vulnerable.

Source: This article was published at voanews.com by Kevin Enochs

Published in Online Research

IC Realtime introduces video search engine technology that will augment surveillance systems using analytics, natural language processing, and machine vision.

LAS VEGAS--(BUSINESS WIRE)--PEPCOM at CES 2018 – IC Realtime, a leader in digital surveillance and security technology, announces today the introduction of Ella, a new cloud-based deep-learning search engine that augments surveillance systems with natural language search capabilities across recorded video footage.


Ella uses both algorithmic and deep learning tools to give any surveillance or security camera the ability to recognize objects, colors, people, vehicles, animals and more. Ella was designed with the technology backbone of Camio, a startup founded by ex-Googlers who realized there could be a way to apply search to streaming video feeds. Ella makes every nanosecond of video searchable instantly, letting users type in queries like “white truck” to find every relevant clip instead of searching through hours of footage. Ella quite simply creates a Google for video.
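Neither company has published Ella’s internals, but the general shape of a “Google for video” is straightforward to sketch: an object detector labels frames, and an inverted index maps those labels to timestamps, so a text query returns moments instead of hours of footage. A toy illustration in Python, with the detector output faked:

```python
from collections import defaultdict

# Inverted index: detected label -> timestamps (seconds) where it appears.
index = defaultdict(list)


def ingest(timestamp, labels):
    """Record the labels an object detector emitted for one video frame."""
    for label in labels:
        index[label.lower()].append(timestamp)


def search(query):
    """Return timestamps where every word of the query was detected."""
    hits = [set(index.get(word, ())) for word in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []


# Faked detector output for two frames:
ingest(12.5, ["white", "truck"])
ingest(43.0, ["person", "dog"])

print(search("white truck"))  # -> [12.5]
```

A real system would replace the faked ingest calls with per-frame detector output and store clip boundaries rather than single timestamps, but the query path is the same idea.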

“The idea was born from a simple question: if we can search the entire internet in under a second, why can’t we do the same with video feeds?” said Carter Maslan, CEO of Camio. “IC Realtime is the perfect partner to bring this advanced video search capability to the global surveillance and security market because of their knowledge and experience with the needs of users in this space. Ella is the result of our partnership in fine-tuning the service for security applications.”

The average surveillance camera sees less than two minutes of interesting video each day despite streaming and recording 24/7. On top of that, traditional systems only allow the user to search for events by date, time, and camera type, returning very broad results that still require sifting, often for hours.

Ella instead does the work for users to highlight the interesting events and to enable fast searches of their surveillance and security footage for the events they want to see and share. From the moment Ella comes online and is connected, it begins learning and tagging objects the cameras see. The deep learning engine lives in the cloud and comes preloaded with recognition of thousands of objects like makes and models of cars; within the first minute of being online, users can start to search their footage.

Hardware agnostic, Ella also solves the issue of limited bandwidth for any HD streaming camera or NVR. Rather than push every second of recorded video to the cloud, Ella features interest-based video compression. Using machine learning algorithms that recognize patterns of motion in each camera scene and flag what is interesting, Ella records in HD only when it recognizes something important. By learning from what the system sees, Ella can reduce false positives by understanding that a tree swaying in the wind is not notable while the arrival of a delivery truck might be. Even the uninteresting events are still stored in a low-resolution time-lapse format, so they provide 24x7 continuous security coverage without using up valuable bandwidth.
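The release doesn’t detail how the interest-based gating works, but the idea it describes can be approximated with simple frame differencing: keep full-quality frames only when the scene changes beyond a threshold, and fold everything else into the time-lapse track. A crude sketch using OpenCV, with the threshold and input file as placeholders (Ella’s actual models are proprietary):

```python
import cv2  # third-party: pip install opencv-python

THRESHOLD = 5.0  # mean gray-level change that counts as "interesting"

cap = cv2.VideoCapture("camera_feed.mp4")  # placeholder input
ok, prev = cap.read()
hd_frames, timelapse = [], []

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    # Mean absolute difference between consecutive grayscale frames.
    diff = cv2.absdiff(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
    )
    # Big average change -> keep in HD; otherwise fold into the
    # low-resolution time-lapse track for continuous coverage.
    (hd_frames if diff.mean() > THRESHOLD else timelapse).append(frame)
    prev = frame

cap.release()
print(f"{len(hd_frames)} interesting frames, {len(timelapse)} time-lapse frames")
```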

“The video search capabilities delivered by Ella haven't been feasible in the security and surveillance industry before today,” said Matt Sailor, CEO for IC Realtime. “This new solution brings intelligence and analytics to security cameras around the world; Ella is a hardware agnostic approach to cloud-based analytics that instantly moves any connected surveillance system into the future.”

Ella works with both existing DIY and professionally installed surveillance and security cameras and consists of an on-premise video gateway device and a cloud platform subscription. Ella subscription pricing starts at $6.99 per month and increases with storage and analysis features needed for the particular scope of each project. To learn more about Ella, visit www.smartella.com.

For more information about IC Realtime please visit http://www.icrealtime.com.

For more information on Camio please visit https://camio.com.

About IC Realtime

Established in 2006, IC Realtime is a leading digital surveillance manufacturer serving the residential, commercial, government, and military security markets. With an expansive product portfolio of surveillance solutions, IC Realtime innovates, distributes, and supports global video technology. Through a partnership with technology platform Camio, ICR created Ella, a cloud-based deep learning solution that augments surveillance cameras with natural language search capabilities. IC Realtime is revolutionizing video search functionality for the entire industry. IC Realtime is part of parent company IC Real Tech, formed in 2014 with headquarters in the US and Europe. Learn more at http://icrealtime.com

Connect with IC Realtime on Facebook at www.facebook.com/icrealtimeus or on Twitter at www.twitter.com/icrealtime.

Contacts

Caster Communications
Peter Girard

Source: This article was published at businesswire.com

Published in News & Politics

Searching video surveillance streams for relevant information is a time-consuming task that does not always yield accurate results. A new cloud-based deep-learning search engine augments surveillance systems with natural language search capabilities across recorded video footage.

The Ella search engine, developed by IC Realtime, uses both algorithmic and deep learning tools to give any surveillance or security camera the ability to recognize objects, colors, people, vehicles, animals and more.

It was designed with the technology backbone of Camio, a startup founded by ex-Googlers who realized there could be a way to apply search to streaming video feeds. Ella makes every nanosecond of video searchable instantly, letting users type in queries like “white truck” to find every relevant clip instead of searching through hours of footage. Ella quite simply creates a Google for video.

Traditional systems only allow the user to search for events by date, time, and camera type, returning very broad results that still require sifting, according to businesswire.com. The average surveillance camera sees less than two minutes of interesting video each day despite streaming and recording 24/7.

Ella instead does the work for users to highlight the interesting events and to enable fast searches of their surveillance and security footage. From the moment Ella comes online and is connected, it begins learning and tagging objects the cameras see.

The deep learning engine lives in the cloud and comes preloaded with recognition of thousands of objects like makes and models of cars; within the first minute of being online, users can start to search their footage.

Hardware agnostic, the technology also solves the issue of limited bandwidth for any HD streaming camera or NVR. Rather than push every second of recorded video to the cloud, Ella features interest-based video compression. Using machine learning algorithms that recognize patterns of motion in each camera scene and flag what is interesting, Ella records in HD only when it detects something important. The uninteresting events are still stored in a low-resolution time-lapse format, so they provide 24×7 continuous security coverage without using up valuable bandwidth.

Ella works with both existing DIY and professionally installed surveillance and security cameras and consists of an on-premise video gateway device and a cloud platform subscription.

Source: This article was published at i-hls.com

Published in Search Engine
