Since the Arab uprisings of 2011, the UAE has utilised 'cyber-security governance' to quell the harbingers of revolt and suppress dissident voices.

The nuts and bolts of the Emirati surveillance state moved into the spotlight on 1 February as the Abu Dhabi-based cybersecurity company DarkMatter allegedly stepped "out of the shadows" to speak to the international media.

Its CEO and founder, Faisal al-Bannai, gave a rare interview to the Associated Press at the company's headquarters in Abu Dhabi, in which he absolved his company of any direct responsibility for human rights violations in the UAE.  

Established in the UAE in 2015, DarkMatter has always maintained that it is a commercially driven company. Despite the Emirati government constituting 80 percent of DarkMatter's customer base and the company previously describing itself as "a strategic partner of the UAE government", its CEO was at pains to suggest that it was independent of the state.

According to its website, the company's stated aim is to "protect governments and enterprises from the ever-evolving threat of cyber attack" by offering a range of non-offensive cybersecurity services. 

Seeking skilled hackers

Though DarkMatter defines its activities as defensive, an Italian security expert, who attended an interview with the company in 2016, likened its operations to "big brother on steroids" and suggested it was deeply rooted within the Emirati intelligence system.

Simone Margaritelli, also a former hacker, alleged that during the interview he was informed of the UAE's intention to develop a surveillance system that was "capable of intercepting, modifying, and diverting (as well as occasionally obscuring) traffic on IP, 2G, 3G, and 4G networks".

Although he was offered a lucrative monthly tax-free salary of $15,000, he rejected the offer on ethical grounds.

Furthermore, in an investigation carried out by The Intercept in 2016, sources with inside knowledge of the company said that DarkMatter was "aggressively" seeking skilled hackers to carry out offensive surveillance operations. This included plans to exploit hardware probes already installed across major cities in order to track, locate and hack any person at any time in the UAE.


As with other states, there is a need for cybersecurity in the UAE. As the threat of cyber-attacks has increased worldwide, there have been numerous reports of attempted attacks from external actors on critical infrastructure in the country. 

Since the Arab uprisings of 2011, however, internal "cyber-security governance", which has been utilised to quell the harbingers of revolt and suppress dissident voices, has become increasingly important to the Emirati government and other regimes across the region.

Authoritarian control

In the UAE, as with other GCC states, this has found legislative expression in the cybercrime law. Instituted in 2012, its vaguely worded provisions essentially provide a legal basis to detain anybody who criticises the regime online.

This was followed shortly afterwards by the formation of the UAE’s own cybersecurity entity, the National Electronic Security Authority (NESA), which recently began working in parallel with the UAE Armed Forces’ cyber command unit, established in 2014.

A network of Emirati government agencies and state-directed telecommunications companies has worked in loose coordination with international arms manufacturers and cybersecurity companies to transform communications technologies into central components of authoritarian control.

In 2016, an official from the Dubai police force announced that authorities were monitoring users across 42 social media platforms, while a spokesperson for the UAE’s Telecommunication Regulatory Authority similarly boasted that all social media profiles and internet sites were being tracked by the relevant agencies.


Crown Prince Mohammed Bin Zayed Al Nahyan of Abu Dhabi meets with US President Donald Trump in Washington in May 2017 (AFP)

As a result, scores of people who have criticised the UAE government on social media have been arbitrarily detained, forcefully disappeared and, in many cases, tortured.

Last year, Jordanian journalist Tayseer al-Najjar and prominent Emirati academic Nasser bin Ghaith received sentences of three and 10 years respectively for comments made on social media. Similarly, award-winning human rights activist Ahmed Mansoor has been arbitrarily detained for nearly a year due to his online activities. 

This has been a common theme across the region in the post-"Arab Spring" landscape. In line with this, a lucrative cybersecurity market opened up across the Middle East and North Africa, which, according to the US tech research firm Gartner, was valued at $1.3bn in 2016.

A modern-day surveillance state

In many respects, the UAE's surveillance infrastructure has been built by a network of international cybersecurity "dealers" who have willingly profited from supplying the Emirati regime with the tools needed for a modern-day surveillance state. 

Moreover, it has been reported that DarkMatter has been hiring a range of top talent from across the US national security and tech establishment, including from Google, Samsung, and McAfee. Late last year, it was revealed that DarkMatter was managing an intelligence contract that had been recruiting former CIA agents and US government officials to train Emirati security officials in a bid to bolster the UAE's intelligence body.

UK military companies also have a foothold in the Emirati surveillance state. Last year, it was revealed that BAE Systems had been using a Danish subsidiary, ETI Evident, to export surveillance technologies to the UAE government and other regimes across the region. 

'The million-dollar dissident'

Although the UAE has no official diplomatic relations with Israel, in 2016 Abu Dhabi launched Falcon Eye, an Israeli-installed civil surveillance system. This enables Emirati security officials to monitor every person "from the moment they leave their doorstep to the moment they return to it", a source close to Falcon Eye told Middle East Eye in 2015.

The source added that the system allows work, social and behavioral patterns to be recorded, analyzed and archived: "It sounds like sci-fi but it is happening in Abu Dhabi today."

Moreover, in a story that made headlines in 2016, Ahmed Mansoor's iPhone was hacked by the UAE government with software provided by the Israeli security company NSO Group. Emirati authorities reportedly paid $1m for the software, leading international media outlets to dub Mansoor "the million-dollar dissident."

Mansoor's case is illustrative of how Emirati authorities have conducted unethical practices in the past. In recent years, the UAE has bought tailored software products from international companies such as Hacking Team to engage in isolated, targeted attacks on human rights activists, such as Mansoor.

The operations of DarkMatter, as well as the installation of Falcon Eye, suggest, however, that rather than relying on individual products from abroad, Emirati authorities are now building a surveillance system of their own and bringing operations in-house by developing the infrastructure for a 21st-century police state. 

Source: This article was published in middleeasteye.net by Joe Odell


Source: This article was published in lareviewofbooks.org by Eric Gade

WE FACE a crisis of computing. The very devices that were supposed to augment our minds now harvest them for profit. How did we get here?

Most of us only know the oft-told mythology featuring industrious nerds who sparked a revolution in the garages of California. The heroes of the epic: Jobs, Gates, Musk, and the rest of the cast. Earlier this year, Mark Zuckerberg, hawker of neo-Esperantist bromides about “connectivity as panacea” and leader of one of the largest media distribution channels on the planet, excused himself by recounting to senators an “aw shucks” tale of building Facebook in his dorm room. Silicon Valley myths aren’t just used to rationalize bad behavior. These business school tales end up restricting how we imagine our future, limiting it to the caprices of eccentric billionaires and market forces.

What we need instead of myths are engaging, popular histories of computing and the internet, lest we remain blind to the long view.

At first blush, Yasha Levine’s Surveillance Valley: The Secret Military History of the Internet (2018) seems to fit the bill. A former editor of The eXile, a Moscow-based tabloid newspaper, and investigative reporter for PandoDaily, Levine has made a career out of writing about the dark side of tech. In this book, he traces the intellectual and institutional origins of the internet. He then focuses on the privatization of the network, the creation of Google, and revelations of NSA surveillance. And, in the final part of his book, he turns his attention to Tor and the crypto community.

He remains unremittingly dark, however, claiming that these technologies were developed from the beginning with surveillance in mind and that their origins are tangled up with counterinsurgency research in the Third World. This leads him to a damning conclusion: “The Internet was developed as a weapon and remains a weapon today.”

To be sure, these constitute provocative theses, ones that attempt to confront not only the standard Silicon Valley story, but also established lore among the small group of scholars who study the history of computing. He falls short, however, of backing up his claims with sufficient evidence. Indeed, he flirts with creating a mythology of his own — one that I believe risks marginalizing the most relevant lessons from the history of computing.

The scholarly history is not widely known, but it is worth relaying here in brief. The internet and what today we consider personal computing came out of a unique, government-funded research community that took off in the early 1960s. Keep in mind that, in the preceding decade, “computers” were radically different from what we know today. Hulking machines, they existed to crunch numbers for scientists, researchers, and civil servants. “Programs” consisted of punched cards fed into room-sized devices that would process them one at a time. Computer time was tedious and riddled with frustration. A researcher working with census data might have to queue up behind dozens of other users, book time to run her cards through, and would only know about a mistake when the whole process was over.

Users, along with IBM, remained steadfast in believing that these so-called “batch processing” systems were really what computers were for. Any progress, they believed, would entail building bigger, faster, better versions of the same thing.

But that’s obviously not what we have today. From a small research community emerged an entirely different set of goals, loosely described as “interactive computing.” As the term suggests, using computers would no longer be restricted to a static one-way process but would be dynamically interactive. According to the standard histories, the man most responsible for defining these new goals was J. C. R. Licklider. A psychologist specializing in psychoacoustics, he had worked on early computing research, becoming a vocal proponent for interactive computing. His 1960 essay “Man-Computer Symbiosis” outlined how computers might even go so far as to augment the human mind.

It just so happened that funding was available. Three years earlier, in 1957, the Soviet launch of Sputnik had sent the US military into a panic. Partially in response, the Department of Defense (DoD) created a new agency for basic and applied technological research called the Advanced Research Projects Agency (ARPA, known today as DARPA). The agency threw large sums of money at all sorts of possible — and dubious — research avenues, from psychological operations to weather control. Licklider was appointed to head the Command and Control and Behavioral Sciences divisions, presumably because of his background in both psychology and computing.

At ARPA, he enjoyed relative freedom in addition to plenty of cash, which enabled him to fund projects in computing whose military relevance was decidedly tenuous. He established a nationwide, multi-generational network of researchers who shared his vision. As a result, almost every significant advance in the field from the 1960s through the early 1970s was, in some form or another, funded or influenced by the community he helped establish.

Its members realized that the big computers scattered around university campuses needed to communicate with one another, much as Licklider had discussed in his 1960 paper. In 1967, one of his successors at ARPA, Robert Taylor, formally funded the development of a research network called the ARPANET. At first the network spanned only a handful of universities across the country. By the early 1980s, it had grown to include hundreds of nodes. Finally, through a rather convoluted trajectory involving international organizations, standards committees, national politics, and technological adoption, the ARPANET evolved in the early 1990s into the internet as we know it.

Levine believes that he has unearthed several new pieces of evidence that undercut parts of this early history, leading him to conclude that the internet has been a surveillance platform from its inception.

The first piece of evidence he cites comes by way of ARPA’s Project Agile. A counterinsurgency research effort in Southeast Asia during the Vietnam War, it was notorious for its defoliation program that developed chemicals like Agent Orange. It also involved social science research and data collection under the guidance of an intelligence operative named William Godel, head of ARPA’s classified efforts under the Office of Foreign Developments. On more than one occasion, Levine asserts or at least suggests that Licklider and Godel’s efforts were somehow insidiously intertwined and that Licklider’s computing research in his division of ARPA had something to do with Project Agile. Despite arguing that this is clear from “pages and pages of released and declassified government files,” Levine cites only one such document as supporting evidence for this claim. It shows how Godel, who at one point had surplus funds, transferred money from his group to Licklider’s department when the latter was over budget.

This doesn’t pass the sniff test. Given the freewheeling nature of ARPA’s funding and management in the early days, such a transfer should come as no surprise. On its own, it doesn’t suggest a direct link in terms of research efforts. Years later, Taylor asked his boss at ARPA to fund the ARPANET — and, after a 20-minute conversation, he received $1 million in funds transferred from ballistic missile research. No one would seriously suggest that ARPANET and ballistic missile research were somehow closely “intertwined” because of this.

Sharon Weinberger’s recent history of ARPA, The Imagineers of War: The Untold Story of DARPA, the Pentagon Agency That Changed the World (2017), which Levine cites, makes clear what is already known from the established history. “Newcomers like Licklider were essentially making up the rules as they went along,” and were “given broad berth to establish research programs that might be tied only tangentially to a larger Pentagon goal.” Licklider took nearly every chance he could to transform his ostensible behavioral science group into an interactive computing research group. Most people in wider ARPA, let alone the DoD, had no idea what Licklider’s researchers were up to. His Command and Control division was even renamed the more descriptive Information Processing Techniques Office (IPTO).

Licklider was certainly involved in several aspects of counterinsurgency research. Annie Jacobsen, in her book The Pentagon’s Brain: An Uncensored History of DARPA, America’s Top-Secret Military Research Agency (2015), describes how he attended meetings discussing strategic hamlets in Southeast Asia and collaborated on proposals with others who conducted Cold War social science research. And Levine mentions Licklider’s involvement with a symposium that addressed how computers might be useful in conducting counterinsurgency work.

But Levine only points to one specific ARPA-funded computing research project that might have had something to do with counterinsurgency. In 1969, Licklider — no longer at ARPA — championed a proposal for a constellation of research efforts to develop statistical analysis and database software for social scientists. The Cambridge Project, as it was called, was a joint effort between Harvard and MIT. Formed at the height of the antiwar movement, when all DoD funding was viewed as suspicious, it was greeted with outrage by student demonstrators. As Levine mentions, students on campuses across the country viewed computers as large, bureaucratic, war-making machines that supported the military-industrial complex.

Levine makes a big deal of the Cambridge Project, but is there really a concrete connection between surveillance, counterinsurgency, computer networking, and this research effort? If there is, he doesn’t present it in the book. Instead, he relies heavily on an article in the Harvard Crimson by a student activist. He doesn’t even directly quote from the project proposal itself, which should contain at least one or two damning lines. Instead, he lists types of “data banks” the project would build, including ones on youth movements, minority integration in multicultural societies, and public opinion polls, among others. The project ran for five years but Levine never tells us what it was actually used for.

It’s worth pointing out that the DoD was the only organization that was funding computing research in a manner that could lead to real breakthroughs. Licklider and others needed to present military justification for their work, no matter how thin. In addition, as the 1960s came to a close, Congress was tightening its purse strings, which was another reason to trump up their relevance. It’s odd that an investigative reporter like Levine, ever suspicious of the standard line, should take the claims of these proposals at face value.

I spoke with John Klensin, a member of the Cambridge Project steering committee who was involved from the beginning. He has no memory of such data banks. “There was never any central archive or effort to build one,” he told me. He worked closely with Licklider and other key members of the project, and he distinctly recalls the tense atmosphere on campuses at the time, even down to the smell of tear gas. Oddly enough, he says some people worked for him by day and protested the project by night, believing that others elsewhere must be doing unethical work. According to Klensin, the Cambridge Project conducted “zero classified research.” It produced general purpose software and published its reports publicly. Some of them are available online, but Levine doesn’t cite them at all. An ARPA-commissioned study of its own funding history even concluded that, while the project had been a “technical success” whose systems were “applicable to a wide variety of disciplines,” behavioral scientists hadn’t benefited much from it. Until Levine or someone else can produce documents demonstrating that the project was designed for, or even used in, counterinsurgency or surveillance efforts, we’ll have to take Klensin at his word.

As for the ARPANET, Levine only provides one source of evidence for his claim that, from its earliest days, the experimental computer network was involved in some kind of surveillance activity. He has dug up an NBC News report from the 1970s that describes how intelligence gathered in previous years (as part of an effort to create dossiers of domestic protestors) had been transferred across a new network of computer systems within the Department of Defense.

This report was read into the Congressional record during joint hearings on Surveillance Technology in 1975. But what’s clear from the subsequent testimony of Assistant Deputy Secretary of Defense David Cooke is that the NBC reporter had likely confused several computer systems and networks across various government agencies. The story’s lone named source claims to have seen the data structure used for the files when they arrived at MIT. It is indeed an interesting account, but it remains unclear what was transferred, across which system, and what he saw. This incident hardly shows “how military and intelligence agencies used the network technology to spy on Americans in the first version of the Internet,” as Levine claims.

The ARPANET was not a classified system — anyone with an appropriately funded research project could use it. “ARPANET was a general purpose communication network. It is a distortion to conflate this communication system’s development with the various projects that made use of its facilities,” Vint Cerf, creator of the internet protocol, told me. Cerf concedes, however, that a “secured capability” was created early on, “presumably used to communicate classified information across the network.” That should not be surprising, as the government ran the project. But Levine’s evidence merely shows that surveillance information gathered elsewhere might have been transferred across the network. Does that count as having surveillance “baked in,” as he says, to the early internet?

Levine’s early history suffers most from viewing ARPA or even the military as a single monolithic entity. In the absence of hard evidence, he employs a jackhammer of willful insinuations as described above, pounding toward a questionable conclusion. Others have noted this tendency. He disingenuously writes that, four years ago, a review of Julian Assange’s book in this very publication accused him of being funded by the CIA, when in fact its author had merely suggested that Levine was prone to conspiracy theories.

It’s a shame because today’s internet is undoubtedly a surveillance platform, both for governments and the companies whose cash crop is our collective mind. To suggest this was always the case means ignoring the effects of the hysterical national response to 9/11, which granted unprecedented funding and power to private intelligence contractors. Such dependence on private companies was itself part of a broader free market turn in national politics from the 1970s onward, which tightened funds for basic research in computing and other technical fields — and cemented the idea that private companies, rather than government-funded research, would take charge of inventing the future. Today’s comparatively incremental technical progress is the result.

In The Utopia of Rules (2015), anthropologist David Graeber describes this phenomenon as a turn away from investment in technologies promoting “the possibility of alternative futures” to investment in those that “furthered labor discipline and social control.” As a result, instead of mind-enhancing devices that might have the same sort of effect as, say, mass literacy, we have a precarious gig economy and a convenience-addled relationship with reality.

Levine recognizes a tinge of this in his account of the rise of Google, the first large tech company to build a business model for profiting from user data. “Something in technology pushed other companies in the same direction. It happened just about everywhere,” he writes, though he doesn’t say what the “something” is. But the lesson to remember from history is that companies on their own are incapable of big inventions like personal computing or the internet. The quarterly pressure for earnings and “innovations” leads them toward unimaginative profit-driven developments, some of them harmful.

This is why Levine’s unsupported suspicion of government-funded computing research, regardless of the context, is counterproductive. The lessons of ARPA prove inconvenient for mythologizing Silicon Valley. They show a simple truth: in order to achieve serious invention and progress — in computers or any other advanced technology — you have to pay intelligent people to screw around with minimal interference, accept that most ideas won’t pan out, and extend this play period to longer stretches of time than the pressures of corporate finance allow. As science historian Mitchell Waldrop once wrote, the polio vaccine might never have existed otherwise; it was “discovered only after years of failure, frustration, and blind alleys, none of which could have been justified by cost/benefit analysis.” Left to corporate interests, the world would instead “have gotten the best iron lungs you ever saw.”

Computing for the benefit of the public is a more important concept now than ever. In fact, Levine agrees, writing, “The more we understand and democratize the Internet, the more we can deploy its power in the service of democratic and humanistic values.” Power in the computing world is wildly unbalanced — each of us mediated by and dependent on, indeed addicted to, invasive systems whose functionality we barely understand. Silicon Valley only exacerbates this imbalance, in the same manner that oil companies exacerbate climate change or financialization of the economy exacerbates inequality. Today’s technology is flashy, sexy, and downright irresistible. But, while we need a cure for the ills of late-stage capitalism, our gadgets are merely “the best iron lungs you ever saw.”


China will further tighten its internet regulations with a pledge on Sunday to strengthen controls over search engines and online news portals, the latest step in President Xi Jinping's push to maintain strict Communist Party control over content.

Xi has made China's "cyber sovereignty" a top priority in his sweeping campaign to bolster security. He has also reasserted the ruling Communist Party's role in limiting and guiding online discussion.

The five-year cultural development and reform plan released by the party and State Council, or Cabinet, calls for a "perfecting" of laws and rules related to the internet.

That includes a qualification system for people working in online news, according to the plan, carried by the official Xinhua news agency.

"Strike hard against online rumors, harmful information, fake news, news extortion, fake media and fake reporters," it said, without giving details.

Xi has been explicit that media must follow the party line, uphold the correct guidance on public opinion and promote "positive propaganda." The plan comes on top of existing tight internet controls, which includes the blocking of popular foreign websites such as Google and Facebook.

The government last week issued tighter rules for online news portals and network providers. Regulators say such controls are necessary in the face of growing security threats, and are done in accordance with the law.

Speaking more broadly about the country's cultural sector, the plan calls for efforts to reinforce and improve "positive propaganda".

"Strengthen and improve supervision over public opinion," it added.

The plan also calls for more effort to be put into promoting China's point of view and cultural soft power globally, though without giving details.

Source: This article was published in newsweek.com by Reuters


The following is a translation of an article written by Russian journalist Darya Luganskaya. The post has been edited for clarity and length, and reprinted with the author's permission. You can read the original text here

“The Internet was created as a special project by the CIA, and is developing as such,” Vladimir Putin announced three years ago last week. Since then, Russian authorities’ faith in the Internet has declined even further.

Despite this negative reputation among officials, the commercial side of the Russian Internet plays an important part in the country’s economy, accounting for around 1.35 trillion rubles ($23.7 billion), or 2.4 percent of Russian GDP in 2015, according to statistics from the Russian Association for Electronic Communications (RAEC). If officials are seriously thinking about “bringing order” to the Internet, as they say they want to, it’ll be a very costly endeavor. OpenEconomy has learned of three potential ways the authorities might begin to “restore order” to the Internet in the coming years.

1. Total Wiretap

On July 1, 2018, the second part of the so-called “Yarovaya packet,” a set of anti-terror laws prepared by State Duma MP Irina Yarovaya that advances a new way of storing and decoding Internet traffic, will come into force. Service providers including MTS, MegaFon, Beeline, and Rostelecom will be obligated to store phone call records and user messages, as well as all Internet traffic data (information about who visits which sites, and when), for six months. This information must be turned over to the security services upon request.

Storing this information is very expensive: TMT Consulting estimated in 2016 that it will cost the Russian telecommunications market nearly 1.7 trillion rubles ($29.8 billion). The Ministry of Communications is talking with the security services about ways to reduce the storage requirements tenfold, but the costs will still be enormous.

Participants at the 2014 Internet Entrepreneurship in Russia Forum. Source: Kremlin.ru

Under the rules, Internet services are also obligated to turn over encryption keys to the FSB (the Federal Security Service) upon request or risk being fined, though it's unclear precisely which keys have been requested and from whom. And Kommersant reported in September that the FSB is trying to find a way to decrypt all internet traffic in Russia using Deep Packet Inspection (DPI), despite the fact that DPI is less effective when sites use the https security protocol, which many major Russian sites do.
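The limitation Kommersant describes is easy to see in miniature: a payload-matching DPI rule fires on cleartext HTTP, where the requested URL sits in the packet bytes, but finds nothing once those same bytes are encrypted. The sketch below uses invented payloads and a toy XOR transform as a stand-in for real TLS encryption; it is an illustration of the principle, not a model of any actual filtering system.

```python
# Illustrative sketch (invented payloads): why payload-matching DPI
# degrades on encrypted traffic.
BANNED_PATTERN = b"blocked-site.example"

def dpi_flags(payload: bytes) -> bool:
    """A minimal DPI rule: flag any packet containing a banned byte string."""
    return BANNED_PATTERN in payload

# Cleartext HTTP: the requested URL sits in the payload, so the rule fires.
http_payload = b"GET http://blocked-site.example/news HTTP/1.1\r\n"
print(dpi_flags(http_payload))    # True

# Under TLS the same bytes travel encrypted. A toy XOR "cipher" stands in
# for real encryption here -- the point is only that the banned pattern no
# longer appears literally in the bytes on the wire.
toy_ciphertext = bytes(b ^ 0x5A for b in http_payload)
print(dpi_flags(toy_ciphertext))  # False
```

Real DPI equipment can still classify encrypted flows from metadata such as packet sizes, timing, and server names, which is why traffic analysis remains possible even where content inspection is not.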

Beginning in September 2015, all this personal information was ordered to be stored on Russian soil, forcing foreign companies to question whether they wanted to continue operating in Russia. Adding additional servers isn’t cheap, and the political implications of such a move could be costly for their users and their reputations around the world.

In November 2016, LinkedIn was blocked in Russia for violating this order. But this could change — according to Andrei Soldatov, the co-author of “The Fight for the RuNet,” LinkedIn parent company Microsoft has been known to cooperate with Russian authorities. For example, Microsoft reportedly handed over Windows source code to Russian authorities so that the government would continue to use its products. This appears to have increased the odds that LinkedIn will be unblocked in Russia at some point.

The end of LinkedIn in Russia?

Twitter has long refused to comply with this data localization law, though the company has said that it is reviewing relevant policies for Russian users, and that it may reconsider “where it stores the data of Russian users who have a commercial relationship as advertisers on the platform.” The messaging system Viber and the ride-share app Uber have made similar announcements.

Soldatov believes that Facebook and Google will not hand over users’ personal information to the authorities. Roskomnadzor, the main RuNet regulator, has not yet threatened to block them. And, Soldatov says, they won’t be added to the list of companies that are up for compliance checks anytime soon.

Other countries are taking similar measures to control the internet, Rose Dlougatch, a senior research associate at Freedom House, which releases an annual ranking of Internet freedom around the world, told Open Economy. “In Turkey, PayPal lost its license for violating data localization laws. The Iranian authorities have indicated that communication services will soon have to store data within Iran. At the beginning of 2016, an analogous data localization law was adopted in Kazakhstan. Citizens’ information is thereby made accessible to the authorities and foreign platforms are pushed out of the local market,” Dlougatch explained.

    2. Blocking Sites

    In 2012, Russian authorities began thinking about a mechanism for controlling the Internet: a “black list” of websites. Sites that land on the government’s register of forbidden sites for violating one of the many laws governing Internet content (such as those prohibiting the promotion of suicide or drug use, or those banning calls to extremism or terrorism) are blocked, often without court review. By the middle of April 2017, the free-expression news site Roskomsvoboda counted more than 4 million sites that had been blocked in this manner.
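    A provider-side check against such a register can be sketched roughly as follows; the domain names here are invented, and real deployments also match IP addresses and specific URLs from the regulator's dumps:

```python
# Hypothetical provider-side blocklist check: a hostname is blocked if it,
# or any parent domain, appears on the register.
BLOCKED = {"banned-example.ru", "banned-example.org"}

def is_blocked(hostname: str) -> bool:
    parts = hostname.lower().split(".")
    # check the hostname itself and every parent-domain suffix
    return any(".".join(parts[i:]) in BLOCKED for i in range(len(parts)))
```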

    Roskomnadzor has repeatedly threatened to block major websites like YouTube, Reddit, Vimeo, and Wikipedia (and access to these sites has been cut for hours at a time). But these warnings are usually not about the websites as a whole, but about specific pages that, according to the authorities, violate one law or another. If a site uses the HTTPS security protocol, service providers cannot block a single page; they have to cut off the entire site until the owner removes the content in question.
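    The granularity problem can be sketched like this: with plain HTTP a filtering middlebox sees the full URL, but with HTTPS it typically sees only the hostname (via the TLS SNI field), so a page-level blacklist degrades into site-level blocking. The blacklist entry below is hypothetical.

```python
# Page-level blacklist a regulator might hand to providers (hypothetical entry).
PAGE_BLACKLIST = {("example.org", "/banned-page")}

def must_block(host: str, path: str, https: bool) -> bool:
    if not https:
        # plain HTTP: the middlebox sees host and path, so one page can be blocked
        return (host, path) in PAGE_BLACKLIST
    # HTTPS: only the hostname is visible, so any listed page on the site
    # forces blocking the whole site
    return any(h == host for h, _ in PAGE_BLACKLIST)
```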

    It’s possible to access blocked sites by using anonymizers, such as VPN services and Tor, that mask your IP address, making it look like you are visiting websites from, say, Britain rather than Russia. And Russians actively use these tools: only Americans use the well-known anonymizer Tor more than Russians do, and nearly 12 percent of Tor’s clients are located in Russia.
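    Routing traffic through Tor from, say, Python's `requests` library is a one-line proxy configuration; this sketch assumes a local Tor client listening on its default SOCKS port 9050, and the actual network request is left commented out:

```python
# socks5h:// makes DNS resolution happen through the proxy as well,
# so the provider never sees which hostname is being looked up.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}
# import requests
# requests.get("https://check.torproject.org/", proxies=proxies)
```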

    At the end of April 2017, Vedomosti reported on a Roskomnadzor project aimed at blocking access to tools that allow users to visit blocked websites. Services can avoid being blocked, however, if they voluntarily cut users’ access to sites on Roskomnadzor’s “black list.” Proposed legislation also obligates search engines to prevent blocked websites from appearing in their search results. They could be fined up to 700,000 rubles if they do not comply.

    Image: Pixabay and Kremlin Press Service, edited by Kevin Rothrock.

    The goal of this initiative is to make it illegal to circumvent blocks and to block major anonymizers. Still, anonymizers aren’t going away.

    Vedomosti also reported that rather than blocking specific sites, measures could be taken that make it difficult for users to access sites, including slowing them down. But this would be very difficult and costly to accomplish. According to the Institute for Internet Research, limiting traffic at the subscriber level would require special equipment that could cost as much as $5 billion to develop and implement.

    3. An Autonomous RuNet

    Finally, Russia is trying to regulate the so-called “critical infrastructure” of the RuNet—Internet exchange points with other countries and the .ru and .рф domain names.

    Two years ago at a meeting of the Russian Security Council, Putin instructed state organs to think up ways to maintain the stability of the RuNet if it were to be cut off from the outside world. And at the end of last year, the Ministry of Communications and the FSB discussed legislation on this very topic.

    The Ministry proposes bringing traffic onto a single Government Information System, which Vedomosti has reported would be necessary to localize Internet activity. It also proposes moving Internet exchange points under the administrative control of Russian companies, exclusively. Finally, the Ministry wants to introduce a rule mandating that the administrator of a national domain name system is a Russian legal entity and an executive body with power over communications—that is, the Ministry itself.

    The FSB, meanwhile, is proposing changes to the Criminal Code for causing damage to or threatening the nation’s critical information infrastructure—with punishments running up to 6 years in prison.

    Both proposals have faced loud criticism. Microsoft and Cisco have come out against the FSB’s plan, and the Ministry of Communication’s proposal has not yet been approved by RAEC or expert committees in the government. Neither proposal has become law.

    This article was published on globalvoices.org

    Categorized in Internet Privacy

    Net Neutrality is without question one of the most important principles in modern life. Whether you realize it or not, the idea that web services should be provided equally without preference to one type of traffic or another has been the cornerstone of the web as we know it. And the Trump administration is doing everything in its power to trample the principle.

    In the latter Obama years, net neutrality came to be enshrined, at least in part, by the FCC. The Open Internet Order of 2015 reclassified internet service providers as “common carriers.” This came with stricter rules regarding how ISPs handled internet traffic. It protected user privacy and forbade companies from prioritizing traffic from specific sites or services.

    The current chairman of the FCC and Trump appointee, Ajit Pai, has announced his plan to upend the Open Internet Order. Pai will hold a meeting to re-examine parts of the order. First, he will push to reclassify ISPs, removing the common-carrier requirements; then the FCC will take a look at provisions such as those that prohibit ISPs from throttling traffic to their competitors.

    That last bit is particularly concerning because internet service providers, particularly in the United States, have built themselves into tremendous companies wielding broad and dangerous power. Comcast, for instance, is part of the same mega-corporation that controls NBC and Universal. Already you can see a problem — Comcast provides internet and cable services, and without strict rules governing how Comcast handles consumer internet traffic or cable accessibility, there’s little to stop them from making competitors’ movies and television shows harder or more expensive to access.
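    Without common-carrier rules, nothing technical prevents a policy like the sketch below, in which an ISP caps a competitor's video traffic while leaving its own service uncapped. The hostnames and rates are invented for illustration.

```python
# Hypothetical per-service bandwidth policy: None means "no cap".
POLICY_BYTES_PER_SEC = {
    "video.own-service.example": None,      # the ISP's own video platform
    "video.competitor.example": 1_000_000,  # a rival, throttled to ~1 MB/s
}
LINE_RATE = 10_000_000  # the subscriber's nominal 10 MB/s line

def allowed_rate(host: str) -> int:
    """Effective bandwidth for traffic to/from a given service."""
    cap = POLICY_BYTES_PER_SEC.get(host)
    return LINE_RATE if cap is None else min(cap, LINE_RATE)
```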

    We’ve already seen this happen a few times. Netflix was notably throttled until it agreed to pay to have its traffic given priority. This is the same kind of practice that created the mega-monopolies of the Gilded Age. Rail companies, facing few regulations, would often charge farmers more to send their goods to market than the goods were worth. That led to insane concentrations of wealth and some of the worst social disasters in history, not to mention countless anti-trust laws. As a global society, we’ve already learned this lesson, or at least we should have.


    Now the internet faces its greatest existential threat. The US internet is already among the worst in the industrialized world, and it’s precisely because ISPs have had unchecked growth. As a result, there’s often only one high-speed company operating in any given city. Internet service providers have diced up the country to reduce consumer choice. No choice means no competition which leads to higher prices and lower quality for everyone. That’s one of the many reasons Comcast and pals’ customer service is so terrible — they know you don’t have a choice.

    To be fair, ISPs are at least partially right. Having more than one major service company working in an area can cause a lot of redundancy regarding basic infrastructure. Typically services like this are what’s known as natural monopolies. Things like power companies or other utilities are common examples. And these are special cases where it doesn’t make sense to have lots of competition. Imagine having five different power lines running to your house so you could switch providers whenever you wanted — the costs would be higher because that system has lots of excess.

    That said, the internet is something that most people aren’t comfortable trusting to just one entity — and for good reason. The ability to control information or gather data on users is unsettling. And that’s precisely why the FCC filed the Open Internet Order. It simply doesn’t make sense for us to allow companies to have unchecked power over the market. These sorts of systems aren’t just anti-consumer, they’re anti-capitalist, and they only serve to further support the wealthy and help entrench a fundamentally broken system.

    Doom and gloom aside, Ajit Pai’s proposal could also just make your browsing experience suck a lot more. As Ars Technica reports, before 2015, services like video streaming could often be unstable — the companies that carried the signals would always have to work out an exchange when the signal left systems owned by one company and transitioned to those of another. Classifying ISPs as common carriers got rid of that problem, requiring providers to leave all transmitted data essentially alone. So even if you don’t care about the fate of the internet or think I’m being melodramatic, your YouTube surfing is still about to get a lot more frustrating.

    The one upside in all of this is that T-Mobile customers, and those who take advantage of data-free services their providers offer, will get to keep using them. Previously, the FCC was investigating whether allowing consumers to use Spotify, for example, without it counting against their data cap qualified as a violation of net neutrality (it totally does, by the way). Now Pai has suspended that investigation. That’s nice for people like myself who use T-Mobile, but it’s hardly worth everything else we may stand to lose.

    Pai’s scheduled the initial meeting for May 18. If you’re so inclined to leave your thoughts, the FCC typically takes comments from the public on proposed rule changes. It’s also worth following closely as Pai will be unveiling more details as we get closer to the meeting. Regardless, hopefully, one day, we won’t still be stuck in this tired fight.

    This article was published on geek.com by DANIEL STARKEY

    Categorized in Internet Privacy

    Introduction:

    Cyber law is the law governing cyberspace, and its jurisprudence is evolving rapidly alongside technological developments. The immense increase in the use of computers and other communication devices has brought a worldwide rise in cyber crime, confronting governments across the world with new legal challenges.

    The unprecedented rate of technological change has created many problems, especially of governance: new laws must be formulated and existing ones amended from time to time. Cyberspace has no geographical boundaries, which has given rise to many transnational crimes that may affect any country across the globe.

    Further challenges are likely to arise as information technology continues its rapid growth. Nations need to enter into multilateral international agreements establishing new, uniform rules for cyberspace, and there is also a need for an international organization to deal with problems relating to it.

    Emerging global trends and developments in Cyber Law:

    Globally, various trends and challenges are likely to arise as cyber law jurisprudence develops and evolves. The following are a few of the trends and challenges that have emerged in the last few years and need serious attention from governments across the world.

    Trends in Mobile Laws:

    The increased usage of mobile devices (cell phones, smartphones, personal digital assistants and other kinds of communication devices) in different parts of the world is making them an integral part of day-to-day existence. This has widened the scope of the mobile ecosystem and is likely to give rise to complex legal issues and challenges across the world.

    In countries like India and China, where the usage of mobile devices is exceedingly high, mobile law is emerging as a field, and complicated legal, regulatory and policy challenges concerning the usage of mobile and communication devices are coming to the forefront. These countries are expected to contribute substantially to the growth and development of mobile law.

    Increased usage of mobile devices is also likely to give rise to more mobile-related crime. Governments urgently need to adopt legislation addressing mobile law directly, as the existing laws, rules and regulations in different countries apply to mobile legal issues only in an indirect manner.

    With increased usage of mobile devices across the world, mobile law is expected to emerge as a distinct area of jurisprudence. Governments across the world need appropriate enabling frameworks to help protect and preserve the rule of law in the mobile ecosystem.

    The growth of mobile law jurisprudence brings its own legal issues. Mobile crime is likely to increase by leaps and bounds in the coming years. Mobile apps, which often hold an individual’s private and personal information, are likely to raise questions that will need careful consideration in order to ensure protection and privacy: as more and more apps emerge, the personal information of users needs to be protected.

    Social Media and Challenges:

    One of the biggest problems cyber law encounters is the development of jurisprudence around social networking. Increased adoption and usage of social media bring legal, policy and regulatory issues, and social media crime is increasingly gaining the attention of the relevant stakeholders. Misuse of information and other criminal or unwanted activity on social networking platforms raise significant legal challenges. Countries across the world need to ensure that the rule of law prevails on social media. The challenge for cyber lawmakers is not only to provide appropriate legislative and regulatory mechanisms, but also to provide effective remedies of redress for the victims of the various unauthorized, unwanted and criminal activities carried out in cyberspace and on social media.

    Cyber security and related issues:

    With the growth of cyber crime across the world, there is a need for appropriate legislative, regulatory and policy frameworks for cyber security. The International Conference on Cyber law, Cyber crime and Cyber security held in India in November 2014 highlighted significant issues affecting cyber security and produced various recommendations for international stakeholders. Countries will increasingly have to deal with attacks and intrusions into computer systems and networks launched from outside their territorial boundaries, which have the potential to prejudice their sovereignty, integrity and security. Nations across the world therefore need to amend their existing IT legislation so as to protect, preserve and promote cyber security in the use of computers and communication devices.

    Cloud Computing and Challenges:

    Another important challenge in cyberspace is developing legal responses to the complicated questions posed by cloud computing and virtualization. Cloud computing, now popular among corporations, raises issues such as data protection and data confidentiality. The relevant stakeholders, including lawmakers and governments across the globe, need to provide appropriate legal, policy and regulatory frameworks for the legal aspects of cloud computing.

    Spam Laws:

    In its early years, spam targeted computers; it now targets mobile phones as well. Email spam remains the most common form, but mobile phone spam and instant messaging spam also exist. Most countries have no anti-spam law at all, which has allowed spam to keep growing. Countries increasingly need regulatory and legal frameworks for spam, as many have already become hotspots for generating it.

    Conclusion:

    The above are some of the more significant cyber law trends that will bear on the growth and further evolution of the international cyber law ecosystem. The list is illustrative and by no means exhaustive. With the tremendous worldwide growth of information technology, and society ever more dependent on it, crimes involving computers, computer systems and electronic devices are bound to increase, and lawmakers will have to go the extra mile to maintain the rule of law in cyberspace. No one can predict what may happen in the future, given the pace of technological change. The duty lies not only with lawmakers and governments, but also with users at large, to ensure that technology develops safely and is used for legal and ethical purposes, to the utmost benefit of mankind.

    Author : Sonia Tulse

    Source : https://www.linkedin.com/pulse/emerging-global-trends-developments-cyber-law-growing-sonia-tulse

    Categorized in Internet Ethics

    The United States government has started asking a select number of foreign travelers about their social media accounts.

    The news came on Thursday via Politico and was confirmed to Mashable by a spokesperson for Customs and Border Protection (CBP) after the new procedure reportedly began earlier in the week. 

    The process dovetails with what has been expected for months and has been slammed by privacy advocates.

    Here's what we know about the basics of the program. 

    Whose information is the agency collecting?

    CBP is asking for social media info from anyone traveling to the U.S. through the Visa Waiver Program, which means they'd be able to travel about the country for 90 days of business or pleasure without a visa.

    The social media request is a part of the Electronic System for Travel Authorization (ESTA) form, which travelers looking for a visa waiver have to fill out before they get to the U.S. The form is used to assess "law enforcement or security risk," according to the CBP's website. 

    Travelers from 38 countries are eligible for a visa waiver, including those from the United Kingdom, Belgium, France and Hungary. 

    What kind of information are they looking for?

    The form reportedly asks for account names on prominent social networks such as Facebook, Twitter, YouTube, Instagram and LinkedIn, as well as networks many people don't think much about, such as GitHub and Google+.

    Is it mandatory?

    No one has to fill out their social media information to get into the country, and CBP has reportedly said it won't bar anyone from the U.S. just because that person didn't want to give their Twitter handle to the government.

    That said, privacy advocates have decried the policy, since many travelers are likely to fill it out just in case. A number of groups including the ACLU signed an open letter in October warning of the forthcoming changes.

    "Many of these travelers are likely to have business associates, family, and friends in the U.S., and many of them will communicate with their contacts in the U.S. over social media.

    This data collection could therefore vacuum up a significant amount of data about Americans’ associations, beliefs, religious and political leanings, and more, chilling First Amendment freedoms."

    Why do they want social media information?

    The U.S. has long tried to spot radicals and radical sympathizers online, especially anyone affiliated with the Islamic State (ISIS). 

    ISIS has long had a prolific and disparate social media presence, especially on Twitter, which they've used to spread messages and recruit those who might be hundreds or thousands of miles away from fighting in Syria and Iraq. 

    Initially, government officials wanted ISIS sympathizers to keep tweeting, because agencies were able to gather bits of information from those tweets. Then, however, the government got tired of how many ISIS members and sympathizers there were on Twitter and other platforms, so they ramped up pressure on those social networks to shut down such accounts. 

    For the government, this is the next step in working out which potential travelers to the U.S. have "connections" to ISIS. Of course, it's unclear what language the CBP would find alarming, and whether their alarm bells would be warranted. 

    How long will they hold onto the information?

    Assuming the social media information will be used just like the rest of the information on the ESTA form travelers have to fill out for a visa waiver, the Department of Homeland Security will keep it readily available for up to three years after it's been filled out. Then the information is "archived for 12 years," but still accessible to law enforcement and national security agencies.
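    The retention window described above (up to three years readily available, then archived for 12 more) can be sketched as a simple date calculation; this sketch approximates a year as 365 days.

```python
from datetime import date, timedelta

def retention_status(submitted: date, today: date) -> str:
    """Approximate status of an ESTA record under the stated retention policy."""
    if today <= submitted + timedelta(days=3 * 365):
        return "readily available"
    if today <= submitted + timedelta(days=15 * 365):
        return "archived"
    return "past retention"
```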

    Can they share the social media information with others?

    Homeland security and the CBP can share your social accounts with "appropriate federal, state, local, tribal and foreign governmental agencies or multilateral governmental organizations responsible for investigating or prosecuting the violations of, or for enforcing or implementing, a statute, rule, regulation, order or license, or where DHS believes information would assist enforcement of civil or criminal laws," according to the CBP website. 

    In other words, assuming the social information is treated like all the other information they collect from those with a visa waiver, homeland security could potentially share it with any law enforcement agency on the planet. They just have to "believe" the information might be of use in solving some type of legal violation.

    So once you type out your Twitter handle and send in the application, that information is hardly yours. 


    Author: COLIN DAILEDA
    Source: http://mashable.com/2016/12/23/us-government-social-media-travelers/?utm_cid=mash-prod-nav-sub-st#mBjkEomtpmqO

    Categorized in Internet Privacy

    The Pirate Bay, ExtraTorrent, Rarbg, 1337X, and YouTube-mp3 are among the websites cited on a new U.S. government hit list. The "Out-of-Cycle Review of Notorious Markets" by the United States Trade Representative (USTR) lists the popular websites among a larger group, which it cites as potentially promoting online piracy and other illegal activities.

    The Pirate Bay Of Symbolic Importance

    The report concedes that it does not accuse any of the listed sites of having violated the law, stating instead that it intends only to promote worldwide action against the listed websites where legally appropriate. The list was compiled in large part from input from industry sources, namely groups like the MPAA (Motion Picture Association of America) and the RIAA (Recording Industry Association of America), and cites The Pirate Bay as having special significance.

    "Despite enforcement actions around the world and drawn-out legal battles against its operators, The Pirate Bay is of symbolic importance as one of the longest-running and most vocal torrent sites for admittedly illegal downloads of movies, television, music, and other copyrighted content," the report concludes.

    The Pirate Bay is also the top site of its kind in terms of traffic. It held that position for years before being relegated to number two when competitor Kickass Torrents overtook it in popularity; Kickass Torrents was then shut down in July by the U.S. government.

    The report specifically cites the closure of Kickass Torrents, and the subsequent voluntary closure of the meta torrent search engine Torrentz, as positive developments since the 2015 report. Both sites have recently returned online, however, in new versions.

    Focus On Stream Ripping

    The report also dedicates a special section to stream ripping, describing it as "an emerging trend in digital copyright infringement that is increasingly causing substantial economic harm to music creators and undermining legitimate services."

    For the first time, a specific stream ripping site is included on the government list: YouTube-mp3.org, the largest such site in the world and the subject of a recent lawsuit by the major music labels. Since that lawsuit, as we have reported, the site's stream ripping functions have been disabled by Google, although the process itself has not been ruled illegal in the U.S.

    Sites Mentioned

    The complete list of websites mentioned in the report includes suspected counterfeit and e-commerce sites as well as file sharing and stream ripping domains.

    Author: James Geddes

    Source: http://www.techtimes.com/articles/189672/20161225/pirate-bay-extratorrent-rarbg-1337x-youtube-mp3-and-more-on-new-u-s-government-hit-list.htm

    Categorized in Others

    A new Google pilot program now allows publishers to describe CSV and other tabular datasets for scientific and government data.

    Google added a new structured data type named "Science datasets." This is new markup that Google could technically use for rich cards/rich snippets in its search results interface.

    Science data sets are “specialized repositories for datasets in many scientific domains: life sciences, earth sciences, material sciences, and more,” Google said. Google added, “Many governments maintain repositories of civic and government data,” which can be used for this as well.

    Here is the example Google gave:

    For example, consider this dataset that describes historical snow levels in the Northern Hemisphere. This page contains basic information about the data, like spatial coverage and units. Other pages on the site contain additional metadata: who produces the dataset, how to download it, and the license for using the data. With structured data markup, these pages can be more easily discovered by other scientists searching for climate data in that subject area.

    This specific schema is not something that Google will show in the search results today. Google said this is something they are experimenting with: “Dataset markup is available for you to experiment with before it’s released to general availability.” Google explained you should be able to see the “previews in the Structured Data Testing Tools,” but “you won’t, however, see your datasets appear in Search.”

    Here are the data sets that qualify for this markup:

    • a table or a CSV file with some data;
    • a file in a proprietary format that contains data;
    • a collection of files that together constitute some meaningful dataset;
    • a structured object with data in some other format that you might want to load into a special tool for processing;
    • images capturing the data; and
    • anything that looks like a dataset to you.
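    To get a feel for the markup, here is a hypothetical schema.org Dataset description serialized as JSON-LD from Python. The dataset name, URL, and license are invented (the snow-levels example echoes the one Google gave above), while the field names (`@type`, `distribution`, `encodingFormat`) follow schema.org's Dataset vocabulary.

```python
import json

# Hypothetical Dataset markup for a CSV file, modeled on schema.org/Dataset.
dataset = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Northern Hemisphere historical snow levels",  # invented example
    "description": "Monthly snow-cover measurements, 1966 to present.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/data/snow.csv",  # hypothetical URL
    },
}

# Embedded in a page as a JSON-LD script block:
markup = '<script type="application/ld+json">%s</script>' % json.dumps(dataset, indent=2)
```

    Pasting the generated block into the Structured Data Testing Tool is how the preview described above would be checked.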

    Aaron Bradley seemed to first spot this and said “with [a] pilot program, Google now allows publishers to describe CSV and other tabular datasets.”

    Source : http://searchengineland.com/

    Categorized in Search Engine

    Federal regulators just suffered a major setback in their efforts to help cities build Internet services that compete with large providers such as Comcast and Time Warner Cable.

    In a federal court decision Wednesday, the Federal Communications Commission was told that it doesn't have the power to block state laws that critics say hinder the spread of cheap, publicly run broadband service.

    The ruling marks a significant defeat for a federal agency that for the past several years has turned "competition" into an almost-literal mantra, with its chairman, Tom Wheeler, repeating the word at almost every possible opportunity.

    Under the court decision, large Internet providers will continue to enjoy certain benefits that insulate them from the threat of popular city-owned broadband operators such as the Electric Power Board of Chattanooga, Tenn., and the city of Wilson, N.C.

    Through EPB, residents of Chattanooga have access to download speeds of 1 Gbps at rates of about $70 a month. People outside EPB's service area have "repeatedly requested expansions" from the public utility, according to Wednesday's ruling from the U.S. Court of Appeals for the Sixth Circuit, but due to a geographic restriction put in place by the Tennessee state legislature, EPB is prohibited by law from reaching more customers.

    Last year, EPB and other so-called municipal broadband providers asked the FCC to intervene on their behalf, and the agency agreed. Invoking a part of its congressional charter that it said allowed it to act against the states, the FCC tried to neutralize those state laws. The states responded by suing the agency, claiming it had no right to interfere in the historical relationship between states and the cities within their jurisdiction. This week's ruling rolls back the federal government's attempt to intervene.

    Wheeler, a Democrat, said Wednesday that the outcome of the case "appears to halt the promise of jobs, investment and opportunity that community broadband has provided in Tennessee and North Carolina. In the end, I believe the Commission's decision to champion municipal efforts highlighted the benefits of competition and the need of communities to take their broadband futures in their own hands."

    Wheeler's opponents, including from within his own agency, said the outcome was an obvious one.

    "In my statement last year dissenting from the Commission's decision, I warned that the FCC lacked the power to preempt these Tennessee and North Carolina laws, and that doing so would usurp fundamental aspects of state sovereignty," said Republican FCC Commissioner Ajit Pai. "I am pleased that the Sixth Circuit vindicated these concerns."

    Berin Szoka, president of the right-leaning think tank TechFreedom, said the issue was "federalism 101."

    "The FCC was unconstitutionally interfering with the division of power between state legislatures and municipalities without a 'clear statement' from Congress authorizing it to do so."

    The court ruling represents a turning point for the legal tool the FCC tried to use as a weapon against Internet providers. First deployed in earnest by the FCC as an attempt to justify its net neutrality regulations on Internet providers, Wheeler again invoked Section 706 of the Communications Act to defend his moves against state limits on municipal broadband.

    Section 706 calls on the FCC to promote the timely deployment of broadband across the country. The state laws targeting EPB and Wilson, N.C., Wheeler argued, amounted to a legal roadblock to meeting that goal, so preempting those state laws was consistent with Congress' marching orders.

    In rebuking Wheeler's FCC, the Sixth Circuit has now effectively put some new constraints on what Section 706 may be invoked to accomplish. That is a significant step: Not long ago, policy analysts were saying that there were so few limits on the relatively vague language of Section 706 that the FCC could in theory use it to justify almost anything Internet-related. In effect, the court took what some analysts viewed as an unbounded grant of legal authority and imposed some bounds on it.

    There are signs, however, that municipal broadband proponents were anticipating Wednesday's outcome, and they are already moving to adapt. One approach: focus on improving cities' ability to lay fiber optic cables that any Internet provider can then lease. So far only one state, Nebraska, has banned this so-called "dark fiber" plan, said Christopher Mitchell, who directs the Institute for Local Self-Reliance's Community Broadband Networks Initiative.

    "We're pursuing strategies that are harder for the cable and telephone companies to defeat," said Mitchell.

    Source : http://www.chicagotribune.com/bluesky/technology/ct-fcc-broadband-competition-20160811-story.html

    Categorized in Internet Ethics
