
I was talking with my good friend Sheryl Sleeva last week about how, in the not-so-distant future, your refrigerator will re-order food for you automatically. Out of Honest Tea and So Delicious dairy-free ice cream? No worries, it arrives in less than three hours. At 4pm, your thermostat goes from 65 to 72 degrees. Your home, light bulbs, and appliances will all be getting ready to welcome you when you return from work.

Often referred to as the Internet of Things (IoT) or the Internet of Everything (IoE), our many interconnected devices create a massive online infrastructure many of us use throughout our daily lives. Our smartphones, tablets, laptops, and even refrigerators make up the Internet of Things. In fact, any device connected to the internet is part of the Internet of Things.

The Internet of Things has grown into a powerful shaping force for both our personal and professional lives. Throughout the past two decades, technological innovation has changed the face of humanity unlike any other period in history, and the sector continues to expand and re-shape how we see entertainment and business.

The History of the Internet of Things

Despite the growing impact of the Internet of Things, nearly 87 percent of individuals have never heard the term. Most people are connected to the Internet of Things in some way, but few have any idea as to how the system impacts their day-to-day life - both in the office and in the home. It seems like a concept for those tech-y people and engineers out on the West Coast.

The Internet of Things may sound like a new term, but in actuality it’s been around since 1974, when Automated Teller Machines (ATMs) took their place as the first member of the IoT. Since then, this network of interconnected devices has grown exponentially - in 2008, the number of devices in the IoT surpassed the number of people in the world.

 

What Makes Up the Internet of Things?

Currently, there are roughly 5 billion devices connected to the internet. This number is expected to reach 50 billion by the year 2020 and will include hundreds of new kinds of devices. Today’s most common devices include smartphones, tablets, and computers, but newly designed refrigerators, thermostats, and even light bulbs are beginning to take their place in the Internet of Things.

Self-Driving Cars

Among the most interesting of these newly developed, interconnected devices is the self-driving car. Experts predict we’ll see over 250 million cars connected to the internet by the year 2020, and many of these will be self-driving. While self-driving cars are still in their early stages of development, companies like Google are logging over 10,000 miles per week with their fleet of fully autonomous vehicles. It will take a number of years for self-driving cars to become a staple of garages everywhere, but as they become more mainstream and more readily available, they’ll become an important addition to the Internet of Things.

Wearable Technology

While the invention of self-driving cars is enough to excite the inner sci-fi enthusiast in all of us, the development of smaller devices will also have a major impact on our lives in the coming years. Over the past five years, wearable technology has grown into an established market, with smartwatches and FitBits becoming an important addition to many of our wardrobes. In fact, the wearable device market grew 223 percent in 2015, with millions of FitBits and Apple Watches being shipped shortly after their release. These devices are still fairly new to many members of today’s society, but they form an important sector of the Internet of Things. Wearable technology is expected to pave the way for new innovation, wherein devices like smartphones and tablets become a thing of the past. When I bought the Apple Watch last April, my friend Dan said, “Julie, now your devices have devices.” I was sitting at the table and my purse was in the other room. My iPhone rang, and on my watch I could see it was my daughter calling, so I answered from my wrist at the table.

Smart Clothing

As of now, wearable technology is confined to watches, FitBits, and Google Glass, but internet-connected clothing is currently in development. Experts expect roughly 10.2 million units of smart clothing will hit shelves in 2020, such as fitness-tracking shirts which are designed to monitor the wearer’s heart rate, body temperature, and other vital signs. Just ten years ago, smart clothing would have sounded like a work of fiction devised in an episode of Star Trek: The Next Generation, but as of 2013 over 140,000 units of smart clothing were shipped worldwide.

 

How the Internet of Things Impacts Our Shared Economy

The Internet of Things plays an important role in our personal lives by giving us new ways to learn, work, travel, exercise, communicate, and entertain ourselves, but it also plays a massive role in the world’s shared economy. As the internet grows, economies of the world become increasingly connected. This has a major impact on jobs and trade, as well as the world’s GDP - in fact, GE recently stated that the Internet of Things (or the “Industrial Internet,” in their terms) will add between $10 trillion and $15 trillion to the global GDP over the next two decades. In addition, the McKinsey Global Institute believes the Internet of Things will have a total economic impact of roughly $11 trillion by the year 2025. These estimations show that as people of the world become more connected because of the IoT, so do their economies.

The Internet of Things is only going to grow, and it’s going to have a major impact on the people and economies of the globe. Today’s devices make up only 0.1 percent of all new innovations expected to connect to the internet, meaning our lives are going to change significantly as this technological sector grows exponentially. The devices we currently use have had a significant impact on how we work and relax, but the devices we can expect to see throughout the next 20 years have immeasurable potential for the people and economies of the world. Technological innovation holds limitless opportunity for society and business worldwide, making the Internet of Things one of the most transformative forces in modern life.

Are you ready?

Julie Kantor is CEO of Twomentor, LLC (http://www.twomentor.com/), a management consulting firm that helps companies reach greater heights and better retain employees by building mentoring cultures. She will be chairing the Global Women in STEM Conference (http://www.womeninstemconference.com/) in Dubai this October for women from 12+ countries for the Meera Kaul Foundation.

Source: Huffington Post


There is an inverse relationship between public access to the Internet and the inability of governments and institutions to control information flow and hence state allegiance, ideology, public opinion, and policy formulation.

An increase in public access to the Internet results in an equivalent decrease in government and institutional power. Indeed, after September 11, 2001, Internet traffic statistics show that many millions of Americans connected to alternative news sources outside the continental United States. The information they consume can be, and often is, contrary to US government statements and US mainstream media reporting.

Information is a strategic resource vital to national security. US Government efforts to understand and engage key audiences to create, strengthen, or preserve conditions favorable for the advancement of USG interests, policies, and objectives through coordinated programs, plans, themes, messages, and products synchronized with the actions of all elements of national power: Diplomacy, Intelligence, Military, Economic, Finance, Law Enforcement, Information… The DOD must also support and participate in USG Strategic Communication activities to understand, inform, and influence relevant foreign audiences, including the DOD’s transition to and from hostilities, security, military forward presence, and stability operations. - US Army Unconventional Warfare Manual, 2008

In the early 1990s scores of studies were conducted by the US government, think tanks, consulting firms, defense contractors, futurists and military thinkers on the likely threats to the US military’s electronic communications systems. Those analyses often encompassed commercial networked systems.

For example, in May 1993 Security Measures for Wireless Communications was released under the auspices of the US National Communications System. Not long after, the same office published The Electronic Intrusion Threat to National Security and Emergency Preparedness in December 1994. During June 1995 a conference, co-sponsored by the Technical Marketing Society of America, was held. That event was titled Information Warfare: Addressing the Revolutionary New Paradigm for Modern Warfare.

Then as now, the most pernicious yet non-life-threatening cyber-attacks normally resulted in the theft of identities and, perhaps, intellectual property to which ‘experts’ would assign dollar values. Other network and computer assaults were visited upon databases containing personal information, producing headaches for the individuals who had to get new credit cards or revise identities. Embarrassment was the penalty for commercial organizations too cheap to invest in robust electronic security systems.

I Love New York

Information Operations have not (yet) resulted in large-scale, life-threatening fallout, but the 1977 New York City blackout provides some clues as to what might result from a successful cyber assault on a power grid. The blackout was initially triggered by bolts of lightning from a thunderstorm that repeatedly struck a Consolidated Edison facility. Redundancies built into the grid failed to function, and aging equipment and operator error led to the loss of power. Observers were already thinking about rudimentary network-centric themes even then, as The Trigger Effect, from the 1978 series Connections by James Burke, demonstrates.

It is difficult to say with any certainty if, over the last 23 years, competently secured US military networks have been successfully compromised by electronic intrusions by noted Information Warfare nations Russia, China and Israel seeking to steal classified, compartmented data or Intelligence, Surveillance and Reconnaissance technologies. That information is not likely to ever see the light of day, classified as it should be.

Certainly, US military websites and other government organizations have been hacked successfully over the years resulting in detrimental data spills and website defacement. But these do not rise to the level of national security threat; instead, they are clear cut cases of robbery and vandalism and should be viewed from a civilian law enforcement perspective.

 

Insiders Have Done More Damage to US National Security

It is worth noting that, to date, the most serious breaches of US national and military security have come at the hands of disillusioned US citizens like Jonathan Pollard (US Navy) and Robert Hanssen (FBI), who lifted paper documents from secure facilities, and Edward Snowden (NSA & Booz Allen), who downloaded electronic files to his storage devices.

As far as anyone knows, the electromagnetic waves emanating from a computer display have not been remotely manipulated by a state or non-state actor to kill or maim a person looking at the display. But transmitting retroviral software at some distance, or using an intelligence operative to insert destructive code via a flash drive, is known to have been successful in the US-led operation against Iran, as the Stuxnet case demonstrated.

Recent electronic intrusions and theft of data/images from the non-secured private accounts of former NATO commander General Philip Breedlove, USAF (Ret.); Anthony Weiner (sexting former politician from New York); or General Colin Powell, USA (Ret.) are generally served up by hackers and then picked up as news by US Big Media and Social Media. Humiliating as it is for the individuals involved, this nefarious CYBER-vandalism is not a national security matter, but it is used, gleefully, by any number of political interest groups and businesses for their own ends.

In like manner, the Sony, Democratic National Committee and Yahoo electronic break-ins, for example, are not national security incidents by any stretch of the imagination. Were they criminal actions and embarrassing for the victims? Yes. Did the information peddled by the hackers influence the public in some fashion? Sure. If sponsors of the hackers are from Russia, China, Iran, DPRK, Daesh, Israel or any other cyber-suspect, should they be exposed and brought to justice? Yes.

Should we nuke them or carpet bomb them? No.

It is problematic that politico-military strategists and tacticians, spurred on by any number of think tanks and CYBER hustlers in Washington, DC and New York (Atlantic Council, New York Times), have pushed the robbery of data/information and vandalism, or defacement of main-page websites, into a crisis that threatens the nation’s stability. More’s the pity, they have pasted CYBER over Information Warfare and meshed it with Asymmetric Warfare and Unconventional Warfare without recognizing the differences and nuances.

CYBER Influence Peddlers: Pest Control Needed

CYBER enthusiasts at the Atlantic Council and the New York Times see foreign news agencies like Xinhua/People’s Daily, Press TV, RT, Sputnik News and Hezbollah, which all broadcast news and information with their brand of spin, as demonic CYBER influence peddlers who are corrupting the American national consciousness by engaging in perception management techniques in an attempt to electronically captivate American audiences and turn them, well, to the “dark side.”

Iran’s Press TV Internet traffic statistics show it is ranked 26,598 in the world, with 28 percent of its visits coming from the United States. RT is ranked 446, with 18 percent of its visitors from the US. Sputnik News is ranked 1,410, with 8 percent of its visitors from the US. Xinhua is ranked about 25,000, with roughly 3 percent visiting from the US, and the People’s Daily does not even rank.

 

In this dark CYBER world, the unemployed and disaffected youth bulges (but why are they jobless and disenchanted?), social miscreants and American citizens will populate evil foreign websites and, after viewing assorted marketing/propaganda, they will buy Pepsi instead of Coke; whoops, I meant to say join the Islamic State or the Chinese Communist Party; move to Russia; or take in the Hezbollah website (no ranking on the Internet).

What this says, in part, is that those pushing CYBER fear have unwittingly indicted the United States and its people for idiocy. They seem to be saying that the American people have been ill served by the Constitution and the Bill of Rights, educational institutions and the government, and that the citizenry is but a collective of dolts incapable of sorting through information pushed out by non-Western media outlets. In the United States, the First Amendment makes sure that all points of view can be aired, on the premise that the American people have the ability to harvest information and distinguish between info-crap and ‘actionable’ info that can be turned into positive knowledge for civil good.

What is there to fear from comparatively small state-backed foreign news outlets? So they spin news or publish opinions contrary to the US narrative. So what? How is that any different from left- and right-wing publications in the United States that take down US civilian and military institutions? The American public can handle all of this. The CYBER Fear pushers further display their ignorance by assuming that the US national security machinery has not done enough to protect the enfeebled American public from opinions emanating from non-Western sources. The CYBER chicken-littles believe, too, that the US military and those in charge of America’s critical infrastructure do not understand the gravity of the CYBER Danger.

Nonsense.

Sleep Well

As the US Army’s Unconventional Warfare Manual, scores of US military CYBER commands, and doctrinal publications make clear, the US national security community has been pushing the CYBER matter hard. It has engaged in the less public-relations-friendly issues like mathematics and encryption, physically securing communications nodes and networks, creating honeypots to attract hackers, digital forensics (breaking into secure hard drives and software) and working with civilian counterparts, sometimes controversially, to secure communications networks.

For those worried about the US government’s ability to listen to adversaries, allies, the public, whomever, the Snowden document dumps show just how deep the National Security Agency’s wormhole goes. Either you’re of the mind that this grossly oversteps the US government’s authority, or maybe the nation is better off with the NSA playing God, or, like most, you just don’t care.

The US capabilities to tap transoceanic communications cables or satellite communications are well known.

 

The seriousness with which the US national security community views CYBER can be noted in this comment from a Defense Science Board study on CYBER Existentialism:

While the manifestation of a nuclear and cyber attack are very different, in the end, the existential impact to the United States is the same. Existential Cyber Attack is defined as an attack that is capable of causing sufficient wide scale damage for the government potentially to lose control of the country, including loss or damage to significant portions of military and critical infrastructure: power generation, communications, fuel and transportation, emergency services, financial services, etc.

And just as a quasi-authoritative US government body claims there is a real danger of an existential CYBER attack, the First Amendment allows a rapier-like response from a former government official musing on the fallout from the collapse of electronically connected networks, whether by CYBER attack, lightning bolts or human error.

Cyber Warfare, Cyber Security and massive Cyber Attacks are alarmist and vastly overrated. Look at what went on in Cyprus in 2013. What could trigger a run on the banks in the United States? Something as simple as shutting down all the ATMs for three days. The resulting panic and long bank lines could irrevocably shake confidence in banks and financial institutions, as Americans find out the significance of all the paperwork they signed when they established their bank accounts, fed by direct deposits. Since many in the country know what the country was like before personal computers and the Internet, they’ll do fine. Those people who have exchanged their hearts and brains for computer chips manufactured in Vietnam, and are tethered to Smart Phones and the Cloud, are due for a very rude awakening. You’ve heard of sleeper agents and moles, haven’t you? I wonder how many sleeper programs are in the millions of computer chips that are now in every single facet of our lives.

The original source of this article is Global Research


 

Is the Internet making us stupid? I’ve gone from being tired of this question to being more and more confused by it.

A recent poll says that about two-thirds of Americans think that the Net is indeed making us stupid. But I wonder what the percentage would be if the question were “Is the Internet making you stupid?”

We all can point to ways in which it is indeed making us stupider in some sense. I know that I check in with some sites too frequently, looking for distraction from work. Some of the sites are nothing to be ashamed of, such as Slate, Google News, Twitter, BoingBoing, some friends’ blogs, DailyKos. (Yeah, I’m a Democrat. Surprise!) Some are less dignified: Reddit, HuffingtonPost, BuzzFeed sometimes. And then there are the sites I don’t even want to mention in public because they reflect poorly on me. OK, fine: Yes, I’ve been to Gawker more than once.

Ready to be delved

We also all—or maybe just most of us—spend time bouncing around the Web as if it were a global pinball table. One link leads to another and then to another. Sometimes the topics are worth knowing about, and sometimes they’re just mental itches that spawn more itches every time we scratch. Often I can’t remember how I got there and sometimes I don’t even remember where I started and why. I suspect I’m not alone in that.

So, in those ways the Internet is making me stupider by wasting my time. Except that often those meandering excursions widen my world. So, maybe it’s not making me quite as stupid as it could.

But, when I look at what I do with the Internet, the idea that it’s overall making me stupider is ridiculous. Not only can I get answers to questions instantly, I can do actual research. Whom do the French credit with inventing the airplane and how do they view the Wright brothers? What was the mechanism that governed the speed at which a dial on an old phone returns to its initial state, and why was it necessary? Why did the Greeks think that history overall declines? Whatever you want to delve into, the Internet is ready to be delved.

 

Every level of explorer

Before the Internet, this was hard to do. With the Internet it’s so easy that we now complain about being distracted. But it’s by no means always pointless distraction. Getting easier answers encourages more questions. Those questions lead to new areas to explore where almost always you’ll find information written for every level of explorer. The threshold for discovery has been reduced to the movement of your clicking finger measured in millimeters. Your curiosity has been unleashed.

If you disagree, if you think the Internet is making you stupider, then stop using it. But of course you won’t. You with the Internet is much smarter than you without the Internet. Isn’t that true for just about all of us?

So, if it’s true that most of us act as if the Net is making us smarter, why do two-thirds of Americans think it’s making us dumber? The answer, I believe, comes from recognizing that people are really saying that the Internet is making them stupid. You know, them.

Them and us

Who is this them? At our worst, we define them by race, gender or other irrelevancies. But putting such prejudices aside, the them are people we feel can’t navigate the Internet without getting lost or fooled. For example, they are children who think it’s fine to copy and paste from the Net into their homework. So, yes, parents and teachers need not only to teach students to think critically but to enjoy doing so.

Still, the question that gets asked isn’t “Is the Net making students stupid?” The them is broader than that. I suspect that when we think of a stupid them, we’re imagining someone with whom we disagree deeply. The them denies science, distrusts intellectual inquiry and votes for people we consider to be crazy, stupid or both.

But the mystery still remains, because those people—the them—also believe that the Net is making them smarter. After all, that’s how they found all that important (but wrong) information about, say, climate change or vaccinations. So then just about everyone should be thinking that the Net makes them smarter. If we all think the Net’s making us smarter, why are we so ready to disparage human knowledge’s greatest gift to itself?

 

Might it be that our sense that the Net is making other people dumber masks the recognition that all belief is based on networks of believers, authorities and works? Even ours. The Net has made visible a weakness of all human knowledge: It lives within systems of coherence constantly buttressed by other mere mortals. Perhaps that exposure brings us to condemn the Net’s effect on everyone else but us.

Source: http://www.kmworld.com/

 


AROUND MIDNIGHT ONE Saturday in January, Sarah Jeong was on her couch, browsing Twitter, when she spontaneously wrote what she now bitterly refers to as “the tweet that launched a thousand ships.” The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. In what was meant to be a hyperbolic joke, she tweeted out a list of political caricatures, one of which called the typical Sanders fan a “vitriolic crypto-racist who spends 20 hours a day on the Internet yelling at women.”

The ill-advised late-night tweet was, Jeong admits, provocative and absurd—she even supported Sanders. But what happened next was the kind of backlash that’s all too familiar to women, minorities, and anyone who has a strong opinion online. By the time Jeong went to sleep, a swarm of Sanders supporters were calling her a neoliberal shill. By sunrise, a broader, darker wave of abuse had begun. She received nude photos and links to disturbing videos. One troll promised to “rip each one of [her] hairs out” and “twist her tits clear off.”

The attacks continued for weeks. “I was in crisis mode,” she recalls. So she did what many victims of mass harassment do: She gave up and let her abusers have the last word. Jeong made her tweets private, removing herself from the public conversation for a month. And she took a two-week unpaid leave from her job as a contributor to the tech news site Motherboard.

For years now, on Twitter and practically any other freewheeling public forum, the trolls have been out in force. Just in recent months: Trump’s anti-Semitic supporters mobbed Jewish public figures with menacing Holocaust “jokes.” Anonymous racists bullied African American comedian Leslie Jones off Twitter temporarily with pictures of apes and Photoshopped images of semen on her face. Guardian columnist Jessica Valenti quit the service after a horde of misogynist attackers resorted to rape threats against her 5-year-old daughter. “It’s too much,” she signed off. “I can’t live like this.” Feminist writer Sady Doyle says her experience of mass harassment has induced a kind of permanent self-censorship. “There are things I won’t allow myself to talk about,” she says. “Names I won’t allow myself to say.”

Jigsaw's Jared Cohen: “I want us to feel the responsibility of the burden we’re shouldering.”

Mass harassment online has proved so effective that it’s emerging as a weapon of repressive governments. In late 2014, Finnish journalist Jessikka Aro reported on Russia’s troll farms, where day laborers regurgitate messages that promote the government’s interests and inundate opponents with vitriol on every possible outlet, including Twitter and Facebook. In turn, she’s been barraged daily by bullies on social media, in the comments of news stories, and via email. They call her a liar, a “NATO skank,” even a drug dealer, after digging up a fine she received 12 years ago for possessing amphetamines. “They want to normalize hate speech, to create chaos and mistrust,” Aro says. “It’s just a way of making people disillusioned.”

All this abuse, in other words, has evolved into a form of censorship, driving people offline, silencing their voices. For years, victims have been calling on—clamoring for—the companies that created these platforms to help slay the monster they brought to life. But their solutions generally have amounted to a Sisyphean game of whack-a-troll.

Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. The software is designed to use machine learning to automatically spot the language of abuse and harassment—with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators. “I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” says Jigsaw founder and president Jared Cohen. “To do everything we can to level the playing field.”
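Jigsaw has not published Conversation AI's model or training data, so as a purely illustrative sketch of the underlying idea (learning the signals of abuse from labeled examples rather than matching a fixed keyword list), here is a toy Naive Bayes classifier; every training phrase below is invented:

```python
# Toy sketch of learned abuse detection, NOT Jigsaw's actual system:
# a classifier trained on labeled examples can score phrasings that a
# fixed keyword blacklist would miss entirely.
from collections import Counter
import math

# Invented toy training data; 1 = abusive, 0 = benign.
train = [
    ("i will find you and hurt you", 1),
    ("nobody wants you here leave now", 1),
    ("you deserve to be attacked", 1),
    ("get out or else you will regret it", 1),
    ("great article thanks for sharing", 0),
    ("i disagree with this take", 0),
    ("interesting point about the data", 0),
    ("see you at the meetup", 0),
]

word_counts = {0: Counter(), 1: Counter()}
doc_counts = Counter()
for text, label in train:
    doc_counts[label] += 1
    word_counts[label].update(text.split())

vocab = set(word_counts[0]) | set(word_counts[1])

def abuse_log_odds(text):
    """Naive Bayes log-odds that text is abusive, with add-one smoothing."""
    score = math.log(doc_counts[1] / doc_counts[0])
    for word in text.split():
        p_abusive = (word_counts[1][word] + 1) / (sum(word_counts[1].values()) + len(vocab))
        p_benign = (word_counts[0][word] + 1) / (sum(word_counts[0].values()) + len(vocab))
        score += math.log(p_abusive / p_benign)
    return score

# Neither test phrase appears verbatim in the training set.
print(abuse_log_odds("leave now or you will regret it"))     # positive: flagged
print(abuse_log_odds("thanks for the interesting article"))  # negative: benign
```

A production system like the one Jigsaw describes would use far larger labeled corpora and a more sophisticated model, but the principle is the same: the score comes from learned statistics, not a hand-maintained word list.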

Jigsaw is applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.

Conversation AI represents just one of Jigsaw’s wildly ambitious projects. The New York–based think tank and tech incubator aims to build products that use Google’s massive infrastructure and engineering muscle not to advance the best possibilities of the Internet but to fix the worst of it: surveillance, extremist indoctrination, censorship. The group sees its work, in part, as taking on the most intractable jobs in Google’s larger mission to make the world’s information “universally accessible and useful.”

Cohen founded Jigsaw, which now has about 50 staffers (almost half are engineers), after a brief high-profile and controversial career in the US State Department, where he worked to focus American diplomacy on the Internet like never before. One of the moon-shot goals he’s set for Jigsaw is to end censorship within a decade, whether it comes in the form of politically motivated cyberattacks on opposition websites or government strangleholds on Internet service providers. And if that task isn’t daunting enough, Jigsaw is about to unleash Conversation AI on the murky challenge of harassment, where the only way to protect some of the web’s most repressed voices may be to selectively shut up others. If it can find a path through that free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.

 

But slowly, the group’s lofty challenges began to attract engineers, some joining from other parts of Google after volunteering for Cohen’s team. One of their first creations was a tool called uProxy that allows anyone whose Internet access is censored to bounce their traffic through a friend’s connection outside the firewall; it’s now used in more than 100 countries. Another tool, a Chrome add-on called Password Alert, aims to block phishing by warning people when they’re retyping their Gmail password into a malicious look-alike site; the company developed it for Syrian activists targeted by government-friendly hackers, but when it proved effective, it was rolled out to all of Google’s users.
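Password Alert itself is a Chrome extension whose internals aren't detailed here, but the core trick such a tool relies on can be sketched in a few lines: store only a salted hash of the real password, never the password itself, and raise a warning when text typed into another site hashes to the same fingerprint. The function names below are hypothetical:

```python
# Illustrative sketch of a password-reuse warning, NOT Password Alert's
# actual implementation: only a salted hash of the password is kept, so
# the stored fingerprint cannot be reversed into the password itself.
import hashlib
import hmac
import os

def make_fingerprint(password: str, salt: bytes = None):
    """Derive a salted fingerprint of the real password (stored once)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def typed_on_other_site(typed: str, salt: bytes, digest: bytes) -> bool:
    """True if text typed elsewhere matches the saved fingerprint."""
    candidate = hashlib.pbkdf2_hmac("sha256", typed.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking match position via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = make_fingerprint("correct horse battery staple")
print(typed_on_other_site("correct horse battery staple", salt, digest))  # True: warn the user
print(typed_on_other_site("some other text", salt, digest))               # False: ignore
```

The design choice worth noting is that the check works without the tool ever holding the plaintext password after setup, which matters for an extension that watches keystrokes across untrusted pages.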

  

“We are not going to be one of those groups that just imagines what vulnerable populations are experiencing. We’re going to get to know our users.”

In February, the group was renamed Jigsaw to reflect its focus on building practical products. A program called Montage lets war correspondents and nonprofits crowdsource the analysis of YouTube videos to track conflicts and gather evidence of human rights violations. Another free service called Project Shield uses Google’s servers to absorb government-sponsored cyberattacks intended to take down the websites of media, election-monitoring, and human rights organizations. And an initiative, aimed at deradicalizing ISIS recruits, identifies would-be jihadis based on their search terms, then shows them ads redirecting them to videos by former extremists who explain the downsides of joining an ultraviolent, apocalyptic cult. In a pilot project, the anti-ISIS ads were so effective that they were in some cases two to three times more likely to be clicked than typical search advertising campaigns.

 

The common thread that binds these projects, Cohen says, is a focus on what he calls “vulnerable populations.” To that end, he gives new hires an assignment: Draw a scrap of paper from a baseball cap filled with the names of the world’s most troubled or repressive countries; track down someone under threat there and talk to them about their life online. Then present their stories to other Jigsaw employees.

At one recent meeting, Cohen leans over a conference table as 15 or so Jigsaw recruits—engineers, designers, and foreign policy wonks—prepare to report back from the dark corners of the Internet. “We are not going to be one of those groups that sits in our offices and imagines what vulnerable populations around the world are experiencing,” Cohen says. “We’re going to get to know our users.” He speaks in a fast-forward, geeky patter that contrasts with his blue-eyed, broad-shouldered good looks, like a politician disguised as a Silicon Valley executive or vice versa. “Every single day, I want us to feel the burden of the responsibility we’re shouldering.”


We hear about an Albanian LGBT activist who tries to hide his identity on Facebook despite its real-names-only policy, an administrator for a Libyan youth group wary of government infiltrators, a defector’s memories from the digital black hole of North Korea. Many of the T-shirt-and-sandal-wearing Googlers in the room will later be sent to some of those far-flung places to meet their contacts face-to-face.

“They’ll hear stories about people being tortured for their passwords or of state-sponsored cyberbullying,” Cohen tells me later. The purpose of these field trips isn’t simply to get feedback for future products, he says. They’re about creating personal investment in otherwise distant, invisible problems—a sense of investment Cohen says he himself gained in his twenties during his four-year stint in the State Department, and before that during extensive travel in the Middle East and Africa as a student.

Cohen reports directly to Alphabet’s top execs, but in practice, Jigsaw functions as Google’s blue-sky, human-rights-focused skunkworks. At the group’s launch, Schmidt declared its audacious mission to be “tackling the world’s toughest geopolitical problems” and listed some of the challenges within its remit: “money laundering, organized crime, police brutality, human trafficking, and terrorism.” In an interview in Google’s New York office, Schmidt (now chair of Alphabet) summarized them to me as the “problems that bedevil humanity involving information.”

Jigsaw, in other words, has become Google’s Internet justice league, and it represents the notion that the company is no longer content with merely not being evil. It wants—as difficult and even ethically fraught as the impulse may be—to do good.

 
Yasmin Green, Jigsaw’s head of R&D.

IN SEPTEMBER OF 2015, Yasmin Green, then head of operations and strategy for Google Ideas, the working group that would become Jigsaw, invited 10 women who had been harassment victims to come to the office and discuss their experiences. Some of them had been targeted by members of the antifeminist Gamergate movement. Game developer Zoë Quinn had been threatened repeatedly with rape, and her attackers had dug up and distributed old nude photos of her. Another visitor, Anita Sarkeesian, had moved out of her home temporarily because of numerous death threats.

At the end of the session, Green and a few other Google employees took a photo with the women and posted it to the company’s Twitter account. Almost immediately, the Gamergate trolls turned their ire against Google itself. Over the next 48 hours, tens of thousands of comments on Reddit and Twitter demanded the Googlers be fired for enabling “feminazis.”

“It’s like you walk into Madison Square Garden and you have 50,000 people saying you suck, you’re horrible, die,” Green says. “If you really believe that’s what the universe thinks about you, you certainly shut up. And you might just take your own life.”

To combat trolling, services like Reddit, YouTube, and Facebook have for years depended on users to flag abuse for review by overworked staffers or an offshore workforce of content moderators in countries like the Philippines. The task is expensive and can be scarring for the employees who spend days on end reviewing loathsome content—yet often it’s still not enough to keep up with the real-time flood of filth. Twitter recently introduced new filters designed to keep users from seeing unwanted tweets, but it’s not yet clear whether the move will tame determined trolls.

The meeting with the Gamergate victims was the genesis for another approach. Lucas Dixon, a wide-eyed Scot with a doctorate in machine learning, and product manager CJ Adams wondered: Could an abuse-detecting AI clean up online conversations by detecting toxic language—with all its idioms and ambiguities—as reliably as humans?


To create a viable tool, Jigsaw first needed to teach its algorithm to tell the difference between harmless banter and harassment. For that, it would need a massive number of examples. So the group partnered with The New York Times, which gave Jigsaw’s engineers 17 million comments from Times stories, along with data about which of those comments were flagged as inappropriate by moderators. Jigsaw also worked with the Wikimedia Foundation to parse 130,000 snippets of discussion around Wikipedia pages. It showed those text strings to panels of 10 people recruited randomly from the CrowdFlower crowdsourcing service and asked whether they found each snippet to represent a “personal attack” or “harassment.” Jigsaw then fed the massive corpus of online conversation and human evaluations into Google’s open source machine learning software, TensorFlow.

Machine learning, a branch of computer science that Google uses to continually improve everything from Google Translate to its core search engine, works something like human learning. Instead of programming an algorithm, you teach it with examples. Show a toddler enough shapes identified as a cat and eventually she can recognize a cat. Show millions of vile Internet comments to Google’s self-improving artificial intelligence engine and it can recognize a troll.
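The learn-from-examples idea can be made concrete with a toy classifier. Jigsaw's real pipeline used TensorFlow and millions of labeled comments; this deliberately tiny from-scratch bag-of-words sketch, with an invented dataset, only illustrates the principle: the model's notion of an "attack" comes entirely from labeled examples, not hand-written rules.

```python
from collections import Counter

# Toy bag-of-words classifier trained on a handful of invented,
# hand-labeled comments (1 = attack, 0 = harmless). Illustrative only.
labeled_comments = [
    ("you are an idiot", 1),
    ("shut up you fool", 1),
    ("what a stupid idea", 1),
    ("thanks for the helpful reply", 0),
    ("great point, I agree", 0),
    ("interesting article, well written", 0),
]

attack_words, clean_words = Counter(), Counter()
for text, label in labeled_comments:
    (attack_words if label else clean_words).update(text.split())

def attack_score(text: str) -> float:
    """Crude score in [0, 1]: how strongly the words resemble the attack class."""
    p_attack = p_clean = 1.0
    for word in text.split():
        # Crude add-one smoothing so unseen words don't zero out a class.
        p_attack *= (attack_words[word] + 1) / (sum(attack_words.values()) + 1000)
        p_clean *= (clean_words[word] + 1) / (sum(clean_words.values()) + 1000)
    return p_attack / (p_attack + p_clean)

print(attack_score("you stupid fool") > attack_score("helpful and interesting"))
```

Swap the six example comments for millions of moderator-flagged ones and the hand-counting for a neural network, and you have the rough shape of the approach the article describes.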

In fact, by some measures Jigsaw has now trained Conversation AI to spot toxic language with impressive accuracy. Feed a string of text into its Wikipedia harassment-detection engine and it can, with what Google describes as more than 92 percent certainty and a 10 percent false-positive rate, come up with a judgment that matches a human test panel as to whether that line represents an attack. For now the tool looks only at the content of that single string of text. But Green says Jigsaw has also looked into detecting methods of mass harassment based on the volume of messages and other long-term patterns.

Wikipedia and the Times will be the first to try out Google’s automated harassment detector on comment threads and article discussion pages. Wikimedia is still considering exactly how it will use the tool, while the Times plans to make Conversation AI the first pass of its website’s comments, blocking any abuse it detects until it can be moderated by a human. Jigsaw will also make its work open source, letting any web forum or social media platform adopt it to automatically flag insults, scold harassers, or even auto-delete toxic language, preventing an intended harassment victim from ever seeing the offending comment. The hope is that “anyone can take these models and run with them,” says Adams, who helped lead the machine learning project.
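A first-pass setup like the one the Times describes might look like the following sketch, where a high-scoring comment is held for human review rather than auto-deleted. `score_comment` is a hypothetical stand-in for the real model, and the threshold and word list are purely illustrative.

```python
# Hedged sketch of a moderation first pass. In a real deployment,
# score_comment would call the trained model; here it is a placeholder
# keyword check so the example is self-contained.

ATTACK_THRESHOLD = 60  # illustrative cutoff on a 0-100 attack scale

def score_comment(text: str) -> int:
    """Placeholder scorer standing in for the ML model."""
    insults = ("bitch", "idiot", "moron")
    return 90 if any(word in text.lower() for word in insults) else 10

def moderate(text: str) -> str:
    """Hold high-scoring comments for human review instead of publishing."""
    if score_comment(text) >= ATTACK_THRESHOLD:
        return "held for moderation"
    return "published"

print(moderate("You are such a bitch"))   # held for moderation
print(moderate("Nice article, thanks!"))  # published
```

The design choice worth noting is that the score gates a human queue rather than triggering deletion, which is how the article says the Times intends to use it.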


What’s more, some limited evidence suggests that this kind of quick detection can actually help to tame trolling. Conversation AI was inspired in part by an experiment undertaken by Riot Games, the videogame company behind League of Legends, the world’s biggest multiplayer game, with 67 million players. Starting in late 2012, Riot began using machine learning to analyze the in-game conversations that led to players being banned. It used the resulting algorithm to show players in real time when they had made sexist or abusive remarks. When players saw immediate automated warnings, 92 percent of them changed their behavior for the better, according to a report in the science journal Nature.

My own hands-on test of Conversation AI comes one summer afternoon in Jigsaw’s office, when the group’s engineers show me a prototype and invite me to come up with a sample of verbal filth for it to analyze. Wincing, I suggest the first ambiguously abusive and misogynist phrase that comes to mind: “What’s up, bitch?” Adams types in the sentence and clicks Score. Conversation AI instantly rates it a 63 out of 100 on the attack scale. Then, for contrast, Adams shows me the results of a more clearly vicious phrase: “You are such a bitch.” It rates a 96.

In fact, Conversation AI’s algorithm goes on to make impressively subtle distinctions. Pluralizing my trashy greeting to “What’s up bitches?” drops the attack score to 45. Add a smiling emoji and it falls to 39. So far, so good.

But later, after I’ve left Google’s office, I open the Conversation AI prototype in the privacy of my apartment and try out the worst phrase that had haunted Sarah Jeong: “I’m going to rip each one of her hairs out and twist her tits clear off.” It rates an attack score of 10, a glaring oversight. Swapping out “her” for “your” boosts it to a 62. Conversation AI likely hasn’t yet been taught that threats don’t have to be addressed directly at a victim to have their intended effect. The algorithm, it seems, still has some lessons to learn.

FOR A TECH EXECUTIVE taking on would-be terrorists, state-sponsored trolls, and tyrannical surveillance regimes, Jigsaw’s creator has a surprisingly sunny outlook on the battle between the people who use the Internet and the authorities that seek to control them. “I have a fundamental belief that technology empowers people,” Jared Cohen says. Between us sits a coffee table covered in souvenirs from his travels: a clay prayer coin from Iraq, a plastic-wrapped nut bar from Syria, a packet of North Korean cigarettes. “It’s hard for me to imagine a world where there’s not a continued cat-and-mouse game. But over time, the mouse might just become bigger than the cat.”

 

JIGSAW’S PROJECTS

Project Shield

Montage

Password Alert

The Redirect Method

Conversation AI

Digital Attack Map

When Cohen became the youngest person ever to join the State Department’s Policy Planning Staff in 2006, he brought with him a notion that he’d formed from seeing digitally shrewd Middle Eastern youths flout systems of control: that the Internet could be a force for political empowerment and even upheaval. And as Facebook, then YouTube and Twitter, started to evolve into tools of protest and even revolution, that theory earned him access to officials far above his pay grade—all the way up to secretaries of state Condoleezza Rice and later Hillary Clinton. Rice would describe Cohen in her memoirs as an “inspired” appointment. Former Policy Planning director Anne-Marie Slaughter, his boss under Clinton, remembers him as “ferociously intelligent.”

Many of his ideas had a digital twist. After visiting Afghanistan, Cohen helped create a cell-phone-based payment system for local police, a move that allowed officers to speed up cash transfers to remote family members. And in June of 2009, when Twitter had scheduled downtime for maintenance during a massive Iranian protest against hardliner president Mahmoud Ahmadinejad, Cohen emailed founder Jack Dorsey and asked him to keep the service online. The unauthorized move, which violated the Obama administration’s noninterference policy with Iran, nearly cost Cohen his job. But when Clinton backed Cohen, it signaled a shift in the State Department’s relationship with both Iran and Silicon Valley.

Around the same time, Cohen began calling up tech CEOs and inviting them on tech delegation trips, or “techdels”—conceived to somehow inspire them to build products that could help people in repressed corners of the world. He asked Google’s Schmidt to visit Iraq, a trip that sparked the relationship that a year later would result in Schmidt’s invitation to Cohen to create Google Ideas. But it was Cohen’s email to Twitter during the Iran protests that most impressed Schmidt. “He wasn’t following a playbook,” Schmidt tells me. “He was inventing the playbook.”

The story Cohen’s critics focus on, however, is his involvement in a notorious piece of software called Haystack, intended to provide online anonymity and circumvent censorship. They say Cohen helped to hype the tool in early 2010 as a potential boon to Iranian dissidents. After the US government fast-tracked it for approval, however, a security researcher revealed it had egregious vulnerabilities that put any dissident who used it in grave danger of detection. Today, Cohen disclaims any responsibility for Haystack, but two former colleagues say he championed the project. His former boss Slaughter describes his time in government more diplomatically: “At State there was a mismatch between the scale of Jared’s ideas and the tools the department had to deliver on them,” she says. “Jigsaw is a much better match.”

But inserting Google into thorny geopolitical problems has led to new questions about the role of a multinational corporation. Some have accused the group of trying to monetize the sensitive issues they’re taking on; the Electronic Frontier Foundation’s director of international free expression, Jillian York, calls its work “a little bit imperialistic.” For all its altruistic talk, she points out, Jigsaw is part of a for-profit entity. And on that point, Schmidt is clear: Alphabet hopes to someday make money from Jigsaw’s work. “The easiest way to understand it is, better connectivity, better information access, we make more money,” he explains to me. He draws an analogy to the company’s efforts to lay fiber in some developing countries. “Why would we try to wire up Africa?” he asks. “Because eventually there will be advertising markets there.”

“We’re not a government,” Eric Schmidt says slowly and carefully. “We’re not engaged in regime change. We don’t do that stuff.”

Throwing out well-intentioned speech that resembles harassment could be a blow to exactly the open civil society Jigsaw has vowed to protect. When I ask Conversation AI’s inventors about its potential for collateral damage, the engineers argue that its false positive rate will improve over time as the software continues to train itself. But on the question of how its judgments will be enforced, they say that’s up to whoever uses the tool. “We want to let communities have the discussions they want to have,” says Conversation AI cocreator Lucas Dixon. And if that favors a sanitized Internet over a freewheeling one? Better to err on the side of civility. “There are already plenty of nasty places on the Internet. What we can do is create places where people can have better conversations.”

ON A MUGGY MORNING in June, I join Jared Cohen at one of his favorite spots in New York: the Soldiers’ and Sailors’ Monument, an empty, expansive, tomblike dome of worn marble in sleepy Riverside Park. When Cohen arrives, he tells me the place reminds him of the quiet ruins he liked to roam during his travels in rural Syria.

 

Our meeting is in part to air the criticisms I’ve heard of Conversation AI. But when I mention the possibility of false positives actually censoring speech, he answers with surprising humility. “We’ve been asking these exact questions,” he says. And they apply not just to Conversation AI but to everything Jigsaw builds, he says. “What’s the most dangerous use case for this? Are there risks we haven’t sufficiently stress-tested?”

Jigsaw runs all of its projects by groups of beta testers and asks for input from the same groups it intends to recruit as users, he says. But Cohen admits he never knows if they’re getting enough feedback, or the right kind. Conversation AI in particular, he says, remains an experiment. “When you’re looking at curbing online harassment and at free expression, there’s a tension between the two,” he acknowledges, a far more measured response than what I’d heard from Conversation AI’s developers. “We don’t claim to have all the answers.”

And if that experiment fails, and the tool ends up harming the exact free speech it’s trying to protect, would Jigsaw kill it? “Could be,” Cohen answers without hesitation.

 

I start to ask another question, but Cohen interrupts, unwilling to drop the notion that Jigsaw’s tools may have unintended consequences. He wants to talk about the people he met while wandering through the Middle East’s most repressive countries, the friends who hosted him and served as his guide, seemingly out of sheer curiosity and hospitality.

It wasn’t until after Cohen returned to the US that he realized how dangerous it had been for them to help him or even to be seen with him, a Jewish American during a peak of anti-Americanism. “My very presence could have put them at risk,” he says, with what sounds like genuine throat-tightening emotion. “To the extent I have a guilt I act on, it’s that. I never want to make that mistake again.”

Cohen still sends some of those friends, particularly ones in the war-torn orbit of Syria and ISIS, an encrypted message almost daily, simply to confirm that they’re alive and well. It’s an exercise, like the one he assigns to new Jigsaw hires but designed as maintenance for his own conscience: a daily check-in to assure himself his interventions in the world have left it better than it was before.

“Ten years from now I’ll look back at where my head is at today too,” he says. “What I got right and what I got wrong.” He hopes he’ll have done good.

Source : https://www.wired.com


North Korea is very secretive when it comes to letting the world know about most things that take place in the hermit state, so it was a huge surprise when a U.S.-based engineer was able to gain access to all the internet domains in the country on Tuesday night.

As it turns out, there are only 28 registered domains in North Korea.

CNBC reports that Matthew Bryant, a United States researcher, had set up an automated request asking North Korea’s main Domain Name System (DNS) server to hand over a list of all its registered domains. The server is configured to reject such requests, so Bryant’s query usually failed. But on Tuesday, possibly due to a technical glitch, the main server answered and revealed to him a list of all the domain names under the .kp top-level domain.

Soon after, the researcher dumped the data he had accessed on Github, a site that hosts computer code. This is the first time that the outside world has been able to get a peek into North Korea’s intensely secretive internet system, and while experts were already familiar with some of the websites that form it, not many knew about the extent of North Korea’s online presence.
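The kind of request Bryant automated is a standard DNS zone transfer (AXFR), which asks a nameserver for every record in a zone. As a hedged sketch, the code below builds the raw AXFR query packet for the `kp` zone using only the DNS wire format; a real client would send it over TCP (with a two-byte length prefix) to the zone's nameserver, and a correctly configured server would refuse to answer.

```python
import struct

# Build a DNS AXFR (full zone transfer) query packet for a zone.
# This constructs the wire-format bytes only; it sends nothing.

def build_axfr_query(zone: str, query_id: int = 0x1234) -> bytes:
    # Header: ID, flags (standard query), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", query_id, 0x0000, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in zone.split(".")
    ) + b"\x00"
    # QTYPE 252 = AXFR, QCLASS 1 = IN.
    return header + qname + struct.pack(">HH", 252, 1)

packet = build_axfr_query("kp")
print(len(packet))  # 20 bytes: 12-byte header + 4-byte "kp" qname + 4-byte question
```

In practice researchers use existing tools (e.g. `dig kp AXFR @<nameserver>` or the dnspython library) rather than hand-built packets; the point of the sketch is only that the request itself is ordinary and automatable, which is why Bryant could retry it on a schedule.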

 

“When North Korea brings up a new website they never publicize it. Either someone finds it by accident or it might show up in a search engine,” Martyn Williams, who runs the website North Korea Tech from San Francisco, told the BBC.

 

“We knew about most of these, but weren’t sure what else existed.”

North Korea's Supreme Leader's extensive list of activities is the content of one site in North Korea.

So what really exists on North Korea’s highly secretive internet network?

As can be expected, a number of these sites are dedicated to publishing state propaganda or serve as the online arms of official ministerial bodies, such as the committee for cultural relations and the maritime agency, as well as official state news organisations like the Pyongyang Broadcasting Service. But there are other sites where one can learn more about the cuisine and films being produced in the hermit state. The food site, cooks.org.kp, is filled with pictures of “Korea’s famous recipes,” while the film site, korfilm.com.kp, highlights the North Korean film industry. One current section on KorFilm, for example, focuses on the ongoing Pyongyang International Film Festival, where North Korean citizens can apparently watch “art films, documentaries and animated movies.”

There is also a website called Friend, which analysts believe might be North Korea’s version of a social networking site. But most of the sites on the list of 28 domain names take a long time to load, while some are completely inaccessible. One of the sites on the list, that of the Korean Central News Agency — the state-run propaganda outlet — was the only website of the 28 accessible to users outside North Korea even before Tuesday.

Most of the sites are unsophisticated and not as slick as their Western counterparts.

“They don’t try to ape Western media. When you go on the website it’s obvious it’s news from North Korea. It’s not dressed up to look like a slick international media outlet,” Williams told the BBC.

North Korea's newspaper

Among the other sites on the list are an insurance site, an air travel site, a charity site for the elderly and children, and a couple of tourism and educational sites. Although the websites seem conceived to represent the various facets of North Korea, some of them are dedicated to nothing but the cult of personality around the leader Kim Jong-un and his family.

 

The site for the main newspaper Rodong Sinmun, rodong.rep.kp, even has a section dedicated to Kim Jong-un’s daily activities.

Despite the findings, Western media organizations might be disappointed that none of the sites contain any information about North Korea’s mysterious intranet, which could, if accessed, be much more revealing about the inner workings of the hermit state.

A full list of North Korean websites, with screenshots, has been published online.

Source : http://www.inquisitr.com/


The proposals aren’t just bad for Google, but for everyone.

There’s a lot to like about the copyright proposals that the European Commission unveiled Wednesday—easier access to video across the EU’s internal borders, more copyright exceptions for researchers, and more access to books for blind people.

However, two elements in particular could be disastrous if carried out as proposed. One would make it more difficult for small news publications to be able to challenge legacy media giants, and the other would threaten the existence of user-generated content platforms.

In a way, it’s good that digital commissioner Günther Oettinger has finally laid his cards on the table. But the battles that begin now will be epic.

The first contentious proposal is the introduction of so-called neighboring rights for press publishers, also known as ancillary copyright.

The move sounds pretty obscure, but isn’t. Much as it is possible for someone to get rewarded for performing a work—as opposed to writing it, which involves copyright—publishers would get to command fees for the stuff their writers write, based on their own (new) rights rather than the copyright held by the journalist.

 

In effect, this would allow publishers to try wrangling fees out of others for any “use of the work”—a dangerously vague term in this context. What’s more, they’d get to do so for a whopping 20 years after publication.

This idea has been tried before in Germany and in Spain, where large publishers used new laws to try getting Google News to pay for using snippets of their text and thumbnails of their images.


Both times the attempts failed. In Germany, Google stopped reproducing snippets of text in Google News, and the publishers granted the firm a free (albeit temporary) licence once they saw how their traffic suffered. In Spain, the publishers had no such leeway and Google News ended up pulling out of the country, hammering the industry’s income in the process.

The Commission’s new proposals aren’t as suicidally rigid as what went down in Spain, but they’re also much vaguer than the German version. As currently phrased, they could allow press publishers to try charging for the reproduction of headlines, or even the mere indexing of their articles.

It’s hard to know whether the large press publishers who lobbied so hard for these measures really think Google will ultimately pay up, or whether their real goal is what happens when it refuses.

Because Google surely won’t pay for indexing their content or reproducing snippets of their text. It can’t—that would be the beginning of the end of its entire search engine business model, which can no longer scale if its links come with a cost.

If this law goes through and demands for licensing fees are rigidly enforced, Google will almost certainly pull Google News out of the entire EU.

Remember that it doesn’t run ads on Google News. It does run ads on its regular search engine, of course, and news results make that a fuller product, but it would have no reason to maintain Google News in Europe if it became a serious financial liability.

And if Google News exits the EU, the biggest victims will be the smaller publications, as happened in Spain. They rely on Google News and other aggregators because that’s how people find their articles, visit their sites, and view and click on their ads.

More established media outlets have much more brand recognition and traditional marketing clout, particularly in linguistically semi-closed markets such as Germany and France. They have everything to gain from reversing the Internet’s opening up of the media market; their rivals, and the reading public, have everything to lose. No wonder they’ve been pushing Oettinger to bring in ancillary copyright.

 

The other major flaw in the new proposals would also be bad news for smaller players, and for the rights of the public.

Under the e-Commerce Directive of 2000, the operators of user-generated content platforms—YouTube and SoundCloud and the like—are not liable for the content their users upload, as long as they take down the illegal stuff once someone flags it. That directive also explicitly says there can be no laws forcing platforms to generally monitor the content they manage.

Despite having consistently denied it is going to change these rules, the Commission is now proposing exactly that. In its new copyright directive proposal, it wants to force all user-generated content platforms to use “effective content recognition technologies,” which sounds an awful lot like generally monitoring content.

Of course, YouTube already has its Content ID technology for identifying and purging illegally uploaded films and so on, but what about new platforms? It cost Google more than $60 million to develop and implement Content ID, and it has to constantly tweak it to counteract those users who figure out ways to get around it.

You know how people upload movies to YouTube that are re-filmed from a funny angle, or that cut off the edges of the screen? That’s an attempt to circumvent Content ID and fighting it costs money, as does handling disputes when the system incorrectly flags videos as infringing copyright.
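Why do cropped or re-filmed uploads defeat straightforward matching? Because an exact fingerprint changes completely when even a few edge pixels change. The toy example below shows that failure mode; it is illustrative only, and Content ID's actual matching is far more sophisticated than hashing raw pixels.

```python
import hashlib

# Naive content fingerprinting: hash a frame's exact pixel values.
# Cropping away one column of edge pixels (as in the "cut off the
# edges of the screen" trick) yields a completely different hash.

def naive_fingerprint(frame: list[list[int]]) -> str:
    """Hash a frame's exact brightness values."""
    flat = ",".join(str(px) for row in frame for px in row)
    return hashlib.sha256(flat.encode()).hexdigest()

original = [[10, 20, 30, 40],
            [50, 60, 70, 80],
            [90, 100, 110, 120]]

# The same frame with the rightmost column of pixels cropped away.
cropped = [row[:-1] for row in original]

print(naive_fingerprint(original) == naive_fingerprint(cropped))  # False
```

A robust system has to match content that survives cropping, re-encoding, and re-filming, which is a large part of why building and continually tuning something like Content ID costs tens of millions of dollars rather than a few lines of hashing.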

Quite apart from the fact that this would clash with another piece of EU legislation that’s trying to protect freedom of expression, this would be a huge burden for anyone trying to set up a new user-generated content platform, making it a problem for both innovation and competition.

Yes, creators deserve fair remuneration for the works they create. Yes, the Internet has turned their livelihoods upside-down by forcing them to compete with millions of rivals in an open market. Yes, lack of funding threatens media diversity. Yes, change is hard.

But these new proposals wouldn’t help creators make the best of the new landscape. All they would do is entrench the positions of the big players—the legacy media outlets in the case of ancillary copyright, and funnily enough Google in the case of the user-generated content proposals.

The European Parliament and the EU’s member states have a lot to fix over the next year or two, as this proposal wends its way through the legislative process.

 

Source : http://fortune.com/2016/09/14/europe-copyright-google/

The deputy head of the presidential administration, Vyacheslav Volodin, has said that Russia has more internet freedom than the United States, where people receive prison sentences for online comments about President Barack Obama.

Volodin was giving a press conference in the central Russian city of Tambov, where a local reporter asked him to comment on the possibility of introducing a rule that would require social networks to obtain ID from their users “so that people could know who is on the other side of the internet.” The official replied that unlike many countries, Russia has chosen self-regulation on the internet and he saw no need to change this.

“Now we are capable of solving various issues through self-regulation and a ban on distribution of information about illegal drugs, suicide and extremism. Society has a need for this.”

He also noted that Russia had more internet freedom than other nations, in particular the United States.

“Take a look at the legal practice. Have you ever heard about the legal proceedings initiated by [Russian] civil servants and senior officials against ordinary internet users over even the most harsh statements made on the internet?” Volodin asked journalists.

A woman in the audience answered that a man had once attempted to sue her for dissemination of discrediting materials about him on the internet, but failed as police and prosecutors refused to recognize her material as unlawful. “You can see that prosecutors protect you. And if you take a look at the US statistics, even over the past six months, you will see that several people there received prison sentences between 12 and 18 months for their posts about President Obama,” Volodin told journalists.

 

“Ask yourselves – who has more democracy – us or them?” he concluded.

The official did not specify which legal cases he was talking about, but this could be the arrest of John Martin Roos – a 61-year-old Wisconsin man who was detained in April this year for threatening the US president on social media. Police also found weapons and several pipe bombs as they searched Roos’ home. He has not yet been sentenced. In 2013, Donte Jamar Sims from Florida was sentenced to six months in prison plus one year of supervised release for making threats to President Obama over Twitter.

In August 2014, Russia introduced a law requiring all blogs with 3,000 daily readers or more to follow many of the rules that exist in conventional mass media, such as tougher controls on published information and a ban on the use of explicit language. The restrictions include the requirement to verify information before publishing it and to abstain from releasing reports containing slander, hate speech, calls for extremism or other banned information such as advice on suicide.

In July this year, Russian President Vladimir Putin signed into law a package of anti-terrorist amendments that allow automatic blocking of websites for promoting extremism and terrorism and require all communications companies, including internet providers, to retain information about their clients’ data traffic for three years and to hand it over to the authorities on demand (one year for messengers and social networks). Providers also must keep records of phone calls, messages and transferred files for six months.

 

 

Source : https://www.rt.com/politics/358296-internet-in-russia-is-freer/


 

THE SAGA OF Facebook Trending Topics never seems to end—and it drives us nuts.

First, Gizmodo said that biased human curators hired by Facebook—not just automated algorithms—were deciding what news stories showed up as Trending Topics on the company’s social network, before sprucing them up with fresh headlines and descriptions. Then a US Senator demanded an explanation from Facebook because Gizmodo said those biased humans were suppressing conservative stories. So, eventually, Facebook jettisoned the human curators so that Trending Topics would be “more automated.” Then people complained that the more algorithmically driven system chose a fake story about Fox News anchor Megyn Kelly as a Trending Topic.

Don’t get us wrong. The Facebook Trending Topics deserve scrutiny. They’re a prominent source of news on a social network that serves over 1.7 billion people. But one important issue was lost among all the weird twists and turns—and the weird way the tech press covered those twists and turns. What everyone seems incapable of realizing is that everything on the Internet is run by a mix of automation and humanity. That’s just how things work. And here’s the key problem: prior to Gizmodo’s piece, Facebook seemed to imply that Trending Topics was just a transparent looking glass into what was most popular on the social network.

Yes, everything on the Internet is a mix of the human and inhuman. Automated algorithms play a very big role in some services, like, say, the Google Search Engine. But humans play a role in these services too. Humans whitelist and blacklist sites on the Google Search Engine. They make what you might think of as manual decisions, in part because today’s algorithms are so flawed. What’s more—and this is just stating what should be obvious—humans write the algorithms. That’s not insignificant. What it means is that algorithms carry human biases. They carry the biases of the people who write them and the companies those people work for. Algorithms drive the Google Search Engine, but the European Union is still investigating whether Google—meaning: the humans at Google—instilled this search engine with a bias in favor of other Google services and against competing services.

“We have to let go of the idea that there are no humans,” says Tarleton Gillespie, a principal researcher at Microsoft Research who focuses on the impact of social media on public discourse. That’s worth remembering when you think about the Facebook Trending Topics. Heck, it’s worth repeating over and over and over again.

Facebook’s ‘Crappy’ Algorithm

Jonathan Koren worked on the technology behind the Facebook Trending Topics. The bottom line, says the former Facebook engineer, is that the algorithm is “crappy.” As he puts it, this automated system “finds ‘lunch’ every day at noon.” That’s not the indictment you may think it is. The truth is that so many of today’s computer algorithms are crappy—though companies and coders are always working to improve them. And because they’re crappy, they need help from humans.

That’s why Facebook hired those news curators. “Identifying true news versus satire and outright fabrication is hard—something computers don’t do well,” Koren says. “If you want to ship a product today, you hire some curators and the problem goes away. Otherwise, you fund a research project that may or may not meet human equivalence, and you don’t have a product until it does.” This is a natural thing for Facebook or any other Internet company to do. For years, Facebook, Twitter, and other social networks used humans to remove or flag lewd and horrific content on their platforms.

So, Koren and about five or six other engineers ran a Trending Topics algorithm at Facebook headquarters in Menlo Park, California, and across the country in New York, news curators filtered and edited the algorithm’s output. According to Gizmodo, they also “injected” stories that in some cases weren’t trending at all. (A leaked document obtained by The Guardian, however, showed Facebook guidelines said a topic had to appear in at least one tool before it could be considered for the Trending module.) The setup made sense, though Koren says he privately thought that the humans involved were overqualified. “It always struck me as a waste to have people with real journalism degrees essentially surfing the web,” he says.

Trending versus ‘Trending’

When it looked like Gizmodo’s story was finally blowing over, Facebook got rid of its journalist news curators—then it promptly had to deal with the fake Megyn Kelly story. People blamed the more algorithmically driven system, but Facebook said all along that humans would still play a role—and they did. A human working for Facebook still approved the hoax topic over that weekend, something many people probably don’t realize. But they were outraged that Facebook’s review system, now without a single journalist employed, let a fake story slip through.

Koren says the whole thing was “a bit overblown.” And that’s an understatement. From where he was sitting, “there wasn’t someone within the company going ‘bwahaha’ and killing conservative news stories.” But even if there was an anti-conservative bias, this is the kind of thing that happens on any web service, whether it’s Google or Amazon or The New York Times or WIRED. That’s because humans are biased. And that means companies are biased too. Don’t buy the argument? Well, some people want fake stories about Megyn Kelly, just because they’re what everyone is talking about or just because they’re funny.

The issue is whether Facebook misrepresented Trending Topics. Prior to the Gizmodo article, a Facebook help page read: “Trending shows you topics that have recently become popular on Facebook. The topics you see are based on a number of factors including engagement, timeliness, Pages you’ve liked, and your location.” It didn’t mention curators or the possibility that the system allowed a story to be added manually. We could deconstruct the language on that help page. But that seems silly. Algorithms don’t exist in a vacuum. They require humans. Besides, Facebook has now changed the description. “Our team is responsible for reviewing trending topics to ensure that they reflect real world events,” it says.

What we will say is that Facebook—like everyone else—needs to be more aware of the realities at work here. Koren says that Facebook’s relationship to the broader issues behind Trending Topics was characterized by a kind of “benign obliviousness.” It was just focused on making its product better. The folks building the algorithm didn’t really talk to the curators in New York. Well, however benign its obliviousness may be, Facebook shouldn’t be oblivious. Given its power to influence our society, it should work to ensure that people understand how its services work and, indeed, that they understand how the Internet works.

What’s important here is getting the world to realize that human intervention is status quo on the Internet, and Facebook is responsible for the misconceptions that persist. But so is Google—especially Google. And so is the tech press. They’ve spent years feeding the notion that the Internet is entirely automated. Though it doesn’t operate that way, people want it to. When someone implies that it does, people are apt to believe that it does. “There’s a desire to treat algorithms as if they’re standalone technical objects, because they offer us this sense of finally not having to worry about human subjectivity, error, or personal bias—things we’ve worried about for years,” says Gillespie.

Humans Forever

Sorry, folks, algorithms don’t give us that. Certainly, algorithms are getting better. With the rise of deep neural networks—artificially intelligent systems that learn tasks by analyzing vast amounts of data—humans are playing a smaller role in what algorithms ultimately deliver. But they still play a role. They build the neural networks. They decide what data the neural nets train on. They still decide when to whitelist and blacklist. Neural nets work alongside so many other services.

Besides, deep neural networks only work well in certain situations—at least today. They can recognize photos. They can identify spoken words. They help choose search results on Google. But they can’t run the entire Google search engine. And they can’t run the Trending Topics on Facebook. Like Google, Facebook is at the forefront of deep learning research. If it could off-load Trending Topics onto a neural net, it would.

But the bigger point is that even neural nets carry human biases. All algorithms do. Sure, you can build an algorithm that generates Trending Topics solely based on the traffic stories are getting. But then people would complain because it would turn up fake stories about Megyn Kelly. You have to filter the stream. And once you start to filter the stream, you make human judgments—whether humans are manually editing material or not. The tech press (including WIRED) is clamoring for Twitter to deal with harassment on its social network. If it does, it can use humans to intervene, build algorithms, or use a combination of both. But one thing is certain: those algorithms will carry bias. After all: What is harassment? There is no mathematical answer.
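The filtering point can be made concrete. In this hypothetical sketch (invented data and names, not Facebook's actual pipeline), a purely traffic-based ranker happily surfaces a hoax; keeping the hoax out requires a filter, here a human-maintained blocklist, and that list is exactly where the human judgment lives.

```python
def trending_by_traffic(stories, k=3):
    """Pure traffic ranking: whatever gets clicks trends, hoaxes included."""
    return sorted(stories, key=lambda s: s["clicks"], reverse=True)[:k]

def trending_filtered(stories, blocked_sources, k=3):
    """Same ranking, but a human-maintained blocklist filters the
    stream first. The ranking code is neutral; the list is not."""
    allowed = [s for s in stories if s["source"] not in blocked_sources]
    return trending_by_traffic(allowed, k)

stories = [
    {"title": "Hoax about a TV anchor", "source": "hoax.example", "clicks": 90000},
    {"title": "Election coverage", "source": "paper.example", "clicks": 40000},
    {"title": "Sports final", "source": "tv.example", "clicks": 30000},
]

print([s["title"] for s in trending_by_traffic(stories, k=2)])
# → ['Hoax about a TV anchor', 'Election coverage']
print([s["title"] for s in trending_filtered(stories, {"hoax.example"}, k=2)])
# → ['Election coverage', 'Sports final']
```

Whether the blocklist is edited by hand or learned by a model trained on human labels, someone still decided what counts as a hoax; the bias moves, but it never disappears.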

Like Twitter, Facebook is a powerful thing. It has a responsibility to think long and hard about what it shows and what it doesn’t show. It must answer to widespread public complaints about the choices it makes. It must be open and honest about how it makes these choices. But this humans versus algorithms debate is a bit ridiculous. “We’ll never get away from the bias question,” Gillespie says. “We just bury them inside of systems, but apply it much more widely, and at much bigger scale.”

Source: http://www.wired.com/2016/09/facebook-trending-humans-versus-algorithms/

 

Categorized in Search Engine

The Internet is a necessity in the workplace. Whether you work in a supermarket or in an office, part of your job will require an Internet connection. With companies constantly looking for ways to increase productivity, one of the most obvious is to improve your connection speed.

If you count up the times you have waited for a page to load or a tutorial video to stop buffering, you’ll probably have spent hours, if not days, each year sitting at your screen simply waiting.

In the following paragraphs we’ll cover what contributes to Internet speed and share some top tips on how you can speed yours up.

Internet speed is determined by a number of factors; some, like where you live, are completely out of your control, while others you can do something about.

Network structure

Perhaps the most important thing is the structure of your network. Whether you use a wired or wireless connection, if your system isn’t installed with bandwidth in mind you could be losing mountains of all-important speed. As a result, it is imperative that your business hires a structured cabling and wireless network professional to ensure that your network is in the best position possible to help you reach maximum Internet speed.

Clean your device

Have you ever wondered why the Internet on your laptop seems to move much quicker than on your PC, despite being connected to the same network? Well, a lot of this can be down to your device rather than the network.

If you are trying to use multiple applications at once or your device is reaching its memory capacity, it will run much slower. As a knock-on effect, web pages will take longer to load.

Spend a few hours going through your computer and remove any unwanted applications. Don’t forget to perform regular virus scans as well because if your computer is infected it could make your Internet run at a snail-like pace. Here are a few more tips on how to clean your computer.

Check your browser

This is something that many people forget, but different browsers put different levels of strain on your computer and connection. For example, Internet Explorer is a popular browser, but it uses a lot of resources; you may find that a lighter browser like Chrome speeds up your connection.

Download browser plug-ins

Most browsers now come with the ability to download plug-ins like dictionaries that allow you to hover over a word and instantly be presented with a definition.

There are many fantastic plug-ins that can help to improve your Internet speed. Popular ones include ad blockers: with fewer flashing banners and pop-ups on your screen, there are fewer elements to load, so the content that matters should load much more quickly.

Some browsers now build this in; Safari, for example, automatically disables all Flash elements unless you click on one to enable it.

Remove unwanted plug-ins

There are countless plug-ins that you can download so you’ll be forgiven for amassing a few over time. However, if you have plug-ins that you haven’t used in months, delete them. They’ll be running in the background and consuming bandwidth unnecessarily.

Close all unneeded tabs

Many webpages, mainly news sites, will now refresh automatically every few minutes. So even if you aren’t looking at pages but have them open in tabs, they will be consuming bandwidth. If you are one of those people who likes to have unlimited tabs and browser windows open, think again if you are experiencing slow Internet speeds – closing them could make a huge difference.

Source: http://www.toptensocialmedia.com/social-media-technology-2/internet-reliance-improving-your-speed-at-work/

Categorized in Internet Technology

ISLAMABAD: In 2013, after Yahoo acquired Tumblr, a microblogging website, many financial analysts thought that Yahoo would steer out of troubled waters and join ranks with the likes of Facebook and Twitter, if not Google.

Marissa Mayer indeed made a good bet by acquiring the blogging platform for $1.1 billion but unfortunately the acquisition failed to turn things around for Yahoo. Revenues fell, though Yahoo snatched back some market share from Google in 2015 after a deal to replace Google as the default search engine on Firefox browsers in the US.

Despite several acquisitions and organisational changes, profits continued to tumble and eventually the company was put up for sale in 2016. Now in hindsight, we can identify four reasons why a company valued at more than $100 billion in year 2000 ended up getting acquired for less than $4 billion in 2016.

Do you Yahoo!?

After 21 years, Yahoo’s board of directors still has no idea whether Yahoo is an internet technology company or a media powerhouse. For most users, Yahoo is an obsolete search engine; for some, Yahoo is synonymous with Yahoo Mail; and for many, it is a finance news portal.

The organisational identity crisis resulted in an unbridged gap between its internal self-image and its market positioning. Yahoo was the ‘go-to destination’ for good content on the internet, but it failed to develop a new niche after the dot-com bubble burst.

Yahoo is a buzzkill when it comes to acquisitions.

It has acquired more than 110 companies since its inception, but only a few had a strategic fit with its core business. Yahoo has shown a generally poor track record in managing million-dollar acquisitions. It failed to monetise the $5.7 billion Broadcast.com, an internet radio company, and had to close GeoCities, a web hosting company it acquired for over $3.6 billion. Yahoo did the same with Delicious and Flickr.

A hands-off approach to product development

Unlike Mark Zuckerberg at Facebook and Larry Page at Google, Yahoo’s co-founders essentially disconnected themselves from decisions related to product design. Product managers called the shots, preparing extensive requirements documents for engineers to execute, with little room for feedback.

Creativity was not a priority and there was no culture of process improvement. Things never changed even when underdogs started to steal Yahoo’s thunder and grab its market share.

Missed opportunities

In 2002, Yahoo failed to close a deal with Google’s co-founders when they asked for $1 billion. By the time Yahoo’s CEO went back with a reluctant offer, Google had raised its valuation to $3 billion.

Similarly in 2006, Yahoo approached Facebook with an offer of $1 billion. Though Mark Zuckerberg declined, it was widely known that an offer of $1.1 billion would have got the deal approved by Facebook’s board.

In 2008, Microsoft approached Yahoo with a takeover bid of over $44 billion. Co-founder Jerry Yang resisted the offer and drew up a “stockholder rights plan” as a poison pill to make the company unattractive for takeover. Eventually, in 2012, Yang stepped down from the board, leaving the company in dire straits.

Final word

An internal memo written by a Yahoo employee in 2006 (known as the Peanut Butter Manifesto) highlighted that the company wanted to do everything and be everything – to everyone. The “fear of missing out” and the inability to focus on a core business contributed to the downfall of an internet pioneer.

Source: http://tribune.com.pk/story/1153035/yahoos-demise-internet-giants-failure-story-missed-opportunities/

Categorized in Search Engine

The Association of Internet Research Specialists is the world's leading community for Internet Research Specialists, providing a unified platform that delivers education, training and certification for online research.
