
The company’s revamped app and browser extension will block ad tracking networks from companies like Google and Facebook

DuckDuckGo is launching updated versions of its browser extension and mobile app, with the promise of keeping internet users safe from snooping “beyond the search box.”

The company’s flagship product, its privacy-focused search engine, will remain the same, but the revamped extension and app will offer new tools to help users keep their web browsing as safe and private as possible. These include grade ratings for websites that factor in their use of encryption and ad tracking networks, as well as summaries of their terms of service (provided by the third-party project Terms of Service; Didn’t Read). The app and extension are available for Firefox, Safari, Chrome, iOS, and Android.

The ability to block ad tracking networks is probably the most important feature here. These networks are used by companies like Google and Facebook to follow users around the web, stitching together their browsing history to create a more accurate profile for targeted advertising. DuckDuckGo says its software will “expose and block” these trackers when it can find them, although, in the cat-and-mouse game of advertising versus privacy tech, it won’t always catch them all.
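The article doesn't describe DuckDuckGo's blocking logic, but the general technique used by tracker blockers is simple to illustrate: check the host of every outgoing request against a curated blocklist of known tracking domains and drop any match. Below is a minimal Python sketch of that idea; the domain list is a tiny, made-up sample, not DuckDuckGo's actual list.

```python
# Minimal sketch of blocklist-based tracker blocking (illustrative only).
from urllib.parse import urlparse

# A tiny illustrative blocklist; real blockers ship curated lists with
# thousands of tracking domains.
TRACKER_DOMAINS = {
    "doubleclick.net",
    "google-analytics.com",
    "facebook.net",
}

def is_tracker(url: str) -> bool:
    """Return True if the URL's host is a listed tracker domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == domain or host.endswith("." + domain) for domain in TRACKER_DOMAINS)

requests_to_check = [
    "https://www.example.com/article",
    "https://connect.facebook.net/en_US/fbevents.js",
    "https://stats.g.doubleclick.net/collect?v=1",
]

for url in requests_to_check:
    print(("BLOCK" if is_tracker(url) else "ALLOW"), url)
```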

DuckDuckGo has long been a small fish in a big pond (or should that be a small duck), but its pitch to users continues to prove popular. At the beginning of 2017, it celebrated 10 billion searches since its creation in 2009. This figure now stands at 16 billion — an increase of more than 50 percent in less than a year.

According to DuckDuckGo CEO Gabriel Weinberg, this shows the appetite for privacy online is only getting stronger. And, says Weinberg, the more people that use tools like DuckDuckGo’s, the more tech companies will be forced to reconsider their business model. “We’ll collectively raise the Internet’s privacy grade, ending the widespread use of invasive tracking,” writes Weinberg. It’s ambitious, to say the least.

Source: This article was published on theverge.com by James Vincent

Categorized in Search Engine

Group of 11 British MPs flew to Washington at a cost of £30,000 to taxpayers. But why?

The usual practice at the start of a select committee hearing is for the chair to thank the witnesses for having made the effort to come. At the digital, culture, media and sport committee’s latest hearing on “fake news,” it was the other way round. For the first time in parliamentary history, an entire committee had upped sticks and decamped to the US.

Quite why they had chosen to do so was not altogether clear. As far as anyone was aware, Google, YouTube, and Facebook all had senior executives working in the UK who were just as qualified to give evidence as their US counterparts. But on the off chance that the committee was hell-bent on hearing from the Americans, you’d have thought it was a great deal cheaper and much less of an organizational nightmare to fly them to the UK. After all, some of them were halfway to London having already flown 3,000 miles from Silicon Valley to join the committee in Washington.

Some might call it a nice winter break at an estimated cost of £30,000 to the British taxpayer. The 11 MPs preferred to call it thoroughness and, to mark the occasion, they had had special lapel badges made for themselves. Every trip abroad deserves a souvenir. And after a day or so to acclimatize and recover from jet lag – the committee flew out to the US on Tuesday – everyone was gathered in an echoey white hall at George Washington University for a 9am start.

First in the firing line were Richard Gingras, a dead ringer for Donald Sutherland as well as being vice-president of news at Google, and Juniper Downs, the global head of public policy at YouTube. Both were at pains to say how pleased they were to be there, how much they admired the work of the committee and how much they hated the fake news. Just in case anyone had not been paying attention to this, they repeated how much they hated the fake news.

The committee chair, Damian Collins, is a much shrewder operator than he sometimes appears and probed them rather more forensically than they expected on just how much they put the profit principle above such dreary considerations as monitoring fake news and making sure people weren’t using their platforms to influence election outcomes. “We’ve got 10,000 raters to make sure people don’t misuse the Google search engines,” Gingras insisted. In which case, Collins observed, why was it that when you typed in Jew, the auto-complete function more often than not took you to an antisemitic website? Gingras shrugged. No one was perfect.

“It’s mission critical for us,” said Downs, when asked what Google-owned YouTube did to ensure the veracity and provenance of the news videos posted on its site. “We spend tens of millions of dollars on security.” She was then asked how much YouTube made in total. Downs bounced up and down in her chair nervously. “I don’t know,” she squeaked. Collins filled her in: $10bn. So YouTube was spending 0.1% of its earnings on security. Downs shrugged. Sounded plenty to her.

Things didn’t improve when Facebook’s Monika Bickert, its head of global policy management, and Simon Milner, its policy director for the UK, Middle East, and Africa, got their turn in front of the committee. Milner’s appearance was especially baffling as he is a Brit through and through and could far more easily have been questioned in London.

Like Google and YouTube before them, the Facebook execs were mortified that anyone might have been using their websites for anything other than the greater glory of self-improvement. In fact, they were so appalled that they were now voluntarily implementing security measures that the regulators had recently imposed on them.

None of it was terribly enlightening. Just about the only thing we did learn was new media execs talk the same bullshit the world over. And 11 MPs probably didn’t need to travel 3,000 miles to discover that.

Source: This article was published on theguardian.com by John Crace

Categorized in News & Politics

For all the hype about killer robots, 2017 saw some notable strides in artificial intelligence. A bot called Libratus out-bluffed poker kingpins, for example. Out in the real world, machine learning is being put to use improving farming and widening access to healthcare.

But have you talked to Siri or Alexa recently? Then you’ll know that despite the hype, and worried billionaires, there are many things that artificial intelligence still can’t do or understand. Here are five thorny problems that experts will be bending their brains against next year.

The meaning of our words

Machines are better than ever at working with text and language. Facebook can read out a description of images for visually impaired people. Google does a decent job of suggesting terse replies to emails. Yet software still can’t really understand the meaning of our words and the ideas we share with them. “We’re able to take concepts we’ve learned and combine them in different ways, and apply them in new situations,” says Melanie Mitchell, a professor at Portland State University. “These AI and machine learning systems are not.”

Mitchell describes today’s software as stuck behind what mathematician Gian-Carlo Rota called “the barrier of meaning.” Some leading AI research teams are trying to figure out how to clamber over it.

One strand of that work aims to give machines the kind of grounding in common sense and the physical world that underpins our own thinking. Facebook researchers are trying to teach software to understand reality by watching video, for example. Others are working on mimicking what we can do with that knowledge about the world. Google has been tinkering with software that tries to learn metaphors. Mitchell has experimented with systems that interpret what’s happening in photos using analogies and a store of concepts about the world.

The reality gap impeding the robot revolution

Robot hardware has gotten pretty good. You can buy a palm-sized drone with an HD camera for $500. Machines that haul boxes and walk on two legs have also improved. Why are we not all surrounded by bustling mechanical helpers? Today’s robots lack the brains to match their sophisticated brawn.

Getting a robot to do anything requires specific programming for a particular task. Robots can learn operations like grasping objects from repeated trials (and errors). But the process is relatively slow. One promising shortcut is to have robots train in virtual, simulated worlds, and then download that hard-won knowledge into physical robot bodies. Yet that approach is afflicted by the reality gap—a phrase describing how skills a robot learned in simulation do not always work when transferred to a machine in the physical world.
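The article doesn't describe any particular lab's pipeline, but the reality gap is easy to demonstrate in a toy setting: tune a controller against simulated dynamics, then evaluate it on a system whose physics differ slightly. The Python sketch below does this for a made-up one-dimensional cart whose simulated mass is wrong; all the numbers are purely illustrative.

```python
# Toy illustration of the "reality gap" (not any lab's actual setup):
# a controller gain tuned in a simulator with the wrong mass does worse
# on the "real" system.
import numpy as np

def run_episode(gain, mass, steps=100, dt=0.05, target=1.0):
    """Drive a 1-D cart toward `target` with proportional control plus damping;
    return the distance to the target at the end of the episode."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        force = gain * (target - pos) - 2.0 * vel
        vel += (force / mass) * dt
        pos += vel * dt
    return abs(target - pos)

SIM_MASS, REAL_MASS = 1.0, 1.6  # the simulator underestimates the cart's mass

# "Train" in simulation: grid-search the gain that minimizes error under SIM_MASS.
gains = np.linspace(0.5, 20.0, 200)
best_gain = gains[int(np.argmin([run_episode(g, SIM_MASS) for g in gains]))]

print(f"gain chosen in simulation:  {best_gain:.2f}")
print(f"final error in simulation:  {run_episode(best_gain, SIM_MASS):.4f}")
print(f"final error on 'real' cart: {run_episode(best_gain, REAL_MASS):.4f}")
```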

The reality gap is narrowing. In October, Google reported promising results in experiments where simulated and real robot arms learned to pick up diverse objects including tape dispensers, toys, and combs.

Further progress is important to the hopes of people working on autonomous vehicles. Companies in the race to roboticize driving deploy virtual cars on simulated streets to reduce the time and money spent testing in real traffic and road conditions. Chris Urmson, CEO of autonomous-driving startup Aurora, says making virtual testing more applicable to real vehicles is one of his team’s priorities. “It’ll be neat to see over the next year or so how we can leverage that to accelerate learning,” says Urmson, who previously led Google parent Alphabet’s autonomous-car project.

Guarding against AI hacking

The software that runs our electrical grids, security cameras, and cell phones is plagued by security flaws. We shouldn’t expect software for self-driving cars and domestic robots to be any different. It may, in fact, be worse: There’s evidence that the complexity of machine-learning software introduces new avenues of attack.

Researchers showed this year that you can hide a secret trigger inside a machine-learning system that causes it to flip into evil mode at the sight of a particular signal. The team at NYU devised a street-sign recognition system that functioned normally—unless it saw a yellow Post-It. Attaching one of the sticky notes to a stop sign in Brooklyn caused the system to report the sign as a speed limit. The potential for such tricks might pose problems for self-driving cars.

The threat is considered serious enough that researchers at the world’s most prominent machine-learning conference convened a one-day workshop on the threat of machine deception earlier this month. Researchers discussed fiendish tricks like how to generate handwritten digits that look normal to humans but appear as something different to software. What you see as a 2, for example, a machine vision system would see as a 3. Researchers also discussed possible defenses against such attacks—and worried about AI being used to fool humans.
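The workshop papers aren't reproduced here, but the core trick behind such digit attacks is simple to state: nudge every pixel a small step in whichever direction increases the classifier's loss, so the image looks unchanged to a person while the model's label can flip. The Python sketch below shows that gradient-sign idea on a toy, untrained linear softmax model; the weights, the epsilon value, and the random "digit" are all made up for illustration, and real attacks target trained networks.

```python
# Toy sketch of a gradient-sign ("FGSM"-style) adversarial perturbation on an
# untrained linear softmax classifier. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(scale=0.1, size=(64, 10))  # 64 "pixels" -> 10 digit classes
b = np.zeros(10)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    return softmax(x @ W + b)

def gradient_sign_attack(x, true_label, epsilon=0.2):
    """Step each pixel by +/- epsilon in the direction that raises the
    cross-entropy loss. For this linear model the input gradient is
    W @ (p - one_hot(true_label))."""
    p = predict(x)
    one_hot = np.zeros(10)
    one_hot[true_label] = 1.0
    grad_x = W @ (p - one_hot)
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

x = rng.random(64)          # stand-in for a handwritten digit image
x_adv = gradient_sign_attack(x, true_label=2)

print("prediction on clean input:    ", predict(x).argmax())
print("prediction on perturbed input:", predict(x_adv).argmax())
print("largest per-pixel change:     ", np.abs(x_adv - x).max())
```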

Tim Hwang, who organized the workshop, predicted using the technology to manipulate people is inevitable as machine learning becomes easier to deploy, and more powerful. “You no longer need a room full of PhDs to do machine learning,” he said. Hwang pointed to the Russian disinformation campaign during the 2016 presidential election as a potential forerunner of AI-enhanced information war. “Why wouldn’t you see techniques from the machine learning space in these campaigns?” he said. One trick Hwang predicts could be particularly effective is using machine learning to generate fake video and audio.

Graduating beyond boardgames

Alphabet’s champion Go-playing software evolved rapidly in 2017. In May, a more powerful version beat Go champions in China. Its creators, research unit DeepMind, subsequently built a version, AlphaGo Zero, that learned the game without studying human play. In December, another upgrade effort birthed AlphaZero, which can learn to play chess and Japanese board game Shogi (although not at the same time).

That avalanche of notable results is impressive—but also a reminder of AI software’s limitations. Chess, Shogi, and Go are complex but all have relatively simple rules and gameplay visible to both opponents. They are a good match for computers’ ability to rapidly spool through many possible future positions. But most situations and problems in life are not so neatly structured.

That’s why DeepMind and Facebook both started working on the multiplayer video game StarCraft in 2017. Neither has gotten very far yet. Right now, the best bots—built by amateurs—are no match for even moderately skilled players. DeepMind researcher Oriol Vinyals told WIRED earlier this year that his software now lacks the planning and memory capabilities needed to carefully assemble and command an army while anticipating and reacting to moves by opponents. Not coincidentally, those skills would also make software much better at helping with real-world tasks such as office work or real military operations. Big progress on StarCraft or similar games in 2018 might presage some powerful new applications for AI.

Teaching AI to distinguish right from wrong

Even without new progress in the areas listed above, many aspects of the economy and society could change greatly if existing AI technology is widely adopted. As companies and governments rush to do just that, some people are worried about accidental and intentional harms caused by AI and machine learning.

How to keep the technology within safe and ethical bounds was a prominent thread of discussion at the NIPS machine-learning conference this month. Researchers have found that machine learning systems can pick up unsavory or unwanted behaviors, such as perpetuating gender stereotypes, when trained on data from our far-from-perfect world. Now some people are working on techniques that can be used to audit the internal workings of AI systems, and ensure they make fair decisions when put to work in industries such as finance or healthcare.
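The conference discussion didn't settle on one auditing method, but one of the simplest checks such an audit can run is a group-level comparison of outcomes, often called demographic parity. The Python sketch below computes it on made-up decisions; a real audit would use a model's actual outputs and richer metrics such as equalized odds or calibration.

```python
# Minimal fairness-audit sketch: compare positive-decision rates across groups
# (demographic parity). The data below is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

group = rng.choice(["A", "B"], size=1000)                         # e.g. a protected attribute
approved = rng.random(1000) < np.where(group == "A", 0.60, 0.45)  # simulated model decisions

def positive_rate(decisions, groups, label):
    """Fraction of positive decisions received by one group."""
    return decisions[groups == label].mean()

rate_a = positive_rate(approved, group, "A")
rate_b = positive_rate(approved, group, "B")

print(f"approval rate for group A: {rate_a:.2f}")
print(f"approval rate for group B: {rate_b:.2f}")
print(f"demographic parity gap:    {abs(rate_a - rate_b):.2f}")  # large gaps warrant review
```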

The next year should see tech companies put forward ideas for how to keep AI on the right side of humanity. Google, Facebook, Microsoft, and others have begun talking about the issue, and are members of a new nonprofit called Partnership on AI that will research and try to shape the societal implications of AI. Pressure is also coming from more independent quarters. A philanthropic project called the Ethics and Governance of Artificial Intelligence Fund is supporting MIT, Harvard, and others to research AI and the public interest. A new research institute at NYU, AI Now, has a similar mission. In a recent report, it called for governments to swear off using “black box” algorithms not open to public inspection in areas such as criminal justice or welfare.

Source: This article was published on wired.com by Tom

Categorized in Science & Tech

Searching for people online? Looking for an email address? Look closer to find friends old and new as well as business contacts with these email address directories and people search engines. Here are your best bets.

1-Pipl People Search - Free People Search Site

In real time, Pipl scours databases and directories such as ICQ, Amazon profiles, flickr, or SEC records to find information and people web search engines do not see.

2-Intelius People Search - People Search Site
Accessing various public records, Intelius provides comprehensive email address search for the U.S. and can reveal the person behind an email address, too.

3-LinkedIn People Search - Free People Search Site
LinkedIn's worldwide network of professionals can be searched by name, industry, company, region, and more. Of course, LinkedIn offers means to get in touch.

4-LexisNexis Public Records - People Search Service
For serious research: LexisNexis's public records and private database search covers hundreds of millions of people and businesses.

5-PeopleSmart - People Search Service
PeopleSmart finds people competently and relays messages to their email addresses so you can contact them. In addition, PeopleSmart can look up the person behind an email address in reverse email search.

6-Data.com Connect - Free People Search Site
Data.com Connect helps you find business contacts—across companies and countries, with many a criterion to narrow your search.

7-Facebook People Search
You can find everybody on Facebook (and just about everybody is on Facebook after all) by college, company, school, or name.

8-yasni - Free People Search Site
yasni scours social networks, the web, blogs, Amazon wishlists, and its own records for whomever you seek. If your search is fruitless, you can swiftly create a missing person ad.

9-FreshAddress.com - Free People Search Site
FreshAddress.com links old and new email addresses, but its always-up-to-date database can also be searched by other criteria.

10-EmailSherlock.com Email Search
EmailSherlock smartly searches directories and public records but also web services such as online calendars to return data and details about the person behind an email address.

11-Spokeo - Reverse Email Address Search Site
Spokeo's reverse email search shows you the name, photos, videos, social networking profiles, blogs, and non-email contact information behind an email address.

12-MyLife - Free People Search Site
Hand MyLife a name and approximate age, and it will often find the person you seek. After registering, you can see their details, too.

13-Myspace.com Discover People
The space to meet friends on the web was once heavily populated. You can still find ways to get in touch with artists, though, and possibly old friends on Myspace.com.

14-Plaxo Business Card Search
After becoming a Plaxo member yourself, you can search—and contact—others in their directory.

15-InfoTracer
InfoTracer aggregates publicly available information—from Social Security records to blogs and business ownerships—for search by name, location, and also email address.

16-Email Finder Reverse Email Lookup - People Search Site
Email Finder finds more than email addresses. It looks up the person behind an email address, in fact, with a detailed profile—for members only.

17-Reunion.com People Search - Free People Search Site
After registering yourself (which puts you in the directory), Reunion.com turns up comprehensive results that get you back in touch with people you knew. You can also search by school, for example, and find out who's looking for you.

18-XING - Free People Search Site
Popular in Europe, XING helps you find and connect to businesses and their people.

19-ICQ White Pages - Free People Search Site
Search the directory of ICQ users with numerous criteria to find old and new friends and their email addresses.

20-PeekYou People Search - Free People Search Site
You can search PeekYou's profiles for people (and a way to contact them) by name, company or school.

21-ThatsThem.com
You can search ThatsThem.com by name and address or, for reverse lookups, by email address with, in my experience, mixed results.

 

Source: This article was published on lifewire.com by Heinz Tschabitscher

Categorized in Search Engine

Life online has been rough lately — for the billions of people who use the Web every day, and also for the tech giants behind much of the world’s hardware and software.

Meanwhile, amid hacks and misinformation, the Internet is entering a new frontier. Connected devices, or the Internet of Things, are introducing the Internet to even more private aspects of our lives.

First, the latest news:

A massive cybersecurity breach at Equifax exposed millions of Americans’ most sensitive data, from Social Security numbers to home addresses. The aftermath yielded even more digital drama: erroneous tweets, fake websites and phishing scams.

At Facebook, Mark Zuckerberg has admitted politically motivated Russian accounts used the social network during the recent US presidential election. “I don’t want anyone to use our tools to undermine democracy,” Zuckerberg had to aver.

Across the Atlantic, Google is facing a staggering $2.7 billion antitrust fine from the European Union. Officials say the search giant is depriving Internet users of choice and depriving its competitors of a fair shake. (Google disagrees and says it’s actually improving user choice and competition.)

And on the African continent, the government of Togo knocked the Internet offline amid growing protests. Social media sites, online banking and mobile text messaging were blocked — a blow to freedom of expression and other democratic ideals.

Headlines such as these are reminders that online life is deeply entwined with offline life. What happens on the Internet affects our pocketbooks and even our democracies.

All of this is happening as the Internet of Things grows exponentially. In the ’90s, the Internet was tethered to our desktops. Last decade, it leapt onto our phones and into our pockets. Now, the Internet is becoming pervasive: It’s entering our cities, our cars, our thermostats and even our salt shakers.

As a result, the connection between online life and all other aspects of life is deepening. Five or 10 years down the line, the implications of another Equifax hack, or another Internet shutdown, will be far greater.

What do we do? Right now, the Internet of Things is at an inflection point. It’s pervasive but also still in its infancy. Rules have yet to be written, and social mores yet to be established. There are many possible futures — some darker than others.

If we continue forward with the Internet’s current design patterns — controlled by a handful of Silicon Valley giants, with personal data as currency — those darker futures will likely prevail. Internet-connected bedrooms, cars, pacemakers and dialysis machines would be beholden to companies, not individual users. Personal data — captured on an even more granular level — would remain currency, and threats such as hacking would extend to even more intimate areas of our lives.

There are already sobering examples. Consider My Friend Cayla, the Internet-enabled toy that the German government labeled an “illegal espionage apparatus.” Cayla is a seemingly innocuous, Barbie-like doll. But Cayla records conversations, hawks products to impressionable youngsters, and is vulnerable to hackers.

Even good news raises murky questions. Tesla recently gave its customers affected by Hurricane Irma a battery boost — a noble gesture. But some Floridians and journalists questioned the implications: What happens if someone other than Tesla gains access to a fleet of vehicles? This is a concern about connected cars that long predates Hurricane Irma.

Alternatively, we — Internet users and consumers — can demand new, better design patterns. The Internet of Things can adopt an ethos akin to the early Internet: decentralized, open source and harmonized with privacy.

Here, too, there are examples. A growing number of technologists ask not only what’s possible but also what’s responsible. At the recent ThingsCon event in Berlin, creations from this movement were on display.

We encountered the concept of the Internet of Things trust marks — third-party labels that signal whether a device sufficiently respects privacy. We were introduced to Simply Secure’s Knowledge Base, a tool kit instructing developers and designers how to make privacy-respecting, secure products. And we saw SolarPak, a backpack created in Senegal that’s equipped with a solar panel. It collects energy during the day to power a small LED lamp at night, allowing students to study.

These are ideas and devices that solve problems, as many technology products do. But they also put responsibility first; data collection and planned obsolescence aren’t part of the equation.

This isn’t a simple issue. Big tech platforms do have an important role to play in making the Internet of Things more responsible and ethical. But the dynamic of so few controlling so much of our lives is simply too risky. As we welcome the Internet into more intimate parts of our lives, individual consumers and users must remain in control. And loud consumer demand alone isn’t enough: Regulators and industry leaders need to take steps, too. Together, we must ensure new hardware and software put responsibility ahead of flashiness and profit.

Source: This article was published on wtkr.com by CNN Wire

Categorized in Internet of Things

Amid ongoing concern over the role of disinformation in the 2016 election, Facebook said Wednesday it found that more than 5,000 ads, costing more than $150,000, had been placed on its network between June 2015 and May 2017 from "inauthentic accounts" and Pages, likely from Russia.

The ads didn't directly mention the election or the candidates, according to a blog post by Facebook's chief security officer Alex Stamos, but focused on "amplifying divisive social and political messages across the ideological spectrum—touching on topics from LGBT matters to race issues to immigration to gun rights." Facebook declined to discuss additional details about the ads.

Facebook says it has given the information to authorities investigating Russian interference in the 2016 election. "We know we have to stay vigilant to keep ahead of people who try to misuse our platform," Stamos wrote in the post. "We believe in protecting the integrity of civic discourse, and require advertisers on our platform to follow both our policies and all applicable laws."

Speculation has swirled about the role Facebook played spreading fake news during the 2016 election. Senator Mark Warner, vice chair of the Senate Intelligence Committee, has gone so far as to wonder whether President Trump's tech and data team collaborated with Russian actors to target fake news at American voters in key geographic areas. “We need information from the companies, as well as we need to look into the activities of some of the Trump digital campaign activities," Warner said recently.

Brad Parscale, digital director of the Trump campaign, has agreed to an interview with the House Intelligence Committee, and maintains he is "unaware of any Russian involvement in the digital and data operations of the 2016 Trump presidential campaign."

Wednesday's revelation is a new wrinkle in the ongoing Russia investigations. In July, Facebook told WIRED it had found no indication of Russian entities buying ads during the election.

In the larger context of political ad spending, even $150,000 is a nominal amount. According to a report by Borrell Associates, digital political-ad spending totaled roughly $1.4 billion in 2016. And yet, this finding exposes what seems to be a coordinated effort to spread misinformation about key election issues in targeted states.

Facebook is remaining tight-lipped about the methods it used to identify the fraudulent accounts and Pages that it has since suspended. One search for ads purchased from US internet addresses set to the Russian language turned up $50,000 worth of spending on 2,200 ads. Facebook said about one-quarter of the suspect ads were geographically targeted, with more of those running in 2015 than 2016. According to The Washington Post, some accounts may be linked to a content farm called Internet Research Agency in St. Petersburg.

Facebook said it is implementing changes to prevent similar abuse. Among other things, it's looking for ways to combat so-called cloaking, in which ads that appear benign redirect users to malicious or misleading websites once people click through. That allows bad actors to circumvent Facebook's ad review process.

But while Facebook may be able to limit what people can and can't buy on its platform, it doesn't change the fact that social media has created a stage for anyone looking to spread false information online, with or without ads. As the $150,000 figure indicates, this finding is but a small fraction of a much larger problem.

Source: This article was published on wired.com by Issie Lapowsky

Categorized in Social

Facebook is the largest and most popular social networking site on the Web today. Millions of people check into Facebook daily, which makes it a fantastically powerful tool for finding people you might have lost contact with: friends, family, high school chums, military buddies, etc. In this article, we are going to look at a few ways you can use Facebook to reconnect. Note: as technology moves forward very quickly, keep in mind that some of the methods listed here might become outdated; however, at the time of this writing, all of these were tested and found to work correctly.

Facebook Friends Page

Go to the find your friends on Facebook page. You have a number of options here: find people you know by email, find people you know by last name, find people on your IM (instant messenger) list, browse for people alphabetically (this is somewhat tedious), or browse Facebook pages by name.

Piggyback on Your Friends' Friends

Use your Facebook friends as a resource. Click on their Friends and scroll through their list of friends. This is a great way to find someone in common that you might have forgotten about.

Facebook Suggestions

Use the Facebook Suggestions link (found to the right of your news stream) as a jumping-off point. You will not only see potential friends and fan pages here, but if you scroll down a little, you will also see an opportunity to search within your groups: college, high school, workplace, camps, etc.

By default, when you search for a topic on Facebook, the results you see will be from your list of contacts; your "circle of friends", so to speak. If you would like to expand that circle to include results from anyone who has chosen to make their Facebook information publicly accessible, simply click on "Posts By Everyone." This gives you the option to view information from people who are not included in your contact list.

Search Facebook Profiles

Facebook has a page designated especially for the networks that people choose to belong to. On this search page, you can search by name, email, school name and graduation year, and company.

Filter Your Facebook Results

Once you start typing something into the Facebook search bar, a feature called Facebook Typeahead kicks in, which returns the most relevant results from your immediate contacts. By default, when you search for someone on Facebook, you will get all the results on one page: people, pages, groups, events, networks, etc. You can filter these easily by using the search filters on the left-hand side of the search results page. Once you click on one of those filters, your search results will rearrange themselves into only results that coincide with that particular subject, making it easier to track down who you are looking for.

Search For Two Things at Once

Facebook (unfortunately) does not have much in the way of advanced search, but you can search for two things at once by using the pipe character (made by pressing Shift + backslash). For example, you could look for baseball and Billy Smith with this search: "baseball | Billy Smith."

Find Classmates on Facebook

Search for former classmates on Facebook. You can either simply browse through a graduation year (this is a GREAT way to find people you have lost touch with), or you can type in a specific name to get narrower results. You'll also be given people from your alma mater if you include it in your own Facebook profile.

Find work colleagues on Facebook

If someone has ever been affiliated with a company (and has put this affiliation on their Facebook profile), you will be able to find it using the Facebook company search page.

Search for Facebook Networks

This Facebook search page is especially helpful. Use the drop-down menu to search within your networks, or browse the left-hand side menu to filter your search results (recently updated, lists, possible connections, etc.).

Facebook's general search page searches ALL results: friends, groups, posts by friends, and Web results (powered by Bing). You are given the option to "like" pages and groups that you might be interested in.

Source: This article was published on lifewire.com by Wendy Boswell

Categorized in Search Engine


SAN FRANCISCO: In a bid to expand the reach of the internet to every corner of the world, Facebook said that it has created a data map of the human population of 23 countries by combining government census numbers with information obtained from satellites.

Citing Janna Lewis, Facebook's head of strategic innovation partnerships and sourcing, the media reported that the mapping technology can pinpoint any man-made structures in any country on Earth to a resolution of five metres.

Facebook used the data to understand the precise distribution of humans around the planet which would help it determine what types of internet service -- based either on land, in the air or in space -- it can use to reach consumers who now have no (or very low quality) internet. 

"Satellites are exciting for us. Our data showed the best way to connect cities is an internet in the sky," Lewis was quoted as saying at a Space Technology and Investment Forum sponsored by the Space Foundation in San Francisco this week. 

"We are trying to connect people from the stratosphere and from space," using high-altitude drone aircraft and satellites, to supplement Earth-based networks," Lewis added. 

The data is used "to know the population distribution" of Earth to figure out "the best connectivity technologies" in different locales. 

"We see these as a viable option for serving these populations" that are "unconnected or under-connected," she said. 

Facebook said that it developed the mapping technology itself. 

Source: This article was published on economictimes.indiatimes.com
Categorized in Social

Facebook is hiring 3,000 workers over the next year -- but this is no ordinary job.

The company is adding an army of new content reviewers to its Community Operations Team as part of an effort to combat an uptick in gruesome live and pre-recorded videos users are posting on its site.

Videos of murders, suicides and other awful things are popping up with alarming frequency on the popular social platform, and Facebook’s content moderators are apparently having trouble keeping up with the flagged reports.

Last month, a video of a murder remained on the site for nearly two hours before it was taken down.

In response, Facebook CEO Mark Zuckerberg announced plans to hire more moderators to “review the millions of reports we get every week and improve the process for doing it quickly.”

“If we're going to build a safe community, we need to respond quickly. We're working to make these videos easier to report so we can take the right action sooner -- whether that's responding quickly when someone needs help or taking a post down,” he said.  

Nabbing a Job as a Facebook Content Moderator

Facebook’s Community Operations Team has been around a while, but there are virtually no concrete details about these new reviewer positions.

It’s also not clear whether these will be in-house positions or through third-party contractors.

One thing is certain -- this job isn’t for the faint of heart.

It will expose you to all manner of abusive, violent and gory content.

Before you apply, know that social media content moderation jobs have a high incidence of burnout, PTSD and long-term psychological trauma.

In fact, the positions are so challenging that two members of Microsoft’s Online Safety Team who worked in content moderation are suing the company for damages. They say the job has caused them permanent psychological trauma, including social anxiety, insomnia, depression, dissociation and hallucinations.

So why on earth would anyone want to do this job?

Ellen S., Vice President of Global Developer Support and Operations at Facebook, says, “The people that make up Community Operations care about our community and take pride in being Facebook’s first line of support.”

Jobs like this often appeal to people who want to make a difference in the world.

If you’ve got a passion for making online communities safer, a thick skin and a psychiatrist on speed dial, keep your eyes open for new positions at Facebook’s Online Operations career page.

While you’re waiting for jobs to open up, learn more about what Facebook’s hiring managers look for in a candidate.

Your Turn: Could you handle being a Facebook content reviewer?

Lisa McGreevy is a staff writer at The Penny Hoarder. She’s grateful for the content moderators who make online communities a little safer for us all.

Source: This article was published on thepennyhoarder.com

Categorized in Social

Facebook is far and away the largest social network on the Internet, bringing together friends, family, and colleagues to discuss in text, images, and video form whatever they feel like every day. But Facebook is apparently changing, and within 5 years the entire social network will consist of video content.

That’s not the prediction of this writer or some social network researcher, it’s the view of Nicola Mendelsohn, Facebook’s vice president for Europe, the Middle East, and Africa. She also believes that Facebook “will be definitely mobile,” which we can only assume means accessed almost exclusively on mobile devices.


Mendelsohn’s claim that most people will be using mobile devices to access Facebook in the near future is the much more believable prediction of the two. Facebook being 100% videos by 2021? I don’t think so.

Video is still a relatively new addition to Facebook, but most certainly a feature that is growing in popularity. Zuckerberg thinks it is important, meaning it’s going to get a lot of attention and resources put behind it. So it will grow rapidly, but I can’t see it replacing images and text. In fact, I doubt Facebook’s management would want that to happen seeing as it owns image sharing service Instagram (although it can also handle video).


Not everything works as a video, and not everyone is comfortable making videos. Sometimes you just want to write, or have a text chat, or post an image of a cat. Video takes longer to create unless we’re talking Vine-length captures, and it is much easier to create poorly, which ultimately means it doesn’t get posted.

Mendelsohn says the amount of text appearing on Facebook is declining every year while video grows and virtual reality is coming. On those points I’m sure she’s correct, but by 2021 I expect plenty of the billion+ people using Facebook to still be tapping out sentences of text and sharing them with their little community of followers.

Source: This article was published on geek.com by Matthew Humphries

Categorized in Social
