The advancement of technology has given us so many ways of finding out what we need to do to sustain businesses of different kinds.

Regardless of the differences that are unique to each business, one thing remains important: knowing what your customer base wants, and then working to fulfil those needs in order to survive the competition.

With the era of monopolies now gone, it is essential to invest in market research. That is why major companies around the world employ marketing teams and spend millions of dollars annually hiring research firms. Here are some reasons why you need to spend that money on research.

To have a better idea of what you are going to do

This may sound as generic as ever, but the thing is – it shows you the importance of conducting research in the first place. The worst mistake you can make as an entrepreneur is shooting in the dark – you simply waste your time and resources without anything to show for it. You may still make mistakes along the way even if you have sufficient information, but it is better to be prepared.

Whatever decision you make for the business, it is important to do your homework, whether you are sourcing suppliers, finding potential clients, or developing the range of your products and services. Research can also clear up any questions you have while giving you the first-hand insight needed to move the business forward, for example through a SWOT analysis (strengths, weaknesses, opportunities, and threats).

It helps you to stay focused

Let us be honest here – market research, and marketing in general, is not an easy venture to get into, especially with all the multitasking that happens today. Even as a business owner, you might find yourself trying to focus on multiple projects and playing various roles within your organization, all while running the business and trying to help it succeed.

However, sufficient market research should tell you what your consumers want. This helps you make a list of priorities to accomplish, which in turn helps you manage your time as effectively as possible. It also gives you a guide to both the long- and short-term strategies you need to put in place, leaving you feeling more organized and less frazzled and overwhelmed in the long run.

Gives you better insight into your customers

Whether your business has been in the game for a long time or is just starting out, your target customers should always be a priority, as they are either potential or loyal customers. Good market research will clear up any doubts you may have about who these customers are, including their gender, age, location, and so on.

The more analysis you do on their spending habits and products they frequently purchase, for instance, the better you can understand what makes them tick. This allows you to focus on manufacturing certain products or giving certain services in order to retain them as loyal customers.

Helps in understanding customer behavior

Aside from getting to know potentially loyal customers, you can use market research to find out what those customers require. Advances in technology have given us useful software tools such as Google Analytics, which help you understand the behavioral patterns of your customers. Once their spending behavior is understood, you can then refine and customize various products for them.

These software tools can use the data in two ways. One is tracking the customer's spending habits while they are online, which is a real-time analysis (it works particularly well if you have an online shop); the other is checking their past spending behavior to see whether there are any patterns involved.

Helps in analyzing competitors

If you have ever read the book ‘The Art of War’ by Sun Tzu, you will know it states that to defeat your enemy in battle you must know yourself and know them, and they will never defeat you. This statement is true not just in war, but also in other aspects of life – especially when it comes to running a business.

Monopolies no longer exist in this day and age, so every business faces fierce competition all around it. This should motivate you to do adequate research on the market as well as your competitors; you may learn some strategies from them that can either push you ahead or slow your competitors' growth.

Even more important, it will help you to keep your growth consistent through the improvement of your services and products.

Gives you adequate information for selling and forecasting activities

The most important purpose of a sales forecast is to help a business keep adequate inventory, which balances demand and supply. The only way to get this right is through research. After doing a sales forecast, you can then begin to plan for alternative goods you can sell, or find new methods of selling the goods. In addition, you may have consumer bases in other countries; market research allows you to find them and tailor strategies for those areas.

Significantly reduces losses and risk

The best approach here is a contingency approach. Keep in mind that the business may not do well due to factors beyond your control, while other setbacks will be mistakes you learn from along the way.

Market research proves to be effective at reducing risks that you incur or the losses your business may suffer, as long as it has concrete findings to back it. It helps in avoiding mistakes such as poor pricing methods and poor marketing so that your business has a fighting chance of thriving.

Final thoughts

Making your business succeed in the face of increasing competition is not an easy thing, and that is why market research exists – to enable you to find weaknesses you should improve on. With these tips, investing in proper research is justified, and it will give your company great benefits.

Source: This article was published at bmmagazine.co.uk

Published in Marketing Research

The third annual Western Cape Research Ethics Committees Colloquium was hosted by the University of the Western Cape (UWC) on Tuesday, 11 September 2018.

Here, the effectiveness of social media as a research tool and the implications of work conducted on these social media platforms were highlighted. 

According to Dr. Amiena Peck, from UWC’s Department of Linguistics, social media platforms have created many advantages for online research.

Guidelines, privacy, and cybersecurity

“Millions of South Africans use social media platforms such as Facebook, Twitter, Instagram, and Linkedin, and more and more people join daily. This makes finding data more accessible, but it does offer challenges,” Peck said. 

“Unfortunately, there are no guidelines and no existing literature for guidelines when using social media for data collection, and there are several other challenges – such as privacy issues and cybersecurity.”

Professor Neil Myburgh, chair of UWC’s Biomedical Research Ethics Committee, said the issue of consent when using social media is often not spoken about – but this should change. “We have seen on Twitter where photos of children were shared in particular campaigns, bringing ethical issues to the surface,” he said. 

Myburgh noted that researchers need to consider all ethical issues when harvesting data from social media and strict ethical guidelines need to be established for social media use.

Proper ethical research methods

These kinds of reviews carried out by Research Ethics Committees allow a collective of multiskilled people to review a proposal and check its scientific veracity, as well as its ethical quality – a useful process. 

UWC rector and vice-chancellor, Professor Tyrone Pretorius said ethics is close to the hearts of most researchers and professionals at universities.

“Colloquia such as these are important to ensure that proper ethical research methods are taught to our young researchers. We have seen what has been happening in the accounting profession, for example – the curriculum needs to be amended so that we can teach the softer skills to our young accountants,” he said.

The colloquium enabled fruitful engagement between people closely involved in ensuring both scientific and ethical quality in research, whilst contributing to better practices all around. 

Attendees included participants from research structures at the Cape Peninsula University of Technology, Stellenbosch University, University of Cape Town, the South African Medical Research Council and the Western Cape Department of Health.
Source: This article was published at bizcommunity.com
Published in Research Methods

The EU-funded KConnect project has developed innovative online medical search and analysis tools, enabling researchers to achieve clearer insights into the effectiveness of specific medical interventions and ultimately leading to more optimized treatments.

“The key success of the KConnect project has been to make effective online medical search tools accessible to medical researchers and the public,” says KConnect (Khresmoi Multilingual Medical Text Analysis, Search and Machine Translation Connected in a Thriving Data-Value Chain) project coordinator Allan Hanbury from the Vienna University of Technology in Austria. “The project results will now be further developed and should allow better insight into the effectiveness of medical interventions, as well as providing more reliable access for citizens to online medical information.” Project partners are currently working with commercial clients to create specific search solutions.

Automated text analysis

The amount of written information that exists in the medical domain is phenomenal. This includes patient-specific information such as medical records, as well as non-patient-specific information including peer-reviewed articles in journals that describe the results of clinical trials of interventions. To evaluate the effectiveness of specific treatments and procedures, all this text needs to be taken into account.


“There is a clear need for computer-supported tools capable of analysing all this information, which can then lead to firm conclusions on the effectiveness of specific medical interventions,” says Hanbury. “Computer analysis of text remains a challenge though, and this is, even more, the case in the medical domain. This is because different styles of writing can be found across scientific papers and medical records, and there is extensive use of abbreviations and of course different languages in medical records.”

Accessible, reliable information

KConnect focused on two main challenges: improving medical text analysis, search and machine translation services; and demonstrating the effectiveness of using these tools in medical record analysis and online searches of medical publications and websites. The project was built, to a large extent, on the results of the EU-funded Khresmoi project, which developed tools to search for and analyse medical text and images. Khresmoi’s main focus was on visual searches for radiology images, as well as text analysis of medical publications.

Starting from this basis, new search tools were developed and tested, and are now being applied in real life situations. The medical record analysis and search algorithms have been included in the Clinical Record Interactive Search (CRIS) system at the NHS Maudsley Biomedical Research Centre in the UK. CRIS provides authorized researchers with secure access to anonymised information extracted from the South London and Maudsley NHS Foundation Trust electronic clinical records system. This enables them to look at real-life situations on a large scale, making it easier to see patterns and trends and to see which treatments work for some but not others.

KConnect tools are also being used by the Health on the Net Foundation, which promotes the dissemination of useful and reliable health information online. The Foundation’s new search system gives users an estimation of the readability and reliability of medical websites. A KConnect plug-in for the Chrome Browser has been released and provides users with estimates of the reliability of medical websites sourced using common search engines.

Hanbury notes that training the medical text-specific machine translation algorithms proved to be a challenge for certain languages where few relevant resources were available, such as Hungarian. Nonetheless, KConnect services now allow multilingual queries in the search engine of the Trip medical database, a tool that enables researchers to find high-quality clinical research evidence. A soon-to-be-released Trip tool using KConnect technology will allow for the rapid analysis of multiple medical publications related to a specific disease, giving researchers an immediate overview of the effectiveness of various medications and interventions.

Source: This article was published at Cordis.europa.eu

Published in Search Engine

Did you ever need data on a topic you wanted to research, and had a hard time finding it? Wish you could just Google it? Well, now you can do that.

With data science and analytics on the rise and well on their way to being democratized, the importance of being able to find the right data to investigate hypotheses and derive insights is paramount.

What used to be the realm of researchers and geeks is now the bread and butter of an ever-growing array of professionals, organizations, and tools, not to mention self-service enthusiasts.

Even for the most well-organized and data-rich out there, there comes a time when you need to utilize data from sources other than your own. Weather and environmental data is the archetypal example.

Suppose you want to correlate farming data with weather phenomena to predict crops, or you want to research the effect of weather on a phenomenon taking place throughout a historical period. That kind of historical weather data, almost impossible for any single organization to accumulate and curate, is very likely to be readily available from the likes of NOAA and NASA.

Those organizations curate and publish their data on a regular basis through dedicated data portals. So, if you need their data on a regular basis, you are probably familiar with the process of locating the data via those portals. Still, you will have to look at both NOAA and NASA, and potentially other sources, too.

And it gets worse if you don't just need weather data. You have to locate the right sources, and then the right data at those sources. Wouldn't it be much easier if you could just use one search interface and just find everything out there, just like when you Google something on the web? It sure would, and now you can just Google your data, too.

That did not come about out of the blue. Google's love affair with structured data and semantics has been an ongoing one. Some landmarks on this path have been the incorporation of Google's knowledge graph via the acquisition of Metaweb, and support for structured metadata via schema.org.

Anyone doing SEO will tell you just how this has transformed the quality of Google's search and the options content publishers now have available. The ability to mark up content using the schema.org vocabulary, apart from making possible things such as viewing ratings and the like in web search results, is the closest thing we have to a mass-scale web of data.

This is exactly how it works for dataset discovery, as well. In a research note published in early 2017 by Google's Natasha Noy and Dan Brickley, who also happen to be among the semantic web community's most prominent members, the development was outlined. The challenges were laid out, and a call to action was issued. The key element is, once more, schema.org.


Schema.org plays a big part in Google's search, and it's also behind the newly added support for dataset search. (Image: Go Live UK)

Schema.org is a controlled vocabulary that describes entities in the real world and their properties. When something described in schema.org is used to annotate content on the web, it lets search engines know what that content is, as well as its properties. So what happened here is that Google turned on support for dataset entities in schema.org, officially available as of today.
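To make that concrete, here is a minimal sketch of what such an annotation can look like. It is an illustrative example only: the dataset name, URLs, and organization below are invented placeholders, not a real catalogue entry. The short Python script builds a schema.org/Dataset description as JSON-LD (one of the formats schema.org markup is commonly published in) and wraps it in the script tag a publisher would embed in the dataset's landing page for crawlers to pick up.

```python
import json

# Hypothetical example of schema.org/Dataset markup; all names and URLs are placeholders.
dataset_jsonld = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example Daily Weather Observations",
    "description": "Daily temperature and precipitation readings for a sample region.",
    "url": "https://data.example.org/daily-weather",
    "keywords": ["weather", "temperature", "precipitation"],
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "creator": {"@type": "Organization", "name": "Example Climate Office"},
    # A distribution tells crawlers where the actual files live and in what format.
    "distribution": [
        {
            "@type": "DataDownload",
            "encodingFormat": "CSV",
            "contentUrl": "https://data.example.org/daily-weather.csv",
        }
    ],
}

# Emit the tag a publisher would place in the page's HTML so that search engines
# supporting schema.org can read the dataset's description.
print('<script type="application/ld+json">')
print(json.dumps(dataset_jsonld, indent=2))
print("</script>")
```

Only this descriptive metadata needs to be exposed; as Noy explains further down, the files it points to can remain behind a registration wall or paywall.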

The first step was to make it easier to discover tabular data in search, which uses this same metadata along with the linked tabular data to provide answers to queries directly in the search results. This has been available for a while, and now full support for dataset indexing is here.

But is there anything out there to be discovered? How was Google's open call to dataset providers received? ZDNet had a Q&A with Natasha Noy from Google Research about this:

"We were pleasantly surprised by the reception that our call to action found. Perhaps, because we have many examples of other verticals at Google using the schema.org markup (think of jobs, events, and recipes), people trusted that providing this information would be useful.

Furthermore, because the standard is open and used by other companies, we know that many felt that they are doing it because it is 'the right thing to do.' While we reached out to a number of partners to encourage them to provide the markup, we were surprised to find schema.org/dataset on hundreds, if not thousands, of sites.

So, at launch, we already have millions of datasets, although we estimate it is only a fraction of what is out there. Most just marked up their data without ever letting us know."

NOAA's CDO, Ed Kearns, for example, is a strong supporter of this project and helped NOAA make many of its datasets searchable in this tool. "This type of search has long been the dream for many researchers in the open data and science communities," he said. "And for NOAA, whose mission includes the sharing of our data with others, this tool is key to making our data more accessible to an even wider community of users."

Under the hood

In other words, it's quite likely you may find what you are looking for already, and it will be increasingly likely going forward. You can already find data from NASA and NOAA, as well as from academic repositories such as Harvard's Dataverse and Inter-university Consortium for Political and Social Research (ICPSR), and data provided by news organizations, such as ProPublica.

But there are a few gotchas here, as datasets are different from regular web content that you -- and Google -- can read.

To begin with, what exactly is a dataset? Is a single table a dataset? What about a collection of related tables? What about a protein sequence? A set of images? An API that provides access to data? That was challenge No. 1 set out in Google's research note.

Those fundamental questions -- "what is topic X" and "what is the scope of the system" -- are faced by any vocabulary curator and system architect respectively, and Noy said they decided to take a shortcut rather than get lost in semantics:

"We are basically treating anything that data providers call a dataset by marking schema.org/dataset as a dataset. What constitutes a dataset varies widely by discipline and at this point, we found it useful to be open-minded about the definition."

That is a pragmatic way to deal with the question, but what are its implications? Google has developed guidelines for dataset providers to describe their data, but what happens if a publisher mis-characterizes content as being a dataset? Will Google be able to tell it's not a dataset and not list it as such, or at least penalize its ranking?

Noy said this is the case: "While the process is not fool-proof, we hope to improve as we gain more experience once users start using the tool. We work very hard to improve the quality of our results."


Google and data have always gone hand in hand. Now Google takes things further, by letting you search for data.

Speaking of ranking, how do you actually rank datasets? For documents, it's a combination of content (frequency and position of keywords and other such metrics) and network (authority of the source, links, etc). But what would apply to datasets? And, crucially, how would it even apply?

"We use a combination of web ranking for the pages where datasets come from (which, in turn, uses a variety of signals) and combine it with dataset-specific signals such as quality of metadata, citations, etc," Noy said.

So, it seems dataset content is not really inspected at this point. Besides the fact that this is an open challenge, there is another reason: Not all datasets discovered will be open, and therefore available for inspection.

"The metadata needs to be open, the dataset itself does not need to be. For an analogy, think of a search you do on Google Scholar: It may well take you to a publisher's website where the article is behind a paywall. Our goal is to help users discover where the data is and then access it directly from the provider," Noy said.

First research, then the world?

And what about the rest of the challenges laid out early on in this effort, and the way forward? Noy noted that while they started addressing some, the challenges in that note set a long-term agenda. Hopefully, she added, this work is the first step in that direction.

Identifying datasets, relating them, and propagating metadata among them was a related set of challenges. "You will see", Noy said, "that for many datasets, we list multiple repositories -- this information comes from a number of signals that we use to find replicas of the same dataset across repositories. We do not currently identify other relationships between datasets."

Indeed, when searching for a dataset, if it happens to be found in more than one location, then all its instances will be listed. But there is also something else, uniquely applicable to datasets -- at least at first sight. A dataset can be related to a publication, as many datasets come from scientific work. A publication may also come with the dataset it produced, so is there a way of correlating those?

Noy said some initial steps were taken: "You will see that if a dataset directly corresponds to a publication, there is a link to the publication right next to the dataset name. We also give an approximate number of publications that reference the dataset. This is an area where we still need to do more research to understand when exactly a publication references a dataset."


Searching for datasets will retrieve not only multiple results for your query, but also multiple sources for each dataset. (Image: Google)

If you think about it, however, is this really only applicable to science? If you collect data from your sales pipeline and use them to derive insights and produce periodic reports, for example, isn't that conceptually similar to a scientific publication and its supporting dataset?

If data-driven decision making bears many similarities to the scientific process, and data discovery is a key part of this, could we perhaps see this as a first step of Google moving into this realm for commercial purposes as well?

When asked, Noy noted that Google sees scientists, researchers, data journalists, and others who are interested in working with data as the primary audience for this tool. She also added, however, that as Google's other recent initiatives indicate, Google sees these kinds of datasets becoming more prominent throughout Google products.

Either way, this is an important development for anyone interested in finding data out in the wild, and we expect Google to be moving the bar in data search in the coming period. First research, then the world?

Source: This article was published at zdnet.com by George Anadiotis

Published in Search Engine

Google has launched Dataset Search, a search engine for finding datasets on the internet. This search engine will be a companion of sorts to Google Scholar, the company’s popular search engine for academic studies and reports. Google Dataset Search will allow users to search through datasets across thousands of repositories on the Web, whether they sit on a publisher’s site, in a digital library, or on an author’s personal web page.

Google’s Dataset Search scrapes government databases, public sources, digital libraries, and personal websites to track down the datasets. It also supports multiple languages and will add support for even more soon. The initial release of Dataset Search will cover the environmental and social sciences, government data, and datasets from news organizations like ProPublica. It may soon expand to include more sources.

Google has developed certain guidelines for dataset providers to describe their data in a way that Google can better understand the content of their pages. Anybody who publishes data structured using schema.org markup, or similar equivalents described by the W3C, will be indexed by this search engine. Google also mentioned that Dataset Search will improve as long as data publishers are willing to provide good metadata. If publishers use the open standards to describe their data, more users will find the data they are looking for.

Natasha Noy, a research scientist at Google AI who helped create Dataset Search, says that “the aim is to unify the tens of thousands of different repositories for datasets online. We want to make that data discoverable, but keep it where it is.”

Ed Kearns, Chief Data Officer at NOAA, is a strong supporter of this project and helped NOAA make many of their datasets searchable in this tool. “This type of search has long been the dream for many researchers in the open data and science communities,” he said.

Source: This article was published at hub.packtpub.com by Sugandha Lahoti

Published in Search Engine

The Internet is a very leaky place. Security researchers find new servers spilling private data with alarming regularity. Some incidents have involved well-known, reputable companies. This one does not. It involves a server that helped cyber criminals run a massive spam campaign.

While investigating a massive spam-producing malware network, security researchers at Vertek Corporation made an unexpected discovery. One of the servers linked to the malware hadn't been properly secured. Anyone who had the IP address of the server could connect at will and download a massive cache of email addresses.

Vertek tallied more than 44 million addresses in total. Of those, more than 43,500,000 were unique. The data was broken down into just over 2,200 files with each one containing more than 20,000 entries.
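As an aside, the kind of tally described above is straightforward to reproduce on any pile of address files. The sketch below is a hypothetical illustration, not Vertek's actual tooling: it assumes a directory of plain-text files with one email address per line, then counts total entries, unique addresses, and the most common domains.

```python
from collections import Counter
from pathlib import Path

def tally_addresses(directory: str):
    """Count total, unique, and per-domain email addresses across *.txt files."""
    total = 0
    unique = set()
    domains = Counter()
    for path in Path(directory).glob("*.txt"):
        for line in path.read_text(errors="ignore").splitlines():
            addr = line.strip().lower()
            if "@" not in addr:
                continue  # skip malformed lines
            total += 1
            unique.add(addr)
            domains[addr.rsplit("@", 1)[1]] += 1
    return total, len(unique), domains

if __name__ == "__main__":
    # "leaked_files" is a placeholder directory name for this sketch.
    total, distinct, domains = tally_addresses("leaked_files")
    print(f"{total} addresses, {distinct} unique")
    for domain, count in domains.most_common(5):
        print(f"  {domain}: {count}")
```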

Bleeping Computer was provided with a list that broke down which email services were the most popular with the spammers. Yahoo addresses were the most common, at nearly 9 million. AOL was a close second at just over 8 million. Comcast addresses were the third most common at around 780,000.

The numbers fall sharply after that, with none breaking half a million. Many of the addresses that appear are provided by ISPs like AT&T, Charter, Cox, and SBC. Curiously enough, very few Gmail accounts were listed. Bleeping Computer thinks that may be because the database Vertek was able to access only contained part of the spam server's address book. It's also possible that these particular domains were chosen to target a specific type of user.

Vertek's researchers have shared their findings with Troy Hunt, who is analyzing the list against the already massive database he maintains at the breach notification service HaveIBeenPwned.

It wouldn't be at all surprising if Hunt discovers that all 43 million addresses were already exposed by other leaks or hacks. Why? Because at least two other leaks from spam-linked servers contained way, way more.

In August of last year, Hunt processed a whopping 711 million addresses from a compromised server. Many of those, he determined, had been dumped before. The biggest leak involving a spam service involved twice as many emails: MacKeeper's Chris Vickery discovered a mind-blowing 1.4 billion addresses exposed by a shady server.

Source: This article was published at forbes.com by Lee Mathews

Published in Internet Privacy

Well, Google does not know you personally, so it has no reason to hate you. If you are writing and still not getting that first-page ranking in the search engine, it means something is not right on your side. First of all, let’s get some ideas straight: how does a search engine decide whether a web page ranks well? A few lines of code alone will not determine whether the page deserves a place on the first page of the search results. Search engines are always on the lookout for signals to rank any page, so it is up to you to tweak an article and send those signals to the search engines in order to enjoy a healthy stream of traffic.

Starting with the primary point:

To win that huge audience, you need to start with keyword research. It is a topic that every blogger has probably covered at least once, and something they need to work on from the very first day of their blogging life. Almost every SEO blog or blogger has used Google Keyword Planner. If you haven’t heard of it, you are missing out on a lot of potential business growth.

More on Google Keyword Planner:

There are many keyword research tools on the market, but Google Keyword Planner sits at the top of the list, and it is one of the major keyword spy tools you will come across. Google Keyword Planner is an official tool from Google that offers traffic estimates for targeted keywords, and it further helps users find related and relevant keywords that match your niche. There are some important points to know about Google Keyword Planner before you actually start using it.

  • To use the Google Keyword Planner tool, you need to register with Google and have an AdWords account. The tool is free of cost and you don’t have to spend a single penny to use it. You can create an AdWords account in a few simple steps and start using the planner immediately.
  • If you want, you can search for the current Google AdWords coupons, which will help you create a free account for your own use and start using the Google Keyword Planner tool right away.
  • The tool is aimed mainly at AdWords advertisers. Even so, it provides a wealth of useful information when it is time to find the right keywords for your blog and for articles relevant to your business.

Go online and get a clear idea of what the homepage of this Google tool looks like. You just have to enter the target keyword in the given search bar and your search results start appearing almost immediately. Later, you can add filters if you want to.

Source: This article was published at irishtechnews.ie by Sujain Thomas

Published in Online Research

IN LATE JULY, a group of high-ranking Facebook executives organized an emergency conference call with reporters across the country. That morning, Facebook’s chief operating officer, Sheryl Sandberg, explained, they had shut down 32 fake pages and accounts that appeared to be coordinating disinformation campaigns on Facebook and Instagram. They couldn’t pinpoint who was behind the activity just yet, but said the accounts and pages had loose ties to Russia’s Internet Research Agency, which had spread divisive propaganda like a flesh-eating virus throughout the 2016 US election cycle.

Facebook was only two weeks into its investigation of this new network, and the executives said they expected to have more answers in the days to come. Specifically, they said some of those answers would come from the Atlantic Council's Digital Forensics Research Lab. The group, whose mission is to spot, dissect, and explain the origins of online disinformation, was one of Facebook’s newest partners in the fight against digital assaults on elections around the world. “When they do that analysis, people will be able to understand better what’s at play here,” Facebook’s head of cybersecurity policy, Nathaniel Gleicher, said.

Back in Washington DC, meanwhile, DFRLab was still scrambling to understand just what was going on themselves. Facebook had alerted them to the eight suspicious pages the day before the press call. The lab had no access to the accounts connected to those pages, nor to any information on Facebook’s backend that would have revealed strange patterns of behavior. They could only see the parts of the pages that would have been visible to any other Facebook user before the pages were shut down—and they had less than 24 hours to do it.

“We screenshotted as much as possible,” says Graham Brookie, the group’s 28-year-old director. “But as soon as those accounts are taken down, we don’t have access to them... We had a good head start, but not a full understanding.” DFRLab is preparing to release a longer report on its findings this week.

As a company, Facebook has rarely been one to throw open its doors to outsiders. That started to change after the 2016 election, when it became clear that Facebook and other tech giants missed an active, and arguably incredibly successful, foreign influence campaign going on right under their noses. Faced with a backlash from lawmakers, the media, and their users, the company publicly committed to being more transparent and to work with outside researchers, including at the Atlantic Council.

'[Facebook] is trying to figure out what the rules of the road are, frankly, as are research organizations like ours.'

GRAHAM BROOKIE, DIGITAL FORENSICS RESEARCH LAB

DFRLab is a scrappier, substantially smaller offshoot of the 57-year-old bipartisan think tank based in DC, and its team of 14 is spread around the globe. Using open source tools like Google Earth and public social media data, they analyze suspicious political activity on Facebook, offer guidance to the company, and publish their findings in regular reports on Medium. Sometimes, as with the recent batch of fake accounts and pages, Facebook feeds tips to the DFRLab for further digging. It’s an evolving, somewhat delicate relationship between a corporate behemoth that wants to appear transparent without ceding too much control or violating users’ privacy, and a young research group that’s ravenous for intel and eager to establish its reputation.

“This kind of new world of information sharing is just that, it’s new,” Brookie says. “[Facebook] is trying to figure out what the rules of the road are, frankly, as are research organizations like ours.”

The lab got its start almost by accident. In 2014, Brookie was working for the National Security Council under President Obama when the military conflict broke out in eastern Ukraine. At the time, he says, the US intelligence community knew that Russian troops had invaded the region, but given the classified nature of their intel they had no way to prove it to the public. That allowed the Russian government to continue denying their involvement.

What the Russians didn’t know was that proof of their military surge was sitting right out in the open online. A working group within the Atlantic Council was among the groups busy sifting through the selfies and videos that Russian soldiers were uploading to sites like Instagram and YouTube. By comparing the geolocation data on those posts to Google Earth street view images that could reveal precisely where the photos were taken, the researchers were able to track the soldiers as they made their way through Ukraine.

“It was old-school Facebook stalking, but for classified national security interests,” says Brookie.
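The geolocation step rests on the fact that photos and posts often carry machine-readable coordinates. As a rough illustration only, and not DFRLab's actual tooling, the sketch below reads the GPS tags a camera may embed in a JPEG's EXIF metadata and converts them to decimal degrees that can be compared against mapping imagery. It assumes the file still carries those tags; many platforms strip them on upload, which is why analysts also lean on post geotags and visual landmark matching.

```python
from PIL import Image  # Pillow

GPS_IFD = 0x8825  # standard EXIF pointer to the GPS information block

def gps_coordinates(path: str):
    """Return (latitude, longitude) in decimal degrees from a JPEG's EXIF GPS tags, or None."""
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(GPS_IFD)
    if not gps:
        return None  # no GPS metadata present (common once platforms re-encode images)

    def to_decimal(dms, ref):
        # dms is (degrees, minutes, seconds); southern and western hemispheres are negative
        degrees, minutes, seconds = (float(v) for v in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    lat = to_decimal(gps[2], gps[1])  # tag 2 = GPSLatitude, tag 1 = GPSLatitudeRef
    lon = to_decimal(gps[4], gps[3])  # tag 4 = GPSLongitude, tag 3 = GPSLongitudeRef
    return lat, lon

if __name__ == "__main__":
    # "example_post_photo.jpg" is a placeholder filename for this sketch.
    print(gps_coordinates("example_post_photo.jpg"))
```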

This experiment formed the basis of DFRLab, which has continued using open source tools to investigate national security issues ever since. After the initial report on eastern Ukraine, for instance, DFRLab followed up with a piece that used satellite images to prove that the Russian government had misled the world about its air strikes on Syria; instead of hitting ISIS territory and oil reserves, as it claimed, it had in fact targeted civilian populations, hospitals, and schools.

But Brookie, who joined DFRLab in 2017, says the 2016 election radically changed the way the team worked. Unlike Syria or Ukraine, where researchers needed to extract the truth in a low-information environment, the election was plagued by another scourge: information overload. Suddenly, there was a flood of myths to be debunked. DFRLab shifted from writing lengthy policy papers to quick hits on Medium. To expand its reach even further, the group also launched a series of live events to train other academics, journalists, and government officials in their research tactics, creating even more so-called “digital Sherlocks.”

'Sometimes a fresh pair of eyes can see something we may have missed.'

KATIE HARBATH, FACEBOOK

This work caught Facebook’s attention in 2017. After it became clear that bad actors, including Russian trolls, had used Facebook to prey on users' political views during the 2016 race, Facebook pledged to better safeguard election integrity around the world. The company has since begun staffing up its security team, developing artificial intelligence to spot fake accounts and coordinated activity, and enacting measures to verify the identities of political advertisers and administrators for large pages on Facebook.

According to Katie Harbath, Facebook’s director of politics, DFRLab's skill at tracking disinformation not just on Facebook but across platforms felt like a valuable addition to this effort. The fact that the Atlantic Council’s board is stacked with foreign policy experts including former secretary of state Madeleine Albright and Stephen Hadley, former national security adviser to President George W. Bush, was an added bonus.

“They bring that unique, global view set of both established foreign policy people, who have had a lot of experience, combined with innovation and looking at problems in new ways, using open source material,” Harbath says.

That combination has helped the Atlantic Council attract as much as $24 million a year in contributions, including from government and corporate sponsors. As the think tank's profile has grown, however, it has also been accused of peddling influence for major corporate donors like FedEx. Now, after committing roughly $1 million in funding to the Atlantic Council, the bulk of which supports the DFRLab’s work, Facebook is among the organization's biggest sponsors.

But for Facebook, giving money away is the easy part. The challenge now is figuring out how best to leverage this new partnership. Facebook is a $500 billion tech juggernaut with 30,000 employees in offices around the world; it's hard to imagine what a 14-person team at a non-profit could tell them that they don't already know. But Facebook's security team and DFRLab staff swap tips daily through a shared Slack channel, and Harbath says that Brookie’s team has already made some valuable discoveries.

During the recent elections in Mexico, for example, DFRLab dissected the behavior of a political consulting group called Victory Lab that was spamming the election with fake news, driven by Twitter bots and Facebook likes that appeared to have been purchased in bulk. The team found that a substantial number of those phony likes came from the same set of Brazilian Facebook users. What's more, they all listed the same company, Frases & Versos, as their employer.

The team dug deeper, looking into the managers of Frases & Versos, and found that they were connected with an entity called PCSD, which maintained a number of pages where Facebook users could buy and sell likes, shares, and even entire pages. With the Brazilian elections on the horizon in October, Brookie says, it was critical to get the information in front of Facebook immediately.

"We flagged it for Facebook, like, 'Holy cow this is interesting,'" Brookie remembers. The Facebook team took on the investigation from there. On Wednesday, the DFRLab published its report on the topic, and Facebook confirmed to WIRED that it had removed a network of 72 groups, 46 accounts, and five pages associated with PCSD.

"We’re in this all day, every day, looking at these things," Harbath says. "Sometimes a fresh pair of eyes can see something we may have missed."

Of course, Facebook has missed a lot in the past few years, and the partnership with the DFRLab is no guarantee it won't miss more. Even as it stumbles toward transparency, the company remains highly selective about which sets of eyes get to search for what they've missed, and what they get to see. After all, Brookie's team can only examine clues that are already publicly accessible. Whatever signals Facebook is studying behind the scenes remain a mystery.

Source: This article was published at wired.com by Issie Lapowsky

Published in Internet Privacy

WE FACE a crisis of computing. The very devices that were supposed to augment our minds now harvest them for profit. How did we get here?

Most of us only know the oft-told mythology featuring industrious nerds who sparked a revolution in the garages of California. The heroes of the epic: Jobs, Gates, Musk, and the rest of the cast. Earlier this year, Mark Zuckerberg, hawker of neo-Esperantist bromides about “connectivity as panacea” and leader of one of the largest media distribution channels on the planet, excused himself by recounting to senators an “aw shucks” tale of building Facebook in his dorm room. Silicon Valley myths aren’t just used to rationalize bad behavior. These business school tales end up restricting how we imagine our future, limiting it to the caprices of eccentric billionaires and market forces.

What we need instead of myths are engaging, popular histories of computing and the internet, lest we remain blind to the long view.

At first blush, Yasha Levine’s Surveillance Valley: The Secret Military History of the Internet (2018) seems to fit the bill. A former editor of The eXile, a Moscow-based tabloid newspaper, and investigative reporter for PandoDaily, Levine has made a career out of writing about the dark side of tech. In this book, he traces the intellectual and institutional origins of the internet. He then focuses on the privatization of the network, the creation of Google, and revelations of NSA surveillance. And, in the final part of his book, he turns his attention to Tor and the crypto community.

He remains unremittingly dark, however, claiming that these technologies were developed from the beginning with surveillance in mind and that their origins are tangled up with counterinsurgency research in the Third World. This leads him to a damning conclusion: “The Internet was developed as a weapon and remains a weapon today.”

To be sure, these constitute provocative theses, ones that attempt to confront not only the standard Silicon Valley story, but also established lore among the small group of scholars who study the history of computing. He falls short, however, of backing up his claims with sufficient evidence. Indeed, he flirts with creating a mythology of his own — one that I believe risks marginalizing the most relevant lessons from the history of computing.

The scholarly history is not widely known and worth relaying here in brief. The internet and what today we consider personal computing came out of a unique, government-funded research community that took off in the early 1960s. Keep in mind that, in the preceding decade, “computers” were radically different from what we know today. Hulking machines, they existed to crunch numbers for scientists, researchers, and civil servants. “Programs” consisted of punched cards fed into room-sized devices that would process them one at a time. Computer time was tedious and riddled with frustration. A researcher working with census data might have to queue up behind dozens of other users, book time to run her cards through, and would only know about a mistake when the whole process was over.

Users, along with IBM, remained steadfast in believing that these so-called “batch processing” systems were really what computers were for. Any progress, they believed, would entail building bigger, faster, better versions of the same thing.

But that’s obviously not what we have today. From a small research community emerged an entirely different set of goals, loosely described as “interactive computing.” As the term suggests, using computers would no longer be restricted to a static one-way process but would be dynamically interactive. According to the standard histories, the man most responsible for defining these new goals was J. C. R. Licklider. A psychologist specializing in psychoacoustics, he had worked on early computing research, becoming a vocal proponent for interactive computing. His 1960 essay “Man-Computer Symbiosis” outlined how computers might even go so far as to augment the human mind.

It just so happened that funding was available. Three years earlier, in 1957, the Soviet launch of Sputnik had sent the US military into a panic. Partially in response, the Department of Defense (DoD) created a new agency for basic and applied technological research called the Advanced Research Projects Agency (ARPA, today known as DARPA). The agency threw large sums of money at all sorts of possible — and dubious — research avenues, from psychological operations to weather control. Licklider was appointed to head the Command and Control and Behavioral Sciences divisions, presumably because of his background in both psychology and computing.

At ARPA, he enjoyed relative freedom in addition to plenty of cash, which enabled him to fund projects in computing whose military relevance was decidedly tenuous. He established a nationwide, multi-generational network of researchers who shared his vision. As a result, almost every significant advance in the field from the 1960s through the early 1970s was, in some form or another, funded or influenced by the community he helped establish.

Its members realized that the big computers scattered around university campuses needed to communicate with one another, much as Licklider had discussed in his 1960 paper. In 1967, one of his successors at ARPA, Robert Taylor, formally funded the development of a research network called the ARPANET. At first the network spanned only a handful of universities across the country. By the early 1980s, it had grown to include hundreds of nodes. Finally, through a rather convoluted trajectory involving international organizations, standards committees, national politics, and technological adoption, the ARPANET evolved in the early 1990s into the internet as we know it.

Levine believes that he has unearthed several new pieces of evidence that undercut parts of this early history, leading him to conclude that the internet has been a surveillance platform from its inception.

The first piece of evidence he cites comes by way of ARPA’s Project Agile. A counterinsurgency research effort in Southeast Asia during the Vietnam War, it was notorious for its defoliation program that developed chemicals like Agent Orange. It also involved social science research and data collection under the guidance of an intelligence operative named William Godel, head of ARPA’s classified efforts under the Office of Foreign Developments. On more than one occasion, Levine asserts or at least suggests that Licklider and Godel’s efforts were somehow insidiously intertwined and that Licklider’s computing research in his division of ARPA had something to do with Project Agile. Despite arguing that this is clear from “pages and pages of released and declassified government files,” Levine cites only one such document as supporting evidence for this claim. It shows how Godel, who at one point had surplus funds, transferred money from his group to Licklider’s department when the latter was over budget.

This doesn’t pass the sniff test. Given the freewheeling nature of ARPA’s funding and management in the early days, such a transfer should come as no surprise. On its own, it doesn’t suggest a direct link in terms of research efforts. Years later, Taylor asked his boss at ARPA to fund the ARPANET — and, after a 20-minute conversation, he received $1 million in funds transferred from ballistic missile research. No one would seriously suggest that ARPANET and ballistic missile research were somehow closely “intertwined” because of this.

Sharon Weinberger’s recent history of ARPA, The Imagineers of War: The Untold Story of DARPA, The Pentagon Agency that Changed the World (2017), which Levine cites, makes clear what is already known from the established history. “Newcomers like Licklider were essentially making up the rules as they went along,” and were “given broad berth to establish research programs that might be tied only tangentially to a larger Pentagon goal.” Licklider took nearly every chance he could to transform his ostensible behavioral science group into an interactive computing research group. Most people in wider ARPA, let alone the DoD, had no idea what Licklider’s researchers were up to. His Command and Control division was even renamed the more descriptive Information Processing Techniques Office (IPTO).

Licklider was certainly involved in several aspects of counterinsurgency research. Annie Jacobsen, in her book The Pentagon’s Brain: An Uncensored History of DARPA, America’s Top-Secret Military Research Agency (2015), describes how he attended meetings discussing strategic hamlets in Southeast Asia and collaborated on proposals with others who conducted Cold War social science research. And Levine mentions Licklider’s involvement with a symposium that addressed how computers might be useful in conducting counterinsurgency work.

But Levine only points to one specific ARPA-funded computing research project that might have had something to do with counterinsurgency. In 1969, Licklider — no longer at ARPA — championed a proposal for a constellation of research efforts to develop statistical analysis and database software for social scientists. The Cambridge Project, as it was called, was a joint effort between Harvard and MIT. Formed at the height of the antiwar movement, when all DoD funding was viewed as suspicious, it was greeted with outrage by student demonstrators. As Levine mentions, students on campuses across the country viewed computers as large, bureaucratic, war-making machines that supported the military-industrial complex.

Levine makes a big deal of the Cambridge Project, but is there really a concrete connection between surveillance, counterinsurgency, computer networking, and this research effort? If there is, he doesn’t present it in the book. Instead, he relies heavily on an article in the Harvard Crimson by a student activist. He doesn’t even directly quote from the project proposal itself, which should contain at least one or two damning lines. Instead, he lists types of “data banks” the project would build, including ones on youth movements, minority integration in multicultural societies, and public opinion polls, among others. The project ran for five years but Levine never tells us what it was actually used for.

It’s worth pointing out that the DoD was the only organization that was funding computing research in a manner that could lead to real breakthroughs. Licklider and others needed to present military justification for their work, no matter how thin. In addition, as the 1960s came to a close, Congress was tightening its purse strings, which was another reason to trump up their relevance. It’s odd that an investigative reporter like Levine, ever suspicious of the standard line, should take the claims of these proposals at face value.

I spoke with John Klensin, a member of the Cambridge Project steering committee who was involved from the beginning. He has no memory of such data banks. “There was never any central archive or effort to build one,” he told me. He worked closely with Licklider and other key members of the project, and he distinctly recalls the tense atmosphere on campuses at the time, even down to the smell of tear gas. Oddly enough, he says some people worked for him by day and protested the project by night, believing that others elsewhere must be doing unethical work. According to Klensin, the Cambridge Project conducted “zero classified research.” It produced general purpose software and published its reports publicly. Some of them are available online, but Levine doesn’t cite them at all. An ARPA commissioned study of its own funding history even concluded that, while the project had been a “technical success” whose systems were “applicable to a wide variety of disciplines,” behavioral scientists hadn’t benefited much from it. Until Levine or someone else can produce documents demonstrating that the project was designed for, or even used in, counterinsurgency or surveillance efforts, we’ll have to take Klensin at his word.

As for the ARPANET, Levine only provides one source of evidence for his claim that, from its earliest days, the experimental computer network was involved in some kind of surveillance activity. He has dug up an NBC News report from the 1970s that describes how intelligence gathered in previous years (as part of an effort to create dossiers of domestic protestors) had been transferred across a new network of computer systems within the Department of Defense.

This report was read into the Congressional record during joint hearings on Surveillance Technology in 1975. But what’s clear from the subsequent testimony of Assistant Deputy Secretary of Defense David Cooke is that the NBC reporter had likely confused several computer systems and networks across various government agencies. The story’s lone named source claims to have seen the data structure used for the files when they arrived at MIT. It is indeed an interesting account, but it remains unclear what was transferred, across which system, and what he saw. This incident hardly shows “how military and intelligence agencies used the network technology to spy on Americans in the first version of the Internet,” as Levine claims.

The ARPANET was not a classified system — anyone with an appropriately funded research project could use it. “ARPANET was a general purpose communication network. It is a distortion to conflate this communication system’s development with the various projects that made use of its facilities,” Vint Cerf, creator of the internet protocol, told me. Cerf concedes, however, that a “secured capability” was created early on, “presumably used to communicate classified information across the network.” That should not be surprising, as the government ran the project. But Levine’s evidence merely shows that surveillance information gathered elsewhere might have been transferred across the network. Does that count as having surveillance “baked in,” as he says, to the early internet?

Levine’s early history suffers most from viewing ARPA or even the military as a single monolithic entity. In the absence of hard evidence, he employs a jackhammer of willful insinuations as described above, pounding toward a questionable conclusion. Others have noted this tendency. He disingenuously writes that, four years ago, a review of Julian Assange’s book in this very publication accused him of being funded by the CIA, when in fact its author had merely suggested that Levine was prone to conspiracy theories. It’s a shame because today’s internet is undoubtedly a surveillance platform, both for governments and the companies whose cash crop is our collective mind. To suggest this was always the case means ignoring the effects of the hysterical national response to 9/11, which granted unprecedented funding and power to private intelligence contractors. Such dependence on private companies was itself part of a broader free market turn in national politics from the 1970s onward, which tightened funds for basic research in computing and other technical fields — and cemented the idea that private companies, rather than government-funded research, would take charge of inventing the future. Today’s comparatively incremental technical progress is the result. In The Utopia of Rules (2015), anthropologist David Graeber describes this phenomenon as a turn away from investment in technologies promoting “the possibility of alternative futures” to investment in those that “furthered labor discipline and social control.” As a result, instead of mind-enhancing devices that might have the same sort of effect as, say, mass literacy, we have a precarious gig economy and a convenience-addled relationship with reality.

Levine recognizes a tinge of this in his account of the rise of Google, the first large tech company to build a business model for profiting from user data. “Something in technology pushed other companies in the same direction. It happened just about everywhere,” he writes, though he doesn’t say what the “something” is. But the lesson to remember from history is that companies on their own are incapable of big inventions like personal computing or the internet. The quarterly pressure for earnings and “innovations” leads them toward unimaginative profit-driven developments, some of them harmful.

This is why Levine’s unsupported suspicion of government-funded computing research, regardless of the context, is counterproductive. The lessons of ARPA prove inconvenient for mythologizing Silicon Valley. They show a simple truth: in order to achieve serious invention and progress — in computers or any other advanced technology — you have to pay intelligent people to screw around with minimal interference, accept that most ideas won’t pan out, and extend this play period to longer stretches of time than the pressures of corporate finance allow. As science historian Mitchell Waldrop once wrote, the polio vaccine might never have existed otherwise; it was “discovered only after years of failure, frustration, and blind alleys, none of which could have been justified by cost/benefit analysis.” Left to corporate interests, the world would instead “have gotten the best iron lungs you ever saw.”

Computing for the benefit of the public is a more important concept now than ever. In fact, Levine agrees, writing, “The more we understand and democratize the Internet, the more we can deploy its power in the service of democratic and humanistic values.” Power in the computing world is wildly unbalanced — each of us mediated by and dependent on, indeed addicted to, invasive systems whose functionality we barely understand. Silicon Valley only exacerbates this imbalance, in the same manner that oil companies exacerbate climate change or the financialization of the economy exacerbates inequality. Today’s technology is flashy, sexy, and downright irresistible. But, while we need a cure for the ills of late-stage capitalism, our gadgets are merely “the best iron lungs you ever saw.”

Source: This article was published at lareviewofbooks.org by Eric Gade

Published in Online Research

For scholars, the scale of Facebook’s 2.2 billion users provides an irresistible way to investigate how human nature may play out on, and be shaped by, the social network.

The professor was incredulous. David Craig had been studying the rise of entertainment on social media for several years when a Facebook Inc. employee he didn’t know emailed him last December, asking about his research. “I thought I was being pumped,” Craig said. The company flew him to Menlo Park and offered him $25,000 to fund his ongoing projects, with no obligation to do anything in return. This was definitely not normal, but after checking with his school, the University of Southern California, Craig took the gift. “Hell, yes, it was generous to get an out-of-the-blue offer to support our work, with no strings,” he said. “It’s not all so black and white that they are villains.”

Other academics got these gifts, too. One, who said she had $25,000 deposited in her research account recently without signing a single document, spoke to a reporter hoping maybe the journalist could help explain it. Another professor said one of his former students got an unsolicited monetary offer from Facebook, and he had to assure the recipient it wasn’t a scam. The professor surmised that Facebook uses the gifts as a low-cost way to build connections that could lead to closer collaboration later. He also thinks Facebook “happily lives in the ambiguity” of the unusual arrangement. If researchers truly understood that the funding has no strings, “people would feel less obligated to interact with them,” he said.

The free gifts are just one of the little-known and complicated ways Facebook works with academic researchers. For scholars, the scale of Facebook’s 2.2 billion users provides an irresistible way to investigate how human nature may play out on, and be shaped by, the social network. For Facebook, the motivations to work with outside academics are far thornier, and it’s Facebook that decides who gets access to its data to examine its impact on society.

“Just from a business standpoint, people won’t want to be on Facebook if Facebook is not positive for them in their lives,” said Rob Sherman, Facebook’s deputy chief privacy officer. “We also have a broader responsibility to make sure that we’re having the right impact on society.”

The company’s long been conflicted about how to work with social scientists, and now runs several programs, each reflecting the contorted relationship Facebook has with external scrutiny. The collaborations have become even more complicated in the aftermath of the Cambridge Analytica scandal, which was set off by revelations that a professor who once collaborated with Facebook’s in-house researchers used data collected separately to influence elections.

“Historically the focus of our research has been on product development, on doing things that help us understand how people are using Facebook and build improvements to Facebook,” Sherman said. Facebook’s heard more from academics and non-profits recently who say “because of the expertise that we have, and the data that Facebook stores, we have an opportunity to contribute to generalizable knowledge and to answer some of these broader social questions,” he said. “So you’ve seen us begin to invest more heavily in social science research and in answering some of these questions.”

Facebook has a corporate culture that reveres research. The company builds its product based on internal data on user behaviour, surveys and focus groups. More than a hundred Ph.D.-level researchers work on Facebook’s in-house core data science team, and employees say the information that points to growth has had more of an impact on the company’s direction than Chief Executive Officer Mark Zuckerberg’s ideas.

Facebook is far more hesitant to work with outsiders; it risks unflattering findings, leaks of proprietary information, and privacy breaches. But Facebook likes it when external research proves that Facebook is great. And in the fierce talent wars of Silicon Valley, working with professors can make it easier to recruit their students.

It can also improve the bottom line. In 2016, when Facebook changed the “like” button into a set of emojis that better captured user expression (and feelings for advertisers), it did so with the help of Dacher Keltner, a psychology professor at the University of California, Berkeley, who’s an expert in compassion and emotions. Keltner’s Greater Good Science Center continues to work closely with the company. And this January, Facebook made research the centerpiece of a major change to its news feed algorithm. In studies published with academics at several universities, Facebook found that people who used social media actively—commenting on friends’ posts, setting up events—were likely to see a positive impact on mental health, while those who used it passively might feel depressed. In reaction, Facebook declared it would spend more time encouraging “meaningful interaction.” Of course, the more people engage with Facebook, the more data it collects for advertisers.

The company has stopped short of pursuing deeper research on the potentially negative fallout of its power. According to its public database of published research, Facebook’s written more than 180 public papers about artificial intelligence but just one study about elections, based on an experiment Facebook ran on 61 million users to mobilize voters in the Congressional midterms back in 2010. Facebook’s Sherman said, “We’ve certainly been doing a lot of work over the past couple of months, particularly to expand the areas where we’re looking.”

Facebook’s first peer-reviewed papers with outside scholars were published in 2009, and almost a decade into producing academic work, it still wavers over how to structure the arrangements. It’s given out the smaller unrestricted gifts. But those gifts don’t come with access to Facebook’s data, at least initially. The company is more restrictive about who can mine or survey its users. It looks for research projects that dovetail with its business goals.

Some academics cycle through one-year fellowships while pursuing doctorate degrees, and others get paid for consulting projects, which never get published.

When Facebook does provide data to researchers, it retains the right to veto or edit the paper before publication. None of the professors Bloomberg spoke with knew of cases when Facebook prohibited a publication, though many said the arrangement inevitably leads academics to propose investigations less likely to be challenged. “Researchers focus on things that don’t create a moral hazard,” said Dean Eckles, a former Facebook data scientist now at the MIT Sloan School of Management. Without a guaranteed right to publish, Eckles said, researchers inevitably shy away from potentially critical work. That means some of the most burning societal questions may go unprobed.

Facebook also almost always pairs outsiders with in-house researchers. This ensures scholars have a partner who’s intimately familiar with Facebook’s vast data, but some who’ve worked with Facebook say this also creates a selection bias about what gets studied. “Stuff still comes out, but only the immensely positive, happy stories—the goody-goody research that they could show off,” said one social scientist who worked as a researcher at Facebook. For example, he pointed out that the company’s published widely on issues related to well-being, or what makes people feel good and fulfilled, which is positive for Facebook’s public image and product. “The question is: ‘What’s not coming out?,’” he said.

Facebook argues its body of work on well-being does have broad importance. “Because we are a social product that has large distribution within society, it is both about societal issues as well as the product,” said David Ginsberg, Facebook’s director of research.

Other social networks have smaller research ambitions, but have tried more open approaches. This spring, Twitter Inc. asked for proposals to measure the health of conversations on its platform, and Microsoft Corp.’s LinkedIn is running a multi-year programme to have researchers use its data to understand how to improve the economic opportunities of workers. Facebook has issued public calls for technical research, but until the past few months, hasn’t done so for social sciences. Yet it has solicited in that area, albeit quietly: Last summer, one scholarly association begged discretion when sharing information on a Facebook pilot project to study tech’s impact in developing economies. Its email read, “Facebook is not widely publicizing the program.”

In 2014, the prestigious Proceedings of the National Academy of Sciences published a massive study, co-authored by two Facebook researchers and an outside academic, that found emotions were “contagious” online: people who saw sad posts were more likely to make sad posts. The catch: the results came from an experiment run on 689,003 Facebook users, in which researchers secretly tweaked the algorithm of Facebook’s news feed to show some people cheerier content than others. People were angry, protesting that they didn’t give Facebook permission to manipulate their emotions.

The company first said people allowed such studies by agreeing to its terms of service, and then eventually apologized. While the academic journal didn’t retract the paper, it issued an “Editorial Expression of Concern.”

To get federal research funding, universities must run testing on humans through what’s known as an institutional review board, or IRB, which includes at least one outside expert, approves the ethics of the study, and ensures subjects provide informed consent. Companies don’t have to run research through IRBs. The emotional-contagion study fell through the cracks.

The outcry profoundly changed Facebook’s research operations, creating a review process that was more formal and cautious. It set up a pseudo-IRB of its own, which doesn’t include an outside expert but does have policy and PR staff. Facebook also created a new public database of its published research, which lists more than 470 papers. But that database now has a notable omission—a December 2015 paper two Facebook employees co-wrote with Aleksandr Kogan, the professor at the heart of the Cambridge Analytica scandal. Facebook said it believes the study was inadvertently never posted and is working to ensure other papers aren’t left off in the future.

In March, Gary King, a Harvard University political science professor, met with some Facebook executives about trying to get the company to share more data with academics. It wasn’t the first time he’d made his case, but he left the meeting with no commitment.

A few days later, the Cambridge Analytica scandal broke, and soon Facebook was on the phone with King. Maybe it was time to cooperate, at least to understand what happens in elections. Since then, King and a Stanford University law professor have developed a complicated new structure to give more researchers access to Facebook’s data on the elections and let scholars publish whatever they find. The resulting structure is baroque, involving a new “commission” of scholars Facebook will help pick, an outside academic council that will award research projects, and seven independent U.S. foundations to fund the work. “Negotiating this was kind of like the Arab-Israel peace treaty, but with a lot more partners,” King said.

The new effort, which has yet to propose its first research project, is the most open approach Facebook’s taken yet. “We hope that will be a model that replicates not just within Facebook but across the industry,” Facebook’s Ginsberg said. “It’s a way to make data available for social science research in a way that means that it’s both independent and maintains privacy.” But the new approach will also face an uphill battle to prove its credibility. The new Facebook research project came together under the company’s public relations and policy team, not its research group of PhDs trained in ethics and research design. More than 200 scholars from the Association of Internet Researchers, a global group of interdisciplinary academics, have signed a letter saying the effort is too limited in the questions it’s asking, and also that it risks replicating what sociologists call the “Matthew effect,” where only scholars from elite universities—like Harvard and Stanford—get an inside track.

“Facebook’s new initiative is set up in such a way that it will select projects that address known problems in an area known to be problematic,” the academics wrote. The research effort, the letter said, also won’t let the world—or Facebook, for that matter—get ahead of the next big problem.

Source: This article was published at hindustantimes.com by Karen Weise and Sarah Frier

Published in Social