
[This article was originally published on zdnet.com, written by Steven J. Vaughan-Nichols - Uploaded by AIRS Member: Eric Beaudoin]

For less than $100, you can have an open-source-powered, easy-to-use server that enables you -- and not Apple, Facebook, Google, or Microsoft -- to control your view of the internet.

On today's internet, most of us find ourselves locked into one service provider or another. We find ourselves tied down to Apple, Facebook, Google, or Microsoft for our e-mail, social networking, calendaring -- you name it. It doesn't have to be that way. The FreedomBox Foundation has just released its first commercially available FreedomBox: the Pioneer Edition FreedomBox Home Server Kit. With it, you -- not some company -- control your internet-based services.

The Olimex Pioneer FreedomBox costs less than $100 and is powered by a single-board computer (SBC), the open source hardware-based Olimex A20-OLinuXino-LIME2 board. The SBC has a 1GHz A20/T2 dual-core Cortex-A7 processor and a dual-core Mali 400 GPU. It also comes with a gigabyte of RAM, a high-speed 32GB micro SD card for storage with the FreedomBox software pre-installed, two USB ports, SATA-drive support, a Gigabit Ethernet port, and a backup battery.

Doesn't sound like much, does it? But here's the thing: you don't need much to run a personal server.

Sure, some of us have been running our own servers at home, the office, or at a hosting site for ages. I'm one of those people. But, it's hard to do. What the FreedomBox brings to the table is the power to let almost anyone run their own server without being a Linux expert.

The supplied FreedomBox software is based on Debian Linux. It's designed from the ground up to make it as hard as possible for anyone to exploit your data. It does this by putting you in control of your own corner of the internet at home. Its simple user interface lets you host your own internet services with little expertise.

You can also just download the FreedomBox software and run it on your own SBC. The Foundation recommends using the Cubietruck, Cubieboard2, BeagleBone Black, A20 OLinuXino Lime2, A20 OLinuXino MICRO, and PC Engines APU. It will also run on most newer Raspberry Pi models.

Want an encrypted chat server to replace WhatsApp? It's got that. A VoIP server? Sure. A personal website? Of course! Web-based file sharing à la Dropbox? You bet. A Virtual Private Network (VPN) server of your own? Yes, that's essential for its mission.

The software stack isn't perfect. This is still a work in progress. For example, it still doesn't have a personal email server or federated social networking, such as GNU Social and Diaspora, to provide a privacy-respecting alternative to Facebook. That's not because they won't run on a FreedomBox; they will. What the developers haven't yet managed is to make those services easy enough for anyone to set up, not just someone with Linux sysadmin chops. That will come in time.

As the Foundation stated, "The word 'Pioneer' was included in the name of these kits in order to emphasize the leadership required to run a FreedomBox in 2019. Users will be pioneers both because they have the initiative to define this new frontier and because their feedback will make FreedomBox better for its next generation of users."

To help you get up to speed, the FreedomBox community will be offering free technical support for owners of the Pioneer Edition FreedomBox servers on its support forum. The Foundation also welcomes new developers to help it perfect the FreedomBox platform.

Why do this? Eben Moglen, Professor of Law at Columbia Law School, saw the mess we were heading toward almost 10 years ago: "Mr. Zuckerberg has attained an unenviable record: he has done more harm to the human race than anybody else his age." That was before Facebook proved itself to be totally incompetent with security and sold off your data to Cambridge Analytica to scam 50 million US Facebook users with personalized anti-Clinton and pro-Trump propaganda in the 2016 election.

It didn't have to be that way. In an interview, Moglen told me this: "Concentration of technology is a surprising outcome of cheap hardware and free software. We could have had a world of peers. Instead, the net we built is the net we didn't want. We're in an age of surveillance with centralized control. We're in a world, which encourages swiping, clicking, and flame throwing."

With FreedomBox, "We can undo this. We can make it possible for ordinary people to provide internet services. You can have your own private messaging, services without a man in the middle watching your every move." 

We can, in short, rebuild the internet so that we, and not multi-billion dollar companies, are in charge.

I like this plan.

Categorized in Science & Tech

[This article was originally published on searchengineland.com, written by Greg Sterling - Uploaded by AIRS Member: Eric Beaudoin]

Facebook users turned to Google search or went directly to publishers.

An interesting thing happened on August 3. Facebook was down for nearly an hour in Europe and North America. During that time, many users who were shut out of their Facebook News Feeds went directly to news sites or searched for news.

Direct traffic spikes during a Facebook outage. According to data presented by Chartbeat at the recent Online News Association conference, direct traffic to news publisher sites increased 11 percent (in large part from app-driven traffic), and search traffic to news sites increased 8 percent during the outage, which occurred a little after 4:00 p.m.

According to late 2017 data from the Pew Research Center:

Just under half (45 percent) of U.S. adults use Facebook for news. Half of Facebook’s news users get news from that social media site alone, with just one-in-five relying on three or more sites for news.

Algorithm change sent people back to search. From that perspective, it makes sense that when Facebook is unavailable, people will turn to direct sources to get news. Earlier this year, however, Facebook began to “fix” the News Feed by minimizing third-party “commercial content.” This impacted multiple entities, but most news publishers saw their referral traffic from Facebook decline, a pattern that predated the algorithm change.

Starting in 2017, there’s evidence that as Facebook referrals have declined, more people have turned to Google to obtain their news fix. Users no longer able to get news as easily from Facebook are going to Google or directly to news sources to get it.

Why it matters to marketers. These trends underscore opportunities for content creators to capitalize on well-optimized pages (and possibly ads) to reach news-seeking audiences in search. They also highlight programmatic and direct-buying ad opportunities for marketers to reach these audiences on publisher sites.

Categorized in Social

Source: This article was published on wired.com by IE LAPOWSKY - Contributed by Member: Bridget Miller

IN LATE JULY, a group of high-ranking Facebook executives organized an emergency conference call with reporters across the country. That morning, Facebook’s chief operating officer, Sheryl Sandberg, explained, they had shut down 32 fake pages and accounts that appeared to be coordinating disinformation campaigns on Facebook and Instagram. They couldn’t pinpoint who was behind the activity just yet, but said the accounts and pages had loose ties to Russia’s Internet Research Agency, which had spread divisive propaganda like a flesh-eating virus throughout the 2016 US election cycle.

Facebook was only two weeks into its investigation of this new network, and the executives said they expected to have more answers in the days to come. Specifically, they said some of those answers would come from the Atlantic Council's Digital Forensics Research Lab. The group, whose mission is to spot, dissect, and explain the origins of online disinformation, was one of Facebook’s newest partners in the fight against digital assaults on elections around the world. “When they do that analysis, people will be able to understand better what’s at play here,” Facebook’s head of cybersecurity policy, Nathaniel Gleicher, said.

Back in Washington DC, meanwhile, DFRLab was still scrambling to understand just what was going on themselves. Facebook had alerted them to the eight suspicious pages the day before the press call. The lab had no access to the accounts connected to those pages, nor to any information on Facebook’s backend that would have revealed strange patterns of behavior. They could only see the parts of the pages that would have been visible to any other Facebook user before the pages were shut down—and they had less than 24 hours to do it.

“We screenshotted as much as possible,” says Graham Brookie, the group’s 28-year-old director. “But as soon as those accounts are taken down, we don’t have access to them... We had a good head start, but not a full understanding.” DFRLab is preparing to release a longer report on its findings this week.

As a company, Facebook has rarely been one to throw open its doors to outsiders. That started to change after the 2016 election, when it became clear that Facebook and other tech giants missed an active, and arguably incredibly successful, foreign influence campaign going on right under their noses. Faced with a backlash from lawmakers, the media, and their users, the company publicly committed to being more transparent and to work with outside researchers, including at the Atlantic Council.

'[Facebook] is trying to figure out what the rules of the road are, frankly, as are research organizations like ours.'

GRAHAM BROOKIE, DIGITAL FORENSICS RESEARCH LAB

DFRLab is a scrappier, substantially smaller offshoot of the 57-year-old bipartisan think tank based in DC, and its team of 14 is spread around the globe. Using open source tools like Google Earth and public social media data, they analyze suspicious political activity on Facebook, offer guidance to the company, and publish their findings in regular reports on Medium. Sometimes, as with the recent batch of fake accounts and pages, Facebook feeds tips to the DFRLab for further digging. It's an evolving, somewhat delicate relationship between a corporate behemoth that wants to appear transparent without ceding too much control or violating users' privacy, and a young research group that’s ravenous for intel and eager to establish its reputation.

“This kind of new world of information sharing is just that, it’s new,” Brookie says. “[Facebook] is trying to figure out what the rules of the road are, frankly, as are research organizations like ours.”

The lab got its start almost by accident. In 2014, Brookie was working for the National Security Council under President Obama when the military conflict broke out in eastern Ukraine. At the time, he says, the US intelligence community knew that Russian troops had invaded the region, but given the classified nature of their intel they had no way to prove it to the public. That allowed the Russian government to continue denying their involvement.

What the Russians didn’t know was that proof of their military surge was sitting right out in the open online. A working group within the Atlantic Council was among the groups busy sifting through the selfies and videos that Russian soldiers were uploading to sites like Instagram and YouTube. By comparing the geolocation data on those posts to Google Earth street view images that could reveal precisely where the photos were taken, the researchers were able to track the soldiers as they made their way through Ukraine.
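The core of that technique is easy to picture in code. Here is a minimal sketch of reading the GPS coordinates a phone embeds in a photo's EXIF metadata, the kind of signal the researchers cross-checked against mapping imagery. It assumes the open-source Pillow imaging library (version 8 or newer); the file name and the printed coordinates are hypothetical, for illustration only:

```python
# Minimal open-source geolocation sketch: extract GPS coordinates from
# a photo's EXIF metadata. Assumes Pillow >= 8; "soldier_selfie.jpg" is
# a hypothetical file that still carries its metadata.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_coordinates(path):
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo tag
    if not gps_ifd:
        return None  # photo carries no GPS metadata
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    def to_degrees(dms, ref):
        # EXIF stores (degrees, minutes, seconds); S and W are negative.
        d, m, s = (float(x) for x in dms)
        return (-1 if ref in ("S", "W") else 1) * (d + m / 60 + s / 3600)

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(gps_coordinates("soldier_selfie.jpg"))  # e.g. (48.0159, 37.8028)
```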

“It was old-school Facebook stalking, but for classified national security interests,” says Brookie.

This experiment formed the basis of DFRLab, which has continued using open source tools to investigate national security issues ever since. After the initial report on eastern Ukraine, for instance, DFRLab followed up with a piece that used satellite images to prove that the Russian government had misled the world about its air strikes on Syria; instead of hitting ISIS territory and oil reserves, as it claimed, it had in fact targeted civilian populations, hospitals, and schools.

But Brookie, who joined DFRLab in 2017, says the 2016 election radically changed the way the team worked. Unlike Syria or Ukraine, where researchers needed to extract the truth in a low-information environment, the election was plagued by another scourge: information overload. Suddenly, there was a flood of myths to be debunked. DFRLab shifted from writing lengthy policy papers to quick hits on Medium. To expand its reach even further, the group also launched a series of live events to train other academics, journalists, and government officials in their research tactics, creating even more so-called “digital Sherlocks.”

'Sometimes a fresh pair of eyes can see something we may have missed.'

KATIE HARBATH, FACEBOOK

This work caught Facebook’s attention in 2017. After it became clear that bad actors, including Russian trolls, had used Facebook to prey on users' political views during the 2016 race, Facebook pledged to better safeguard election integrity around the world. The company has since begun staffing up its security team, developing artificial intelligence to spot fake accounts and coordinated activity, and enacting measures to verify the identities of political advertisers and administrators for large pages on Facebook.

According to Katie Harbath, Facebook’s director of politics, DFRLab's skill at tracking disinformation not just on Facebook but across platforms felt like a valuable addition to this effort. The fact that the Atlantic Council’s board is stacked with foreign policy experts including former secretary of state Madeleine Albright and Stephen Hadley, former national security adviser to President George W. Bush, was an added bonus.

“They bring that unique, global view set of both established foreign policy people, who have had a lot of experience, combined with innovation and looking at problems in new ways, using open source material,” Harbath says.

That combination has helped the Atlantic Council attract as much as $24 million a year in contributions, including from government and corporate sponsors. As the think tank's profile has grown, however, it has also been accused of peddling influence for major corporate donors like FedEx. Now, after committing roughly $1 million in funding to the Atlantic Council, the bulk of which supports the DFRLab’s work, Facebook is among the organization's biggest sponsors.

But for Facebook, giving money away is the easy part. The challenge now is figuring out how best to leverage this new partnership. Facebook is a $500 billion tech juggernaut with 30,000 employees in offices around the world; it's hard to imagine what a 14-person team at a non-profit could tell them that they don't already know. But Facebook's security team and DFRLab staff swap tips daily through a shared Slack channel, and Harbath says that Brookie’s team has already made some valuable discoveries.

During the recent elections in Mexico, for example, DFRLab dissected the behavior of a political consulting group called Victory Lab that was spamming the election with fake news, driven by Twitter bots and Facebook likes that appeared to have been purchased in bulk. The team found that a substantial number of those phony likes came from the same set of Brazilian Facebook users. What's more, they all listed the same company, Frases & Versos, as their employer.

The team dug deeper, looking into the managers of Frases & Versos, and found that they were connected with an entity called PCSD, which maintained a number of pages where Facebook users could buy and sell likes, shares, and even entire pages. With the Brazilian elections on the horizon in October, Brookie says, it was critical to get the information in front of Facebook immediately.

"We flagged it for Facebook, like, 'Holy cow this is interesting,'" Brookie remembers. The Facebook team took on the investigation from there. On Wednesday, the DFRLab published its report on the topic, and Facebook confirmed to WIRED that it had removed a network of 72 groups, 46 accounts, and five pages associated with PCSD.

"We’re in this all day, every day, looking at these things," Harbath says. "Sometimes a fresh pair of eyes can see something we may have missed."

Of course, Facebook has missed a lot in the past few years, and the partnership with the DFRLab is no guarantee it won't miss more. Even as it stumbles toward transparency, the company remains highly selective about which sets of eyes get to search for what they've missed, and what they get to see. After all, Brookie's team can only examine clues that are already publicly accessible. Whatever signals Facebook is studying behind the scenes remain a mystery.

Categorized in Internet Privacy

Source: This article was published on techworm.net by Abhishek Kumar Jha - Contributed by Member: Jay Harris

The world has changed in recent times, and technological advances have made people's lives more comfortable, faster, and more secure than before. In the past, people did not realize how useful this advanced technology could be for their personal as well as social lives.

Today, people socialize with the world on popular online applications such as Facebook, among many others. Many people are into blogging and writing content for websites, which requires meeting several criteria to attract traffic to their sites.

To help these people, smallseotools.com has developed two useful tools: Reverse Image Search and Word Counter. Reverse image search is a CBIR (content-based image retrieval) technique in which a particular image is submitted as the query in order to retrieve information about that image. Word Counter is a tool that tells users the number of words, the number of characters with and without spaces, and much more.

Reverse Image Search

Reverse image search technology helps you find out who else in the world is using or manipulating an image that belongs to you without seeking permission or copyright. Searching by image helps ensure compliance with copyright regulations.

Reverse image searching is useful for anyone who is active on social networks or runs a website. You can use it as a free online utility that helps you figure out who is duplicating a picture that belongs to you.

On your own, it is practically impossible to identify who is misusing your picture on the internet: there are countless websites, and you cannot go through each of them to detect who has duplicated your photo. The reverse image search tool does this work in a few seconds, presenting you with results related to the picture and revealing any misuse of your image or art.

Furthermore, people can use this tool to find a higher-resolution version of a desired image by searching with a lower-resolution copy. This helps them post a better-quality picture, which in turn attracts more viewers.

The reverse photo search can be used on any device, whether it runs macOS, Linux, or any other operating system. No specific operating system is required, and the tool can be used for free at any time around the globe.

Searching with a picture is easy; the tool requires no training or tutorial videos. You can search by picture with this image search engine in several ways: upload the file from your device's gallery, drag the image and drop it into the drop box, or enter the URL of the image in the search box.

After that, just click the search icon and the results (if any) will appear on the screen. You will be able to see who else in the world is using content that belongs to you.

To help protect your privacy, the reverse image search by smallseotools.com is highly recommended: it is a secure tool that does not save the images you search for, and instead simply helps you find out who else is misusing them.
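smallseotools.com does not document how its matching works, but a common CBIR building block is the perceptual hash, which stays nearly identical when an image is resized or re-compressed. The sketch below is a toy illustration of that idea, assuming the open-source Pillow and imagehash packages and two hypothetical local files; it is not a description of the actual service:

```python
# Toy CBIR check: compare perceptual hashes of two images. A small
# Hamming distance means the second image is very likely a copy of the
# first, even after resizing or re-compression. Assumes Pillow and
# imagehash (pip install imagehash); the file names are hypothetical.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("my_photo.jpg"))
candidate = imagehash.phash(Image.open("found_on_the_web.jpg"))

distance = original - candidate  # Hamming distance between 64-bit hashes
verdict = "likely a copy" if distance <= 10 else "different image"
print(f"hash distance: {distance} -> {verdict}")
```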

Word Count Checker

Word Counter provides the user with detailed statistics: syllables, sentences, the average length of words and phrases, keywords, estimated reading and speaking time, and more.
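Most of those statistics are simple to compute. The sketch below is a rough illustration of how such a counter could derive them from plain text; the reading and speaking rates used are common rules of thumb, not figures published by SmallSEOTools:

```python
# Rough word-counter sketch: derive the kinds of statistics the tool
# reports from plain text. The 238/150 words-per-minute rates are
# common estimates for silent reading and speaking, for illustration.
import re
from collections import Counter

def word_stats(text):
    words = re.findall(r"[A-Za-z0-9'-]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "characters_with_spaces": len(text),
        "characters_without_spaces": len(text.replace(" ", "")),
        "words": len(words),
        "sentences": len(sentences),
        "avg_word_length": sum(map(len, words)) / max(len(words), 1),
        "top_keywords": Counter(w.lower() for w in words).most_common(3),
        "reading_time_minutes": len(words) / 238,
        "speaking_time_minutes": len(words) / 150,
    }

print(word_stats("Word counters report characters, words, and reading time."))
```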

The word count tool is helpful for students, who are often assigned writing tasks with a set word limit. A little leeway is usually given, but exceeding the limit by too much can lead to a loss of marks.

It is a waste of time and effort to count the number of words yourself when this efficient tool is available. Teachers can use it to check whether students stayed within the given length, making evaluation and marking easier.

As a blogger or content writer, you may need to write for someone within a given limit. Here, word count is a distinctive element that helps bloggers build their credibility. The tool is also helpful if you own a site: keeping text within a specific length can contribute to a higher ranking for the website. Content writers also have to maintain keyword density, and this tool can show them the top keywords used in an article.

To use the word counter, enter the text in the space provided, or copy and paste it into the box; the word calculator shows results as you type. There is also an option to upload a file, which loads the data into the box and shows the results immediately. To start another word count, click the clear button and the text box will be emptied.

The file upload limit on the SmallSEOTools word counter is 10 MB per file, but there is no limit on the number of words. The tool accepts a variety of file formats, including .docx, .txt, and .doc, and works most efficiently on plain text.

The word count tool is 100 percent free for all users around the globe. People no longer need to rely on software like MS Word, which first has to be installed on a device; this tool can be used for free in the browser.

Categorized in Search Engine

Source: This article was published on theverge.com by Dami Lee - Contributed by Member: Olivia Russell

There’s no mention of ‘fake news,’ though

There are more young people online than ever in our current age of misinformation, and Facebook is developing resources to help youths better navigate the internet in a positive, responsible way. Facebook has launched a Digital Literacy Library in partnership with the Youth and Media team at the Berkman Klein Center for Internet & Society at Harvard University. The interactive lessons and videos can be downloaded for free, and they’re meant to be used in the classroom, in after-school programs, or at home.

Created from more than 10 years of academic research and “built in consultation with teens,” the curriculum is divided into five themes: Privacy and Reputation, Identity Exploration, Positive Behavior, Security, and Community Engagement. There are 18 lessons in total, available in English; there are plans to add 45 more languages. Lessons are divided into three age groups between 11 and 18, and they cover everything from having healthy relationships online (group activities include discussing scenarios like “over-texting”) to recognizing phishing scams.

The Digital Literacy Library is part of Facebook’s Safety Center as well as a larger effort to provide digital literacy skills to nonprofits, small businesses, and community colleges. Though it feels like a step in the right direction, curiously missing from the lesson plans are any mentions of “fake news.” Facebook has worked on a news literacy campaign with the aim of reducing the spread of false news before. But given the company’s recent announcements admitting to the discovery of “inauthentic” social media campaigns ahead of the midterm elections, it’s strange that the literacy library doesn’t call attention to spotting potential problems on its own platform.

Categorized in Social

Source: This article was published on nytimes.com by GABRIEL J.X. DANCE, NICHOLAS CONFESSORE, and MICHAEL LaFORGIA - Contributed by Member: Linda Manly

As Facebook sought to become the world’s dominant social media service, it struck agreements allowing phone and other device makers access to vast amounts of its users’ personal information.

Facebook has reached data-sharing partnerships with at least 60 device makers — including Apple, Amazon, BlackBerry, Microsoft and Samsung — over the last decade, starting before Facebook apps were widely available on smartphones, company officials said. The deals allowed Facebook to expand its reach and let device makers offer customers popular features of the social network, such as messaging, “like” buttons and address books.

But the partnerships, whose scope has not previously been reported, raise concerns about the company’s privacy protections and compliance with a 2011 consent decree with the Federal Trade Commission. Facebook allowed the device companies access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders. Some device makers could retrieve personal information even from users’ friends who believed they had barred any sharing, The New York Times found.

Most of the partnerships remain in effect, though Facebook began winding them down in April. The company came under intensifying scrutiny by lawmakers and regulators after news reports in March that a political consulting firm, Cambridge Analytica, misused the private information of tens of millions of Facebook users.

In the furor that followed, Facebook’s leaders said that the kind of access exploited by Cambridge in 2014 was cut off by the next year, when Facebook prohibited developers from collecting information from users’ friends. But the company officials did not disclose that Facebook had exempted the makers of cellphones, tablets and other hardware from such restrictions.

“You might think that Facebook or the device manufacturer is trustworthy,” said Serge Egelman, a privacy researcher at the University of California, Berkeley, who studies the security of mobile apps. “But the problem is that as more and more data is collected on the device — and if it can be accessed by apps on the device — it creates serious privacy and security risks.”

In interviews, Facebook officials defended the data sharing as consistent with its privacy policies, the F.T.C. agreement and pledges to users. They said its partnerships were governed by contracts that strictly limited use of the data, including any stored on partners’ servers. The officials added that they knew of no cases where the information had been misused.

The company views its device partners as extensions of Facebook, serving its more than two billion users, the officials said.

“These partnerships work very differently from the way in which app developers use our platform,” said Ime Archibong, a Facebook vice president. Unlike developers that provide games and services to Facebook users, the device partners can use Facebook data only to provide versions of “the Facebook experience,” the officials said.

Some device partners can retrieve Facebook users’ relationship status, religion, political leaning and upcoming events, among other data. Tests by The Times showed that the partners requested and received data in the same way other third parties did.

Facebook’s view that the device makers are not outsiders lets the partners go even further, The Times found: They can obtain data about a user’s Facebook friends, even those who have denied Facebook permission to share information with any third parties.

In interviews, several former Facebook software engineers and security experts said they were surprised at the ability to override sharing restrictions.

“It’s like having door locks installed, only to find out that the locksmith also gave keys to all of his friends so they can come in and rifle through your stuff without having to ask you for permission,” said Ashkan Soltani, a research and privacy consultant who formerly served as the F.T.C.’s chief technologist.

How One Phone Gains Access to Hundreds of Thousands of Facebook Accounts

After connecting to Facebook, the BlackBerry Hub app was able to retrieve detailed data on 556 of Mr. LaForgia's friends, including relationship status, religious and political leanings and events they planned to attend. Facebook has said that it cut off third parties' access to this type of information in 2015, but that it does not consider BlackBerry a third party in this case.

The Hub app was also able to access information — including unique identifiers — on 294,258 friends of Mr. LaForgia's friends.

By Rich Harris and Gabriel J.X. Dance

Details of Facebook’s partnerships have emerged amid a reckoning in Silicon Valley over the volume of personal information collected on the internet and monetized by the tech industry. The pervasive collection of data, while largely unregulated in the United States, has come under growing criticism from elected officials at home and overseas and provoked concern among consumers about how freely their information is shared.

In a tense appearance before Congress in March, Facebook’s chief executive, Mark Zuckerberg, emphasized what he said was a company priority for Facebook users. “Every piece of content that you share on Facebook you own,” he testified. “You have complete control over who sees it and how you share it.”

But the device partnerships provoked discussion even within Facebook as early as 2012, according to Sandy Parakilas, who at the time led third-party advertising and privacy compliance for Facebook’s platform.

“This was flagged internally as a privacy issue,” said Mr. Parakilas, who left Facebook that year and has recently emerged as a harsh critic of the company. “It is shocking that this practice may still continue six years later, and it appears to contradict Facebook’s testimony to Congress that all friend permissions were disabled.”

The partnerships were briefly mentioned in documents submitted to German lawmakers investigating the social media giant’s privacy practices and released by Facebook in mid-May. But Facebook provided the lawmakers with the name of only one partner — BlackBerry, maker of the once-ubiquitous mobile device — and little information about how the agreements worked.

The submission followed testimony by Joel Kaplan, Facebook’s vice president for global public policy, during a closed-door German parliamentary hearing in April. Elisabeth Winkelmeier-Becker, one of the lawmakers who questioned Mr. Kaplan, said in an interview that she believed the data partnerships disclosed by Facebook violated users’ privacy rights.

“What we have been trying to determine is whether Facebook has knowingly handed over user data elsewhere without explicit consent,” Ms. Winkelmeier-Becker said. “I would never have imagined that this might even be happening secretly via deals with device makers. BlackBerry users seem to have been turned into data dealers, unknowingly and unwillingly.”

In interviews with The Times, Facebook identified other partners: Apple and Samsung, the world’s two biggest smartphone makers, and Amazon, which sells tablets.

An Apple spokesman said the company relied on private access to Facebook data for features that enabled users to post photos to the social network without opening the Facebook app, among other things. Apple said its phones no longer had such access to Facebook as of last September.

Samsung declined to respond to questions about whether it had any data-sharing partnerships with Facebook. Amazon also declined to respond to questions.

Usher Lieberman, a BlackBerry spokesman, said in a statement that the company used Facebook data only to give its own customers access to their Facebook networks and messages. Mr. Lieberman said that the company “did not collect or mine the Facebook data of our customers,” adding that “BlackBerry has always been in the business of protecting, not monetizing, customer data.”

Microsoft entered a partnership with Facebook in 2008 that allowed Microsoft-powered devices to do things like add contacts and friends and receive notifications, according to a spokesman. He added that the data was stored locally on the phone and was not synced to Microsoft’s servers.

Facebook acknowledged that some partners did store users’ data — including friends’ data — on their own servers. A Facebook official said that regardless of where the data was kept, it was governed by strict agreements between the companies.

“I am dumbfounded by the attitude that anybody in Facebook’s corporate office would think allowing third parties access to data would be a good idea,” said Henning Schulzrinne, a computer science professor at Columbia University who specializes in network security and mobile systems.

The Cambridge Analytica scandal revealed how loosely Facebook had policed the bustling ecosystem of developers building apps on its platform. They ranged from well-known players like Zynga, the maker of the FarmVille game, to smaller ones, like a Cambridge contractor who used a quiz taken by about 300,000 Facebook users to gain access to the profiles of as many as 87 million of their friends.

Those developers relied on Facebook’s public data channels, known as application programming interfaces, or APIs. But starting in 2007, the company also established private data channels for device manufacturers.
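For context, a "public data channel" is simply an HTTP endpoint a third-party app can call. The sketch below shows roughly what a request to Facebook's public Graph API of that era looked like; the access token is a placeholder and the fields shown are illustrative, while the private device-maker channels the article describes were separate, undocumented interfaces:

```python
# Illustrative sketch of a public API channel: a third-party app
# requesting profile fields from Facebook's Graph API over HTTPS.
# The token below is a placeholder and will not work; the private
# device-maker channels in the article were not this endpoint.
import requests

ACCESS_TOKEN = "EAAB...placeholder"  # hypothetical user access token

resp = requests.get(
    "https://graph.facebook.com/v2.12/me",
    params={"fields": "id,name,friends", "access_token": ACCESS_TOKEN},
    timeout=10,
)
print(resp.json())  # profile fields the user consented to share
```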

At the time, mobile phones were less powerful, and relatively few of them could run stand-alone Facebook apps like those now common on smartphones. The company continued to build new private APIs for device makers through 2014, spreading user data through tens of millions of mobile devices, game consoles, televisions and other systems outside Facebook’s direct control.

Facebook began moving to wind down the partnerships in April, after assessing its privacy and data practices in the wake of the Cambridge Analytica scandal. Mr. Archibong said the company had concluded that the partnerships were no longer needed to serve Facebook users. About 22 of them have been shut down.

The broad access Facebook provided to device makers raises questions about its compliance with a 2011 consent decree with the F.T.C.

The decree barred Facebook from overriding users’ privacy settings without first getting explicit consent. That agreement stemmed from an investigation that found Facebook had allowed app developers and other third parties to collect personal details about users’ friends, even when those friends had asked that their information remain private.

After the Cambridge Analytica revelations, the F.T.C. began an investigation into whether Facebook’s continued sharing of data after 2011 violated the decree, potentially exposing the company to fines.

Facebook officials said the private data channels did not violate the decree because the company viewed its hardware partners as “service providers,” akin to a cloud computing service paid to store Facebook data or a company contracted to process credit card transactions. According to the consent decree, Facebook does not need to seek additional permission to share friend data with service providers.

“These contracts and partnerships are entirely consistent with Facebook’s F.T.C. consent decree,” Mr. Archibong, the Facebook official, said.

But Jessica Rich, a former F.T.C. official who helped lead the commission’s earlier Facebook investigation, disagreed with that assessment.

“Under Facebook’s interpretation, the exception swallows the rule,” said Ms. Rich, now with the Consumers Union. “They could argue that any sharing of data with third parties is part of the Facebook experience. And this is not at all how the public interpreted their 2014 announcement that they would limit third-party app access to friend data.”

To test one partner’s access to Facebook’s private data channels, The Times used a reporter’s Facebook account — with about 550 friends — and a 2013 BlackBerry device, monitoring what data the device requested and received. (More recent BlackBerry devices, which run Google’s Android operating system, do not use the same private channels, BlackBerry officials said.)

Immediately after the reporter connected the device to his Facebook account, it requested some of his profile data, including user ID, name, picture, “about” information, location, email, and cell phone number. The device then retrieved the reporter’s private messages and the responses to them, along with the name and user ID of each person with whom he was communicating.

The data flowed to a BlackBerry app known as the Hub, which was designed to let BlackBerry users view all of their messages and social media accounts in one place.

The Hub also requested — and received — data that Facebook’s policy appears to prohibit. Since 2015, Facebook has said that apps can request only the names of friends using the same app. But the BlackBerry app had access to all of the reporter’s Facebook friends and, for most of them, returned information such as user ID, birthday, work and education history and whether they were currently online.

The BlackBerry device was also able to retrieve identifying information for nearly 295,000 Facebook users. Most of them were second-degree Facebook friends of the reporter, or friends of friends.

In all, Facebook empowers BlackBerry devices to access more than 50 types of information about users and their friends, The Times found.

Categorized in Social

Source: This article was published on internetofbusiness.com by Malek Murison - Contributed by Member: Carol R. Venuti

Facebook has announced a raft of measures to prevent the spread of false information on its platform.

Writing in a company blog post on Friday, product manager Tessa Lyons said that Facebook’s fight against fake news has been ongoing through a combination of technology and human review.

However, she also wrote that, given the determination of people seeking to abuse the social network’s algorithms for political and other gains, “This effort will never be finished and we have a lot more to do.”

Lyons went on to announce several updates and enhancements as part of Facebook’s battle to control the veracity of content on its platform. New measures include expanding its fact-checking programme to new countries and developing systems to monitor the authenticity of photos and videos.

Both are significant in the wake of the Cambridge Analytica fiasco. While fake news stories are widely acknowledged or alleged to exist on either side of the left/right political divide, concerns are also growing about the fast-emerging ability to fake videos.

Meanwhile, numerous reports surfaced last year documenting the problem of teenagers in Macedonia producing some of the most successful viral pro-Trump content during the US presidential election.

Other measures outlined by Lyons include increasing the impact of fact-checking, taking action against repeat offenders, and extending partnerships with academic institutions to improve fact-checking results.

Machine learning to improve fact-checking

Facebook already applies machine learning algorithms to detect sensitive content. Though fallible, this software goes a long way toward ensuring that photos and videos containing violence and sexual content are flagged and removed as swiftly as possible.

Now, the company is set to use similar technologies to identify false news and take action on a bigger scale.

In part, that’s because Facebook has become a victim of its own success. With close to two billion registered users, one billion regularly active ones, and over a billion pieces of content posted every day, it’s impossible for human fact-checkers to review stories on an individual basis, without Facebook employing vast teams of people to monitor citizen behavior.

Lyons explained how machine learning is being used, not only to detect false stories but also to detect duplicates of stories that have already been classed as false. “Machine learning helps us identify duplicates of debunked stories,” she wrote.

“For example, a fact-checker in France debunked the claim that you can save a person having a stroke by using a needle to prick their finger and draw blood. This allowed us to identify over 20 domains and over 1,400 links spreading that same claim.”
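Facebook has not disclosed the model it uses, but a simple baseline for this kind of duplicate detection is to compare TF-IDF vectors of a debunked claim against new posts. The sketch below, using scikit-learn, only illustrates the idea under that assumption:

```python
# Hypothetical baseline for spotting re-worded duplicates of a debunked
# claim: TF-IDF vectors plus cosine similarity (scikit-learn). Facebook's
# production system is not public; this only illustrates the concept.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = ("You can save a person having a stroke by using a needle "
            "to prick their finger and draw blood.")
candidates = [
    "Prick a stroke victim's finger with a needle to draw blood and save them.",
    "City council approves new bike lanes for the downtown core.",
]

vectors = TfidfVectorizer().fit_transform([debunked] + candidates)
scores = cosine_similarity(vectors[0:1], vectors[1:]).ravel()

for text, score in zip(candidates, scores):
    # High similarity suggests a re-worded copy of the debunked story.
    flag = "FLAG as duplicate" if score > 0.3 else "leave alone"
    print(f"{score:.2f}  {flag}: {text}")
```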

The big-picture challenge, of course, is that real science is constantly advancing alongside pseudoscience, and new or competing theories constantly emerge, while others are still being tested.

Facebook is also working on technology that can sift through the metadata of published images to check their background information against the context in which they are used. This is because while fake news is a widely known problem, the cynical deployment of genuine content, such as photos, in false or deceptive contexts can be a more insidious problem.

Machine learning is also being deployed to recognise where false claims may be emanating from. Facebook filters are now actively attempting to predict which pages are more likely to share false content, based on the profile of page administrators, the behavior of the page, and its geographical location.

Internet of Business says

Facebook’s moves are welcome and, many would argue, long overdue. However, in a world of conspiracy theories – many spun on social media – it’s inevitable that some will see the evidenced, fact-checked flagging-up of false content as itself being indicative of bias or media manipulation.

In a sense, Facebook is engaged in an age-old battle, belief versus evidence, which is now spreading into more and more areas of our lives. Experts are now routinely vilified by politicians, even as we still trust experts to keep planes in the sky, feed us, teach us, clothe us, treat our illnesses, and power our homes.

Many false stories are posted on social platforms to generate clicks and advertising revenues through controversy – hardly a revelation. However, red flags can automatically be raised when, for example, page admins live in one country but post content to users on the other side of the world.

“These admins often have suspicious accounts that are not fake, but are identified in our system as having suspicious activity,” Lyons told Buzzfeed.

An excellent point. But some media magnates also live on the other side of the world, including – for anyone outside of the US – Mark Zuckerberg.

Categorized in Social

Source: This article was published on hindustantimes.com by Karen Weise and Sarah Frier - Contributed by Member: David J. Redcliff

For scholars, the scale of Facebook’s 2.2 billion users provides an irresistible way to investigate how human nature may play out on, and be shaped by, the social network.

The professor was incredulous. David Craig had been studying the rise of entertainment on social media for several years when a Facebook Inc. employee he didn’t know emailed him last December, asking about his research. “I thought I was being pumped,” Craig said. The company flew him to Menlo Park and offered him $25,000 to fund his ongoing projects, with no obligation to do anything in return. This was definitely not normal, but after checking with his school, University of Southern California, Craig took the gift. “Hell, yes, it was generous to get an out-of-the-blue offer to support our work, with no strings,” he said. “It’s not all so black and white that they are villains.”

Other academics got these gifts, too. One, who said she had $25,000 deposited in her research account recently without signing a single document, spoke to a reporter hoping maybe the journalist could help explain it. Another professor said one of his former students got an unsolicited monetary offer from Facebook, and he had to assure the recipient it wasn’t a scam. The professor surmised that Facebook uses the gifts as a low-cost way to build connections that could lead to closer collaboration later. He also thinks Facebook “happily lives in the ambiguity” of the unusual arrangement. If researchers truly understood that the funding has no strings, “people would feel less obligated to interact with them,” he said.

The free gifts are just one of the little-known and complicated ways Facebook works with academic researchers. For scholars, the scale of Facebook’s 2.2 billion users provides an irresistible way to investigate how human nature may play out on, and be shaped by, the social network. For Facebook, the motivations to work with outside academics are far thornier, and it’s Facebook that decides who gets access to its data to examine its impact on society. “Just from a business standpoint, people won’t want to be on Facebook if Facebook is not positive for them in their lives,” said Rob Sherman, Facebook’s deputy chief privacy officer. “We also have a broader responsibility to make sure that we’re having the right impact on society.”

The company’s long been conflicted about how to work with social scientists, and now runs several programs, each reflecting the contorted relationship Facebook has with external scrutiny. The collaborations have become even more complicated in the aftermath of the Cambridge Analytica scandal, which was set off by revelations that a professor who once collaborated with Facebook’s in-house researchers used data collected separately to influence elections.

“Historically the focus of our research has been on product development, on doing things that help us understand how people are using Facebook and build improvements to Facebook,” Sherman said. Facebook’s heard more from academics and non-profits recently who say “because of the expertise that we have, and the data that Facebook stores, we have an opportunity to contribute to generalizable knowledge and to answer some of these broader social questions,” he said. “So you’ve seen us begin to invest more heavily in social science research and in answering some of these questions.”

Facebook has a corporate culture that reveres research. The company builds its product based on internal data on user behaviour, surveys and focus groups. More than a hundred Ph.D.-level researchers work on Facebook’s in-house core data science team, and employees say the information that points to growth has had more of an impact on the company’s direction than Chief Executive Officer Mark Zuckerberg’s ideas.

Facebook is far more hesitant to work with outsiders; it risks unflattering findings, leaks of proprietary information, and privacy breaches. But Facebook likes it when external research proves that Facebook is great. And in the fierce talent wars of Silicon Valley, working with professors can make it easier to recruit their students.

It can also improve the bottom line. In 2016, when Facebook changed the “like” button into a set of emojis that better captured user expression—and feelings for advertisers—it did so with the help of Dacher Keltner, a psychology professor at the University of California, Berkeley, who’s an expert in compassion and emotions. Keltner’s Greater Good Science Center continues to work closely with the company. And this January, Facebook made research the centerpiece of a major change to its news feed algorithm. In studies published with academics at several universities, Facebook found that people who used social media actively—commenting on friends’ posts, setting up events—were likely to see a positive impact on mental health, while those who used it passively may feel depressed. In reaction, Facebook declared it would spend more time encouraging “meaningful interaction.” Of course, the more people engage with Facebook, the more data it collects for advertisers.

The company has stopped short of pursuing deeper research on the potentially negative fallout of its power. According to its public database of published research, Facebook’s written more than 180 public papers about artificial intelligence but just one study about elections, based on an experiment Facebook ran on 61 million users to mobilize voters in the Congressional midterms back in 2010. Facebook’s Sherman said, “We’ve certainly been doing a lot of work over the past couple of months, particularly to expand the areas where we’re looking.”

Facebook’s first peer-reviewed papers with outside scholars were published in 2009, and almost a decade into producing academic work, it still wavers over how to structure the arrangements. It’s given out the smaller unrestricted gifts. But those gifts don’t come with access to Facebook’s data, at least initially. The company is more restrictive about who can mine or survey its users. It looks for research projects that dovetail with its business goals.

Some academics cycle through one-year fellowships while pursuing doctorate degrees, and others get paid for consulting projects, which never get published.

When Facebook does provide data to researchers, it retains the right to veto or edit the paper before publication. None of the professors Bloomberg spoke with knew of cases when Facebook prohibited a publication, though many said the arrangement inevitably leads academics to propose investigations less likely to be challenged. “Researchers focus on things that don’t create a moral hazard,” said Dean Eckles, a former Facebook data scientist now at the MIT Sloan School of Management. Without a guaranteed right to publish, Eckles said, researchers inevitably shy away from potentially critical work. That means some of the most burning societal questions may go unprobed.

Facebook also almost always pairs outsiders with in-house researchers. This ensures scholars have a partner who’s intimately familiar with Facebook’s vast data, but some who’ve worked with Facebook say this also creates a selection bias about what gets studied. “Stuff still comes out, but only the immensely positive, happy stories—the goody-goody research that they could show off,” said one social scientist who worked as a researcher at Facebook. For example, he pointed out that the company’s published widely on issues related to well-being, or what makes people feel good and fulfilled, which is positive for Facebook’s public image and product. “The question is: ‘What’s not coming out?,’” he said.

Facebook argues its body of work on well-being does have broad importance. “Because we are a social product that has large distribution within society, it is both about societal issues as well as the product,” said David Ginsberg, Facebook’s director of research.

Other social networks have smaller research ambitions, but have tried more open approaches. This spring, Twitter Inc. asked for proposals to measure the health of conversations on its platform, and Microsoft Corp.’s LinkedIn is running a multi-year programme to have researchers use its data to understand how to improve the economic opportunities of workers. Facebook has issued public calls for technical research, but until the past few months, hasn’t done so for social sciences. Yet it has solicited in that area, albeit quietly: Last summer, one scholarly association begged discretion when sharing information on a Facebook pilot project to study tech’s impact in developing economies. Its email read, “Facebook is not widely publicizing the program.”

In 2014, the prestigious Proceedings of the National Academy of Sciences published a massive study, co-authored by two Facebook researchers and an outside academic, that found emotions were “contagious” online, that people who saw sad posts were more likely to make sad posts. The catch: the results came from an experiment run on 689,003 Facebook users, where researchers secretly tweaked the algorithm of Facebook’s news feed to show some cheerier content than others. People were angry, protesting that they didn’t give Facebook permission to manipulate their emotions.

The company first said people allowed such studies by agreeing to its terms of service, and then eventually apologized. While the academic journal didn’t retract the paper, it issued an “Editorial Expression of Concern.”

To get federal research funding, universities must run testing on humans through what’s known as an institutional review board, which includes at least one outside expert, approves the ethics of the study and ensures subjects provide informed consent. Companies don’t have to run research through IRBs. The emotional-contagion study fell through the cracks.

The outcry profoundly changed Facebook’s research operations, creating a review process that was more formal and cautious. It set up a pseudo-IRB of its own, which doesn’t include an outside expert but does have policy and PR staff. Facebook also created a new public database of its published research, which lists more than 470 papers. But that database now has a notable omission—a December 2015 paper two Facebook employees co-wrote with Aleksandr Kogan, the professor at the heart of the Cambridge Analytica scandal. Facebook said it believes the study was inadvertently never posted and is working to ensure other papers aren’t left off in the future.

In March, Gary King, a Harvard University political science professor, met with some Facebook executives about trying to get the company to share more data with academics. It wasn’t the first time he’d made his case, but he left the meeting with no commitment.

A few days later, the Cambridge Analytica scandal broke, and soon Facebook was on the phone with King. Maybe it was time to cooperate, at least to understand what happens in elections. Since then, King and a Stanford University law professor have developed a complicated new structure to give more researchers access to Facebook’s data on the elections and let scholars publish whatever they find. The resulting structure is baroque, involving a new “commission” of scholars Facebook will help pick, an outside academic council that will award research projects, and seven independent U.S. foundations to fund the work. “Negotiating this was kind of like the Arab-Israel peace treaty, but with a lot more partners,” King said.

The new effort, which has yet to propose its first research project, is the most open approach Facebook’s taken yet. “We hope that will be a model that replicates not just within Facebook but across the industry,” Facebook’s Ginsberg said. “It’s a way to make data available for social science research in a way that means that it’s both independent and maintains privacy.” But the new approach will also face an uphill battle to prove its credibility. The new Facebook research project came together under the company’s public relations and policy team, not its research group of PhDs trained in ethics and research design. More than 200 scholars from the Association of Internet Researchers, a global group of interdisciplinary academics, have signed a letter saying the effort is too limited in the questions it’s asking, and also that it risks replicating what sociologists call the “Matthew effect,” where only scholars from elite universities—like Harvard and Stanford—get an inside track.

“Facebook’s new initiative is set up in such a way that it will select projects that address known problems in an area known to be problematic,” the academics wrote. The research effort, the letter said, also won’t let the world—or Facebook, for that matter—get ahead of the next big problem.

Categorized in Social

Source: This article was published on entrepreneur.com by Brian Byer - Contributed by Member: Clara Johnson

Consumers do enjoy the convenience of the apps they use but are individually overwhelmed when it comes to defending their privacy.

When it comes to our collective sense of internet privacy, 2018 is definitely the year of awareness. It’s funny that it took Facebook’s unholy partnership with a little-known data-mining consulting firm named Cambridge Analytica to raise the alarm. After all, there were already abundant examples of how our information was being used by unidentified forces on the web. It really took nothing more than writing the words "Cabo San Lucas" as part of a throwaway line in some personal email to a friend to initiate a slew of Cabo resort ads and Sammy Hagar’s face plastering the perimeters of our social media feeds.

In 2018, it has never been clearer that when we embrace technological developments that make our lives easier, we are taking hold of a double-edged sword. But has our awakening come a little too late? As a society, are we already so hooked on the conveniences internet-enabled technologies provide that we're hard-pressed to claim we want control of our personal data back?

It’s an interesting question. Our digital marketing firm recently conducted a survey to better understand how people feel about internet privacy issues and the new movement to re-establish control over what app providers and social networks do with our personal information.

Given the current media environment and scary headlines regarding online security breaches, the poll results, at least on the surface, were fairly predictable. According to our study, web users overwhelmingly object to how our information is being shared with and used by third-party vendors. No surprise here, a whopping 90 percent of those polled were very concerned about internet privacy. In a classic example of "Oh, how the mighty have fallen," Facebook and Google have suddenly landed in the ranks of the companies we trust the least, with only 3 percent and 4 percent of us, respectively, claiming to have any faith in how they handled our information.

Despite consumers’ apparent concern about online security, the survey results also revealed participants do very little to safeguard their information online, especially if doing so comes at the cost of convenience and time. In fact, 60 percent of them download apps without reading terms and conditions and close to one in five (17 percent) report that they’ll keep an app they like, even if it does breach their privacy by tracking their whereabouts.

While the survey reveals that only 18 percent say they are “very confident” trusting retail sites with their personal information, the sector is still on track to exceed $410 billion in e-commerce spending this year. This, despite more than half (54 percent) reporting that they feel less secure purchasing from online retailers after reading about one online breach after another.

What's become apparent from our survey is that while people are clearly dissatisfied with the state of internet privacy, they feel uninspired or simply ill-equipped to do anything about it. It appears many are hooked on the conveniences online living affords them and resigned to the loss of privacy if that’s what it costs to play.

The findings are not unique to our survey. In a recent Harvard Business School study, people who were told that the ads in their social media timelines had been selected based on their internet search histories showed far less engagement with those ads than a control group who didn't know how they'd been targeted. The study revealed that the very act of transparency, coming clean about the marketing tactics employed, ultimately dampened user response.

As is the case with innocent schoolchildren, the world is a far better place when we believe there is an omniscient Santa Claus who magically knows our secret desires, instead of it being a crafty gift exchange rigged by the parents who clearly know the contents of our wish list. We say we want safeguards and privacy. We say we want transparency. But when it comes to a World Wide Web, where all the cookies have been deleted and our social media timeline knows nothing about us, the user experience becomes less fluid.

The irony is that almost two-thirds (63 percent) of those polled in our survey don't believe companies having access to our personal information leads to a better, more personalized online experience, which is the chief reason companies like Facebook give for wanting our personal information in the first place. And yet, when an app we've installed doesn't let us tag our location to a post, inform us when a friend has tagged us in a photo, or alert us that the widget we were searching for is on sale this week, we feel slighted by our brave new world.

With the introduction of GDPR regulations this summer, the European Union has taken, collectively, the important first steps toward regaining some of the online privacy that we, as individuals, have been unable to take. GDPR casts the first stone at the Goliath that’s had free rein leveraging our personal information against us. By doling out harsh penalties and fines for those who abuse our private stats -- or at least those who aren’t abundantly transparent as to how they intend to use those stats -- the EU, and by extension, those countries conducting online business with them, has finally initiated a movement to curtail the hitherto laissez-faire practices of commercial internet enterprises. For this cyberspace Wild West, there’s finally a new sheriff in town.

I imagine that our survey takers applaud this action, although only about 25 percent were even aware of GDPR. At least on paper, the legislation has given us back some control over the privacy rights we've been letting slip away since we first signed up for a MySpace account. Will this new regulation affect our user experience on the internet? More than half of our respondents don't think so, and perhaps, for now, we are on the way toward a balancing point between the information that makes us easier to market to and the information that's been used for any purpose under the sun. It's time to leverage this important first step and stay vigilant about its effectiveness, with the goal of winning back even more privacy online.

Categorized in Internet Privacy

Source: This article was published on martechadvisor.com - Contributed by Member: Martin Grossner

Video has become an integral part of the overall content marketing mix; a significant chunk of people's online activity involves watching videos. 51% of marketers swear by video to justify the ROI of their content marketing, YouTube logs more than 500 million hours of watch time every day, and Facebook is not far behind with 100 million hours. These numbers will only grow in the years ahead.

On the other hand, this abundance of opportunity brings the problem of saturation. How do you stand out from the clutter and the competition in the eyes of your viewers and the search engines? In this article, we will go over six techniques (or hacks, if you will!) that will help your videos attract more viewers.

(Note: For convenience, we'll use YouTube for most of the examples. The tactics mentioned in this article apply to most video hosting platforms and streaming websites.)

1. Video Keyword Research

First, let's go over the most fundamental aspect of any content creation activity: keyword research. Before you start creating videos, generate a list of keywords and keyword phrases that you would like to rank for.

To start with, the YouTube Suggest feature shows you what people are searching for on YouTube. For instance, if you type Digital Marketing 2018 into the search box, YouTube will present relevant searches containing that keyword phrase, and you can generate more ideas from the suggestions.

Another way to get keyword ideas is to look at the keywords used in videos with high view counts. For instance, videos built around the keyword phrase Digital Marketing Trends for 2018 have received a good number of views.

2. Optimize Video Metadata

To help people find your content, optimize your video's metadata before it goes live. Here is how you can do that:

Title: Include the focus keyword in the video title, along with the problem the video solves. The title should be engaging and entice people to click on the video.

For example, a video might use Maximize Your Productivity as its focus keyword and promise to show how batching your tasks solves that problem. Note that the language is simple yet impactful.

Description: Use the focus keyword as early as you can in the description to tell the YouTube algorithm what the video is about. Descriptions of 200+ words help both YouTube and Google understand more about your video. You can also ask viewers to subscribe to your channel, direct them to your website, and so on. Avoid stuffing keywords into the description, or YouTube may penalize you.

Tip: If your video runs long (usually 8+ minutes), add timestamps to your description to help viewers navigate through the video; Gary Vaynerchuk's videos are a good model.

Upload a video transcript: This one will certainly help you outsmart your competition. Whenever you upload a video, upload the transcript along with it, because search engine bots crawl the closed captions included in the video. YouTube offers automatic captioning, but try not to rely on it: you are at the mercy of speech recognition technology, which is not always accurate.

Video Thumbnail: Although thumbnails don't directly influence SEO factors, an eye-catching thumbnail will certainly attract more viewers, leading to enhanced SEO.

Make sure your thumbnail attracts eyeballs and tells viewers what the video is about.

3. Create Playlists to Increase Your Watch Time

Watch time measures how much time people spend watching your videos. It's an important metric because YouTube rewards videos that keep viewers engaged on the platform. To optimize your watch time, create playlists on specific topics that keep viewers hooked for longer. Also, while building playlists, make sure each playlist name is keyword-rich to reap the SEO benefits.

4. User Engagement

If people like your videos, leave comments, share them, or subscribe to your channel, YouTube takes it as a signal that your videos are engaging. Here are two simple tips to increase user engagement with your videos.

a. Ask viewers to like, comment on, and share your videos, and to subscribe to your channel. Ask specific questions so they can easily answer them in the comments section.

b. Make a point of replying to the comments on your videos; set aside a specific time to answer them all. Replying to comments makes your viewers feel heard and draws them into more engagement with your videos, which further increases watch time.

5. Schema Markup

Schema.org is a joint initiative of Google, Yahoo!, Bing, and Yandex to create a structured data markup vocabulary. Implementing Schema markup will help your videos stand out in standard search results. If you are not sure whether it's worth the effort, Matt Cutts himself has affirmed its significance.

For video, you need to provide the description, thumbnail URL, upload date, and duration. Google's structured data documentation spells out everything regarding schema markup for videos.
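To make this concrete, here is a minimal sketch in Python that assembles that metadata as a schema.org VideoObject and prints a JSON-LD block for the video's play page. The title, URLs, and dates are hypothetical placeholders, not values taken from this article.

import json

# Minimal VideoObject structured data (schema.org/VideoObject).
# Every value below is a placeholder; substitute your own video's details.
video_schema = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Digital Marketing Trends for 2018",
    "description": "Five trends that will shape digital marketing in 2018.",
    "thumbnailUrl": "https://example.com/thumbs/trends-2018.jpg",
    "uploadDate": "2018-01-15",
    "duration": "PT8M30S",  # ISO 8601 duration: 8 minutes 30 seconds
    "contentUrl": "https://example.com/videos/trends-2018.mp4",
}

# Embed the output inside the play page, typically in the <head>.
print('<script type="application/ld+json">')
print(json.dumps(video_schema, indent=2))
print('</script>')

Once the block is on the page, Google's Rich Results Test can confirm that it parses as valid video structured data.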

6. Submit a Video Sitemap

Video sitemaps provide search engine bots with the location and metadata of the videos on your website. While the bots can crawl the content themselves, submitting a video sitemap speeds up the process. The video sitemap should contain metadata such as the title, description, play page URL, thumbnail URL, and raw video file URL.
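As a rough illustration, the sketch below generates a one-video sitemap in Google's video sitemap format using only Python's standard library. The URLs, title, and description are hypothetical placeholders.

import xml.etree.ElementTree as ET

# Namespaces for the standard sitemap schema and Google's video extension.
SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
VIDEO_NS = "http://www.google.com/schemas/sitemap-video/1.1"
ET.register_namespace("", SITEMAP_NS)
ET.register_namespace("video", VIDEO_NS)

urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")
url = ET.SubElement(urlset, f"{{{SITEMAP_NS}}}url")

# The play page URL: the page on your site where the video is embedded.
ET.SubElement(url, f"{{{SITEMAP_NS}}}loc").text = "https://example.com/videos/trends-2018"

video = ET.SubElement(url, f"{{{VIDEO_NS}}}video")
ET.SubElement(video, f"{{{VIDEO_NS}}}title").text = "Digital Marketing Trends for 2018"
ET.SubElement(video, f"{{{VIDEO_NS}}}description").text = "Five trends that will shape digital marketing in 2018."
ET.SubElement(video, f"{{{VIDEO_NS}}}thumbnail_loc").text = "https://example.com/thumbs/trends-2018.jpg"
# The raw video file URL; for an embedded player, use video:player_loc instead.
ET.SubElement(video, f"{{{VIDEO_NS}}}content_loc").text = "https://example.com/videos/trends-2018.mp4"

ET.ElementTree(urlset).write("video-sitemap.xml", xml_declaration=True, encoding="utf-8")

The resulting video-sitemap.xml can then be submitted through Google Search Console alongside your regular sitemap.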

 If you have already started with video marketing and feel a little lost when it comes to getting more views, these hacks will set you in the right direction. How do you plan to implement these tactics? Let us know in the comments below!

Categorized in Search Engine