Linda Manly

Friday, 31 March 2017 16:43

Internet or splinternet?

The movement towards sovereign control of the internet is growing, and a degree of fragmentation already exists.

Who owns the internet? The answer is no one and everyone. The internet is a network of networks. Each of the separate networks belongs to different companies and organisations, and they rely on physical servers in different countries with varying laws and regulations.

But without some common rules and norms, these networks cannot be linked effectively. Fragmentation - meaning the end of the internet - is a real threat.

Some estimates put the internet's economic contribution to global gross domestic product as high as $4.2 trillion in 2016. A fragmented "splinternet" would be very costly to the world, but that is one of the possible futures outlined last month in the report of the Global Commission on Internet Governance, chaired by former Swedish Prime Minister Carl Bildt.

 

The internet now connects nearly half the world's population, and another billion people - as well as some 20 billion devices - are forecast to be connected in the next five years.

But further expansion is not guaranteed. In the commission's worst-case scenario, the costs imposed by the malicious actions of criminals, and the political controls imposed by governments, would cause people to lose trust in the internet and reduce their use.

Internet's shortcomings

The cost of cybercrime in 2016 has been estimated to be as high as $445bn, and it could grow rapidly. As more devices, ranging from cars to pacemakers, are placed online, malicious hackers could turn the "internet of things" (IOT) into "the weaponisation of everything". 

Massive privacy violations by companies and governments, and cyber-attacks on civilian infrastructure such as power grids - as recently happened in Ukraine - could create insecurity that undercuts the internet's potential.

A second scenario is what the commission calls "stunted growth". Some users capture disproportionate gains, while others fail to benefit.

Three or four billion people are still offline, and the internet's economic value for many who are connected is compromised by trade barriers, censorship, laws requiring local storage of data, and other rules that limit the free flow of goods, services, and ideas.


China has the largest number of internet users, but its "Great Firewall" has created barriers to large parts of the outside world.

Many governments censor services that they think threaten their political control. If this trend continues, it could cost more than 1 percent of GDP a year, and also impinge on people's privacy, free speech, and access to knowledge.

While the world could muddle along this path, a great deal will be lost and many will be left behind.

The appropriate approach

In the commission's third scenario, a healthy internet provides unprecedented opportunities for innovation and economic growth.

The internet revolution of the past two decades has contributed something like 8 percent of global GDP and brought three billion users online, narrowing digital, physical, economic, and educational divides.

The commission's report states that the IOT may result in up to $11 trillion in additional GDP by 2025.

 

The commission concluded that sustaining unhindered innovation will require that the internet's standards are openly developed and available; that all users develop better digital "hygiene" to discourage hackers; that security and resilience be at the core of system design - rather than an afterthought, as they currently are; that governments do not require third parties to compromise encryption; that countries agree not to attack the internet's core infrastructure; and that governments mandate liability and compel transparent reporting of technological problems to provide a market-based insurance industry to enhance the IOT's security.

Until recently, the debate about the most appropriate approach to internet governance revolved around three main camps.

The first, a multi-stakeholder approach, originated organically from the community that developed the internet, which ensured technical proficiency but not international legitimacy, because it was heavily dominated by American technocrats.

A second camp favoured greater control by the International Telecommunication Union, a UN specialised agency, which ensured legitimacy but at the cost of efficiency.

And authoritarian countries such as Russia and China championed international treaties guaranteeing no interference with states' strong sovereign control over their portion of the internet.

More recently, the commission argues, a fourth model is developing in which a broadened multi-stakeholder community involves more conscious planning for the participation of each stakeholder - the technical community, private organisations, companies, governments - in international conferences.

Giving away the internet

An important step in this direction was the United States Commerce Department's decision last month to hand oversight of the so-called IANA functions - the "address book" of the internet - to the Internet Corporation for Assigned Names and Numbers.

ICANN, with a government advisory committee of 162 members and 35 observers, is not a typical inter-governmental organisation: the governments do not control the organisation.

At the same time, ICANN is consistent with the multi-stakeholder approach formulated and legitimated by the Internet Governance Forum, established by the UN General Assembly.

Some American senators complained that when President Barack Obama's Commerce Department handed its oversight of the IANA functions to ICANN, it was "giving away the internet". 

But the US could not "give away" the internet, because the US does not own it.

While the original internet linked computers located entirely in the US, today's internet connects billions of people worldwide. Moreover, the IANA address book - of which there are many copies - is not the internet.

The US action last month was a step towards a more stable and open multi-stakeholder internet of the type that the Global Commission applauded. Let's hope that further steps in this direction follow.

Source : aljazeera.com

One thing most people are very good at is pretending. I was of the opinion that people cannot fake who they are for long, but recently a friend found out that the guy she was dating was married and had two children. She had been with him for six years! So yes, trusting anybody can be an issue, especially someone you have met recently, like your boyfriend or husband. So if you have suspicions (no one will blame you if you do), you should do some preliminary snooping on the web. Need help? Here are tips on how you can research your boyfriend or husband online.

 

  • Google: The first step, of course, is googling the person's name. Try different combinations like first name only, first and last name, an alias that the person might use, nicknames, etc. This will usually surface results from most websites that the person has signed up for, frequents or comments on.
  • Facebook: When on Facebook, don't just check out his pictures and status updates. Check who comments on his profile the most, what kind of comments they write and which pictures he is tagged in. Often you can spot flirting in the comments, which could point towards something more serious.
  • Instagram: What kind of pictures does he share? Who does he follow? Who are his followers? Get your answers for all these questions and also check the comments and reposts.
  • LinkedIn: If you have any concerns about where he works and whether he is being honest about it, LinkedIn is the place to check. Depending on how active he is on LinkedIn, you can find out about his entire career graph!
  • Dating and matrimonial sites: Here's the real deal. Do not shy away from searching for his name on specific dating and matrimonial sites. You might need to sign up for a few websites in order to browse them, so remember to create a fake account with fake details.

Source : thehealthsite.com

We love to think about how our personalities influence our love lives and friendships, but we should also be asking the same question about our jobs, according to research just published in Harvard Business Review.

Researchers for Deloitte's Business Chemistry system teamed up with Rutgers University anthropologist Helen Fisher and molecular biologist Lee Silver to study thousands of people's work styles. They created an assessment, gave it to more than 190,000 people, and observed more than 3,000 work sessions in action to determine how each type thrives and how different combinations work together.

While people can embody any one of these styles at any given moment, most draw primarily from one or two. To find out which you are, count how many of the qualities in each of these sets apply to you, and read below for advice.

 

Pioneer: outgoing, focused on the big picture, spontaneous, drawn to risk, adaptable, imaginative

This type is the most common among leaders. Pioneers like to try out new ideas and take risks—and don't like getting too bogged down in details. They're highly imaginative and dream big. The downside is that they can be overconfident and impulsive, which makes it important for them to listen to other perspectives.

Driver: quantitative, logical, focused, competitive, experimental, deeply curious

Drivers are decisive, like to see results, and value data. Since they can get impatient, they risk propelling forward with their ideas without thinking about how to execute them. Drivers should remember not to jump into anything without adequate consideration.

Integrator: diplomatic, empathetic, traditional, relationship-oriented, intrinsically motivated, nonconfrontational

An integrator is a "people person." They believe morale building is important to the workplace and tend to engage in conversation with their coworkers rather than get straight to business. They may dislike debates because they disrupt the peace, and they might play mediator when their coworkers argue. Since integrators don't want to make waves, drivers and pioneers sometimes end up bossing them around. But integrators shouldn't let anyone overshadow them, since the ability to cultivate work relationships is extremely valuable.

Guardian: methodical, reserved, detail-oriented, practical, structured, loyal

Guardians are perfectionists, which may be why they're the most stressed-out style. They want to know exactly how to go about something before they try it, and they may not speak up until they're sure of what they're going to say. Since guardians can get closed out of conversations for this reason, they need to remind themselves that their perspectives are worth hearing, even if they haven't thought through every detail yet. Guardians should also give themselves the chance to imagine different possibilities before dismissing them for logistical reasons.

Whatever your personality type, the researchers recommend working with your opposite to balance out the team and cultivate their qualities within yourself. This means drivers should work with integrators and guardians should work with pioneers. These are also the combinations most likely to find each other difficult to work with. So doing your best work may require reconciling differences and accepting that your personal approach isn't always the best one.

Source : glamour.com

Friday, 24 March 2017 14:07

How AI will shape the future of search

Artificial intelligence is changing the way users access information online. Columnist Justin Freid discusses where the trends are heading and what this might mean for marketers.

There is no doubt the search industry has evolved. Just one look at how search engine results pages are currently laid out shows how things have changed. We have come a long way from 10 blue links.

But have we gone far enough? At SXSW earlier this month, information access was a hot topic. People no longer head to Google’s search bar as their only way of accessing content.

 

How we access information is changing

Search engines used to be the primary (or sole) place a consumer would turn to when they needed an immediate answer. You entered a phrase, clicked a link and read the page.

But now, there are other places we are spending our time. In fact, the average consumer spends over 40 minutes a day on YouTube and 35 minutes on Facebook. We get our news from peers on social networks and can even consult WebMD about our health through Amazon's Alexa AI.

Most recently, Martin Sorrell, CEO of WPP, called Amazon the biggest threat to Google. When you think about it, if you are past research mode and want to buy something immediately, you will often bypass Google and head directly to Amazon.

The way results are presented to us is different

No longer are we dependent on a list of links. AI assistants across the board have changed the way content is presented back to the end user. From Siri and Cortana to Alexa, each answers your questions or searches in a unique way. Whether it is voice search or simply using the internal search function on your iPhone, results look different and include things we don’t normally consider traditional “search results,” such as apps, emails, social comments from peers and so on.

Chatbots, another huge topic at SXSW, are also becoming more and more popular. Many brands are using chatbots to present information to consumers as quickly as possible. Instead of making the consumer sift through content on a website, chatbots let them ask specific questions and get a response immediately. This process could potentially replace the need to search in a traditional manner.

 

How we search will be different

Depending on how we engage, the AI platform will shape how we search. Whether it is longer-tailed queries through voice commands or short queries entered on a mobile device, the questions we ask are shifting. There is also potential for the AI to live in new places. As the smart home evolves and becomes more affordable, AI has the potential to be accessed throughout your home and car. It could become second nature to utilize AI to access information throughout your day.

As marketers, we need to turn to AI as part of our strategy

Many readers of MarTech Today, like me, have begun to consider how these new developments and technologies will affect the way we advertise and attract new customers for our clients. While there are not clear-cut paid media opportunities integrated with each AI platform, many companies, such as Amazon, have discussed monetizing theirs.

As marketers, we need to begin thinking outside of bulk sheets containing thousands of keywords and begin thinking about how the consumer mindset will shift and the new behaviors that will come along with the mass adoption of artificial intelligence.

Author : Justin Freid

Source : martechtoday.com

In October, the ACLU released emails showing that a social media monitoring company called Geofeedia had tracked the accounts of Black Lives Matter protesters for law enforcement clients. The revelations of social media spying made headlines and led Twitter, Facebook, and Instagram to cut off Geofeedia's access to bulk user data (which in turn prompted the company to slash half its staff). Since then, two more social media monitoring companies, Snap Trends and Media Sonar, lost Twitter data access for similar surveillance activities.

Civil liberties advocates have celebrated these decisions, but new documents suggest police still have plenty of other tools to spy on social media users.

Jennifer Helsby, co-founder of the police accountability group Lucy Parsons Labs, provided CityLab with a slideshow prepared by a former employee of the Cook County Sheriff’s Office Intelligence Center that sheds some light on how police use social media. The presentation shows intelligence analysts how to mine location and content data from Twitter, Facebook, and Instagram—and advises them on setting up fake accounts and assembling dossiers on persons of interest.

 

One tip shows sites such as Statigram and Instamap, which can help law enforcement analyze photo trends or collect photos on individuals in targeted areas. This example points to images of individuals collected using Instamap near the Cook County Jail, which the Cook County Sheriff’s Office operates, as well as images of a child, a young woman, and families in Chicago.

Other slides reveal more advanced monitoring techniques. Geofeedia, the presentation states, can be used to geolocate users and conduct a "Radius and Polygram search" of an area for social media content. Echosec, a lesser-known tool, can monitor and geofence users, which allows police (and marketers) to track and collect users' posts as soon as they are disseminated within a bounded area.

These tools rely on individuals’ public social media posts, but the slideshow also explains that police can use “catfishing”—creating fake accounts—to get non-public social media data, even though such accounts are not permitted on Facebook, Twitter, and Instagram.

While social media surveillance is often thought of as targeting certain locations or terms, such as hashtags, the Cook County Sheriff's Office records suggest that intelligence analysts are also compiling information on persons of interest for longer-term retention, not just for "situational awareness" at public events. One example is a sample "Intelligence Information Report" used to collect photos and other information.

The presentation doesn't get into whether there are limits on who can be the target of these operations, or what legal safeguards those targets are afforded. One slide mentions terms such as "probable cause" and "search warrant," but there is no explanation of whether or how legal procedures constrain the monitoring. Some of the slides suggest this police monitoring is not necessarily focused on dangerous criminal suspects. For example, the presentation links to an ABC news clip featuring a specialized LAPD unit "dedicated to tracking teen parties in real time by monitoring social media." (The Cook County Sheriff's Office declined CityLab's requests for comment on its social media monitoring program.)


Joseph Giacolone, a retired NYPD Detective Sergeant and professor at John Jay College's Law, Police Science and Criminal Justice Administration Department, says that while these undercover social media accounts may violate the terms of Facebook and Twitter, that doesn't make them illegal. "It's no different than running an undercover operation, or a buy and bust," says Giacolone. "Requesting a friendship, as a policeman you have to be careful of that entrapment issue. But if you just put a half-naked picture of a woman in there, you're gonna get in. I mean how hard is it really? They're gonna invite you right in."

Nationwide, experts say there is very little clarity on how often undercover online operations are carried out. Surveys suggest such activities are often left up to the discretion of police officers themselves. "Seventy percent of the detectives using this are self-taught and like half of the departments don't even have a policy or procedure on how to use it," says Giacolone, citing a 2014 LexisNexis survey on the use of these tools. "So cops are working without a net, so to speak, and you are going to see lots of challenges on these things."

 

The same survey found that while only 48 percent of departments polled had formal processes for using social media in investigations, 80 percent of law enforcement officers reported that they felt that "creating personas or profiles on social media outlets for use in law enforcement activities is ethical."

One major concern among civil liberties advocates is that such methods are unfairly targeting the general public, not those who’ve already committed a crime. The LexisNexis survey also indicated that 40 percent of law enforcement officers had used social media monitoring to keep tabs on “special events” and 67 percent of respondents believed that social media monitoring “is a valuable process in anticipating crimes.”

ACLU of Northern California policy attorney Matt Cagle, who helped break the news on Geofeedia’s surveillance of Black Lives Matter, worries that covert accounts lack judicial oversight. ”This new world of surveillance products shouldn’t give law enforcement a blank check to create undercover accounts and collect information on law abiding people,” he says. “By using undercover accounts, they are potentially friending multiple people and getting much broader access than a warrant to Facebook for specific information would allow.”

So, who are really most likely to be targeted by social media snooping by law enforcement? Brendan McQuade, an assistant professor of sociology at SUNY-Cortland who studies law enforcement intelligence operations, is concerned that these methods will be used to crack down on political dissent. “It’s bad criminal tradecraft to broadcast your stuff on social media, so I would think this is more geared to political policing,” he says. McQuade points out that data already available to law enforcement, such as phone company records, allow police to grab suspects’ association and location information far more efficiently.

Geofeedia advertised its usefulness on this front during what it called “The Freddie Gray Riots” in Baltimore last year. In promotional materials aimed at police departments, the firm claims that its product, which was used by Baltimore County Police Department’s Criminal Intelligence Unit, helped police “to run social media photos through facial recognition technology to discover rioters with outstanding warrants and arrest them directly from the crowd.”

Giacolone says that social media surveillance can be useful for crime-fighting, but generally only for low-level youths, not serious players. "You got two types of people out there: the young kids, the look-at-me generation that posts everything online, and then you have the older crowd that has learned to use social media to sell drugs through anonymous social media and it's difficult to identify them. Most of these young kids are just posting, like, 'Hey, everybody, I just robbed the store on the corner, look what I got.'"


 

Even if these spying operations were only limited to suspects in such low-level criminal investigations, civil liberties advocates warn that users should be concerned about the ways in which their data is being retained and interpreted by law enforcement. Over the last few years, the NYPD has relied on millions of social media posts to justify “gang” raids across the city, targeting neighborhood youth crews whose online discussions sometimes relate to violence in their communities. Subsequent prosecutions for these raids have been controversial because social media chatter about shootings, drops in individuals’ volume of social media activity, and online photos of young men flashing hand signs have been questionably interpreted as evidence of gang membership and involvement in violent criminal conspiracies.

Matt Mitchell, a security researcher with the racial justice organization CryptoHarlem, notes that the NYPD has carried out these operations by building intelligence dossiers on social media users over years, as the CCSO appears to be doing. "The police are saying, 'I'm going to follow you everywhere you go, write down every word you say, and look at every picture you take', and now with these undercover accounts they are your friend hearing everything you say in confidence," says Mitchell. "If you're black or brown, your social media content comes with a cost—it's a virtual prison pipeline."

The NYPD's social media surveillance of alleged gangs often collects and sifts through content from teens and pre-teens over years, only for it to be used against them in court much later down the line. The Cook County Sheriff's Office's retention of social media data through intelligence reports could enable similar prosecutions. "What you say online is not always real. It's not the same as something police pick up on a wiretap, but as long as you type it, for police it becomes real," says Mitchell. "If you look hard enough, you'll find something, no matter who you're looking at. Take this post today, look at this thing they did five years ago, put it together, and you can draw any conclusion you want."

Author : GEORGE JOSEPH

Source : http://www.citylab.com/crime/2016/12/how-police-are-watching-on-social-media/508991/

A dark web vendor is reportedly selling millions of decrypted Gmail and Yahoo accounts in an unspecified underground marketplace. Over 20 million Gmail accounts and five million Yahoo accounts from previous massive data breaches are now reportedly up for sale.

A dark web vendor going by the name "SunTzu583", who has previously also allegedly listed over one million decrypted Gmail and Yahoo accounts on the dark web, now appears to have ramped up his efforts.

According to a HackRead report, in separate listings, the cybercriminal is allegedly offering 4,928,888 and 21,800,969 Gmail accounts, of which the latter has been listed for $450 (0.4673 Bitcoins). While the first listing includes email addresses and clear text passwords, 75% of the second listing allegedly contains decrypted passwords and 25% hashed passwords.

 

The Gmail data reportedly corresponds to those stolen in previous breaches, including the Nulled.cr hack and the Dropbox data breach.

The cybercriminal is also allegedly selling 5,741,802 Yahoo accounts for $250 (0.2532 Bitcoins). Most of the accounts listed were allegedly disabled and appear to have been stolen from MySpace, Adobe and LinkedIn data breaches.

For both the Gmail and Yahoo accounts, the dark web vendor claims that not all the email and password combinations work directly, warning potential buyers to not expect them to match in all cases.

The data has reportedly been matched against those on popular data breach notification platforms such as Have I Been Pwned and Hacked-DB. However, the data has not been independently verified by IBTimes UK.

How to keep your data safe

Cybercrime ramped up to alarming levels last year, which also saw a slew of massive cyberattacks. Those concerned about keeping their accounts and data safe should adopt sound security practices. In the event of a breach, or even a potential one, it is recommended that passwords be changed immediately. It's also essential not to reuse passwords; use a unique password for each of your accounts.
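
A practical companion to this advice is checking whether a password has already shown up in a known breach. The following is a minimal illustrative sketch (not from the original article), assuming the public Have I Been Pwned "Pwned Passwords" range endpoint; only the first five characters of the password's SHA-1 hash ever leave your machine.

    # Illustrative sketch: query the Pwned Passwords k-anonymity API.
    import hashlib
    import urllib.request

    def breach_count(password: str) -> int:
        """Return how many times the password appears in known breach data (0 if none)."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        req = urllib.request.Request(
            "https://api.pwnedpasswords.com/range/" + prefix,
            headers={"User-Agent": "password-check-example"},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read().decode("utf-8")
        # Response lines look like "HASHSUFFIX:COUNT"; match our suffix locally.
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
        return 0

    if __name__ == "__main__":
        print(breach_count("password123"))  # a deliberately weak example; expect a large count

Any password with a non-zero count should be treated as compromised and replaced.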

Author : Ashok
Source : https://www.yahoo.com/news/over-20-million-gmail-5-091238421.html

Facebook’s AI Research Division has developed an incredibly powerful new tool that will enhance users’ photo searching capabilities, help the visually impaired, and protect against objectionable material and spam appearing on the site.

“Until recently, online search has always been a text-driven technology, even when searching through images,” said Director of Applied Machine Learning Joaquin Candela in a blog post announcing the latest photo search upgrade.

The so-called ‘backbone’ of Facebook’s AI, the FBLearner Flow, runs 1.2 million AI experiments a month, a sixfold increase from 12 months ago, thanks to a huge improvement in automated machine learning.

 

Lumos is the machine learning platform that Facebook has deployed to catalogue both images and videos hosted on the website.

“More than 200 visual models have been trained and deployed on Lumos by dozens of teams, for purposes such as objectionable-content detection, spam fighting, and automatic image captioning,” the social media superpower said.

By pooling the resources of various departments within the machine learning and AI research subdivisions, Facebook claims users can now search through their photos without the need for proper tagging or a detailed caption.

What that means is that users can now search for old holiday snaps based on what they were wearing at the time, even if they can't recall exactly what year the holiday took place.

A deep neural network with millions of parameters powers the object recognition aspect of the platform, meaning not only specific objects, but also the context around them, can be identified.
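
As a rough sketch of the general idea only - this is not Facebook's Lumos pipeline, and the model, file names and labels below are illustrative stand-ins - an off-the-shelf image classifier can tag photos so they become searchable by keyword without manual captions:

    # Illustrative sketch: tag photos with a pretrained classifier and build a
    # keyword -> photos index so images can be found without manual captions.
    import json
    from PIL import Image
    import torch
    from torchvision import models

    weights = models.ResNet50_Weights.IMAGENET1K_V2   # small stand-in for a production model
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()
    labels = weights.meta["categories"]

    def tag_photo(path: str, top_k: int = 5) -> list:
        """Return the model's top-k labels for one photo."""
        img = Image.open(path).convert("RGB")
        with torch.no_grad():
            scores = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]
        return [labels[i] for i in scores.topk(top_k).indices.tolist()]

    index = {}
    for photo in ["holiday_001.jpg", "holiday_002.jpg"]:   # hypothetical file names
        for label in tag_photo(photo):
            index.setdefault(label.lower(), []).append(photo)

    print(json.dumps(index, indent=2))   # e.g. index.get("seashore") lists matching photos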

In addition, the search queries and their respective results are gathered on a continuing basis, so there is ever-improving precision in future search results.

Candela also stated that the company’s overall goal is to “weave AI into the Facebook engineering fabric.”

 

What this entails for the future of not only the platform itself but social media as a concept remains up for debate, especially given ever-increasing cyber-security concerns.

Privacy has become an increasing social media gremlin, and these latest features do raise certain questions about the right to privacy in the future.

It remains unclear whether there is an opt-out for the new feature or whether current security protocols could be incorporated without issue.

Such a capability could afford hackers and users with malicious intent a wealth of knowledge previously unheard of, such as a person's travel patterns or even their daily routine (depending on the frequency of posts).

Recent updates to the site, such as the “stalker-ish” feature ‘Discover people,’ have raised concerns about where the social media platform is headed and to what extent users can control such new features.

Given that Facebook is fast approaching two billion users, such a vast amount of image and video data available from around the world could provide the company with a truly mind-blowing amount of information never before seen in history.

Source : https://www.rt.com/viral/376324-facebook-ai-powered-photo-search/

Saturday, 18 March 2017 14:45

A technical argument for quality content

Those who know me know that I’m primarily a technical SEO. I like on-site content optimization to be sure, but I like what I can measure — and now that keyword densities are mostly gone, I find it slightly less rewarding during the process (though equally rewarding via the outcome).

For this reason, I’ve never been a huge fan of the “quality content is awesome for rankings simply because quality content is awesome for rankings” argument for producing … well … quality content.

Quality content is hard to produce and often expensive, so its benefits need to be justified, especially if the content in question has nothing to do with the conversion path. I want to see measured results. The arguments for quality content are convincing, to be sure — but the pragmatist in me still needs to see hard evidence that quality content matters and directly impacts rankings.

 

I had two choices on how to obtain this evidence:

  1. I could set up a large number of very expensive experiments to weight different aspects of content and see what we come up with.
  2. Or I could do some extensive research and benefit from the expensive experiments others have done. Hmmmmm.

As I like to keep myself abreast of what others are publishing, and after seeing a number of documents around the web recently covering exactly this subject, I decided to save the money and work with the data available — which, I should add, is from a broader spectrum of different angles than I could produce on my own. So let’s look at what quality content does to rankings.

What is quality content?

The first thing we need to define is quality content itself. This is a difficult task, as quality content can range from 5,000-word white papers on highly technical areas, to evergreen content that is easy but time-consuming to produce, to the perfect 30-second video on the right product at the right time. Some quality content takes months to produce, some minutes.

Quality content cannot be defined by a fixed set of criteria. Rather, it is putting the content your visitors want or need in front of them at the right time. Quality is defined by the simple principle of exceeding your visitors' expectations of what they will find when they get to your web resource. That's it.

Now, let’s look at what quality content actually does for your rankings.

Larry Kim on machine learning and its impact on ranking content

Anyone in the PPC industry knows Larry Kim, the founder and CTO of WordStream, but the guy knows his stuff when it comes to organic as well. And we share a passion: we both are greatly intrigued by machine learning and its impact on rankings.

We can all understand that machine learning systems like RankBrain would naturally be geared toward providing better and better user experiences (or what would they be for?). But what does this actually mean?

Kim wrote a great and informative piece for “Search Engine Journal” providing some insight into exactly what this means. In his article, he takes a look at WordStream’s own traffic (which is substantial), and here’s what he found:

  • Kim looked at the site’s top 32 organic traffic-driving pages prior to machine learning being introduced into Google’s algorithm; of these pages, the time on site was above average for about two-thirds of them and below average for the remaining third.
  • After the introduction of machine learning, only two of the top 32 pages had below-average time on site.

The conclusion Kim draws from this — and which I agree with — is that Google is becoming better at weeding out pages that do not match the user intent. In this case, they are demoting pages that do not have high user engagement and rewarding those that do.

The question is, does it impact rankings? Clearly the demotion of poor-engagement pages on the sites of others would reward sites with higher engagement, so the answer is yes.

 

Kim also goes on to discuss click-through rate (CTR) and its impact on rankings. Assuming your pages have high engagement, does having a higher click-through rate impact your rankings? Here’s what he found:

CTR impact on rankings in organic search.

What we can see in this chart is that over time, the pages with higher organic click-through rates are rewarded with higher rankings.

What do CTRs have to do with quality content, you might ask? To me, the titles and descriptions are the most important content on any web page. Write quality content in your titles and descriptions, and you’ll improve your click-through rate. And provided that quality carries over to the page itself, you’ll improve your rankings simply based on the user signals generated.

Of course, I would be remiss to be basing any full-scale strategy on a single article or study, so let’s continue …

Eric Enge on machine learning’s impact on ranking quality content

Eric Enge of Stone Temple Consulting outlined a very telling test, and the results appeared right here on Search Engine Land in January. Here’s what I love about Enge: He loves data. Like me, he’s not one to follow a principle simply because it’s trendy and sounds great — he runs a test, measures and makes conclusions to deploy on a broader scale.

What Stone Temple Consulting did for this test was replace the text on category pages — which had been written as “SEO copy” and was not particularly user-friendly — with new text that “was handcrafted and tuned with an explicit goal of adding value to the tested pages.” It was not SEO content by the classic definition; it was user content. And here’s what they found:

Content optimized for humans ranks better than content optimized for engines.

The pages they updated with high-quality content saw a 68 percent increase in traffic, whereas the control pages took an 11 percent hit. Assuming that all the pages would otherwise have taken the 11 percent drop, the pages with the gains actually improved by roughly 80 percent. This was accomplished simply by adding content for the users instead of relying on content that search engines wanted back in 2014.

Eric points out in his article that Hummingbird‘s role in helping Google to understand natural language, combined with the speed in adjustments facilitated by machine learning, allows Google to reward sites that provide a good user experience — even when it’s not rich in keywords or traditional SEO signals.

Brian Dean on core ranking metrics

Back in September, Brian Dean of Backlinko wrote an interesting piece breaking down the core common elements of the top-ranking sites across a million search results. This is a big study, and it covers links, content and some technical considerations, but we're going to be focusing only on the content areas here.

So with this significant amount of data, what did they find the top-ranking sites had in common with regard to content?

  • Content that was topically relevant significantly outperformed content that didn’t cover a topic in depth.
  • Longer content tended to outrank shorter content, with the average first-page result containing 1,890 words.
  • A lower bounce rate was associated with higher rankings.

Topically relevant content appears to be more about what is on the page and how it serves users than whether it contains all the keywords. To use their example, for the query “indonesian satay sauce,” we find the following page in the results:

Quality content ranks higher than stronger sites.

This page is beating out stronger sites, and it doesn’t actually use the exact term “indonesian satay sauce” anywhere on the page. It does, however, include a recipe, information on what a satay is, variations on it and so on. Basically, they beat stronger sites by having better content. Not keyword-stuffed or even “keyword-rich,” just better and more thorough content.

 

Quality content, it seems, has taken another victory in the data.

So what we see is …

I could go on with other examples and studies, but I’d simply be making you suffer through more reading to reinforce what I believe these three do well: illustrate that there is a technical argument for quality content.

More important perhaps is the reinforcement that “quality content” follows no strict definition, apart from providing what your user wants (although that may periodically be biased by what Google believes your user wants prior to attaining any information about them directly). Your click-through rates, time on page, bounce rate, the thoroughness of your pages, and pretty much anything to do with your visitors and their engagement, all factor in.

The goal, then, is to serve your users to the best of your ability. Produce high-quality content that addresses all their needs and questions and sends them either further down your conversion funnel or on to other tasks — anything but back to Google to click on the next result.

If you need one more reinforcement, however, I have one but it has no supporting authoritative data aside from its source. Periodically, Google either releases or has leaked their Quality Rater’s Guidelines. You can download the most recent (2016) in this post. While I did a fuller evaluation of these guidelines here, the key takeaway is as follows:

The quality of the Main Content (MC) is one of the most important considerations in Page Quality rating. For all types of webpages, creating high quality MC takes a significant amount of at least one of the following: time, effort, expertise, and talent/skill.

So we don’t get metrics here, but what we do get is a confirmation that Google is sending human raters to help them better understand what types of content require time, effort, expertise and talent/skill. Combine this information with machine learning and Hummingbird, and you have a system designed to look for these things and reward it.

Now what?

Producing quality content is hard. I’ve tried to do so here and hope I’ve succeeded (I suppose Google and social shares will let me know soon enough). But if you’re looking at your site trying to think of where to start, what should you be looking at?

This, of course, depends on your site and how it's built. My recommendation is to start with the content you already have, as Eric Enge did in his test. Rather than trying to build out completely new pages, simply come up with a way to serve your users better with the content you already have. Rewriting your current pages — especially ones that rank reasonably well but not quite where you want them to be — yields results that are easily monitored, and you'll not only be able to see ranking changes but also get information on how your users are reacting.

If you don’t have any pages you can test with (as unlikely as that may be), then you need to brainstorm new content ideas. Start with content that would genuinely serve your current visitors. Think to yourself, “When a user is on my site and leaves, what question were they trying to answer when they did so?” Then create content to address that, and put it where that user will find it rather than leave.

If users are leaving your site to find the information they need, then you can bet the same thing is happening to your competitors. When these users are looking for the answer to their question, wouldn’t it be great if they found you? It’s a win-win: You get quality content that addresses a human need, and you might even intercept someone who was just at a competitor’s website.

Beyond that, the world is your oyster. There are many forms of quality content. Your job is “simply” to find the pearl in the sea of possibilities.

Author : Dave Davies

Source : http://searchengineland.com/technical-argument-quality-content-270556

When conducting a Hazard Analysis to comply with the Food Safety Modernization Act (FSMA)'s new rules, many are relying on Google to search for scientific studies, guidance and other useful information. But Google, like any tool, is only as helpful as one knows how to use it. Most fail to use Google Search effectively, wasting valuable time weeding through (literally) hundreds of millions of search results with little success.

What Are Google Search Operators?
Google Search Operators are simple punctuation, expressions or a combination of the two that enable you to narrow searches to specific sites [e.g., the U.S. Food and Drug Administration (FDA)’s website], file types and words and phrases (as well as exclude unwanted search words and phrases). Put simply, search operators are like secret code that filter out fluff.

 

How Do Google Search Operators Work?
Below are a few Google Search Operators I use on a regular basis for FSMA- and Hazard Analysis and Critical Control Points (HACCP)-related Google searches.

site:[._ _ _]
Typing “site:” followed by a url extension (e.g., .com, .gov, .org) will limit your search to a specific site, like government sites (.gov), nonprofit organizations (.org) and university web pages (.edu). To search the FDA website for recalls involving cashews, for example, type in your search terms then add the FDA website extension (.fda.gov):
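
An illustrative example (the original showed the query as a screenshot, so the exact wording here is an assumption):

    cashew recall site:fda.gov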

     

To search the California Department of Public Health’s website, just add the agency’s url extension (.ca.gov):
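
Again illustrative rather than the original screenshot, with your own search terms in front:

    [your search terms] site:.ca.gov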

     


“[search term]”
Google Search generates search results based on what it thinks you’re looking for. Typing quotation marks around search terms will limit your search only to the exact word(s) or phrase(s) you entered.
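
For example, an illustrative query (not taken from the original article) that treats a phrase as a single unit:

    "hazard analysis and critical control points" site:fda.gov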

-[search term]

Typing a negative (-) sign before a search term will exclude that search term from the search results. This operator is extremely useful when pestered by unrelated search results that share a common word or phrase. For example, if you are searching for cashew recalls and keep pulling up recalls involving mixed nuts, you can list the other nuts in your search terms with negative signs.
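
Continuing the article's cashew example, an illustrative query might look like:

    cashew recall -almond -pistachio -walnut -peanut site:fda.gov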


filetype:[file format]

 


Guidance documents and scientific articles are often stored online in PDF format. To pull up only PDFs, follow your search terms with filetype:pdf. If you prefer to search for Word documents, type filetype:doc or filetype:docx. With this operator, you can target any file type.
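
The screenshot that followed here in the original is not reproduced; an illustrative version of the search it described (HACCP plan templates in PDF form from university sites) is:

    HACCP plan template filetype:pdf site:.edu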


The search above will yield templates in PDF form exclusively from university sites, but the U.S. Department of Agriculture (USDA) also has great HACCP resources. Modify your search terms to search the USDA website by typing:
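
For illustration (again standing in for the original screenshot):

    HACCP plan template filetype:pdf site:usda.gov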

Google Scholar
Google Scholar is Google's search engine that is specially designed to "help you find relevant work across the world of scholarly research." While Google Scholar is useful, and I recommend using it, expect many of the results to be highly technical scientific studies with limited practical application for a small business or a person without an advanced science degree. Still, I have found useful studies with this search engine. The above search operators also work in Google Scholar.

Summary
Search operators make Google searches (whether you’re using Google Search or Google Scholar) more efficient and can be used to enhance any FSMA- or HACCP-related query. Search operators weed out hundreds of millions of irrelevant search results, helping you locate what you need quickly and (relatively) painlessly.

Charlie Kalish is managing member of Food Safety Guides, a progressive food safety and quality systems consulting firm that specializes in FSMA compliance, HACCP, third-party audit preparation and food safety and quality plan development. He is also senior director for food safety at UC San Diego Extension and a Food Safety Preventive Controls Alliance Lead Instructor for human and animal food.

Author : Charlie Kalish

Source : http://www.foodsafetymagazine.com/enewsletter/fsma-tip-locate-hazard-analysis-resources-fast-with-google-search-operators/

PHILADELPHIA (WPVI) -- Social media never played a bigger role than it did in this past presidential election.

Now as we settle into 2017 what role will these platforms play?

Well, for all the talk about how divided this country is, the truth is that when it comes to technology, we've never been more connected.

But the apps we use to share our lives are changing to better serve the changing world in which we use them.

Two newer apps, Signal and Confide, are fast becoming fixtures on smartphones everywhere.

 



The apps, both of them free, operate like standard text messaging with one major difference.

Whatever you send through them is encrypted, unreadable, and unretrievable on any device other than the one to which you sent it.
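
To illustrate the underlying idea only - this is a toy sketch, not the actual protocol used by Signal or Confide, which is far more elaborate - end-to-end encryption means a message is locked with keys only the two endpoints hold:

    # Toy end-to-end encryption sketch: X25519 key agreement plus AES-GCM,
    # using the "cryptography" package. Not any real messaging app's protocol.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice_priv = X25519PrivateKey.generate()   # each side keeps its private key on-device
    bob_priv = X25519PrivateKey.generate()

    def session_key(my_private, their_public):
        """Derive a shared 256-bit key from a Diffie-Hellman exchange."""
        shared = my_private.exchange(their_public)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo").derive(shared)

    key = session_key(alice_priv, bob_priv.public_key())
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"meet at six", None)

    # Only Bob's private key can reproduce the same session key and decrypt.
    plaintext = AESGCM(session_key(bob_priv, alice_priv.public_key())).decrypt(nonce, ciphertext, None)
    assert plaintext == b"meet at six"

A server relaying only the ciphertext never sees the message, which is why such messages are unreadable anywhere but the intended device.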

Apps like these are increasingly popular in an age of prying eyes for consumers driven by fear of hackers and government surveillance.

And speaking of government -

"We are seeing some fatigue with politics on those same platforms," Caroline Bean of Digitas Health, a local company that specializes in maximizing a company's imprint on social media, told Action News.

That fatigue is resulting in one of two things: A push for something other than political rants and a need for social platforms to provide something new.

"2017 is a time for the big social media platforms to stay really big - Facebook, Twitter and Instagram - and we'll be seeing how they adopt a lot of the features that the more niche apps and social platforms have specialized in," Bean said.

For now, Facebook remains the social juggernaut.

 



A recent survey by the Pew Research Center says roughly 8 in 10 Americans who are online use the site, up 7 percent from the year before.

Twitter, too, is seeing a resurgence, driven, in part, by the President, known to use that platform at all hours.

"Politics is changing the way people are on the social platforms right now. Notably, Facebook was used to organize the Women's March," Bean said.

Much more than just a place to post pictures, social media is now a kind of town square in which to rally and organize, and to tell the other side of any given story.

Facebook is doing that with its launch of live video, allowing users to take their followers into their experiences.

Instagram does it through its newly debuted Instagram Stories - a kind of diary of a day, in pictures and videos.

But social media is also turning its "look at me" reputation into something with medical benefits.

"Things like TeleDoc and Doctors on Demand are ways that people can see somebody pretty quickly," Bean said.

They are bringing people who suffer from specific illnesses together for support.

And a trend parents may want to be aware of?

Finstagram, a smaller group within your Instagram world, where only select friends can see a particular set of posts.

"So really this is showing this understanding that generations have that social media is all about sort of your public and private face and they are figuring out ways to show both sides," Bean said.

That's one of the fascinating things about social media, the always shifting, but delicate, balance between public and private.



So a reminder to always think twice before you post and always assume it will be seen by someone you don't know.

Author : Brian

Source : http://6abc.com/news/special-report-the-changing-role-of-social-media/1759116/
