Logan Hochstetler

Netflix is still the reigning champion of streaming entertainment, and we will always credit the media giant with transforming home viewing. Even if Blockbuster stores still stood in every neighborhood, no one would bother to drive to one.

But as liberating as Netflix is, people still get frustrated with the service. The index of films is so gigantic, it's hard to decide what to watch. The app keeps diligent records of your viewing habits, which anyone in your home can scrutinize. And what happens when you travel? These may be first world problems, but figuring out the Netflix game can be extremely gratifying.

Here are some clever hacks for getting the most out of your Netflix experience. Many users are surprised how pliable the platform is, once they learn a few tricks. If you haven't fiddled much with your Netflix options, get ready to view the service in a whole new way.

1. Delete Browsing History

Some movies we like to see in groups. Others, we like to see alone. Whether your guilty pleasure is "Gigli" or "Last Tango in Paris," you may not want everyone to know your private cinematic tastes. That's why Netflix makes it easy to hide certain selections from your profile so that nobody else can see what you've watched. You'll find this option under Your Account.

2. Hacking Netflix

Not long ago, there was a site called "A Better Queue," which enabled you to cut through all the unknown and poorly reviewed films that clutter Netflix. That site went under, but Hacking Netflix remains a great resource: You'll find regular updates on new films and TV shows added to the catalog. Despite the aggressive name, Hacking Netflix is nothing nefarious; it's a news site that religiously compiles new releases.

3. Pick a Movie at Random Through Netflix Roulette

Some people can spend an hour sifting through the endless Netflix options and still come up with nothing. (Author Douglas Coupland called this "option paralysis.") Wouldn't it be better to just leave your decision up to fate? That's why the internet invented Netflix Roulette, a site that takes your favorite genres, actors and directors and assigns you a film at random. Satisfy any mood without having to give it a second thought.
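The idea behind a picker like Netflix Roulette is simple enough to sketch: filter a catalog by whatever preferences you give it, then choose at random. The toy catalog, titles and field names below are our own illustration, not Netflix Roulette's actual code.

```python
import random

# A tiny stand-in catalog; the titles and fields are invented for illustration.
CATALOG = [
    {"title": "Film A", "genre": "sci-fi", "director": "Director X"},
    {"title": "Film B", "genre": "comedy", "director": "Director Y"},
    {"title": "Film C", "genre": "sci-fi", "director": "Director Z"},
]

def spin(catalog, genre=None, director=None):
    """Keep only titles matching the given preferences, then pick one at random."""
    matches = [m for m in catalog
               if (genre is None or m["genre"] == genre)
               and (director is None or m["director"] == director)]
    return random.choice(matches) if matches else None

print(spin(CATALOG, genre="sci-fi"))
```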

4. Netflix Enhancer

This nifty add-on only works on Google Chrome, but it definitely earns its title: Enhancer allows you to hover over a film with your mouse and immediately watch trailers and read reviews. The immediacy of Enhancer saves you the extra effort of looking up background information on YouTube and Rotten Tomatoes.

5. Sift Through Netflix's Weird Sub-Categories

It almost sounds like an urban myth: Netflix has many little subcategories, types of films that you never imagined anyone would identify, like "Steamy Sci-Fi and Fantasy" and "World Music Concerts." Users realized that they could enter certain codes into Netflix's URL and discover extremely specific cinematic genres. Several movie buffs have decoded the Netflix system; one easy tool is Secret Codes Search.
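The trick itself is just a URL pattern: Netflix serves sub-category pages at https://www.netflix.com/browse/genre/ followed by a numeric ID. Here's a minimal sketch; the IDs in the dictionary are placeholders (real codes are catalogued by tools like Secret Codes Search), and the helper function is ours.

```python
# Netflix exposes sub-category pages at URLs of the form
# https://www.netflix.com/browse/genre/<numeric id>.
# The IDs below are placeholders; lists of real codes are compiled by
# tools such as Secret Codes Search.
GENRE_CODES = {
    "Steamy Sci-Fi and Fantasy": 1111,  # placeholder
    "World Music Concerts": 2222,       # placeholder
}

def genre_url(name):
    """Build the browse URL for a named sub-category."""
    return f"https://www.netflix.com/browse/genre/{GENRE_CODES[name]}"

print(genre_url("World Music Concerts"))
```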

6. Maximize Video Quality

Most people are satisfied with Netflix picture quality, whether they're watching movies on their phone or on a wide-screen TV. It's true that Netflix usually has a pixellated look, but only a fraction of viewers really grumble about it. Yet what if you could improve the resolution? Turns out you can, and pretty easily, as long as you've got a strong Wi-Fi connection. Just access Your Account and visit Playback Settings, and you'll find an option to boost your picture quality to "high."

7. Go International

This is a common shocker: People travel internationally, bringing their iPads along. They arrive in their hotel, log onto Wi-Fi, and access Netflix. But wait! What are all these weird movies? Where's "The Walking Dead"? Turns out each country has its own selection of films and TV, and many mainstays of U.S. entertainment are not included. How to fix it? Subscribe to Mediahint, which enables you to "unblock content" almost anywhere on the globe.

Source: This article was published on komando.com

Too many domain names with non-Latin letters are still shut out of the global Internet economy.

Companies that do business online are missing out on billions in annual sales thanks to a bug that is keeping their systems incompatible with Internet domain names made of non-Latin characters. Fixing it could also bring another 17 million people who speak Russian, Chinese, Arabic, Vietnamese, and Indian languages online.

Those are the conclusions of a new study by an industry-led group sponsored by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization responsible for maintaining the list of valid Internet domain names. The objective of the so-called Universal Acceptance Steering Group, which includes representatives from a number of Internet companies including Microsoft and GoDaddy, is to encourage software developers and service providers to update how their systems validate the string of characters to the right of the dot in a domain name or e-mail address—also called the top-level domain.

The bug wasn’t an obvious problem until 2011, when ICANN decided to dramatically expand the range of what can appear to the right of the dot (see “ICANN’s Boondoggle”). Between 2012 and 2016, the number of top-level domains ballooned from 12 to over 1,200. That includes 100 “internationalized” domains that feature a non-Latin script or Latin-alphabet characters with diacritics, like an umlaut (¨), or ligatures, like the German Eszett (ß). Some 2.6 million internationalized domain names have been registered under the new top-level domains, largely concentrated in the Russian and Chinese languages, according to the new study.

Many Web applications or e-mail clients recognize top-level domains as valid only if they are composed of characters that can be encoded using American Standard Code for Information Interchange, or ASCII.  The problem is most pronounced with e-mail addresses, which are required credentials for accessing online bank accounts and social media pages in addition to sending messages. In 2016, the group tested e-mail addresses with non-Latin characters to the right of the dot and found acceptance rates of less than 20 percent.

The bug fix, which entails changing the fundamental rules that validate domains so that they accept Unicode, a different standard for encoding text that works for many more languages, is relatively straightforward, says Ram Mohan, the steering group’s chair. The new research suggests that the potential economic benefits of making the fix outweigh the costs. Too many businesses, including e-commerce firms, e-mail services, and banks, simply aren’t yet aware that their systems don’t accept these new domains, says Mohan.
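To make the bug concrete, here is a rough sketch (our own example, not code from any of the companies mentioned) contrasting an ASCII-only check on the top-level domain with a Unicode-aware one, plus the Punycode conversion that ASCII-only systems can fall back on:

```python
import re

# Old-style rule: the label after the last dot must be 2-6 ASCII letters.
ASCII_ONLY = re.compile(r"\.[A-Za-z]{2,6}$")

# Unicode-aware rule: allow any letters in the final label. (A production
# system would also check the IANA list of valid top-level domains and
# apply full IDNA processing; this only sketches the idea.)
UNICODE_AWARE = re.compile(r"\.[^\W\d_]{2,}$")

for domain in ("example.com", "пример.рф"):   # the second is a Russian IDN
    print(domain,
          "ascii-only:", bool(ASCII_ONLY.search(domain)),
          "unicode-aware:", bool(UNICODE_AWARE.search(domain)))

# Systems that must stay ASCII internally can convert to Punycode instead
# of rejecting the name outright:
print("пример.рф".encode("idna").decode("ascii"))  # xn--e1afmkfd.xn--p1ai
```

The low acceptance rates the study measured come from rules like the first pattern being baked into e-mail and web forms.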

Things are improving, though. In 2014, Google updated Gmail to accept and display internationalized domain names without having to rely on an inconvenient workaround that translated the characters into ASCII. Microsoft is in the process of updating its e-mail systems, which include Outlook clients and its cloud-based service, to accept internationalized domain names and e-mail addresses.

It’s not just about the bottom line, says Mark Svancarek, a program manager for customer and partner experience at Microsoft, and a vice chair of the Universal Acceptance Steering Group. To let millions of people be held back from the Internet because “the character set is gibberish to them” is antithetical to his company’s mission, he says.

Acceptance of non-ASCII domains is likely to spur Internet adoption, since a large portion of the next billion people projected to connect to the Internet predominantly speak and write only in their local languages, says Mohan. Providing accessibility to these people will depend in many ways on the basic assumptions governing the core functions of the Internet, he says. “The problem here is that in some ways this is lazy programming, and because it’s lazy programming, it’s easy to replace it with better programming.” 

Source: This article was published on technologyreview.com by Mike Orcutt


What is Ransomware?

Ransomware is a form of malicious software that locks up the files on your computer, encrypts them, and demands that you pay to get your files back. Wanna Decryptor, or WannaCry, is a form of ransomware that affects Microsoft’s Windows operating system. When a system is infected, a pop up window appears, prompting you to pay to recover all your files within three days, with a countdown timer on the left of the window. It adds that if you fail to pay within that time, the fee will be doubled, and if you don’t pay within seven days, you will lose the files forever. Payment is accepted only with Bitcoin.

How does it spread?

According to the US Computer Emergency Readiness Team (US-CERT), under the Department of Homeland Security, ransomware spreads easily when it encounters unpatched or outdated software. Experts say that WannaCry is spread by an internet worm -- software that spreads copies of itself by hacking into other computers on a network, rather than the usual case of prompting unsuspecting users to open attachments. It is believed that the cyber attack was carried out with the help of tools stolen from the National Security Agency (NSA) of the United States.

Some forms of malware can lock the computer entirely, or set off a series of pop-ups that are nearly impossible to close, thereby hindering your work.

What can be done to prevent this?

The best way to protect your computer is to create regular backups of your files. The malware only affects files that exist in the computer. If you have created a thorough backup and your machine is infected with ransomware, you can reset your machine to begin on a clean slate, reinstall the software and restore your files from the backup. According to Microsoft’s Malware Protection Centre, other precautions include regularly updating your anti-virus program; enabling pop-up blockers; updating all software periodically; ensuring the SmartScreen filter (in Internet Explorer) is turned on, which helps identify reported phishing and malware websites; and avoiding attachments that appear suspicious.
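As a minimal sketch of the backup advice (the paths are placeholders, and a real backup should land on a drive or service the infected machine cannot overwrite), something like this run on a schedule keeps timestamped copies you can restore from:

```python
import shutil
from datetime import datetime
from pathlib import Path

# Placeholder paths: point SOURCE at the folder you care about and DEST_ROOT
# at an external or network drive that malware on this machine cannot reach.
SOURCE = Path.home() / "Documents"
DEST_ROOT = Path("/mnt/backup_drive")

def backup():
    """Zip the source folder into a timestamped archive on the backup drive."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive_base = DEST_ROOT / f"documents-{stamp}"
    shutil.make_archive(str(archive_base), "zip", root_dir=SOURCE)
    return archive_base.with_suffix(".zip")

if __name__ == "__main__":
    print("Backup written to", backup())
```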

Who has it affected so far?

It was first reported from Sweden, Britain and France, but Russia and Taiwan are said to be the worst hit, according to US media. Over 75,000 systems have been affected. Major companies that have reported attacks are FedEx, Telefonica and the UK's National Health Service.

Source: This article was published on thehindu.com

  • Chrome for Android now lets you save entire websites for reading later.
  • It's perfect for folks who want to read where a connection isn't available, like on the subway.
  • It's only available for Android users right now, unfortunately, but iOS users should try "Read it Later" or "Pocket."

Google rolled out a new feature for Android users on Monday that allows them to save entire Websites in Chrome for reading later.

With a single long press, you can save a site and then read it when you're underground on the subway without a connection, up in a plane, or anywhere else where you're far from a network.

Unlike similar options that allow you to save the text of an article for later, Chrome's feature saves the entire website so that you can view it as it was meant to be. Here's how.

  • Open up Chrome and head to a website with a list of stories you might want to read later, like CNBC.com.
  • Long press on any link and tap "Download link" from the popup menu.
  • The link will download. If you don't have a connection, you can tell Chrome to download the story when you're back online.
  • To access your saved stories, just view your open tabs or tap the menu button and click "Downloads."

That's it! And don't be bummed if you're on iOS. Apps such as Pocket and Read it Later offer a similar experience. 

Source: This article was published on cnbc.com by Todd Haselton

A shopper in Miami searches an online retail site. The Legislature is considering a state tax on online shopping. Wilfredo Lee / The Associated Press

You soon might have to pay a 6.5 percent sales tax on your online shopping to help balance the state’s budget.

An online sales tax plan from House Democrats would bring in an estimated $340.8 million for public schools, higher education financial aid and other education-related expenditures over the next two years.

The tax also could help brick-and-mortar businesses compete with online sellers, said Rep. Kris Lytton, D-Anacortes, a key House budget negotiator.

“We have a lot of businesses in Washington state that are on an uneven playing field where their products are taxed, but that same product might not be taxed if you bought it on the internet,” Lytton said.

The Democratic proposal has not drawn outspoken resistance from Republican budget negotiators.

“If we need additional revenue, this is one of the taxes we ought to explore,” said Rep. Terry Nealey, R-Dayton, the GOP’s ranking minority member on the House Finance Committee.

Right now, Washington retailers must collect sales taxes on online purchases only if they are selling their own products to state residents.

When online retailers, such as Amazon, eBay or Overstock, sell products from out of state, they are not required to collect sales tax.

However, because Amazon has a physical presence in Washington, it must collect tax when selling its own products to people in the state, such as its e-reader Kindle.

The new proposal would require online retailers from any state to either collect state sales tax or provide buyers with information on how to pay the tax to Washington.

It would apply only to companies grossing more than $10,000 in sales in Washington.

Online retailers that opt to give buyers information on paying the tax would have to file an annual report with the state Department of Revenue to ensure the right amount of sales tax is collected.

The report would include buyers’ names, what they bought, how much they spent, and the billing, mailing and shipping addresses they provided to the retailer.

Sellers also would have to preserve the information for five years, in case the state needs to verify a report.

Lytton said businesses would be more likely to collect sales tax themselves because the second option is “more burdensome.”
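To make the mechanics concrete, here is a small, hypothetical sketch of the rule as described above: at or below $10,000 in Washington sales a retailer is exempt; above that, the retailer either collects the 6.5 percent state tax at checkout or reports the purchases instead. The function names and structure are ours, not language from the bill.

```python
THRESHOLD = 10_000   # annual gross sales to Washington buyers, in dollars
STATE_RATE = 0.065   # the proposed 6.5 percent state sales tax

def tax_due(purchase_amount):
    """State tax a collecting retailer would add at checkout."""
    return round(purchase_amount * STATE_RATE, 2)

def obligation(annual_wa_sales, chooses_to_collect):
    """Which duty the proposal would place on an out-of-state retailer."""
    if annual_wa_sales <= THRESHOLD:
        return "exempt"
    if chooses_to_collect:
        return "collect sales tax at checkout"
    return "report purchases annually to the Department of Revenue"

print(tax_due(100.00))                              # 6.5 on a $100 order
print(obligation(25_000, chooses_to_collect=False)) # must report instead
```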

The proposed online sales tax is part of a $3 billion revenue package House Democrats have offered to comply with a court order to fix how the state pays for public schools.

Nealey said he doesn’t support most of the Democratic budget proposal, but said a “reasonable” tax on online retailers could reduce an “unfair advantage” of those who don’t have to pay sales taxes.

There is no federal regulation on online sales taxes, but Nealey said it is needed to avoid a host of varying laws in different states. In the absence of one, he said, Washington should consider bringing its tax code up to speed with new online markets.

Colorado has passed a sales tax to regulate online retailers, and at least two other states — Arkansas and Nebraska — have considered one.

Nealey said the proposal still needs to nail down how the tax would be implemented and enforced.

Sen. Dino Rossi, R-Sammamish, said he was skeptical of the tax because there is little history of it.

He said he wanted to see how it works in Colorado before committing Washington to it.

Rossi, a GOP budget writer in the Republican-led Senate, said he didn’t want to support or oppose the online sales tax proposal before Democrats approve their tax package in the House.

“Nobody knows exactly how it would work and the magnitude and the reliability of that prediction,” he said.

Source: This article was published in thenewstribune.com by Forrest Holt

Google has no single authority metric but rather uses a bucket of signals to determine authority on a page-by-page basis.

Google’s fight against problematic content has drawn renewed attention to a common question: how does Google know what’s authoritative? The simple answer is that it has no single “authority” metric. Rather, Google looks at a variety of undisclosed metrics which may even vary from query to query.

The original authority metric: PageRank

When Google first began, it did have a single authority figure. That was called PageRank, which was all about looking at links to pages. Google counted how many links a page received to help derive a PageRank score for that page.

Google didn’t just reward pages with a lot of links, however. It also tried to calculate how important those links were. A page with a few links from other “important” pages could gain more authority than a page with many links from relatively unremarkable pages.
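The underlying math is easy to sketch: each page splits its score among the pages it links to, and the process repeats until the scores settle, so links from already-important pages count for more. The toy graph and damping factor below are illustrative only, not Google's actual implementation.

```python
# Toy link graph: each page lists the pages it links to.
LINKS = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Simple power iteration: repeatedly pass each page's score to its links."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)  # each outgoing link gets an equal share
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

print(pagerank(LINKS))  # "C" collects the most (and best) links, so it scores highest
```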

Even pages with a lot of authority — a lot of PageRank — weren’t guaranteed to rocket to the top of Google’s search results, however. PageRank was only one part of Google’s overall ranking algorithm, the system it uses to list pages in response to particular searches. The actual words within links had a huge impact. The words on the web pages themselves were taken into account. Other factors also played a role.

Calculating authority today

These days, links and content are still among the most important ranking signals. However, artificial intelligence — Google’s RankBrain system — is another major factor. In addition, Google’s ranking system involves over 200 major signals. Even our Periodic Table of SEO Success Factors that tries to simplify the system involves nearly 40 major areas of consideration.

None of these signals or metrics today involve a single “authority” factor as in the old days of PageRank, Google told Search Engine Land recently.

“We have no one signal that we’ll say, ‘This is authority.’ We have a whole bunch of things that we hope together help increase the amount of authority in our results,” said Paul Haahr, one of Google’s senior engineers who is involved with search quality.

What are those things? Here, Google’s quiet, not providing specifics. The most it will say is that the bucket of factors it uses to arrive at a proxy for authority is something it hopes really does correspond to making authoritative content more visible.

One of the ways it hopes to improve that mix is with feedback from the quality raters that it employs, who were recently given updated guidelines on how to flag low-quality web pages.

As I’ve explained before, those raters have no direct impact on particular web pages. It’s more like the raters are diners in a restaurant, asked to review various meals they’ve had. Google takes in those reviews, then decides how to change its overall recipes to improve its food. But in this case, the recipes are Google’s search algorithms, and the food is the search results it dishes up. Google hopes the feedback from raters, along with all its other efforts, provides results that better reward authoritative content.

“Our goal in all of this is that we are increasing the quality of the pages that we show to users. Some of our signals are correlated with these notions of quality,” Haahr said.

Authority is primarily assessed on a per-page basis

While there’s no single authority figure, that bucket of signals effectively works like one. That leads to the next issue. Is this authority something calculated for each page on the web, or can domains have an overall authority that transfers to individual pages?

Google says authority is done on a per-page basis. In particular, it avoids the idea of sitewide or domain authority because that can potentially lead to false assumptions about individual pages, especially those on popular sites.

“We wouldn’t want to look at Twitter or YouTube as, ‘How authoritative is this site?’ but how authoritative is the user [i.e., individual user pages] on this site,” Haahr said.

It’s a similar situation with sites like Tumblr, WordPress or Medium. Just because those sites are popular, using that popularity (and any authority assumption) for individual pages within the sites would give those pages a reward they don’t necessarily deserve.

What about third-party tools that try to assess both “page authority” and “domain authority?” Those aren’t Google’s metrics. Those are simply guesses by third-party companies about how they think Google might be scoring things.

Sitewide signals, not domain authority

That’s not to say that Google doesn’t have sitewide signals that, in turn, can influence individual pages. How fast a site is or whether a site has been impacted by malware are two things that can have an impact on pages within those sites. Or in the past, Google’s “Penguin Update” that was aimed at spam operated on a sitewide basis (Haahr said that’s not the case today, a shift made last year when Penguin was baked into Google’s core ranking algorithm).

When all things are equal with two different pages, sitewide signals can help individual pages.

“Consider two articles on the same topic, one on the Wall Street Journal and another on some fly-by-night domain. Given absolutely no other information, given the information we have now, the Wall Street Journal article looks better. That would be us propagating information from the domain to the page level,” Haahr said.

But pages are rarely in “all things equal” situations. Content published to the web quickly acquires its own unique page-specific signals that generally outweigh domain-specific ones. Among those signals are those in the bucket used to assess page-specific authority. In addition, the exact signals used can also vary depending on the query being answered, Google says.
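As a purely hypothetical illustration of that "all things equal" behavior (this is not Google's code, and the signals are invented), a scorer along these lines leans on a sitewide value only when page-level evidence is missing, and lets page-specific signals dominate once they exist:

```python
def score(page_signals, domain_score):
    """Hypothetical scorer: page-level signals first, sitewide value as fallback.

    page_signals: dict of per-page signal values in [0, 1]; empty for brand-new
    content with no history of its own. domain_score: a sitewide value in [0, 1].
    """
    if not page_signals:
        # With nothing page-specific to go on, the sitewide signal decides.
        return domain_score
    page_score = sum(page_signals.values()) / len(page_signals)
    # Once page-specific evidence exists, it dominates; the domain only nudges.
    return 0.9 * page_score + 0.1 * domain_score

# Two brand-new articles on the same topic, one on an established site:
print(score({}, domain_score=0.95))
print(score({}, domain_score=0.30))
# Later, page-specific signals outweigh the domain:
print(score({"links": 0.8, "engagement": 0.7}, domain_score=0.30))
```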

This article was published in searchengineland.com by Danny Sullivan

It was way back in 2010 that we last heard anything much from InVisage Technologies, when it talked about making a new type of camera sensor suitable for smartphones, capable of producing dramatically better HDR pictures and videos than traditional sensors. It calls the tech QuantumFilm, and – five years later – it announced the final product. Here’s everything we know about it.

QuantumFilm in action

The 13-megapixel camera on your next smartphone may not be like all the rest, if it’s a QuantumFilm sensor from InVisage Technologies. The sensor promises to take astonishing, natural-looking HDR stills and shoot stable action video footage, at a level usually reserved for high-end DSLR cameras.

We got a chance to see the QuantumFilm technology in action at a private demo, plus learn a little more about how it works. InVisage’s QuantumFilm sensor replaces the traditional silicon CMOS sensor used in most cameras today, including those in phones. It’s thinner (0.5 micron versus the 3 micron of a conventional back-illuminated CMOS sensor), and manages light more effectively. InVisage claims it absorbs 100 percent of light, versus silicon’s 70 percent, with less “cross talk,” or light leakage. The result is a higher dynamic range image that’s closer to film, with greater detail, and it does everything without any sneaky software tweaks.

The clue as to how it differs from normal camera sensors is in the name. InVisage’s QuantumFilm is a brand new quantum dot film that’s applied to a chip and is suitable for use in either a digital camera or a smartphone. Voltage is applied at different degrees to adjust the dynamic range. Although we can mess around with HDR modes on our phones now, they increase noise in the final result, which isn’t the case with QuantumFilm.

Hands-on demo

What’s it like? We saw two demonstrations: One related to the way it handles light, and the other showing the effectiveness of its global shutter, which smoothes out video when shooting moving objects. The former compared an iPhone with a prototype smartphone fitted with a QuantumFilm sensor. It’s immediately obvious how much more effectively the QuantumFilm camera – the one at the bottom – handles the darker side of the box than the iPhone. Holding up a Galaxy S6 Edge Plus resulted in slightly better performance, but not at the level displayed by the QuantumFilm model.


For the global shutter demo, another set of the same devices were suspended over a musical instrument. Plucking the strings showed how the QuantumFilm sensor’s global shutter kept the image steady and natural. The iPhone’s rolling shutter didn’t. Check out our video of the demonstration to see the difference. Again, we tried the same test with the Galaxy S6 Edge Plus, and the result was somewhere in-between. It didn’t confuse the shutter in the same way, and the strings were merely fuzzy and blurry in motion, but that’s certainly not ideal.

It’s worth pointing out that both these demos were set up and performed by InVisage, and had specific conditions relevant to both problems. For example, the iPhone’s shutter speed was manually tuned to mimic typical lighting conditions, as opposed to letting the device adjust shutter speed automatically; while it exhibited the rolling shutter effect, it’s also not how most of us use our phones in the real world. We’ve not had the chance to try the camera out independently.

It may not be long until we do, though. InVisage told us a Chinese manufacturer has a smartphone with the QuantumFilm sensor inside ready for release in the very near future. The phone will be sold internationally, although exact details – such as the manufacturer’s name – weren’t shared. Additionally, InVisage said two of the three major DSLR camera manufacturers have also chosen to use the QuantumFilm sensor in future hardware.

QuantumFilm is an exciting move forward for smartphone photography. It doesn’t rely on clever software tricks, or a specially tuned app, to improve pictures and video — it does so with cool science and an entirely new sensor. Based on our very early impressions, the difference is noticeable in changeable light situations, but without extended tests, we can’t see just how much better the results are over software HDR enhancements. We’re keen to find out, though.

All the announcements prior to our demo with the tech

InVisage’s original announcement of QuantumFilm came soon after the company released a short film demonstrating its ability, and various technical details. The 13-megapixel sensor is tiny, measuring just 8.5mm square, and 4mm high, meaning it’ll take up only a small amount of space inside a phone. The 1.1-micron pixel sensor has three extra dynamic range settings beyond the regular CMOS sensors we’re used to, and it achieves the effect without software, avoiding added noise.

InVisage’s demo film gave us a first look at the clever electronic global shutter, improving on the standard rolling shutter to produce smoother and more stable action video, even at 2K and 4K resolutions. The sensor works with hardware running Qualcomm and MediaTek processors.

Demo film shows potential

The short film was called Prix. It’s more than seven minutes long, and fairly excruciating, but skip through to see how the camera deals with some challenging lighting conditions, and once the racing starts, see some smooth tracking shots.

In addition to the short film, InVisage has made a making-of film, which explains what QuantumFilm is all about. Again, it’s hard to watch thanks to all the “acting,” but there are some handy comparison images that point out QuantumFilm’s strengths over a CMOS-equipped phone.

We’ll keep you updated on InVisage’s QuantumFilm technology here, so check back often.

This article was published on Digital Trends by Andy Boxall

Google has been rapidly adding new features to its Home connected speaker recently, and the latest will be handy for chefs. Google Home can now read out recipes step-by-step -- but it sounds like you'll need to kick off the process using your smartphone. According to a blog post that went up today, Home will be able to read back more than 5 million recipes from sites like All Recipes, Food Network, Bon Appetit, the New York Times and more. First, though, you'll need to find the recipe you want on your phone using either the Google Assistant on Android or Google search on your iPhone.

From there, you'll find a new "send to Google Home" button, provided you have one set up of course. Once you've done that, the recipe will be loaded up and ready for you -- saying "OK Google, start cooking" will prompt Home to read you the first line. You can ask Home to repeat a direction or go to a specific step at any time; otherwise saying "OK Google, next step" will move you forward. You can keep using Google Home to play music, answer questions and basically do whatever else you might want without interrupting the recipe. It'll know you're cooking and move forward with the recipe regardless of whatever else you ask it between steps.

It's a natural extension of Google Home, particularly considering how nice it is to go hands-free when you're working in the kitchen. And, as with many features Google Home has gained recently, it's something that Amazon's Echo can already do. However, Google's implementation has a few advantages -- rather than building specific "skills" for each recipe source, Google Home works with a whopping 5 million recipes from multiple sites right away. The first recipe skill for Echo came from All Recipes, and it now works with Food Network as well. Echo has a few advantages over Google's recipe feature, as well -- you can tell Echo what you have in your fridge or pantry and it'll suggest options for you.

If you want to try cooking with Google Home, the new feature is rolling out this week.

This article was published on engadget.com by Nathan Ingraham

Infowars article was used to help illustrate how quality raters might judge content, but raters have no power to censor, ban or penalize pages.

Google has neither censored nor banned the Infowars site, despite what some headlines out there say. The company did, however, rescind an example that its quality rater contractors received using Infowars and other sites in terms of how to judge page quality generally.

Memeorandum has a roundup on the news, including a Business Insider take that gets the situation generally right.

The news came from Mike Cernovich’s blog, which gets it incorrect with a headline saying “Google Takes Out Major Contract to De-List InfoWars from Its Search Index.” Infowars itself gets it similarly incorrect with its headline of “Breaking: Google Admits To Censoring Infowars, Claims It Will Stop.”

Infowars not delisted or banned

Let’s get the censorship debunking out of the way first. Google currently has indexed about 341,000 pages from Infowars.

If Infowars had been delisted, all these pages would not be there. Nor do the Cernovich or Infowars reports suggest that Google somehow suddenly restored all these pages in the day or so since all this developed. By the way, Google has indexed more pages from Infowars than Google rival Bing, which has 301,000.

Search for Infowars by name on Google, and it ranks at the top.

Search for something like “Google censorship,” and Infowars is in the top results.

The same is true for a search on “wikileaks clinton.”

The site’s story claiming Israel had a role in the 9/11 attacks also ranks in the top results for a search on that topic.

These are not examples that would happen with a site that was delisted or censored.

Google’s quality raters & what they do

So what did happen? We have to start first with Google’s “search quality raters.” It has about 10,000 of these that it contracts with to perform search quality evaluations. Those raters are asked to perform a huge number of searches and then rate the quality of results they get back.

Those quality raters do not have the ability to ban, delist, censor or penalize any particular listing or site. This is what Google says. It’s also something I’ve never seen reported to happen by the army of independent search engine optimization professionals out there — and believe me, they’d scream about that.

Only Google employees can ban a site or pages from a site. When Google employees take such a specific “manual action,” the site impacted gets a notification — something that Infowars has not said that it received.

Instead, the data from the quality raters is used to broadly change Google’s search algorithms, the recipes that help Google decide what, from the billions of pages it gathers from across the web, to list for particular searches. Those algorithms do not target specific sites. However, sites that are deemed to match certain criteria, such as low-quality content or piracy, can be impacted by them.

Infowars cited in example to quality raters

To perform quality evaluations, quality raters are given guidelines. These are public; you can read them yourself here. They are filled with examples of various pages and guidance on potentially how these might be evaluated, as a way for the raters themselves to make their own general decisions with the searches they do. They do not contain instructions saying that any particular site or sites should always be poorly rated.

In addition to these guidelines, it appears that the vendor or vendors that Google hires to manage these quality raters may give additional instructions — and that’s where all of the current concern comes in. One of these vendors used Infowars as an example, and Cernovich was sent a screen shot of this example, which you can see on his blog.

Infowars was used as an example of how a rater might judge a page’s quality generally. The example gave reasons why the page might earn a “Low to Medium” score, putting it midway to below-average on the quality scale.

That “Low to Medium” score sits in the lower-middle portion of the overall page quality scale laid out in Google’s official guidelines.

The reasons for a lower score were mainly due to the reputation Infowars has overall for “controversial, often debunked claims.” Despite this, the example did not suggest that the page get the lowest possible score. In fact, it explained why this should not happen and even defended the reason why that particular page would be useful to some Google searchers:

Simply presenting an unorthodox perspective on the news doesn’t exactly qualify, even if it may be “harmful” in the larger, more elliptical sense of the term. You could say that we’re being generous with the upper range here, but the site may be considered useful to some users looking for an alternative source of information redeems [sic] it in some small way.

Those are not instructions telling quality raters to delist or censor Infowars, even if they had the ability to do so. Nor are they explicitly saying, as Cernovich wrote, that all Infowars pages must be rated “low quality” or “low to medium.” In fact, the instructions make it clear that raters should take many things into account when rating any page:

Remember that all news organizations that have existed long enough have courted their share of controversy and mistakes. What we’re asking you to do here is to ensure that the line between reliability and outright negligence of facts is honored when you apply your best rating judgement.

That comes after a comparison of this particular article to a similar one at CNN. It cites elements that the CNN article has that the Infowars article lacks as part of the reasoning for the lower rating for Infowars. Potentially, a different Infowars article might rate higher depending on its particular merits.

To recap: no call to ban. No call to delist. No call to censor. None of which the quality raters could do. There’s not even an explicit call to mark all Infowars pages as “low” quality.

Google rescinds the example

While there’s no censorship involved here, Google has said the specific example shouldn’t have been given by its vendor. It sent Search Engine Land this statement:

In our Search quality guidelines for raters, we do not give guidance on how to rate websites in general, but rather give broad guidance on how to evaluate the quality of specific Search result pages. Our intention is to get unbiased feedback from raters in order to inform algorithmic improvements. In this instance, we have confirmed that a vendor we work with sent out more detailed instructions to some raters without our knowledge, which included references to specific sites. This is in conflict with the intent of our guidelines and the vendor has taken action to remove these references in their training module.

Google’s statement makes no mention of lifting a ban or censorship on Infowars because, of course, the example itself didn’t say this should happen, nor could quality raters make such a thing happen.

Oddly, however, giving specific examples is not in conflict with the “intent” of Google’s own guidelines. As mentioned earlier, those guidelines are full of examples, such as one about a Wall Street Journal article used to illustrate a page that might get a high rating, and another about a TMZ article that might be deemed to have poorly met a user’s expectations because it is stale.

Suffice to say, the TMZ example has been in the official guidelines for ages and no one has interpreted that as being Google is saying that TMZ should be censored, banned or delisted.

Google’s search quality challenge

Actually, the real worry for Infowars and any site with content that may seem “upsetting or offensive” to a wide variety of searchers is from a completely different and new part of the guidelines that were added last month.

Google launches new effort to flag upsetting or offensive content in search is our story about that. But as that story notes, any content flagged that way doesn’t get an immediate demotion nor a ban. Again, quality raters can’t do that. It’s more data for the search algorithms to use.

Google hopes to change those algorithms so that they’re more likely to show highly trustworthy and factual information for common topics. If the algorithm can detect content deemed suspect, that might be less likely to show. But such content wouldn’t be removed overall and would show for more specific searches.

None of this may change the minds of those on the right convinced that Google — a generally left-leaning company — is out to censor its results. But then again, last month Google was also accused of actively promoting climate change denial and last December accused of prioritizing sites with a right-wing bias. Both were untrue.

So which is it? Is Google purposely slanting right or left? The answer is neither. Instead, with billions of pages indexed and billions of searches happening each day, people can interpret results to lean however they want.

For more on that and Google’s struggle as the examination of its search results becomes more polarized, see my article from earlier this month: A deep look at Google’s biggest-ever search quality crisis.

Source: Searchengineland.com by Danny Sullivan

Google's unconfirmed update has been a hot topic in the SEO world. Has 'Fred' really hit the rankings of all websites?

At the beginning of March, it was reported that Google had released a major new, unconfirmed ranking update, one that slipped under the radar and supposedly caused a large number of websites to lose their rankings within the search engine results pages (SERPs), sending the SEO industry into an apparent state of panic and leaving those of us who know better confused.

Deciphering ‘Fred’

The industry came to call this mysterious update ‘Fred’ following a joke from a Googler, and it is said to target websites that put a larger focus on revenue than on the user. This includes websites monetising from ads and affiliates, as well as those featuring low-quality links and content that the search engine considers to be of low value to the user.

However, Google has yet to confirm or deny the update. So although a range of industry professionals have been quick to investigate and comment on the impact it has already had on businesses’ website rankings, it’s still much too early to pinpoint exactly what this update is targeting. Without confirmation and specifics, the ambiguous statements and the attention that many in the SEO community are paying to this elusive update (which, again, Google has not confirmed even happened) are being blown out of proportion.

Big impact

We are in a time where fake news has dominated the conversation. ‘Fred’ is just one example of how misinformation published on the web and social media can create a domino effect of reactions. Even the Washington Post commented that the SEO community is ‘freaking out’, calling for all companies whose websites are an important part of their lead generation and branding to start using SEO services and employing SEO consultants so as not to get caught out by any more Google updates.

The overly-dramatic reaction towards this update has caused the story to be spread and shared without any questions. This has led to widespread acceptance of misleading information about what the update entails, causing confusion and panic. The fact that a US national publication, which usually pays no attention to the SEO industry, has stopped to cover this story demonstrates just how far fake news can travel and how it can instil an unnecessary sense of fear that gets spread even more.

Professionals within the industry are generalising that the ‘Fred’ update has hit most websites - and has hit them hard. Yet most of us dealing with high quality, white hat websites have seen no changes or impacts on our clients’ search engine rankings. Penalising low-quality content has been an algorithmic focus for Google since 2011. It’s no secret that the search engine favours web pages that have been designed with the user in mind. The fact that Google is targeting low-quality websites with this unconfirmed update is nothing new and shouldn’t be surprising to the industry.

In short, websites that have reported a loss in traffic and positionings within the SERPs obviously haven’t been placing the user at the forefront of their SEO strategies.

User in mind

The reality is, every day and often twice daily, Google makes small updates to their search engine algorithms that largely go unnoticed. Because of this, SERPs are always fluctuating and a website's ranking could change at any point in time.

Each update Google releases, whether large or small, is focused more and more on black hat SEO. It’s more than likely that ‘Fred’ was one of Google’s smaller updates. Businesses that have reported drops in their SERP positions were most likely penalised by a previous, smaller update, and that, paired with a minimal update like ‘Fred’, became the reason for their fluctuation.

That being said, it’s important to take note of the overreaction ‘Fred’ has caused. Those who use black hat tactics and had a primary focus on ad revenue will at some point in time get caught out. In order to bypass any of the effects of a new algorithmic update, big or small, websites should only be designed with one thing in mind: the user.

This article was published on performancein.com by Simon Schnieders
