Say goodbye to Flash and hello to a pure HTML5 experience when Google rolls out the next update to its Chrome browser. In September Google will release Chrome 53, which will be designed to block Flash.

Most Flash on the web today is never even seen; as Google points out, 90% of it is loaded behind the scenes. However, this type of Flash can still slow down your web browsing experience, which is the reason behind Google’s decision to block it.

In Google’s words:

HTML5 is much lighter and faster, and publishers are switching over to speed up page loading and save you more battery life. You’ll see an improvement in responsiveness and efficiency for many sites.

Google began to de-emphasize Flash last September, when Flash content started being served on a click-to-play basis, rather than autoplaying in the browser. Following the positive effect of that change, Google appears to be all but removing Flash from its browser completely.

The company even announced a future update coming this December where HTML5 will become the default experience on Chrome. Sites that only support Flash will still be accessible in Chrome, but users will first be prompted to enable it.

Once synonymous with in-browser gaming, entertainment, and cutting-edge web design, Adobe Flash has been outshone by competing technologies that are lighter, faster, and easier on your device’s battery life.

The two companies will continue to work together, with Adobe said to be assisting Google with its efforts to transition the web to HTML5.

Source: https://www.searchenginejournal.com/googles-chrome-browser-block-flash-starting-september/170548/


Google has multiple named parts of the algorithm that influence search rankings. Google Panda is the part of the algo that deals with the quality of content, Penguin is specific to the quality of links, and Hummingbird is the part that handles conversational search queries accurately.

Google Panda

Google Panda takes the quality of a site’s content into account when ranking sites in the search results. Sites with lower-quality content are likely to find themselves negatively impacted by Panda. As a result, higher-quality content surfaces higher in the search results: quality content is often rewarded with higher rankings, while low-quality content drops.


When Panda originally launched, many saw it as a way for Google to target content farms specifically, which were becoming a major problem in the search results with extremely low-quality content that tended to rank due to sheer volume. These sites were publishing enormous amounts of low-quality content very quickly, on topics they had very little knowledge of and did little research on, and it was very obvious to any searcher who landed on one of those pages.

Google has now evolved Panda to be part of the core algorithm. Previously, we had a known Panda update date, making it easier to identify when a site was hit or had recovered from Panda. Now it is part of a slow rolling update, lasting months per cycle. As a result, it is hard to know whether a site is negatively impacted by Panda or not, other than doing a content audit and identifying factors that sites hit by Panda tend to have.

User Generated Content

It is important to note that Panda does not target user-generated content specifically, something that surprises many webmasters. While Panda can affect user-generated content, it tends to impact sites that produce very low-quality content, such as spammy guest posts or forums filled with spam.

Do not remove your user-generated content, whether it is forums, blog comments or article contributions, simply because you heard it is “bad” or because removal is marketed as a “Panda proof” solution. Look at it from a quality perspective instead. There are many highly ranking sites with user-generated content, such as Stack Overflow, and many sites would lose significant traffic and rankings if they removed that type of content. Even comments on a blog post can help it rank and even earn a featured snippet.

Word Count

Word count is another aspect of Panda that is often misunderstood by SEOs. Many sites make the mistake of refusing to publish any content unless it is above a certain word count, with 250 and 350 words often cited as thresholds. Instead, Google recommends thinking about how many words the content needs in order to be successful for the user.

For example, there are many pages out there with very little main content, yet Google thinks the page is quality enough that it has earned the featured snippet for the query. In one case, the main content was a mere 63 words, and many would have been hard pressed to write about the topic in a non-spammy way that was 350+ words in length. So you only need enough words to answer the query.

Content Matches the Query

Ensuring your content matches the query is also important. If you see Google is sending traffic to your page for specific queries, ensure that your page is answering the question searchers are looking for when they land there. If it is not, it is often as simple as adding an extra paragraph or two to ensure that this is happening.

As a bonus, these pages, the ones that answer a question or implied question, are the type Google is not only looking to rank well but is also awarding the featured snippet to.

 

Technical SEO

Technical SEO does not play any role in Panda, either. Panda looks only at the content, not at things like whether you are using H1 tags or how quickly your page loads for users. That said, technical SEO can be a very important part of SEO and ranking in general, so it should not be ignored. But it has no direct impact on Panda specifically.

Determining Quality

If you are struggling to determine whether a particular piece of content is considered quality or not, there is one surefire way to check. Look at the individual page in Search Analytics or in your site’s analytics program, such as Google Analytics. If Google is ranking the page and sending it traffic, then clearly it views the page as high enough quality to show prominently in the search results, since people are landing there from Google.

However, if a page is not getting traffic from Google, it does not automatically mean it is bad, but the content is worth looking at closer. Is it simply newer and has not received enough ranking signals to rank yet? Do you see areas of improvement you can make by adding a paragraph or two, or changing the title to match the content better? Or is it truly a garbage piece of content that could be dragging the site down the Panda hole?

Also, do not forget that there is traffic outside of Google. You may question a page because Google is not sending it traffic, but perhaps it does amazingly well in Bing, Baidu, or another search engine instead. Diversity in traffic is always a good thing, and if you have pages that Google might not be sending traffic to but that are getting traffic from other search engines, other sites, or social media shares, then removing that content would be the wrong decision.

Panda Prevention

How to prevent Google Panda from negatively impacting your site is pretty simple. Create high-quality, unique content that answers the question searchers are asking.

Reading content out loud is a great way to tell whether it is high quality. When content is read aloud, things like overuse of keywords, grammatical errors, and other signs of low quality suddenly stand out. Read it aloud yourself and edit as you go, or ask someone else to read it so you can flag what should be changed.

Google Penguin


The second major Google algorithm is Penguin. Penguin deals solely with link quality and nothing else. Sites that have purchased links or acquired low-quality links from places such as low-quality directories, blog spam, or link badges and infographics could find themselves no longer ranking for their search terms.

Who Should Worry about Penguin?

Most sites do not need to worry about Penguin unless they have done some sketchy link building in the past or have hired an SEO who might have engaged in those tactics. Even if the site owner was not aware of what an SEO was doing, the owner is still ultimately responsible for those links. That is why site owners should always research an SEO or SEO agency before hiring.

If you did link building in the past using tactics that were accepted at the time but are now against Google’s webmaster guidelines, you could be impacted by Penguin. For example, guest blogging was fine years ago, but is not a great way to build links now unless you are choosing your sites well. Likewise, asking site visitors or members to post badges that link to your site was also fine previously, but will now definitely result in Penguin or a link manual action.

Algorithmic Penguin and Link Manual Actions

Penguin is strictly algorithmic in nature. It cannot be lifted by Google manually, regardless of the reason why those links might be pointing to a website.

Confusing the issue slightly is that there is a separate manual action for low-quality links and that one can be lifted by Google once the links have been cleaned up. This is done with a reconsideration request in Google Search Console. And sites can be impacted by both a linking manual action and Penguin at the same time.

Incoming Links Only

Penguin only deals with a site’s incoming links. Google only looks at the links pointing to the site in question and does not look at the outgoing links at all from that site. It is important to note that there is also a Google manual action related directly to a site’s outgoing links (which is different from the regular linking manual action), so the pages and sites you link to could result in a manual action and the deindexing of a site until those links are cleaned up.

Finding Your Backlinks

If you suspect your site has been negatively impacted by Penguin, you need to do a link audit and remove or disavow the low quality or spammy links. Google Search Console includes a list of backlinks for site owners, but be aware that it also includes links that are already nofollowed. If the link is nofollowed, it will not have any impact on your site, but keep in mind, the site could remove that nofollow in the future without warning.

There are also many third-party tools that will show links to your site, but because some websites block those third-party bots from crawling them, no single tool will be able to show you every link pointing at your site. And while some of the sites blocking these bots are high-quality, well-known sites that do not want to waste bandwidth on them, the tactic is also used by some spammy sites to hide their low-quality links from being reported.
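
If you want a quick, do-it-yourself check of how a particular linking page treats its links to you, a short script can help. The sketch below is a minimal illustration using only Python’s standard library; the page URL and domain are placeholder values, and real link audits are usually done with dedicated backlink tools:

    import urllib.request
    from html.parser import HTMLParser

    class LinkCheck(HTMLParser):
        """Collect anchors pointing at a target domain and note whether they are nofollowed."""

        def __init__(self, target_domain):
            super().__init__()
            self.target_domain = target_domain
            self.followed = []
            self.nofollowed = []

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            attrs = dict(attrs)
            href = attrs.get("href") or ""
            rel = (attrs.get("rel") or "").lower()
            if self.target_domain in href:
                (self.nofollowed if "nofollow" in rel else self.followed).append(href)

    def audit_linking_page(page_url, target_domain):
        # Fetch the linking page and classify its links to the target domain.
        html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
        parser = LinkCheck(target_domain)
        parser.feed(html)
        return parser.followed, parser.nofollowed

    # Hypothetical example values.
    followed, nofollowed = audit_linking_page("http://example.com/some-post/", "yourdomain.com")
    print("Followed links to you:", followed)
    print("Nofollowed links to you:", nofollowed)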

Assessing Link Quality

When it comes to assessing the links, this is where many have trouble. Do not assume that a link is high quality just because it comes from an .edu site. There are plenty of students who sell links from their personal websites on .edu domains, and those links are extremely spammy and should be disavowed. Likewise, there are plenty of hacked sites within .edu domains hosting low-quality links.

 

Do not make judgments strictly based on the type of domain. Just as you cannot make automatic assumptions about .edu domains, the same applies to all TLDs and ccTLDs. Google has confirmed that simply being on a specific TLD does not help or hurt search rankings, so you need to make individual assessments. There is a long-running joke that there has never been a quality page on a .info domain because so many spammers were using them, but in fact there are some great quality links coming from that TLD, which shows why individual assessment of links is so important.

Beware of Links from Presumed High-Quality Sites


Do not look at your list of links and automatically consider a link from a specific website to be great quality unless you know that very specific link is high quality. Just because you have a link from a major website such as the Huffington Post or the BBC does not make it an automatic high-quality link in the eyes of Google; if anything, you should question it more.

Many of those sites are also selling links, some disguised as advertising or placed by a rogue contributor selling links within their articles. That links from high-profile sites can in fact be low quality has been confirmed by many SEOs who have received link manual actions in which Google’s examples included links from these sites. And yes, such links could well be contributing to a Penguin issue.

As advertorial content increases, we are going to see more and more links like these get flagged as low-quality. Always investigate links, especially if you are considering not removing any of them simply based on the site the link is from.

Promotional Links

As with advertorials, you need to think about any links pointing to your site that could be considered promotional. Paid links do not always mean money was exchanged for the links.

Examples of promotional links that are technically paid links in Google’s eyes are links given in exchange for a free product to review or a discount on products. While these types of links were fine years ago, they now need to be nofollowed. You will still get value from the link, but through brand awareness and traffic rather than rankings. You may have links out there from a promotional campaign done years ago that are now negatively impacting your site.
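
For illustration, nofollowing a promotional link is simply a matter of adding a rel attribute to the anchor tag so that it passes no ranking credit; the URL and anchor text below are made up:

    <a href="https://example.com/product-we-reviewed" rel="nofollow">Product we reviewed</a>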

For all these reasons, it is vitally important to assess every link individually. You want to remove the poor-quality links because they are hurting you through Penguin or could cause a future manual action. But you do not want to remove the good links, because those are the links helping your rankings in the search results.

Promotional links that are not nofollowed can also trigger the manual action for outgoing links on the site that placed those links.

Editor’s Note: Removing links and submitting a disavow request is also covered in more detail in the ‘What to Do When Things Go Wrong’ section of our SEO Guide.

Link Removals

Once you have gone through your backlinks and determined that there are some that should be removed or disavowed, you will need to get these links removed. You should first approach site owners and ask them to remove the links pointing to your site. If removals are unsuccessful, add those URLs to a disavow file, one you will submit to Google.

There are tools that will automate link removal requests, and agencies that will handle the requests as well, but do not feel it is necessary to use them. Many webmasters find contact forms or email addresses and do it themselves.

Some site owners will demand a fee to remove a link from their site, but Google recommends not paying for link removals. Just include those links in your disavow file instead and move on to the next removal. Some site owners use link removal fees to generate revenue, so the practice is becoming more common.

Creating and Submitting a Disavow File

The next step in cleaning up Penguin issues is to submit a disavow file. The disavow file is a file you submit to Google that tells them to ignore all the links included in the file so that they will not have any impact on your site. The result is that the negative links will no longer cause negative ranking issues with your site, such as with Penguin, but it does also mean that if you erroneously included high-quality links in your disavow file, those links will no longer help your ranking. This is another reason why it is so crucial to check your backlinks well before deciding to remove them.

If you have previously submitted a disavow file to Google, they will replace that file with your new one, not add to it. So it is important to make sure that if you have previously disavowed links, you still include those links in your new disavow file. You can always download a copy of the current disavow file in Google Search Console.

 

Disavowing Individual Links Versus Domains

It is recommended that you disavow links at the domain level instead of disavowing individual links. There will be some cases where you want to disavow specific links individually, such as on a major site that has a mix of quality and paid links. But for the majority of links, you can do a domain-based disavow. Google then only needs to crawl one page on that site for the link to be discounted for your site.

Doing domain-based disavows also means you do not have to worry about whether links are indexed as www or non-www, as the domain-based disavow takes this into account.

What to Include in a Disavow File

You do not need to include any notes in your disavow file, unless they are strictly for your reference. It is fine just to include the links and nothing else. Google does not read any of the notations you have made in your disavow file, as they process it automatically without a human ever reading it. Some find it useful to add internal notations, such as the date a group of URLs was added to the disavow file or comments about their attempts to reach the webmaster about getting a link removed.
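
To illustrate the format, here is a minimal, hypothetical disavow file. Lines beginning with # are comments that Google ignores, domain: lines disavow every link from that domain, and bare URLs disavow individual pages; the domains and URLs below are placeholders:

    # Spammy directory, owner never responded to removal requests (2016-07-15)
    domain:spammy-directory.example
    # Individual paid links we could not get removed
    http://low-quality-blog.example/paid-post-1.html
    http://low-quality-blog.example/paid-post-2.html

The file is a plain text file that you upload through the Disavow Links tool in Google Search Console.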

Once you have uploaded your disavow file, Google will send you a confirmation. But while Google will process it immediately, it will not immediately discount those links. So you will not instantly recover from submitting the disavow alone. Google still needs to go out and crawl those individual links you included in the disavow file, but unfortunately the disavow file itself will not prompt Google to crawl those pages specifically.

It can take six or more months for all those individual links to be crawled and disavowed. And no, there is no way to determine which links have been discounted and which ones have not been, as Google will still include both in your linking report in Google Search Console.

Speeding Up the Disavow Process

There are ways you can speed up the disavow process. The first is using domain-based disavows instead of individual links. The second is not wasting time writing lengthy notations for Google’s benefit, so that you can submit your disavow faster. Because reconsideration requests require you to submit more details, some misunderstand and believe the disavow needs more details, too.

 

Lastly, if you have undergone any changes in your domain, such as switching to https or switching to a new domain, you need to remember to upload that disavow file to the new domain property in Google Search Console. This is one step that many forget to do, and they can be impacted by Penguin or the linking manual action again, even though they have cleaned it up previously.

Recovery from Penguin

When you recover from Penguin, do not expect your rankings to go back to where they used to be before Penguin, nor for the return to be immediate. Far too many site owners are under the impression that they will immediately begin ranking at the top for their top search queries once Penguin is lifted.

First, some of the links that you disavowed were likely contributing to an artificially high ranking, so you cannot expect those rankings to be as high as they were before. Second, because many site owners have trouble assessing the quality of the links, some high-quality links inevitably get disavowed in the process, links that were contributing to the higher rankings.

Add to the mix the fact that Google is constantly changing its ranking algorithm, so factors that benefited you previously might not have as big an impact now, and vice versa.

Compensated Links via Badges and More

Also be aware of any link building campaigns you are doing, or legacy ones that could come back to impact your site. This would include things like badges you have given to other site owners to place on their sites or the requirement that someone includes a link to your site to get a directory listing or access something. In simple terms, if the link was placed in exchange for anything, it either needs to be nofollowed or disavowed.

When it comes to the disavow files that people use to clean up poor-quality links, there is a concern that a site could be hurt by competitors placing its URLs into a disavow file uploaded to Google. But Google has confirmed that it does not use the URLs contained within a disavow file for ranking, so even if your site appears in thousands of disavow files, it will not hurt. That said, if you are concerned your site is legitimately appearing in thousands of disavow files, then your site probably has a quality issue you should fix.

Negative SEO

There is also the negative SEO aspect of linking, where some site owners worry that a competitor could buy spammy links and point them to their site. And many use negative SEO as an excuse when their site gets caught by Google for low-quality links.

If you are worried about this, you can proactively disavow the links as you notice them. But Google has said they are pretty good about recognizing this when it happens, so it is not something most website owners need to worry about.

Real Time Penguin

Google is expected to release a new version of Penguin soon, which will have one very notable change. Instead of site owners needing to wait for a Penguin update or refresh, the new Penguin will be real-time. This is a huge change for those dealing with the impact of spammy links and the long waits many have had to endure after cleaning up.

Hummingbird


Google Hummingbird is part of the main Google search algorithm and was the first major change to that algorithm since 2001. What is different about Hummingbird is that it is not specifically a spam-targeting algorithm; instead, it is designed to ensure Google serves the best results for specific queries. Hummingbird is about understanding search queries better, particularly with the rise of conversational search.

 

It is believed that Hummingbird is positively impacting the types of sites that are providing high-quality content that reads well to the searcher and is providing answers to the question the searcher is asking, whether it is implied or not.

Hummingbird also impacts long-tail search queries, similar to how RankBrain helps those types of queries. Google wants to ensure that it can provide high-quality results for longer queries. For example, instead of sending a specific question related to a company to the company’s homepage, Google will try to serve an internal page on the site about that specific topic or issue instead.

Hummingbird cannot be optimized for directly, beyond optimizing for the rise of conversational search. Longer search queries, such as those used in voice search, and the types of queries searchers tend to make on mobile, are usually conversational. Optimizing for conversational search is easier than it sounds: make sure your content is highly readable and can answer those longer-tail queries as well as shorter ones.

Like RankBrain, Hummingbird had been running for a period before it was announced, and SEOs did not particularly notice anything different in the rankings. It is not known how often Hummingbird is updated or changed by Google.

Source: https://www.searchenginejournal.com/seo-guide-panda-penguin-hummingbird/169167/


Google has produced a car that drives itself and an Android operating system that has remarkably good speech recognition. Yes, Google has begun to master machine intelligence. So it should be no surprise that Google has finally started to figure out how to stop bad actors from gaming its crown jewel – the Google search engine. We say finally because it’s something Google has always talked about, but, until recently, has never actually been able to do.

With the improved search engine, SEO experts will have to learn a new playbook if they want to stay in the game.

SEO Wars

In January 2011, there was a groundswell of user complaints, kicked off by Vivek Wadhwa, about Google’s search results being subpar and gamed by black hat SEO experts, people who use questionable techniques to improve search-engine results. By exploiting weaknesses in Google’s search algorithms, these characters made search less helpful for all of us.

We have been tracking the issue for a while. Back in 2007, we wrote about Americans experiencing “search engine fatigue,” as advertisers found ways to “game the system” so that their content appeared first in search results (read more here). And in 2009, we wrote about Google’s shift to providing “answers,” such as maps results and weather above search results.

Even the shift to answers was not enough to end Google’s ongoing war with SEO experts. As we describe in this CNET article from early 2012, it turns out that answers were even easier to monetize than ads. This was one of the reasons Google has increasingly turned to socially curated links.

In the past couple of years, Google has deployed a wave of algorithm updates, including Panda, Panda 2, and Penguin, as well as updates to existing mechanisms such as Query Deserves Freshness. In addition, Google made it harder to figure out what keywords people are using when they search.

The onslaught of algorithm updates has made it increasingly difficult for a host of black hat SEO techniques, such as duplicative content, link farming and keyword stuffing, to work. This doesn’t mean those techniques never work. One look at a query like “payday loans” or “viagra” proves they still do. But these techniques are now more query-dependent, meaning that Google has essentially given a pass to certain verticals that are naturally more overwhelmed with spam. For the most part, though, using “SEO magic” to build a content site is no longer a viable long-term strategy.

The New Rules Of SEO

So is SEO over? Far from it. SEO is as important as ever. Understanding Google’s policies and not running afoul of them is critical to maintaining placement on Google search results.

With these latest changes, SEO experts will now need to have a deep understanding of the various reasons a site can inadvertently be punished by Google and how best to create solutions needed to fix the issues, or avoid them altogether.

Here’s what SEO experts need to focus on now:

Clean, well-structured site architecture. Sites should be easy to use and navigate, employ clean URL structures that make hierarchical sense, properly link internally, and have all pages, sections and categories properly labeled and tagged.

Usable Pages. Pages should be simple, clear, provide unique value, and meet the average user’s reason for coming to the page. Google wants to serve up results that will satisfy a user’s search intent. It does not want to serve up results that users will visit, click the back button, and select the next result.

Interesting content. Pages need to have more than straight facts that Google can answer above the search results, so a page needs to show more than the weather or a sports score.

No hidden content. Google sometimes thinks that hidden content is meant to game the system. So be very careful about handling hidden items that users can toggle on and off or creative pagination.

Good mobile experience. Google now penalizes sites that do not have a clean, speedy and presentable mobile experience. Sites need to stop delivering desktop web pages to mobile devices.

Duplicate content. When you think of duplicate content you probably think of content copied from one page or site to another, but that’s not the only form. Things like a URL resolving using various parameters, printable pages, and canonical issues can often create duplicate content issues that harm a site.

Markup. Rich snippets and structured data markup will help Google better understand content, as well as help users understand what’s on a page and why it’s relevant to their query, which can result in higher click-through rates.
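
As a minimal illustration of the kind of structured data markup Google understands, here is a hypothetical schema.org snippet in JSON-LD; every value shown is a placeholder:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "Example article headline",
      "author": { "@type": "Person", "name": "Jane Doe" },
      "datePublished": "2016-08-01"
    }
    </script>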

Google chasing down and excluding content from bad actors is a huge opportunity for web content creators. Creating great content and working with SEO professionals from inception through maintenance can produce amazing results. Some of our sites have even doubled in Google traffic over the past 12 months.

So don’t think of Google’s changes as another offensive in the ongoing SEO battles. If played correctly, everyone will be better off now.

https://techcrunch.com/2013/08/03/won-the-seo-wars-google-has/


 

CatholicGoogle, a new site based on Google’s custom search, is “striving to provide an easy to use resource to anyone wanting to learn more about Catholicism and provide a safer way for good Catholics to surf the web.” The site uses a permanently-on Google SafeSearch to filter out profanity and pornography, along with a filter for specific topics that floats Catholic-related sites to the top. For example, a search for “birth control” serves up pages on why birth control is viewed as a sin in the Catholic Church as its first results.

The search engine might appeal to some devout Catholics if it actually worked. However, it seems that when it comes to filtering topics beyond the standard “offensive” categories (swear words and sex), CatholicGoogle only serves to make queries potentially more offensive. A search for “drunk” yields a video of “Drunk Catholic Kids”. Perhaps even more bizarre: a search for “sex” offers an article bashing the Church’s stance on sexuality (they may have included this in the results for a balanced alternative perspective, but I doubt it). It’s as if the site just appends the word “Catholic” to whatever you’re searching for and crosses its fingers.

If this is your sort of thing, you might also be interested in GodTube, the YouTube for Christians or Gospelr (you guessed it – the Twitter for Christians).

https://techcrunch.com/2009/01/02/catholicgoogle-your-search-engine-for-all-things-catholic/

 


Google dedicated its 31 July doodle to mark the 136th birth anniversary of acclaimed novelist Munshi Premchand.

"Today's homepage celebrates a man who filled many pages (of a different kind) with words that would forever change India's literary landscape," said the search engine.

"Although much of it was fiction, Premchand's writing often incorporated realistic settings and events, a style he pioneered within Hindi literature," Google said.

"His last and most famous novel, Godaan (1936), inspired the doodle, which shows Premchand (sometimes referred to as "Upanyas Samrat," or, "emperor among novelists") bringing his signature working-class characters to life. On what would have been his 136th birthday, the illustration pays tribute to the multitude of important stories he told," the description to the Google doodle reads.

Here are eight interesting facts about the 'Upanyas Samrat':

  1. Born Dhanpat Rai in a small village near Varanasi in 1880, the renowned author started writing at the age of 13.
  2. Many of his early works are in Urdu. He began writing in Hindi in 1914, with his first short story, Saut, being unveiled in 1915.
  3. He began writing under the pen name of Nawab Rai before switching to Premchand. He produced more than a dozen novels, 250 short stories, and a number of essays, many under the pen name Premchand.
  4. Munshi Premchand worked as a teacher in a government district school in Baharaich. He quit his teaching job to join the non-cooperation movement in 1920.
  5. His works are believed to have been largely influenced by Mahatma Gandhi and the non-cooperation movement.
  6. His 1924 short story Shatranj Ke Khiladi (The Chess Players) was made into a film. The 1977 adaptation, directed by Satyajit Ray and starring Sanjeev Kumar, Shabana Azmi, Farida Jalal and Farooq Shaikh, among others, was narrated by Amitabh Bachchan. It won the National Film Award for Best Feature Film in Hindi that year.
  7. Not many know that Premchand also wrote the script for the film Majdoor.
  8. His work Godaan is considered one of the greatest Hindustani novels of modern Indian literature. The book was later translated into English and also made into a Hindi film in 1963.

http://www.catchnews.com/tech-news/google-doodle-marks-munshi-premchand-s-136th-birth-anniversary-8-lesser-known-facts-about-the-emperor-among-novelists-1469948223.html/fullview


The web is teeming with images, and a lot of them are not what they seem. Reverse image search makes it easier to spot the fake images, and the fake people who are using someone else's profile photos.

Reverse image search involves choosing an image and using a search engine to find the same image on other web sites. It's a feature I use almost every day, and I'm confident that more people would do it if they knew what they were missing.

Reverse image search is both simple and free, thanks to services such as TinEye - which pioneered the field - and Google Image Search. Both offer browser extensions so all you have to do is right-click any online image and choose reverse image search from the drop-down menu. There are several other services, including meta-services like Image Raider, which will "search by image on Google, Bing, and Yandex" with up to 20 images at a time. However, Google and TinEye cover most people's needs most of the time.

So why would you use reverse image search? Reasons vary, but usually it's either to authenticate an image, by finding its source, or track its use across the web.

Tracking image use

If you have a website, publish brochures or press releases, or post copyright photographs online, you can assume that your images are going to be re-used. Reverse image search tells you where and when. After that, you can decide whether a re-use is legal and appropriate, and whether or not to take action.

Searching for publicity and advertising images will show you how much traction your press release or blog post got, and you may well find coverage that text searches have missed - perhaps in foreign languages.

You may also find your images re-used in contexts you're not happy about, such as illustrating stories about a rival company's products. If so, you can make sure they are correctly captioned and credited. Just remember that you can't complain about images that you don't actually own.

You may find some websites using your bandwidth by linking to the image on your web site rather than theirs. In that case, I've seen people replace the original photo with a less appropriate one that has the same filename.

You may also find copyright photos that you did not license for re-use. If so, you can get them taken down, or send them an invoice.

Either way, reverse image search surfaces a lot of valuable information that you couldn't easily find in any other way.

Authenticating images

When you see an image in your email or on the web, you don't really know how old it is, or where it originated. Reverse image search helps you to find out.

For example, suppose you are thinking about publishing a picture online or in print. Are you sure the supplier owns it? Is it genuine or has it been doctored? How old is it? How often has it been used before? How much is it really worth?

There are many thousands of cases where a quick reverse image search has, or would have, avoided major mistakes. Sometimes an image is claimed to show a particular event, but it was actually taken earlier, at a different event. This happens quite a lot with tweeted images and sometimes even with news stories. It might be a simple mistake by a picture agency, or it might be an attempt at deception.

Who is in the picture? In some cases, I've found, it's not the person it's said to show. Sometimes picture agencies get their captions wrong, and sometimes there are several different people with the same name. Checking the same image on several web pages usually solves both problems.

Has the image been doctored? Reverse image searches usually bring up numerous images that appear to look the same, but on closer examination, they're different. Sometimes a face may have been swapped, or something may have been removed from or added to the picture. Don't think this doesn't happen: whole websites are devoted to doctoring images, often for humorous or political reasons.

Sometimes pictures have been flipped (laterally reversed): it's an option worth trying when a reverse image search doesn't find the matching images you'd expect. In the pre-web era, I once took flak for publishing a flipped photo of a famous guitarist. Dozens of fans spotted what I hadn't: that he appeared to be playing his guitar the wrong way round.

For these and similar reasons, reverse image searching is now a critical skill for mainstream publications, especially news organisations. And now you can do it in a couple of seconds, it makes sense for less critical uses, too.

Authenticating people

I also do reverse image searches on profile photos on social networking sites such as LinkedIn and Twitter. It's naive to assume everybody is who they claim to be. What appears to be an attractive young woman making friends with colleagues on LinkedIn might be a hacker fishing for information.

Surprisingly often, I find would-be contacts have stolen their profile photo from another Facebook or PhotoBucket user, or I find the same photos are being used to advertise escort services. Scammers often use photos of long-forgotten film actors and writers as well.

One day, reverse image search could save you from being scammed or conned.

If that's not a good reason to use it, I don't know what is.

Note: I'll explain the pros and cons of using Google and TinEye in my next post, Reverse image searching made easy...

http://www.zdnet.com/article/heres-why-you-and-your-business-should-use-reverse-image-search/


A picture posted to Imgur Saturday reveals Google’s search engine autocompletes the query “Muslim Dad” with some brutal phrases.

When a user types in “Muslim dad,” the first autocomplete is tame, pulling up searches for “muslim dad dnc.” This refers to the speech a Muslim father gave at the Democratic National Convention in honor of his soldier son who died.

The rest of the autocompletes are “muslim dad kills daughters,” “muslim dad kills his daughter,” “muslim dad runs over daughter” and “muslim dad christian mother.”

When a user types in “muslim father,” the autocompletes are a little different. The first result is again “muslim father dnc.”

The remaining autocompletes are “muslim father kills daughter,” “muslim father kills gay son,” “muslim fathers and daughters,” “muslim father christian mother.”

http://dailycaller.com/2016/07/31/when-you-type-muslim-dad-in-google-some-pretty-brutal-autocomplete-results-pop-up/


On Friday, Google's security team announced that it had finished implementing HSTS support for all of the company's products running on the google.com domain.

The move comes after months of testing to make sure the feature covered all the services, including APIs, not just the main Web interfaces.

HSTS stands for HTTP Strict Transport Security and is a Web security protocol supported by all of today's browsers and Web servers.

HSTS protects HTTPS against several SSL attacks

The technology allows webmasters to protect their service and their users against HTTPS downgrades, man-in-the-middle attacks, and cookie hijacking for HTTPS connections.

The protocol prevents users from going back to an HTTP connection when accessing Google over HTTPS, and forcibly redirects them to HTTPS connections when possible.

The technology is widely regarded as the best way to protect HTTPS connections against the most common attacks on SSL but has not been widely adopted.
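
As a rough illustration, you can check whether a site sends the HSTS header with a few lines of Python using only the standard library; the exact header value Google serves may differ from the typical example shown in the comment:

    import urllib.request

    # Fetch the Google homepage over HTTPS and print the HSTS response header.
    # A typical value looks like: max-age=31536000; includeSubDomains; preload
    with urllib.request.urlopen("https://www.google.com/", timeout=10) as resp:
        print(resp.headers.get("Strict-Transport-Security"))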

95% of HTTPS websites still don't use HSTS

A study from Netcraft conducted last March showed that 95% of all servers running HTTPS either fail to set up HSTS or come with configuration errors. As such, Google's team has spent a great amount of time testing.

"Ordinarily, implementing HSTS is a relatively basic process," Google's Jay Brown, Sr. Technical Program Manager, explained on Friday. "However, due to Google's particular complexities, we needed to do some extra prep work that most other domains wouldn't have needed to do. For example, we had to address mixed content, bad HREFs, redirects to HTTP, and other issues like updating legacy services which could cause problems for users as they try to access our core domain."

During HSTS tests, Brown says, the team managed to break Google's famous Santa Tracker last December. The problem was fixed, but this only goes to show the wide spectrum of products the engineers had to ensure were working properly after HSTS deployment.

http://news.softpedia.com/news/google-adds-hsts-support-to-google-com-search-engine-506816.shtml


TinEye and Google Image Search are both good for doing reverse image searches, and the two websites are different enough to be complementary. But there are other options including browser extensions and smartphone apps....

There are lots of reasons for using reverse image search - see my earlier post, Here's why you and your business should use reverse image search - and quite a few ways to do it. The main ones are the TinEye and Google Image Search websites, both of which are free. Depending on your location, needs and personal preferences, you might also want to try Baidu, Yandex, Bing Image Match, Image Raider or some other service.

But if you're new to reverse image searching, I suggest you start with TinEye and Google. I use both, because they are different enough to complement one another. TinEye has better features. Google Image Search generally has a bigger, fresher database, though it doesn't find all the images that TinEye knows about.

Basically, TinEye has the smart guys while Google has the web crawlers.

TinEye wins mainly on sorting features. You can order TinEye's results by newest first or oldest first, by size, by the best match, or by the most changed. I'm often trying to find the oldest version posted, to authenticate a particular photograph.

TinEye's results often show a variety of closely related images, because some versions have been edited or adapted. Sometimes you find your searched-for picture is a small part of a larger image, which is very useful: you can switch to searching for the whole thing. TinEye is also good at finding versions of images that haven't had logos added, which is another step closer to the original.

The main drawback with TinEye is that some of the search results are a couple of years old, and when you follow the link, either the image or the page or even the whole website has disappeared. In such cases, I use the TinEye result to run a Google Image search.

Google Image Search finds web pages rather than images. If you're doing a reverse image search, it's usually more useful to look for the link that says "Find other sizes of this image" and click on "All sizes".

By default, Google displays the most exact matches in descending order of size, and the links to the sources are hidden until you click an image. You can try to make it work more like TinEye by selecting "Visually similar" from the drop-down menu, but this includes images that have nothing at all to do with the original. For most purposes, this is a waste of time.

Worse, Google can't sort images by date. As with text searches, you get options such as "Past week" and "Custom range", but these are tedious to use, and don't seem very reliable.

However, Google does some very good things that TinEye doesn't. The key features are search by type (Face, Photo, Line drawing etc) and search by usage rights. It's very useful to be able to search for images that are "labelled for reuse with modification" or "labelled for non-commercial reuse" or whatever. Handled with care, this could be a money-saver.

With a bit of experiment, some combination of TinEye and Google Image Search should meet most of your needs. If not, there are other options.

I generally use the browser extensions for TinEye and Google. These perform a reverse image search when you right-click an online image and select "search [service] with this image" or something similar. This is quicker than uploading an image from a hard drive or pasting in a web link, though you can do those things too.
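
If you prefer to script it, a small Python sketch can open a reverse image search for any publicly accessible image URL in your default browser. The query-string formats below are assumptions based on how TinEye and Google have commonly accepted image URLs, and they may change:

    import webbrowser
    from urllib.parse import quote_plus

    def reverse_search(image_url):
        # Open TinEye and Google reverse image searches for the given image URL.
        webbrowser.open("https://tineye.com/search?url=" + quote_plus(image_url))
        webbrowser.open("https://www.google.com/searchbyimage?image_url=" + quote_plus(image_url))

    reverse_search("https://example.com/photo.jpg")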

Browser extensions include Google's Search by Image for Google (Chrome, Firefox), TinEye Reverse Image Search (Chrome, Firefox, Opera, Safari, Internet Explorer), and Bing Image Match (Chrome). Third-party options include Google Reverse Image Search (Firefox, not written by Google), Search Image by Bing (Firefox, not written by Microsoft) and Who stole my pictures? (Firefox). You may be able to find more. I haven't tried all of them.

Apple iPhone users can do reverse image searches with apps such as Veracity and Microsoft's official Bing app. There's also a Search By Image app for Android. Of course, you can also use Google Image Search in the Chrome browser on a smartphone. Press and hold the image, and when the box appears, touch "Search Google for this image".

Finally, there's a useful image search engine for Reddit, called Karma Decay. If you use Reddit, you will know that some amusing images are reposted on a regular basis. Karma Decay finds them all.

This is more useful than it sounds. Redditors comment on most of these images, and their comments often include links to sources and sometimes explanations. If you are, like me, trying to authenticate images, these links and comments can save quite a lot of work.

http://www.zdnet.com/article/reverse-image-searching-made-easy/


HOLLYWOOD studio Warner Brothers has gone after reddit and Google in its latest attempt to crack down on the illegal piracy of its films.

A particular subreddit on the popular forum-based website dedicated to online streaming sources of films was the target of the Warner Brothers legal action.

The studio sent a letter to the search engine giant asking the company to remove results for the BestOfStreamingVideo subreddit from Google searches. However, Google declined to heed the request.

According to the takedown notice, Warner Brothers filed 24 complaints of copyright violation with Google with the offending subreddit named as an allegedly infringing URL on one of the claims because it linked to a pirated copy of the film Interstellar.

The subreddit has 46,000 subscribers, but moderators didn’t seem too concerned about the complaint.

Speculation by piracy news site TorrentFreak suggested Google knocked back the request to remove the subreddit from searches because it was too broad. Another reason posited by the website was that Google may think the responsibility for such action lies with reddit.

It is unclear if a similar takedown notice was issued to the social media site but it would likely be met with similar resistance.

“The sub was made as a way for our users to share streaming video, and as Reddit is a platform for free speech it is out of our control as to what our users post,” one reddit moderator said.

[Image: Warner Brothers’ request to remove the reddit forum from search results]

Source: http://www.news.com.au
