Google's 'Made by Google' event on October 4 is around the corner, where the search engine giant is expected to introduce its latest Pixel smartphones and home-automation products, along with other product upgrades. The Pixel and Pixel XL are rumoured to be Google's upcoming flagship smartphones, marking the end of the company's popular 'Nexus' series. Other possible products include the Daydream VR headset, a third-generation Chromecast and more smart-home devices.

However, the biggest announcement of the upcoming event might be the merger of Google's Android and Chrome OS, which would start a new chapter in the world of mobile products from the search engine giant. Let's find out more about it.

 The fusion of Android and Chrome OS

We have been hearing about Google's plan to merge Android and Chrome OS into a single OS for about a year now. In October 2015, we heard that Google had a team of engineers working on merging its mobile OS and Chromebook platform. This might be the moment the search engine giant finally reveals the new OS to the world, alongside the Pixel flagship smartphones. A tweet by Hiroshi Lockheimer, Senior VP of Android, Google Play and Chrome, saying 'I have a feeling 8 years from now we'll be talking about Oct 4', further fuels speculation about an official launch.

 Welcome Andromeda

Sources suggest that the marriage of Android and Chrome OS will give the world a new OS, reportedly named Andromeda. The merged OS isn't expected to be ready until 2017; however, a working prototype is expected to see the light of day on October 4.

One OS for multiple devices

Andromeda is expected to bring uniformity, working across phones, tablets, laptops and 2-in-1s. This would let Google target a larger audience, who could work seamlessly on an array of devices within Google's ecosystem. If so, October 4 could be Google's most significant event since the launch of Android in September 2008. Stay tuned to GizBot for more updates!

Source : http://www.gizbot.com/

Categorized in Search Engine

When journalists get their first job, or switch to a new beat or specialism, they need to work out quickly where to find useful leads. This often involves the use of feeds, email alerts, and social networks. In this post I'm going to explain a range of search techniques for finding useful sources across various platforms.

Search techniques for finding news and blog sources

Let’s get the obvious things out of the way first, starting with Google.

Aside from the main search engine, remember that there’s a specific News search option. Within that, you can also specify you want to search within blogs.

[Image: Google blog search option]

But what about all those local websites and blogs that aren’t listed on Google News? Try using a normal Google search with site:blogspot.com or site:wix.com and your particular keywords to limit results to those hosted on Blogger or Wix.

If you are looking for a place whose name also exists elsewhere (such as Cambridge, Massachusetts or Birmingham, Alabama), use Search tools to specify that you only want results from your country. This isn't perfect: it will still include some wrong results and exclude some right ones, but it's worth trying.

Search tools: specify country

You can also exclude irrelevant results by using the minus operator immediately before keywords in results you want to exclude, e.g. Birmingham -Alabama or Cambridge -Massachusetts

Finding email newsletters in your field

You can search for email newsletters by using your keyword with intitle:subscribe and intitle:email or intitle:newsletter.

Search box: birmingham intitle:subscribe intitle:email

Use an RSS reader instead of email alerts

RSS readers are much easier to read than email alerts: they pull a range of feeds into one place. Widely used RSS readers include Feedly, Netvibes (where you can share or publish dashboards) and Flipboard (which gives you a magazine-like interface). If you think social media has taken over the role of RSS readers, you aren't using RSS as much as you could. Here are some examples which you won't find on social media…

You can get updated on new results by using Google Alerts. Use this on Chrome and you should be able to choose to receive results by email or by RSS.

WordPress has its own search engine, and results can be subscribed to using RSS so you get updated whenever a new post is published mentioning your keyword. Look for the ‘related topics’ box on the right, too: this links to tag pages on WordPress which are also useful.

wordpress search results

Look out for other places where you can find RSS feeds or email alerts for new search results. For example, TheyWorkForYou's search page and WhatDoTheyKnow provide both, for what MPs are saying and for FOI requests respectively.

Consultation websites also typically offer RSS feeds: Transport for London's has separate feeds for forthcoming, open, and closed consultations, but it will also give you a feed for searches. Here's their guide to using RSS. Most government departments and local councils use the same system: here's Leicester's and here's DEFRA's.

The Gov.uk website’s Publications section also offers both RSS feeds and email alerts for new results matching any search you conduct.

Finding events in your area

Meetup, Eventbrite and Lanyrd are all useful for finding events in a particular area.

Meetup is good for regular and more informal events. You can search by location and radius, and get a calendar of upcoming events that meet your criteria.

meetup calendar view

Use the calendar view on Meetup to see upcoming events in your area

Joining a meetup group doesn't mean you have to attend any events – it's more like joining a group on Facebook. The more you join, the more Meetup will suggest to you.

You can get an RSS feed of meetups you’ve signed up to, and you can add any individual meetup URL to an RSS reader to get an RSS feed of that meetup group’s updates. But you can’t get RSS feeds for areas or searches.

You can subscribe to emails on Meetup about groups you’ve joined, and to be alerted to new groups which may be of interest. New groups being set up is of course often a news story in itself, and an excuse to contact the organiser to interview them about it.

Eventbrite tends to be used for less regular but often bigger events. Again you can search by location and get a calendar of forthcoming events (remember to sort by date, not relevance).

[Image: Eventbrite events in Birmingham]

Each event on Eventbrite has an organiser. Click on their profile to see more events. Sadly Eventbrite doesn’t seem to have any RSS feeds but there does appear to be a workaround using Zapier.

Lanyrd, which is owned by Eventbrite, is useful for finding conferences. You can search by keyword, and you can also try to find the URL for a particular location. This tends to begin with lanyrd.com/places/ followed by a place name, for example lanyrd.com/places/liverpool.

lanyrd events in Birmingham

Usefully, places on Lanyrd do have their own RSS feeds, so you can receive updates on all events in that location in an RSS reader. You can also add them directly to your calendar. Both options are in the right-hand column. The site also has a speaker directory, useful for finding experts in a particular field.

Your own specialist or local search engine

If you need to search regularly within a particular group of sites, consider setting up a personalised search engine using Google Custom Search. For example, you might make a list of local public body websites, such as those for all local hospitals, the police and fire services, and the local authority.

Reddit

Chances are that Reddit has a number of forums related to the area you're interested in. For example, there are two Birmingham subreddits (r/brum and r/Birmingham), but also subreddits for local football teams and universities. All have RSS feeds that can be added to an RSS reader: appending .rss to a subreddit's URL (e.g. reddit.com/r/Birmingham/.rss) typically gives you the feed.

Using Facebook lists to create multiple newsfeed channels

Most people know about Twitter lists, but fewer people know you can create lists in Facebook.

Like Twitter lists, these can be useful for following a specific group of people (for example those in a particular industry, organisation or area) and ensuring you can check those updates regularly: remember that most updates from your connections are never shown in your news feed, so this is a way of taking control.

[Image: Facebook friends lists]

Remember to bookmark your friends list once you’ve created it, as otherwise you’ll still have to access it through the Friends menu in Facebook.

Finding people on Facebook based on location or employer

Now, how do you find those people to add to your Facebook lists? If you go to Facebook’s friend requests page you will see a series of search boxes on the right hand side. These allow you to search for people by various criteria, but the most useful are where they live now and their current employer. Look for people who live and work in relevant areas.

facebook friends search boxes

Finding useful pages and groups for journalists on Facebook: Graph Search

How do you find relevant pages and groups on Facebook? Facebook’s Graph Search allows you to identify groups and pages liked or joined by people who live in a particular area, or who have liked or joined other pages or groups.

That sounds complicated as a sentence, so here’s a picture which should be a lot clearer:

Pages liked by people who live in Birmingham

To do this you need to conduct a search in Facebook using a particular sentence structure.

If you type pages liked by people who live in and then start typing a location, Facebook should start to suggest locations that it recognises. Choose the one you mean and Facebook should show you pages that match.

By default, results are shown across all result types (people, groups, pages), so make sure you switch to the Pages tab to see all the matching pages.

Another phrase is pages liked by people who like followed by the name of a page. Again, start typing that name and then select one that Facebook suggests.

pages liked by people who like Aston Villa

To find groups use the phrase Groups joined by people who joined, followed by the name of a relevant group. You can also use Groups joined by people who liked, followed by the name of a relevant page, or Groups joined by people who live in followed by a location.

Groups joined by people who joined Birmingham Freshers 2016

LinkedIn for journalists

LinkedIn has a number of useful features for journalists. One of these is the ability to search specifically for companies. First, make sure you select Companies from the drop-down menu to the left of the search box, then press enter (don’t type any criteria):

Select the Companies option from the drop down menu

You’ll get some initial search results for all companies on LinkedIn. You can now filter those results further by using the Location option on the left. Click + Add and start typing your location until the right one appears to select.

[Image: LinkedIn company search filtered by location]

Use the Companies filter and set the Location filter to get companies near you

It is generally not good practice to send contact requests to individuals on LinkedIn unless you know them. However, as you build your personal contacts it is useful to add them on LinkedIn, because you can choose to receive updates when your contacts are mentioned online:

LinkedIn: Connections in the news

Instagram

It’s easy to underestimate Instagram, but many people find it easier or more natural to use than text-based social networks. It may be the first place that someone shares a newsworthy image or experience.

Obviously the primary way of navigating Instagram is through hashtags. These can be searched in the app, but you can also browse them online by adding your tag to the end of the URL instagram.com/explore/tags/ (e.g. instagram.com/explore/tags/manchester).

A second way of finding useful accounts, however, is geotagging. A much higher proportion of Instagram updates are geotagged compared to posts on other social media platforms. Worldcam allows you to find updates – and therefore users – by location.

[Image: Instagram search]

Snapchat

Snapchat is another social platform being used by an increasingly broad range of people, including politicians and celebrities. I've written previously about 5 techniques for finding people on Snapchat here.


Twitter

I’ve probably written more about finding people on Twitter, and managing Twitter feeds, than about any other social platform. Here is a selection of previous posts covering that.

Source : https://onlinejournalismblog.com

Categorized in Search Techniques

It’s incredible that it took just 18 years for Google -- the company reached this milestone of adulthood on Sept. 27 -- to create a market capitalization of more than $530 billion. It’s perhaps even more amazing to recall how the search engine has changed life as we know it.

Google, now a unit of holding company Alphabet Inc., began in Larry Page and Sergey Brin’s Stanford University dorm in 1998, before campus officials asked them to find a real office after the Stanford IT department complained that the pair were sucking up all the university’s bandwidth.

By the time I joined the company in November of 2001, it was apparent that we were changing the world. As an early employee at Google -- the second attorney hired there -- there were times when shivers ran up my spine thinking about what we were building. Democratizing access to information, and bringing the real world online -- it was an inspiring place to be.

Having grown up in a working class neighborhood, I had to travel to an affluent neighborhood to access a good public library, spending countless Saturday afternoons with volumes of reference books to learn how to apply for financial aid to attend college. In those pre-Internet days, a good library and a kind-hearted librarian were my keys to advancement.

After the printing press, the first major democratization of access to information had been driven a century ago by steel baron Andrew Carnegie. He became the world’s richest man in the late 19th century and then gave it all away, donating $60 million to fund 1,689 public libraries across the United States. To my mind, Google took Carnegie’s vision of putting information in the hands of the general public and put it on steroids, creating a virtual library akin to those found only in sci-fi movies in 1998.

Google indexed the internet extraordinarily well without human intervention, unlike previously curated outlets such as Yahoo! or LexisNexis, and in such a way that the user did not have to know how to use the index or Boolean search methods. Google enabled free searches of words or terms, making all manner of information instantly retrievable even if you did not know where it was housed. With Google, you could find any needle in any haystack at any time. Unlocking that data has indeed been a great equalizer: any individual can arm him or herself with relevant information before seeing a doctor or applying for government assistance, housing or a job.

Getting archives online

Soon, Google could trivially retrieve any piece of data on the World Wide Web. Crucially, Google started indexing information that was previously offline, such as far-flung archives (imagine a very old text in a tower in Salamanca) to make that knowledge searchable. People’s photos and videos followed. Then, of course, Google cars began cruising and mapping our streets. That paired with GPS granted us all a new superpower -- being able to find our way in almost any small town or big city in the world.

Now Google is a global archive storing our history as it is made. It is as though a virtual world is being created right alongside our real world, a simulation of reality that grows more robust by the day. Because of Google, the creation and storage of information itself has expanded exponentially as people and scholars have access to information that enables them to make new discoveries. Those discoveries, in turn, are shared with the world thanks to the culture of sharing that has been central to the internet and Google’s philosophy. All this has sped the pace of discovery.

Of course, there have been casualties. Google has changed the business of newspapers forever and virtually single-handedly run most publishers of maps out of business. It transformed advertising, using and perfecting A/B testing to understand our tastes and what makes a person click on an ad. Sometimes I worry that technology companies have become almost too good at this, building on these lessons and applying them in other ways that collectively suck us into our devices more and more.

This access to information without the curation of trained journalists carries other costs too, leading to an internet rife with misinformation and untruth. Nowhere is that more evident today than in our rancorous U.S. presidential election, where it seems little value is placed on objectivity, making organizations such as factcheck.org essential reading. The growth of Google and the diminution of the established media’s role in our society at such crucial moments might cause Alexis de Tocqueville, who believed newspapers “maintain civilization,” to turn in his grave.

One thing’s for sure: With Google, the future will bring the unexpected and sometimes delightful. Autonomous cars, robots, gesture-sensing fabrics, hands-free controls, modular cell phones and reimagined cities are among the projects that lie ahead for the search giant, which, even as one of the world’s largest companies, has maintained a startup culture at offices that now employ more than 61,000 people.

In breaking out beyond the constraints of the online world into the physical universe, Google has made us believe (and even expect) that when we are inspired by some great purpose, we can transcend limitations. Anything becomes possible.

Source : http://www.foxnews.com/

Categorized in Search Engine

Dive Brief:

  • Google has updated the four-year-old Penguin, which penalizes sites that artificially boost search rankings via poor-quality links, and made it part of the search engine's core algorithm, the company said in a blog post.
  • The key changes, which are among the top requests from website developers, include making Penguin real-time, meaning any changes in rankings will be visible more quickly.
  • Penguin is also more granular, adjusting rankings based on spam signals rather than affecting the ranking of the entire site.

Dive Insight:

As the leading search engine, one of Google’s goals is to ensure strong user experiences. Penguin, which was first introduced in 2012 and last updated in 2014, is the company’s way of weeding out site pages filled with links to unrelated content in an attempt to boost search rankings.

While paid search is Google’s biggest source of revenue, search engine optimization, which Penguin addresses, is important for brands and marketers. With content marketing gaining steam as more consumers spend time online researching and reading about topics of interest, a strong SEO strategy is one of the ways that marketers can drive success for these programs.

Over the past few years, Google has been testing and developing Penguin and now feels it is ready to be part of its core algorithm. In the past, the list of sites affected by Penguin was periodically refreshed. As a result, when sites were improved with an eye toward removing bad links, website developers had to wait until the next refresh before any changes were taken into account by Google’s web crawlers.

Source : http://www.marketingdive.com/

Categorized in Search Engine

Google has removed the search tool that allows users to change their geo-location. Columnist Clay Cazier documents four ways to get around this restriction and emulate a search from any city. 

In late November 2015, Google removed the location search filter from the (shrinking) list of search tools available to refine queries. As search results have become increasingly localized, this significantly limits consumers’ ability to see results for any other location than their own.

Whether you’re a search pro who needs to see clients’ search results as returned within different localities or a normal consumer who wants to see results localized to your next travel destination, the removal of this search tool significantly limits the ability to see the SERP world beyond your own city or country.

Today’s post will provide ways to show localized search results despite Google’s removal of the search tool.

What Does Google Say?

As Google told Search Engine Land, the company maintains that the location search filter “was getting very little usage,” so they removed it. Could it be they removed the search tool but retained the ability via an advanced search screen or something similar? A quick search for “change Google search location” may give you a little hope; there’s an answer box, and even a support article entitled “Change your location on Google.” Problem solved? Unfortunately, no.

Google’s idea of being helpful is telling you how to change the auto-detected search location (usually based on IP address) to a “more precise” location it selects for you, usually based on search history. For me, that meant my location changed from New York City (my corporate IP address) to Columbia, SC (my actual location). But I need to see how my Dallas, TX, client is showing in SERPs localized to that area.

Following are four ways to show localized Google Search results.

1. Google AdPreview

It may be intended for use by Google AdWords participants, but Google’s AdPreview tool is actually available whether you’re logged in or out, regardless of whether you have a Google AdWords account.

In my opinion, this is the easiest and most accurate way not only to emulate a search from a locality other than your own, but also to emulate searches from different devices, languages and countries.

[Image: Google AdPreview tool]

2. ISearchFrom.com

Another simple method is to use the www.isearchfrom.com website. It works a lot like Google’s AdPreview tool but allows a few additional search parameters like Safe Search settings (and a few others that don’t seem to make a difference in the results).

The site’s footer does say it is not actively maintained, so who knows how long this utility will work.

[Image: ISearchFrom.com]

3. Location Emulation In Google Chrome

There is a feature within Google Chrome’s Developer Tools that allows you to emulate any latitude and longitude. Hat tip to the Digital Inspiration blog for this method:

  1. Open the Chrome browser.
  2. Press [CTRL]+[SHIFT]+I to open Developer Tools.
  3. Click “Console” and then the “Emulation” tab. If you do not see the Emulation tab while in the Console, press the [ESC] key and it will appear.
  4. Within the Emulation tab’s navigation, choose “Sensors.”
  5. Check the box next to “Emulate geolocation coordinates.”
  6. Open a new tab with a utility like http://www.latlong.net/ to look up the precise latitude and longitude for a locality.
  7. Copy and paste the latitude and longitude over to the “Emulate geolocation coordinates” input boxes.
  8. Go to Google.com and submit your query to get results that match those you’d get if you were actually in that locality.
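
For example, to emulate the Dallas search from earlier, you would look up Dallas, TX (roughly latitude 32.7767, longitude -96.7970) and paste those two values into the emulation boxes before running your query.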

[Image: Chrome DevTools geolocation emulation]

4. The &near= Search Parameter

There is a URL parameter you can append to your Google search to return results near a certain location — just add &near=cityname to your query string, where cityname is your desired locality.

For example, after searching for “cowboy boots,” add &near=Dallas to the query URL, like so: https://www.google.com/?gws_rd=ssl#q=cowboy+boots&near=Dallas. There’s actually a bookmarklet available online to make this even easier.

With that said, I have noticed the organic search results are slightly different when using the &near= parameter than when using AdPreview or Google Chrome’s location emulation. I don’t totally trust this method.

Final Thoughts

So there you go — four ways to show localized Google search results even though the search tool has been retired. I think it’s clear the AdPreview tool is the easiest, most accurate option, but perhaps you have a method you’d like to share?


Source : http://searchengineland.com/ 

Categorized in Science & Tech

In part two of a three-part series on app indexing, contributors Emily Grossman and Cindy Krum explore how Google indexes deep app content and explain what marketers can do to promote their app content in Google search.

In this article, you’ll learn how Google is surfacing deep app content and how SEOs can prepare iOS and Android deep app screens for Google’s index. Google is making significant moves to close the gap between app and Web content to make mobile interaction more seamless, and that theme will reappear throughout the analysis.

This is the second installment in a three-part series about app indexing strategies and deep linking opportunities. The first article focused on Apple’s new Search API for iOS 9, which encourages and incentivizes an app-centric mobile experience.

Today’s column, co-authored with Cindy Krum, will focus on how Google indexes deep app screens and what marketers can do to promote their app content in Google search. Google’s app indexing strategies differ significantly from Apple’s, and it’s important for marketers to understand the distinctions.

The third article in this series will focus on future app indexing challenges we will face with the growth of wearables and other non-standard device apps and device indexes.

App Indexing In Google

Historically, app landing pages on websites have been in the Google index — but actual apps and internal app screens have not. Because crawling and indexing in-app content was impossible until recently, users had to discover new apps via an app store (Google Play or iTunes), which surfaces apps according to app metadata and editorial groupings rather than in-app content. For digital marketers, internal app content has been unavailable for search — part of what Marshall Simmonds calls “dark search.”

This situation has created a two-fold problem for Google:

  1. App stores had trained users away from using Google for app discovery; and
  2. App developers were historically not incentivized to optimize internal app data for search. This limited Google’s mission to collect and organize the world’s data, which in turn limited its ability to make money.

Now that Google is indexing both app landing pages and deep screens in apps, Google’s app rankings fall into two basic categories, App Packs and App Deep Links. App Packs are much more like the app search results that SEOs are used to, because they link to app download pages in Google Play or the App Store, depending on the device that you are searching from. (App Packs will only show apps that are compatible with your device’s OS.)

Ranking in an App Pack (and also in the Apps Universal, under Google’s top-navigation drop-down in the mobile search results) relies heavily on the app title, description, star ratings and reviews, and it will differ greatly from the internal app store rankings, as well as in-app indexing strategies described in the rest of this article.

Deep links are different because they link to specific deep screens within an app. Google has displayed deep links in search results in a variety of ways since it started app indexing, but there are a couple of standard deep link displays (shown below) that seem more common than others. Some deep-linked results look no different from traditional blue links for websites, while other deep link search results contain more attractive visual elements like colored “install” buttons, app icons and star ratings.

[Image: Examples of deep link display types in Google search results]

We believe that the most common deep link in the future will display the app icon and a small “open on domain.com” button because that allows users to choose between the deep app link and the Web link without an additional dialogue screen. (Currently, the dialogue screen from other types of deep links comes from the bottom of the browser window and says, “Would you like to open this in Chrome or in the [Brand Name] app?”)

It is important to note that aspects of the search context, like the mobile browser, can limit the visibility of deep links. For example, Google only supports app indexing on iOS inside the Google and Chrome apps, not in Mobile Safari, the default Web browser on iOS. It seems likely that Safari will be updated to allow for Google’s deep linking behaviors as part of the iOS 9 update, but it is not confirmed.

Similarly, Google has been experimenting with a “Basic” mobile search results view that omits rich content for searchers with slow carrier connections. “Basic” search results do not include App Packs at all (since downloading an app would not be attractive to people with slow connections), and deep link results will only show as inline blue links, without images, star ratings, icons or buttons.

These are important stipulations to keep in mind as we allocate time and budget to optimizing app indexing, but the benefits of Google app indexing are not limited to surfacing deep app screens in Google search results.

Why Is App Indexing Important For SEO?

Without apps in its index, Google was missing a huge piece of the world’s data. The new ability to index iOS and Android apps has fundamentally changed app discovery and dramatically changed mobile SEO strategies.

Now that Google’s search engine can process and surface deep app content in a similar fashion to the way it does Web content, Google search has a significant advantage over the app stores. It is still the #1 search engine in the world, so it can easily expose content to more potential customers than any app store could, but it can also integrate this new app content with other Google properties like Google Now, Inbox/Gmail and Google Maps.

This change has also added a whole new host of competitors to the mobile search result pages. Now, not only can app landing pages rank, but internal app screens can also compete for the same rankings.

Google’s official position at the moment is that Web parity is necessary for deep app indexing (i.e., crawlable Web content that matches the indexable app content), but at Google I/O, the company clarified that it is working on a non-parity app indexing solution. It has even started promoting an “app only interest form,” and recent live testing has reinforced the idea that apps without parity will soon be added to the index (if they haven’t been already).

This is a big deal, so SEOs should be wary of underestimating the potential market implications of Google indexing apps without Web parity. For marketers and SEOs, it means that mobile search results could soon be flooded with new and attractive competition on a massive scale — content that they never have had to compete with before.

Let’s do a bit of math to really understand the implications.

We’ll start with a broad assumption that there are roughly 24,000 travel apps, a third of which lack Web parity. If each app contains an average of just 1,000 screens (and travel apps often include many more than that), we’re looking at roughly 8,000,000 new search results with which travel websites must compete — and that’s in the travel industry alone. That is huge!

Games, the biggest app category in both stores, promises to create an even bigger disruption in mobile search results, as it is a category that has a very high instance of apps without Web parity.

Another subtle indication of the importance of app indexing is the name change from “Google Webmaster Tools” to “Google Search Console.” Historically, webmasters and SEOs have used Google Webmaster Tools to manage and submit website URLs to Google’s index. We believe the renamed Google Search Console will eventually do the same things for both Web and apps (and possibly absorb the Google Play Console, where Android apps have been managed). In light of that, removing the “Web” reference from the old “Webmaster Tools” name makes a lot of sense.

A similar sentiment by John Mueller, from Google, is noted below, and possibly hints at the larger plan:

[Image: John Mueller's Google+ post]

How Does Google Rank Deep Links?

Like everything else, Google has an algorithm to determine how an indexed deep link should rank in search results. As usual, much about Google’s ranking algorithm is unknown, but we’ve pieced together some of the signals they have announced and inferred a few others. Here’s what we currently believe to be true about how Google is ranking deep links in Google Search:

Known Positive Ranking Factors

  • Installation Status. Android apps are more prominently featured in Google search results when they are installed on a user’s device or have been in the past. Rather than checking the device, Google keeps track of app downloads in their cloud-based user history, so this only affects searchers when they are signed into Google.
  • Proper Technical Implementation. The best way app publishers can drive rankings, according to Mariya Moeva of Google, is to “ensure that the technical implementation of App Indexing is correct and that your content is worth it.” She later elaborated in a YouTube video, explaining that app screens with technical implementation errors will not be indexed at all. (So start befriending the app development team!)
  • Website Signals (title tags, description tags). Traditional SEO elements in the <head> tag of the associated Web page will display in deep link search results, and thus are also likely ranking factors for the deep links. In fact, good SEO on corresponding Web pages is critical, since Google considers the desktop Web version of the page as the canonical indexing of the content.

Known Negative Ranking Factors

  • Content Mismatch. Google will not index app screens that claim to correspond with a Web page but don’t provide enough of the same information. Google will report these “mismatch errors” in Google Search Console, so you can determine which screens need to be better aligned with their corresponding Web pages.
  • Interstitials. Interstitials are JavaScript banners that appear over the content of a website, similar to pop-ups but without generating a new browser window. The same experience can be included in apps (most often for advertisements), but this has been discouraged by both Apple and Google. In her recent Q&A with Stone Temple Consulting, Mariya Moeva implied that app interstitials are a negative ranking factor for deep links (and said to stay tuned for more information soon). Interstitials can also prevent Google from matching your app screen content to your Web page content, which could cause “Content Mismatch Errors” that prevent Google from indexing the app screen entirely. In either case, app and Web developers should stay away from interstitials and instead opt for banners that simply push content down on the screen. Both Apple and Google have endorsed their own forms of app install banners and even offer app banner code templates that can be used to promote a particular app from the corresponding mobile website.

Apart from ranking on their own, app deep links can also provide an SEO benefit for websites. Google has said that indexed app deep links are a positive ranking factor for their associated Web pages, and preliminary studies have shown that Web pages can expect an average site-wide lift of 0.29 positions when deep link markup is in place.

Also, App Packs and App Carousels tend to float to the top of a mobile SERP (likely ranking as a group rather than independently). Presence in these results increases exposure and eliminates a position that a competitor could otherwise occupy lower down in the organic rankings, since these “Packs” and “Carousels” take up spaces that would previously have been held by websites.

Indexed Android apps will also get added exposure in the next release of the Android operating system, Android M. It includes a feature called “Now on Tap,” which represents a deeper integration of Google Now with the rest of the Android phone functionality. Android M allows Google to scan text on an Android user’s screen while in any app, then interpret a “context” from the on-screen text, infer potential queries and automatically display mobile applications that could assist the user with those inferred queries.

For example, a WhatsApp conversation about dinner plans could pull up a “Now on Tap” interface that suggests deep links to specific screens in OpenTable, Google Maps and Yelp. This only works for deep-linked app screens in Google’s index, but for those apps, it will likely drive significantly higher engagement and potentially more installs. From a strategic perspective, this adds another potential location to surface your content, beyond the mobile search results.

While Google will only surface apps it has indexed, it plans on crawling on-screen text in all apps, trying to perceive context for “Now on Tap.” Google doesn’t provide any opt-in mechanism, so Android apps that are not indexed for Google search can still be crawled to trigger a “Now on Tap” experience. This means that Google is essentially reserving the right to send users away from your app to a different app that has relevant screens in the index, but also that Google is allowing your app to “steal” users away from other apps if your app screens are in the index.

This could provide nearly limitless opportunities for “Now on Tap” to suggest apps to Android users, and the “rogue crawling” aspect of it reinforces our prediction that Google will soon be crawling, indexing and surfacing app screens that don’t have Web parity. This will make Google’s app indexing an even more important strategy for Android apps, especially once Android M is widely adopted.

The app rankings advantage is pushed to the next level when you understand that Google is intentionally giving preference to app results for certain queries. In some cases, being an indexed app may be the only way to rank at the top of mobile Google search. Keywords like “games” and “editor” are common triggers for App Packs and App Carousels, but Google is also prominently surfacing apps for queries that seem to be associated with utilities or verbs (e.g., “flight tracker,” “restaurant finder,” or “watch tv”). And when App Packs or Carousels appear, they often push the blue links below the fold (and sometimes way below the fold).

At the end of the day, for some queries, a blue link may not ever beat the “Packs” — in which case, the best strategy may be to focus on App Pack listings over deep links.

How Can I Get Deep App Screens Indexed For Google Search?

Setting up app indexing for Android and iOS Apps is pretty straightforward and well-documented by Google. Conceptually, it is a three-part process:

  1. Enable your app to handle deep links.
  2. Add code to your corresponding Web pages that references deep links.
  3. Optimize for private indexing.

These steps can be taken out of order if the app is still in development, but the second step is crucial; without it, your app will be set up with deep links but not for Google indexing, so the deep links will not show up in Google Search.

NOTE: iOS app indexing is still in limited release with Google, so there is a special form submission and approval process even after you have added all the technical elements to your iOS app. That being said, the technical implementations take some time. By the time your company has finished, Google may have opened up indexing to all iOS apps, and this cumbersome approval process may be a thing of the past.

Following are the steps for Google deep-link indexing.

Step 1: Add Code To Your App That Establishes The Deep Links

A. Pick A URL Scheme To Use When Referencing Deep Screens In Your App

App URL schemes are simply a systematic way to reference the deep linked screens within an app, much like a Web URL references a specific page on a website.

In iOS, developers are currently limited to using Custom URL Schemes, which are formatted in a way that is more natural for app design but different from Web.

In Android, you can choose from either HTTP URL schemes (which look almost exactly like Web URLs) or Custom URL Schemes, or you can use both. If you have a choice and can only support one type of URL Scheme on Android, choose HTTP.

[Image: App URL scheme deep link formats]
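
As a rough illustration with a hypothetical app and domain (neither appears in the original article), the same "gizmos" screen could be referenced in either style:

    Custom URL Scheme:  example-app://gizmos/123
    HTTP URL Scheme:    http://example.com/gizmos/123  (Android only)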

B. Support That App’s URL Schemes In The App

Since iOS and Android apps are built in different frameworks, different code must be added to the app to enable the deep link URL Schemes to work within the specific framework.

[Image: Supporting app URL schemes in iOS and Android]
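
As a sketch of the Android side only: deep link support is declared with an intent filter in the app's AndroidManifest.xml. Assuming the hypothetical com.example.android app and the HTTP scheme above, it might look roughly like this (the iOS side is configured differently, via the app delegate and Info.plist):

    <activity android:name=".GizmosActivity">
        <intent-filter>
            <!-- Open this activity when a matching link is tapped -->
            <action android:name="android.intent.action.VIEW" />
            <category android:name="android.intent.category.DEFAULT" />
            <category android:name="android.intent.category.BROWSABLE" />
            <!-- Matches deep links such as http://example.com/gizmos/123 -->
            <data android:scheme="http"
                  android:host="example.com"
                  android:pathPrefix="/gizmos" />
        </intent-filter>
    </activity>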

C. Set Up CocoaPods

CocoaPods is a dependency management tool for iOS. It acts as a translation layer between iOS apps and the Google SDKs, so it is only necessary in iOS apps. Google has moved all its libraries to CocoaPods, and this will now be the only supported way to source them in an iOS app.

[Image: Setting up CocoaPods]

NOTE: Developers who have never worked with CocoaPods may have to rework how they currently handle all dependent libraries in the app, because once CocoaPods is installed, it is harder and more complicated to handle other non-CocoaPods libraries. There are some iOS developers who favor CocoaPods and have been using them for some time, so your app may already be working with CocoaPods. If that’s true, prepping for iOS app indexing will be much easier.

D. Enable The Back Bar

iOS devices don’t come equipped with a hardware or persistent software “back” button, so Apple and Google have built workarounds to make inter-app back navigation easier. Google requires that iOS apps recognize an additional GSD Custom URL Scheme (that was set up in Step 1B). Google only uses this to trigger a “back” bar in the iOS app.

Google will generate the GSD Custom URLs automatically when someone clicks on an iOS deep link from a search result page, so we don’t need to generate new GSD deep links for every screen; we just need to support the format in the Info.plist file and add code that will communicate with the “GoogleAppIndexing” Pod when a GSD link is received by the app.

[Image: Enabling the back bar]
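
A minimal sketch of the Info.plist side, reusing the hypothetical example-app custom scheme from earlier (the gsd- prefix shown here is our reading of Google's documentation at the time, so treat it as an assumption to verify):

    <key>CFBundleURLTypes</key>
    <array>
        <dict>
            <key>CFBundleURLSchemes</key>
            <array>
                <!-- The app's existing custom scheme -->
                <string>example-app</string>
                <!-- The additional GSD scheme Google uses to trigger the back bar -->
                <string>gsd-example-app</string>
            </array>
        </dict>
    </array>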

NOTE: Google’s solution is similar to Apple’s iOS 9 “Back to Search” buttons that display in the upper left portion of the phone’s Status Bar, but when it is triggered, it appears as a blue “Back Bar” that hovers over the entire phone Status Bar. The Back Bar will disappear after a short period of time if the user does not tap on it. This “disappearing” behavior also represents a unique experience for iOS deep linking in Google, since after a certain period of time, there won’t be a way for iOS users to get back to the Google Search results without switching apps manually, by clicking through the home screen. Developers compensate by adopting more tactics that pull users deeper into the app, eat up time, and distract the user from going back to Google Search until the bar disappears.

E. Set Up Robots & Google Play/Google Search Consoles

In some cases, it may make sense to generate deep links for an app screen but prevent it from showing up in search results. In Android, Google allows us to provide instructions about which screens we would like indexed for search and which we would not, but no similar mechanism is available for iOS.

Digital marketers and SEOs should use the Google Play Console and the Google Search Console to help connect your app to your website and manage app indexation. Also, double check that your website’s robots.txt file allows access to Googlebot, since it will be looking for the Web aspect of the deep links in its normal crawls.

[Image: Robots.txt and Google Play/Search Console setup]

Step 2: Add Code To Your Website That References The URL Schemes You Set Up In The App

A. Format & Validate Web Deep Links For The Appropriate App Store

Google’s current app indexing process relies on Googlebot to discover and index deep links from a website crawl. Code must be added to each Web page that references a corresponding app screen.

When marking up your website, a special deep link format must be used to encode the app screen URL, along with all of the other information Google needs to open a deep link in your app. The required formatting varies slightly for Android and iOS apps and is slightly different from the URL Schemes used in the app code, but they do have some elements in common.

The {scheme} part of the link always refers to the URL scheme set up in your app in Step 1, and the {host_path} is the part of the deep link that identifies the specific app screen being referenced, like the tail of a URL. Other elements vary, as shown below:

[Image: Web deep link formats for Android and iOS]
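
Putting those pieces together, and based on Google's documentation at the time, the assembled deep links follow these patterns (the package name, iTunes ID and paths below are hypothetical):

    Android:  android-app://{package_name}/{scheme}/{host_path}
              e.g. android-app://com.example.android/http/example.com/gizmos

    iOS:      ios-app://{itunes_id}/{scheme}/{host_path}
              e.g. ios-app://123456/example-app/gizmos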

B. Add Web Deep Links To Web Pages With Corresponding App Screens

Internal app screens can be indexed when Googlebot finds deep app links in any of the following locations on your website:

  • In a rel=”alternate” in the HTML <head>
  • In a rel=”alternate” in the XML sitemap
  • In Schema.org ViewAction markup

Sample code formatting for each of those indexing options is included below:

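The original code screenshots are not reproduced here; the following are minimal sketches reconstructed from Google's public app indexing documentation, reusing the hypothetical Android deep link from earlier (an ios-app:// link would be referenced the same way).

In the HTML <head> of the corresponding Web page:

    <html>
    <head>
      <link rel="alternate"
            href="android-app://com.example.android/http/example.com/gizmos" />
    </head>

In an XML sitemap entry:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
      <url>
        <loc>http://example.com/gizmos</loc>
        <xhtml:link rel="alternate"
                    href="android-app://com.example.android/http/example.com/gizmos" />
      </url>
    </urlset>

As Schema.org ViewAction markup, shown here as JSON-LD:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "WebPage",
      "@id": "http://example.com/gizmos",
      "potentialAction": {
        "@type": "ViewAction",
        "target": "android-app://com.example.android/http/example.com/gizmos"
      }
    }
    </script>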

Step 3: Optimize For Private Indexing

Both Google and Apple have a “private” indexing feature that allows individual user behaviors to be associated with specific screens in an app. App activity that is specific to one user can be indexed on that user’s phone, for private consumption only (e.g., a WhatsApp message you’ve viewed or an email you’ve opened in Mailbox).

Activities that are Privately indexed do not generate deep links that can surface in a public Google search result, but instead generate deep links that surface in other search contexts. For Android apps, this is in Chrome’s autocomplete and Google Now; for iOS, this is in Spotlight, Siri, or Safari’s Spotlight Suggest results.

[Image: Optimizing for private indexing]

NOTE: Google’s documentation seems to indicate that Activities are only used for private indexing, but Google may also use them as a measurement of engagement for more global evaluations of an app (as Apple does with NSUserActivities in Apple Search). Google has not highlighted their private indexing feature as vocally as Apple, and a user’s private index can be accessed from the Phone icon in the bottom navigation of the Google Now app on Android and iOS. Currently, only Google’s apps (like Gmail) are able to surface privately indexed content in organic Google search results, but we suspect this will be opened up to third-party apps in the future.

Concluding Remarks

App indexing and deep linking are changing the digital marketing landscape and dramatically altering the makeup of organic mobile search results. They are emerging from the world of “dark search” and becoming a force to be reckoned with in SEO.

Marketers and SEOs can either look at these changes as a threat — another hurdle to overcome — or a new opportunity to get a leg up over the competition. Those who wish to stay on the cutting edge of digital marketing will take heed and learn how to optimize non-HTML content like apps in all of the formats and locations where they surface.

That being said, relying on app deep links alone to drive Google search engine traffic is still not an option. Traditional SEO and mobile SEO are still hugely important for securing a presence in Google’s mobile searches. Google still considers desktop websites the ultimate canonical for keyword crawling and indexing, and the search engine relies heavily on website parity because its strength is still crawling and indexing Web content.

The next big app indexing questions are all about apps that lack Web parity. Google does not currently use a roaming app crawler to discover deep links themselves, but we feel confident that this will change. Google’s App Indexing API currently only helps surface Android apps in autocomplete, but we believe in the future, it will help surface apps that don’t have Web parity.

Calling the system an “App Indexing API” seems to allude to a richer functionality than just adding app auto-complete functionality — and Google’s original app indexing documentation from April also indicated a more robust plan.

As shown in the diagram below, the original documentation explained that developers could use the App Indexing API (also referred to here as “Search Suggest,” which is different from the Search Suggest API) to notify Google of deep links “with or without corresponding Web pages.” That line has since been removed from the documentation, but the implication is clear: Google is paving the way for indexing apps without Web parity. Until that happens, traditional website optimization will remain a key component of optimizing app content for Google search, but when app screens can be indexed without Web parity, there will be a whole new set of ranking factors to consider and optimize for.

[Diagram: Google's original App Indexing API documentation]

As we charge into this new frontier, the immediate benefits of app indexing are clear, but the newness may require a small leap of faith for more traditional marketers and SEOs.

Some may be left suspicious, with many questions: How long will Google provide a ranking benefit for deep-linked content? Will this be perceived as a “bait and switch,” like the Mobile Friendly update? Will app ranking factors evolve to include more traditional Web page ranking factors (like links and social signals)? Will Google begin to crawl app content more indiscriminately, using deep app links like Web links? Will Google develop a new app-specific crawler, or was the April 21 algorithm change (aka “Mobilegeddon”) really a sign that apps are already being crawled, rendered and evaluated by the smartphone crawler, just the same as the Web?

Let us know what you think in the comments.

Source : http://searchengineland.com/

Categorized in Search Engine

A new Google pilot program now allows publishers to describe CSV and other tabular datasets for scientific and government data.

Google has added a new structured data type named Science datasets. This new markup can technically be used by Google for rich cards/rich snippets in the Google search results interface.

Science datasets live in “specialized repositories for datasets in many scientific domains: life sciences, earth sciences, material sciences, and more,” Google said. Google added that “many governments maintain repositories of civic and government data,” which can be used for this as well.

Here is the example Google gave:

For example, consider this dataset that describes historical snow levels in the Northern Hemisphere. This page contains basic information about the data, like spatial coverage and units. Other pages on the site contain additional metadata: who produces the dataset, how to download it, and the license for using the data. With structured data markup, these pages can be more easily discovered by other scientists searching for climate data in that subject area.

This specific schema is not something that Google will show in the search results today. Google said this is something it is experimenting with: “Dataset markup is available for you to experiment with before it’s released to general availability.” Google explained that you should be able to see “previews in the Structured Data Testing Tools,” but “you won’t, however, see your datasets appear in Search.”
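
As a rough sketch of what that experimentation might look like for the snow-levels example above, here is JSON-LD using the schema.org/Dataset vocabulary; the property choices and URLs are our own illustrative assumptions, not Google's published example:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Dataset",
      "name": "Historical snow levels in the Northern Hemisphere",
      "description": "Monthly snow-cover measurements for the Northern Hemisphere.",
      "spatialCoverage": "Northern Hemisphere",
      "license": "http://example.org/dataset-license",
      "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "CSV",
        "contentUrl": "http://example.org/snow-levels.csv"
      }
    }
    </script>

A page carrying markup like this should validate in the Structured Data Testing Tool, even though the dataset won't yet surface in Search.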

Here are the data sets that qualify for this markup:

  • a table or a CSV file with some data;
  • a file in a proprietary format that contains data;
  • a collection of files that together constitute some meaningful dataset;
  • a structured object with data in some other format that you might want to load into a special tool for processing;
  • images capturing the data; and
  • anything that looks like a dataset to you.

Aaron Bradley appears to have been the first to spot this, noting that “with [a] pilot program, Google now allows publishers to describe CSV and other tabular datasets.”

Source : http://searchengineland.com/

Categorized in Search Engine

Google has officially confirmed on the Google Webmaster blog that it has begun rolling out the Penguin 4.0 real-time algorithm. It has been just under two years since the last confirmed Penguin update, which we named Penguin 3.0 in October 2014, and this will be the last time Google confirms a Penguin update. We saw signals yesterday that Google had begun testing Penguin 4.0; Google wouldn't confirm whether those signals were related to this morning's launch announcement, but nevertheless, Penguin 4.0 is now live.

No Future Penguin Confirmations

Google said that because this is a real-time algorithm, "we're not going to comment on future refreshes." By real time, Google means that as soon as it recrawls and reindexes your pages, those signals will be used immediately in the new Penguin algorithm.

Google did this with Panda as well, when it became part of the core algorithm: Google said there would be no more confirmations for Panda.

Penguin 4.0 Is Real Time & More Granular

Google again said this is now rolling out, so you may not see the full impact until the rollout completes. I don't expect the rollout to take long. Google wrote:

  • Penguin is now real-time. Historically, the list of sites affected by Penguin was periodically refreshed at the same time. Once a webmaster considerably improved their site and its presence on the internet, many of Google's algorithms would take that into consideration very fast, but others, like Penguin, needed to be refreshed. With this change, Penguin's data is refreshed in real time, so changes will be visible much faster, typically taking effect shortly after we recrawl and reindex a page. It also means we're not going to comment on future refreshes.
  • Penguin is now more granular. Penguin now devalues spam by adjusting ranking based on spam signals, rather than affecting ranking of the whole site.

The real-time part we understand, it means when Google indexes the page, it will immediately recalculate the signals around Penguin.

Penguin being more "granular" is a bit confusing. I suspect it means that Penguin can now impact sites on a page-by-page basis, as opposed to how it worked in the past, where it impacted the whole site. So really spammy pages or spammy sections of your site can be impacted by Penguin on their own, as opposed to your whole web site. That is my guess; I am trying to get clarification on this.

Google Penguin Timeline:

  • Penguin 1.0: April 24, 2012
  • Penguin 1.1: May 25, 2012
  • Penguin 1.2: October 5, 2012
  • Penguin 2.0: May 22, 2013
  • Penguin 2.1: October 4, 2013
  • Penguin 3.0: October 17, 2014
  • Penguin 4.0: September 23, 2016

Was My Site Impacted By Google Penguin 4.0?

If your site was hit by Penguin 3.0 and you don't see a recovery by now, even a small one, then that probably means you are still impacted. I'd give this a couple of weeks to fully roll out, then check your analytics to see if you have a nice recovery. Again, since specific sections and pages can now be impacted individually, it will be harder to know whether you were affected by this update.

The nice thing is that you can use the disavow file on links you think are hurting you, and you should know pretty quickly (I suspect within days) whether that helped, as opposed to waiting two years for Google to refresh the algorithm. At the same time, you can be hit by Penguin much faster now.

Source : https://www.seroundtable.com

Categorized in Search Engine

AROUND MIDNIGHT ONE Saturday in January, Sarah Jeong was on her couch, browsing Twitter, when she spontaneously wrote what she now bitterly refers to as “the tweet that launched a thousand ships.” The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. In what was meant to be a hyperbolic joke, she tweeted out a list of political caricatures, one of which called the typical Sanders fan a “vitriolic crypto-racist who spends 20 hours a day on the Internet yelling at women.”

The ill-advised late-night tweet was, Jeong admits, provocative and absurd—she even supported Sanders. But what happened next was the kind of backlash that’s all too familiar to women, minorities, and anyone who has a strong opinion online. By the time Jeong went to sleep, a swarm of Sanders supporters were calling her a neoliberal shill. By sunrise, a broader, darker wave of abuse had begun. She received nude photos and links to disturbing videos. One troll promised to “rip each one of [her] hairs out” and “twist her tits clear off.”

The attacks continued for weeks. “I was in crisis mode,” she recalls. So she did what many victims of mass harassment do: She gave up and let her abusers have the last word. Jeong made her tweets private, removing herself from the public conversation for a month. And she took a two-week unpaid leave from her job as a contributor to the tech news site Motherboard.

For years now, on Twitter and practically any other freewheeling public forum, the trolls have been out in force. Just in recent months: Trump’s anti-Semitic supporters mobbed Jewish public figures with menacing Holocaust “jokes.” Anonymous racists bullied African American comedian Leslie Jones off Twitter temporarily with pictures of apes and Photoshopped images of semen on her face. Guardian columnist Jessica Valenti quit the service after a horde of misogynist attackers resorted to rape threats against her 5-year-old daughter. “It’s too much,” she signed off. “I can’t live like this.” Feminist writer Sady Doyle says her experience of mass harassment has induced a kind of permanent self-censorship. “There are things I won’t allow myself to talk about,” she says. “Names I won’t allow myself to say.”

Jigsaw's Jared Cohen: “I want us to feel the burden of the responsibility we’re shouldering.”

Mass harassment online has proved so effective that it’s emerging as a weapon of repressive governments. In late 2014, Finnish journalist Jessikka Aro reported on Russia’s troll farms, where day laborers regurgitate messages that promote the government’s interests and inundate opponents with vitriol on every possible outlet, including Twitter and Facebook. In turn, she’s been barraged daily by bullies on social media, in the comments of news stories, and via email. They call her a liar, a “NATO skank,” even a drug dealer, after digging up a fine she received 12 years ago for possessing amphetamines. “They want to normalize hate speech, to create chaos and mistrust,” Aro says. “It’s just a way of making people disillusioned.”

All this abuse, in other words, has evolved into a form of censorship, driving people offline, silencing their voices. For years, victims have been calling on—clamoring for—the companies that created these platforms to help slay the monster they brought to life. But their solutions generally have amounted to a Sisyphean game of whack-a-troll.

Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. The software is designed to use machine learning to automatically spot the language of abuse and harassment—with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators. “I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” says Jigsaw founder and president Jared Cohen. “To do everything we can to level the playing field.”

Jigsaw is applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.

Conversation AI represents just one of Jigsaw’s wildly ambitious projects. The New York–based think tank and tech incubator aims to build products that use Google’s massive infrastructure and engineering muscle not to advance the best possibilities of the Internet but to fix the worst of it: surveillance, extremist indoctrination, censorship. The group sees its work, in part, as taking on the most intractable jobs in Google’s larger mission to make the world’s information “universally accessible and useful.”

Cohen founded Jigsaw, which now has about 50 staffers (almost half are engineers), after a brief high-profile and controversial career in the US State Department, where he worked to focus American diplomacy on the Internet like never before. One of the moon-shot goals he’s set for Jigsaw is to end censorship within a decade, whether it comes in the form of politically motivated cyberattacks on opposition websites or government strangleholds on Internet service providers. And if that task isn’t daunting enough, Jigsaw is about to unleash Conversation AI on the murky challenge of harassment, where the only way to protect some of the web’s most repressed voices may be to selectively shut up others. If it can find a path through that free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.

 

But slowly, the group’s lofty challenges began to attract engineers, some joining from other parts of Google after volunteering for Cohen’s team. One of their first creations was a tool called uProxy that allows anyone whose Internet access is censored to bounce their traffic through a friend’s connection outside the firewall; it’s now used in more than 100 countries. Another tool, a Chrome add-on called Password Alert, aims to block phishing by warning people when they’re retyping their Gmail password into a malicious look-alike site; the company developed it for Syrian activists targeted by government-friendly hackers, but when it proved effective, it was rolled out to all of Google’s users.

  

“We are not going to be one of those groups that just imagines what vulnerable populations are experiencing. We’re going to get to know our users.”

In February, the group was renamed Jigsaw to reflect its focus on building practical products. A program called Montage lets war correspondents and nonprofits crowdsource the analysis of YouTube videos to track conflicts and gather evidence of human rights violations. Another free service called Project Shield uses Google’s servers to absorb government-sponsored cyberattacks intended to take down the websites of media, election-monitoring, and human rights organizations. And an initiative, aimed at deradicalizing ISIS recruits, identifies would-be jihadis based on their search terms, then shows them ads redirecting them to videos by former extremists who explain the downsides of joining an ultraviolent, apocalyptic cult. In a pilot project, the anti-ISIS ads were so effective that they were in some cases two to three times more likely to be clicked than typical search advertising campaigns.

The common thread that binds these projects, Cohen says, is a focus on what he calls “vulnerable populations.” To that end, he gives new hires an assignment: Draw a scrap of paper from a baseball cap filled with the names of the world’s most troubled or repressive countries; track down someone under threat there and talk to them about their life online. Then present their stories to other Jigsaw employees.

At one recent meeting, Cohen leans over a conference table as 15 or so Jigsaw recruits—engineers, designers, and foreign policy wonks—prepare to report back from the dark corners of the Internet. “We are not going to be one of those groups that sits in our offices and imagines what vulnerable populations around the world are experiencing,” Cohen says. “We’re going to get to know our users.” He speaks in a fast-forward, geeky patter that contrasts with his blue-eyed, broad-shouldered good looks, like a politician disguised as a Silicon Valley executive or vice versa. “Every single day, I want us to feel the burden of the responsibility we’re shouldering.”

“Jigsaw recruits will hear stories about people being tortured for their passwords or of state-sponsored cyberbullying.”

We hear about an Albanian LGBT activist who tries to hide his identity on Facebook despite its real-names-only policy, an administrator for a Libyan youth group wary of government infiltrators, a defector’s memories from the digital black hole of North Korea. Many of the T-shirt-and-sandal-wearing Googlers in the room will later be sent to some of those far-flung places to meet their contacts face-to-face.

“They’ll hear stories about people being tortured for their passwords or of state-sponsored cyberbullying,” Cohen tells me later. The purpose of these field trips isn’t simply to get feedback for future products, he says. They’re about creating personal investment in otherwise distant, invisible problems—a sense of investment Cohen says he himself gained in his twenties during his four-year stint in the State Department, and before that during extensive travel in the Middle East and Africa as a student.

Cohen reports directly to Alphabet’s top execs, but in practice, Jigsaw functions as Google’s blue-sky, human-rights-focused skunkworks. At the group’s launch, Schmidt declared its audacious mission to be “tackling the world’s toughest geopolitical problems” and listed some of the challenges within its remit: “money laundering, organized crime, police brutality, human trafficking, and terrorism.” In an interview in Google’s New York office, Schmidt (now chair of Alphabet) summarized them to me as the “problems that bedevil humanity involving information.”

Jigsaw, in other words, has become ­Google’s Internet justice league, and it represents the notion that the company is no longer content with merely not being evil. It wants—as difficult and even ethically fraught as the impulse may be—to do good.

 
Yasmin Green, Jigsaw’s head of R&D.

IN SEPTEMBER OF 2015, Yasmin Green, then head of operations and strategy for Google Ideas, the working group that would become Jigsaw, invited 10 women who had been harassment victims to come to the office and discuss their experiences. Some of them had been targeted by members of the antifeminist Gamergate movement. Game developer Zoë Quinn had been threatened repeatedly with rape, and her attackers had dug up and distributed old nude photos of her. Another visitor, Anita Sarkeesian, had moved out of her home temporarily because of numerous death threats.

At the end of the session, Green and a few other Google employees took a photo with the women and posted it to the company’s Twitter account. Almost immediately, the Gamergate trolls turned their ire against Google itself. Over the next 48 hours, tens of thousands of comments on Reddit and Twitter demanded the Googlers be fired for enabling “feminazis.”

“It’s like you walk into Madison Square Garden and you have 50,000 people saying you suck, you’re horrible, die,” Green says. “If you really believe that’s what the universe thinks about you, you certainly shut up. And you might just take your own life.”

To combat trolling, services like Reddit, YouTube, and Facebook have for years depended on users to flag abuse for review by overworked staffers or an offshore workforce of content moderators in countries like the Philippines. The task is expensive and can be scarring for the employees who spend days on end reviewing loathsome content—yet often it’s still not enough to keep up with the real-time flood of filth. Twitter recently introduced new filters designed to keep users from seeing unwanted tweets, but it’s not yet clear whether the move will tame determined trolls.

The meeting with the Gamergate victims was the genesis for another approach. Lucas Dixon, a wide-eyed Scot with a doctorate in machine learning, and product manager CJ Adams wondered: Could an abuse-detecting AI clean up online conversations by detecting toxic language—with all its idioms and ambiguities—as reliably as humans?

Show millions of vile Internet comments to Google’s self-improving artificial intelligence engine and it can recognize a troll.

To create a viable tool, Jigsaw first needed to teach its algorithm to tell the difference between harmless banter and harassment. For that, it would need a massive number of examples. So the group partnered with The New York Times, which gave Jigsaw’s engineers 17 million comments from Times stories, along with data about which of those comments were flagged as inappropriate by moderators. Jigsaw also worked with the Wikimedia Foundation to parse 130,000 snippets of discussion around Wikipedia pages. It showed those text strings to panels of 10 people recruited randomly from the CrowdFlower crowdsourcing service and asked whether they found each snippet to represent a “personal attack” or “harassment.” Jigsaw then fed the massive corpus of online conversation and human evaluations into Google’s open source machine learning software, TensorFlow.

Machine learning, a branch of computer science that Google uses to continually improve everything from Google Translate to its core search engine, works something like human learning. Instead of programming an algorithm, you teach it with examples. Show a toddler enough shapes identified as a cat and eventually she can recognize a cat. Show millions of vile Internet comments to Google’s self-improving artificial intelligence engine and it can recognize a troll.
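Jigsaw has not published Conversation AI’s internals, but as a rough sketch of the kind of supervised pipeline described above, here is a minimal text classifier in TensorFlow’s Keras API. The tiny inline dataset and every name in it are illustrative stand-ins, not Jigsaw’s actual data or model:

```python
# A minimal sketch of a supervised text classifier in TensorFlow/Keras.
# The four toy examples stand in for millions of human-labeled comments.
import tensorflow as tf

# Toy labeled examples: 1 = flagged as a personal attack, 0 = benign.
texts = ["you are an idiot", "thanks for the helpful edit",
         "nobody wants you here", "great point, I agree"]
labels = [1, 0, 1, 0]

# Turn raw strings into integer token sequences.
vectorizer = tf.keras.layers.TextVectorization(max_tokens=10000,
                                               output_sequence_length=50)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(10000, 16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # attack probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(tf.constant(texts), tf.constant(labels, dtype=tf.float32),
          epochs=10, verbose=0)

# Score a new comment: output near 1.0 means "looks like an attack".
print(model.predict(tf.constant(["what a pathetic comment"])))
```

The real system was trained on millions of labeled comments rather than four toy strings, but the shape of the task is the same: strings in, an attack probability out.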

In fact, by some measures Jigsaw has now trained Conversation AI to spot toxic language with impressive accuracy. Feed a string of text into its Wikipedia harassment-detection engine and it can, with what Google describes as more than 92 percent certainty and a 10 percent false-positive rate, come up with a judgment that matches a human test panel as to whether that line represents an attack. For now the tool looks only at the content of that single string of text. But Green says Jigsaw has also looked into detecting methods of mass harassment based on the volume of messages and other long-term patterns.

Wikipedia and the Times will be the first to try out Google’s automated harassment detector on comment threads and article discussion pages. Wikimedia is still considering exactly how it will use the tool, while the Times plans to make Conversation AI the first pass of its website’s comments, blocking any abuse it detects until it can be moderated by a human. Jigsaw will also make its work open source, letting any web forum or social media platform adopt it to automatically flag insults, scold harassers, or even auto-delete toxic language, preventing an intended harassment victim from ever seeing the offending comment. The hope is that “anyone can take these models and run with them,” says Adams, who helped lead the machine learning project.

Adams types in “What’s up, bitch?” and clicks Score. Conversation AI instantly rates it a 63 out of 100 on the attack scale.

What’s more, some limited evidence suggests that this kind of quick detection can actually help to tame trolling. Conversation AI was inspired in part by an experiment undertaken by Riot Games, the videogame company that runs the world’s biggest multiplayer world, known as League of Legends, with 67 million players. Starting in late 2012, Riot began using machine learning to try to analyze the results of in-game conversations that led to players being banned. It used the resulting algorithm to show players in real time when they had made sexist or abusive remarks. When players saw immediate automated warnings, 92 percent of them changed their behavior for the better, according to a report in the science journal Nature.

My own hands-on test of Conversation AI comes one summer afternoon in Jigsaw’s office, when the group’s engineers show me a prototype and invite me to come up with a sample of verbal filth for it to analyze. Wincing, I suggest the first ambiguously abusive and misogynist phrase that comes to mind: “What’s up, bitch?” Adams types in the sentence and clicks Score. Conversation AI instantly rates it a 63 out of 100 on the attack scale. Then, for contrast, Adams shows me the results of a more clearly vicious phrase: “You are such a bitch.” It rates a 96.

In fact, Conversation AI’s algorithm goes on to make impressively subtle distinctions. Pluralizing my trashy greeting to “What’s up bitches?” drops the attack score to 45. Add a smiling emoji and it falls to 39. So far, so good.

But later, after I’ve left Google’s office, I open the Conversation AI prototype in the privacy of my apartment and try out the worst phrase that had haunted Sarah Jeong: “I’m going to rip each one of her hairs out and twist her tits clear off.” It rates an attack score of 10, a glaring oversight. Swapping out “her” for “your” boosts it to a 62. Conversation AI likely hasn’t yet been taught that threats don’t have to be addressed directly at a victim to have their intended effect. The algorithm, it seems, still has some lessons to learn.
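The prototype I tested was not a public API, but to make the intended integration concrete, here is a hypothetical sketch of how a comment system might consult such a scorer as a first pass. The endpoint, payload shape, and threshold are all invented for illustration:

```python
# Hypothetical sketch: a forum backend consulting a toxicity scorer
# before publishing a comment. The URL and JSON shape are invented.
import json
import urllib.request

SCORER_URL = "https://example.com/score"  # hypothetical endpoint

def attack_score(comment: str) -> float:
    """Return a 0-100 attack score for a comment (the article's scale)."""
    payload = json.dumps({"text": comment}).encode("utf-8")
    req = urllib.request.Request(SCORER_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["score"]

def moderate(comment: str, threshold: float = 60.0) -> str:
    # Hold high-scoring comments for human review rather than auto-deleting,
    # mirroring how the Times planned to use the tool as a first pass.
    return "hold for review" if attack_score(comment) >= threshold else "publish"
```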

FOR A TECH EXECUTIVE taking on would-be terrorists, state-sponsored trolls, and tyrannical surveillance regimes, Jigsaw’s creator has a surprisingly sunny outlook on the battle between the people who use the Internet and the authorities that seek to control them. “I have a fundamental belief that technology empowers people,” Jared Cohen says. Between us sits a coffee table covered in souvenirs from his travels: a clay prayer coin from Iraq, a plastic-wrapped nut bar from Syria, a packet of North Korean cigarettes. “It’s hard for me to imagine a world where there’s not a continued cat-and-mouse game. But over time, the mouse might just become bigger than the cat.”

 

JIGSAW’S PROJECTS

  • Project Shield
  • Montage
  • Password Alert
  • The Redirect Method
  • Conversation AI
  • Digital Attack Map

When Cohen became the youngest person ever to join the State Department’s Policy Planning Staff in 2006, he brought with him a notion that he’d formed from seeing digitally shrewd Middle Eastern youths flout systems of control: that the Internet could be a force for political empowerment and even upheaval. And as Facebook, then YouTube and Twitter, started to evolve into tools of protest and even revolution, that theory earned him access to officials far above his pay grade—all the way up to secretaries of state Condoleezza Rice and later Hillary Clinton. Rice would describe Cohen in her memoirs as an “inspired” appointment. Former Policy Planning director Anne-Marie Slaughter, his boss under Clinton, remembers him as “ferociously intelligent.”

Many of his ideas had a digital twist. After visiting Afghanistan, Cohen helped create a cell-phone-based payment system for local police, a move that allowed officers to speed up cash transfers to remote family members. And in June of 2009, when Twitter had scheduled downtime for maintenance during a massive Iranian protest against hardliner president Mahmoud Ahmadinejad, Cohen emailed founder Jack Dorsey and asked him to keep the service online. The unauthorized move, which violated the Obama administration’s noninterference policy with Iran, nearly cost Cohen his job. But when Clinton backed Cohen, it signaled a shift in the State Department’s relationship with both Iran and Silicon Valley.

Around the same time, Cohen began calling up tech CEOs and inviting them on tech delegation trips, or “techdels”—conceived to somehow inspire them to build products that could help people in repressed corners of the world. He asked Google’s Schmidt to visit Iraq, a trip that sparked the relationship that a year later would result in Schmidt’s invitation to Cohen to create Google Ideas. But it was Cohen’s email to Twitter during the Iran protests that most impressed Schmidt. “He wasn’t following a playbook,” Schmidt tells me. “He was inventing the playbook.”

The story Cohen’s critics focus on, however, is his involvement in a notorious piece of software called Haystack, intended to provide online anonymity and circumvent censorship. They say Cohen helped to hype the tool in early 2010 as a potential boon to Iranian dissidents. After the US government fast-tracked it for approval, however, a security researcher revealed it had egregious vulnerabilities that put any dissident who used it in grave danger of detection. Today, Cohen disclaims any responsibility for Haystack, but two former colleagues say he championed the project. His former boss Slaughter describes his time in government more diplomatically: “At State there was a mismatch between the scale of Jared’s ideas and the tools the department had to deliver on them,” she says. “Jigsaw is a much better match.”

But inserting Google into thorny geopolitical problems has led to new questions about the role of a multinational corporation. Some have accused the group of trying to monetize the sensitive issues they’re taking on; the Electronic Frontier Foundation’s director of international free expression, Jillian York, calls its work “a little bit imperialistic.” For all its altruistic talk, she points out, Jigsaw is part of a for-profit entity. And on that point, Schmidt is clear: Alphabet hopes to someday make money from Jigsaw’s work. “The easiest way to understand it is, better connectivity, better information access, we make more money,” he explains to me. He draws an analogy to the company’s efforts to lay fiber in some developing countries. “Why would we try to wire up Africa?” he asks. “Because eventually there will be advertising markets there.”

“We’re not a government,” Eric Schmidt says slowly and carefully. “We’re not engaged in regime change. We don’t do that stuff.”

Throwing out well-intentioned speech that resembles harassment could be a blow to exactly the open civil society Jigsaw has vowed to protect. When I ask Conversation AI’s inventors about its potential for collateral damage, the engineers argue that its false positive rate will improve over time as the software continues to train itself. But on the question of how its judgments will be enforced, they say that’s up to whoever uses the tool. “We want to let communities have the discussions they want to have,” says Conversation AI cocreator Lucas Dixon. And if that favors a sanitized Internet over a freewheeling one? Better to err on the side of civility. “There are already plenty of nasty places on the Internet. What we can do is create places where people can have better conversations.”

ON A MUGGY MORNING in June, I join Jared Cohen at one of his favorite spots in New York: the Soldiers’ and Sailors’ Monument, an empty, expansive, tomblike dome of worn marble in sleepy Riverside Park. When Cohen arrives, he tells me the place reminds him of the quiet ruins he liked to roam during his travels in rural Syria.

Our meeting is in part to air the criticisms I’ve heard of Conversation AI. But when I mention the possibility of false positives actually censoring speech, he answers with surprising humility. “We’ve been asking these exact questions,” he says. And they apply not just to Conversation AI but to everything Jigsaw builds, he says. “What’s the most dangerous use case for this? Are there risks we haven’t sufficiently stress-tested?”

Jigsaw runs all of its projects by groups of beta testers and asks for input from the same groups it intends to recruit as users, he says. But Cohen admits he never knows if they’re getting enough feedback, or the right kind. Conversation AI in particular, he says, remains an experiment. “When you’re looking at curbing online harassment and at free expression, there’s a tension between the two,” he acknowledges, a far more measured response than what I’d heard from Conversation AI’s developers. “We don’t claim to have all the answers.”

And if that experiment fails, and the tool ends up harming the exact free speech it’s trying to protect, would Jigsaw kill it? “Could be,” Cohen answers without hesitation.

 

I start to ask another question, but Cohen interrupts, unwilling to drop the notion that Jigsaw’s tools may have unintended consequences. He wants to talk about the people he met while wandering through the Middle East’s most repressive countries, the friends who hosted him and served as his guides, seemingly out of sheer curiosity and hospitality.

It wasn’t until after Cohen returned to the US that he realized how dangerous it had been for them to help him or even to be seen with him, a Jewish American during a peak of anti-Americanism. “My very presence could have put them at risk,” he says, with what sounds like genuine throat-tightening emotion. “To the extent I have a guilt I act on, it’s that. I never want to make that mistake again.”

Cohen still sends some of those friends, particularly ones in the war-torn orbit of Syria and ISIS, an encrypted message almost daily, simply to confirm that they’re alive and well. It’s an exercise, like the one he assigns to new Jigsaw hires but designed as maintenance for his own conscience: a daily check-in to assure himself his interventions in the world have left it better than it was before.

“Ten years from now I’ll look back at where my head is at today too,” he says. “What I got right and what I got wrong.” He hopes he’ll have done good.

Source : https://www.wired.com

Categorized in Internet Privacy

Google is an amazingly powerful tool for finding information online.

Many of us use it daily in our personal and professional lives for all kinds of purposes. In its speedy and seamless way, Google retrieves web material based on keywords entered in its search box.

Although its relevancy ranking algorithm is a closely guarded trade secret and a big reason for Google’s success as the world’s most popular search engine, we know basically how it works. You simply type in words or phrases and Google will retrieve web sources that match those terms.

The ranking of those sources is based on such things as how many times your search-term words appear, where they appear (e.g. title), and how many other websites link to those sources.
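To make those ingredients concrete, here is a deliberately toy scoring function. The weights are invented for illustration; Google’s real algorithm is secret and vastly more sophisticated:

```python
# Toy illustration of the ranking ingredients just described:
# term frequency, placement (e.g. in the title), and inbound links.
def toy_rank_score(query_terms, page):
    score = 0.0
    body = page["body"].lower().split()
    title = page["title"].lower().split()
    for term in query_terms:
        score += body.count(term.lower())          # how often the term appears
        score += 5.0 * title.count(term.lower())   # title matches weigh more
    score += 0.1 * page["inbound_links"]           # other sites linking here
    return score

pages = [
    {"title": "ISIS explained", "body": "a primer on isis and its origins",
     "inbound_links": 120},
    {"title": "World news roundup", "body": "isis isis isis",
     "inbound_links": 3},
]
# Same query, same pages -> same ordering, for every user.
print(sorted(pages, key=lambda p: toy_rank_score(["ISIS"], p), reverse=True))
```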

Since Google is simply yet precisely executing a series of steps matching and weighting those terms, Donald Trump and Hillary Clinton should get identical search results if each entered the same words or phrases such as “ISIS” or “Black Lives Matter.” Or so we would think.

In 2011, political activist and web organizer Eli Pariser wrote the book "The Filter Bubble: How the New Personalized Web is Changing What We Read and How We Think." In it, he revealed that Google search results may in fact vary widely from user to user.

Why? Because in an effort to personalize your search results, Google will feed you sources that match your interests. And just how does Google know your interests? Because it maintains a log of all of your past Google searches and sites viewed, that’s how.

This kind of personalization of the web is widespread. Anyone who shops on Amazon or uses Netflix knows that those services review the items you have purchased or just simply browsed and then offer recommendations for other similar books or movies.

In the case of Google, most users realize that the ads that appear with their search results are connected to their own search history. But personalization also affects which sources are retrieved and the order in which they are displayed. While it may be seen as a benefit to have Google customize your search results, it comes with some serious consequences.
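As a toy illustration of that effect, the sketch below reorders otherwise identical results by their overlap with a user’s logged interests. The profiles and boost factor are invented for illustration:

```python
# Toy sketch of personalization: boost pages whose topics overlap
# with a user's logged history, so identical queries diverge per user.
def personalized_order(results, user_history_topics):
    def boost(page):
        overlap = len(set(page["topics"]) & set(user_history_topics))
        return page["base_score"] * (1.0 + 0.5 * overlap)
    return sorted(results, key=boost, reverse=True)

results = [
    {"url": "left-leaning-take.example",
     "topics": ["politics", "progressive"], "base_score": 1.0},
    {"url": "right-leaning-take.example",
     "topics": ["politics", "conservative"], "base_score": 1.0},
]
# Two users, one identical query, two different "bubbles":
print(personalized_order(results, ["progressive", "climate"]))
print(personalized_order(results, ["conservative", "business"]))
```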

In its early days, the internet was seen as a marvelous way to broaden one’s world by making it easier to disseminate and retrieve information. The web seemed to embody the true spirit of democracy by providing free and equal access to information for all. And though that is still largely true, the filter bubble has had a substantial narrowing effect on the information we receive through web services.

A recent study by the Pew Research Center showed that 62 percent of adults in the U.S. get their news from social media sites, and that 18 percent do so often. And the leader of the social media pack is, as you might guess, Facebook. Earlier this year, a former Facebook employee charged that Facebook suppressed conservative stories from its news feed. After much media attention and a denial, CEO Mark Zuckerberg convened a group of conservatives to discuss the issue and build trust between them and Facebook.

Facebook recently made the news again when the New York Times reported last month that Facebook profiles its users by their political leanings, among other things. Like Google, Facebook knows every post or site you read or liked, every ad you followed, every Facebook friend you have, and categorizes you accordingly. To find out how Facebook has labeled you politically, go to http://nyti.ms/2bfm2gU.

Also significant is the amount of political information produced and shared exclusively within Facebook. There are numerous political organizations, ranging from Occupy Democrats to The Angry Patriot, that host Facebook pages where they post their views. These posts may be shared, liked and thus circulated to a large readership. Taken together, these sites reach a combined audience of tens of millions of people, comparable in size to the audiences of CNN and the New York Times, which reported this story in August.

The moral to this story is “user beware.” If you can spare nine minutes, watch Eli Pariser’s TED Talk on the filter bubble. It will forever change your view of the neutrality of the web and make you more aware of the type of information you are fed online.

Source : http://www.pressrepublican.com

Categorized in Search Engine
