
Google is bringing fact check information to image search results worldwide starting today.

Google is adding “Fact Check” labels to thumbnails in image search results in a continuation of its fact check efforts in Search and News.

Google explained in its announcement: “Photos and videos are an incredible way to help people understand what’s going on in the world. But the power of visual media has its pitfalls—especially when there are questions surrounding the origin, authenticity or context of an image.”

This change is being rolled out today to help people navigate issues around determining the authenticity of images, and make more informed decisions about the content they consume.

When you see certain pictures in Google Images, such as a shark swimming down the street in Houston, Google will attach a “Fact Check” label underneath the thumbnail.

Google illustrated the feature with a tweet: “Is that image of a shark swimming down a street in Houston real? Google Images now has ‘Fact Check’ labels to help inform you in some cases like this (no, it was not real). Our post today explains more about how & when fact checks appear in Google Images: https://www.blog.google/products/search/bringing-fact-check-information-google-images/”

After tapping on a fact-checked result to view a larger preview of the image, Google will display a summary of the information contained on the web page where the image is featured.

A “Fact Check” label will only appear on select images that come from independent, authoritative sources on the web. It’s not exactly known what criteria a publisher needs to meet in order to be considered authoritative.

According to a help page, Google uses an algorithm to determine which publishers are trusted sources.

Google also relies on ClaimReview structured data markup, which publishers must add to their pages to flag fact check content to search engines.
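To make this concrete, here is a minimal sketch of ClaimReview markup expressed as a Python dictionary and serialized to JSON-LD. The URLs, claim text, and rating values are hypothetical placeholders; publishers should follow Google's fact check documentation for the exact properties required on their pages.

import json

# Hypothetical ClaimReview structured data for a fact check about a viral image.
# Every URL, name, and rating below is a placeholder, not a real fact check.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.com/shark-houston-street",  # page hosting the fact check
    "claimReviewed": "A photo shows a shark swimming down a flooded street in Houston.",
    "itemReviewed": {
        "@type": "Claim",
        "datePublished": "2020-06-22",
        "appearance": {"@type": "CreativeWork", "url": "https://example.com/viral-post"},
    },
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,  # publisher-defined scale, here 1 (false) to 5 (true)
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# The serialized JSON-LD would be embedded on the page in a script tag of type application/ld+json.
print(json.dumps(claim_review, indent=2))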

Fact Check labels may appear both for fact check articles about specific images and for fact check articles that include an image in the story.

As mentioned at the beginning of this article, Google already highlights fact checks in regular search results and Google News. YouTube also utilizes ClaimReview to surface fact check information panels in Brazil, India and the U.S.

Google says its fact check labels are surfaced billions of times per year.

While adding ClaimReview markup is encouraged, being eligible to serve a Fact Check label does not affect rankings. This goes for Google Search, Google Images, Google News, and YouTube.

 [Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Olivia Russell]

Categorized in Search Engine

Google has started rolling out a new core search algorithm update that it calls the May 2020 Core Update. The new update comes months after the search giant released the last core algorithm update back in January. The goal behind updates like these is to improve the quality of results that users get when they enter a search query into the site. While this is good for an end user, many sites might see their performance fluctuate as a result of the core update. To avoid its results being manipulated, Google doesn't give out the details of its routine updates, merely advising content creators to focus on quality content.

A tweet posted by Google earlier today announced that the May 2020 Core Update has started rolling out for all users. The update would, however, take about one to two weeks to fully roll out.

Broader update to enhance Google search experience
The new update is a broad core algorithm update that brings a range of changes to Google's search algorithms and systems. This is unlike the regular changes that the company releases incrementally to improve search results.
“Several times a year, we make significant, broad changes to our search algorithms and systems. We refer to these as ‘core updates.' They're designed to ensure that overall, we're delivering on our mission to present relevant and authoritative content to searchers,” Google noted in a blog post defining the core algorithm updates.

Apart from other changes, core updates are likely to affect Google Discover. Some sites are also expected to note drops or gains during such updates.

“We know those with sites that experience drops will be looking for a fix, and we want to ensure they don't try to fix the wrong things. Moreover, there might not be anything to fix at all,” the search giant said.

Having said that, webmasters and search engine optimisation (SEO) teams are advised to stay focussed on delivering quality content through their sites. The content should provide original information, reporting, research, or analysis along with a comprehensive description of the topic. It is also recommended to have a descriptive headline that is not exaggerated or sensationalist. Furthermore, there is a list of content, quality, expertise, and comparative questions that webmasters and SEO folks should ask themselves about their content.

Drops, gains from search algorithm updates are common
Since Google makes changes at the algorithm level, it is natural that some websites see drops in traffic while others see gains. There is no hard and fast rule for fixing such impacts once an update starts rolling out. Nevertheless, it's better to review your analytics to understand ranking changes for your website.

The last core algorithm update that Google brought to its search engine took place in January. The company had also introduced a new design for desktop searches around the same time, which initially faced some backlash from users.

[Source: This article was published in gadgets.ndtv.com By Jagmeet Singh - Uploaded by the Association Member: Deborah Tannen]

Categorized in Search Engine

Google’s new BERT algorithm means search engine marketers will need to change their approach to keyword research in 2020 and focus more on intent research. Adam Bunn, the Director of SEO, Content & Digital PR at Greenlight Digital, looks at what companies should expect from keyword searches in 2020.

The way people search online is changing. The introduction of Google’s ‘BERT’ algorithm (Bidirectional Encoder Representations from Transformers) is evidence of this and highlights the complexity with which people have begun to utilise search engines. BERT utilises Natural Language Processing (NLP) which helps analyse natural human language beyond just basic keywords and gather more information about how all the words in a query relate to each other.

In this way, BERT can look at the search query as a whole, rather than focusing on independent keywords in an attempt to reveal, and then prioritise, the intent of the search. This ensures the search results are the most relevant not only when it comes to the specific topic the user is researching, but also to the user’s intention behind the search.

As a result of this change in the way search engine queries are being performed, marketers must adapt to the way they tackle Search Engine Optimisation (SEO). Fréderic Dubut, Senior Programme Manager Lead at Bing, recently said that search engine marketers must change their approach to keyword research in the following year and focus more on intent research. But does this mean keywords are going to become redundant?

Voice search changing SEO terms

BERT is one of Google’s biggest updates to web search in recent years. It uses NLP to better understand the context of search queries by taking into account things such as linking words (and, the, from) or interrogative markers (how, what, which). While some users have learned to query Google using unconnected, ungrammatical keywords such as ‘best digital strategy SEO’, the popularisation of voice search demands that search engines understand the way people naturally speak and look beyond just keywords.

Voice search produces queries that use conversational language such as “what is the best digital strategy for SEOs”, which means they require NLP in order to render the best results. BERT can also take into account previous recent searches to some extent, in a similar way to how a regular conversation works. Asking “How long does driving there take?” after inquiring about the location of the nearest supermarket will provide relevant results without the user having to specify the supermarket again.

In today’s fast-paced information age, users are no longer willing to spend time going through countless search results to find the page that delivers the information they are looking for. Many people don’t even go beyond the first page of Google’s search results nowadays. As such, search engines are looking to provide results which are relevant not only to the keywords a user puts into the search engine, but also to the ‘why’ behind a search query: the search intent. In other words, search engine results pages (SERPs) are optimised to understand what direct action a user wants to undertake through their search result (learn, purchase, find a specific website etc.) and prioritise the specific websites that match that intent.

Shifting from keywords to intent

As search engines become more advanced, incorporating more intent-based models and practices into research should be a key focus for digital marketers in 2020. However, intent research models can be quite subjective from a classification perspective as they rely on one person’s perspective to decide the user intentions behind a list of keywords. Moreover, the main types of search intent – informational, navigational, transactional and commercial – are very broad and, realistically, not very actionable.

For intent research to be most effective, marketers need to have reliable data metrics, such as click-through rates and conversion rates, to support the agreed intention behind a keyword. This allows them to create relevant lists of purchase and research intention keywords whilst ensuring the keywords they use for a specific search intent are the most relevant to their users. Checking SERP statistics for keyword reliability over time also provides insight on which keywords are best to target for a specific type of intent.
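As a rough illustration of the idea, the sketch below groups keywords into purchase and research intent buckets using a conversion rate threshold. The keywords, metric values, and cutoff are invented for the example and are not recommended benchmarks.

# Hypothetical sketch: bucket keywords by likely intent using simple metric thresholds.
# Keywords, metrics, and the cutoff value are invented for illustration only.
keyword_metrics = {
    "buy running shoes online": {"ctr": 0.08, "conversion_rate": 0.05},
    "best running shoes 2020": {"ctr": 0.06, "conversion_rate": 0.01},
    "how to choose running shoes": {"ctr": 0.04, "conversion_rate": 0.002},
}

def classify_intent(metrics, conversion_cutoff=0.02):
    # Very crude split: a high conversion rate suggests purchase intent, otherwise research intent.
    return "purchase" if metrics["conversion_rate"] >= conversion_cutoff else "research"

for keyword, metrics in keyword_metrics.items():
    print(f"{keyword!r}: {classify_intent(metrics)} intent")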

There is still room for keywords

Understanding search intent, and taking it into account when delivering the most relevant answer, is ultimately Google’s priority. While keywords are still a big part of search queries, digital marketers must understand that relying solely on keywords is not enough for SEO anymore. Backlinks and other traditional Google ranking factors are still important, but if the page doesn’t meet the user’s search intent, it’s not going to rank highly on SERPs.

However, that doesn’t mean that keywords are going to become obsolete. John Mueller, Senior Webmaster Trends Analyst at Google, agreed that keywords are always going to be helpful, even if they are not the main focus. Showing specific words to users makes it easier for them to understand what the page is about which, in turn, provides a better user experience.

Ultimately, optimising for user experience should be key in 2020 and shifting an SEO strategy to prioritise search intent is part of that. Focusing more on intent research and enforcing more intent-based practices off the back of keyword research is definitely something we’ll see more of in 2020.

By Adam Bunn
Director of SEO, Content & Digital PR
Greenlight Digital

[Source: This article was published in netimperative.com By Robin - Uploaded by the Association Member: Clara Johnson]

Categorized in Search Engine

It’s not paid inclusion, but it is paid occlusion

Happy Friday to you! I have been reflecting a bit on the controversy du jour: Google’s redesigned search results. Google is trying to foreground sourcing and URLs, but in the process it made its results look more like ads, or vice versa. Bottom line: Google’s ads just look like search results now.

I’m thinking about it because I have to admit that I don’t personally hate the new favicon-plus-URL structure. But I think that might be because I am not a normal consumer of web content. I’ve been on the web since the late ’90s and I parse information out of URLs kind of without thinking about it. (In fact, the relative decline of valuable information getting encoded into the URL is a thing that makes me sad.)

I admit that I am not a normal user. I set up custom Chrome searches and export them to my other browsers. I know what SERP means and the term kind of slips out in regular conversation sometimes. I have opinions about AMP and its URL and caching structure. I’m a weirdo.

As that weirdo, Google’s design makes perfect sense and it’s possible it might do the same for regular folk. The new layout for search results is ugly at first glance — but then Google was always ugly until relatively recently. I very quickly learned to unconsciously take in the information from the top favicon and URL-esque info without it really distracting me.

...Which is basically the problem. Google’s using that same design language to identify its ads instead of much more obvious, visually distinct methods. It’s consistent, I guess, but it also feels deceptive.

Recode’s Peter Kafka recently interviewed Buzzfeed CEO Jonah Peretti, and Peretti said something really insightful: what if Google’s ads really aren’t that good? What if Google is just taking credit for clicks on ads just because people would have been searching for that stuff anyway? I’ve been thinking about it all day: what if Google ads actually aren’t that effective and the only reason they make so much is billions of people use Google?

The pressure to make them more effective would be fairly strong, then, wouldn’t it? And it would get increasingly hard to resist that pressure over time.

I am old enough to remember using the search engines before Google. I didn’t know how bad their search technology was compared to what was to come, but I did have to bounce between several of them to find what I wanted. Knowing what was a good search for WebCrawler and what was good for Yahoo was one of my Power User Of The Internet skills.

So when Google hit, I didn’t realize how powerful and good the PageRank technology was right away. What I noticed right away is that I could trust the search results to be “organic” instead of paid and that there were no dark patterns tricking me into clicking on an ad.

One of the reasons Google won search in the first place with old people like me was that in addition to its superior technology, it drew a harder line against allowing paid advertisements into its search results than its competitors.

With other search engines, there was the problem of “paid inclusion,” which is the rare business practice that does exactly what the phrase means. You never really knew if what you were seeing was the result of a web-crawling bot or a business deal.

This new ad layout doesn’t cross that line, but it’s definitely problematic and it definitely reduces my trust in Google’s results. It’s not so much paid inclusion as paid occlusion.

Today, I still trust Google to not allow business dealings to affect the rankings of its organic results, but how much does that matter if most people can’t visually tell the difference at first glance? And how much does that matter when certain sections of Google, like hotels and flights, do use paid inclusion? And how much does that matter when business dealings very likely do affect the outcome of what you get when you use the next generation of search, the Google Assistant?

And most of all: if Google is willing to visually muddle ads, how long until its users lose trust in the algorithm itself? With this change, Google is becoming what it once sought to overcome: AltaVista.


[Source: This article was published in theverge.com By Barry Schwartz - Uploaded by the Association Member: James Gill]

Categorized in Search Engine

Ever had to search for something on Google, but you’re not exactly sure what it is, so you just use some language that vaguely implies it? Google’s about to make that a whole lot easier.

Google announced today it’s rolling out a new machine learning-based language understanding technique called Bidirectional Encoder Representations from Transformers, or BERT. BERT helps decipher your search queries based on the context of the language used, rather than individual words. According to Google, “when it comes to ranking results, BERT will help Search better understand one in 10 searches in the U.S. in English.”

Most of us know that Google usually responds to words, rather than to phrases — and Google’s aware of it, too. In the announcement, Pandu Nayak, Google’s VP of search, called this kind of searching “keyword-ese,” or “typing strings of words that they think we’ll understand, but aren’t actually how they’d naturally ask a question.” It’s amusing to see these kinds of searches — heck, Wired has made a whole cottage industry out of celebrities reacting to these keyword-ese queries in their “Autocomplete” video series — but Nayak’s correct that this is not how most of us would naturally ask a question.

As you might expect, this subtle change might make some pretty big waves for potential searchers. Nayak said this “[represents] the biggest leap forward in the past five years, and one of the biggest leaps forward in the history of Search.” Google offered several examples of this in action, such as “Do estheticians stand a lot at work,” which apparently returned far more accurate search results.

I’m not sure if this is something most of us will notice — heck, I probably wouldn’t have noticed if I hadn’t read Google’s announcement, but it’ll sure make our lives a bit easier. The only reason I can see it not having a huge impact at first is that we’re now so used to keyword-ese, which is in some cases more economical to type. For example, I can search “What movie did William Powell and Jean Harlow star in together?” and get the correct result (Libeled Lady; not sure if that’s BERT’s doing or not), but I can also search “William Powell Jean Harlow movie” and get the exact same result.

BERT will only be applied to English-based searches in the US, but Google is apparently hoping to roll this out to more countries soon.

[Source: This article was published in thenextweb.com By RACHEL KASER - Uploaded by the Association Member: Dorothy Allen]

Categorized in Search Engine

Google is the search engine that most of us know and use, so much so that the word Google has become synonymous with search. As of Sept 2019, the search engine giant has captured 92.96% of the market share. That’s why it has become of the utmost importance for businesses to rank better in Google search results if they want to be noticed. That’s where SERP or “Search Engine Results Page” scraping can come in handy. Whenever a user searches for something on Google, they get a SERP result which consists of paid Google Ads results, featured snippets, organic results, videos, product listings, and things like that. Tracking these SERP results using a service like Serpstack is necessary for businesses that either want to rank their products or help other businesses to do the same.

Manually tracking SERP results is next to impossible as they vary highly depending on the search query, the origin of queries, and a plethora of other factors. Also, the number of listings returned for a single search query is so high that manual tracking makes no sense at all. Serpstack, on the other hand, is an automated Google Search Results API that can automatically scrape real-time and accurate SERP results data and present it in an easy to consume format. In this article, we are going to take a brief look at Serpstack to see what it brings to the table and how it can help you track SERP results data for keywords and queries that are important for your business.

Serpstack REST API for SERP Data: What Does It Bring?

Serpstack’s JSON REST API for SERP data is fast and reliable and always gives you real-time, accurate search results data. The service is trusted by some of the largest brands in the world. The best part about Serpstack, apart from its reliable data, is the fact that it can scrape Google search results at scale. Whether you need one thousand or one million results, Serpstack can handle it with ease. Not only that, but Serpstack also brings built-in solutions for problems such as global IPs, browser clusters, or CAPTCHAs, so you as a user don’t have to worry about anything.
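For illustration, here is a minimal sketch of how such an API call might look in Python, assuming Serpstack's documented /search endpoint with its access_key and query parameters; the access key, query, and the organic_results field name are placeholders and assumptions based on the service's documentation, not tested values.

import requests

# Minimal sketch of a Serpstack request; replace the placeholder access key with your own.
# Note: per the article, the free tier does not include HTTPS, so it would use http:// instead.
params = {
    "access_key": "YOUR_ACCESS_KEY",  # placeholder issued when you sign up
    "query": "best digital strategy for SEOs",
}

response = requests.get("https://api.serpstack.com/search", params=params)
response.raise_for_status()
data = response.json()

# Print the title and URL of each organic result returned for the query (field name assumed).
for result in data.get("organic_results", []):
    print(result.get("title"), "->", result.get("url"))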


If you decide to give Serpstack REST API a chance, here are the main features that you can expect from this service:

  • Serpstack is scalable and queueless thanks to its powerful cloud infrastructure which can withstand high volume API requests without the need of a queue.
  • The search queries are highly customizable. You can tailor your queries based on a series of options including location, language, device, and more, so you get the data that you need.
  • Built-in solutions for problems such as global IPs, browser clusters, and CAPTCHAs.
  • It brings simple integration. You can start scraping SERP pages at scale within a few minutes of logging into the service.
  • Serpstack features bank-grade 256-bit SSL encryption for all its data streams. That means your data is always protected.
  • An easy-to-use REST API responding in JSON or CSV, compatible with any programming language.
  • With Serpstack, you are getting super-fast scraping speeds. All the API requests sent to Serpstack are processed in a matter of milliseconds.
  • Clear Serpstack API documentation which shows you exactly how you can use this service. It makes the service beginner-friendly and you can get started even if you have never used a SERP scraping service before.

Looking at the feature list above, I hope you can understand why Serpstack is one of the best, if not the best, SERP scraping services on the market. I am especially astounded by its scalability, incredibly fast speed, and built-in privacy and security protocols. However, there’s one more thing that we have not discussed yet which pushes it to the top spot for me, and that is its pricing. Well, that’s what we are going to discuss in the next section.

Pricing and Availability

Serpstack’s pricing is what makes it accessible for both individuals and small & large businesses. It offers a capable free version which should serve the needs of most individuals and even smaller businesses. If you are operating a larger business that requires more, you have various pricing plans to choose from depending on your requirements. Talking about the free plan first, the best part is that it’s free forever and there are no hidden charges. The free version gets you 100 searches/month with access to global locations, proxy networks, and all the main features. The only big missing feature is HTTPS encryption.


Once you are ready to pay, you can start with the Basic plan, which costs $29.99/month ($23.99/month if billed annually). In this plan, you get 5,000 searches/month along with the features missing from the free plan. I think this plan should be enough for most small to medium-sized businesses. However, if you require more, there’s a Business plan at $99.99/month ($79.99/month if billed annually) which gets you 20,000 searches per month, and a Business Pro plan at $199.99/month ($159.99/month if billed annually) which gets you 50,000 searches per month. There’s also a custom pricing solution for companies that require a tailored pricing structure.

Serpstack Makes Google Search Results Scraping Accessible

SERP scraping is important if you want to compete in today’s world. Seeing which queries fetch which results is an important step in identifying your competitors. Once you know them, you can devise an action plan to compete with them. Without SERP data, your business will have a big disadvantage in the online world. So, use Serpstack to scrape SERP data so you can build a successful online business.

[Source: This article was published in beebom.com By Partner Content - Uploaded by the Association Member: Dorothy Allen]

Categorized in Search Engine

[Source: This article was Published in theverge.com BY James Vincent - Uploaded by the Association Member: Jennifer Levin] 

A ‘tsunami’ of cheap AI content could cause problems for search engines

Over the past year, AI systems have made huge strides in their ability to generate convincing text, churning out everything from song lyrics to short stories. Experts have warned that these tools could be used to spread political disinformation, but there’s another target that’s equally plausible and potentially more lucrative: gaming Google.

Instead of being used to create fake news, AI could churn out infinite blogs, websites, and marketing spam. The content would be cheap to produce and stuffed full of relevant keywords. But like most AI-generated text, it would only have surface meaning, with little correspondence to the real world. It would be the information equivalent of empty calories, but still potentially difficult for a search engine to distinguish from the real thing.

Just take a look at this blog post answering the question: “What Photo Filters are Best for Instagram Marketing?” At first glance, it seems legitimate, with a bland introduction followed by quotes from various marketing types. But read a little more closely and you realize it references magazines, people, and — crucially — Instagram filters that don’t exist:

You might not think that a mumford brush would be a good filter for an Insta story. Not so, said Amy Freeborn, the director of communications at National Recording Technician magazine. Freeborn’s picks include Finder (a blue stripe that makes her account look like an older block of pixels), Plus and Cartwheel (which she says makes your picture look like a topographical map of a town.

The rest of the site is full of similar posts, covering topics like “How to Write Clickbait Headlines” and “Why is Content Strategy Important?” But every post is AI-generated, right down to the authors’ profile pictures. It’s all the creation of content marketing agency Fractl, who says it’s a demonstration of the “massive implications” AI text generation has for the business of search engine optimization, or SEO.

“Because [AI systems] enable content creation at essentially unlimited scale, and content that humans and search engines alike will have difficulty discerning [...] we feel it is an incredibly important topic with far too little discussion currently,” Fractl partner Kristin Tynski tells The Verge.

To write the blog posts, Fractl used an open source tool named Grover, made by the Allen Institute for Artificial Intelligence. Tynski says the company is not using AI to generate posts for clients, but that this doesn’t mean others won’t. “I think we will see what we have always seen,” she says. “Blackhats will use subversive tactics to gain a competitive advantage.”

The history of SEO certainly supports this prediction. It’s always been a cat and mouse game, with unscrupulous players trying whatever methods they can to attract as many eyeballs as possible while gatekeepers like Google sort the wheat from the chaff.

As Tynski explains in a blog post of her own, past examples of this dynamic include the “article spinning” trend, which started 10 to 15 years ago. Article spinners use automated tools to rewrite existing content, finding and replacing words so that the reconstituted material looks original. Google and other search engines responded with new filters and metrics to weed out these mad-lib blogs, but it was hardly an overnight fix.

AI text generation will make the article spinning “look like child’s play,” writes Tynski, allowing for “a massive tsunami of computer-generated content across every niche imaginable.”

Mike Blumenthal, an SEO consultant and expert, says these tools will certainly attract spammers, especially considering their ability to generate text on a massive scale. “The problem that AI-written content presents, at least for web search, is that it can potentially drive the cost of this content production way down,” Blumenthal tells The Verge.

And if the spammers’ aim is simply to generate traffic, then fake news articles could be perfect for this, too. Although we often worry about the political motivations of fake news merchants, most interviews with the people who create and share this content suggest they do it for the ad revenue. That doesn’t stop it being politically damaging.

The key question, then, is: can we reliably detect AI-generated text? Rowan Zellers of the Allen Institute for AI says the answer is a firm “yes,” at least for now. Zellers and his colleagues were responsible for creating Grover, the tool Fractl used for its fake blog posts, and were able to also engineer a system that can spot Grover-generated text with 92 percent accuracy.

“We’re a pretty long way away from AI being able to generate whole news articles that are undetectable,” Zellers tells The Verge. “So right now, in my mind, is the perfect opportunity for researchers to study this problem, because it’s not totally dangerous.”

Spotting fake AI text isn’t too hard, says Zellers, because it has a number of linguistic and grammatical tells. He gives the example of AI’s tendency to re-use certain phrases and nouns. “They repeat things ... because it’s safer to do that rather than inventing a new entity,” says Zellers. It’s like a child learning to speak; trotting out the same words and phrases over and over, without considering the diminishing returns.
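As a toy illustration of that tell (and emphatically not Grover's actual detector), the snippet below counts repeated word pairs in a made-up sentence; heavily repeated phrases would be one weak signal among many.

from collections import Counter

# Toy sketch of one "tell": machine-generated text tends to reuse the same phrases.
# This is a crude bigram counter run on invented text, not the Grover detection model.
def repeated_bigrams(text):
    words = text.lower().split()
    bigrams = [f"{a} {b}" for a, b in zip(words, words[1:])]
    return [(bg, n) for bg, n in Counter(bigrams).most_common() if n > 1]

sample = "the new filter makes your story look great and the new filter makes your picture look great"
print(repeated_bigrams(sample))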

However, as we’ve seen with visual deep fakes, just because we can build technology that spots this content, that doesn’t mean it’s not a danger. Integrating detectors into the infrastructure of the internet is a huge task, and the scale of the online world means that even detectors with high accuracy levels will make a sizable number of mistakes.

Google did not respond to queries on this topic, including the question of whether or not it’s working on systems that can spot AI-generated text. (It’s a good bet that it is, though, considering Google engineers are at the cutting-edge of this field.) Instead, the company sent a boilerplate reply saying that it’s been fighting spam for decades, and always keeps up with the latest tactics.

SEO expert Blumenthal agrees, and says Google has long proved it can react to “a changing technical landscape.” But, he also says a shift in how we find information online might also make AI spam less of a problem.

More and more web searches are made via proxies like Siri and Alexa, says Blumenthal, meaning gatekeepers like Google only have to generate “one (or two or three) great answers” rather than dozens of relevant links. Of course, this emphasis on the “one true answer” has its own problems, but it certainly minimizes the risk from high-volume spam.

The end-game of all this could be even more interesting though. AI-text generation is advancing in quality extremely quickly, and experts in the field think it could lead to some incredible breakthroughs. After all, if we can create a program that can read and generate text with human-level accuracy, it could gorge itself on the internet and become the ultimate AI assistant.

“It may be the case that in the next few years this tech gets so amazingly good, that AI-generated content actually provides near-human or even human-level value,” says Tynski. In which case, she says, referencing an Xkcd comic, it would be “problem solved.” Because if you’ve created an AI that can generate factually-correct text that’s indistinguishable from content written by humans, why bother with the humans at all?

Categorized in Search Engine

[This article is originally published in searchenginejournal.com written by Matt Southern - Uploaded by AIRS Member: Jeremy Frink]

Google published a 30-page white paper with details about how the company fights disinformation in Search, News, and YouTube.

Here is a summary of key takeaways from the white paper.

What is Disinformation?

Everyone has different perspectives on what is considered disinformation, or “fake news.”

Google says it becomes objectively problematic to users when people make deliberate, malicious attempts to deceive others.

“We refer to these deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web as ‘disinformation.’”

So that’s what the white paper is referring to with respect to the term disinformation.

How Does Google Fight Disinformation?

Google admits it’s challenging to fight disinformation because it’s near-impossible to determine the intent behind a piece of content.

The company has designed a framework for tackling this challenge, which consists of the following three strategies.

1. Make content count

Information is organized by ranking algorithms, which are geared toward surfacing useful content and not fostering ideological viewpoints.

2. Counteract malicious actors

Algorithms alone cannot verify the accuracy of a piece of content. So Google has invested in systems that can reduce spammy behaviors at scale. It also relies on human reviews.

3. Give users more context

Google provides more context to users through mechanisms such as:

  • Knowledge panels
  • Fact-check labels
  • “Full Coverage” function in Google News
  • “Breaking News” panels on YouTube
  • “Why this ad” labels on Google Ads
  • Feedback buttons in search, YouTube, and advertising products

Fighting Disinformation in Google Search & Google News

As SEOs, we know Google uses ranking algorithms and human evaluators to organize search results.

Google’s white paper explains this in detail for those who may not be familiar with how search works.

Google notes that Search and News share the same defenses against spam, but they do not employ the same ranking systems and content policies.

For example, Google Search does not remove content except in very limited circumstances, whereas Google News is more restrictive.

Contrary to popular belief, Google says, there is very little personalization in search results based on users’ interests or search history.

Fighting Disinformation in Google Ads

Google looks for and takes action against attempts to circumvent its advertising policies.

Policies to tackle disinformation on Google’s advertising platforms are focused on the following types of behavior:

  • Scraped or unoriginal content: Google does not allow ads for pages with insufficient original content, or pages that offer little to no value.
  • Misrepresentation: Google does not allow ads that intend to deceive users by excluding relevant information or giving misleading information.
  • Inappropriate content: Ads are not allowed for shocking, dangerous, derogatory, or violent content.
  • Certain types of political content: Ads for foreign influence operations are removed and the advertisers’ accounts are terminated.
  • Election integrity: Additional verification is required for anyone who wants to purchase an election ad on Google in the US.

Fighting Disinformation on YouTube

Google’s policy is to keep content on YouTube unless it is in direct violation of its community guidelines.

The company is more selective of content when it comes to YouTube’s recommendation system.

Google aims to recommend quality content on YouTube while less frequently recommending content that may come close to, but not quite, violating the community guidelines.

Content that could misinform users in harmful ways, or low-quality content that may result in a poor experience for users (like clickbait), is also recommended less frequently.

More Information

For more information about how Google fights disinformation across its properties, download the full PDF here.

Categorized in Search Engine

[This article is originally published in thenextweb.com written by IVAN MEHTA - Uploaded by AIRS Member: Dana W. Jimenez] 

Google has launched a dedicated dataset search website to help journalists and researchers unearth publicly available data that can aid in their projects. Traditionally, researchers have relied on sources like the World Bank, NASA, and ProPublica, or dataset platforms like Kaggle. This new tool will make their work much easier.

The website takes Google’s familiar approach and design for search and applies it to datasets published across the web. So if you need to look at historical weather trends, you can use a simple query like “daily weather” to begin your research. Plus, the engine supports shortcuts that work on Google’s regular search tool, like ‘weather site:noaa.gov’ to retrieve results only from the National Oceanic and Atmospheric Administration agency in the US.

The company explained that the new tool scrapes government databases, public sources, digital libraries, and personal websites to track down the datasets you’re looking for. If they’re structured using schema.org’s markup or similar equivalents described by the W3C, Google can find them. It already supports multiple languages and will add support for more of them soon.
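As a sketch of what that markup might look like, here is a hypothetical schema.org Dataset description built as a Python dictionary and serialized to JSON-LD; the dataset name, URLs, and organization are placeholders, not a real listing.

import json

# Hypothetical schema.org Dataset markup for a weather dataset page.
# Names and URLs are placeholders illustrating the structure Dataset Search reads.
dataset = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Daily Weather Observations (Example)",
    "description": "Hypothetical daily temperature and precipitation observations.",
    "url": "https://example.org/datasets/daily-weather",
    "keywords": ["weather", "temperature", "precipitation"],
    "creator": {"@type": "Organization", "name": "Example Climate Office"},
    "distribution": [
        {
            "@type": "DataDownload",
            "encodingFormat": "CSV",
            "contentUrl": "https://example.org/downloads/daily-weather.csv",
        }
    ],
}

# Publishers would embed the serialized JSON-LD in a script tag of type application/ld+json.
print(json.dumps(dataset, indent=2))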

This year, Google has focused on a lot of initiatives directed towards journalists. In July, it had rolled out an improved representation of tabular data in search results. In India, it has launched a program to train journalists to identify misinformation. And at its developer conference earlier this year, it rolled out a revamped Google News with improved personalization and discovery features.

Categorized in Search Engine

[Source: This article was published in searchengineland.com By Barry Schwartz - Contributed by Member: Jennifer Levin]

New markup from Schema.org including HowTo, QAPage, and FAQPage can be used to potentially show your content in Google in a brand new way. Google previewed this in Singapore a couple weeks ago.

Google has confirmed with Search Engine Land that it has been testing for the past several months a new form of search results snippets — the way the search results appear to searchers. These new search snippets take the form of FAQs (frequently asked questions), Q&As (questions and answers), and How-Tos.

Akhil Agarwal notified us about this feature on Twitter, and Google has just sent us a statement explaining the test. Here is the screenshot presented at a recent Google event in Singapore:

A Google Spokesperson told us:

We’re always looking for new ways to provide the most relevant, useful results for our users. We’ve recently introduced new ways to help users understand whether responses on a given Q&A or forum site could have the best answer for their question. By bringing a preview of these answers onto Search, we’re helping our users more quickly identify which source is most likely to have the information they’re looking for. We’re currently working with partners to experiment with ways to surface similar previews for FAQ and How-to content.

These new snippet features give more insight into what the searcher can expect from a web page before deciding to click on the search result. Webmasters should be able to mark up their content with structured data to have their search results be eligible for the question and answer previews, similar to how supporting metadata around the number of upvotes and the Top Answer feature works.
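As a sketch of what such markup could look like for the FAQ case, here is a hypothetical FAQPage example built as a Python dictionary and serialized to JSON-LD; the question and answer text are placeholders, and the exact eligibility requirements are whatever Google publishes for the feature.

import json

# Hypothetical FAQPage structured data showing the mainEntity/Question/acceptedAnswer shape.
# The question and answer text are placeholders for a publisher's real FAQ content.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How do I mark up FAQ content?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Add FAQPage structured data to the page that lists your questions and answers.",
            },
        }
    ],
}

# The serialized JSON-LD would sit on the FAQ page so search engines can read each Q&A pair.
print(json.dumps(faq_page, indent=2))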

Google will soon open up an interest form to allow publishers and webmasters to participate in the FAQ and How-to formats shown in the screenshot above.

Categorized in Search Engine