
Google’s new BERT algorithm means search engine marketers will need to change their approach to keyword research in 2020 and focus more on intent research. Adam Bunn, the Director of SEO, Content & Digital PR at Greenlight Digital, looks at what companies should expect from keyword searches in 2020.

The way people search online is changing. The introduction of Google’s ‘BERT’ algorithm (Bidirectional Encoder Representations from Transformers) is evidence of this and highlights the complexity with which people have begun to utilise search engines. BERT utilises Natural Language Processing (NLP), which helps analyse natural human language beyond just basic keywords and gather more information about how all the words in a query relate to each other.

In this way, BERT can look at the search query as a whole, rather than focusing on independent keywords in an attempt to reveal, and then prioritise, the intent of the search. This ensures the search results are the most relevant not only when it comes to the specific topic the user is researching, but also to the user’s intention behind the search.

As a result of this change in the way search engine queries are being performed, marketers must adapt to the way they tackle Search Engine Optimisation (SEO). Fréderic Dubut, Senior Programme Manager Lead at Bing, recently said that search engine marketers must change their approach to keyword research in the following year and focus more on intent research. But does this mean keywords are going to become redundant?

Voice search changing SEO terms

BERT is one of Google’s biggest updates to web search in recent years. It uses NLP to better understand the context of search queries by taking into account things such as linking words (and, the, from) or interrogative markers (how, what, which). While some users have learned to query Google using unconnected and ungrammatical keywords such as ‘best digital strategy SEO’, the popularisation of voice search demands that search engines understand the way people naturally speak and look beyond just keywords.

Voice search produces queries that use conversational language, such as “what is the best digital strategy for SEOs”, which means they require NLP in order to render the best results. BERT can also take into account previous recent searches to some extent, in a similar way to how a regular conversation works. Asking “How long does driving there take?” after inquiring about the location of the nearest supermarket will provide relevant results without the user having to specify the supermarket again.

In today’s fast-paced information age, users are no longer willing to spend time going through countless search results to find the page that delivers the information they are looking for. Many people don’t even go beyond the first page of Google’s search results nowadays. As such, search engines are looking to provide results which are relevant not only to the keywords a user puts into the search engine, but also to the ‘why’ behind a search query: the search intent. In other words, search engine results pages (SERPs) are optimised to understand what direct action a user wants to undertake through their search (learn, purchase, find a specific website etc.) and prioritise the specific websites that match that intent.

Shifting from keywords to intent

As search engines become more advanced, incorporating more intent-based models and practices into research should be a key focus for digital marketers in 2020. However, intent research models can be quite subjective from a classification standpoint, as they rely on one person’s judgment to decide the user intentions behind a list of keywords. Moreover, the main types of search intent – informational, navigational, transactional and commercial – are very broad and, realistically, not very actionable.

For intent research to be most effective, marketers need to have reliable data metrics, such as click-through rates and conversion rates, to support the agreed intention behind a keyword. This allows them to create relevant lists of purchase and research intention keywords whilst ensuring the keywords they use for a specific search intent are the most relevant to their users. Checking SERP statistics for keyword reliability over time also provides insight on which keywords are best to target for the specific type of intent.

There is still room for keywords

Understanding search intent, and taking it into account when delivering the most relevant answer, is ultimately Google’s priority. While keywords are still a big part of search queries, digital marketers must understand that relying solely on keywords is not enough for SEO anymore. Backlinks and other traditional Google ranking factors are still important, but if the page doesn’t meet the user’s search intent, it’s not going to rank highly on SERPs.

However, that doesn’t mean that keywords are going to become obsolete. John Mueller, Senior Webmaster Trends Analyst at Google, agreed that keywords are always going to be helpful, even if they are not the main focus. Showing specific words to users makes it easier for them to understand what the page is about which, in turn, provides a better user experience.

Ultimately, optimising for user experience should be key in 2020 and shifting an SEO strategy to prioritise search intent is part of that. Focusing more on intent research and enforcing more intent-based practices off the back of keyword research is definitely something we’ll see more of in 2020.

By Adam Bunn
Director of SEO, Content & Digital PR
Greenlight Digital

[Source: This article was published in netimperative.com By Robin - Uploaded by the Association Member: Clara Johnson]

Categorized in Search Engine

In previous articles, we learned how to perform advanced, BERT-powered, automated intent classification in Python.

We also learned how to automatically populate Google Sheets in Python.

Wouldn’t it be cool if we could perform our intent classification directly in Google Sheets?

That is exactly what we will do here!


Introducing Google Apps Script

One limitation of the built-in functions in Google Sheets is that they limit you to predefined behavior.

The good news is that you can define custom functions with new behavior if you can code them yourself in Google Apps Script.

Google Apps Script is based on JavaScript and provides additional functionality that helps you interact with Sheets, Docs and other Google apps.

We are going to define a new custom function named fetchPrediction that will take keywords in Google Sheets cells and run them through a BERT-powered predictive model to get the intention of search users.

Here is our plan of action:

  • Learn to review and update values in Google Sheets from Apps Script.
  • Practice fetching results from an API and populating a sheet with the retrieved values.
  • Train our BERT-powered predictive model using Uber’s Ludwig.
  • Use Ludwig to power an API we can call from Apps Script.
  • Learn some new tools and concepts that help us connect both services together.

Let’s get started!

Retrieving Keyword Data From Google Sheets

This is a Google sheet with some barcode-related keywords we pulled from SEMrush.

In our first exercise, we will read and print the first 10 keywords from column A.


Go to Tools > Script Editor to get started.

This is a built-in IDE (Integrated Development Environment) for Google Sheets.

We are going to write a simple JavaScript function called logKeywords that will read all the keywords in our sheet and log them to the console.

Please refer to the official documentation here.

function logKeywords() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var data = sheet.getDataRange().getValues();
  
  for (var i = 0; i < data.length; i++) {
      console.log('Keyword: ' + data[i][0]);
  }
}

Let’s walk through the function, step by step.

We first get a reference to the active sheet, in this case, it is Sheet1.

If you compare this code to the one we wrote in Python, you will see some advantages.

  • We didn’t need to authenticate.
  • We didn’t need to open the spreadsheet.

Go to View > Stackdriver logging. There you will get a link to the Apps Script Dashboard. Click on that to see the console logs.

It is a good idea to keep this page open in another tab, as you will refer to it often as you code and want to see if your changes worked.

You will see the latest log entry at the top of the list. Click on it and you will see something like the screenshot above.

Now, we printed more than 100 rows, which took a bit of time. When you are writing and testing your code, it is better to work with smaller lists.

We can make a simple change in the loop to fix that.

function logKeywords() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var data = sheet.getDataRange().getValues();
  
  //for (var i = 0; i < data.length; i++) {
  for (var i = 0; i < 10; i++) {
      
    console.log('Keyword: ' + data[i][0]);
  }
}

Note that I hardcoded the value 10 as the limit and left a comment with the correct code.

I prefer to comment out code changes instead of deleting them, as it is easier to revert when I’m ready to release for production use.

When you run this, not only does it finish faster, but checking the log is also a lot faster.

Add a Column With Keyword IDs

Next, let’s learn to add data to the sheet.

We are going to write a new function named addIDtoKeywords. It creates a column with one numeric ID per keyword.

There isn’t a lot of value in doing this, but it should help you test the technique with something super simple.

Here is the code to do that.

function addIDtoKeywords() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var data = sheet.getRange("B1");

  //build value list
  
  var values = []; 
  
  //number of keywords
  var length = 100;
  
  // generate 101 IDs (1 through 101) to fill the 101 cells in B2:B102
  for (var i = 1; i <= length + 1; i++){
    
    values.push([i]);
  }
  
  console.log(values.length);

  //Update sheet column with calculated values
  var column = sheet.getRange("B2:B102");
  

  column.setValues(values);
  
}

Select this function in the pull-down and click on the play button to run.


You should get a new column in the sheet with numbers in increasing order.

We can also add a column header in bold named Keyword ID using the following code.

 data.setValue("Keyword ID");
 data.setFontWeight("bold");

This is what the updated output looks like.


The code is very similar. Let’s review the changes.

I added a JavaScript array named values to hold the keyword IDs.

During the loop, I added a line to add each ID generated within the loop to the array.

values.push([i]);

I printed the length of the values array at the end of the loop to make sure the correct number of IDs was generated. Note that each ID is pushed as a single-element array because, as we will see next, setValues expects one array per row.

Finally, I need to get the values to the sheet.

 var column = sheet.getRange("B2:B102");

This code selects the correct cells to populate and then I can simply set their value using the list I generated.

 column.setValues(values);

It can’t get simpler than this!

Fetching API Results From Apps Script

In the next exercise, we will learn to perform API requests from Apps Script.

I recommend you follow this codelab from Google to get familiar with some of the more advanced concepts.

We are going to adapt code from step 11 which pulls data from a Books API.

Instead of fetching books, we will translate keywords using the Google Translate API.

Now, we are starting to write more useful code!

Here is a new function named fetchTranslation based on code adapted from step 11.

function fetchTranslation(TEXT){
  API_KEY="INPUT YOUR API KEY";

  TEXT = encodeURI(TEXT); //"My name is Steve" -> "My%20name%20is%20Steve";
  
  var url = `https://translation.googleapis.com/language/translate/v2?target=es&key=${API_KEY}&q=${TEXT}`;
      
  //console.log(url);
  
  var response = UrlFetchApp.fetch(url, {'muteHttpExceptions': true});
  
  var json = response.getContentText();
  
  //console.log(json);
  
  translation = JSON.parse(json);
  
  return translation["data"]["translations"][0]["translatedText"];
  
}

This function takes an input text, encodes it and inserts it into an API URL to call the Google Translate service.

There is an API key we need to get, and we also need to enable the Translate service. I also recommend restricting the API key to the IP address you are using during development and testing.


Once we have the API URL to call, it is as simple as calling this code.

  var response = UrlFetchApp.fetch(url, {'muteHttpExceptions': true});

The next lines get us the response in JSON format and after a bit of navigation down the JSON tree, we get the translated text.

As you can see in my code, I like to log almost every step in the code to the console to confirm it is doing what I expect.

Here is one example of how I figured out the correct JSON path sequence.

//console.log(translation["data"]);

//console.log(translation["data"]["translations"]);

//console.log(translation["data"]["translations"][0]);

//console.log(translation["data"]["translations"][0]["translatedText"]);

You can see the progression in the logs here, including the final output.

Translating Keywords

Now that we have tested the function and it works, we can proceed to create another function to fetch and translate the keywords from the sheet.

We will build up from what we’ve learned so far.

We will give this function the super original name TranslateKeywords!

function TranslateKeywords() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var header = sheet.getRange("B1");

  // Add a new header column named Translation
  header.setValue("Translation");
  header.setFontWeight("bold");
  
  //var keyword = "barcode generator"; 
  var keyword = sheet.getRange("A2").getValue();
  
  console.log(keyword);
  
  translated_keyword = fetchTranslation(keyword);
  
  console.log(translated_keyword);
  
  var data = sheet.getRange("B2");
  
  data.setValue(translated_keyword);

  
}

The code in this function is very similar to the one we used to set Keyword IDs.

The main difference is that we pass the keyword to our new fetchTranslation function and update a single cell with the result.

Here is what it looks like for our example keyword.


As you can probably see, there is no for loop, so this will only update a single row/keyword: the first one.

Please complete the for loop to get the translation for all keywords as a homework exercise.
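If you get stuck, here is one possible completion. This is a minimal sketch rather than the article’s own solution: it assumes the keywords sit in column A starting at row 2, and it calls fetchTranslation once per keyword, so it will be slow on long keyword lists.

function TranslateAllKeywords() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var data = sheet.getDataRange().getValues();

  // add a new header column named Translation
  var header = sheet.getRange("B1");
  header.setValue("Translation");
  header.setFontWeight("bold");

  var values = [];

  // start at index 1 to skip the header row in A1
  for (var i = 1; i < data.length; i++) {
    var keyword = data[i][0];

    // one API call per keyword; wrap each result in an array
    // because setValues expects one array per row
    values.push([fetchTranslation(keyword)]);
  }

  // write all translations to column B in a single call
  sheet.getRange(2, 2, values.length, 1).setValues(values);
}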

Building an Intent Classification Model

Let’s move on to building the intent classification service that we will call to populate keyword intents.

In my previous deep learning articles, I’ve covered Ludwig, Uber’s AI toolbox.

I like it a lot because it allows you to build state-of-the-art deep learning models without writing a single line of code.

It is also very convenient to run in Google Colab.

We are going to follow the same steps I described in this article, which will give us a powerful intent prediction model powered by BERT.

Here is a quick summary of the steps you need to paste into Google Colab (make sure to select the GPU runtime!).

Please refer to my article for the context:

%tensorflow_version 1.x 

import tensorflow as tf; print(tf.__version__)

!pip install ludwig

#upload Question_Classification_Dataset.csv and 'Question Report_Page 1_Table.csv'
from google.colab import files

files.upload()

import pandas as pd
df = pd.read_csv("Question_Classification_Dataset.csv", index_col=0)

!wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
!unzip uncased_L-12_H-768_A-12.zip

# create the ludwig configuration file for BERT-powered classification
template="""
input_features:
-
name: Questions
type: text
encoder: bert
config_path: uncased_L-12_H-768_A-12/bert_config.json
checkpoint_path: uncased_L-12_H-768_A-12/bert_model.ckpt
preprocessing:
word_tokenizer: bert
word_vocab_file: uncased_L-12_H-768_A-12/vocab.txt
padding_symbol: '[PAD]'
unknown_symbol: '[UNK]'
output_features:
-
name: Category0
type: category
-
name: Category2
type: category
text:
word_sequence_length_limit: 128
training:
batch_size: 32
learning_rate: 0.00002
"""
with open("model_definition.yaml", "w") as f:
    f.write(template)
!pip install bert-tensorflow

!ludwig experiment \
  --data_csv Question_Classification_Dataset.csv\
  --model_definition_file model_definition.yaml

After completing these steps in Google Colab, we should get a high accuracy predictive model for search intent.

We can verify the predictions with this code.

# load the trained model (assuming the default results directory created by the experiment above)
from ludwig.api import LudwigModel
model = LudwigModel.load("results/experiment_run/model")

test_df = pd.read_csv("Question Report_Page 1_Table.csv")

# we rename Query to Questions to match what the model expects
predictions = model.predict(test_df.rename(columns={'Query': 'Questions'}))

test_df.join(predictions)[["Query", "Category2_predictions"]]

We get a data frame like this one.


The intents predicted are not the ones you would typically expect (navigational, transactional, informational), but they are good enough to illustrate the concept.

Please check an awesome article by Kristin Tynski that explains how to expand this concept to get true search intents.

Turning Our Model Into an API Service

Ludwig has one super cool feature that allows you to serve models directly as an API service.

The command for this is Ludwig serve.

I was trying to accomplish the same thing following a super complicated path because I didn’t check that something like this already existed. 

It is not installed by default, so we need to install it with this command.

!pip install ludwig[serve]

We can check the command-line options with:

!ludwig serve --help

Creating an API from our model is as simple as running this command.

!ludwig serve -m results/experiment_run/model

INFO:     Started server process [5604]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:     Shutting down
INFO:     Finished server process [5604]

As we are running this code in the notebook, we need to use a little trick to push this process to the background (a separate thread).

%%bash --bg

nohup ludwig serve -m results/experiment_run/model > debug.log 2>&1

The magic command %%bash --bg runs the shell code in a separate thread, returning control to the notebook so we can run code that interacts with the service.

I found this to be a super cool and valuable trick. I’m also introducing more shell tricks that I learned many years ago.

The nohup command prevents the process from getting killed when the parent dies. It is optional here.

The code 2>&1 redirects the standard error to the standard output, and both are then sent to the file debug.log. You can learn more about this technique here.

We can track the progress of the background process using this command.

!tail debug.log

After you see this message, you can proceed to the next step.

INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

Let’s send a test API request using curl to see if the service works.

!curl http://0.0.0.0:8000/predict -X POST -F 'Questions=who is the boss?'

You should get this response back.

{"Category0_predictions":"HUMAN","Category0_probabilities_":0.00021219381596893072,"Category0_probabilities_ENTITY":7.17515722499229e-05,"Category0_probabilities_HUMAN":0.9988889098167419,"Category0_probabilities_DESCRIPTION":0.000423480843892321,"Category0_probabilities_NUMERIC":2.7793401386588812e-05,"Category0_probabilities_LOCATION":0.0003020864969585091,"Category0_probabilities_ABBREVIATION":7.374086999334395e-05,"Category0_probability":0.9988889098167419,"Category2_predictions":"ind","Category2_probabilities_":8.839580550557002e-05,"Category2_probabilities_ind":0.9759176969528198,"Category2_probabilities_other":0.0013697665417566895,"Category2_probabilities_def":3.929347076336853e-05,"Category2_probabilities_count":4.732362140202895e-05,"Category2_probabilities_desc":0.014149238355457783,"Category2_probabilities_manner":7.225596345961094e-05,"Category2_probabilities_date":7.537546480307356e-05,"Category2_probabilities_cremat":0.00012272763706278056,"Category2_probabilities_reason":0.00042629052768461406,"Category2_probabilities_gr":0.0025540771894156933,"Category2_probabilities_country":0.0002626778441481292,"Category2_probabilities_city":0.0004305317997932434,"Category2_probabilities_animal":0.00024954770924523473,"Category2_probabilities_food":8.139225974446163e-05,"Category2_probabilities_dismed":7.852958515286446e-05,"Category2_probabilities_termeq":0.00023714809503871948,"Category2_probabilities_period":4.197505040792748e-05,"Category2_probabilities_money":3.626687248470262e-05,"Category2_probabilities_exp":5.991378566250205e-05,"Category2_probabilities_state":0.00010361814202042297,"Category2_probabilities_sport":8.741072088014334e-05,"Category2_probabilities_event":0.00013374585250858217,"Category2_probabilities_product":5.6306344049517065e-05,"Category2_probabilities_substance":0.00016623239207547158,"Category2_probabilities_color":1.9601659005274996e-05,"Category2_probabilities_techmeth":4.74867774755694e-05,"Category2_probabilities_dist":9.92789282463491e-05,"Category2_probabilities_perc":3.87108520953916e-05,"Category2_probabilities_veh":0.00011915313370991498,"Category2_probabilities_word":0.00016430433606728911,"Category2_probabilities_title":0.0010781479068100452,"Category2_probabilities_mount":0.00024070330255199224,"Category2_probabilities_body":0.0001515906333224848,"Category2_probabilities_abb":8.521509153069928e-05,"Category2_probabilities_lang":0.00022924368386156857,"Category2_probabilities_plant":4.893113509751856e-05,"Category2_probabilities_volsize":0.0001462997024646029,"Category2_probabilities_symbol":9.98345494735986e-05,"Category2_probabilities_weight":8.899033855414018e-05,"Category2_probabilities_instru":2.636547105794307e-05,"Category2_probabilities_letter":3.7610192521242425e-05,"Category2_probabilities_speed":4.142118996242061e-05,"Category2_probabilities_code":5.926147059653886e-05,"Category2_probabilities_temp":3.687662319862284e-05,"Category2_probabilities_ord":6.72415699227713e-05,"Category2_probabilities_religion":0.00012743560364469886,"Category2_probabilities_currency":5.8569487009663135e-05,"Category2_probability":0.9759176969528198}

Exposing Our Service Using Ngrok

So, we have a new API that can make intent predictions, but one big problem is that it is only accessible from within our Colab notebook.

Let me introduce another cool service that I use often, Ngrok.

Ngrok helps you create publicly accessible URLs that connect to a local service like the one we just created.

I do not recommend doing this for production use, but it is very handy during development and testing.

You don’t need to create an account, but I personally do it because I get to set up a custom subdomain that I use very frequently.

Here are the steps to give our API a public URL to call from Apps Script.

!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip && unzip ngrok-stable-linux-amd64.zip

We first download and uncompress ngrok.

%%bash --bg

./ngrok http -hostname=api.yourdomain.com 8000 2> ngrok.log

The code above tells ngrok to connect to the local service in port 8000. That is all we need to do.

!curl http://api.yourdomain.com/predict -X POST -F 'Questions=who is the boss?'

You can confirm it works by repeating the curl call, but calling the public URL. You should get the same result.

If you don’t want to set up a custom domain, you can use this code instead.

%%bash --bg

./ngrok http 8000 2> ngrok.log

This will generate a random public URL, which you can retrieve with this code.

!curl -s http://localhost:4040/api/tunnels | python3 -c \

    "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"

Now, we get back to our final steps.

Fetching Intent Predictions

We are going to adapt the code we used to make Google Translate API requests so we can make intent prediction requests.

One big difference between the two API services is that we need to make HTTP POST requests instead of simpler HTTP GET requests.

Let’s see how that changes our code and learn a bit more about HTTP in the process.

function fetchPrediction(question = "who is the boss?"){
  
  // build the form-encoded payload the API expects, e.g. "Questions=who%20is%20the%20boss?"
  var TEXT = encodeURI("Questions=" + question);
  
  console.log(TEXT);
  
  var url = "http://api.yourdomain.com/predict";
  
   var options = {
    "method" : "POST",
    "contentType" : "application/x-www-form-urlencoded",
    "payload" : TEXT,
    'muteHttpExceptions': true 
  };
        
  var response = UrlFetchApp.fetch(url, options);
  
  var json = response.getContentText();
  
  //console.log(json);
  
  var prediction = JSON.parse(json);
  
  //console.log(prediction);

  console.log(prediction["Category0_predictions"]);
  
  return prediction["Category0_predictions"];
  
}

The function fetchPrediction calls the API service we created and returns the predicted intent. It basically reproduces the equivalent of the curl call we made in Colab, but in Apps Script.

I highlighted some key changes in the code. Let’s review them.

One key difference between GET and POST requests is that in GET requests the data is passed in the URL as parameters.

In POST requests, the data is passed inside the body of the request.

We need to format the data before we pass it in the body and we need to set the correct content type so the server knows how to decode it.

This line encodes the question we are passing.

 var TEXT = encodeURI("Questions=" + question);

This is an example of what the encoded TEXT looks like.

Questions=label%20generator

The correct content type for this encoding is application/x-www-form-urlencoded. This is the recommended encoding for HTML form data.

We create an options data structure where we specify these settings and the correct request type and we are set to go.

Select the function fetchPrediction from the pull-down and click on the run button.


You should see the encoded input and predicted intent in the logs.

How do we get the intentions for all the keywords in the sheet?

You might be thinking we will create another function that will read the keywords in a loop and populate the intentions. Not at all!

We can simply call this function by name directly from the sheet! How cool is that?

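For example (assuming your keywords are in column A with a header in row 1), you can type this formula into cell C2 and drag it down the column to classify every keyword in the sheet:

=fetchPrediction(A2)

Google Sheets runs the custom function once per row and writes the predicted intent into each cell. Keep in mind that every cell triggers a separate request to our API, so a long keyword list will take a while to fill in.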

Resources to Learn More

Combining simple Apps Script functions with powerful API backends that you can code in any language opens the doors to infinite productivity hacks.

Here are some of the resources I read while putting this together.

Finally, let me highlight a very important and valuable project that JR Oakes started.

icodeseo.png

It is an awesome repository for Python and JavaScript projects from the coders in the SEO community. I plan to find time to upload my code snippets; please make sure to contribute yours.

For some reason, this non-issue keeps popping up in my Twitter feed. I will leave this tweet here as a friendly reminder. 

 [Source: This article was published in searchenginejournal.com By Hamlet Batista - Uploaded by the Association Member: Alex Gray]

Categorized in Search Engine

Google has started testing a feature that will display the search query in the Chrome address bar rather than the actual page's URL when performing searches on Google.

This experimental feature is called "Query in Omnibox" and has been available as a flag in Google Chrome since Chrome 71, but is disabled by default.

In a test being conducted by Google, this feature is being enabled for some users and will cause the search keyword to be displayed in the browser's address bar, or Omnibox, instead of the URL that you normally see. 


Query in Omnibox enabled

In BleepingComputer's tests, this feature only affects searches on Google and does not affect any other search engine.

When this feature is not enabled, Google will display the URL of the search in the Omnibox as you would expect. This allows you to not only properly identify the site you are on, but also to easily share the search with another user.


Query in Omnibox Disabled

For example, to see the above search, you can just copy the https://www.google.com/search?q=test link from the address bar and share it with someone else.

With the Query in Omnibox feature enabled, though, if you copy the search keyword it will just copy that keyword into the clipboard rather than the site's URL. If you want to access the URL, you need to right-click on the keyword and select 'Show URL'.


Google is eroding the URL

Google has made it clear that they do not think that the URL is very useful to users.

In a Wired interview, Adrienne Porter Felt, Chrome's engineering manager, explained that Google wants to change how URLs are displayed in Chrome, as people have a hard time understanding them.

"People have a really hard time understanding URLs. They’re hard to read, it’s hard to know which part of them is supposed to be trusted, and in general I don’t think URLs are working as a good way to convey site identity. So we want to move toward a place where web identity is understandable by everyone—they know who they’re talking to when they’re using a website and they can reason about whether they can trust them. But this will mean big changes in how and when Chrome displays URLs. We want to challenge how URLs should be displayed and question it as we’re figuring out the right way to convey identity."

Instead of removing them in one fell swoop, Google is gradually eroding the various elements of a URL until there is nothing left.

We saw the beginning of this transition when Google Chrome 79 was released and it stopped displaying the www subdomain in URLs.


WWW subdomain removed from URL

In this next phase, they are testing the removal of URLs altogether from Google searches, which, as everyone knows, is by far the most used web search engine.

What is next? The removal of URLs on other search engines or only showing a page title when browsing a web site?

All these questions remain to be answered, but could it be that Google is not wrong about URLs?

I was opposed to the removal of the trivial WWW subdomain from URLs for a variety of reasons, and now I don't even realize it's missing.

BleepingComputer has reached out to Google with questions about this test, but had not heard back as of yet.

 [This article is originally published in bleepingcomputer.com By Lawrence Abrams - Uploaded by AIRS Member: Dana W. Jimenez]

Categorized in Search Engine

Ever had to search for something on Google, but you’re not exactly sure what it is, so you just use some language that vaguely implies it? Google’s about to make that a whole lot easier.

Google announced today it’s rolling out a new machine learning-based language understanding technique called Bidirectional Encoder Representations from Transformers, or BERT. BERT helps decipher your search queries based on the context of the language used, rather than individual words. According to Google, “when it comes to ranking results, BERT will help Search better understand one in 10 searches in the U.S. in English.”

Most of us know that Google usually responds to words, rather than to phrases — and Google’s aware of it, too. In the announcement, Pandu Nayak, Google’s VP of search, called this kind of searching “keyword-ese,” or “typing strings of words that they think we’ll understand, but aren’t actually how they’d naturally ask a question.” It’s amusing to see these kinds of searches — heck, Wired has made a whole cottage industry out of celebrities reacting to these keyword-ese queries in their “Autocomplete” video series — but Nayak’s correct that this is not how most of us would naturally ask a question.

As you might expect, this subtle change might make some pretty big waves for potential searchers. Nayak said this “[represents] the biggest leap forward in the past five years, and one of the biggest leaps forward in the history of Search.” Google offered several examples of this in action, such as “Do estheticians stand a lot at work,” which apparently returned far more accurate search results.

I’m not sure if this is something most of us will notice — heck, I probably wouldn’t have noticed if I hadn’t read Google’s announcement, but it’ll sure make our lives a bit easier. The only reason I can see it not having a huge impact at first is that we’re now so used to keyword-ese, which is in some cases more economical to type. For example, I can search “What movie did William Powell and Jean Harlow star in together?” and get the correct result (Libeled Lady; not sure if that’s BERT’s doing or not), but I can also search “William Powell Jean Harlow movie” and get the exact same result.

BERT will only be applied to English-based searches in the US, but Google is apparently hoping to roll this out to more countries soon.

[Source: This article was published in thenextweb.com By RACHEL KASER - Uploaded by the Association Member: Dorothy Allen]

Categorized in Search Engine

[Source: This article was published in seroundtable.com By Barry Schwartz - Uploaded by the Association Member: Bridget Miller]

Google's John Mueller said it again: do not worry about words or keywords in URLs. John responded to a recent question on Twitter saying "I wouldn't worry about keywords or words in a URL. In many cases, URLs aren't seen by users anyway."


It references the video from Matt Cutts back in 2009 where he says keywords play a small role in rankings, but a really small one.

In 2017, John Mueller said keywords in URLs are overrated, and back in 2016 he said they are a small ranking factor.

Forum discussion at Twitter.

Categorized in Search Engine

Source: This article was Published searchengineland.com By Ginny Marvin - Contributed by Member: Jeremy Frink

Here's what some marketers are saying about the move to include same meaning queries in exact match close variants.

Marketer reactions to the news that Google is yet again degrading the original intent (much less meaning) of exact match to include “same meaning” close variants are ranging from pessimism to ho-hum to optimism.

Expected impact on performance

“The impact of this will probably be most felt by accounts where exact match has historically been successful and where an exact match of a query made a difference in conversions — hence the reason you’d use exact in the first place,” said digital consultant and president of Neptune Moon, Julie Friedman Bacchini.

Friedman Bacchini said the loss of control with exact match defeats the match type’s purpose. Many marketers use exact match to be explicit — exacting — in their targeting and expect a match type called “exact” to be just that.

Brad Geddes, the co-founder of ad testing platform AdAlysis and head of consultancy Certified Knowledge, said one problem with expanding the queries that can trigger an exact match keyword is that past changes have shown it can affect the overall performance of exact match. “The last change meant that our ‘variation matches’ had worse conversion rates than our exact match and that we lowered bids on most exact match terms. This change might just drive us from using it completely, or really hitting the negative keywords.”

Like Geddes, Andy Taylor, associate director of research at performance agency Merkle, also said they saw an increase in traffic assigned as exact match close variants with the last change, “and those close variants generally convert at a lower rate than true exact matches.”

Yet, others who participated in the test see the loosening of the reins as a positive action.

One of the beta testers for this change was ExtraSpace Storage, a self-storage company in the U.S. with locations in more than 40 states. The company says it saw positive results from the test.

“The search queries were relevant to our industry and almost all of our primary KPIs saw an overall improvement,” said Steph Christensen, senior analyst for paid search at ExtraSpace.

Christensen said that during the test they did not do any keyword management, letting it run in a “normal environment to give it the best chance to provide the truest results.” She says they will continue to watch performance and make adjustments as needed after it’s fully rolled out by the end of October.

Advertisers as machine learning beneficiaries or guinea pigs

A big driver of these changes, of course, is machine learning. The machine learning/artificial intelligence race is on among Google and the other big tech companies.

Google says its machine learning is now good enough to determine when a query has the same intent as a keyword with a high enough rate of success that advertisers will see an overall performance lift.

Another way to look at the move, though, is that by opening up exact match to include same meaning queries, Google gets the benefit of having marketers train its algorithms by taking action on query reports.

Or as Geddes put it: “Advertisers are basically paying the fee for Google to try and learn intent.”

Geddes’ point is that this change will help Google’s machine learning algorithms improve understanding of intent across millions of queries through advertiser actions and budgets.

“The fact that Google doesn’t understand user intent coupled with how poor their machine learning has been at times, means we might just move completely away from exact match,” says Geddes.

Of the example Google highlighted in its announcement, Geddes says, “If I search for Yosemite camping; I might want a blog article, stories, social media, or a campground. If I search for a campground — I want a campground.” (As an aside, from what I’ve found it appears Google doesn’t even monetize “Yosemite camping” or “Yosemite campground” results pages that it used as examples.)

Expected workflow changes

One big thing Google has emphasized is that these close variants changes allow advertisers to focus on things other than building out giant keyword lists to get their ads to show for relevant queries. Rather than doing a lot of upfront keyword research before launching, the idea is that the management will happen after the campaign runs and accumulates data. Marketers will add negatives and new keywords as appropriate. But this reframing of the management process and what amounts to a new definition of exact match has marketers thinking anew about all match types.

“The further un-exacting of exact match has me looking at phrase match again,” says Friedman Bacchini. “I definitely see it impacting the use of negatives and the time involved to review SQRs and apply negatives properly and exhaustively.”

Taylor agrees. “This change places more importance on regularly checking for negatives, but that has already been engrained in our management processes for years and won’t be anything new.”

Geddes said that advertisers might come up against negative keyword limits, which he has seen happen on occasion. Rather than relying heavily on adding negatives, he says they may consider only using phrase match going forward.

In addition to having ads trigger for queries that aren’t relevant or don’t convert well, there’s the matter of having the right ad trigger for a query when you have close variants in an account already.

Matt van Wagner, president and founder of search marketing firm Find Me Faster, says the agency will be monitoring the impact before assessing workflow adjustments, but is not anticipating performance lifts.

“We’ll watch search queries and how, or if, traffic shifts from other ad groups as well as CPC levels. We expect this to have neutral impact at best,” says van Wagner, “since we believe we have our keywords set to trigger on searches with other match types.”

Along those lines, Geddes says it will be critical to watch for duplicate queries triggering keywords across an account to make sure the right ad displays. It puts new focus on negative keyword strategies, says Geddes:

Google will show the most specific matching keyword within a campaign; but won’t do it across the account. So if I have both terms in my account as exact match (“Yosemite camping” and “Yosemite campground”), with one a much higher bid than the other, my higher bid keyword will usually show over my actual exact match word in a different campaign. That means that I now need to also copy my exact match keywords from one campaign and make them exact match negatives in another campaign that is already using exact match just to control ad serving and bidding. I should never have to do that.

Measuring impact can be challenging

The effects of the change will take some time to unfold. Taylor says it took several months to see the impact of the last change to exact match close variants.

It’s difficult to calculate the incremental effect of these changes to close variants, in part, says Taylor, because some close variant traffic comes from keywords (close variants or other match types) that are already elsewhere in the account.

“Google gives a nod to this in its recent announcement, saying that ‘Early tests show that advertisers using mostly exact match keywords see 3 percent more exact match clicks and conversions on average, with most coming from queries they aren’t reaching today,’” Taylor highlights with bolding added.

Another complicating factor, particularly for agencies, is that the effects of these changes don’t play out uniformly across accounts. Taylor shares an example:

An advertiser saw traffic on one of its key brand keywords shift to a different brand keyword several months after the close variants change last year.

“The normal reaction might be to use negatives to get that traffic back over to the correct keyword, but we were getting a better CPC and still getting the same traffic volume with the new variation.

It didn’t make much sense, especially given Google’s continued assertion even in the current announcement that ‘Google Ads will still prefer to use keywords identical to the search query,’ but if the clicks are cheaper, the clicks are cheaper. This also speaks to how there’s not really a universal response to deploy for changes in close variants, aside from being mindful of what queries are coming in and how they’re performing.”

Looking ahead

Performance advertisers go where they get the best results.

“At the end of the day, the question is if poorer converting close variant queries might pull keyword performance down enough to force advertisers to pull back on bids and reduce overall investment,” said Taylor. “Generally speaking, giving sophisticated advertisers greater control to set the appropriate bids for each query (or any other segment) allows for more efficient allocation of spend, which should maximize overall investment in paid search.”

Geddes says their “priority is to make sure our Bing Ads budgets are maxed and that we’re not leaving anything on the table there. If our [Google] results get worse, we’ll also move some budgets to other places. But this might be one where we really have to do another account organization just to get around Google’s decisions.”

After the change has fully rolled out and they have enough data to act on, ExtraSpace’s Christensen said they will evaluate again. “Since we have such a large [account] build, when we do decide to make any changes we will have to show how we can do this at scale and maintain performance.”

Bacchini calls attention to the current misnomer of exact match and says Google should get rid of exact match altogether if it is going to take away the original control of exact match. “It is particularly sneaky when you think of this move in terms of less sophisticated advertisers,” said Bacchini. “If they did not click on the ‘Learn More’ link below the formatting for entering in match types for keywords, how exactly would they know that Google Ads does not really mean exact?”

Categorized in Search Engine

Source: This article was Published irishtechnews.ie By Sujain Thomas - Contributed by Member: Carol R. Venuti

Well, Google does not know you personally, so there is no reason for it to hate you. If you are writing and still not getting that first-page ranking on the search engine, it means something is not right on your side. First of all, let’s get some ideas straight. How do you think search engines rank a web page? A few lines of code will not by themselves determine whether a page deserves to be placed on the first page of the search results. Search engines are always on the lookout for signals to rank any page. So, it is easy for you to tweak an article to give those signals to search engines and enjoy a huge round of traffic.

Starting with the primary point:

To get that huge audience, you need to start with keyword research. It is a topic every blogger has covered at least once, and something they need to work on from the very first day of their blogging life. Every SEO blogger has almost certainly used Google Keyword Planner. If you haven’t heard of it, you are missing out on a lot of potential business growth.

More on Google Keyword Planner:

There are many keyword research tools available in the market, but Google Keyword Planner is at the top of the list. It is also one of the major keyword spy tools you will come across. Google Keyword Planner is an official tool from Google, offering traffic estimates for targeted keywords. It also helps users find related and relevant keywords matching their niche. There are some important points you need to know about Google Keyword Planner before you can actually start using it.

  • To use the Google Keyword Planner tool, you need to register with Google and have an AdWords account. The tool is free of cost, and you don’t have to spend a single penny to use it. You can create an AdWords account in a few simple steps and start using the tool immediately.
  • If you want, you can search for current Google AdWords coupons, which will help you create a free account for your own use and start using the Google Keyword Planner tool right away.
  • The tool is mainly targeted at AdWords advertisers. That said, it provides a great deal of information when it is time to find the right keywords for your blog and the articles relevant to your business.

Log in online and get a clear idea of what the homepage of this Google tool looks like. You just have to enter the target keyword in the given search bar to get search results almost immediately. Later, you can add filters if you want to.

Categorized in Online Research

Online research involves collecting information from the internet. It saves cost, is impactful and it offers ease of access. Online research is valuable for gathering information. Tools such as questionnaires, online surveys, polls and focus groups aid market research. You can conduct market research with little or no investment for e-commerce development.

Search Engine Optimization makes sure that your research is discoverable. If your research is highly ranked more people will find, read and cite your research.

Steps to improve the visibility of your research include:

  1. The title gives the reader a clear idea of what the research is about. The title is the first thing a reader sees. Make your research title relevant and consistent. Use a search engine friendly title. Make sure your title provides a solution.
  2. Keywords are key concepts in your research output. They index your article and make sure your research is found quickly. Use keywords that are relevant and common to your research field. Places to use relevant keywords include title, heading, description tags, abstract, graphics, main body text and file name of the document.
  3. The abstract convinces readers to read an article. It also aids retrieval in a search.
  4. When others cite your research your visibility and reputation will increase. Citing your earlier works will also improve how search engines rank your research.
  5. External links from your research to blogs, personal webpage, and social networking sites will make your research more visible.
  6. The type of graphics you use affects your ranking. Use vectors such as .svg, .eps, .as and .ps. Vectors improve your research optimization.
  7. Make sure you are consistent with your name across all publications. Be distinguishable from others.
  8. Use social media sites such as Facebook, Twitter, and Instagram to publicize your research. Inform everyone. Share your links everywhere.
  9. Make sure your research is on a platform indexed properly by search engines.

Online research is developing and can take place in email, chat rooms, instant messaging and web pages.  Online research is done for customer satisfaction, product testing, audience targeting and database mining.

Ethical dilemmas in online research include:

  1. How to get informed consent from the participants being researched?
  2. What constitutes privacy in online research?
  3. How can researchers prove the real identity of participants?
  4. When is covert observation justifiable?

Knowing how to choose resources when doing online research can help you avoid wasted time.

WAYS TO MAKE ONLINE RESEARCH EASY AND EFFECTIVE

  1. Ask: Know the resources recommended for your research from knowledgeable people. You can get information on valuable online journals or websites from an expert or knowledgeable people.
  2. Fact from fiction: Know the sites that are best for your research topic. Make sure the websites you have chosen are valuable and up to date. Sites with .edu and .gov are usually safe. If you use a .org website, make sure it is proper, reliable and credible. If you use a .com site, check whether the site carries advertising; bias is a possibility.

Social media sites, blogs, and personal websites will give you personal opinions and not facts.

  3. Search Smartly: Use established search engines. Use specific terms. Try alternative searches. Use search operators or advanced search. Know the best sites.
  4. Focus: Do not be distracted when conducting online research. Stay focused and away from social media sites.
  5. Cite Properly: Cite the source properly. Do not just copy and paste; plagiarism can affect your work.

When conducting research, use legitimate and trustworthy resources. Sites to help you find articles and journals that are reliable include:

  1. BioMedCentral
  2. Artcyclopedia
  3. FindArticles.com
  4. Digital History
  5. Infomine
  6. Internet Public Library
  7. Internet History Sourcebooks
  8. Librarians Internet Index
  9. Intute
  10. Library of Congress
  11. Project Gutenberg
  12. Perseus Digital Library
  13. Research Guide for Students.

No matter what you are researching the internet is a valuable tool. Use sites wisely and you will get all the information you need.

ONLINE RESEARCH METHODS

  1. Online focus group: This is for business to business service research, consumer research and political research. Pre-selected participants who represent specific interest are invited as part of the focus group.
  2. Online interview: This is done using computer-mediated communication (CMC) such as SMS or email. Online interviews are synchronous or asynchronous. In synchronous interviews, responses are received in real time, for example in online chat interviews. In asynchronous interviews, responses are not in real time, as in email interviews. Online interviews use feedback about topics to get insight into the participants’ attitudes, experiences or ideas.
  3. Online qualitative research: This includes blogs, communities and mobile diaries. It saves cost, time and is convenient. Respondents for online qualitative research can be gotten from surveys, databases or panels.
  4. Social network analysis: This has gained acceptance. With social network analysis, researchers can measure the relationships between people, groups, organizations, URLs and so on.

Other methods of online research include cyber-ethnography, online content analysis, and Web-based experiments.

TYPES OF ONLINE RESEARCH

  1. Customer satisfaction research: This occurs through phone calls or emails. Customers are asked to give feedback on their experience with a product, service or an organization.
  2. New product research: This is carried out by testing a new product with a group of selected individuals and immediately collecting feedback.
  3. Brand loyalty: This research seeks to find out what attracts customers to a brand. The research is to maintain or improve a brand.
  4. Employee satisfaction research: With this research, you can know what employees think about working for your organization. The morale of your organization can contribute to its productivity.

When conducting online research, ask open-ended questions and show urgency, but be tolerant.

Written by Junaid Ali Qureshi, a digital marketing specialist who has helped several businesses gain traffic, outperform the competition and generate profitable leads. His current ventures include Progostech, Magentodevelopers.online, eLabelz, Smart Leads.ae, Progos Tech and eCig.

Categorized in Online Research

Source: This article was Published business.com By Katharine Paljug - Contributed by Member: Grace Irwin

Good content marketing, which makes use of long-tail keywords, can be key to making sure your small business ranks well on Google.

As the internet continues to change consumer behavior, more marketers are turning to content marketing to reach customers. But the rules for this new form of consumer outreach are different than those of traditional ads. Rather than creating a slogan or image to catch customers' attention, content marketing requires the careful use of long-tail keywords.

What are long-tail keywords?

Trying to figure out long-tail keywords can feel overwhelming, especially if you aren't a marketing professional. For instance, a simple Google search for the phrase returns more than 77 million results. At its core, long-tail keywords refer to a phrase or several words that indicate precisely what a user has typed into Google. If you tailor your SEO properly, you will rank high in the search results for the phrase that directly corresponds to what your customers are searching for online as it relates to your business.

For example, say your Atlanta-based company makes doodads that are only meant for use within restaurants and bars. Someone looking to buy those doodads might search for "where to find doodads for restaurants in Atlanta." And if you're positioned well in search results (because you've made effective use of that long-tail keyword phrase on your website), you may show up in the first- or second-page search results. 

To use long-tail keywords, you don't need to know everything about them. You just need to understand six things about the changing world of marketing, how long-tail keywords fit in that picture and where you can find them. The answer, generally speaking, is content marketing.

Content marketing has a low cost and high ROI.

Though you can still purchase ads online, one of the most cost-effective and valuable ways to reach customers is through content marketing. That involves creating online material, such as blog posts, website pages, videos or social media posts, that do not explicitly promote your brand. Instead, the messaging stimulates interest in your business and products by appealing to the needs and interests of your target customers. 

Content marketing is a form of inbound marketing, bringing consumers to you and gaining their trust and loyalty. It generates more than three times as many leads as traditional outbound marketing while costing about 62 percent less. 

However, blogging and other forms of content marketing aren't effective unless you make effective use of keywords, particularly long-tail keywords.

Long-tail keywords are essential to content marketing.

When creating online content, you want customers to be able to find it. The most common way that customers find content online is through search engines. The average business website receives more than three-quarters of its traffic from search, but that level of traffic is impossible without using keywords. 

When you incorporate relevant keywords in your content, you optimize your website for search, making it more likely that customers searching for the keywords you have used will find your business. This search engine optimization, or SEO, increases your web traffic and exposes new audiences to your brand. 

Just using keywords isn't enough. To create effective content that makes it to the top of a search engine results page, you need to use a specific type of keyword known as long-tail keywords.

Long-tail keywords attract customers who are ready to buy.

Long-tail keywords are phrases of three or more words, but their length isn't where the name comes from. Long tail describes the portion of the search-demand curve where these keywords live. 

In statistics, the long tail is the portion of a distribution graph that tapers off gradually rather than ending sharply. This tail usually has many small values and goes on for a long time. 

When it comes to online marketing, a small number of simple keywords are searched very frequently, while keywords in the long tail are each searched only sporadically. For example, a simple keyword searched hundreds of thousands of times would be "fitness"; a long-tail keyword would be "dance fitness class in Boston." Because the tail is so long and contains so many distinct phrases, long-tail keywords collectively account for about 70 percent of all online searches, even though each individual phrase is searched relatively rarely.
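To see why so many rarely searched phrases add up to so much traffic, consider a quick Python sketch. Everything in it is an assumption for illustration: real search demand isn't exactly Zipf-distributed, and the keyword counts are invented. The point is the shape of the curve, not the numbers.

```python
# Purely illustrative: model search demand as a Zipf-like curve with
# invented numbers, not real search data, to show how a long tail of
# rarely searched phrases can outweigh a handful of popular head terms.

def zipf_volumes(n_keywords: int, exponent: float = 1.0) -> list[float]:
    """Hypothetical relative search volume for keywords ranked by popularity."""
    return [1.0 / (rank ** exponent) for rank in range(1, n_keywords + 1)]

volumes = zipf_volumes(100_000)
total = sum(volumes)

# Treat the top 100 keywords as the "head" and everything else as the tail.
head_share = sum(volumes[:100]) / total
tail_share = sum(volumes[100:]) / total

print(f"Head (top 100 keywords): {head_share:.0%} of all searches")
print(f"Long tail (remaining {len(volumes) - 100:,} keywords): {tail_share:.0%}")
```

Under these toy assumptions, the tail ends up with more than half of all searches, the same dynamic behind the 70 percent figure above.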

Long-tail keywords are not searched for as frequently as simple keywords like "hotel" or "socks," because they don't apply to everyone. They're what a customer plugs into a search engine when they know exactly what they want and need an online search to help them find it. These search terms communicate a consumer's intent – especially their intent to buy – rather than their general interest. 

This means that when you use the right long-tail keywords, you appeal directly to customers who are looking for what you are selling. You want to determine what your audience might be searching for and then work those phrases into your content marketing.

Look for high search volume and low competition.

Because long-tail keywords are so niche, there is much less competition for them. If your long-tail keyword is "dance fitness class in Boston," you aren't competing for search traffic with every dance class out there or even every gym in Boston. You are only competing with Boston studios that offer dance fitness classes. That is a much smaller field. 

However, you still need enough people searching for your keywords to make your investment in content marketing worthwhile. The best long-tail keywords are low in competition but relatively high in search volume. High volume in this context doesn't mean thousands of searches every day; several dozen to a couple hundred searches a month is enough to show that your potential customers are actively looking for that keyword. The sketch below shows what that filtering step can look like in practice.
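Here is a minimal Python sketch of that shortlisting logic. The data and the "volume" and "competition" fields are hypothetical stand-ins for whatever your keyword tool exports, and the thresholds are arbitrary rules of thumb you would tune to your own market.

```python
# Minimal sketch: shortlist long-tail keywords that are low in
# competition but still searched often enough to be worth targeting.
# All values below are hypothetical; real figures would come from a
# keyword research tool's export.

keywords = [
    {"phrase": "fitness", "volume": 450_000, "competition": 0.95},
    {"phrase": "gym", "volume": 900_000, "competition": 0.98},
    {"phrase": "dance fitness class in boston", "volume": 140, "competition": 0.12},
    {"phrase": "beginner dance fitness class boston", "volume": 40, "competition": 0.05},
]

MIN_MONTHLY_SEARCHES = 30  # "several dozen" searches, per the article
MAX_COMPETITION = 0.30     # assumed score from 0 (low) to 1 (high)

shortlist = [
    kw for kw in keywords
    if kw["volume"] >= MIN_MONTHLY_SEARCHES
    and kw["competition"] <= MAX_COMPETITION
]

for kw in shortlist:
    print(f"{kw['phrase']}: {kw['volume']} searches/mo, "
          f"competition {kw['competition']:.2f}")
```

Run on this toy data, only the two Boston dance phrases survive: exactly the niche terms a small studio could realistically compete for.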

There are many tools to help you find long-tail keywords.

The best way to find low-competition, high-volume long-tail keywords is with a keyword tool. These tools allow you to plug in a seed keyword related to your business or audience, and they will return relevant long-tail keywords. 

Keyword planners, such as Answer the Public and Keywords Everywhere, are free, though the number of keywords and the amount of information they provide about each one are limited. You can also plug seed keywords into a Google search and use the autocomplete and related-search features to find new long-tail keywords.
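That autocomplete trick can also be scripted. The Python sketch below queries Google's suggest endpoint; an important caveat is that this endpoint is unofficial and undocumented, so Google may change, throttle or block it at any time. Treat it as an experiment, not a dependable pipeline.

```python
# Sketch of scripting Google's autocomplete suggestions.
# Assumption: this suggest endpoint is unofficial and undocumented;
# Google may change or block it at any time. Standard library only.

import json
import urllib.parse
import urllib.request

def autocomplete_suggestions(seed: str) -> list[str]:
    """Return Google's autocomplete suggestions for a seed phrase."""
    url = (
        "https://suggestqueries.google.com/complete/search"
        "?client=firefox&q=" + urllib.parse.quote(seed)
    )
    with urllib.request.urlopen(url, timeout=10) as response:
        charset = response.headers.get_content_charset() or "utf-8"
        payload = json.loads(response.read().decode(charset))
    return payload[1]  # response shape: [seed, [suggestion, ...]]

# Expand a seed into longer, more specific phrases by prepending
# common qualifier words, then collecting the suggestions.
seed = "dance fitness class"
for qualifier in ("best", "where to find", "how much is a"):
    for phrase in autocomplete_suggestions(f"{qualifier} {seed}"):
        print(phrase)
```

Prepending qualifiers such as "best" or "where to find" nudges the suggestions toward the longer, intent-rich phrases this article is about.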

Paid keyword research tools, such as LongTailPro or Ahrefs Keyword Explorer, return not only thousands of relevant long-tail keywords but also statistics on the number of monthly searches and the level of competition for those keywords. They also include tools for project planning, search filters, and additional traffic stats. However, these tools can be expensive, costing several hundred dollars to use. 

The type of tool you select depends on your budget and the scope of your content marketing, and the keywords that get you the best results depend on your business and your customers.

Long-tail keywords tell you what content to create.

If you know who your target customer is, you can use their interests and concerns as seed keywords to find related long-tail keywords. For example, if you know that your customers are interested in travel, you can search for those words to find related keywords such as "which travel insurance is best" or "tax-deductible travel expenses."

Once you have a list of these high-volume, low-competition keywords, they provide you with ideas for blog posts, social media, video content, web pages and more. You can create a series of blog posts comparing kinds of travel insurance. You can make an infographic about tax-deductible travel expenses. Rather than wondering what content to create, the long-tail keywords themselves can serve as your topics. 

Creating content around these relevant keywords automatically optimizes your web platforms for search. And since your initial seed keywords were based on what you know about your target customer, you are designing content that directly appeals to the people searching for a business like yours. Using long-tail keywords effectively works with search engines to bring customers directly to your website, rather than hoping that they see an ad and decide your business is worth visiting.


Source: This article was published on searchenginejournal.com by Matt Southern - Contributed by Member: Corey Parker

Google’s John Mueller revealed that the search engine’s algorithms do not punish keyword stuffing too harshly.

In fact, keyword stuffing may be ignored altogether if the content is found to otherwise have value to searchers.

This information was provided on Twitter in response to users inquiring about keyword stuffing. More specifically, a user was concerned about a page ranking well in search results despite obvious signs of keyword repetition.

Prefacing his statement with the suggestion to focus on one’s own content rather than someone else’s, Mueller goes on to say that there are over 200 factors used to rank pages and “the nice part is that you don’t have to get them all perfect.”

When the excessive keyword repetition was further criticized by another user, Mueller said this practice shouldn’t result in a page being removed from search results, and “boring keyword stuffing” may be ignored altogether.


“Yeah, but if we can ignore boring keyword stuffing (this was popular in the 90’s; search engines have a lot of practice here), there’s sometimes still enough value to be found elsewhere. I don’t know the page, but IMO keyword stuffing shouldn’t result in removal from the index.”

There are several takeaways from this exchange:

  • An SEO’s time is better spent improving their own content, rather than trying to figure out why other content is ranking higher.
  • Excessive keyword stuffing will not result in a page being removed from indexing.
  • Google may overlook keyword stuffing if the content has value otherwise.
  • Use of keywords is only one of over 200 ranking factors.

Overall, it’s probably not a good idea to overuse keywords because it arguably makes the content less enjoyable to read. However, keyword repetition will not hurt a piece of content when it comes to ranking in search results.
