
Whatever research you intend to do online, you have to start somewhere. The collection and collation of data require you to be organised. Develop your own research techniques for working online and stick to them.

Are you looking for info on…

  • data collection tools in research methodology
  • what are the disadvantages of online research
  • good online research tools
  • best internet research tips

Internet Research Techniques

Before starting any research on the internet, you need to know some of the pros and cons.

What are the advantages of doing internet research?

  • Ability to obtain a large sample, which increases statistical power
  • Ability to obtain a more diverse sample than in traditional university-based research
  • Prevents experimenter demand effects (with no interaction with the experimenter, no “experimenter expectancy” effect)

What are the disadvantages of doing internet research?

  • Some subjects may try to participate in the same study more than once
    1. To overcome this problem, you can ask for the email address of each participant and then look for duplicates.
    2. Since it is easy nowadays for people to create multiple email addresses, you can also ask for the name and/or address of each subject. Sometimes researchers will offer a “lottery” as an incentive to participate (e.g., a $100 lottery prize for every 400 participants), so asking for a name/address is necessary to award the lottery check.
    3. You can also collect the IP address of each participant and look for duplicates. One issue here is that ISPs sometimes assign the same IP address to multiple people.
  • Some subjects may drop out of the study before finishing
    1. In traditional laboratory-based research it’s unusual for a subject to walk out of a study, but online a subject can get distracted or simply lose interest and end the study. Sometimes researchers will offer a “lottery” as an incentive to participate, but with any type of monetary incentive, IRBs typically require a statement in the consent form saying something to the effect of “you may discontinue participation at any time without any consequences or losing your entry in the lottery.”
    2. Since a certain number of online subjects won't finish the study, you can over-recruit by roughly 10-20% to offset those who don't finish.
  • Some subjects may stop the study and then continue minutes/hours later
    1. The problem here is that some studies involve manipulations which may lose power if there is a time lag between the manipulation and the measures in the study. One advantage of online studies is that you can record how long each subject takes, so you can identify the average length of your study and also flag those subjects who take an extraordinarily long time to finish (see the screening sketch after this list).
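Where your survey platform exports raw responses, these screening steps are easy to automate. Below is a minimal Python sketch; the field names ("email", "ip", "duration_minutes"), the 15% dropout assumption, and the outlier threshold are illustrative placeholders, not features of any particular survey tool.

```python
# Screening sketch for online-study responses: over-recruitment target,
# duplicate email/IP detection, and completion-time outliers.
import math
from collections import Counter
from statistics import mean, stdev

def recruitment_target(needed_n, expected_dropout=0.15):
    """Over-recruit to offset the 10-20% of online subjects who never finish."""
    return math.ceil(needed_n / (1 - expected_dropout))

def flag_suspect_responses(responses, max_z=3.0):
    """Flag duplicate emails/IPs and unusually long completion times."""
    emails = Counter(r["email"].strip().lower() for r in responses)
    ips = Counter(r["ip"] for r in responses)
    durations = [r["duration_minutes"] for r in responses]
    avg = mean(durations)
    sd = stdev(durations) if len(durations) > 1 else 0.0

    flagged = []
    for r in responses:
        reasons = []
        if emails[r["email"].strip().lower()] > 1:
            reasons.append("duplicate email")
        if ips[r["ip"]] > 1:
            reasons.append("duplicate IP (may be an ISP-shared address)")
        if sd and (r["duration_minutes"] - avg) / sd > max_z:
            reasons.append("unusually long completion time")
        if reasons:
            flagged.append((r, reasons))
    return flagged

print(recruitment_target(400))  # recruit ~471 to end up with ~400 completions
```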

 

 

Categorized in Online Research

The speeds described below reflect, of course, the latest technology available in large metro centers. Your own part of the world will offer speeds that vary with the technology and providers available in your area.

For Cellphone Users in City Limits

Modern cellphone connections should deliver 5 to 12 megabits per second (Mbps) if you have 4th generation (4G) LTE technology.

For Desktop Users in City Limits

Modern high-speed cable connections to a home desktop should deliver 50 to 150 megabits per second (Mbps).

Also remember: these speeds are theoretical numbers. In practice, most users will experience slower speeds, and speeds vary with many factors.
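If you want to sanity-check these numbers yourself before turning to the dedicated tools below, a rough measurement can be scripted with nothing but the Python standard library. This is a minimal sketch: it times the download of a test file and converts the result to megabits per second, and the URL shown is a placeholder you must replace with a large, publicly hosted file.

```python
# Rough do-it-yourself throughput check: time a download, report Mbps.
import time
import urllib.request

def measure_download_mbps(url, chunk_size=64 * 1024, max_bytes=20 * 1024 * 1024):
    start = time.monotonic()
    received = 0
    with urllib.request.urlopen(url) as resp:
        while received < max_bytes:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            received += len(chunk)
    elapsed = time.monotonic() - start
    return (received * 8) / (elapsed * 1_000_000)  # bits per second -> Mbps

# Example (placeholder URL):
# print(f"{measure_download_mbps('https://example.com/testfile.bin'):.1f} Mbps")
```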

Here are several ways you can test your internet connection speed and see your own performance.

1-Ookla Speed Test for Android


Ookla is a respected American name that has offered speed testing services for years. Its mobile app will perform upload and download speed tests with controlled data over a 30-second interval, then provide you with graphical results showing what speeds your mobile device is achieving on 4G, LTE, EDGE, 3G, and EVDO networks.

Important note: many ISPs will offer to be the target Ookla server for you, so their results may be skewed to inflate their performance numbers. After your first test, it is a good idea to go into the Ookla settings and choose an independent server outside of your ISP's control when you run your second and third Android speed tests.

2-Ookla Speed Test for iPhone/iOS

In the same fashion as the Android version, Ookla for Apple will connect to a server from your iPhone and send and receive data with a strict stopwatch to capture the results. The results show in stylish graphs, and you can choose to save your results online so you can share them with friends, or even your ISP.

When you use Ookla on your Apple device, make sure to run it multiple times and, after the first test, use the Ookla settings to choose a target server that is not owned by your ISP; you are more likely to get unbiased results from a third-party server.

3-Bandwidthplace.com Speed Test

This is a good free speed test choice for residents of the USA, Canada, and the UK. The convenience of Bandwidthplace.com is that you need not install anything; just run their speed test in your Safari, Chrome, or IE browser.

Bandwidth Place only has 19 servers around the world at this time, though, with most of them in the USA. Accordingly, if you are far away from the Bandwidth Place servers, your internet speed will appear quite slow.

4-DSLReports Speed Test

As an alternative to Ookla and Bandwidthplace, the tools at DSLReports offer some interesting additional features. You can choose to test your bandwidth speed when it is encrypted (scrambled to prevent eavesdropping) or unencrypted. It also tests you against multiple servers simultaneously.

5-ZDNet Speed Test for Desktop


Another alternative to Ookla is ZDNet. This fast test also offers international statistics on how internet speeds compare across countries.

6-Speedof.Me Speed Test for Desktop


Some network analysts claim that speed tests based on HTML5 technology most accurately mimic how internet traffic really flows. The HTML5 tool at Speedof.Me is one good option for testing your desktop or cell phone speed, and because it is browser-based it requires no install.

You don't get to choose the servers with Speedof.me, but you do get to pick what kind of data file you want to upload and download for the test.

7-Where Does Internet Sluggishness Come From?


Your performance is likely to fall short of the theoretical maximum on your ISP account.  This is because many variables come into play:

  1. Online traffic and congestion: if you are sharing a connection with many other users, and if those users are heavy gamers or downloaders, then you'll definitely experience a slowdown.
  2. Your location and distance from the server:  particularly true for those of you in rural settings, the farther the signal travels, the more your data will hit bottlenecks across the many cable 'hops' it takes to reach your device.
  3. Hardware: hundreds of pieces of hardware connect you to the Web, including your network connector, your router and modem, many servers and many cables. Not to mention: a wireless connection has to compete with other signals in the air.
  4. Time of day:  just like the roads during rush hour, the cables of the Internet have peak times for traffic. This definitely contributes to a slower speed experience.
  5. Selective throttling:  some ISPs will actually analyze data and purposely slow down specific types of it. For example, many ISPs will purposely slow down your movie downloads, or even dial all your speeds down if you consume more than your monthly quota of data.
  6. Software running on your system:  you may unwittingly have some malware or some bandwidth-intensive application running that will rob your internet speed.
  7. The other people in your house or building:  if your teenage daughter is streaming music in the next room, or if your building neighbor below you is downloading 20GB of movies, then you'll likely experience sluggishness.

8-What to Do When Your Speed Doesn't Match What Your ISP Promises...


If the speed variance is within 20-35% of the promised speed, you may not have much recourse. That is to say, if your ISP promises you 100 Mbps and you can show them that you get 70 Mbps, the customer service people will probably just tell you politely that you need to live with it.

On the other hand, if you paid for a 150 Mbps connection and you are getting 44 Mbps, then you are well within reason to ask them to audit your connection. If they mistakenly provisioned you at a slower speed, then they should give you what you paid for, or credit you back the fees.
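The arithmetic behind those two scenarios is simple enough to script. The small sketch below just expresses the measured shortfall as a percentage of the advertised speed; the 20-35% gray zone is this article's rough guidance, not an official rule.

```python
# Percent shortfall from the advertised speed for the scenarios above.
def shortfall_percent(promised_mbps, measured_mbps):
    return 100 * (promised_mbps - measured_mbps) / promised_mbps

print(round(shortfall_percent(100, 70), 1))   # 30.0 -> inside the 20-35% gray zone
print(round(shortfall_percent(150, 44), 1))   # 70.7 -> worth requesting an audit
```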

 Source: This article was published on lifewire.com by Paul Gil.

Categorized in Science & Tech

The most common sources of data collection in qualitative research are interviews, observations, and review of documents (Creswell, 2009b; Locke, Silverman, & Spirduso, 2010; Marshall & Rossman, 1999). The methodology is planned and pilot-tested before the study. Creswell (2003) places the data-collecting procedures into four categories: observations, interviews, documents, and audiovisual materials. He provides a concise table of the four methods, the options within each type, the advantages of each type, and the limitations of each.

We noted previously that the researcher typically has some type of framework (sub-purposes perhaps) that determines and guides the nature of the data collection. For example, one phase of the research might pertain to the manner in which expert and nonexpert sports performers perceive various aspects of a game. This phase could involve having the athlete describe his or her perceptions of what is taking place in a specific scenario. A second phase of the study might focus on the interactive thought processes and decisions of the two groups of athletes while they are playing. The data for this phase could be obtained from filming them in action and then interviewing them while they are watching their performances on videotape. Still another aspect of the study could be directed at the knowledge structure of the participants, which could be determined by a researcher-constructed instrument.

You should not expect qualitative data collection to be quick. It is time intensive. Collecting good data takes time (Locke, Silverman, & Spirduso, 2010), and quick interviews or short observations are unlikely to help you gain more understanding. If you are doing qualitative research, you must plan to be in the environment for enough time to collect good data and understand the nuance of what is occurring.

Interviews

The interview is undoubtedly the most common source of data in qualitative studies. The person-to-person format is most prevalent, but occasionally group interviews and focus groups are conducted. Interviews range from the highly structured style, in which questions are determined before the interview, to the open-ended, conversational format. In qualitative research, the highly structured format is used primarily to gather sociodemographic information. For the most part, however, interviews are more open-ended and less structured (Merriam, 2001). Frequently, the interviewer asks the same questions of all the participants, but the order of the questions, the exact wording, and the type of follow-up questions may vary considerably.

Being a good interviewer requires skill and experience. We emphasized earlier that the researcher must first establish rapport with the respondents. If the participants do not trust the researcher, they will not open up and describe their true feelings, thoughts, and intentions. Complete rapport is established over time as people get to know and trust one another. An important skill in interviewing is being able to ask questions in such a way that the respondent believes that he or she can talk freely.

Kirk and Miller (1986) described their field research in Peru, where they tried to learn how much urban, lower-middle-class people knew about coca, the organic source of cocaine. Coca is legal and widely available in Peru. In their initial attempts to get the people to tell them about coca, they received the same culturally approved answers from all the respondents. Only after they changed their style to asking less sensitive questions (e.g., “How did you find out you didn’t like coca?”) did the Peruvians open up and elaborate on their knowledge of (and sometimes their personal use of) coca. Kirk and Miller made a good point about asking the right questions and the value of using various approaches. Indeed, this is a basic argument for the validity of qualitative research.

Skillful interviewing takes practice. Ways to develop this skill include videotaping your own performance in conducting an interview, observing experienced interviewers, role-playing, and critiquing peers. It is important that the interviewer appear nonjudgmental. This can be difficult in situations where the interviewee’s views are quite different from those of the interviewer. The interviewer must be alert to both verbal and nonverbal messages and be flexible in rephrasing and pursuing certain lines of questioning. The interviewer must use words that are clear and meaningful to the respondent and must be able to ask questions so that the participant understands what is being asked. Above all, the interviewer has to be a good listener.

The use of a digital recorder is undoubtedly the most common method of recording interview data because it has the obvious advantage of preserving the entire verbal part of the interview for later analysis. Although some respondents may be nervous to talk while being recorded, this uneasiness usually disappears in a short time. The main drawback with recording is the malfunctioning of equipment. This problem is vexing and frustrating when it happens during the interview, but it is devastating when it happens afterward when you are trying to replay and analyze the interview. Certainly, you should have fresh batteries and make sure that the recorder is working properly early in the interview. You should also stop and play back some of the interviews to see whether the person is speaking into the microphone loudly and clearly enough and whether you are getting the data. Some participants (especially children) love to hear themselves speak, so playing back the recording for them can also serve as motivation. Remember, however, that machines can malfunction at any time.

Video recording seems to be the best method because you preserve not only what the person said but also his or her nonverbal behavior. The drawback to using video is that it can be awkward and intrusive. Therefore, it is used infrequently. Taking notes during the interview is another common method. Occasionally note taking is used in addition to recording, primarily when the interviewer wishes to note certain points of emphasis or make additional notations. Taking notes without recording prevents the interviewer from being able to record all that is said. It keeps the interviewer busy, interfering with her or his thoughts and observations while the respondent is talking. In highly structured interviews and when using some types of formal instrument, the interviewer can more easily take notes by checking off items and writing short responses.

The least preferred technique is trying to remember and write down afterward what was said in the interview. The drawbacks are many, and this method is seldom used.

Focus Groups

Another type of qualitative research technique employs interviews on a specific topic with a small group of people, called a focus group. This technique can be efficient because the researcher can gather information about several people in one session. The group is usually homogeneous, such as a group of students, an athletic team, or a group of teachers.

In his 1996 book Focus Groups as Qualitative Research, Morgan discussed the applications of focus groups in social science qualitative research. Patton (2002) argued that focus group interviews might provide quality controls because participants tend to provide checks and balances on one another that can serve to curb false or extreme views. Focus group interviews are usually enjoyable for the participants, and they may be less fearful of being evaluated by the interviewer because of the group setting. The group members get to hear what others in the group have to say, which may stimulate the individuals to rethink their own views.

In the focus group interview, the researcher is not trying to persuade the group to reach consensus. It is an interview. Taking notes can be difficult, but an audio or video recorder may solve that problem. Certain group dynamics such as power struggles and reluctance to state views publicly are limitations of the focus group interview. The number of questions that can be asked in one session is limited. Obviously, the focus group should be used in combination with other data-gathering techniques.

Observation

Observation in qualitative research generally involves spending a prolonged amount of time in the setting. Field notes are taken throughout the observations and are focused on what is seen. Many researchers also record notes to assist in determining what the observed events might mean and to provide help for answering the research questions during subsequent data analysis (Bogdan & Biklen, 2007; Pitney & Parker, 2009). Although some researchers use cameras to record what is occurring at the research site, that method is uncommon and most researchers use field notes to record what has occurred in the setting.

One major drawback to observation methods is obtrusiveness. A stranger with a pad and pencil or a camera is trying to record people’s natural behavior. A keyword here is stranger. The task of a qualitative researcher is to make sure that the participants become accustomed to having the researcher (and, if appropriate, a recording device) around. For example, the researcher may want to visit the site for at least a couple of days before the initial data collection.

In an artificial setting, researchers can use one-way mirrors and observation rooms. In a natural setting, the limitations that stem from the presence of an observer can never be ignored. Locke (1989) observed that most naturalistic field studies are reports of what goes on when a visitor is present. The important question is, How important and limiting is this? Locke suggested ways of suppressing reactivity, such as the visitor’s being in the setting long enough so that he or she is no longer considered a novelty and being as unobtrusive as possible in everything from dress to choice of location in a room.

Other Data-Gathering Methods

Among the many sources of data in qualitative research are self-reports of knowledge and attitude. The researcher can also develop scenarios, in the form of descriptions of situations or actual pictures, that are acted out for participants to observe. The participant then gives her or his interpretation of what is going on in the scenario. The participant’s responses provide her or his perceptions, interpretations, and awareness of the total situation and of the interplay of the actors in the scenario.

Other recording devices include notebooks, narrative field logs, and diaries, in which researchers record their reactions, concerns, and speculations. Printed materials such as course syllabi, team rosters, evaluation reports, participant notes, and photographs of the setting and situations are examples of document data used in qualitative research.

Source: This article was published on humankinetics.com by Stephen J. Silverman, EdD.

Categorized in Research Methods

In researching high-growth professional services firms, we found that firms that did systematic business research on their target client group grew faster and were more profitable.

Further, those that did more frequent business research (at least quarterly) grew the fastest and were the most profitable. Additional research also confirms that the fastest-growing firms do more research on their target clients.

Think about that for a minute: Faster growth and more profit. Sounds pretty appealing.

The first question is usually around what kind of research to do and how it might help grow your firm. I’ve reflected on the kinds of questions we’ve asked when doing research for our professional services clients and how the process has impacted their strategy and financial results.

There are a number of types of research that your firm can use, including:

  • Brand research
  • Persona research
  • Market research
  • Lost prospect analysis
  • Client satisfaction research
  • Benchmarking research
  • Employee surveys

So those are the types of research, but what are the big questions that you need answers for? We looked across the research we have done on behalf of our clients to isolate the most insightful and impactful areas of inquiry.

The result is this list of the top 10 research questions that can drive firm growth and profitability:

1. Why do your best clients choose your firm?

Notice we are focusing on the best clients, not necessarily the average client. Understanding what they find appealing about your firm can help you find others just like them and turn them into your next new client.

2. What are those same clients trying to avoid?

This is the flip side of the first question and offers a valuable perspective. As a practical matter, avoiding being ruled out during the early rounds of a prospect’s selection process is pretty darned important. This is also important in helping shape your business practices and strategy.

In our research on professional services buyers and sellers, we’ve found that the top circumstances that buyers want to avoid in a service provider are broken promises and a firm that’s indistinguishable from everyone else.

Our research also compared what sellers (professional services providers) believe buyers want to avoid, and it shows that many sellers misjudge their potential clients’ priorities. Closing this perception gap is one of the ways that research can help a firm grow faster. If you understand how your prospects think, you can do a much better job of turning them into clients.

3. Who are your real competitors?

Most firms aren’t very good at identifying their true competitors. When we ask a firm to list their competitors and ask their clients to do the same, there is often only about a 25% overlap in their lists.

Why? Sometimes it’s because you know too much about your industry and rule out competitors too easily. At other times, it’s because you are viewing a client’s problems through your own filter and overlook completely different categories of solutions that they are considering.

For example, a company facing declining sales could easily consider sales training, new product development, or a new marketing campaign. If you consult on new product development, the other possible solutions are all competitors. In any case, ignorance of your true competitors seldom helps you compete.

4. How do potential clients see their greatest challenges?

The answer to this question helps you understand what is on prospective clients’ minds and how they are likely to describe and talk about those issues. The key here is that you may offer services that can be of great benefit to organizations, but they never consider you because they are thinking about their challenges through a different lens.

They may want cost reduction when you are offering process improvement (which, in fact, reduces cost). Someone needs to connect the dots or you will miss the opportunity. This is similar to the dilemma of understanding the full range of competitors described above.

5. What is the real benefit your firm provides?

Sure, you know your services and what they are intended to do for clients. But what do they actually do? Often, firms are surprised to learn the true benefit of their service. What might’ve attracted a client to your firm initially might not be what they end up valuing most when working with you. For example, you might have won the sale based on your good reputation, but after working with you, your client might value your specialized skills and expertise most.

When you understand the true value and benefit of your services, you’re in a position to enhance it or even develop new services with other true benefits.

6. What are emerging trends and challenges?

Where is the market headed? Will it grow or contract? What services might be needed in the future? This is fairly common research fodder in large market-driven industries, but it’s surprisingly rare among professional services firms.

Understanding emerging trends can help you conserve and better target limited marketing dollars. I’ve seen many firms add entire service lines, including new hires and big marketing budgets, based on little more than hunches and anecdotal observations. These decisions should be driven by research and data. Research reduces your risk associated with this type of decision.

7. How strong is your brand?

What is your firm known for? How strong is your reputation? How visible are you in the marketplace? Answers to each of these questions can vary from market to market. Knowing where you stand can not only guide your overall strategy, it can also have a profound impact on your marketing budget. An understanding of your brand’s strengths and weaknesses can help you understand why you are getting traction in one segment and not another.

8. What is the best way to market to your prime target clients?

Wouldn’t it be nice to know where your target clients go to get recommendations and referrals? Wouldn’t it be great if you knew how they want to be marketed to? These are all questions that can be answered through systematic business research. The answers will greatly reduce the level of spending needed to reach your best clients. This is perhaps one of the key reasons that firms that do regular research are more profitable.

9. How should you price your services?

This is often a huge stumbling block for professional services firms. In my experience, most firms overestimate the role price plays in buying decisions. Perhaps it is because firms are told that the reason they don’t win an engagement is the price. It is the easiest reason for a buyer to share when providing feedback.

However, if a firm hires an impartial third party to dig deeper into why it loses competitive bids, it often learns that what appears to be price may really be a perceived level of expertise, lack of attention to detail or an impression of non-responsiveness. We’ve seen firms lose business because of typos in their proposal — while attributing the loss to their fees.

10. How do your current clients really feel about you?

How likely are clients to refer you to others? What would they change about your firm? How long are they likely to remain a client? These are the kinds of questions that can help you fine-tune your procedures and get a more accurate feel for what the future holds. In some cases, we’ve seen clients reveal previously hidden strengths. In others, they have uncovered important vulnerabilities that need attention.

The tricky part here is that clients are rarely eager to tell you the truth directly. They may want to avoid an uncomfortable situation or are worried that they will make matters worse by sharing their true feelings.

Understanding the key questions discussed above can have a positive impact on your firm’s growth and profitability. That is the real power of well-designed and professionally executed business research.

Source: This article was published on accountingweb.com by Lee Frederiksen.

Categorized in Business Research

The Internet Advertising Market report profiles leading vendors in the market based on company profile, business performance, and sales. Vendors covered include Alphabet, Facebook, Baidu, Yahoo! Inc, Microsoft, Alibaba, Tencent, Twitter, AOL (Verizon Communications), eBay, LinkedIn, Amazon, IAC, Soho, and Pandora.


The Global Internet Advertising Market will reach xxx Million USD in 2017, with a CAGR of xx% from 2011-2023. The objective of the Global Internet Advertising Market Research Report 2011-2023 is to offer access to high-value, unique databases of information, such as market projections by product type, application, and region.

Download Report at www.reportsnreports.com/contacts/r…aspx?name=1169245

On the basis of end users/applications, this report focuses on the status and outlook, consumption (sales), market share, and growth rate for each major application/end user, including:

  • Retail
  • Automotive
  • Entertainment
  • Financial Services
  • Telecom
  • Consumer Goods

Online advertising, also called Internet advertising or web advertising, is a form of marketing and advertising that uses the Internet to deliver promotional marketing messages to consumers. It is a marketing strategy that uses the Internet as a medium to obtain website traffic and to target and deliver marketing messages to the right customers.

Online advertising is geared toward defining markets through unique and useful applications.


Leading vendors (key players) in the Internet Advertising industry, such as Alphabet, Facebook, Baidu, Yahoo! Inc, Microsoft, Alibaba, Tencent, Twitter, AOL (Verizon Communications), eBay, LinkedIn, Amazon, IAC, Soho, and Pandora, among others, have been included based on profile and business performance so that clients can make informed decisions.

Internet Advertising Market Segment as follows:

Based on Products Type, the report describes major products type share of regional market. Products mentioned as follows:

  • Search Ads
  • Mobile Ads
  • Banner Ads
  • Classified Ads
  • Digital Video Ads

Access report at www.reportsnreports.com/contacts/d…aspx?name=1169245

The report describes major regional markets by product and application. Regions mentioned as follows:

  • North America (U.S., Canada, Mexico)
  • Europe (Germany, U.K., France, Italy, Russia, Spain)
  • South America (Brazil, Argentina)
  • Middle East & Africa (Saudi Arabia, South Africa)

The main contents of the Internet Advertising Market Research Report include:

Global market size and forecast

Regional market size, production data, and export & import

Key manufacturers (manufacturing sites, capacity, production, and product specifications)

Average market price by SKU

Major applications

Table of Contents:

1 Internet Advertising Market Overview

Access a Copy of this Research Report at www.reportsnreports.com/purchase.aspx?name=1169245

List of Tables

Table Global Internet Advertising Market and Growth by Type

Table Global Internet Advertising Market and Growth by End-Use / Application

Table Global Internet Advertising Revenue (Million USD) by Vendors (2011-2017)

Table Global Internet Advertising Revenue Share by Vendors (2011-2017)

Table Global Internet Advertising Market Volume (Volume) by Vendors (2011-2017)

Table Global Internet Advertising Market Volume Share by Vendors (2011-2017)

Table Headquarter, Factories & Sales Regions Comparison of Vendors

Table Product List of Vendors

Table Global Internet Advertising Market (Million USD) by Type (2011-2017)

Table Global Internet Advertising Market Share by Type (2011-2017)

Table Global Internet Advertising Market Volume (Volume) by Type (2011-2017)

Table Global Internet Advertising Market Volume Share by Type (2011-2017)

Table Global Internet Advertising Market (Million USD) by End-Use / Application (2011-2017)

Table Global Internet Advertising Market Share by End-Use / Application (2011-2017)

Table Global Internet Advertising Market Volume (Volume) by End-Use / Application (2011-2017)

Table Global Internet Advertising Market Volume Share by End-Use / Application (2011-2017)

List of Figures

Figure Global Internet Advertising Market Size (Million USD) 2012-2022

Figure North America Market Growth 2011-2018

Figure Europe Market Growth 2011-2017

Figure Asia-Pacific Market Growth 2011-2017

Figure South America Market Growth 2011-2017

Figure Middle East & Africa Market Growth 2011-2017

Figure Global Internet Advertising Market (Million USD) and Growth Forecast (2018-2023)

Figure Global Internet Advertising Market Volume (Volume) and Growth Forecast (2018-2023)

Source: This article was published on whatech.com.

Categorized in Market Research

What do real customers search for?

It seems like a straightforward question, but once you start digging into research and data, things become muddled. A word or phrase might be searched for often, yet that fact alone doesn’t mean those are your customers.

While a paid search campaign will give us insight into our “money” keywords — those that convert into customers and/or sales — there are also many other ways to discover what real customers search.

Keyword Evolution

We are in the era where intent-based searches are more important to us than pure volume. As the search engines strive to better understand the user, we have to be just as savvy about it too, meaning we have to know a lot about our prospects and customers.

In addition, we have to consider voice search and how that growth will impact our traffic and ultimately conversions. Most of us are already on this track, but if you are not or want to sharpen your research skills, there are many tools and tactics you can employ.

Below are my go-to tools and techniques that have made the difference between average keyword research and targeted keyword research that leads to interested web visitors.

1. Get to Know the Human(s) You’re Targeting

Knowing the target audience, I mean really knowing them, is something I have preached for years. If you have read any of my past blog posts, you know I’m a broken record.

You should take the extra step to learn the questions customers are asking and how they describe their problems. In marketing, we need to focus on solving a problem.

SEO is marketing. That means our targeted keywords and content focus should be centered on this concept.

2. Go Beyond Traditional Keyword Tools

I love keyword research tools. There is no doubt they streamline the process of finding some great words and phrases, especially the tools that provide suggested or related terms that help us build our lists. Don’t forget about the not-so-obvious tools, though.

Demographics Pro is designed to give you detailed insights into social media audiences, which in turn gives you a sense of who might be searching for your brand or products. You can see what they’re interested in and what they might be looking for. It puts you on the right track to targeting words your customers are using versus words your company believes people are using.

You can glean similar data about your prospective customers by using a free tool, Social Searcher. It’s not hard to use — all you have to do is input your keyword(s), select the source and choose the post type. You can see recent posts, users, sentiment and even related hashtags/words, as reflected in the following Social Searcher report:

social searcher screen shot

If you are struggling with your keywords, another great tool to try is Seed Keywords. This tool makes it possible to create a search scenario that you can then send to your friends. It is especially useful if you are in a niche industry and it is hard to find keywords.

Once you have created the search scenario, you get a link that you can send to people. The words they use to search are then collected and available to you. These words are all possible keywords.

seed keywords screen shot

3. Dig into Intent

Once I get a feel for some of the keywords I want to target, it is time to take it a step further. I want to know what type of content is ranking for those keywords, which gives me an idea of what Google, and the searchers, believe the intent to be.

For the sake of providing a simple example (there are many other types of intent that occur during the buyer’s journey), let’s focus on two main categories of intent: buy and know.


Let’s say I’m targeting the term “fair trade coffee:”

Google search result page

Based on what is in the results, Google believes the searcher’s intent could either be to purchase fair trade coffee or to learn more about it. In this case, the page I am trying to optimize can be targeted toward either intent.

Here’s another example:

Google search result page

In this scenario, if I was targeting the keyword, “safe weed removal,” I would create and/or optimize a page that provides information, or in other words, satisfies the “know” intent.

There are many tools that can help you determine what pages are ranking for your targeted keywords, including SEOToolSet, SEMRush, and Ahrefs. You would simply click through them to determine the intent of the pages.
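If you have a long keyword list, a crude first pass can bucket terms by obvious modifier words before you manually inspect the SERPs as described above. This heuristic is a simplification of, not a substitute for, checking what actually ranks, and the modifier lists below are purely illustrative.

```python
# Crude modifier-based intent guess; ambiguous terms still need a SERP check.
BUY_MODIFIERS = {"buy", "price", "cheap", "deal", "coupon", "shop", "order"}
KNOW_MODIFIERS = {"how", "what", "why", "guide", "tutorial", "vs"}

def guess_intent(keyword):
    words = set(keyword.lower().split())
    if words & BUY_MODIFIERS:
        return "buy"
    if words & KNOW_MODIFIERS:
        return "know"
    return "ambiguous - check the SERP manually"

for kw in ["buy fair trade coffee", "what is fair trade coffee", "safe weed removal"]:
    print(kw, "->", guess_intent(kw))
```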

4. Go from Keywords to Questions

People search questions. That’s not newsworthy, but we should be capitalizing on all of the opportunities to answer those questions. Therefore, don’t ever forget about the long-tail keyword.

Some of my favorite tools to assist in finding questions are Answer the Public, the new Question Analyzer by BuzzSumo, and FaqFox.

Answer The Public uses autosuggest technology to present the common questions and phrases associated with your keywords. It generates a visualization of data that can help you get a better feel for the topics being searched.

With this tool, you get a list of questions, not to mention other data that isn’t depicted below:

Answer the public chart
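The same autosuggest idea can also be scripted. The sketch below prepends question words to a seed keyword and collects suggestions from Google’s unofficial autosuggest endpoint; that endpoint is undocumented, may change or be rate-limited, and is used here purely as an illustration rather than a supported API.

```python
# Harvest question-style keyword ideas from an unofficial autosuggest endpoint.
import json
import urllib.parse
import urllib.request

QUESTION_PREFIXES = ["how", "what", "why", "where", "can", "is"]

def autosuggest(query):
    url = ("https://suggestqueries.google.com/complete/search"
           "?client=firefox&q=" + urllib.parse.quote(query))
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.loads(resp.read().decode("utf-8", errors="replace"))
    return data[1]  # second element holds the suggestion list

def question_ideas(seed_keyword):
    ideas = set()
    for prefix in QUESTION_PREFIXES:
        ideas.update(autosuggest(f"{prefix} {seed_keyword}"))
    return sorted(ideas)

# print(question_ideas("iced coffee"))
```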

The Question Analyzer by BuzzSumo locates the most popular questions that are asked across countless forums and websites, including Amazon, Reddit, and Quora. If I want to know what people ask about “coffee machines,” I can get that information:


question analyzer screen shot

FaqFox will also provide you with questions related to your keywords using such sites as Quora, Reddit, and Topix.

For example, if I want to target people searching for “iced coffee,” I might consider creating and optimizing content based on the following questions:

faq fox screen shot

Final Thoughts

There are constantly new techniques and tools to make our jobs easier. Your main focus should be on how to get customers to your website, which is done by knowing how to draw them in with the right keywords, questions, and content.

 

Source: This article was published on searchenginejournal.com by Mindy Weinstein.

Categorized in Online Research

When Congress voted in March to reverse rules intended to protect Internet users’ privacy, many people began looking for ways to keep their online activity private. One of the most popular and effective is Tor, a software system millions of people use to protect their anonymity online.

But even Tor has weaknesses, and in a new paper, researchers at Princeton University recommend steps to combat certain types of Tor’s vulnerabilities.

Tor was designed in the early 2000s to make it more difficult to track what people are doing online by routing their traffic through a series of “proxy” servers before it reaches its final destination. This makes it difficult to track Tor users because their connections to a particular server first pass through intermediate Tor servers called relays. But while Tor can be a powerful tool to help protect users’ privacy and anonymity online, it is not perfect.

In earlier work, a research group led by Prateek Mittal, an assistant professor of electrical engineering, identified different ways that the Tor network can be compromised, as well as ways to make Tor more resilient to those types of attacks. Many of their latest findings on how to mitigate Tor vulnerabilities are detailed in a paper titled “Counter-RAPTOR: Safeguarding Tor Against Active Routing Attacks,” presented at the IEEE Symposium on Security and Privacy in San Jose, California, in May.

The paper is written by Mittal, Ph.D. students Yixin Sun and Anne Edmundson, and Nick Feamster, professor of computer science, and Mung Chiang, the Arthur LeGrand Doty Professor of Electrical Engineering. Support for the project was provided in part by the National Science Foundation, the Open Technology Fund and the U.S. Defense Department.

The research builds on earlier work done by some of the authors identifying a method of attacking Tor called “RAPTOR” (short for Routing Attacks on Privacy in TOR). In that work, Mittal and his collaborators demonstrated methods under which adversaries could use attacks at the network level to identify Tor users.

“As the internet gets bigger and more dynamic, more organizations have the ability to observe users’ traffic,” said Sun, a graduate student in computer science. “We wanted to understand possible ways that these organizations could identify users and provide Tor with ways to defend itself against these attacks as a way to help preserve online privacy.”

Mittal said the vulnerability emerges from the fact that there are big companies that control large parts of the internet and forward traffic through their systems. “The idea was, if there’s a network like AT&T or Verizon that can see user traffic coming into and coming out of the Tor network, then they can do statistical analysis on whose traffic it is,” Mittal explained. “We started to think about the potential threats that were posed by these entities and the new attacks — the RAPTOR attacks — that these entities could use to gain visibility into Tor.”

Even though a Tor user’s traffic is routed through proxy servers, every user’s traffic patterns are distinctive in terms of the size and sequence of the data packets they’re sending online. So if an internet service provider sees similar-looking traffic streams entering the Tor network and leaving the Tor network after being routed through proxy servers, the provider may be able to piece together the user’s identity. And internet service providers are often able to manipulate how traffic on the internet is routed, so they can observe particular streams of traffic, making Tor more vulnerable to this kind of attack.
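As a toy illustration of the statistical idea (not the RAPTOR attack itself), the sketch below checks whether the packet-size sequence of a stream entering the Tor network correlates strongly with one leaving it. Real attacks use far richer features, including timing, and the data and threshold here are invented for illustration.

```python
# Toy traffic-correlation check over aligned packet sizes (Python 3.10+).
from statistics import correlation

def streams_look_linked(entry_sizes, exit_sizes, threshold=0.9):
    n = min(len(entry_sizes), len(exit_sizes))
    return correlation(entry_sizes[:n], exit_sizes[:n]) > threshold

entry = [512, 1500, 1500, 300, 1500, 600]
exit_ = [514, 1498, 1500, 305, 1497, 610]   # same pattern with slight jitter
print(streams_look_linked(entry, exit_))    # True under this toy threshold
```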

These types of attacks are important because there is a lot of interest in being able to break the anonymity Tor provides. “There is a slide from an NSA (the U.S. National Security Agency) presentation that Edward Snowden leaked that outlines their attempts at breaking the privacy of the Tor network,” Mittal pointed out. “The NSA wasn’t successful, but it shows that they tried. And that was the starting point for this project because when we looked at those documents we thought, with these types of capabilities, surely they can do better.”

In their latest paper, the researchers recommend steps that Tor can take to better protect its users from RAPTOR-type attacks. First, they provide a way to measure internet service providers’ susceptibility to these attacks. (This depends on the structure of the providers’ networks.) The researchers then use those measurements to develop an algorithm that selects how a Tor user’s traffic will be routed through proxy servers depending on the servers’ vulnerability to attack. Currently, Tor proxy servers are randomly selected, though some attention is given to making sure that no servers are overloaded with traffic. In their paper, the researchers propose a way to select Tor proxy servers that take into consideration vulnerability to outside attack. When the researchers implemented this algorithm, they found that it reduced the risk of a successful network-level attack by 36 percent.
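To make the selection idea concrete, here is a toy sketch of vulnerability-aware relay selection. It is not the Counter-RAPTOR algorithm from the paper: the relay names and resilience scores are hypothetical, and the sketch simply blends a resilience weight with uniform random choice.

```python
# Toy vulnerability-aware relay selection (hypothetical relays and scores).
import random

RELAYS = {
    "relayA": 0.9,  # resilience score in [0, 1]; higher = harder to attack
    "relayB": 0.4,
    "relayC": 0.7,
}

def pick_relay(relays, alpha=0.5):
    """Blend resilience with uniform selection; alpha=1 is resilience-only."""
    names = list(relays)
    weights = [alpha * relays[n] + (1 - alpha) / len(names) for n in names]
    return random.choices(names, weights=weights, k=1)[0]

print(pick_relay(RELAYS))
```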

The researchers also built a network-monitoring system to check network traffic to uncover manipulation that could indicate attacks on Tor. When they simulated such attacks themselves, the researchers found that their system was able to identify the attacks with very low false positive rates.

Roger Dingledine, president and research director of the Tor Project, expressed interest in implementing the network monitoring approach for Tor. “We could use that right now,” he said, adding that implementing the proposed changes to how proxy servers are selected might be more complicated.

“Research along these lines is extremely valuable for making sure Tor can keep real users safe,” Dingledine said. “Our best chance at keeping Tor safe is for researchers and developers all around the world to team up and all work in the open to build on each other’s progress.”

Mittal and his collaborators also hope that their findings of potential vulnerabilities will ultimately serve to strengthen Tor’s security.

“Tor is amongst the best tools for anonymous communications,” Mittal said. “Making Tor more robust directly serves to strengthen individual liberty and freedom of expression in online communications.”

Source: This article was published on princeton.edu by Josephine Wolff.

Categorized in Internet Privacy

Over the past half-decade I’ve written extensively about web archiving, including why we need to understand what’s in our massive archives of the web, whether our archives are failing to capture the modern and social web, the need for archives to modernize their technology infrastructures and, perhaps most intriguingly for the world of “big data,” how archives can make their petabytes of holdings available for research. What might it look like if the world’s web archives opened up their collections for academic research, making hundreds of billions of web objects totaling tens of petabytes and stretching back to the founding of the modern web available as a massive shared corpus to power the modern data mining revolution, from studies of the evolution of the web to powering the vast training corpuses required to build today’s cutting edge neural networks?

When it comes to crawling the open web to build large corpuses for data mining, universities in the US and Canada have largely adopted a hands-off approach, exempting most work from ethical review, granting permission to ignore terms of use or copyright restrictions, and waiving traditional policies on data management and replication on the grounds that material harvested from the open web is publicly accessible information and that its copyright owners, by virtue of making it available on the web without password protection, encourage its access and use.

On the other hand, the world’s non-profit and governmental web archives, which collectively hold tens of petabytes of archived content crawled from the open web stretching back 20+ years, have as a whole largely resisted opening their collections to bulk academic research. Many provide no access at all to their collections, some provide access only on a case-by-case basis, and others provide access to a single page at a time, with no facilities for bulk exporting large portions of their holdings or even analyzing them in situ.

While some archives have cited technical limitations in making their content more accessible, the most common argument against offering bulk data mining access revolves around copyright law and concern that by boxing up gigabytes, terabytes or even petabytes of web content and redistributing it to researchers, web archives could potentially be viewed as “redistributing” copyrighted content. Given the growing interest among large content holders in licensing their material for precisely such bulk data mining efforts, some archives have expressed concern that traditional application of “fair use” doctrine in potentially permitting such data mining access may be gradually eroding.

Thus, paradoxically, research universities have largely adopted the stance that researchers are free to crawl the web and bulk download vast quantities of content to use in their data mining research, while web archives as a whole have adopted the stance that they cannot make their holdings available for data mining because they would, in their view, be “redistributing” the content they downloaded to third parties to use for data mining.

One large web archive has bucked this trend and stood alone among its peers: Common Crawl. Similar to other large web archiving initiatives like the Internet Archive, Common Crawl conducts regular web wide crawls of the open web and preserves all of the content it downloads in the standard WARC file format. Unlike many other archives, it focuses primarily on preserving HTML web pages and does not archive images, videos, JavaScript files, CSS stylesheets, etc. Its goal is not to preserve the exact look and feel of a website on a given snapshot in time, but rather to collect a vast cross section of HTML web pages from across the web in a single place to enable large-scale data mining at web scale.

Yet, what makes Common Crawl so unique is that it makes everything it crawls freely available for download for research. Each month it conducts an open web crawl, boxes up all of the HTML pages it downloads and makes a set of WARC files and a few derivative file formats available for download.

Its most recent crawl, covering August 2017, contains more than 3.28 billion pages totaling 280TiB, while the previous month’s crawl contains 3.16 billion pages and 260TiB of content. The total collection thus totals tens of billions of pages dating back years and totaling more than a petabyte, with all of it instantly available for download to support an incredible diversity of web research.

Of course, without the images, CSS stylesheets, JavaScript files and other non-HTML content saved by preservation-focused web archives like the Internet Archive, this vast compilation of web pages cannot be used to reproduce a page’s appearance as it stood on a given point in time. Instead, it is primarily useful for large-scale data mining research, exploring questions like the linking structure of the web or analyzing the textual content of pages, rather than acting as a historical replay service.

The project excludes sites which have robots.txt exclusion policies, following the historical policy of many other web archives, though it is worth noting that the Internet Archive earlier this year began slowly phasing out its reliance on such files due to their detrimental effect on preservation completeness. Common Crawl also allows sites to request removal from their index. Other than these cases, Common Crawl attempts to crawl as much of the remaining web as possible, aiming for a representative sample of the open web.
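Honoring robots.txt in your own research crawler is straightforward with Python's standard library; this minimal example checks a single URL, and the user-agent string is a placeholder.

```python
# Check a site's robots.txt before fetching a page, in the spirit of the
# opt-out crawling policy described above.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def allowed_to_crawl(url, user_agent="MyResearchCrawler"):
    parts = urlsplit(url)
    robots = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()
    return robots.can_fetch(user_agent, url)

# print(allowed_to_crawl("https://example.com/some/page.html"))
```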

Moreover, Common Crawl has made its data publicly available for more than half a decade and has become a staple of large academic studies of the web with high visibility in the research community, suggesting that its approach to copyright compliance and research access appears to be working for it.

Yet, beyond its summary and full terms of use documents, the project has published little about how it views its work fitting into US and international standards on copyright and fair use, so I reached out to Sara Crouse, Director of Common Crawl, to ask how the project approaches copyright and fair use and what advice they might have for other web archives considering broadening access to their holdings for academic big data research.

Ms. Crouse noted the risk-averse nature of the web archiving community as a whole (historically many adhered, and still adhere, to a strict “opt in” policy requiring prior approval before crawling a site) and the unwillingness of many archives to modernize their thinking on copyright and to engage more closely with the legal community in ways that could help them expand fair use horizons. In particular, she noted “since we [in the US] are beholden to the Copyright Act, while living in a digital age, many well-intentioned organizations devoted to web science, archiving, and information provision may benefit from a stronger understanding of how copyright is interpreted in present day, and its hard boundaries” and that “many talented legal advisers and groups are interested in the precedent-setting nature of this topic; some are willing to work Pro Bono.”

Given that US universities as a whole have moved aggressively towards this idea of expanding the boundaries of fair use and permitting opt-out bulk crawling of the web to compile research datasets, Common Crawl seems to be in good company when it comes to interpreting fair use for the digital age and modern views on utilizing the web for research.

Returning to the difference between Common Crawl’s datasets and traditional preservation-focused web archiving, Ms. Crouse emphasized that they capture only HTML pages and exclude multimedia content like images, video and other dynamic content.

She noted that a key aspect of their approach to fair use is that web pages are intended for consumption by human beings one at a time using a web browser, while Common Crawl concatenates billions of pages together in the specialized WARC file format designed for machine data mining. Specifically, “Common Crawl does not offer separate/individual web pages for easy consumption. The three data formats that are provided include text, metadata, and raw data, and the data is concatenated” and “the format of the output is not a downloaded web page. The output is in WARC file format which contains the components of a page that are beneficial to machine-level analysis and make for space-efficient archiving (essentially: header, text, and some metadata).”

In the eyes of Common Crawl, the use of specialized archival-oriented file formats like WARC (which is the format of choice of most web archives) limit the content’s use to transformative purposes like data mining and, combined with the lack of capture of styling, image and other visual content, renders the captured pages unsuitable to human browsing, transforming them from their originally intended purpose of human consumption.

As Ms. Crouse put it, “this is big data intended for machine learning/readability. Further, our intention for its use is for public benefit i.e. to encourage research and innovation, not direct consumption.” She noted that “from the layperson’s perspective, it is not at all trivial at present to extract a specific website’s content (that is, text) from a Common Crawl dataset. This task generally requires one to know how to install and run a Hadoop cluster, among other things. This is not structured data. Further it is likely that not all pages of that website will be included (depending on the parameters for depth set for the specific crawl).” This means that “the bulk of [Common Crawl’s] users are from the noncommercial, educational, and research sectors. At a higher level, it’s important to note that we provide a broad and representative sample of the web, in the form of web crawl data, each month. No one really knows how big the web is, and at present, we limit our monthly data publication to approximately 3 billion pages.”
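For readers curious what that machine-level access looks like in practice, here is a minimal sketch that iterates over a single downloaded WARC file with the open-source warcio package and yields the raw HTML of response records matching a domain. The file name is a placeholder; real Common Crawl WARC files are listed in each crawl's published path manifests.

```python
# Iterate over a (gzipped) WARC file and yield raw HTML for a given domain.
# Requires the `warcio` package (pip install warcio).
from warcio.archiveiterator import ArchiveIterator

def iter_html_records(warc_path, domain):
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            if url and domain in url:
                yield url, record.content_stream().read()

# for url, html in iter_html_records("CC-MAIN-sample.warc.gz", "example.com"):
#     print(url, len(html))
```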

Of course, given that content owners are increasingly looking to bulk data mining access licensing as a revenue stream, this raises the concern that even if web archives are transforming content designed for human consumption into machine friendly streams designed for data mining, such transformation may conflict with copyright holders’ own bulk licensing ambitions. For example, many of the large content licensors like LexisNexis, Factiva and Bloomberg all offer licensed commercial bulk feeds designed to support data mining access that pay royalty fees to content owners for their material that is used.

Common Crawl believes it addresses this through the fact that its archive represents only a sample of each website crawled, rather than striving for 100% coverage. Specifically, Ms. Crouse noted that “at present, [crawls are] in monthly increments that are discontinuous month-to-month. We do only what is reasonable, necessary, and economical to achieve a representative sample. For instance, we limit the number of pages crawled from any given domain so, for large content owners, it is highly probable that their content, if included in a certain month’s crawl data, is not wholly represented and thus not ideal for mining for comprehensive results … if the content owner is not a large site, or in a niche market, their URL is less likely to be included in the seeds in the frontier, and, since we limit depth (# of links followed) for the sake of both economy and broader representative web coverage, 'niche' content may not even appear in a given month’s dataset.”

To put it another way, Common Crawl’s mission is to create a “representative sample” of the web at large by crawling a sampling of pages and limiting the number of pages it captures from each site. Thus, its capture of any given site will represent a discontinuous sampling of pages that can change from month to month. A researcher wishing to analyze a single web site in its entirety would therefore not be able to turn to Common Crawl and would instead have to conduct their own crawl of the site or turn to a commercial aggregator that partners with the content holder to license the complete contents of the site.
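The per-domain page cap is the key mechanism behind that sampling. The toy sketch below is not Common Crawl’s actual crawler code; the cap value and URLs are invented purely to show why capping captures per domain yields a sample of a large site rather than a mirror of it.

    # Toy illustration of per-domain capping in a crawl frontier.
    # Not Common Crawl's implementation; the cap and URLs are invented examples.
    from collections import defaultdict
    from urllib.parse import urlparse

    MAX_PAGES_PER_DOMAIN = 3  # hypothetical cap; real crawls use far larger limits

    def sample_frontier(candidate_urls):
        """Keep at most MAX_PAGES_PER_DOMAIN URLs from any one domain."""
        kept, counts = [], defaultdict(int)
        for url in candidate_urls:
            domain = urlparse(url).netloc
            if counts[domain] < MAX_PAGES_PER_DOMAIN:
                counts[domain] += 1
                kept.append(url)
        return kept

    frontier = ["https://bigsite.example/page%d" % i for i in range(10)]
    frontier.append("https://small-site.example/")
    print(sample_frontier(frontier))  # bigsite.example is truncated to 3 pages

Under a scheme like this, a large publisher’s site is never wholly represented in any one month’s data, which is the property Ms. Crouse points to.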

In Common Crawl’s view this is a critical distinction that sets it apart from both traditional web archiving and the commercial content aggregators that generate data mining revenue for content owners. By focusing on creating a “representative sample” of the web at large, rather than attempting to capture a single site in its entirety (and in fact ensuring that it does not include more than a certain number of pages per site), the crawl limits itself to macro-level research examining web-scale questions. Such “web scale” questions cannot be answered through any other existing open dataset, and these design features ensure that more traditional uses, like data mining the entirety of a single site (which might be viewed as redistributing that site or competing with its owner’s ability to license its content for data mining), are simply not possible.

Thus, to summarize, Common Crawl resembles other web archives in its workflow of crawling the web and archiving what it finds, but sets itself apart by focusing on creating a representative sample of HTML pages from across the entire web, rather than trying to preserve the entirety of a specific set of websites with an eye toward visual and functional preservation. Even when a given page is contained in Common Crawl’s archives, the technical sophistication and effort required to extract it, and the lack of supporting CSS, JavaScript and image/video files, render the capture useless for the kind of non-technical browser-based access and interaction such pages are designed for.

Of course, copyright, and what counts as “fair use” within it, is a notoriously complex, contradictory, contested and ever-changing field, and only time will tell whether Common Crawl’s interpretation of fair use holds up and becomes a standard that other web archives follow. At the very least, however, Common Crawl presents a powerful and intriguing model for how web-scale data can power open data research and offers traditional web archives a set of workflows, rationales and precedents to examine that are fully aligned with those of the academic community. Given its popularity and continued growth over the past decade, it is clear that Common Crawl’s model is working and that many of its underlying approaches are highly applicable to the broader web archiving community.

Putting this all together, today’s web archives preserve for future generations the dawn of our digital society, but lock those tens of petabytes of documentary holdings away in dark archives or permit access only a page at a time. Common Crawl’s success, and the projects that have been built upon its data, stand testament to the incredible possibilities when such archives are unlocked and made available to the research community. Perhaps as the web archiving community modernizes and “open big data” continues to reshape how academic research is conducted, more web archives will follow Common Crawl’s example and explore ways of shaping the future of fair use and gradually opening their doors to research, all while ensuring that copyright and the rights of content holders are respected.

Source: This article was published on forbes.com by Kalev Leetaru.

Categorized in Online Research

Using the internet makes people happier, especially seniors and those with health problems that limit their ability to fully take part in social life, says a study in Computers in Human Behavior.

The issue: A generation after the internet began appearing widely in homes and offices, it is not unusual to hear people ask whether near-constant access to the web has made us happier. Research on the association between internet use and happiness has been ambiguous. Some studies have found that connectivity empowers people. A 2014 study published in the journal Computers in Human Behavior notes that excessive time spent online can leave people socially isolated, and that compulsive online behavior can have a negative impact on mental health.

A new paper examines whether quality of life in the golden years is affected by the now-ubiquitous internet.

An academic study worth reading: “Life Satisfaction in the Internet Age – Changes in the Past Decade,” published in Computers in Human Behavior, 2016.

Study summary: Sabina Lissitsa and Svetlana Chachashvili-Bolotin, two researchers in Israel, investigate how internet adoption impacts life satisfaction among Israelis over age 65, compared with working-age adults (aged 20-64). They use annual, repeated cross-sectional survey data collected by Israel’s statistics agency from 2003 to 2012 – totaling 75,523 respondents.

They define life satisfaction broadly, based on perceptions of one’s health, job, education, empowerment, relationships and place in society, and ask respondents to rate their satisfaction on a four-point scale. They also measure specific types of internet use, for example email, social media and shopping.

Finally, Lissitsa and Chachashvili-Bolotin also analyze demographic data, information on respondents’ health, how much they interact with friends, and how often, if at all, they feel lonely.

Findings:

  • Internet users report higher levels of life satisfaction than non-users. This effect:
    • Is stronger among people with health problems.
    • Weakens over time (possibly because internet saturation is spreading, making it harder to compare those with and those without internet access).
    • Weakens as incomes rise.
  • Internet access among seniors rose from 8 to 34 percent between 2003 and 2012; among the younger group, access increased from 44 to 78 percent. The gap therefore widened from 36 to 44 percentage points, meaning the digital divide grew during the study period.
  • Seniors who use the internet report higher levels of life satisfaction than seniors who do not.
  • “Internet adoption promotes life satisfaction in weaker social groups and can serve as a channel for increasing life satisfaction.”
  • Using email and shopping online are associated with an increase in life satisfaction.
  • Using social media and playing games have no association with life satisfaction. The authors speculate that this is because some people grow addicted and abuse these internet applications.
  • The ability to use the internet to seek information has an insignificant impact on happiness for the total sample. But it has a positive association for users with health problems — possibly because the internet increases their ability to interact with others.
  • The findings can be broadly generalized to other developed countries.

Helpful resources:

The Organisation for Economic Co-operation and Development (OECD) publishes key data on the global internet economy.

The International Telecommunication Union (ITU), a United Nations agency, publishes the ICT Development Index to compare countries’ adoption of internet and communications technologies.

The Digital Economy and Society Index measures European Union members’ progress toward closing the digital divides in their societies.

Other research:

A 2015 article by the same authors examines rates of internet adoption by senior citizens.

A 2014 study looks at how compulsive online behavior is negatively associated with life satisfaction. Similarly, this 2014 article focuses specifically on the compulsive use of Facebook.

A 2014 study tests the association between happiness and online connections.

Journalist’s Resource has examined the cost of aging populations on national budgets around the world.

Categorized in Online Research

The internet is humongous. Finding what you need means selecting from among millions, sometimes trillions, of search results, and no one can say for sure that you have found the right information. Is the information reliable and accurate? Or is there a better, more relevant set of information still out there? The Internet keeps growing every single minute, the clutter makes it ever harder to keep up, and more valuable information keeps getting buried underneath it. Unfortunately, the larger the internet grows, the harder it gets to find what you need.

Think of search engines and their browsers as a set of information search tools that fetch what you need from the Internet. But a tool is only as good as the job it gets done. Google, Bing, Yahoo and the like are generic tools that perform a “fit all search types” job: the results throw tons of web pages at you, which makes selection much harder and accuracy lower.


A simple solution to dealing with too much information on the Internet is out there, if you care to pay attention: here is a List of Over 1500 Search Engines and Directories to cut your research time in half.


There exists a whole world of Internet search tools that are job specific and find the information you need through filtered, precision search. They subscribe to the same World Wide Web and look through the same web pages as the main search engines, only better. These search tools split into Specialized Search Engines and Online Directories.

Specialized Search Engines are built to drill down to a more precise type of information. They return filtered, less cluttered results compared with leading search engines such as Google, Bing and Yahoo. What makes them unique is their built-in ability to use powerful customized filters, and sometimes their own databases, to deliver the type of information you need in specific file formats.

Advanced Research Method

We will classify Specialized Search Engines into Meta-crawlers (or Meta-Search Engines) and Specialized Content Search Engines.

Unlike conventional search engines, Meta-crawlers do not crawl the web themselves and do not build their own web page indexes; instead, they aggregate search snippets from several mainstream search engines (Google, Bing, Yahoo and the like) at once. They have no proprietary search technology and none of the large, expensive infrastructure the main search engines maintain. The Meta-crawler aggregates the results and displays them on its own search result pages. In short, Meta-crawlers usually concentrate on front-end features such as the user interface and novel ways of displaying information. They generate revenue by displaying ads and give the user options to search for images, audio, video, news and more, simulating a typical search browsing experience.
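To illustrate the aggregation pattern described above (not any particular meta-crawler’s implementation), here is a minimal sketch in Python. The engine endpoints, parameter names and response format are hypothetical placeholders; real engines require their own APIs, keys and terms of service.

    # Toy meta-search aggregator: fan a query out to several engines, then
    # merge and de-duplicate the results. The endpoints and JSON shape are
    # invented placeholders -- real engines require their own APIs and keys.
    from concurrent.futures import ThreadPoolExecutor
    from urllib.parse import urlencode
    from urllib.request import urlopen
    import json

    # Hypothetical endpoints returning {"results": [{"url": ..., "title": ...}]}
    ENGINES = [
        "https://searchapi-a.example/search?",
        "https://searchapi-b.example/search?",
    ]

    def query_engine(base_url, query):
        """Fetch one engine's results; return an empty list on any failure."""
        try:
            with urlopen(base_url + urlencode({"q": query}), timeout=5) as resp:
                return json.load(resp).get("results", [])
        except Exception:
            return []

    def meta_search(query):
        """Collect results from every engine and rank URLs several engines agree on."""
        with ThreadPoolExecutor() as pool:
            result_lists = pool.map(lambda u: query_engine(u, query), ENGINES)
            merged = {}
            for hits in result_lists:
                for hit in hits:
                    entry = merged.setdefault(hit["url"], {"title": hit["title"], "votes": 0})
                    entry["votes"] += 1
        return sorted(merged.items(), key=lambda kv: -kv[1]["votes"])

    for url, info in meta_search("internet research techniques")[:10]:
        print(info["votes"], info["title"], url)

Ranking results that multiple engines agree on is one simple way such services add value on top of the underlying indexes, which matches the front-end emphasis described above.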

Some well-known Meta-Crawlers, along with other search tools worth exploring:

  • Ixquick - A meta-search engine with options for choosing which sources the results are based on. It respects information privacy, and results can be opened in an Ixquick proxy window.
  • Google - Considered the first stop by many Web searchers. Has a large index and results are known for their high relevancy. Includes ability to search for images, and products, among other features.
  • Bing - General web search engine from Microsoft.
  • Google Scholar - One of Google's specialized search tools, Google Scholar focuses primarily on information from scholarly and peer-reviewed sources. By using the Scholar Preferences page, you can link back to URI's subscriptions for access to many otherwise fee-based articles.
  • DuckDuckGo - A general search engine with a focus on user privacy.
  • Yahoo! - A combination search engine and human-compiled directory, Yahoo also allows you to search for images, Yellow Page listings, and products.
  • Internet Public Library - A collection of carefully selected and arranged Internet reference sources, online texts, online periodicals, and library-related links. Includes IPL original resources such as Associations on the Net, the Online Literary Criticism Collection, POTUS: Presidents of the United States, and Stately Knowledge (facts about the states).
  • URI Libraries’ Internet Resources - This is a collection of links collected and maintained by the URI librarians. It is arranged by subject, like our online databases, and provides access to free internet resources to enhance your learning and research options.
  • Carrot Search - A meta-search engine based on a variety of search engines. Has clickable topic links and diagrams to narrow down search results.
  • iBoogie  -  A meta-search engine with customizable search type tabs. Search rankings have an emphasis on clusters.
  • iSeek  – The meta-search results are from a compilation of authoritative resources from university, government, and established non-commercial providers.
  • PDF Search Engine – Searches for documents with extensions such as .doc, .pdf, .chm, .rtf, and .txt.

The Specialized Content Search Engine focuses on a specific segment of online content, which is why these are also called Topical (Subject-Specific) Search Engines. The content area may be based on topicality, media, content type or genre; beyond that, the source of the material and the function the engine performs in transforming it are what define its specialty.

We can go a bit further and split these into three groups.

Information Contribution – The information source can be data collected by Public Contribution Resource Engines, drawing on social media contributions and reference platforms such as wikis; examples are YouTube, Vimeo, LinkedIn, Facebook and Reddit. The other type is the Private Contribution Resource Engine, whose searchable database is created internally by the search engine vendor; examples are Netflix (movies), Reuters (news content), TinEye (image repository) and LexisNexis (legal information).

Specialized Function - These are search engines programmed to perform a type of service that is proprietary and unique. They execute tasks that involve collecting web content and working on it with algorithms of their own, adding value to the results they produce.

Examples include the Internet Archive’s Wayback Machine, which maintains records of website pages that are no longer available online as a historical record; Alexa Analytics, which performs web analytics, measures traffic on websites and provides performance metrics; and Wolfram Alpha, which is more than a search engine: it gives you access to the world’s facts and data and calculates answers across a range of topics.

Information Category (Subject-Specific Material) - Here the search is subject specific, based on the information retrieved, typically through special arrangements with outside sources on a consistent basis. Examples are found under broader headings such as:

  • Yellow Pages and phone directories
  • People search sites
  • Government databases and archives
  • Public libraries
  • News bureaus, online journals, and magazines
  • International organizations

A web directory, or link directory, is a well-organized catalog on the World Wide Web: a collection of data organized into categories and subcategories. The directory specializes in linking to other web sites and categorizing those links. A web directory is not a search engine, and it does not return numerous web pages from a keyword search; instead, it shows a list of website links by category and subcategory. Most directory entries are not found by web crawlers but are selected by human editors. The categorization covers the whole website rather than a single page or a set of keywords, and websites are often limited to inclusion in only a few categories. Web directories often allow site owners to submit their site for listing and have editors review submissions for suitability.
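As a rough illustration of the idea (not any real directory’s data model), the sketch below represents a directory as a small category tree whose entries point to whole sites rather than individual pages; the categories and sites are made up.

    # Toy model of a web directory: a category tree whose nodes hold whole-site
    # entries curated by editors. Categories and sites below are invented examples.
    from dataclasses import dataclass, field

    @dataclass
    class SiteEntry:
        name: str
        url: str
        description: str

    @dataclass
    class Category:
        name: str
        sites: list = field(default_factory=list)          # whole sites, not pages
        subcategories: dict = field(default_factory=dict)   # name -> Category

        def add_site(self, path, entry):
            """Place a site under a category path such as ['Reference', 'Libraries']."""
            node = self
            for part in path:
                node = node.subcategories.setdefault(part, Category(part))
            node.sites.append(entry)

    root = Category("Top")
    root.add_site(["Reference", "Libraries"],
                  SiteEntry("Example Library", "https://library.example", "A sample entry"))
    print(root.subcategories["Reference"].subcategories["Libraries"].sites[0].url)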

The directories fall into two broad categories: Public Directories, which do not require user registration or a fee, and Private Directories, which require online registration and may or may not charge a fee for inclusion in their listings.

Public Directories cover General Topics or are Subject-Based (Domain-Specific).

The General Topics Directory carries popular reference subjects, interests, content domains and their subcategories. Examples include DMOZ (the largest directory of the Web, whose open content is mirrored at many sites and powered the Google Directory until July 20, 2011), the A1 Web Directory Organization (a general web directory that lists quality sites under traditional categories and relevant subcategories), and the PHPLink Directory (phpLD, released to the public as a free directory script in 2006 and still offered as a free download).

The Subject-Based or Domain-Specific Public Directories focus on a particular subject or topic. Among the better known are Hot Frog (a commercial web directory providing websites categorized topically and regionally), the Librarians Index to the Internet (a directory listing program from the Library of California) and OpenDOAR (an authoritative directory of academic open-access repositories).

The Private Directories require online registration and may charge a fee for inclusion in their listings.

Examples of Paid Commercial Versions:

  • Starting Point Directory - $99/Yr
  • Yelp Business Directory - $100/Yr
  • Manta.com - $299/Yr

Other directories require registration as a member, employee, student or subscriber. Examples of these types include:

  • Government Employees Websites (Government Secure Portals)
  • Library Networks (Private, Public and Local Libraries)
  • Bureaus, Public Records Access, Legal Documents, Courts Data, Medical Records

The Association of Internet Research Specialists (AIRS) has compiled a comprehensive list it calls “Internet Information Resources.” There you will find an extensive collection of search engines and interesting information resources for avid Internet research enthusiasts, especially those who seek serious information without the hassle of sifting through the many pages of the unfiltered Internet. Alternatively, you can search through Phil Bradley’s website or The Search Engine’s List, which offer interesting links to the many alternatives to typical search engines out there.

 Author: Naveed Manzoor [Toronto, Ontario] 

Categorized in Online Research


