Displaying items by tag: research - AIRS

Scientists have produced a series of papers designed to improve research on conservation and the environment.

A group of researchers, led by the University of Exeter, have contributed to a special issue of the journal Methods in Ecology and Evolution to examine commonly used social science techniques and provide a checklist for scientists to follow.

Traditional conservation biology has been dominated by quantitative data (measured in numbers) but today it frequently relies on qualitative methods such as interviews and focus group discussions.

The aim of the special issue is to help researchers decide which techniques are most appropriate for their study, and improve the "methodological rigour" of these techniques.

"Qualitative techniques are an important part of the curriculum for most undergraduate, graduate and doctoral studies in biodiversity conservation and the environment," said Dr. Nibedita Mukherjee, of the University of Exeter, who coordinated the special issue of the journal.

"Yet the application of these techniques is often flawed or badly reported."

Dr. Mukherjee, of the Centre for Ecology and Conservation of Exeter's Penryn Campus in Cornwall, added: "In putting together this special issue, we urge greater collaboration across the disciplines within conservation, incorporating the rigorous use of qualitative methods.

"We envisage a future in which conservation scientists test, modify and improve these techniques so that they become even more relevant and widely used."

The five papers in the special issue include one which examines the use of interviews as part of research into conservation decision-making.

It found that researchers do not always justify their use of interviews, or report on their use fully enough for readers to make informed judgments.

"While interview-based research might not always be reproducible, we should still leave the reader in no doubt about what was done," said lead author Dr David Rose, formerly of the University of Cambridge but now at the University of East Anglia.

Another paper looked at the use of focus groups. Lead author Tobias Nyumba, from the University of Cambridge, said focus groups are often used but many researchers are "not particularly keen on the process, from planning, execution, analysis to reporting.

"This paper is, therefore, a must read if focus groups must form part of your research toolkit," he said.

A third paper looked at the nominal group technique (NGT).

Lead author Dr. Jean Huge said: "While  conflicts are on the rise worldwide, NGT provides a simple yet systematic approach to prioritise management options and could help reduce conflict."

This could inform the choice of criteria in Multi-Criteria Decision Analysis, as observed by Dr. Blal Adem Esmail in his paper.

Source: This article was published at phys.org

Published in Online Research

Academics say they have been forced to leave the country to pursue their research interests as British universities are accused of blocking studies over fears of a backlash on social media.

As they come under increasing attack from online activists, some of the country’s leading academics have accused universities of putting their reputations before their responsibility to defend academic freedom.

Speaking to The Sunday Telegraph, they claim that university ethics committees are now “drifting into moral vanity” by vetoing research in areas that are seen as “politically incorrect”.

Their comments come amid widespread concern for free speech on campuses, with the Government urging universities to do more to counter the rise of so-called safe spaces and “no-platforming”.

James Caspian, who has been banned by a university from doing transgender research. Credit: Geoff Pugh for The Telegraph

The academics have decided to speak out as James Caspian, one of the country’s leading gender specialists, revealed that he is planning to take Bath Spa University to judicial review over its decision to turn down his research into transgenderism.

A professor who recently left a prestigious Russell Group institution to work in Italy said that while safeguards were needed to ensure research was conducted ethically, some universities now appeared to be “covering their own arses”.

“I’ve certainly heard and known of ethics committees voicing concerns about parts of research that would to most of us seem ridiculous. I think they sometimes go too far.

“In general I’m supportive of ethics committees, but there is room for discussion on their criteria. Attracting a lot of unwanted attention on social media... most researchers would not consider that relevant.

“That’s a matter for the PR office, not an ethics committee.”

Prof Sheila Jeffreys. Credit: The Age/Simon Schluter

Dr. Heather Brunskell-Evans, a fellow of King’s College London who has previously sat on research awarding bodies, claimed that some universities were becoming “authoritarian”.

“Universities project themselves as places of open debate, while at the same time they are very worried about being seen to fall foul of the consensus,” she added.

“They are increasingly managerial and bureaucratic. They are now prioritizing the risk of reputational damage over their duty to uphold freedom of inquiry.”

Dr. Brunskell-Evans said she has encountered resistance when researching the dangers associated with prostitution, adding that many universities had “shut down” any critical analysis of the subject which might offend advocates in favor of legalisation.

Whilst working at the University of Leicester, she claimed that a critical analysis she published of Vanity Fair magazine’s visual representation of the transgendering of Bruce to Caitlyn Jenner had been pulled after complaints were made.

It was later republished after the university’s lawyers were consulted. The University of Leicester was unavailable for comment.

Others said research decisions are increasingly based on how much money could be generated through research grants, meaning “trendy” and “fashionable” subjects were being prioritised over controversial topics.

“The work done by myself and others would not happen today. University now is about only speaking views which attract funding,” said Prof Sheila Jeffreys, a British feminist and former political scientist at the University of Melbourne.

“I was offered the job in Melbourne because they wanted someone specifically to teach this stuff. It would have been difficult to get back [into a British university]. I suspect that even if I wanted to take up a fellowship I would struggle.”

Dr. Werner Kierski, a psychotherapist who has taught at Anglia Ruskin and Middlesex, added: “They [ethics committees] have become hysterical. If it’s not blocking research, it’s putting limits on what researchers can do.

“In one case, I had an ethics committee force my researchers to text me before and after interviewing people, to confirm that they are still alive.

“It’s completely unnecessary and deeply patronising.

“We’ve reached a point where research conducted in other countries will become increasingly dominant. UK research will become insignificant because they [researchers] are so stifled by ethics requirements.”

Bath Spa University caused controversy earlier this year when it emerged that it had declined Mr. Caspian’s research proposal to examine why growing numbers of transgender people were reversing their transition surgery.

After accepting his proposal in 2015, the university later U-turned when Mr. Caspian asked to look for participants on online forums, informing him that his research could provoke “unnecessary offence” and “attacks on social media”.

Jo Johnson, Universities Minister. Credit: Getty

Bath Spa has since offered to refund a third of Mr. Caspian’s fees but has rejected his request for an internal review.

A university spokesman said it would “not be commenting further at this stage”.

Mr. Caspian is now crowdfunding online in order to fight the case and has received almost £6,000 in donations from fellow academics and trans people who support his work.

In a letter sent this week to the universities minister Jo Johnson, Mr. Caspian writes that the “suppression of research on spurious grounds” is a growing problem in Britain.

“I have already heard of academics leaving the UK for countries where they felt they would be more welcomed to carry out their research,” the letter continues.

“I believe that it should be made clear that any infringement of our academic freedom should not be allowed. I would ask you to consider the ramifications should academics continue to be censored in this way.”

Last night, Mr. Johnson said that academic freedom was the “foundation of higher education”, adding that he expected universities to “protect and promote it”.

Under the new Higher Education and Research Act, he said that universities would be expected to champion “the freedom to put forward new ideas and controversial or unpopular opinions”.

A spokesman for Universities UK said that its members had “robust processes” to ensure that all research was conducted appropriately.

“They also recognise that there may be legitimate academic reasons to study matters which may be controversial in nature,” they added.

Source: This article was published at telegraph.co.uk by Harry Yorke

Published in Online Research

In researching high-growth professional services firms, we found that firms that did systematic business research on their target client group grew faster and were more profitable.

Further, those that did more frequent business research (at least quarterly) grew the fastest and were the most profitable. Additional research also confirms that the fastest-growing firms do more research on their target clients.

Think about that for a minute: Faster growth and more profit. Sounds pretty appealing.

The first question is usually around what kind of research to do and how it might help grow your firm. I’ve reflected on the kinds of questions we’ve asked when doing research for our professional services clients and how the process has impacted their strategy and financial results.

There are a number of types of research that your firm can use, including:

  • Brand research
  • Persona research
  • Market research
  • Lost prospect analysis
  • Client satisfaction research
  • Benchmarking research
  • Employee surveys

So those are the types of research, but what are the big questions that you need answers for? We looked across the research we have done on behalf of our clients to isolate the most insightful and impactful areas of inquiry.

The result is this list of the top 10 research questions that can drive firm growth and profitability:

1. Why do your best clients choose your firm?

Notice we are focusing on the best clients, not necessarily the average client. Understanding what they find appealing about your firm can help you find others just like them and turn them into your next new client.

2. What are those same clients trying to avoid?

This is the flip side of the first question and offers a valuable perspective. As a practical matter, avoiding being ruled out during the early rounds of a prospect’s selection process is pretty darned important. This is also important in helping shape your business practices and strategy.

In our research on professional services buyers and sellers, we’ve found that the top circumstances that buyers want to avoid in a service provider are broken promises and a firm that’s indistinguishable from everyone else.

Notice that this chart also shows what sellers (professional services providers) believe buyers want to avoid, and that many sellers misjudge their potential clients’ priorities. Closing this perception gap is one of the ways that research can help a firm grow faster. If you understand how your prospects think, you can do a much better job of turning them into clients.

3. Who are your real competitors?

Most firms aren’t very good at identifying their true competitors. When we ask a firm to list their competitors and ask their clients to do the same, there is often only about a 25% overlap in their lists.

Why? Sometimes, it’s because you know too much about your industry and rule out competitors too easily. At other times, it’s because you are viewing a client’s problems through your filter and overlook completely different categories of solutions that they are considering.

For example, a company facing declining sales could easily consider sales training, new product development, or a new marketing campaign. If you consult on new product development, the other possible solutions are all competitors. In any case, ignorance of true competitors seldom helps you compete.

4. How do potential clients see their greatest challenges?

The answer to this question helps you understand what is on prospective clients’ minds and how they are likely to describe and talk about those issues. The key here is that you may offer services that can be of great benefit to organizations, but they never consider you because they are thinking about their challenges through a different lens.

They may want cost reduction when you are offering process improvement (which, in fact, reduces cost). Someone needs to connect the dots or you will miss the opportunity. This is similar to the dilemma of understanding the full range of competitors described above.

5. What is the real benefit your firm provides?

Sure, you know your services and what they are intended to do for clients. But what do they actually do? Often, firms are surprised to learn the true benefit of their service. What might’ve attracted a client to your firm initially might not be what they end up valuing most when working with you. For example, you might have won the sale based on your good reputation, but after working with you, your client might value your specialized skills and expertise most.

When you understand the true value and benefit of your services, you’re in a position to enhance it or even develop new services with other true benefits.

6. What are emerging trends and challenges?

Where is the market headed? Will it grow or contract? What services might be needed in the future? This is fairly common research fodder in large market-driven industries, but it’s surprisingly rare among professional services firms.

Understanding emerging trends can help you conserve and better target limited marketing dollars. I’ve seen many firms add entire service lines, including new hires and big marketing budgets, based on little more than hunches and anecdotal observations. These decisions should be driven by research and data. Research reduces your risk associated with this type of decision.

7. How strong is your brand?

What is your firm known for? How strong is your reputation? How visible are you in the marketplace? Answers to each of these questions can vary from market to market. Knowing where you stand can not only guide your overall strategy, it can also have a profound impact on your marketing budget. An understanding of your brand’s strengths and weaknesses can help you understand why you are getting traction in one segment and not another.

8. What is the best way to market to your prime target clients?

Wouldn’t it be nice to know where your target clients go to get recommendations and referrals? Wouldn’t it be great if you knew how they want to be marketed to? These are all questions that can be answered through systematic business research. The answers will greatly reduce the level of spending needed to reach your best clients. This is perhaps one of the key reasons that firms that do regular research are more profitable.

9. How should you price your services?

This is often a huge stumbling block for professional services firms. In my experience, most firms overestimate the role price plays in buying decisions. Perhaps it is because firms are told that the reason they don’t win an engagement is the price. It is the easiest reason for a buyer to share when providing feedback.

However, if a firm hires an impartial third party to dig deeper into why it loses competitive bids, it often learns that what appears to be price may really be a perceived level of expertise, lack of attention to detail or an impression of non-responsiveness. We’ve seen firms lose business because of typos in their proposal — while attributing the loss to their fees.

10. How do your current clients really feel about you?

How likely are clients to refer you to others? What would they change about your firm? How long are they likely to remain a client? These are the kinds of questions that can help you fine-tune your procedures and get a more accurate feel for what the future holds. In some cases, we’ve seen clients reveal previously hidden strengths. In others, they have uncovered important vulnerabilities that need attention.

The tricky part here is that clients are rarely eager to tell you the truth directly. They may want to avoid an uncomfortable situation or are worried that they will make matters worse by sharing their true feelings.

Understanding the key questions discussed above can have a positive impact on your firm’s growth and profitability. That is the real power of well-designed and professionally executed business research.

Source: This article was published at accountingweb.com by Lee Frederiksen

Published in Business Research

The Internet Advertising Market report profiles the market's leading vendors based on company profile, business performance, and sales. Vendors covered include Alphabet, Facebook, Baidu, Yahoo! Inc, Microsoft, Alibaba, Tencent, Twitter, AOL (Verizon Communications), eBay, LinkedIn, Amazon, IAC, Soho, and Pandora.

The Global Internet Advertising Market will reach xxx Million USD in 2017, with a CAGR of xx% from 2011-2023. The objective of the Global Internet Advertising Market Research Report 2011-2023 is to offer access to high-value, unique databases of information, such as market projections on the basis of product type, application, and region.

Download Report at www.reportsnreports.com/contacts/r…aspx?name=1169245

On the basis of the end users/applications, this report focuses on the status and outlook for major applications/end users, consumption (sales), market share and growth rate for each application, including

  • Retail
  • Automotive
  • Entertainment
  • Financial Services
  • Telecom
  • Consumer Goods

Online advertising, also called Internet advertising or web advertising, is a form of marketing and advertising that uses the Internet to deliver promotional marketing messages to consumers. It is a marketing strategy that involves the use of the Internet as a medium to obtain website traffic and to target and deliver marketing messages to the right customers.

Online advertising is geared toward defining markets through unique and useful applications.

Leading vendors (key players) in the Internet Advertising industry, including Alphabet, Facebook, Baidu, Yahoo! Inc, Microsoft, Alibaba, Tencent, Twitter, AOL (Verizon Communications), eBay, LinkedIn, Amazon, IAC, Soho, and Pandora, amongst others, have been included based on profile and business performance, to help clients make informed decisions.

The Internet Advertising Market is segmented as follows:

Based on product type, the report describes each major product type's share of the regional market. Product types mentioned as follows:

  • Search Ads
  • Mobile Ads
  • Banner Ads
  • Classified Ads
  • Digital Video Ads

Access report at www.reportsnreports.com/contacts/d…aspx?name=1169245

The report describes the major regional markets by product and application. Regions mentioned as follows:

  • North America (U.S., Canada, Mexico)
  • Europe (Germany, U.K., France, Italy, Russia, Spain, etc.)
  • South America (Brazil, Argentina, etc.)
  • Middle East & Africa (Saudi Arabia, South Africa, etc.)

The main contents of the Internet Advertising Market Research Report include:

Global market size and forecast

Regional market size, production data, and export & import

Key manufacturers (manufacturing sites, capacity, and production, product specifications.)

Average market price by SKU

Major applications

Table of Contents:

1 Internet Advertising Market Overview

Access a Copy of this Research Report at www.reportsnreports.com/purchase.aspx?name=1169245

List of Tables

Table Global Internet Advertising Market and Growth by Type

Table Global Internet Advertising Market and Growth by End-Use / Application

Table Global Internet Advertising Revenue (Million USD) by Vendors (2011-2017)

Table Global Internet Advertising Revenue Share by Vendors (2011-2017)

Table Global Internet Advertising Market Volume (Volume) by Vendors (2011-2017)

Table Global Internet Advertising Market Volume Share by Vendors (2011-2017)

Table Headquarter, Factories & Sales Regions Comparison of Vendors

Table Product List of Vendors

Table Global Internet Advertising Market (Million USD) by Type (2011-2017)

Table Global Internet Advertising Market Share by Type (2011-2017)

Table Global Internet Advertising Market Volume (Volume) by Type (2011-2017)

Table Global Internet Advertising Market Volume Share by Type (2011-2017)

Table Global Internet Advertising Market (Million USD) by End-Use / Application (2011-2017)

Table Global Internet Advertising Market Share by End-Use / Application (2011-2017)

Table Global Internet Advertising Market Volume (Volume) by End-Use / Application (2011-2017)

Table Global Internet Advertising Market Volume Share by End-Use / Application (2011-2017)

List of Figures

Figure Global Internet Advertising Market Size (Million USD) 2012-2022

Figure North America Market Growth 2011-2018

Figure Europe Market Growth 2011-2017

Figure Asia-Pacific Market Growth 2011-2017

Figure South America Market Growth 2011-2017

Figure Middle East & Africa Market Growth 2011-2017

Figure Global Internet Advertising Market (Million USD) and Growth Forecast (2018-2023)

Figure Global Internet Advertising Market Volume (Volume) and Growth Forecast (2018-2023)

Source: This article was published at whatech.com

Published in Market Research

What do real customers search for?

It seems like a straightforward question, but once you start digging into research and data, things become muddled. A word or phrase might be searched for often, yet that fact alone doesn’t mean those are your customers.

While a paid search campaign will give us insight into our “money” keywords — those that convert into customers and/or sales — there are also many other ways to discover what real customers search.

Keyword Evolution

We are in the era where intent-based searches are more important to us than pure volume. As the search engines strive to better understand the user, we have to be just as savvy about it too, meaning we have to know a lot about our prospects and customers.

In addition, we have to consider voice search and how that growth will impact our traffic and ultimately conversions. Most of us are already on this track, but if you are not or want to sharpen your research skills, there are many tools and tactics you can employ.

Below are my go-to tools and techniques that have made the difference between average keyword research and targeted keyword research that leads to interested web visitors.

1. Get to Know the Human(s) You’re Targeting

Knowing the target audience, I mean really knowing them, is something I have preached for years. If you have read any of my past blog posts, you know I’m a broken record.

You should take the extra step to learn the questions customers are asking and how they describe their problems. In marketing, we need to focus on solving a problem.

SEO is marketing. That means our targeted keywords and content focus should be centered on this concept.

2. Go Beyond Traditional Keyword Tools

I love keyword research tools. There is no doubt they streamline the process of finding some great words and phrases, especially the tools that provide suggested or related terms that help us build our lists. Don’t forget about the not-so-obvious tools, though.

Demographics Pro is designed to give you detailed insights into social media audiences, which in turn gives you a sense of who might be searching for your brand or products. You can see what they’re interested in and what they might be looking for. It puts you on the right track to targeting words your customers are using versus words your company believes people are using.

You can glean similar data about your prospective customers by using a free tool, Social Searcher. It’s not hard to use — all you have to do is input your keyword(s), select the source and choose the post type. You can see recent posts, users, sentiment and even related hashtags/words, as reflected in the following Social Searcher report:

[Screenshot: Social Searcher report]

If you are struggling with your keywords, another great tool to try is Seed Keywords. This tool makes it possible to create a search scenario that you can then send to your friends. It is especially useful if you are in a niche industry and it is hard to find keywords.

Once you have created the search scenario, you get a link that you can send to people. The words they use to search are then collected and available to you. These words are all possible keywords.

[Screenshot: Seed Keywords results]
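
If you have raw audience language to work with, such as exported social posts, survey answers, or the responses collected through a tool like Seed Keywords, a few lines of code can surface the words and phrases people actually use. The sketch below is a minimal illustration, not part of any of the tools above; the file name, the "text" column, and the stop-word list are assumptions you would adapt.

import csv
import re
from collections import Counter

# Words too generic to be useful keyword candidates (illustrative list).
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "for", "in", "on",
              "is", "are", "it", "this", "that", "with", "my", "your"}

def tokenize(text):
    # Lowercase, keep alphabetic tokens, drop stop words.
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP_WORDS]

def keyword_counts(path, column="text", top=25):
    # Count single words and two-word phrases across all rows of a CSV export.
    words, phrases = Counter(), Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tokens = tokenize(row.get(column, ""))
            words.update(tokens)
            phrases.update(" ".join(pair) for pair in zip(tokens, tokens[1:]))
    return words.most_common(top), phrases.most_common(top)

if __name__ == "__main__":
    top_words, top_phrases = keyword_counts("audience_posts.csv")
    print(top_words)
    print(top_phrases)

Run against a few hundred posts or answers, the two-word phrases in particular tend to read like real search queries rather than the jargon your company uses internally.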

3. Dig into Intent

Once I get a feel for some of the keywords I want to target, it is time to take it a step further. I want to know what type of content is ranking for those keywords, which gives me an idea of what Google, and the searchers, believe the intent to be.

For the sake of providing a simple example (there are many other types of intent that occur during the buyer’s journey), let’s focus on two main categories of intent: buy and know.

Let’s say I’m targeting the term “fair trade coffee:”

[Screenshot: Google search results for “fair trade coffee”]

Based on what is in the results, Google believes the searcher’s intent could either be to purchase fair trade coffee or to learn more about it. In this case, the page I am trying to optimize can be targeted toward either intent.

Here’s another example:

[Screenshot: Google search results for “safe weed removal”]

In this scenario, if I was targeting the keyword, “safe weed removal,” I would create and/or optimize a page that provides information, or in other words, satisfies the “know” intent.

There are many tools that can help you determine what pages are ranking for your targeted keywords, including SEOToolSet, SEMRush, and Ahrefs. You would simply click through them to determine the intent of the pages.
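If you pull the top-ranking titles for each keyword out of one of those tools, you can rough-sort intent in bulk before reviewing pages by hand. The snippet below is a simple heuristic sketch under my own assumptions: the cue-word lists are illustrative, not a definitive intent taxonomy.

BUY_CUES = {"buy", "price", "shop", "deal", "sale", "cheap", "order", "coupon"}
KNOW_CUES = {"what", "how", "why", "guide", "tips", "learn", "ideas", "diy"}

def guess_intent(result_titles):
    # Count how many top results look transactional vs. informational.
    buy = sum(any(cue in title.lower() for cue in BUY_CUES) for title in result_titles)
    know = sum(any(cue in title.lower() for cue in KNOW_CUES) for title in result_titles)
    if buy > know:
        return "buy"
    if know > buy:
        return "know"
    return "mixed"

print(guess_intent([
    "Buy Fair Trade Coffee Online",
    "What Is Fair Trade Coffee? A Beginner's Guide",
    "10 Best Fair Trade Coffee Brands",
]))  # prints "mixed" for a results page that blends both intents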

4. Go from Keywords to Questions

People search questions. That’s not newsworthy, but we should be capitalizing on all of the opportunities to answer those questions. Therefore, don’t ever forget about the long-tail keyword.

Some of my favorite tools to assist in finding questions are Answer the Public, the new Question Analyzer by BuzzSumo, and FaqFox.

Answer The Public uses autosuggest technology to present the common questions and phrases associated with your keywords. It generates a visualization of data that can help you get a better feel for the topics being searched.

With this tool, you get a list of questions, not to mention other data that isn’t depicted below:

[Chart: Answer The Public question visualization]
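
Answer The Public builds on autosuggest data, and you can approximate the same idea yourself. The sketch below prepends common question words to a seed keyword and queries Google's unofficial autocomplete endpoint (suggestqueries.google.com); that endpoint is an assumption here, it is unsupported, may change, and should be queried sparingly.

import json
import urllib.parse
import urllib.request

QUESTION_WORDS = ["who", "what", "when", "where", "why", "how", "can", "is"]

def autocomplete(query):
    # The firefox client returns JSON shaped like ["<query>", ["suggestion", ...]].
    url = ("https://suggestqueries.google.com/complete/search?client=firefox&q="
           + urllib.parse.quote(query))
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8", errors="replace"))[1]

def question_ideas(seed_keyword):
    # Collect suggestions for "how <keyword>", "why <keyword>", and so on.
    ideas = set()
    for word in QUESTION_WORDS:
        ideas.update(autocomplete(f"{word} {seed_keyword}"))
    return sorted(ideas)

if __name__ == "__main__":
    for question in question_ideas("iced coffee"):
        print(question)

This will not replicate Answer The Public's visualization, but it gives you a raw list of question-style phrases to feed into your content planning.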

The Question Analyzer by BuzzSumo locates the most popular questions that are asked across countless forums and websites, including Amazon, Reddit, and Quora. If I want to know what people ask about “coffee machines,” I can get that information:


[Screenshot: Question Analyzer results for “coffee machines”]

FaqFox will also provide you with questions related to your keywords using such sites as Quora, Reddit, and Topix.

For example, if I want to target people searching for “iced coffee,” I might consider creating and optimizing content based on the following questions:

[Screenshot: FaqFox question results for “iced coffee”]

Final Thoughts

There are constantly new techniques and tools to make our jobs easier. Your main focus should be on how to get customers to your website, which is done by knowing how to draw them in with the right keywords, questions, and content.

 

Source: This article was published at Search Engine Journal by Mindy Weinstein

Published in Online Research

Incorporating Pinterest into your online marketing strategy is good -- getting that content to rank in Google search is better. Columnist Thomas Stern explains how to increase the search visibility of your Pinterest content.

In mid-2014, Pinterest introduced Guided Search, a feature that greatly expanded the information available to marketers by providing insight to popular keyword phrases for boards and Pins. Unfortunately, this feature requires inputting keywords on a per-board or per-Pin basis, which can be incredibly time-consuming for most marketers.

In early 2015, our team received access to the Pinterest advertising beta program. This granted our team further insight into keyword targeting opportunities around our clients’ products. While this was a great step toward ensuring visibility on the platform, the targeting and keyword insights were considerably limited, undoubtedly something that Pinterest is working to improve.

We decided to take matters into our own hands. After all, we’ve seen the tremendous performance with Pinterest when utilized correctly for clients. Similar performance has also been validated by numerous case studies, most recently by Marketing Sherpa earlier this year.

Google + Pinterest = 

After Pinterest took off in popularity a few years ago, our SEO team noticed more and more page-one Google results that included Pinterest. Most recently, we’ve come across indexed boards and Pins in image results, along with a unique mobile result that displays multiple Pin images underneath a link to the board.

[Image: Pinterest boards and Pins in Google search results]

Clearly, Google considers Pinterest content to be authoritative, so we decided to see exactly how Pinterest compares to other websites in terms of a unique number of keyword rankings on page one.

Pinterest Keyword Ranking

Using SEMrush’s extensive database of organic listings, we see that Pinterest ranks #8 among all websites by the number of keywords ranking in Google’s top 20 results — just ahead of eBay, Yellow Pages and TripAdvisor. With nearly five million ranked keywords to evaluate, we’ve put together a method to identify the commonalities between keywords and categories.

Step 1: Identify Commonly Occurring Keywords

Considering the sheer volume of keywords, an initial filtering process is required to make sense of the data. We felt that it was easiest to identify the most commonly occurring keywords to create initial groupings. The following example includes the most frequently occurring keywords with adjectives and pronouns omitted (cool, cheap, her, him, etc.).

[Chart: Most frequently occurring Pinterest keywords in Google rankings]
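
For readers who want to reproduce this kind of filtering on their own keyword export (for example, a CSV from SEMrush), here is a minimal sketch; the file name, the "keyword" column, and the filter list of adjectives and pronouns are assumptions for illustration, not our exact method.

import csv
from collections import Counter

# Adjectives, pronouns and other noise words omitted from the counts (illustrative).
FILTER_WORDS = {"cool", "cheap", "best", "cute", "easy", "free", "her", "him",
                "your", "my", "the", "a", "for", "and", "of", "to", "in", "with"}

def common_terms(path, column="keyword", top=30):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts.update(word for word in row[column].lower().split()
                          if word not in FILTER_WORDS)
    return counts.most_common(top)

if __name__ == "__main__":
    for term, frequency in common_terms("pinterest_rankings.csv"):
        print(term, frequency)

The same loop extended to two-word pairs gives you the long-tail combinations used in Step 2.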

Step 2: Build & Prioritize Keyword Phrases

Outside of branded search, Pinterest results on Google are primarily long-tail, descriptive phrases. To help identify these phrases, an additional round of keyword insight is needed. The following example takes the “home & home furnishings” keywords from step one and aligns them with the most searched pairings that Pinterest ranks on Google.

[Chart: Most-searched Pinterest keyword combinations for home & home furnishings]

When reviewing these combinations, it’s immediately clear that a theme exists across the reviewed home and furniture category: Pinterest users are interested in smaller homes and furniture that accommodates a smaller space.

Putting this into a marketing context, brands like West Elm, Ikea and CB2 could greatly benefit from creating Pinterest boards around space-saving furniture offerings. All three brands reference small spaces on a dedicated Pinterest board, but none seem to quite capture the varied intent (room type, furniture type) of Pinterest searchers.

Step 3: Optimize With Pinterest Ranking Factors In Mind

While researching ways to utilize Google data to inform Pinterest keyword strategies, we identified some slight differences between boards and Pins that rank at the top of each search engine (Pinterest vs. Google). On Google specifically, it seemed that boards and Pins with a high degree of interaction (repins) were favored. On Pinterest, it’s a bit more difficult to pin down in entirety (no pun intended), but Google’s ranking factors in addition to others are certainly in place. Regardless of search engine, it’s important to keep the following optimization principles in mind:

  • Conduct Keyword Research. As is evident in the aforementioned furniture example, there is an abundance of keyword combinations that can help brands align with how users search.
  • Be Descriptive. Authentic and utilitarian content must coincide with keyword strategies. On Pinterest, this means creating boards that are common in theme but also provide enough specificity to align with users’ needs (e.g., “Small Living Room Ideas” or “Small Space Living”). It also means Pins should have well-written descriptions that thoroughly describe what the image is about.
  • Use Markup. One of the simplest ways to ensure the content from your website and/or blog is optimized for Pinterest is to use Rich Pins in conjunction with the appropriate markup (different types of markup are supported for recipes, movies, articles, products or places). We highly recommend identifying which relevant content types are on your website and implementing markup ASAP.
  • Be Active. Just like Facebook, the level of engagement of content on Pinterest helps algorithms on the platform determine which boards and pins should rank. Brands often overlook the fact that pinning other users’ and websites’ content is common practice on the platform, and brands can be rewarded for participating.

Source: This article was published at searchengineland.com by Thomas Stern

Published in Search Engine

When Congress voted in March to reverse rules intended to protect Internet users’ privacy, many people began looking for ways to keep their online activity private. One of the most popular and effective is Tor, a software system millions of people use to protect their anonymity online.

But even Tor has weaknesses, and in a new paper, researchers at Princeton University recommend steps to combat certain types of Tor’s vulnerabilities.

Tor was designed in the early 2000s to make it more difficult to track what people are doing online by routing their traffic through a series of “proxy” servers before it reaches its final destination. This makes it difficult to track Tor users because their connections to a particular server first pass through intermediate Tor servers called relays. But while Tor can be a powerful tool to help protect users’ privacy and anonymity online, it is not perfect.

In earlier work, a research group led by Prateek Mittal, an assistant professor of electrical engineering, identified different ways that the Tor network can be compromised, as well as ways to make Tor more resilient to those types of attacks. Many of their latest findings on how to mitigate Tor vulnerabilities are detailed in a paper titled “Counter-RAPTOR: Safeguarding Tor Against Active Routing Attacks,” presented at the IEEE Symposium on Security and Privacy in San Jose, California, in May.

The paper was written by Mittal; Ph.D. students Yixin Sun and Anne Edmundson; Nick Feamster, professor of computer science; and Mung Chiang, the Arthur LeGrand Doty Professor of Electrical Engineering. Support for the project was provided in part by the National Science Foundation, the Open Technology Fund and the U.S. Defense Department.

The research builds on earlier work done by some of the authors identifying a method of attacking Tor called “RAPTOR” (short for Routing Attacks on Privacy in TOR). In that work, Mittal and his collaborators demonstrated methods under which adversaries could use attacks at the network level to identify Tor users.

“As the internet gets bigger and more dynamic, more organizations have the ability to observe users’ traffic,” said Sun, a graduate student in computer science. “We wanted to understand possible ways that these organizations could identify users and provide Tor with ways to defend itself against these attacks as a way to help preserve online privacy.”

Mittal said the vulnerability emerges from the fact that there are big companies that control large parts of the internet and forward traffic through their systems. “The idea was, if there’s a network like AT&T or Verizon that can see user traffic coming into and coming out of the Tor network, then they can do statistical analysis on whose traffic it is,” Mittal explained. “We started to think about the potential threats that were posed by these entities and the new attacks — the RAPTOR attacks — that these entities could use to gain visibility into Tor.”

Even though a Tor user’s traffic is routed through proxy servers, every user’s traffic patterns are distinctive, in terms of the size and sequence of data packets they’re sending online. So if an internet service provider sees similar-looking traffic streams entering the Tor network and leaving the Tor network after being routed through proxy servers, the provider may be able to piece together the user’s identity. And internet service providers are often able to manipulate how traffic on the internet is routed, so they can observe particular streams of traffic, making Tor more vulnerable to this kind of attack.

These types of attacks are important because there is a lot of interest in being able to break the anonymity Tor provides. “There is a slide from an NSA (the U.S. National Security Agency) presentation that Edward Snowden leaked that outlines their attempts at breaking the privacy of the Tor network,” Mittal pointed out. “The NSA wasn’t successful, but it shows that they tried. And that was the starting point for this project because when we looked at those documents we thought, with these types of capabilities, surely they can do better.”

In their latest paper, the researchers recommend steps that Tor can take to better protect its users from RAPTOR-type attacks. First, they provide a way to measure internet service providers’ susceptibility to these attacks. (This depends on the structure of the providers’ networks.) The researchers then use those measurements to develop an algorithm that selects how a Tor user’s traffic will be routed through proxy servers depending on the servers’ vulnerability to attack. Currently, Tor proxy servers are randomly selected, though some attention is given to making sure that no servers are overloaded with traffic. In their paper, the researchers propose a way to select Tor proxy servers that takes into consideration their vulnerability to outside attack. When the researchers implemented this algorithm, they found that it reduced the risk of a successful network-level attack by 36 percent.
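
To make the idea concrete, here is a highly simplified sketch of vulnerability-aware relay selection (not the authors' actual Counter-RAPTOR implementation). It assumes each relay has already been given a resilience score between 0 and 1, and blends that score with bandwidth when weighting the random choice; the relay data and the 0.5 blending weight are illustrative assumptions.

import random

# (relay name, bandwidth in MB/s, resilience score against routing attacks in [0, 1])
RELAYS = [
    ("relay-a", 80.0, 0.90),
    ("relay-b", 120.0, 0.40),
    ("relay-c", 60.0, 0.75),
]

def pick_relay(relays, alpha=0.5):
    # Weight = alpha * resilience + (1 - alpha) * normalized bandwidth,
    # so safer relays are favored without ignoring capacity.
    max_bw = max(bandwidth for _, bandwidth, _ in relays)
    weights = [alpha * resilience + (1 - alpha) * (bandwidth / max_bw)
               for _, bandwidth, resilience in relays]
    names = [name for name, _, _ in relays]
    return random.choices(names, weights=weights, k=1)[0]

print(pick_relay(RELAYS))

Compared with purely bandwidth-weighted selection, a scheme like this trades some load-balancing efficiency for a lower chance of routing a user's traffic through relays that are easy for a network-level adversary to observe.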

The researchers also built a network-monitoring system to check network traffic to uncover manipulation that could indicate attacks on Tor. When they simulated such attacks themselves, the researchers found that their system was able to identify the attacks with very low false positive rates.

Roger Dingledine, president and research director of the Tor Project, expressed interest in implementing the network monitoring approach for Tor. “We could use that right now,” he said, adding that implementing the proposed changes to how proxy servers are selected might be more complicated.

“Research along these lines is extremely valuable for making sure Tor can keep real users safe,” Dingledine said. “Our best chance at keeping Tor safe is for researchers and developers all around the world to team up and all work in the open to build on each other’s progress.”

Mittal and his collaborators also hope that their findings of potential vulnerabilities will ultimately serve to strengthen Tor’s security.

“Tor is amongst the best tools for anonymous communications,” Mittal said. “Making Tor more robust directly serves to strengthen individual liberty and freedom of expression in online communications.”

Source: This article was published at princeton.edu by Josephine Wolff

Published in Internet Privacy

Over the past half-decade I’ve written extensively about web archiving, including why we need to understand what’s in our massive archives of the web, whether our archives are failing to capture the modern and social web, the need for archives to modernize their technology infrastructures and, perhaps most intriguingly for the world of “big data,” how archives can make their petabytes of holdings available for research. What might it look like if the world’s web archives opened up their collections for academic research, making hundreds of billions of web objects totaling tens of petabytes and stretching back to the founding of the modern web available as a massive shared corpus to power the modern data mining revolution, from studies of the evolution of the web to powering the vast training corpuses required to build today’s cutting edge neural networks?

When it comes to crawling the open web to build large corpuses for data mining, universities in the US and Canada have largely adopted a hands-off approach, exempting most work from ethical review, granting permission to ignore terms of use or copyright restrictions and waiving traditional policies on data management and replication on the grounds that material harvested from the open web is publicly accessible information and that its copyright owners, by virtue of making it available on the web without password protection, encourage its access and use.

On the other hand, the world’s non-profit and governmental web archives, which collectively hold tens of petabytes of archived content crawled from the open web stretching back 20+ years, have as a whole largely resisted opening their collections to bulk academic research. Many provide no access at all to their collections, some provide access only on a case-by-case basis and others provide access to a single page at a time, with no facilities for bulk exporting large portions of their holdings or even analyzing them in situ.

While some archives have cited technical limitations in making their content more accessible, the most common argument against offering bulk data mining access revolves around copyright law and concern that by boxing up gigabytes, terabytes or even petabytes of web content and redistributing it to researchers, web archives could potentially be viewed as “redistributing” copyrighted content. Given the growing interest among large content holders in licensing their material for precisely such bulk data mining efforts, some archives have expressed concern that traditional application of “fair use” doctrine in potentially permitting such data mining access may be gradually eroding.

Thus, paradoxically, research universities have largely adopted the stance that researchers are free to crawl the web and bulk download vast quantities of content to use in their data mining research, while web archives as a whole have adopted the stance that they cannot make their holdings available for data mining because they would, in their view, be “redistributing” the content they downloaded to third parties to use for data mining.

One large web archive has bucked this trend and stood alone among its peers: Common Crawl. Similar to other large web archiving initiatives like the Internet Archive, Common Crawl conducts regular web wide crawls of the open web and preserves all of the content it downloads in the standard WARC file format. Unlike many other archives, it focuses primarily on preserving HTML web pages and does not archive images, videos, JavaScript files, CSS stylesheets, etc. Its goal is not to preserve the exact look and feel of a website on a given snapshot in time, but rather to collect a vast cross section of HTML web pages from across the web in a single place to enable large-scale data mining at web scale.

Yet, what makes Common Crawl so unique is that it makes everything it crawls freely available for download for research. Each month it conducts an open web crawl, boxes up all of the HTML pages it downloads and makes a set of WARC files and a few derivative file formats available for download.

Its most recent crawl, covering August 2017, contains more than 3.28 billion pages totaling 280TiB, while the previous month’s crawl contains 3.16 billion pages and 260TiB of content. The total collection thus totals tens of billions of pages dating back years and totaling more than a petabyte, with all of it instantly available for download to support an incredible diversity of web research.

Of course, without the images, CSS stylesheets, JavaScript files and other non-HTML content saved by preservation-focused web archives like the Internet Archive, this vast compilation of web pages cannot be used to reproduce a page’s appearance as it stood on a given point in time. Instead, it is primarily useful for large-scale data mining research, exploring questions like the linking structure of the web or analyzing the textual content of pages, rather than acting as a historical replay service.

The project excludes sites which have robots.txt exclusion policies, following the historical policy of many other web archives, though it is worth noting that the Internet Archive earlier this year began slowly phasing out its reliance on such files due to their detrimental effect on preservation completeness. Common Crawl also allows sites to request removal from their index. Other than these cases, Common Crawl attempts to crawl as much of the remaining web as possible, aiming for a representative sample of the open web.

Moreover, Common Crawl has made its data publicly available for more than half a decade and has become a staple of large academic studies of the web with high visibility in the research community, suggesting that its approach to copyright compliance and research access appears to be working for it.

Yet, beyond its summary and full terms of use documents, the project has published little in terms of how it views its work fitting into US and international standards on copyright and fair use, so I reached out to Sara Crouse, Director of Common Crawl, to speak to how the project approaches copyright and fair use and any advice they might have for other web archives considering broadening access to their holdings for academic big data research.

Ms. Crouse noted the risk-averse nature of the web archiving community as a whole (historically many adhered and still adhere to a strict “opt in” policy requiring prior approval before crawling a site) and the unwillingness of many archives to modernize their thinking on copyright and to engage more closely with the legal community in ways that could help them expand fair use horizons. In particular, she noted “since we [in the US] are beholden to the Copyright Act, while living in a digital age, many well-intentioned organizations devoted to web science, archiving, and information provision may benefit from a stronger understanding of how copyright is interpreted in present day, and its hard boundaries” and that “many talented legal advisers and groups are interested in the precedent-setting nature of this topic; some are willing to work Pro Bono.”

Given that US universities as a whole have moved aggressively towards this idea of expanding the boundaries of fair use and permitting opt-out bulk crawling of the web to compile research datasets, Common Crawl seems to be in good company when it comes to interpreting fair use for the digital age and modern views on utilizing the web for research.

Returning to the difference between Common Crawl’s datasets and traditional preservation-focused web archiving, Ms. Crouse emphasized that they capture only HTML pages and exclude multimedia content like images, video and other dynamic content.

She noted that a key aspect of their approach to fair use is that web pages are intended for consumption by human beings one at a time using a web browser, while Common Crawl concatenates billions of pages together in the specialized WARC file format designed for machine data mining. Specifically, “Common Crawl does not offer separate/individual web pages for easy consumption. The three data formats that are provided include text, metadata, and raw data, and the data is concatenated” and “the format of the output is not a downloaded web page. The output is in WARC file format which contains the components of a page that are beneficial to machine-level analysis and make for space-efficient archiving (essentially: header, text, and some metadata).”

In the eyes of Common Crawl, the use of specialized archival-oriented file formats like WARC (which is the format of choice of most web archives) limit the content’s use to transformative purposes like data mining and, combined with the lack of capture of styling, image and other visual content, renders the captured pages unsuitable to human browsing, transforming them from their originally intended purpose of human consumption.

As Ms. Crouse put it, “this is big data intended for machine learning/readability. Further, our intention for its use is for public benefit i.e. to encourage research and innovation, not direct consumption.” She noted that “from the layperson’s perspective, it is not at all trivial at present to extract a specific website’s content (that is, text) from a Common Crawl dataset. This task generally requires one to know how to install and run a Hadoop cluster, among other things. This is not structured data. Further it is likely that not all pages of that website will be included (depending on the parameters for depth set for the specific crawl).” This means that “the bulk of [Common Crawl’s] users are from the noncommercial, educational, and research sectors. At a higher level, it’s important to note that we provide a broad and representative sample of the web, in the form of web crawl data, each month. No one really knows how big the web is, and at present, we limit our monthly data publication to approximately 3 billion pages.”
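
For a sense of what that machine-oriented access looks like in practice, here is a hedged sketch that iterates over a single Common Crawl WARC file using the open-source warcio library and yields the raw HTML responses; the file name is a placeholder, and real segment paths are published in each crawl's listing on commoncrawl.org.

from warcio.archiveiterator import ArchiveIterator  # pip install warcio

def iter_html_records(warc_path):
    # Stream WARC records and keep only HTML HTTP responses.
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            content_type = record.http_headers.get_header("Content-Type") or ""
            if "text/html" not in content_type:
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            html = record.content_stream().read()
            yield url, html

if __name__ == "__main__":
    for url, html in iter_html_records("CC-MAIN-example.warc.gz"):
        print(url, len(html))

Even this small example underlines Ms. Crouse's point: the output is a stream of concatenated records suited to analysis pipelines, not a browsable copy of any individual website.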

Of course, given that content owners are increasingly looking to bulk data mining access licensing as a revenue stream, this raises the concern that even if web archives are transforming content designed for human consumption into machine friendly streams designed for data mining, such transformation may conflict with copyright holders’ own bulk licensing ambitions. For example, many of the large content licensors like LexisNexis, Factiva and Bloomberg all offer licensed commercial bulk feeds designed to support data mining access that pay royalty fees to content owners for their material that is used.

Common Crawl believes it addresses this through the fact that its archive represents only a sample of each website crawled, rather than striving for 100% coverage. Specifically, Ms. Crouse noted that “at present, [crawls are] in monthly increments that are discontinuous month-to-month. We do only what is reasonable, necessary, and economical to achieve a representative sample. For instance, we limit the number of pages crawled from any given domain so, for large content owners, it is highly probable that their content, if included in a certain month’s crawl data, is not wholly represented and thus not ideal for mining for comprehensive results … if the content owner is not a large site, or in a niche market, their URL is less likely to be included in the seeds in the frontier, and, since we limit depth (# of links followed) for the sake of both economy and broader representative web coverage, 'niche' content may not even appear in a given month’s dataset.”

To put it another way, Common Crawl’s mission is to create a “representative sample” of the web at large by crawling a sampling of pages and limiting the number of pages from each site they capture. Thus, their capture of any given site will represent a discontinuous sampling of pages that can change from month to month. A researcher wishing to analyze a single web site in its entirety would therefore not be able to turn to Common Crawl and would instead have to conduct their own crawl of the site or turn to a commercial aggregator that partners with the content holder to license the complete contents of the site.

In Common Crawl’s view this is a critical distinction that sets it apart from both traditional web archiving and the commercial content aggregators that generate data mining revenue for content owners. By focusing on creating a “representative sample” of the web at large, rather than attempting to capture a single site in its entirety (and in fact ensuring that it does not include more than a certain number of pages per site), the crawl self-limits itself to being applicable only to macro-level research examining web scale questions. Such “web scale” questions cannot be answered through any existing open dataset and by incorporating specific design features Common Crawl ensures that more traditional research questions, like data mining the entirety of a single site, which might be viewed as redistribution of that site or competing with its owner’s ability to license its content for data mining, is simply not possible.

Thus, to summarize, Common Crawl is both similar to other web archives in its workflow of crawling the web and archiving what it finds, but sets itself apart by focusing on creating a representative sample of HTML pages from across the entire web, rather than trying to preserve the entirety of a specific set of websites with an eye towards visual and functional preservation. Even when a given page is contained in Common Crawl’s archives, the technical sophistication and effort required to extract it and the lack of supporting CSS, JavaScript and image/video files renders the capture useless for the kind of non-technical browser-based access and interaction such pages are designed for.

Of course, copyright and what counts as "fair use" is a notoriously complex, contradictory, contested and ever-changing field and only time will tell whether Common Crawl’s interpretation of fair use holds up and becomes a standard that other web archives follow. At the very least, however, Common Crawl presents a powerful and intriguing model for how web-scale data can power open data research and offers traditional web archives a set of workflows, rationales and precedent to examine that are fully aligned with those of the academic community. Given its popularity and continued growth over the past decade it is clear that Common Crawl’s model is working and that many of its underlying approaches are highly applicable to the broader web archiving community.

Putting this all together, today’s web archives preserve for future generations the dawn of our digital society, but lock those tens of petabytes of documentary holdings away in dark archives or permit only a page at a time to be accessed. Common Crawl’s success and the projects that have been built upon its data stands testament to the incredible possibilities when such archives are unlocked and made available to the research community. Perhaps as the web archiving community modernizes and “open big data” continues to reshape how academic research is conducted, more web archives will follow Common Crawl’s example and explore ways of shaping the future of fair use and gradually opening their doors to research, all while ensuring that copyright and the rights of content holders are respected.

Source: This article was published at forbes.com by Kalev Leetaru

Published in Online Research

Google has officially announced that it is opening an AI center in Beijing, China.

The confirmation comes after months of speculation fueled by a major push to hire AI talent inside the country.

Google’s search engine is blocked in China, but the company still has hundreds of staff in China who work on its international services. In reference to that workforce, Alphabet chairman Eric Schmidt has said the company “never left” China, and it makes sense that Google wouldn’t want to ignore China’s deep and growing AI talent pool, which has been hailed by experts including former Google China head Kaifu Lee.

As with its general hiring in China, this AI push isn’t a sign that Google will launch new services in the country, although it did make its Google Translate app available in China earlier this year in a rare product move on Chinese soil.

Instead, the Beijing-based team will work with AI colleagues in Google offices across the world, including New York, Toronto, London and Zurich.

“I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make everyone’s life better. As an AI first company, this is an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it,” wrote Dr. Fei-Fei Li, Chief Scientist at Google Cloud, in a blog post announcing plans for the China lab.

Li, formerly the director of Stanford University’s Artificial Intelligence Lab, was a high-profile arrival when she joined Google one year ago. She will lead the China-based team alongside Jia Li, who was hired from Snap, where she had been head of research, at the same time as Fei-Fei Li.

The China lab has “already hired some top talent” and there are currently more than 20 jobs open, according to a vacancy listing.

“Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community,” Li added.

Google is up against some tough competitors for talent. Aside from the country’s three largest tech companies Baidu, Tencent and Alibaba, ambitious $30 billion firm Bytedance — which acquired Musical.ly for $1 billion — and fast-growing companies SenseTime and Face++ all compete for AI engineers with compensation deals growing higher.

Source: This article was published at techcrunch.com by Jon Russell

Published in Online Research

Using the internet makes people happier, especially seniors and those with health problems that limit their ability to fully take part in social life, says a study in Computers in Human Behavior.

The issue: A generation after the internet began appearing widely in homes and offices, it is not unusual to hear people ask if near-constant access to the web has made us happier. Research on the association between internet use and happiness has been ambiguous. Some studies have found that connectivity empowers people. A 2014 study published in the journal Computers in Human Behavior notes that excessive time spent online can leave people socially isolated. Compulsive online behavior can have a negative impact on mental health.

A new paper examines if quality of life in the golden years is impacted by the ubiquitous internet.

An academic study worth reading: “Life Satisfaction in the Internet Age – Changes in the Past Decade,” published in Computers in Human Behavior, 2016.

Study summary: Sabina Lissitsa and Svetlana Chachashvili-Bolotin, two researchers in Israel, investigate how internet adoption impacts life satisfaction among Israelis over age 65, compared with working-age adults (aged 20-64). They use annual, repeated cross-sectional survey data collected by Israel’s statistics agency from 2003 to 2012 – totaling 75,523 respondents.

They define life satisfaction broadly — on perceptions of one’s health, job, education, empowerment, relationships and place in society — and asked respondents to rate their satisfaction on a four-point scale. They also measured specific types of internet use, for example email, social media and shopping.

Finally, Lissitsa and Chachashvili-Bolotin also analyzed demographic data, information on respondents’ health, the amount they interact with friends and how often, if at all, they feel lonely.

Findings:

  • Internet users report higher levels of life satisfaction than non-users. This finding:
    • Is higher among people with health problems.
    • Decreases over time (possibly because internet saturation is spreading, making it harder to compare those with and those without internet access).
    • Decreases as incomes rise.
  • Internet access among seniors rose from 8 to 34 percent between 2003 and 2012; among the younger group, access increased from 44 to 78 percent. Therefore, the digital divide grew during the study period.
  • Seniors who use the internet report higher levels of life satisfaction than seniors who do not.
  • “Internet adoption promotes life satisfaction in weaker social groups and can serve as a channel for increasing life satisfaction.”
  • Using email and shopping online are associated with an increase in life satisfaction.
  • Using social media and playing games have no association with life satisfaction. The authors speculate that this is because some people grow addicted and abuse these internet applications.
  • The ability to use the internet to seek information has an insignificant impact on happiness for the total sample. But it has a positive association for users with health problems — possibly because the internet increases their ability to interact with others.
  • The findings can be broadly generalized to other developed countries.

Helpful resources:

The Organisation for Economic Co-operation and Development (OECD) publishes key data on the global internet economy.

The United Nations publishes the ICT Development Index to compare countries’ adoption of internet and communications technologies.

The Digital Economy and Society Index measures European Union members’ progress toward closing the digital divides in their societies.

Other research:

2015 article by the same authors examines rates of internet adoption by senior citizens.

2014 study looks at how compulsive online behavior is negatively associated with life satisfaction. Similarly, this 2014 article specifically focuses on the compulsive use of Facebook.

2014 study tests the association between happiness and online connections.

Journalist’s Resource has examined the cost of aging populations on national budgets around the world.

Published in Online Research