Sunday, 05 February 2017 01:55

User Intent: It’s the Future of SEO

In recent years, search engines have been optimized around users' needs and their search experience. Google has evolved to reward content that is valuable and relevant to readers.

Modern-day SEO is all about user intent, and you can improve your online presence by focusing on several key psychological principles to entice your readers, rank well in search results and, ultimately, help grow your business.

Let’s take a look at some ways you can understand your audience and the steps needed to create content that gets found in Google.

Customer Personas Should Direct SEO

Online marketers love to get into the heads of their customers. We do this with A/B testing, analytics, and other methods to understand what our audience does when they consume our content. One tool to understand the mind of your target audience is to build customer personas.

You can use customer personas as a sort of blueprint to help articulate details about your audience. Customer personas assist you in identifying areas that you can fill with quality content aimed at addressing and solving their problems.

You can start the process of building customer personas by talking to people you have done business with in the past, or individuals who represent your ideal client.

Some questions you can ask to get more information about how your products and services can solve their problems include:

  • What are their pain points?
  • What products or services have they used in the past?
  • Why do they continue to use those goods and services, or why did they cease using them?
  • What types of solutions will they not use?

Customer personas help you align your content with the qualities of your readers.

For example, your content will look, read, and position itself differently depending on whether your target audience is a middle-aged man or a pre-teen girl. You need to create content with specific qualities based on your audience if you want that content to resonate and lead to online conversions and sales.

Today’s search engines work to connect relevant content with searchers based on the questions in each search query. While search engines in the past were pretty clunky, modern-day search engines are powered by behavioral learning algorithms and LSI keywords to increase the quality and relevancy of search results.

Each time someone searches for something on Google, searchers are asking a question. You can optimize content for your users once you understand that they are asking questions when they visit Google.

Questions reflect the intent of your users, so you should create content that addresses the core questions of your customers.

Position your content around the needs of your audience from your customer personas. How you do this will look different based on your business and industry, but here are a few examples to get you started:

  • “Restaurant near me” means “I am hungry, don’t want to cook, and want to eat somewhere now.”
  • “Best barber for men” means “I am a man and what is the most trusted barber for me?”
  • “U.S. presidential information” means “I am interested in U.S. politics and want information about the presidential nominees.”

The purpose behind a search will vary from person to person even when the same search query is used. This is why you should produce specific content for specific personas. Once you figure out what your customers are after, you can create engaging content that drives them towards a conversion.

User Intent and Keyword Research

If your business wants to grow its online presence and attract new customers, then you need to focus on the intent and needs of your users. The way user intent translates to the digital marketing world is through keywords and search terms used by users who are looking for a particular product or service.

Every search query put into Google is a question.

People go to Google with specific questions and with the goal of finding specific answers. Through various algorithm changes, Google has shown that its primary interest is to deliver relevant content to users. For Google, this ensures people keep coming back to its service and build brand loyalty with Google's product suite.

Although keyword research remains an essential component of semantic SEO, your business needs to change the way it conducts and implements keywords.

Creating content around those keywords and their variations will bring your content closer to user intent. Here are a few points to consider when creating more user-focused keywords for your business content.

Semantic SEO and Search Engines

Semantic search is the newest focus of Google and other search engines: it looks at how each word is situated within a search query and at the relationships between those words.

Integrating semantic search terms enriches your content and makes it more readable for your audience. This approach also helps you avoid repeating keywords too many times, and it improves how search engines read your content in several ways:

  • It introduces additional sets of keywords that are directly related to your original set.
  • This secondary set of keywords builds rich context, helping search engines understand your content and deliver better results to your customers.

By focusing on what each user types into the search bar, Google can understand what the user is looking for on a deeper level compared to only looking at keywords. In a sense, semantic SEO takes the entire search query into account and uses past searches by the user and similar searches to intelligently find the meaning and intent of the user’s expectations.

How to Perform Semantic SEO Keyword Research

As your business builds content around semantic keywords, you will want to begin by looking at your customer personas and your target audience. Since semantic SEO is meant to deliver more relevant content, you need to work from your audience's requirements and desires.

Once you identify the problems and questions your customers bring to Google, you will be able to build valuable content. However, some research is needed to ensure you create your content efficiently.

Before writing a single blog post or putting together a stellar infographic, you will want to create a simple spreadsheet outlining the topics, concepts, and keywords to cover. Here is an example of the spreadsheet your business can make in about 10 minutes to save you a lot of time and money.

Let’s take a look at the exact steps and tools your business can use to create the above spreadsheet. As you can see, the spreadsheet covers the essential elements of semantic SEO keywords, including:

  • Topic
  • Concepts
  • Keywords
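
If you prefer to keep this structure in version control rather than a spreadsheet app, here is a minimal Python sketch (my own illustration; the example rows are hypothetical, not taken from the original screenshot) that writes the same three columns to a CSV file:

import csv

# Hypothetical example rows for the topic / concepts / keywords structure.
rows = [
    ("Semantic Search", "how search engines interpret whole queries", "semantic SEO; LSI keywords"),
    ("Semantic Search", "user intent behind a query", "search intent; keyword research"),
]

with open("semantic_seo_keywords.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Topic", "Concepts", "Keywords"])  # header row
    writer.writerows(rows)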


1. Topic

Begin thinking about your content based on your user personas. Figure out what your readers are looking for and how you can tie those searches back to your business. You can start this step by looking through real-world questions your customers are asking. Some places to consider include:

  • Your customer service records and calls
  • Reddit
  • Quora


Once you know what your customers are searching for, you will be able to identify the base topics to cover. For this example, I will use the search topic of “Semantic Search” since that is what I am writing about right now.

2. Concepts

The next step is to identify your core set of ideas based on your core topics. These concepts will be the root of all your keywords and the base of your content.

As you begin to think about what your customers are asking about the topics listed above, you will be able to find key points to create content around. These concepts should be connected to the problems (or questions) your customers have and tie back to your business.

SEMRush is the primary tool I use for this stage of semantic SEO keyword research. The free version works fine for me, and here is a screenshot of findings with notes on the important elements to pay attention to.

3. Keywords

Finally, you will build a list of keywords related to the core concepts and topics that your customers are searching for. These keywords, phrases, and search terms should be relevant to your main topic and supporting ideas.

These keywords should be distinct from your concepts and main topic but connect with them based on how the words and ideas work together. The best free tool I use for this stage of semantic SEO keyword research is LSIGraph.com. Below is a screenshot of some keywords I chose based on my research with LSIGraph.com.

Conclusion

Since your business is working to acquire market share and retain your customers, you need to make sure that all your content is relevant to your readers and easily found in search engines.

Semantic SEO keyword research makes it possible for you to save time and money when creating content. Future SEO strategies need to focus on the intent of users and the quality of your content.

You can use the above tools and processes to deliver the content your customers are looking for so you can drive more traffic and close more sales through your online business!

Author: Chris Giarratana
Source: https://www.searchenginejournal.com/user-intent-future-seo/184217


Saturday, 04 February 2017 15:47

The Impending Crisis of the Internet of Things

Picture our beloved Internet as a massive luxury cruise ship navigating the world’s icy waters. Stationed on the bottom deck is a bona fide navigational expert making sure everything is OK. This person has all the gadgets: radar, sonar, charts, compasses, ring dials, chronometers. He can chart a path through the thickest of fogs, no sweat at all.

One night, after examining the path, the navigator sees a field of icebergs straight ahead. These are the huge security flaws in smart televisions, cameras, dishwashers, cars, and everything else that makes up the expanding roster of devices known as the Internet of Things.

“We have to change course,” the expert says.

“OK, OK,” everyone says. “We will.”

But no one does. And the ship continues moving in the same direction. After days and weeks of warnings, the ship finally hits the first, small iceberg. Saucers go flying and surf-and-turf dinner carts roll off the deck and into the sea. This was the DDoS attack back in October that took down huge chunks of the Internet for a day.

“What do we do now?” everyone asks the expert. “How do we fix this?”

The expert looks around. The ship is surrounded by miles and miles of icebergs, their sharp points poking out of the surface as far as the eye can see.


“When you look at the Internet as a whole, it was never constructed to be secure,” says James Scott, a senior fellow at the Institute for Critical Infrastructure Technology. “But now you have insecure devices being networked to an insecure Internet.” In fact, despite the massive effects that the October attack had, the tactics used to make it possible were elementary. “It was not sophisticated,” Scott says. “All [the hacker] did was focus on very pronounced vulnerable devices, and used them to drive traffic wherever they wanted.”

Rather than the attack being successful due to the hacker’s technical proficiency, it was really only successful because of the number of IoT devices currently out in the world. According to Gartner, there are more than 6.4 billion IoT devices in use — a number expected to rise to 50 billion by 2020. That’s an estimated 4,000 new devices installed every day, roughly 186 of which are vulnerable to malware used in the October attack. It’s not surprising, then, that DDoS attacks rose 71 percent between Q3 of 2015 and Q3 of 2016.

Something that makes the IoT problem different from other security problems is that there’s really no sound way to add security to the devices already out there.

“If you look at your PCs or other devices, they have the ability to install software after the fact by the consumer,” says Alan Grau, president of Icon Labs, a provider of IoT security. “With the IoT, that’s generally not the case.”

There are few updates or patches that provide stronger security measures. If there’s an option to change the device’s password, then, yes, the consumer can (and very much should) do so. But many devices don’t even have that option, or, if they do, it’s too complex for the average consumer.

“Part of the problem is the cost of the security flaw is not borne by the person building the product,” Grau says. If a botnet infects a bunch of smart TVs that are then used in a DDoS attack to knock banking institutions offline for a day, that hurts their businesses, but it doesn’t really hurt those “real-life” producers constructing the products. “That’s why regulations are required to create an incentive.”

Meaning, it’s on legislatures to come up with stricter laws that keep these devices off the market until they have stronger security. But, if you haven’t noticed, legislatures have had their hands full with, well, a whole lot.

“The conversation hasn’t even gotten to the hill because [the Mirai attack] happened during the elections,” Scott says. “The hill is slow to evolve because they think additional standards will somehow snuff out the entrepreneurial marketability. But security-by-design as an enforceable standard is no different from car manufacturers having to include brakes on their vehicles.”

The manufacturers won’t do anything until their hand is forced. And the consumers can only do so much. We’re all left floating in the iceberg field, waiting for the big one to crack the hull.

When I ask Grau to predict the future of these attacks, he mentions the possibility of hackers using ransomware to infect a bunch of devices and then telling the manufacturers to pay up or face the consequences. (Something like this was attempted back in November with the cyber hijacking of San Francisco’s light-rail MUNI system.)

“I think we need to see a huge disaster for people on the hill to start enforcing standards,” Scott says. Say, if a botnet were to take down an electrical grid in the dead of winter. Or shut down the power to a hospital. Or exploit features in smart cars while they’re in motion, causing accidents on the road. Unfortunately, it seems like a big disaster like this isn’t just inevitable, but necessary before regulatory measures are introduced.

Google frequently makes us giggle with the geeky ways it handles things.

Like the time it rewarded the man who managed to buy the "Google.com" domain for one minute: Google gave researcher Sanmay Ved $6,006.13, choosing that specific amount because it spells out Google, numerically — "squint a little and you'll see it!" the company said.

This kind of quirky antic has become almost par for the course for the search giant, which has long been down for a little nerdy fun.

"Googleyness" is all about intellectual creativity, after all.

Here are some of our other favorite times that Google did or responded to something in a particularly silly way.

Jillian D'Onfro contributed to an earlier version of this story. 

It all started with the IPO. Google used a funny string of numbers in its initial S-1 filing for how much it hoped to raise.


The first 10 digits of the mathematical constant "e" are 2,718,281,828.

Then, a year later, Google collected a bit more than $4 billion by selling 14,159,265 of its shares.

Get it? Because "14,159,265" are the first eight digits after the decimal point in the number pi.

The search giant showed off its numerical whimsy again in 2011, when it bid $1,902,160,540 and $2,614,972,128 for some wireless patents.


In case those numbers don't instantly ring a bell: They're Brun's constant and the Meissel-Mertens constant, respectively.

Google didn't end up winning the patents, but it definitely mystified other bidders.

In another auction, Google spent $25 million to buy the entire ".app" domain. But the clever part came when we followed up with the company about it.


"We've been excited and curious about the potential for new top-level domains (TLDs) for .soy long," a Google representative told Business Insider in a pun-riddled email. "We are very .app-y with .how, at a .minna-mum, they have the potential to .foo-ward internet innovation."

The company has also responded to press questions with memes. It sent The Verge this one when it revealed that a weird skull showing up in people's Gmail accounts was caused by a bug in its own in-house debugger.


Source: The Verge

Google once even got a little risqué with this response.

And once it addressed an issue where Google Maps was showing Sauron's tower from "The Lord of the Rings" as appearing in Australia with a comment ... in Elvish.


"If your Elvish is a bit rusty, here is a rough translation," the representative said. "We encourage users to contribute their local knowledge and updates using Google Map Maker, even the whims of Sauron will not compromise our quest to provide useful and accurate maps."

Google has recruited new developers using super-cryptic challenges that people can only access through a secret website.


And when Business Insider reached out to Google about this recruiting technique, we received the following response:

" import string

z=string.ascii_lowercase

m=''.join([z[6],z[11],z[7],z[5]])

print(m) "

The message? "GLHF," which stands for "Good luck, have fun!"

A few months after Google became Alphabet in 2015, it bought the entire alphabet as a domain name, with the URL abcdefghijklmnopqrstuvwxyz.com.

"We realized we missed a few letters in abc.xyz, so we're just being thorough," a representative told Re/code.

In the same vein, Alphabet also bought back a bunch of stock for $5,099,019,513.59.


That's the square root of 26, the number of letters in the alphabet, times a billion.
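
If you want to check the arithmetic yourself, one line of Python does it:

import math

# The square root of 26, times a billion, rounded to cents: 5099019513.59
print(round(math.sqrt(26) * 10**9, 2))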

Not about numbers, but pretty funny: In its official code of conduct, Google declares itself a "dog company"...


"Google's affection for our canine friends is an integral facet of our corporate culture," reads the company's code of conduct on its investor-relations site. "We like cats, but we're a dog company, so as a general rule we feel cats visiting our offices would be fairly stressed out."

... a fact that the company made sure to reference again in 2015 after the press noted that new parent company Alphabet didn't include Google's famous "don't be evil" line in its new code of conduct.

After The Wall Street Journal's Alistair Barr caught the change, the company sent him a cheeky response.

"Individual Alphabet companies may of course have their own codes to ensure they continue to promote compliance and great values," a Google spokesman said. "But if they start bringing cats to work, there's gonna be trouble with a capital T."

When a report surfaced from WWD earlier in 2017 that Google could potentially buy Condé Nast, Google's communications team took the opportunity to write a clever response.


While some people found Google's response pretty funny, WWD made a wisecrack of their own, responding: "Maybe they Googled how to write a witty response."

Author: Avery Hartmans

Source: http://uk.businessinsider.com/funny-google-jokes-responses-2017-1/#it-all-started-with-the-ipo-google-used-a-funny-string-of-numbers-in-its-initial-s-1-filing-for-how-much-it-hoped-to-raise-1

Wednesday, 01 February 2017 12:03

The Internet Is Mostly Bots

Look around you, people of the internet. The bots. They’re everywhere.

Most website visitors aren’t humans, but are instead bots — programs built to do automated tasks. They are the worker bees of the internet, and also the henchmen. Some bots help refresh your Facebook feed or figure out how to rank Google search results; other bots impersonate humans and carry out devastating DDoS attacks.

Overall, bots—good and bad—are responsible for 52 percent of web traffic, according to a new report by the security firm Imperva, which issues an annual assessment of bot activity online. The 52-percent stat is significant because it represents a tip of the scales since last year’s report, which found human traffic had overtaken bot traffic for the first time since at least 2012, when Imperva began tracking bot activity online.

Now, the latest survey, which is based on an analysis of nearly 17 billion website visits from across 100,000 domains, shows bots are back on top. Not only that, but harmful bots have the edge over helper bots, which were responsible for 29 percent and 23 percent of all web traffic, respectively.

“The most alarming statistic in this report is also the most persistent trend it observes,” writes Igal Zeifman, Imperva’s marketing director, in a blog post about the research. “For the past five years, every third website visitor was an attack bot.”

Put another way: More than 94 percent of the 100,000 domains included in the report experienced at least one bot attack over the 90-day period in Imperva’s study.

Websites that are less popular with humans—as measured by traffic—tended to attract more visits from bots. “Simply put,” Zeifman wrote, “good bots will crawl your website and bad bots will try to hack it regardless of how popular it is with the human folk. They will even keep visiting a domain in absence of all human traffic.”

Though bots are interested in websites even when humans are not, bot activity tends to mirror human behavior online. For instance, the most active helper-bot online is what’s known as a “feed fetcher,” and it’s the kind of bot that helps refresh a person’s Facebook feed on the site’s mobile app. Facebook’s feed fetcher, by itself, accounted for 4.4 percent of all website traffic, according to the report—which is perhaps stunning, but not altogether surprising. Facebook is a behemoth, and its bot traffic illustrates as much.

Overall, feed fetchers accounted for more than 12 percent of web traffic last year. Search engine bots, commercial data-extracting spiders, and website-monitoring bots are among the other helpful bots you’re likely to encounter online. (That is, if you consider the collection of your personal data for advertising purposes to be helpful.)

Data-grabbing bots do their work invisibly, while other bots are easier to spot. In fact, bots and people bump into one another often. Spambots show up in comment sections and Twitter bots clog people’s timelines with everything from marketing, to political campaigning, to social activism, to utter nonsense. These sorts of bots aren’t always pleasant, but they aren’t outright dangerous.

For the real villains, we turn to impersonator bots used for DDoS attacks. They accounted for about 24 percent of overall web traffic last year. Top offenders in this category included Nitol malware, a bot called Cyclone meant to mimic Google’s good search-ranking bots, and Mirai malware—a virus that caused mass internet disruptions in the United States in October.

Other bad bots to contend with include unauthorized data scrapers, spambots, and scavengers seeking security vulnerabilities to exploit. Together, they made up about 5 percent of web traffic.

And even though the internet is already mostly bots, we’re only just beginning to see the Bot Age take shape. According to the market-research firm CB Insights, more than a dozen venture-capital-backed bot startups raised their first round of funding last year.

Author: Adrienne LaFrance

Source: https://www.theatlantic.com/technology/archive/2017/01/bots-bots-bots/515043/

You just cannot ignore the fact that when you think about searching online for your assignments and projects, the first name that comes to mind is Google. However, when it comes to academic research, the Google search engine does not serve the purpose as well as it does in most other cases: its search results are not always on target. This doesn’t mean that it is the end of the world for students trying to collect academic data online. Apart from Google, there are a number of search engines that are especially designed for academic research. They can help you get your hands on relevant information without going through irrelevant or low-quality pages.

Given below is a list of some of the best academic search engines that will help you get the research material you want quickly and easily, and without compromising on quality.

Academic Info

Academic Info contains an in-depth directory of the most useful links and resources within a specific subject area. You can browse this website to get a list of useful academic websites for research. The site also offers information on online degrees, online courses, and distance learning from a selection of accredited online schools.

iSeek Education

iSeek Education is easily one of the best and most widely used search engines for academic research on the internet. It has been designed especially with students, teachers, and scholars in mind. This search engine only shows reliable and relevant results, which ultimately saves you time and enables you to get your work done quickly. You can find safe, authoritative, intelligent, and time-saving resources with iSeek.

Virtual LRC

Virtual LRC, or The Virtual Learning Resources Centre, allows you to explore educational sites with high-quality information. It has indexed thousands of academic information websites. On top of that, with custom Google search, you will be able to get more refined results, which will help you complete your research in less time. It has been organized by teachers and library professionals around the world to provide students with great resources for academic assignments and projects. In short, Virtual LRC is a great place to start looking for research material that can help you in your studies.

Refseek

Refseek is an academic search engine that is even simpler than Google in appearance. Refseek does not claim to offer more results than Google. Instead, it removes results that are not related to science, academia, and research. The best thing about Refseek is that you can search for information related to your subject without getting distracted by sponsored links. With a database of over 1 billion documents, web pages, books, journals, newspapers, online encyclopedias, and articles, Refseek is your ultimate companion for academic research.

Google Scholar

Google Scholar, as the name suggests, is an academic search engine from the house of Google. Designed especially to search for scholarly literature, it helps you find relevant information from the world of scholarly research. With Google Scholar, you can explore many sources such as books, dissertations, articles, and abstracts from various academic publishers, professional societies, universities, and other websites. In May 2014, third-party researchers estimated that the Google Scholar database contains roughly 160 million documents.

Microsoft Academic Search

Microsoft Academic Search is a great search engine from the software giant Microsoft. It gives you the ability to explore more than 38 million publications. One of the best features of this search engine is that it provides trends, graphs, and maps for your academic research. In total, its index contains more than 40 million publications and 20 million authors.

Conclusion

If you want your work to be of high quality, then you certainly need to gather information from genuine and reliable sources. The sources mentioned above can be of great help to you in making your research powerful. Ultimately, they will enable you to submit quality projects and assignments.

Author: Liana Daren

Source: http://www.teachercast.net/2016/03/01/6-best-search-engines-academic-research/

Monday, 30 January 2017 11:29

Google removes Plugin controls from Chrome

Google made a change in Chrome 57 that removes options from the browser to manage plugins such as Google Widevine, Adobe Flash, or the Chrome PDF Viewer.

If you load chrome://plugins in Chrome 56 or earlier, a list of installed plugins is displayed to you. The list includes information about each plugin, including a name and description, location on the local system, version, and options to disable it or set it to "always run".

You can use it to disable plugins that you don't require. While you can do the same for some plugins (Flash and the PDF Viewer) using Chrome's Settings, the same is not possible for the DRM plugin Widevine, or for any other plugin Google may add to Chrome in the future.

Starting with Chrome 57, that option is no longer available. This means essentially that Chrome users won't be able to disable -- some -- plugins anymore, or even list the plugins that are installed in the web browser.

Please note that this affects Google Chrome and Chromium.


This goes hand in hand with a change in Chrome 56 that saw plugins getting re-enabled on restart automatically, and without you being able to do anything about that either.

Technically with the latest changes to the plugins handling code all plugins will be in the "enabled" state as seen on the chrome://plugins page.

To sum it up:

  1. chrome://plugins is deprecated in Chrome 57.
  2. Only Flash and the PDF Viewer can be controlled via the Chrome Settings.
  3. All other plugins cannot be controlled anymore by the user.
  4. Disabled plugins like Flash or Widevine are re-enabled in Chrome 56 after restarts.

You have to dig deep on the Chromium bugs website to find information on those changes. This bug highlights that chrome://plugins is deprecated, and that plugin control access has been removed from Chrome with the exception of Adobe Flash and PDF Viewer.

One issue when it comes to disabling Flash is that Chrome handles Flash content differently depending on where it was disabled.

If you disable Flash on chrome://plugins, Flash is completely disabled. If you use the Settings instead, you get a square asking whether you want to enable Flash to play content instead.

Users may overcome this by enabling this flag: chrome://flags/#prefer-html-over-flash

This bug highlights that Google considers all plugins but Flash and the PDF Viewer as integral parts of the Chrome browser, and that it does not want users to disable those.

All other plugins (NaCL and WideVine) are considered integral part of the browser and can not be disabled.

Temporary Solution

The only option that is left is to delete the plugin folder on the local system. The caveat is that it gets added again when Chrome updates.

The location is platform specific. On Windows, it is located here: C:\Program Files (x86)\Google\Chrome\Application\[Chrome Version]\WidevineCdm\.

Close Chrome, delete the folder, and restart the browser. The plugin is no longer loaded by Chrome. You do need to repeat this whenever Chrome updates, though.
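
If you would rather script that chore, a short Python sketch along these lines can do it (this assumes the default Windows install path shown above; run it with Chrome closed and with sufficient privileges):

import shutil
from pathlib import Path

# Default Chrome install location on 64-bit Windows (see the path above).
app_dir = Path(r"C:\Program Files (x86)\Google\Chrome\Application")

# Each Chrome version lives in its own numbered folder; remove WidevineCdm from each.
for widevine in app_dir.glob("*/WidevineCdm"):
    print("Removing", widevine)
    shutil.rmtree(str(widevine))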

Closing Words

Google is removing control over plugins from the web browser, and is rightfully criticized for making that decision as it is anything but user friendly. Let us hope that Vivaldi and Opera won't follow Chrome's example.

Now You: Have you disabled any plugins installed in Chrome?

Author: Martin Brinkmann

Source: https://www.ghacks.net/2017/01/29/google-removes-plugin-controls-from-chrome/

Friday, 27 January 2017 11:14

Oscobo, a new privacy focused search engine

Ever since the Snowden revelations, privacy-focused search engines, and privacy in general, have been booming on the Internet.

Search engines focused on privacy have seen a rise in daily searches. While they are still nowhere near as popular as Google Search or Bing, the two main search services in most parts of the world, they have shown that there is a market for these kinds of services.

Oscobo is a new privacy focused search engine that shares similarities with established players such as Startpage or DuckDuckGo.

The creators of the search engine promise that they don't track users, don't set cookies on users' computer systems, and don't profile users in any shape or form.

Oscobo review

The search engine's current address is https://oscobo.co.uk/, which highlights one of the service's current limitations: it is focused on users from the UK.

The site does not set cookies, which you can verify by opening the Developer Tools of the web browser you are using and checking the resources of the site.
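
If you want to double-check from a script instead of the Developer Tools, a quick sketch with Python's requests library (my illustration, not something from Oscobo) reports any cookies the server tries to set:

import requests

# Fetch the Oscobo homepage and look for cookies in the response.
response = requests.get("https://oscobo.co.uk/")

print("Set-Cookie header:", response.headers.get("Set-Cookie") or "none")
print("Cookie jar:", response.cookies.get_dict() or "empty")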


While that is the case, results include English pages outside the UK as well. The results page looks like any other search engine for the most part but displays results from Twitter next to the actual results which can be interesting as these results are usually not as old (but may be more spammy).


The top of the page lists options to switch from Web searches to videos, images, or news, and you may find advertisements listed on the results page as well.

The only information used to determine which advertisement to display is the search term and the user's location (derived from the IP address), and neither is recorded by the search engine.

It is quite difficult to spot the ad, as it uses the same format as organic results. Only the small "ad" link underneath the description field indicates an advertisement.

Like DuckDuckGo, search results are taken from Bing/Yahoo. Using data from one or multiple of the big search engines out there appears to be the only financially viable solution for privacy focused search companies.

It will be interesting to see how Oscobo will fare when they enter non-English markets, as localized Bing results are usually not that good.

Users who like the search engine can make it the default search engine for their browser, add it to their browser, or install the extension. The options are displayed on the homepage, but only if the browser used is supported.

The extension, for instance, seems to be available only for Chrome-based browsers right now.

Closing Words

Oscobo or DuckDuckGo? The two search engines are very similar in many regards: both use Bing to power their results, neither tracks or profiles users, and both rely on advertising for revenue.

If you look closer, you find distinguishing factors. DuckDuckGo concentrates on the US market, while Oscobo concentrates on the UK (and, in the future, other European markets). DuckDuckGo certainly has the edge when it comes to features: its !bang syntax is excellent, for instance, and Oscobo does not support a feature like the zero-click information that DuckDuckGo may display on top of the results.

Author: Martin Brinkmann

Source: http://www.ghacks.net/2016/01/07/oscobo-a-new-privacy-focused-search-engine/

Ironically, many of the things which don’t go down too well in the average workplace could be signs of a creative mind, like missing deadlines and daydreaming.

Dr Jeremy Dean of Psyblog says that recent studies have shown that daydreaming and sarcasm both require considerable mental powers – and can be signs of creativity.

Dr Dean writes, ‘People tend to think of daydreaming and letting the mind wander as a waste of time. How wrong they are.’

Other signs of creativity include sarcasm – which actually inspires creativity – talking to people one disagrees with, and missing deadlines.

People who let their minds wander could actually be ‘digging deep’ for important inspiration, Dean writes.

Creative people also tend to be easily distracted, to make decisions unconsciously – and also tend to be neurotic.


Professor Moshe Bar says, ‘Over the last 15 or 20 years, scientists have shown that — unlike the localized neural activity associated with specific tasks — mind wandering involves the activation of a gigantic default network involving many parts of the brain.

This cross-brain involvement may be involved in behavioral outcomes such as creativity and mood, and may also contribute to the ability to stay successfully on-task while the mind goes off on its merry mental way.’

Author: Rob Waugh

Source: https://ca.news.yahoo.com/7-strange-signs-youre-a-really-creative-person-by-psychologists-102109228.html

One of SEO’s hottest topics recently has been the need to migrate from HTTP to HTTPS, especially for those websites which collect personal data or passwords. Websites serving content over the secure HTTPS protocol have been given a ranking boost since 2014. Google Chrome (which owns ~55% of the desktop browser market) is visibly branding websites as secure.

Google Chrome 56, the upcoming browser update, will also start branding websites served over HTTP as not secure. (If you haven’t yet made the switch to HTTPS, you should read this SEJ article from Tony Messer.)

So what’s next on Google’s agenda? Within the SEO community, we already know that Google is able to ‘guess’ standard URLs on websites, such as XML sitemaps, about pages, and contact pages. Could it be plausible for Google to start making passive scans of websites too?

Passive scanning is a form of web vulnerability testing and is seen as a less dangerous alternative to active vulnerability testing, where a system is probed and stressed, and carries risks ranging from performance lag to system crash.

While passive scans won’t discover as much as an active test, they may provide enough information to aid an IDT (Intrusion Detection Tool).
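
To make the distinction concrete, here is a toy passive check in Python (a sketch under my own assumptions, with a hypothetical target URL; nothing Google has announced): it sends one ordinary request and only reads the response headers, rather than probing or stressing the system:

import requests

# One ordinary GET request; a passive check only inspects what comes back.
resp = requests.get("https://example.com/")  # hypothetical target

# Version banners in these headers can reveal outdated server software.
for header in ("Server", "X-Powered-By"):
    print(header + ":", resp.headers.get(header, "not disclosed"))

# Missing security headers are another common passive finding.
for header in ("Strict-Transport-Security", "Content-Security-Policy", "X-Frame-Options"):
    if header not in resp.headers:
        print("Missing security header:", header)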

The Case for Google and Passive Scans

As HTTPS becomes widespread and the new standard for all websites, the bar of what is and isn’t safe needs to be raised again.

At the end of 2016, Google Webmasters published a link to the Sucuri 2016/Q2 Hacked Website Report, highlighting analysis of 9,000 infected websites made up of WordPress, Joomla!, Magento, and Drupal builds.

Google’s main reason for flagging this report is the increased number of hacks for SEO purposes. Hackers targeted search optimized websites to ‘piggyback’ off of their good rankings. After gaining access, they created malicious 301 redirects to other websites. The intention is to extract information or money from the user, or infect their machine with malware, ransomware, or viruses.

A site hack can leave lasting damage. For example, the below screenshot is of my landlord’s website:


This is a WordPress website that was hacked via an outdated plugin. Instead of displaying a title tag for a letting (leasing) agency based in the North of England, it’s displaying the name of a Japanese consumer electronics company.

The website still appears within Google UK for its exact match name, but little else. Anyone who comes across the site via online search will likely perceive this as a black mark against my landlord’s trust and credibility. 

Secure Your Website Ahead of Time

Google has given us warning that they are going to start marking websites as not secure in Chrome 56, giving webmasters the opportunity to migrate to HTTPS. But there is currently no indication that there will be any advance notice of passive scans or other security checks. Here’s how you can stay ahead of the game:

  1. If you’re running open source software, keep it up to date by making sure it’s running the latest version.
  2. Check your plugins. According to the Sucuri report, 22% of all WordPress hacks they found came from just three plugins not being updated. It’s estimated that WordPress websites have an average of 12 plugins installed. Any that haven’t been updated recently by the developer should be considered for replacement. Plugins that have been abandoned by their developers can become insecure over time and offer easy access to a website.
  3. Use an edge network company and put your website behind a Web Application Firewall (WAF) such as Cloudflare or Amazon AWS. This will protect your website from a number of online attack types such as SQL injections or cross-site scripting.
  4. Note that Google could also include known vulnerabilities that aren’t platform-specific in their passive scans, such as the OWASP Top 10. The OWASP Top 10 is updated periodically and identifies the ten flaws most exploited by hackers. You should stay on top of their recommendations and updates.

How Do I Know If I’ve Been Hacked?

There are a number of articles and studies that claim that the ‘This site may be hacked’ message only appears on 50% of all websites that have been hacked. While Google now says they will flag malicious redirects they identify via Google Search Console, you still can’t rely on Google as your only indicator as to whether your site has been compromised.

In my experience, keeping an eye on your Search Console data for anomalies is worthwhile. In one case, I saw a non-sports-related site start to inexplicably gain impressions for cheap NFL jerseys and sportswear terms. This turned out to be "bait and switch" hacking: the hackers had managed to inject around 500 sportswear-related pages, all of which redirected to another website via JavaScript. What did the hackers gain from all this? An affiliate fee from the click.

Lastly, if you’re seeing an unexplained drop in traffic or rankings, hacking should be considered (as one of many possibilities).
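
Beyond watching Search Console, you can also spot-check your own pages for the kind of injected JavaScript redirects described above. Here is a rough Python sketch (the URL list is hypothetical and the pattern deliberately crude; a real audit would go further):

import re
import requests

# Hypothetical list of your own URLs to spot-check.
urls = ["https://example.com/", "https://example.com/about"]

# Crude pattern for client-side redirects like the sportswear hack above.
redirect_pattern = re.compile(r"window\.(top\.)?location\s*=", re.IGNORECASE)

for url in urls:
    html = requests.get(url).text
    if redirect_pattern.search(html):
        print("Possible injected redirect on", url)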

Putting the User First

Google has been both reactive and proactive to the change in search behavior, as users move to spend more time on mobile vs desktop.

Given that their Webmaster’s blog post also promotes the vision of a no hack web (sporting the #NoHack hashtag), in my opinion there is a strong case for Google incorporating both passive scans and/or ranking incentives for websites that are secure beyond HTTPS encryption. Google already takes into account whether or not a website affects “your money or your life” (YMYL), so passive scans could be an extension of this philosophy, as a hacked website could obviously impact a user in profoundly negative ways.

Author: Dan Taylor

Source: https://www.searchenginejournal.com/will-google-search-give-weight-cybersecurity-2017-beyond/184145/

NEW YORK, Jan. 23, 2017 /PRNewswire/ -- Report Details
The latest report from business intelligence provider visiongain offers comprehensive analysis of the global Internet of Things market. Visiongain assesses that this market will generate revenues of $1,128 billion in 2017.

Internet of Things (IoT) Outlook

We expect that in the next five years, IoT will further penetrate several different industries and human activities, ranging from the home to the transportation industry, and even affecting healthcare systems. IoT has not only allowed for machine-to-machine and machine-to-object communications, but is radically changing how we perform daily activities such as driving or shopping for daily goods, generating great value for enterprises in the form of increased production and efficiency. This is an example of the business-critical news that you need to know about - and, more importantly, you need to read visiongain's objective analysis of how this will impact your company and the IoT industry more broadly. How are you and your company reacting to these changes? Are you sufficiently informed?

How this report will benefit you

Read on to discover how you can exploit the future business opportunities emerging in the internet of things sector. Visiongain's new study tells you and tells you NOW.

In this brand new report you will receive 51 in-depth tables, charts and graphs PLUS 3 EXCLUSIVE interviews – all unavailable elsewhere.

The 118 page report provides clear detailed insight into the global Internet of Things market. It reveals the key drivers and challenges affecting the market.

By ordering and reading our brand new report today you will be better informed and ready to act.

Report Scope

- How is the Internet of Things Market evolving?
- Global Internet of Things market forecasts from 2017-2022
- Regional Internet of Things market forecasts from 2017-2022 covering
- Asia-Pacific
- Latin America
- Europe
- Middle East and Africa
- North America
- Country level Internet of Things forecasts from 2017-2022 covering
- China
- USA
- Japan
- France
- UK
- Germany
- India
- Russia
- Italy
- Brazil
- RoW (Rest of World)
- Internet of Things submarket forecasts from 2017-2022 covering:
- Industrial IoT
- Automotive & Transportation IoT
- Healthcare IoT
- Consumer Electronics IoT
- Others IoT

- Analysis of the key factors driving growth in the global and regional / country level Internet of Things markets from 2017-2022
- Analysis of game changing technological trends being employed by the leading players and how these will shape the Internet of Things industry.
- Who are the leading IoT players and what are their prospects over the forecast period?
- Amazon
- Apple
- ARM
- AT&T
- BlackBerry
- China Mobile
- Cisco
- Freescale
- General Electric
- Google
- HP
- IBM
- Intel
- Kore Telematics
- Microsoft
- Oracle
- PTC
- Qualcomm
- Samsung
- SAP
- Verizon Communications

- SWOT analysis of the major strengths and weaknesses of the IoT market, together with the opportunities available and the key threats faced.
- Market conclusions & recommendations.
- 3 Full transcripts of exclusive visiongain interviews with key opinion-leaders in the market, from the following companies:
- Able Device
- Humavox
- PTC

How will you benefit from this report?
- This report will keep your knowledge base up to speed. Don't get left behind
- This report will reinforce strategic decision-making based upon definitive and reliable market data
- You will learn how to exploit new technological trends
- You will be able to realise your company's full potential within the market
- You will better understand the competitive landscape and identify potential new business opportunities & partnerships


Who should read this report?
- Anyone within the IoT value chain.
- Internet of Things vendors
- M2M Vendors
- Mobile equipment vendors
- Mobile infrastructure solution providers
- Mobile network operators (MNO)
- M2M service providers
- Device manufacturers (OEM)
- Chip set vendors
- Electricity utility companies
- Grid security and smart grid vendors
- IT vendors and service providers
- CEOs
- COOs
- CIOs
- Business development managers
- Marketing managers
- Technologists
- Suppliers
- Investors
- Banks
- Government agencies
- Contractors
Read the full report: http://www.reportlinker.com/p04636248-summary/view-report.html

About Reportlinker 
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.



http://www.reportlinker.com
__________________________
Contact Clare:
US: (339)-368-6001
Intl: +1 339-368-6001

Source: http://www.prnewswire.com/news-releases/internet-of-things-iot-market-report-2017-2022-300395148.html
