Barbara Larson - AIRS

Google Home devices capture the attention of shoppers, who reveal their intention to increase online shopping via voice-controlled devices next year

2% of all UK consumers are already using voice-controlled devices such as Google Home and Amazon Echo for online shopping – that’s 20% of all device owners in the UK.

The survey of almost 3000 UK consumers, conducted by post-purchase experts Narvar in association with YouGov, reveals that adoption of voice technology among consumers is at a tipping point, and use cases are expanding well beyond traffic, weather, and entertainment.

Most notably, consumers were asked about their future intentions to purchase online; 6% of respondents said they expect to use a voice-controlled device to shop online in 2018, triple the 2017 figure.

Amit Sharma, CEO, Narvar, comments: “Adoption of voice-controlled devices has accelerated in 2017, and for retailers and brands, this represents an important new channel for personalized customer communications. Although it’s still early days, we have found that UK consumers are already using voice for shopping and that activity will increase.”

For the purposes of this research, ‘online shopping’ covers the entire process – from researching products and making the purchase to tracking the order and making customer services inquiries.


New year, new you?

You know I don’t buy that. The goal is never to be perfect. The goal is just to be a little better than before. So I thought this week I’d share three productivity tips I’ve recently started using in my life.

Phone it in

Do you have too many boring meetings in dusty rooms with boxes of stale Timbits in the middle of the table? “I’ll take the rock hard jelly one.”

Stop! Here’s my challenge. Take the recurring one-on-one you have with somebody you know really well (that boss or co-worker you like) and move the meeting to the phone. Permanently. This lets you pop in your headphones, zip on your coat and take the meeting outside.

The average person walks five kilometers an hour. That means if you can take even two meetings outside a day, then you’re getting an extra 10 kilometers of exercise. Never mind the fresh air and clarity of thought that comes from getting away from the screen.


And what do you say when someone asks if that’s an ambulance siren in the background of the budget review? Easy. “I do most of my meetings walking because it helps me think clearly and avoid screens and distractions.” Who’s going to argue with that?

Maintain momentum with Momentum

Momentum gives you a rotating chilled-out picture, a personal hello, the weather outside and your daily goal. (NEIL PASRICHA)

Have you heard of the Google Chrome extension Momentum? People had been telling me about it for years.

“Every time you open a new browser tab it hits you with your daily goal on top of beautiful images of nature” must have sounded too simple because I never bothered with it until recently.

And now I’m in love. A rotating chilled-out picture, a personal hello, the weather outside and my daily goal! It pushes the daily goal to the front of my head over and over throughout the day.

And it helps prevent falling into an internet rabbit hole — you know, where you somehow end up browsing mindlessly for 45 minutes until you’re reading the Wikipedia entry on 1970s-era baseball player Rance Mulliniks or the Boer War for no reason.

Get anti-social

We are all getting far too addicted to our cell phones. New research shows we’re using them for more than four hours a day. That ain’t good.

So, what do you do? Take a deep breath and follow my lead.

I deleted every single game and social media app off my phone. Yes, it was painful. But I didn’t delete my accounts. I just removed “instant access” to them from my pockets all day. No alerts, notifications, updates. Nothing.

Sure, I can still log in from my laptop and I can (and do) download the apps again when I’m on Wi-Fi and want to get in. But this means I do it once a week or so, spend meaningful time on there and then delete the app again. I write all my social content each week and use Buffer to plan and post it.

My goal is to avoid the “mindless skimming” effect that constantly tugs away at getting anything done. No need to be a Luddite. Just be smart.

So, those are three of my tips, but what are yours? If you’d like to submit an idea I should try (and possibly include in a future article), just drop me a line. I’d love to hear from you.

Source: This article was published on thestar.com by Neil Pasricha

Searching video surveillance streaming for relevant information is a time-consuming mission that does not always convey accurate results. A new cloud-based deep-learning search engine augments surveillance systems with natural language search capabilities across recorded video footage.

The Ella search engine, developed by IC Realtime, uses both algorithmic and deep learning tools to give any surveillance or security camera the ability to recognize objects, colors, people, vehicles, animals and more.

It was designed with the technology backbone of Camio, a startup founded by ex-Googlers who realized there could be a way to apply search to streaming video feeds. Ella makes every nanosecond of video searchable instantly, letting users type in queries like “white truck” to find every relevant clip instead of searching through hours of footage. Ella quite simply creates a Google for video.

Traditional systems only allow the user to search for events by date, time, and camera type, returning very broad results that still require sifting, according to businesswire.com. The average surveillance camera sees less than two minutes of interesting video each day despite streaming and recording 24/7.

Ella instead does the work for users to highlight the interesting events and to enable fast searches of their surveillance and security footage. From the moment Ella comes online and is connected, it begins learning and tagging objects the cameras see.

The deep learning engine lives in the cloud and comes preloaded with recognition of thousands of objects like makes and models of cars; within the first minute of being online, users can start to search their footage.

Hardware agnostic, the technology also solves the issue of limited bandwidth for any HD streaming camera or NVR. Rather than push every second of recorded video to the cloud, Ella features interest-based video compression: machine learning algorithms recognize patterns of motion in each camera scene and flag what is interesting, so Ella records in HD only when it sees something important. Uninteresting events are still stored in a low-resolution time-lapse format, providing 24/7 continuous security coverage without using up valuable bandwidth.
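IC Realtime hasn’t published how Ella decides what counts as “interesting,” but the general idea of interest-based recording can be approximated with simple frame differencing: record in high quality only when inter-frame motion exceeds a threshold, and otherwise keep an occasional low-resolution time-lapse frame. The sketch below is purely illustrative (frames are modeled as flat lists of pixel intensities, and all thresholds are made up):

```python
# Illustrative sketch of interest-based video compression. A frame is a
# flat list of pixel intensities; the mean absolute difference between
# consecutive frames stands in for motion detection.

def mean_abs_diff(prev, curr):
    """Average per-pixel change between two consecutive frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def classify_frames(frames, motion_threshold=10.0, timelapse_every=5):
    """Return a recording decision ('hd', 'lowres', or 'skip') per frame."""
    decisions = ["lowres"]  # always keep the first frame as a baseline
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > motion_threshold:
            decisions.append("hd")      # motion detected: record full quality
        elif i % timelapse_every == 0:
            decisions.append("lowres")  # periodic low-res time-lapse frame
        else:
            decisions.append("skip")    # nothing interesting: save bandwidth
    return decisions

static = [0] * 16                 # an all-dark frame
moving = [0] * 8 + [200] * 8      # half the pixels light up
print(classify_frames([static, static, moving, static, static, static]))
```

Only the frames tagged "hd" would be pushed to the cloud at full resolution; the rest form the low-bandwidth time-lapse record.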

Ella works with both existing DIY and professionally installed surveillance and security cameras, and comprises an on-premise video gateway device and a cloud platform subscription.

Source: This article was published on i-hls.com

IOT IS COMING and a lot of IT execs are scared silly. Or maybe it’s more accurate to say they are resigned to their fates.

In a May study of 553 IT decision makers, 78% said they thought it was at least somewhat likely that their businesses would suffer data loss or theft enabled by IoT devices. Some 72% said the speed at which IoT is advancing makes it harder to keep up with evolving security requirements.

Such fears are rooted in reality. Last October, hackers took down the company that controls much of the Internet’s domain name system infrastructure using some 100,000 “malicious endpoints” from IoT devices. More recently, the WannaCry ransomware attack crippled some Bank of China ATM networks and washing machine networks. For naysayers, those attacks validated fears that hackers could cause mayhem by commandeering our IoT devices.

At the same time, the IoT industry continues its steady growth path. Gartner predicts that by 2020 there will be some 21 billion IoT devices in existence, up from 5 billion in 2015. About 8 billion of those devices will be industrial, not consumer devices. Both present a juicy target for hackers.

For some, it seems like IoT is a slow-motion wreck playing out in real time. “The reason that the industry hasn’t backed off is the value proposition is very powerful,” said Chris Moyer, CTO and VP of cybersecurity at DXC. “The risk proposition is also very powerful and that’s where the balancing is going on.”

Regardless of the industry’s appetite, IoT isn’t likely to scale until the industry addresses its security problem. That will take cooperation among vendors, government intervention, and standardization. As of 2017, none of those appears to be on the horizon.

What’s wrong with IoT security

The consensus is that IoT is still under-secured and presents potentially catastrophic security risks as companies trust IoT devices for business, operational and safety decisions. Standards are not yet in place, and vendors are struggling to embed the right level of intelligence and management into products. Add the increasing collaboration among attackers, and these challenges need to be addressed across several dimensions.

Consider what we face with the security of IoT devices:

  • Unlike PCs or smartphones, IoT devices are generally short on processing power and memory. That means that they lack robust security solutions and encryption protocols that would protect them from threats.
  • Because such devices are connected to the Internet, they will encounter threats daily. And search engines for IoT devices exist that offer hackers an entrée into webcams, routers and security systems.
  • Security was never contemplated in the design or development stages for many of these Internet-connected devices.
  • It’s not just the devices themselves that lack security capability; many of the networks and protocols that connect them don’t have a robust end-to-end encryption mechanism.
  • Many IoT devices require manual intervention to be upgraded while others can’t be upgraded at all. “Some of these devices were built very rapidly with limited design thinking beyond Iteration 1 and they’re not update-able,” said Moyer.
  • IoT devices are a “weak link” that allows hackers to infiltrate an IT system. This is especially true if the devices are linked to the overall network.
  • Many IoT devices have default passwords that hackers can look up online. The Mirai distributed denial of services attack was possible because of this very fact.
  • The devices may have “backdoors” that provide openings for hackers.
  • The cost of security for a device may negate its financial value. “When you have a 2-cent component, when you put a dollar’s worth of security on top of it, you’ve just broken the business model,” said Beau Woods, an IoT security expert.
  • The devices also produce a huge amount of data. “It’s not just 21 billion devices you have to work with,” said Kieran McCorry, director of technology programs at DXC. “It’s all the data generated from 21 billion devices. There are huge amounts of data that are almost orders of magnitude more than the number of devices that are out there producing that data. It’s a massive data-crunching problem.”

Taking such shortcomings into account, businesses can protect themselves to a certain extent by following best practices for IoT security. But if compliance isn’t 100% (which it won’t be) then, inevitably, attacks will occur and the industry will lose faith in IoT. That’s why security standards are imperative.

Who will set the standards?

Various government agencies already regulate some IoT devices. For instance, the FAA regulates drones and the National Highway Traffic Safety Administration regulates autonomous vehicles. The Department of Homeland Security is getting involved with IoT-based smart cities initiatives. The FDA also has oversight of IoT medical devices.

At the moment though, no government agency oversees the IoT used in smart factories or consumer-focused IoT devices for smart homes. In 2015, the Federal Trade Commission issued a report on IoT that included advice on best practices. In early 2017, the FTC also issued a “challenge” to the public to create a “tool that would address security vulnerabilities caused by out-of-date software in IoT devices” and offered a $25,000 prize for the winner.

Moyer said that while the government will regulate some aspects of IoT, he believes only the industry can create a standard. He envisions two pathways to such a standard: either buyers will push for one and refuse to purchase items that don’t support it, or a dominant player or two will set a de facto standard through market dominance. “I don’t think it’s going to happen that way,” Moyer said, noting that no such player exists.

Instead of one or two standards, the industry has several right now, and none appears to be edging toward dominance. Those include vendor-based standards and ones put forth by the IoT Security Foundation, the IEEE, the Trusted Computing Group, the IoT World Alliance and the Industrial Internet Consortium Security Working Group. All of those bodies are working on standards, protocols and best practices for securing IoT environments.

Ultimately what will change the market is buyers, who will begin demanding standards, Moyer said. “Standards get set for lots of reasons,” Moyer said. “Some are regulatory but a lot is because buyers say it’s important to me.”

Lacking standards, Woods sees several paths to improve IoT security. One is transparency in business models. “If you’re buying 1,000 fleet vehicles, one might be able to do over-the-air updates and the other we’d have to replace manually and it would take seven months,” Woods said. “It’s a different risk calculus.”

Another solution is to require manufacturers to assume liability for their devices. Woods said that’s currently the case for hardware devices, but it is often unclear who assumes liability for software malfunctions.

AI to the rescue?

A wildcard in this scenario is artificial intelligence. Proponents argue that machine learning can spot general usage patterns and alert the system when abnormalities occur. Bitdefender, for instance, looks at cloud server data from all endpoints and uses machine learning to identify abnormal or malicious behavior. Just as a credit card system might flag a $1,000 splurge in a foreign country as suspicious, an ML system might identify unusual behavior from a sensor or smart device. Because IoT devices are limited in function, it should be relatively easy to spot such abnormalities.
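Bitdefender’s actual models aren’t public, but the core idea (flag readings that deviate sharply from a device’s learned baseline) can be illustrated with a plain z-score detector. Everything in this sketch, from the class name to the thermostat readings and the threshold, is hypothetical:

```python
import statistics

# Minimal sketch of behavioral anomaly detection for an IoT sensor:
# learn a baseline from normal readings, then flag any new reading
# whose z-score exceeds a threshold. Real products use far richer
# features; this only shows the principle.

class SensorAnomalyDetector:
    def __init__(self, baseline_readings, z_threshold=3.0):
        self.mean = statistics.mean(baseline_readings)
        self.stdev = statistics.stdev(baseline_readings)
        self.z_threshold = z_threshold

    def is_anomalous(self, reading):
        z = abs(reading - self.mean) / self.stdev
        return z > self.z_threshold

# A thermostat normally reports around 21 °C with small fluctuations.
baseline = [20.8, 21.1, 20.9, 21.0, 21.2, 20.9, 21.1, 21.0]
detector = SensorAnomalyDetector(baseline)

print(detector.is_anomalous(21.3))  # ordinary fluctuation: not flagged
print(detector.is_anomalous(95.0))  # wildly out of range: flagged
```

Because a single-purpose device’s behavior is so regular, even a crude baseline like this separates normal drift from the kind of sudden change a compromised device might show.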

Since the use of machine learning for security is still new, defenders of this approach advocate using a security system that includes human intervention.

The real solution: A combination of everything

While AI may play a bigger role in IoT security than initially thought, a comprehensive IoT solution will include a bit of everything, including government regulation, standards, and AI.

The industry is capable of creating such a solution, but the catch is that it needs to do it on a very accelerated timetable. At the moment, in the race between IoT security and IoT adoption, the latter is winning.

So what can companies do now to latch on to IoT without making security compromises? Moyer had a few suggestions:

  1. Take an integration approach. This is a case where more is better. Moyer said that companies using IoT should integrate management solutions: bring the IoT platform in for primary connectivity and data movement, then pull that data into a more sophisticated analytics environment that lets them do automated behavioral analysis. “By integrating those components, you can be more confident that what you’ve got from a feed in an IoT environment is more statistically valid,” he said.
  2. Pick the right IoT devices. Those are devices that have a super-strong ecosystem and a set of partners that are being open about how they’re sharing information.
  3. Use IoT Gateways and Edge Devices. To mitigate the overall lack of security, many companies are using IoT gateways and edge devices to segregate insecure devices and provide layers of protection between them and the Internet.
  4. Get involved in creating standards. On a macro level, the best thing you can do to ensure IoT security over the long run is to get involved in setting standards both in your particular industry and in tech as a whole.

This article was produced by WIRED Brand Lab for DXC Technology.

What do real customers search for?

It seems like a straightforward question, but once you start digging into research and data, things become muddled. A word or phrase might be searched for often, yet that fact alone doesn’t mean those are your customers.

While a paid search campaign will give us insight into our “money” keywords — those that convert into customers and/or sales — there are also many other ways to discover what real customers search.

Keyword Evolution

We are in the era where intent-based searches are more important to us than pure volume. As the search engines strive to better understand the user, we have to be just as savvy about it too, meaning we have to know a lot about our prospects and customers.

In addition, we have to consider voice search and how that growth will impact our traffic and ultimately conversions. Most of us are already on this track, but if you are not or want to sharpen your research skills, there are many tools and tactics you can employ.

Below are my go-to tools and techniques that have made the difference between average keyword research and targeted keyword research that leads to interested web visitors.

1. Get to Know the Human(s) You’re Targeting

Knowing the target audience, I mean really knowing them, is something I have preached for years. If you have read any of my past blog posts, you know I’m a broken record.

You should take the extra step to learn the questions customers are asking and how they describe their problems. In marketing, we need to focus on solving a problem.

SEO is marketing. That means our targeted keywords and content focus should be centered on this concept.

2. Go Beyond Traditional Keyword Tools

I love keyword research tools. There is no doubt they streamline the process of finding some great words and phrases, especially the tools that provide suggested or related terms that help us build our lists. Don’t forget about the not-so-obvious tools, though.

Demographics Pro is designed to give you detailed insights into social media audiences, which in turn gives you a sense of who might be searching for your brand or products. You can see what they’re interested in and what they might be looking for. It puts you on the right track to targeting words your customers are using versus words your company believes people are using.

You can glean similar data about your prospective customers by using a free tool, Social Searcher. It’s not hard to use — all you have to do is input your keyword(s), select the source and choose the post type. You can see recent posts, users, sentiment and even related hashtags/words, as reflected in the following Social Searcher report:

social searcher screen shot

If you are struggling with your keywords, another great tool to try is Seed Keywords. This tool makes it possible to create a search scenario that you can then send to your friends. It is especially useful if you are in a niche industry and it is hard to find keywords.

Once you have created the search scenario, you get a link that you can send to people. The words they use to search are then collected and available to you. These words are all possible keywords.

seed keywords screen shot

3. Dig into Intent

Once I get a feel for some of the keywords I want to target, it is time to take it a step further. I want to know what type of content is ranking for those keywords, which gives me an idea of what Google, and the searchers, believe the intent to be.

For the sake of providing a simple example (there are many other types of intent that occur during the buyer’s journey), let’s focus on two main categories of intent: buy and know.


Let’s say I’m targeting the term “fair trade coffee:”

Google search result page

Based on what is in the results, Google believes the searcher’s intent could either be to purchase fair trade coffee or to learn more about it. In this case, the page I am trying to optimize can be targeted toward either intent.

Here’s another example:

Google search result page

In this scenario, if I were targeting the keyword “safe weed removal,” I would create and/or optimize a page that provides information, or in other words, satisfies the “know” intent.

There are many tools that can help you determine what pages are ranking for your targeted keywords, including SEOToolSet, SEMRush, and Ahrefs. You would simply click through them to determine the intent of the pages.

4. Go from Keywords to Questions

People search questions. That’s not newsworthy, but we should be capitalizing on all of the opportunities to answer those questions. Therefore, don’t ever forget about the long-tail keyword.

Some of my favorite tools to assist in finding questions are Answer the Public, the new Question Analyzer by BuzzSumo, and FaqFox.

Answer The Public uses autosuggest technology to present the common questions and phrases associated with your keywords. It generates a visualization of data that can help you get a better feel for the topics being searched.

With this tool, you get a list of questions, not to mention other data that isn’t depicted below:

Answer the public chart

The Question Analyzer by BuzzSumo locates the most popular questions that are asked across countless forums and websites, including Amazon, Reddit, and Quora. If I want to know what people ask about “coffee machines,” I can get that information:


question analyzer screen shot

FaqFox will also provide you with questions related to your keywords using such sites as Quora, Reddit, and Topix.

For example, if I want to target people searching for “iced coffee,” I might consider creating and optimizing content based on the following questions:

faq fox screen shot

Final Thoughts

There are constantly new techniques and tools to make our jobs easier. Your main focus should be on how to get customers to your website, which is done by knowing how to draw them in with the right keywords, questions, and content.

 

Source: This article was published on Search Engine Journal by Mindy Weinstein

As the updates continue to roll out, early indications suggest disruptions in mobile SERPs, with sites lacking schema data and those relying on doorway pages most affected

Google has confirmed what many in the search industry have seen over the past week, updates to their algorithm that are significantly shifting rankings in the SERPs. A Google spokesperson told Search Engine Land “We released several minor improvements during this timeframe, part of our regular and routine efforts to improve relevancy.”

Our own Barry Schwartz analyzed his Search Engine Roundtable survey of 100 webmasters and concluded that the updates are related to keyword permutations and sites utilizing doorway pages. You can read his full analysis here.

Early signs point to mobile & schema

I reached out to a few of the SEO tool vendors that do large-scale tracking of ranking fluctuations to get their sense of where the updates may be targeted.

Ilya Onskul, the Product Owner of SEMrush Sensor gave this analysis:

“SEMrush Sensor follows all the changes that occur on Google SERPs in 6 countries for both mobile and desktop separately. On top of the general volatility score per country, Sensor tracks scores for various industries and indicates the change in 15 SERP features and % of HTTPS and AMP.

Some industries experience more change than others on a daily basis (for example, due to higher competitiveness). Thus, Sensor introduced the Deviation score, which analyzes which industries had the biggest volatility spikes relative to their usual score.”
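SEMrush has not published the formula behind the Deviation score. As a rough illustration of the idea of “spikes relative to the usual score,” a plain z-score against an industry’s own trailing volatility captures the comparison; all category names and numbers below are made up:

```python
import statistics

# Illustrative sketch of a "deviation" score: how unusual today's SERP
# volatility is for an industry, relative to that industry's own recent
# history. This is a plain z-score, not SEMrush's actual formula.

def deviation_score(history, today):
    """z-score of today's volatility against the trailing window."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (today - mean) / stdev

# A competitive category is turbulent every day, so a busy reading there
# is less remarkable than the same reading in a normally calm category.
autos_history = [6.0, 7.0, 6.5, 7.5, 6.8, 7.2]    # usually turbulent
science_history = [2.0, 2.2, 1.9, 2.1, 2.0, 2.1]  # usually calm

print(round(deviation_score(autos_history, 8.0), 1))
print(round(deviation_score(science_history, 8.0), 1))
```

The same raw volatility reading of 8.0 produces a modest deviation for the turbulent category but an enormous one for the calm category, which is exactly why a deviation-style score is more informative than raw volatility alone.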

SEMrush Sensor data for all keyword categories (US) – December 20

Based on this data, Onskul concludes: “Normally, December is one of the calmest months when it comes to SERP volatility, as Google tries to minimize potential impact before the big holidays. But something happened around December 14, something that Barry Schwartz called the Maccabees Update, or the pre-holiday update. Sensor spotted the highest SERP volatility on mobile (slightly less on desktop) across most categories, with Autos & Vehicles, Law & Government, and Reference most affected on mobile.

In fact, right now, on December 19, Sensor is reporting another extreme spike in volatility. Now, Hobbies & Leisure, Science, Jobs & Education, Home & Garden, Internet & Telecom, have been affected the most. And the biggest fluctuations again take place on mobile.

Of course, it’s too early to come to conclusions on what’s going on and how to adjust to the changes (as we can’t really predict what exactly has changed), but what we know for now is that some new tweaks or updates were rolled out on December 19 for the US, and with a domino effect, the dramatic rise in volatility caught up in the UK, Germany, France, Australia and Spain the next day, which means that the update that was tested on the Google US on December 19 is now spreading across the world.”

We also reached out to Searchmetrics for their analysis. Founder and CTO Marcus Tober noted that they prefer to conduct a deep analysis of algorithmic fluctuations after a sustained change has taken place, saying, “At first we saw some changes that looked like typical Panda and Phantom symptoms, but not on a large systematic scale. Many sites without Schema.org integration have lost visibility, but we can’t determine, based on such a short time, what the overall systematic changes are.”

The MozCast continues to likewise show rankings turbulence as the updates roll out:

MozCast for Tuesday, December 19

With the holidays upon us and what would otherwise have been a slow week ahead, now is a good time to check your rankings and start auditing if, where, and why you might see changes.

Source: This article was published on searchengineland.com by Michelle Robbins

Publishers and webmasters might not like this new feature Google is testing.

Google has started testing and potentially rolling out a new feature in search that shows a carousel with a list of answers directly within the search results snippets. It shows the main search result snippet, and below it, it shows a carousel of answers picked from the content on the page the snippet is linking to.

This comes in handy with forum-related threads where someone asks a question and multiple people give their answers. In addition, Google is labeling which answer is the “best” and shows that answer first in the search results.

Here is a picture from @glenngabe:

I suspect Google is picking the best answer from a label in the thread itself.

Of course, this can be a concern for those who run answer sites. Instead of a searcher clicking from Google’s search results to an answer site webpage, the searcher can quickly see a snippet or the full answer in these answer carousels.

Source: This article was published on searchengineland.com by Barry Schwartz

Mozilla rolled out a major update to its Firefox web browser on Tuesday with a bevy of new features, and one old frenemy: Google.

In a blog post, Mozilla said Firefox’s default search engine will be Google in the U.S., Canada, Hong Kong and Taiwan. The agreement recalls a similar, older deal that was scuttled when Firefox and Google’s Chrome web browser became bitter rivals. Three years ago, Mozilla switched from Google to Yahoo as the default Firefox search provider in the U.S. after Yahoo agreed to pay more than $300 million a year over five years — more than Google was willing to pay.

The new Firefox deal could boost Google’s already massive share of the web-search market. When people use Firefox, Google’s search box will be on the launch page, prompting users to type in valuable queries that Google can sell ads against. But the agreement also adds another payment that Alphabet’s Google must make to partners that send online traffic to its search engine, a worrisome cost for shareholders.

 

 

It’s unclear how much Google paid to reclaim this prized digital spot. A Google spokeswoman confirmed the deal but declined to comment further, and Mozilla didn’t disclose financial details.

As Google’s ad sales keep rising, so too has the amount it must dole out to browsers, mobile device makers and other distribution channels to ensure that Google’s search, video service and digital ads are seen. Those sums, called Traffic Acquisition Costs or TAC, rose to $5.5 billion during the third quarter, or 23 percent of ad revenue.

Last quarter, the increase in TAC was primarily due to “changes in partner agreements,” Google Chief Financial Officer Ruth Porat said on the earnings call. She declined to disclose specific partners. A lot of these payments go to Apple, which runs Google search as the default on its Safari browser. In September, Apple added Google search as the default provider for questions people ask Apple’s voice-based assistant Siri, replacing Microsoft’s Bing. In the third quarter, the TAC Google paid to distribution partners, like Apple, jumped 54 percent to $2.4 billion.

Google is likely paying Mozilla less than Apple for search rights. In 2014, Yahoo’s then-Chief Executive Officer, Marissa Mayer, lobbied heavily for the Firefox deal by agreeing to pay $375 million a year, according to regulatory filings. Google paid $1 billion to Apple in 2014 to keep its search bar on iPhones, according to court records.

Firefox once commanded roughly a fourth of the web browser market, but its share has slid in recent years. It now controls 6 percent of the global market, according to research firm Statcounter. Apple’s Safari holds 15 percent followed by Alibaba’s UC Browser with 8 percent. Google’s Chrome browser has 55 percent of the market.

Source: This article was published on siliconvalley.com by Mark Bergen

Tuesday, 28 November 2017 14:53

The Problems With Searching the Deep Web

For more than 20 years, researchers have worked to conceptualize methods for making web searching more comprehensive—going beyond the surface sites that are easily accessed by today’s search engines to truly create a system of universal access to the world’s knowledge. The task is proving to be far more complicated than computer scientists had thought. “The existing approaches,” notes one recent analysis, “lack [the ability] to efficiently locate the deep web which is hidden behind the surface web.”

Today, it is estimated that more than 65% of all internet searches in the U.S. are done using Google. Both Bing and Yahoo continue to be major players as well.

Avoiding the Dark Side

We all want searching to be more comprehensive, targeting the exact information that we need with the least amount of effort and frustration. However, nestled near the abyss of the information ocean is the dark web, a space where hackers and criminals create fake sites and conduct their commerce. The dark web continues to frustrate efforts to control illegal activity, including credit scams, drug sales, and the exploitation of international relations. Clearly, this isn’t what we are looking for in information retrieval.

By analyzing available data, Smart Insights estimates that more than 6.5 billion web searches are made each day around the globe. Current hacking scandals make clear that safe searching is about more than just protecting children from predators. There are a variety of search options that have been designed with privacy in mind:

  • DuckDuckGo, which bills itself as “the search engine that doesn’t track you”
  • Gibiru, which offers “Uncensored Anonymous Search”
  • Swisscows, a Switzerland-based option that calls itself “the efficient alternative for anyone who attaches great importance to data integrity and the protection of privacy”
  • Lukol, which works as an anonymizing proxy for Google search and removes traceable entities
  • MetaGer, a German search engine that removes any traces of your electronic footprints and also allows for anonymous linking
  • Oscobo, a British product that does not track you and provides a robust option of search types, including images, videos, and maps

And there are others as well, demonstrating that concern for privacy over profits is creating reliable solutions for searchers across the globe.

Going Deeper

Google and other standard web search engines can be infuriating when you’re trying to do intensive background research, due to their inability to search deeply into the content of the databases and websites they retrieve. Given the amount of information on the web, this isn’t surprising, but we need better performance if we are to rely on web searching as a legitimate option for research. Information professionals are used to the structured searching of verifiable information. What is missed is the deep web content—the “meat” of the information that searchers need and expect.

Researchers Andrea Cali and Umberto Straccia noted in a 2017 article, “the Deep Web (a.k.a. the Hidden Web) is the set of data that are accessible on the Internet, usually through HTML forms, but are not indexable by search engines, as they are returned only in dynamically-generated pages.” This distinction has made reaching the content in these databases very difficult. The most successful sites, to date, have been targeting specific types of hidden data.

Working largely from public data—“whether researching arrest records, phone numbers, addresses, demographic data, census data, or a wide variety of other information”—Instant Checkmate is a fee-based service that retrieves data from public databases containing arrest reports, court records, government license information, social media profiles, and more. By doing so, it claims to help “thousands of Americans find what they’re looking for each and every day.” Searches seem to take forever, which, given the size of the databases being searched, isn’t unreasonable. The data is encrypted to protect the searcher’s identity, and reports are far more detailed than anything we might otherwise be able to find in a more timely manner. Similar services include MyLife, Pipl, and Yippy.

Information professionals are perhaps most familiar with the Internet Archive’s Wayback Machine, the front-end search engine to more than 308 billion archived webpages and link addresses to even more. The Internet Archive itself takes up 30-plus petabytes of server space. For comparison, a single petabyte of data would fill 745 million floppy disks or 1.5 million CD-ROMs.
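Those floppy and CD-ROM figures check out as a quick back-of-the-envelope calculation, assuming a binary petabyte (2^50 bytes), a “1.44 MB” floppy read as 1.44 MiB, and a 700 MB (MiB) CD-ROM:

```python
# Sanity-check the storage comparison: how many floppy disks or
# CD-ROMs fit in a single petabyte?
# Assumptions: binary petabyte (2**50 bytes); "1.44 MB" floppy
# treated as 1.44 MiB; 700 MiB CD-ROM.

PETABYTE = 2 ** 50
FLOPPY = int(1.44 * 2 ** 20)   # bytes on a standard 3.5" HD floppy
CD_ROM = 700 * 2 ** 20         # bytes on a 700 MB CD-ROM

floppies = PETABYTE / FLOPPY
cds = PETABYTE / CD_ROM

print(f"{floppies / 1e6:.0f} million floppy disks")  # ~746 million
print(f"{cds / 1e6:.1f} million CD-ROMs")            # ~1.5 million
```

The result lands within a rounding error of the article’s 745 million figure, so binary units are evidently what the comparison assumed.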

And that’s just the size of the information that can be searched. Google Scholar and Google Books are two search engines that are working to dig deeper into the content of websites for scholarly information. Searchers can probe site content themselves by using the “site:” command; however, this is a tedious and hit-or-miss process, since these search engines can only scan the indexed pages linked to a domain’s homepages.
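The “site:” command is just a prefix operator inside an ordinary query string. As a minimal sketch (the search endpoint and `q` parameter are Google’s standard public URL format; the helper name is my own), a domain-restricted query can be built like this:

```python
from urllib.parse import urlencode

def site_search_url(domain: str, terms: str) -> str:
    """Build a Google query URL that restricts results to one domain
    using the "site:" operator described above."""
    query = f"site:{domain} {terms}"
    return "https://www.google.com/search?" + urlencode({"q": query})

print(site_search_url("archive.org", "deep web"))
# https://www.google.com/search?q=site%3Aarchive.org+deep+web
```

Pasting the same `site:domain terms` string into the search box does exactly the same thing; the URL form is only useful for scripting many such probes.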

Deep Web Search Engines

A variety of search engines are working to provide improved access to key information otherwise hidden inside of websites or behind paywalls. Methods to get to this deep web are currently still under development—and are not regulated to protect users from unethical practices. Deep web search engines are able to uncover more information and links and improve the results of a search to include an estimated 500% more information than traditional search engines.

A number of search engines have been designed to reach these deep web sites. None of them, however, is an exceptional resource for information professionals that solves our problems of deep searching. These websites pop up and get taken down very frequently, with others appearing in their place, and none of these systems necessarily has staying power.

To thoroughly access deep web information, you’ll need to install and use a Tor browser, which also provides the basis for access to the dark web. The real issue facing researchers is how to control the search process in these huge, individually structured databases.

Creating a Stable Deep Web Search Tool Is Harder Than You Might Think

In August 2017, a deep web search engine was being touted as bringing better quality deep searching while promising to protect the privacy of users. DeepSearch from TSignal was to be the focus of this NewsBreak; however, it recently disappeared from the web—perhaps it was acquired by another company or taken down for more development and testing. This has happened before and probably will happen again. As researchers noted in a 2013 article, “While crawling the deep-web can be immensely useful for a variety of tasks including web indexing and data integration, crawling the deep-web content is known to be hard.”

Earlier this year, two Indian researchers reported on their goal of creating a dynamic, focused web crawler that would work in two stages: first, to collect relevant sites, and second, for in-site exploring. They noted that the deep web itself remains a major stumbling block because its databases “change continuously and so cannot be easily indexed by a search engine.”
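The researchers’ code is not published, but the two-stage design they describe can be sketched structurally. In the sketch below, a hypothetical keyword-overlap score stands in for whatever relevance measure the crawler actually uses, and a small in-memory link graph stands in for the live web:

```python
from collections import deque

# Stage 1 collects relevant entry sites; stage 2 explores within a site.
# LINK_GRAPH and PAGE_TEXT are toy stand-ins for fetched pages.
LINK_GRAPH = {
    "siteA/home": ["siteA/db-search", "siteB/home"],
    "siteA/db-search": [],
    "siteB/home": ["siteB/about"],
    "siteB/about": [],
}
PAGE_TEXT = {
    "siteA/home": "searchable database of court records",
    "siteA/db-search": "query form for the records database",
    "siteB/home": "company news and press releases",
    "siteB/about": "about our team",
}

def relevance(url: str, keywords: set) -> int:
    """Hypothetical relevance score: keyword overlap with page text."""
    return len(keywords & set(PAGE_TEXT[url].split()))

def collect_sites(seeds, keywords):
    """Stage 1: keep only seed sites whose entry page looks on-topic."""
    return [u for u in seeds if relevance(u, keywords) > 0]

def explore_site(entry, keywords):
    """Stage 2: breadth-first in-site crawl from a relevant entry page."""
    site = entry.split("/")[0]
    seen, queue, found = {entry}, deque([entry]), []
    while queue:
        url = queue.popleft()
        if relevance(url, keywords) > 0:
            found.append(url)
        for nxt in LINK_GRAPH[url]:
            if nxt.startswith(site) and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return found

keywords = {"database", "records", "query"}
for entry in collect_sites(["siteA/home", "siteB/home"], keywords):
    print(entry, "->", explore_site(entry, keywords))
# siteA/home -> ['siteA/home', 'siteA/db-search']
```

The hard part the paper flags—continuously changing databases—shows up here as staleness: both stages would have to be re-run whenever the underlying sites change, which is exactly why static indexing fails.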

The deep web’s complications are many: query design, requirements for some level of user registration, variant extraction protocols and procedures, and so on, to say nothing of the linguistic complications as global searching confronts the meanings and connections of terminology across disciplines and languages. Today’s open web search is so ubiquitous that we rarely think about the potential complications; the deep web, however, is another animal, and some researchers question whether this divide can be bridged without substantial work to modify the “present architecture of [the] web.”

Information professionals can easily see the need for better search techniques to handle the complex, evolving nature of the web—and increasingly, so can other professionals. Psychiatrists studying addiction have initiated their own efforts to better access and study the deep web and dark web due to their role in the “marketing or sale or distribution” of drugs and in developing an “easily renewable and anarchic online drug-market [which] is gradually transforming indeed the drug market itself, from a ‘street’ to a ‘virtual’ one.”

What can we do as we wait for a better solution to web search? Reliable scholarly databases can easily be supplemented with existing search sites and mega-search engines. Information professionals have always been aware of the complex nature of search, and today, computer scientists and web designers are confronting these realities as well. There is no ultimate solution—which, if nothing else, guarantees the future of our field.

Source: This article was published on newsbreaks.infotoday by Nancy K. Herther

Google is the search engine that we all know and love, but most of us are barely scratching the surface of what this amazing tool can really accomplish. In this article, we're going to look at eleven little-known Google search tricks that will save you time, energy, and maybe even a little bit of cash. Some of these are just for fun (like making Google do a barrel roll), others can help you make better purchasing decisions, take major shortcuts, or dig up information on your favorite band, author, or even favorite foods.

Don't buy it until you Google it

shopping

When you're looking to purchase something from your favorite e-commerce store on the Web, don't click on that final checkout button until you've searched for the name of the store plus the word "coupon". These promo codes can help you get free shipping, a percentage off your purchase, or entitle you to future savings. It's always worth a look!

Find works from your favorite authors and artists

books

Find all the books your favorite author has ever written simply by typing in "books by", then your author's name. You can do this with albums ("albums by") as well. This is a great way to find past works (or future works) that you might not be aware of.

Find the origins of common words

dictionary

Find out the origins - or etymology - of a specific word by typing in the word plus "etymology". For example, if you type in "flour etymology" you'll see that it comes from "Middle English: a specific use of flower in the sense ‘the best part,’ used originally to mean ‘the finest quality of ground wheat’....The spelling flower remained in use alongside flour until the early 19th cent."

Compare the nutritional value of one food with another

Recipe
Credit: Alexandra Grablewski

Not sure if that piece of pizza is going to be better for you than, say, a cup of broccoli? Ask Google to compare the nutritional value by typing in "pizza vs. broccoli", or anything else you'd like to compare. Google will come back with all pertinent nutritional and caloric information; it's up to you what you choose to do with that information, of course.

Listen to songs by your favorite artist

Music

If you want to listen to a particular song by your favorite artist, or maybe even explore their discography, just type in "artist" and "songs", i.e., "Carole King songs". You'll get a complete list of songs, plus videos and biographical information. You can also listen to the songs right there within your Web browser; note that this feature isn't always available for all artists.

Find what those symptoms are similar to

medical

Type in something you're experiencing health-wise, and Google will list out similar diagnoses based on what you're experiencing. For example, a search for "headache with eye pain" brings back "migraine", "cluster headache", "tension headache", etc. NOTE: This information is not meant to substitute for that of a licensed medical provider.

Use Google as a timer

Timer

Credit: Flashpop

Need to keep those cookies from burning while you're browsing your favorite sites? Simply type "set timer for" followed by however many minutes you want to track, and Google will run the timer in the background. If you attempt to close the window or tab that is running the timer, you'll get a popup alert asking if you really want to do that.

Make Google do tricks

barrel roll

There are a multitude of fun tricks that you can make Google do with just a couple of simple instructions:

  • Type in "do a barrel roll" or "Z or R twice", and Google will rotate the results page a full 360 degrees. 
  • Type in "tilt" or "askew", and your page 'leans' to the right a bit. Searching for anything else via Google puts it back to where it was.
  • Type in  "zerg rush" and your search page returns with 'O's eating the search results. Clicking each 'O' three times stops this.

Find the roster of any sports team

Get a detailed roster breakdown of your favorite sports team simply by typing in "team roster" (substituting the name of your team for the word "team"). You'll see a full-page color roster, with player information.

Find a quote

Use quotation marks to search for an exact quote and its origin. For example, if you knew the partial lyrics to a song, but weren’t sure of the singer or the songwriter, you could simply frame the snippet that you did know in quotation marks and plug it into Google. More often than not, you’ll receive the full song lyrics as well as author, when it was first released, and other identifying information.

Find related sites

top ten search tips

Google supports a little-known command that will bring up sites related to a specified site. This comes in handy especially if you really enjoy a particular site and you’d like to see if there are others that are similar. Use “related:” to find sites that are similar; for example, “related:nytimes.com”.
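Like the exact-quote trick, “related:” is just text that gets URL-encoded into the query string. A small sketch of composing both operators programmatically (the endpoint and `q` parameter are Google’s standard public URL format; the helper names are my own):

```python
from urllib.parse import quote_plus

def exact_phrase(phrase: str) -> str:
    """Wrap a lyric or quote in double quotes for exact matching."""
    return f'"{phrase}"'

def related(domain: str) -> str:
    """Build a "related:" operator for the given domain."""
    return f"related:{domain}"

def google_url(*parts: str) -> str:
    """URL-encode query parts into a Google search URL."""
    return "https://www.google.com/search?q=" + quote_plus(" ".join(parts))

print(google_url(exact_phrase("I feel the earth move")))
# https://www.google.com/search?q=%22I+feel+the+earth+move%22
print(google_url(related("nytimes.com")))
# https://www.google.com/search?q=related%3Anytimes.com
```

Typing the same operator strings directly into the search box gives identical results; the encoded form only matters when you bookmark or script the queries.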

Source: This article was published on lifewire.com by Wendy Boswell
