Search engines are internet encyclopedias that allow us to find and filter out relevant information. With any given search engine, it takes some skill to find exactly what you are looking for. You must understand how the search engine works and how your search queries are interpreted.

More advanced search engines will meet you halfway, by providing forms for advanced searches, better interpreting your queries, suggesting keywords, or finding unusual context.

In this article I introduce five search engines with such advanced features.

General Search

Whenever you are looking for written information, the general search engines will do the trick. The advanced search gives access to additional features that easily let you refine your search query.




  • keeps setting new standards.
  • comprehensive, yet easy to use interface.
  • excellent search term suggestions.

Reverse Image Search

While most general search engines can search for images based on file names or tags, more advanced search engines can read the image and make its content searchable.




  • creates an image fingerprint.
  • does reverse image search based on the fingerprint.
  • reveals where and how images are used.
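The fingerprinting step above can be sketched with a simple "average hash": reduce the image to a small grayscale grid, then record which pixels are brighter than the mean. This is only an illustration of the general idea, not any particular engine's actual algorithm; the tiny pixel grids are made up for the example.

```python
def average_hash(pixels):
    """Fingerprint: one bit per pixel, set where the pixel is brighter
    than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Bits that differ between two fingerprints; 0 means a match."""
    return sum(a != b for a, b in zip(h1, h2))

# Two tiny 2x2 grayscale "images" that differ slightly in one pixel:
original = [[200, 50], [60, 210]]
tweaked = [[200, 50], [60, 190]]
print(hamming_distance(average_hash(original), average_hash(tweaked)))  # → 0
```

Because the hash captures only the overall brightness pattern, a lightly edited copy still produces the same fingerprint, which is what lets an engine recognize reused images.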

Similar Image Search

Similar image search doesn’t recognize exact copies of a given image, but similar features, such as color, texture, or structures within the image.




  • extracts general image characteristics, such as color and shape.
  • searches similar images based on their general characteristics.
  • works with uploaded images and image URLs.
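The "general characteristics" matching described above can be illustrated with coarse histograms: two images count as similar when their brightness distributions overlap, even if no pixel matches exactly. The bin size and the similarity measure here are assumptions for the sketch, not any engine's real method.

```python
def histogram(pixels, bins=4):
    """Bucket grayscale values (0-255) into coarse bins, normalized."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

bright = [220, 240, 200, 250]
also_bright = [210, 255, 230, 225]
dark = [10, 30, 5, 20]
print(similarity(histogram(bright), histogram(also_bright)))  # high (1.0)
print(similarity(histogram(bright), histogram(dark)))         # low (0.0)
```

Real similar-image search adds texture and shape features on top of color, but the principle is the same: compare summaries of the image, not the pixels themselves.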

Invisible Search

Information that is stored in databases is largely invisible to standard search engines because they merely index the contents of websites, following one link after the next. Invisible search engines specialize in hidden data in the so-called Deep Web.




  • access dynamic databases.
  • search within data range.
  • well-documented help section.

Semantic Search

Semantic search is concerned with the exact meaning of a search term, its definition and the search context. Search engines based on semantic search algorithms are thus better at eliminating irrelevant results.



  • choose intended meaning for ambiguous terms.
  • save a myriad of personal settings.
  • search other search engines from DuckDuckGo using its !bang feature.
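The !bang feature works by prefixing a query with a short code; for example, `!w` routes the query to Wikipedia. A minimal sketch of building such query URLs follows (the `!w` bang is a real DuckDuckGo code; the `bang_query` helper itself is hypothetical):

```python
from urllib.parse import urlencode

def bang_query(bang, terms):
    """Build a DuckDuckGo URL whose !bang triggers a redirect to
    another search engine (e.g. bang='w' for Wikipedia)."""
    return "https://duckduckgo.com/?" + urlencode({"q": f"!{bang} {terms}"})

print(bang_query("w", "deep web"))
# → https://duckduckgo.com/?q=%21w+deep+web
```

Opening that URL in a browser sends the search straight to Wikipedia rather than showing DuckDuckGo's own results.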

Source: http://www.makeuseof.com/

Author: Tina Sieber

Categorized in Search Engine

(NaturalNews) Having immediate access to the sum of human knowledge and experience via the internet is inimitably amazing. However, new research suggests that being able to pull up almost any information with the click of a button could be making people more stupid while simultaneously imparting a false sense of self-perceived intelligence.

A team of psychologists from Yale University evaluated more than 1,000 students who took part in a psychological experiment on the impact of searching the internet. After being asked the question "How does a zip work?", some of the students were told to click on an internet link for the answer while the rest were given a printed sheet containing the same information.

Later, the two groups were quizzed on an unrelated question: "Why are cloudy nights warmer?" Individuals in the group that searched online for the answer to the first question were found to believe themselves smarter than they actually were compared to the group that read the same information on the printed sheet.

Experts believe that this phenomenon occurs because searching the internet puts people's brains in a type of "search mode" where they feel more powerful and smarter than they actually are. In other words, having access to the internet makes people feel like the wealth of knowledge therein is somehow part of their own brains.

"The Internet is such a powerful environment, where you can enter any question, and you basically have access to the world's knowledge at your fingertips," stated lead researcher Matthew Fisher, a fourth-year doctoral candidate in psychology at Yale University.

"It becomes easier to confuse your own knowledge with this external source. When people are truly on their own, they may be wildly inaccurate about how much they know and how dependent they are on the Internet."

Excessive smartphone use is causing widespread cognitive atrophy, study finds

Experts claim that smartphones are making the problem even worse because people increasingly rely on them for immediate access to information in lieu of using their brains. Research out of the University of Waterloo found that people who frequently use their smartphones tend to use their brains less frequently, and vice versa.

They found that using internet search engines to pull up information makes people cognitively lazy and diminishes their ability to solve problems using their own critical thinking skills. The results of this study indicate that intuitive thinkers, or people who tend to use their guts to make decisions, also tend to use search engines to make decisions. Analytical thinkers, on the other hand, tend to rely more on their own brainpower.

"They may look up information that they actually know or could easily learn, but are unwilling to make the effort to actually think about it," stated study co-author Gordon Pennycook when discussing what smartphones and internet search engines do to people's cognitive skills.

"Decades of research has revealed that humans are eager to avoid expending effort when problem-solving and it seems likely that people will increasingly use their smartphones as an extended mind," added Nathaniel Barr, the paper's other lead author.

What is the solution? Spend less time searching the internet and wasting time on your smartphone and focus instead on actually reading and processing information the old-fashioned way.

"With the internet, the lines become blurry between what you know and what you think you know," added Fisher. "In cases where decisions have big consequences, it could be important for people to distinguish their own knowledge and not assume they know something when they actually don't."

Source : http://www.naturalnews.com

Author: Ethan A. Huff, staff writer

Categorized in Search Engine

Q. When I type web searches into the box at the top of the Safari program on my Mac, the browser always brings me Google results. Is there a way to use Bing without having to first go to the Bing page and then type in keywords?

A. Apple’s Safari has several built-in features intended to make web browsing more efficient, including sending your keywords to a default search engine when you type them into the Smart Search field at the top of the window. If you want to change the search engine that is automatically used, you can pick a different one in the Safari settings.

Click the Search tab in the Safari settings to change the default search engine. Credit: The New York Times

Open the Safari program, and in the Safari menu in the top-left corner of the menu bar, select Preferences. As a shortcut, you can also press the Command and comma keys on the keyboard to open the Preferences box without going through menus.

In the Safari Preferences box, click the Search tab. Here, you can change the browser’s default search engine. If you do not want to use Google, you can switch to Yahoo, Bing or the privacy-minded DuckDuckGo (which does not collect personal information when you use it).

The Search tab has a few other settings you can change, like whether to include search-engine suggestions or get Safari Suggestions (which bring results from iTunes, the App Store and places near your location, among other sources). Additionally, Safari allows you to turn off the ability to search within a site from the Smart Search field — just disable the “Enable Quick Website Search” option.

If you would rather Safari not immediately display a page it thinks best matches your request (based on your browsing history), turn off the checkbox next to “Pre-load top hit in the background.” Finally, if you don’t like the big window of icons from your Favorite sites, you can shut it down by turning off the checkbox next to Show Favorites.


Source:  http://www.nytimes.com/

Categorized in Search Engine

Google has been incorporating many new features to make its search engine more useful and valuable to users: keyword search; time, length, weight and many other conversions; maps; meanings and pronunciations of words; addresses; contact numbers; flight schedules; blogs; Google Drive; and many more.

We are all well-acquainted with the ‘Google Translate’ feature. You just have to type a word and Google can translate it into any language of your choice.


Google has recently added a Roman Hindi translation feature to its search algorithm. As Roman Hindi and Roman Urdu are quite similar, people in both India and Pakistan can avail themselves of this feature.

Here is how it works:

If translation from Urdu to English doesn’t work for you, you can also try ‘translate Hindi to English’.

Pakistanis are not much in favor of the “Hindi-detected” label, but it is just a beginning; Google may soon incorporate ‘Urdu’ too. Until then, it is a pretty handy solution to the problem of searching for a specific word in a specific language to get it translated into English. You can now just go to Google, write Roman Urdu, and there you go!

Author:  Maheen Kanwal

Source:  https://www.techjuice.pk

Categorized in Search Engine

Have an issue with your listings in Google? Getting an official answer might be tough. And when that happens to a Google competitor, as it did with ProtonMail, it can come back to harm Google's defense against antitrust charges.

Did Google deliberately try to reduce the rankings of ProtonMail, a tiny rival to Google’s own Gmail service? Almost certainly not. Even Proton doesn’t seem to believe that. But the case highlights how Google’s problems with publisher, business and webmaster communication can hurt it as it faces challenges on antitrust grounds.

What happened with Proton

Proton Technologies is a Swiss-based company offering a secure, encrypted email service called ProtonMail. It might be an attractive alternative for those who worry a service like Gmail isn’t private enough, either from government requests or Google’s own ad uses.

Last November, Proton noticed that they were seeing a drop in daily signups for ProtonMail. Wondering why, the company started looking into its rankings on Google and determined there was a problem. In particular, ProtonMail wasn’t showing in the top results for “secure email” or “encrypted email,” as it assumed was the case in the past.

Proton then ran into a problem that’s far from unique among businesses and publishers. It had no guaranteed way to get an official answer from Google if there was a problem.

Google offers a wide-ranging toolset called Google Search Console that tells businesses if they have problems with their sites. Proton told Search Engine Land it even made use of the toolset. The problem is that the system doesn’t allow site publishers to contact Google if they suspect something is wrong on Google’s end. There’s no way to ask for help, unless you have received what’s called a “manual action,” a penalty placed on your site by a human being. Proton had no manual actions, it told us.

Without such an option, Proton ended up using Google’s spam reporting tool earlier this year. There was no indication that Proton had been spamming Google. But it appears Proton hoped that by using the form, it might trigger a review by Google which, in turn, would uncover what the real problem was.

That didn’t solve the issue. Finally, ProtonMail tweeted out for help in August to Google and to Google’s former head of web spam, Matt Cutts, who’s on leave from the company and hasn’t been involved with it for over two years. Moreover, a new head of web spam was named ages ago.

Still, reaching out to a semi-former Googler seems to have done the trick. Within about a week, the problem was resolved. Exactly what happened was never explained.

Enter the antitrust concerns

Last week, this all drew attention it hadn’t really received before because Proton did a blog post about it, one that raised the specter that it was perhaps related to competitive issues.

This incident however highlights a previously unrecognized danger that we are now calling Search Risk. The danger is that any service such as ProtonMail can easily be suppressed by either search companies, or the governments that control those search companies.

The only reason we survived to tell this story is because the majority of ProtonMail’s growth comes from word of mouth, and our community is too loud to be ignored. Many other companies won’t be so fortunate. This episode illustrates that Search Risk is serious, which is why we now agree with the European Commission that given Google’s dominant position in search, more transparency and oversight is critical.

Could that have really been the situation here?

Unlikely competitive reasons were to blame

It’s unlikely. Google has over one billion daily active Gmail users. ProtonMail has just over a million, according to its recent post. It shows no growth trajectory that’s going to cause it to rival Google even in years to come.

Given all this, would Google really have actively worked to suppress it while not bothering to do the same for real email rivals? For example, Outlook ranks in the top results on Google for a popular term like email.

It doesn’t make sense. Even Proton isn’t saying the issue was due to competitive reasons, with cofounder Andy Yen telling Search Engine Land via email:

From the data we have, it is impossible to draw a concrete conclusion. We are willing to give Google the benefit of the doubt here and in our blog post, we aren’t drawing any conclusions in this regard.

We are grateful to the individual Googlers who stepped in to fix the issue, but overall this was a very difficult and costly situation for us. We are software developers ourselves, so we know that software bugs do happen, and Google isn’t infallible either, but when Google isn’t behaving correctly, the stakes can be very high.

At the end of the day, we hope that by sharing our experience, more people will become aware of Search Risk, as it is a challenge the internet community is going to have to confront.

“Search Risk?” I’ll get back to that. But even with Proton thinking it could be a completely innocent technical glitch, this case will probably come back to haunt Google. In particular, it’s harmful as the European Union continues its antitrust review and actions with the company.

Indeed, years ago, Google almost certainly wasn’t trying to act anti-competitively against tiny UK-based shopping search engine Foundem. But spam actions against that company were the seed for other complaints and concerns to grow. Last year’s antitrust charges levied against Google by the EU grew directly out of that.

In short, Google can’t really afford to be making mistakes with anyone who can be deemed a competitor, because such companies have a big club to swing that other publishers don’t get: the charge that Google is acting anti-competitively. And even if Proton doesn’t swing that club, others may take its situation as an example to challenge Google.

Despite glitch, Google did still send Proton traffic

It’s a bit of a side-issue among the bigger issues here, but it’s worth addressing. Proton said it had a growth rate drop of 25 percent because of the Google change. However, it really has no idea how much the Google drop harmed it. This is because, as it turns out, Proton had no idea how much traffic Google was sending it before, after or even now.

Proton is focused on the fact that it didn’t rank well for a period of time for the two keywords mentioned above. Unfortunately, rank checking is a terrible way to assess how well you’re doing with Google or search engines in general. Sites are typically found for many different terms. Focusing only on some is far from the full picture.

These terms might have been traffic drivers for ProtonMail or not. Proton doesn’t know directly, because the company told Search Engine Land that it’s not running any type of analytics that would show how much traffic it gets from Google or other sources.

The company said it doesn’t use Google Analytics specifically because of privacy worries. It could, of course, find this type of data without using Google Analytics, such as by processing its own server logs directly. That’s much more complicated and time-consuming, but it’s an option.
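Processing server logs directly, as suggested above, can be sketched in a few lines: extract the referrer field from each access-log entry and count hits by referring host. The log lines below are made-up examples in the common "combined" log format; a real deployment would stream the site's actual access log instead.

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Fictional access-log entries in combined log format:
LOG_LINES = [
    '1.2.3.4 - - [01/Nov/2016:10:00:00 +0000] "GET / HTTP/1.1" 200 512 '
    '"https://www.google.com/search?q=secure+email" "Mozilla/5.0"',
    '5.6.7.8 - - [01/Nov/2016:10:01:00 +0000] "GET / HTTP/1.1" 200 512 '
    '"https://duckduckgo.com/" "Mozilla/5.0"',
    '9.9.9.9 - - [01/Nov/2016:10:02:00 +0000] "GET / HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0"',
]

# The referrer is the second-to-last quoted field on each line:
referrer_re = re.compile(r'"([^"]*)" "[^"]*"$')

def referrer_hosts(lines):
    """Count visits by referring hostname; '-' means no referrer."""
    counts = Counter()
    for line in lines:
        m = referrer_re.search(line)
        if m and m.group(1) != "-":
            counts[urlparse(m.group(1)).hostname] += 1
    return counts

print(referrer_hosts(LOG_LINES))
```

Summing the counts for search-engine hostnames over time gives a rough, analytics-free picture of how much traffic search actually sends you.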

So where’s that 25-percent drop in growth come from? Proton emailed us this:

We saw a noticeable drop in the number of daily sign ups with everything else held equal. More strikingly, after Google fixed the problem, we saw a >25% increase overnight (we changed nothing on our side). For us, this 25% was the difference between bleeding money each month and being able to break even.

Keep in mind, it’s not that Google wasn’t sending Proton traffic. It’s just that the loss of ranking well for those terms, and perhaps others, caused it to get less traffic than before. That drop in traffic edged the company out of making money to breaking even. That leads to the whole “search risk” issue.

Everyone has “search risk”

The bottom line is that any business or publisher is at “search risk,” as Proton dubbed it in its post, where losing search visibility could jeopardize your business. It’s not a new risk. It’s one that literally goes back over 20 years, to the days when Yahoo was deemed the internet “gatekeeper” that could make or break businesses.

Don’t depend on search engines, Google or otherwise. For that matter, don’t build any business on the idea that you’re going to somehow get free traffic from a source, such as Facebook, Pinterest or whatever. That should be common sense. If you’re not paying for something, you’re not guaranteed to get anything.

Wise search marketers know this. Smart SEOs know you don’t want to have an overdependence on Google. Algorithms change all the time. But apparently in 2016, it’s still a lesson that people need to learn.

Google needs to improve communication

That said, Google could and should do a better job with communication. Something was wrong with the ProtonMail site, in terms of how Google was processing it. We know that, because something was fixed. Google just won’t say what. All it will say is the statement it sent us below:

Google’s algorithms rely on hundreds of unique signals or “clues” that make it possible to surface the results we think will be most relevant to users.

While we understand that situations like this may raise questions, we typically don’t comment on how specific algorithms impact specific websites. We’re continually refining these algorithms and appreciate hearing from users and webmasters.

While in many cases search ranking changes reflect algorithmic criteria working as intended, in some cases we’re able to identify unique features that lead to varied results.

We’re sorry that it took so long to connect in this case and are glad the issue is resolved. For webmasters who have questions about their own sites, our Webmaster team provides support through the Webmaster Forums and office hours.

It shouldn’t have taken so long for the problem to be fixed. Google itself shouldn’t want it to take so long. The company needs to find a better way for publishers to report potential errors and get resolutions. I wished for that back in 2006 and again in 2011, as part of revisiting my “25 Things I Hate About Google” post:

Sure, a paid support option might put you under fire that you might be making algorithm updates like Farmer/Panda just to generate support revenue. But others might appreciate a guaranteed route.

If not paid, maybe you could give anyone who registers with Google [Search Console] one or two free guaranteed express support tickets, so that we don’t have bloggers talking about getting in contact with Google being a “crap shoot” and diminishing the huge amount of resources you do put in to support through Google Webmaster Central.

Now it’s been 10 years since I first had that wish, and it still hasn’t been solved. Yes, there would be time and cost involved. But that might be well worth it, versus adding more ammunition for those who might use glitches to attack Google on antitrust grounds.

Google’s main advice for those with problems is to use its Google Webmaster Forums. Proton was even going to try that next, it told us, if its tweets didn’t help. Personally, I’d never want someone to go there because:

  1. while Googlers are there, they’re not guaranteed to review your problem;
  2. some problems (as with Proton’s) can only be diagnosed by Googlers;
  3. most people who answer in the forums are not Googlers; and
  4. non-Googlers might not even give the right answer.

For instance, here’s a non-Googler telling someone they have a manual action against them because the site can’t be found in Google’s search results and the Google URL shortener doesn’t work for it. Maybe. Probably, even. But that person is guessing. The only way to actually know if there’s a manual action is by going into Google Search Console as the publisher and checking. A third-party person can’t tell you.

Still, that’s the option you have, unless you catch the attention of someone at Google another way, as ProtonMail did. Or as MetaFilter did in 2014.

In both of those cases, Google took a public relations blow. Improve the communications, and everyone wins.

Author:  Danny Sullivan

Source:  http://searchengineland.com/

Categorized in Search Engine

RepUPress.com has launched a new Social Media Search Engine that will help individuals navigate the vast realm of social networks more easily, help businesses better measure their marketing efforts and help students accomplish more effective and efficient research.

(OPENPRESS) RepUPress.com, an Indianapolis-based tech company, has launched a new Social Media Search Engine in cooperation with search giant Google (http://www.repupress.com/social-search). The free search engine allows visitors to quickly access results from all major social media networks in an aggregated view, as well as search each individual network by simply clicking a tab at the top of the results. The results can also be further sorted by either popularity (relevance) or by date.

When asked about the benefit to users, Founder Rob Gelhausen had this to say, “We believe the application for a dedicated Social Media Search Engine is abundant. Individuals can more easily find what they are looking for within and across their favorite networks, companies can gauge their marketing impact, and students can do more effective, as well as efficient research. These are just a few of the current benefits and uses.”

According to a report published by the Pew Research Center, 52% of adults have at least two social media accounts across various networks. That means as many as 126 million US citizens over the age of 18 visit multiple social media networks to search and post content. The idea of a way to search from a single source is an intriguing and time-saving proposition.

The Social Media Search Engine was built on top of, and with the cooperation of, Google’s Web Index. Citing the recent deal between Twitter and Google to allow access to and indexing of all 200 billion plus Tweets being generated every year, the incredible categorization of the vast amount of Facebook content, the fact that YouTube is owned by Google, as well as many additional strategic reasons, Founder Rob Gelhausen said “It was a no brainer using Google’s index to power our search.”

Rep U Press is a digital media company that offers an array of solutions, including SaaS applications, marketing services and even eLearning video tutorial courses. Their latest endeavor is sure to make waves. The web search industry is dominated by three major players: Google, Yahoo and Bing. Google currently controls approximately 65% of the search share in the US, with Bing and Yahoo combining for approximately 24%. All other search engines make up the remaining 11%.

Harking back to the days of Mark Cuban selling Broadcast.com to Yahoo for $5.7 billion, leaving the door open for Google to crush them in search, it seems as though Yahoo has always been one step behind in the race for online user search acquisition. It is amazing that there has not already been some sort of play like this by either of the two second-tier search options. Will the Rep U Press solution take even more of the market share? Well, that remains to be seen.

Author:  Anna Chmielewska

Source:  http://military-technologies.net/

Categorized in Search Engine

Founder of the dominant Chinese search engine urges Silicon Valley’s entrepreneurs and coders to set up shop in China

Software coders, engineers and Silicon Valley entrepreneurs are welcome in China if they are put off by the anti-immigration comments espoused by the president-elect of the United States, said Baidu Inc’s founder and chairman Robin Li.

Stephen Bannon, executive chairman of Breitbart News and strategy adviser to Donald Trump, noted during a November 2015 interview with Trump on the website’s Sirius XM radio talk show that two-thirds or three-quarters of Silicon Valley’s CEOs are from South Asia or from Asia, according to a Washington Post report this week.

“I hope these migrants would come to China, so that the country can play a bigger role in the world’s innovations,” Li said on Friday during the World Internet Conference in Wuzhen. “Many entrepreneurs have said that they are worried that Trump’s victory will hurt creativity in the US.”

China, home to the world’s largest Internet-using population and biggest number of smartphone users, is throwing its doors open to attract talent and capital to help give the country a leg up in technology.

Interested technologists and entrepreneurs will have to contend with China’s “cyberspace sovereignty,” espoused by president Xi Jinping last year in Wuzhen and reiterated this year by the Communist Party’s propaganda chief Liu Yunshan, an unambiguous affirmation of Beijing’s tight grip on censorship and control of the Internet.

Still, the country’s size and growth pace offer rewards for entrepreneurs who are willing to live without accessing Facebook, Twitter, Google, or websites including The New York Times and the South China Morning Post. Baidu, operator of the dominant Internet search engine in China, owes almost all its revenue to the country’s advertisers and users.

“China is the largest internet market in the world, and it’s also the fastest-growing market,” Li said. “I hope more talent comes to China, and we can embrace entrepreneurship together.”

Along with larger peers Tencent Holdings and Alibaba Group -- which owns the South China Morning Post -- Baidu is at the forefront of China’s push to harness artificial intelligence to drive its business growth.

This week, Baidu showed off a fleet of 18 self-driving cars in Wuzhen, demonstrating its ability to power vehicles using its AI technology.

The lack of talent in the field has been a bottleneck that has hampered the industry’s progress, analysts said.

There’s urgent demand for engineers specialising in artificial intelligence in China, but the current education system is unable to churn out enough talent, said Hao Jian, chief consultant at online recruiter Zhaopin.com.

“China’s college training is unable to catch up with the changes in the Internet sector, forcing many companies to look overseas for talent,” he said.

Author:  Phoenix Kwong

Source:  http://www.scmp.com/

Categorized in Search Engine

Google has become such an ingrained part of our society that people simply say, “I Googled it.” The search engine counts millions of internet users among its loyal followers, often making it seem as if no other search engine is even relevant anymore. But, in actuality, Google has some stiff competition, and if you’re willing to look, you’re going to find numerous websites that are actually even better than Google, including:



DuckDuckGo

Google’s in for some stiff competition when DuckDuckGo, still a relatively little-known search engine, spreads to the masses. Perhaps the biggest benefit of DuckDuckGo is that it neither collects nor shares your personal information the way Google does. In addition, DuckDuckGo doesn’t make users scroll through dozens of pages to find an answer. Let’s say you want to find out when the 2012 Presidential Election will be held. DuckDuckGo will return the answer at the top of your search page. Web users also enjoy the Web of Trust, which allows them to determine which sites are safe enough to visit, and pointless pages thrown up just to generate revenue without any real content never appear in search results.


StartPage by Ixquick

The self-described “most private search engine” in the world, Ixquick does not store users’ browsing histories, nor does it keep track of IP addresses, making it an ideal option for web browsers who want to keep their information private. In fact, all searches are encrypted to provide you with complete privacy.


Yippy

Yippy is an ideal search engine for families and those who are fed up with adult sites ending up in their search results. The search engine promises extremely tight security: it asserts that it doesn’t store your private information, your search history, your email address, or other vital details. Search results return with a box on the left-hand side of the screen, allowing you to choose how information is best presented to you: by time, by source, or by site.


Gigablast

Gigablast advertises itself as the “Green Search Engine,” as it runs on wind energy, providing search results for an estimated 10 million web users. Gigablast has been around for nearly a decade, and the search engine searches through all websites on a particular keyword or phrase, rather than just individual pages. Parents can use the “family filter,” and Gigablast implements numerous spam filters to ensure users aren’t greeted with spam websites on search result pages.

While Google, Yahoo, and Bing are currently the most popular search engines, particularly with English speakers, web users do have other options that offer them more flexibility, more privacy, and a promise that they won’t be inundated by the spam and useless content that often show up in the search results from the more well-known search engines.

Blekko (Update: Blekko is now Watson from IBM.)


Blekko is a dream come true for web users fed up with spam and with being taken to pages from content farms; it promises spam-free results. If a website’s content does not live up to Blekko’s strict requirements for quality, it isn’t included in the search results, quite a difference from Google. Users can also use the settings to ensure their searches are related to specific topics, such as news, or to the date content was published.

Author:  Shell Harris

Source:  http://www.bigoakinc.com/

Categorized in Search Engine

No, it’s not Spiderman’s latest web slinging tool but something that’s more real world. Like the World Wide Web.

The Invisible Web refers to the part of the WWW that’s not indexed by the search engines. Most of us think that search powerhouses like Google and Bing are like the Great Oracle… they see everything. Unfortunately, they can’t, because they aren’t divine at all; they are just web spiders that index pages by following one hyperlink after the other.

But there are some places a spider cannot enter. Take library databases, which need a password for access. Or pages that belong to the private networks of organizations. Web pages generated dynamically in response to a query are also often left un-indexed by search engine spiders.
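The spidering described here can be sketched as a breadth-first walk over a link graph. Pages that nothing links to, such as those hidden behind a login form or generated only in response to a query, never enter the walk, which is exactly why they stay invisible. The tiny in-memory "web" below is invented for illustration.

```python
from collections import deque

WEB = {  # page -> links it contains
    "/home": ["/about", "/blog"],
    "/about": ["/home"],
    "/blog": ["/blog/post-1"],
    "/blog/post-1": [],
    "/library-db": [],  # behind a login form; no page links here
}

def crawl(start):
    """Return the set of pages reachable by following links from start."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for link in WEB.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

indexed = crawl("/home")
print("/library-db" in indexed)  # → False: the "deep" page is never indexed
```

Everything reachable from the start page ends up in the index; the unlinked database page does not, no matter how long the crawler runs.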

Search engine technology has progressed by leaps and bounds. Today, we have real-time search and the capability to index Flash-based and PDF content. Even then, there remain large swathes of the web which a general search engine cannot penetrate. The term Deep Net, Deep Web, or Invisible Web lingers on.

To get a more precise idea of the nature of this ‘Dark Continent’ of web search, read what Wikipedia has to say about the Deep Web. The figures are attention grabbers: the size of the open web is estimated at 167 terabytes, while the Invisible Web is estimated at 91,000 terabytes, roughly 550 times larger. For comparison, the Library of Congress in 1997 was estimated to hold close to 3,000 terabytes.

How do we get to this mother lode of information?

That’s what this post is all about. Let’s get to know a few resources which will be our deep-diving vessels for the Invisible Web. Some of these are Invisible Web search engines with specifically indexed information.


Infomine

Infomine has been built by a pool of libraries in the United States, among them the University of California, Wake Forest University, California State University, and the University of Detroit. Infomine ‘mines’ information from databases, electronic journals, electronic books, bulletin boards, mailing lists, online library card catalogs, articles, directories of researchers, and many other resources.

You can search by subject category and further tweak your search using the search options. Infomine is not only a standalone search engine for the Deep Web but also a staging point for a lot of other reference information. Check out its Other Search Tools and General Reference links at the bottom.

The WWW Virtual Library


This is considered to be the oldest catalog on the web and was started by Tim Berners-Lee, the creator of the web. So, isn’t it strange that it finds a place in a list of Invisible Web resources? Maybe, but the WWW Virtual Library lists quite a lot of relevant resources on quite a lot of subjects. You can drill vertically into the categories or use the search bar. The screenshot shows the alphabetical arrangement of subjects covered at the site.


Intute

Intute is UK-centric, but some of the region’s most esteemed universities provide its resources for study and research. You can browse by subject or do a keyword search for academic topics ranging from agriculture to veterinary medicine. The online service has subject specialists who review and index other websites that cater to study and research topics.

Intute also provides over 60 free online tutorials for learning effective internet research skills. The tutorials are step-by-step guides arranged around specific subjects.

Complete Planet


Complete Planet calls itself the ‘front door to the Deep Web’. This free and well-designed directory makes it easy to access the mass of dynamic databases that are cloaked from a general-purpose search. The databases indexed by Complete Planet number around 70,000 and range from Agriculture to Weather, with categories like Food & Drink and Military thrown in.

For a really effective Deep Web search, try out the Advanced Search options where among other things, you can set a date range.


Infoplease

Infoplease is an information portal with a host of features. Using the site, you can tap into a good number of encyclopedias, almanacs, an atlas, and biographies. Infoplease also has a few nice offshoots like Factmonster.com for kids and Biosearch, a search engine just for biographies.


DeepPeep

DeepPeep aims to enter the Invisible Web through the forms that query databases and web services for information. Typed queries open up dynamic but short-lived results which cannot be indexed by normal search engines. By indexing those underlying databases, DeepPeep hopes to track 45,000 forms across 7 domains.

The domains covered by DeepPeep (Beta) are Auto, Airfare, Biology, Book, Hotel, Job, and Rental. As a beta service, it has occasional glitches; some results don’t load in the browser.


IncyWincy

IncyWincy is an Invisible Web search engine that behaves as a meta-search engine, tapping into other search engines and filtering the results. It searches the web, directories, forms, and images. With a free registration, you can track search results with alerts.
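The meta-search idea can be sketched in a few lines. This is an illustrative toy, not IncyWincy’s actual implementation; the engine stubs and function names are invented:

```python
def metasearch(query, engines):
    """Fan a query out to several engines and merge their ranked result
    lists, deduplicating by URL and keeping the best (lowest) rank seen."""
    merged = {}
    for engine in engines:
        for rank, url in enumerate(engine(query)):
            if url not in merged or rank < merged[url]:
                merged[url] = rank
    return sorted(merged, key=merged.get)

# Two stand-in 'engines' that each return a ranked list of URLs.
engine_a = lambda q: ["a.com", "b.com", "c.com"]
engine_b = lambda q: ["b.com", "d.com"]
print(metasearch("deep web", [engine_a, engine_b]))
# → ['a.com', 'b.com', 'd.com', 'c.com']
```

A real meta-search engine would add per-engine score normalization and result filtering on top of this merge step, but the dedupe-and-rerank core is the same.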


DeepWebTech

DeepWebTech gives you five search engines (and browser plugins) for specific topics, covering science, medicine, and business. Using these topic-specific search engines, you can query the underlying databases of the Deep Web.


Scirus

Scirus has a purely scientific focus. It is a far-reaching research engine that can scour journals, scientists’ homepages, courseware, pre-print server material, patents, and institutional intranets.


TechXtra

TechXtra concentrates on engineering, mathematics, and computing. It gives you industry news, job announcements, technical reports, technical data, full-text eprints, and teaching and learning resources, along with articles and relevant website information.

Just like general web search, searching the Invisible Web is about looking for a needle in a haystack; only here, the haystack is much bigger. The Invisible Web is definitely not for the casual searcher. It is deep but not dark, because if you know what you are searching for, enlightenment is a few keywords away.

Do you venture into the Invisible Web? Which is your preferred search tool?

Author:  Saikat Basu

Source:  http://www.makeuseof.com/


The Washington Post reports that a new search engine called Omnity is on the way, targeted at researchers and students. Not only is it being recognized for unique features that Google doesn’t offer; many publications are calling it “smarter than Google.”

Reports indicate that Omnity separates itself from the pack by serving up the results that best match the search term entered. It also has the added capability of indicating how those results relate to one another.

If you’re researching a subject you know little about, for example, you can type it in as a search term and immediately see which resources are cited the most. In addition, you can see who has conducted the most influential research on the subject, as well as which university is leading research in that area.

Omnity will pull information from a variety of data sets, including SEC filings, publicly available news, organizational reports, scientific journals, financial reports, and legal histories.

Alternatively, you can input your own data sources. For example, you can upload a piece of your own research, or research papers found elsewhere, and the search engine will return links to other resources that are relevant but not directly cited in the sources you’ve uploaded. With this feature, you can easily find unique sources of information to add to your research.

The Washington Post argues that Omnity overcomes one of the problems of modern search engines: they are keyword-based, so they can only return results if the keywords in the title of a page match what’s being searched for. Omnity improves on the current search model by scanning through the entirety of a document.
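Omnity’s actual ranking method isn’t public, but the difference between title-keyword matching and whole-document matching can be illustrated with a toy bag-of-words cosine similarity. All document titles, text, and function names here are invented:

```python
import math
from collections import Counter

def vectorize(text):
    """Term counts over a whole document's text, not just its title."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors (0.0 to 1.0)."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Neither title contains the query terms; only full-text scoring
# can surface the relevant document.
docs = {
    "Quarterly disclosure primer": "quarterly report filed with the sec on revenue",
    "Gardening basics": "soil sunlight and water for healthy plants",
}
query = vectorize("sec revenue report")
ranked = sorted(docs, key=lambda t: cosine(query, vectorize(docs[t])),
                reverse=True)
print(ranked[0])  # → Quarterly disclosure primer
```

A title-only matcher would score both documents zero for this query; scoring the full text is what separates the relevant document from the irrelevant one.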

The Post concedes that Omnity is not likely to overtake Google at any point, but niche search engines still have a place in the market. As search continues to evolve, we may see Omnity used in ways we can’t predict at this time. The Washington Post gives the example of the niche search engine Wolfram Alpha: originally marketed as a computational search engine, it now helps power Apple’s Siri.

It’s worth keeping an eye on new search engines like this because they indicate where other search engines might be going. They also demonstrate how our search habits are changing over time.

Author:  Matt Southern

Source:  https://www.searchenginejournal.com

