[Source: This article was published in observer.com By Harmon Leon - Uploaded by the Association Member: Paul L.]

On HBO’s Silicon Valley, the Pied Piper crew’s mission is to create a decentralized internet that cuts out intermediaries like Facebook, Google, and their fictional rival, Hooli. It’s a move that would surely make Hooli’s megalomaniac founder Gavin Belson (also fictional) furious.

In theory, no one owns the internet. Not Mark Zuckerberg, not Banksy, not annoying YouTube sensation Jake Paul. None of them owns the internet, because no one actually does.

But in practice, a small number of companies really control how we use the internet. Sure, you can pretty much publish whatever you want and slap up a website almost instantaneously, but without Google, good luck getting folks to find your site. More than 90 percent of general web searches are handled by the singular humongous search engine—Google.

If things go sour with you and Google, the search giant could make your life very difficult, almost making it appear like you’ve been washed off the entire internet planet. Google has positioned itself as pretty much the only game in town.

Colin Pape had that problem. He’s the founder of Presearch, a decentralized search engine powered by a community of roughly 1.1 million users. Presearch uses cryptocurrency tokens as an incentive to decentralize search. The origin story: before Pape started Presearch, Google tried to squash his business (well, not exactly squash it, but simply erase it from search results).

Let’s backtrack.

In 2008, Pape founded a company called ShopCity.com. The premise was to support communities and get their local businesses online, then spread that concept to other communities in a franchise-like model. In 2011, Pape’s company launched a local version in Google’s backyard of Mountain View, California.

End of story, right? No.

“We woke up one morning in July to find out that Google had demoted almost all of our sites onto page eight of the search results,” Pape explained. Pape and his crew thought it was some sort of mistake; still, the demotion of their sites was seriously hurting the businesses they represented, as well as their company. But something seemed fishy.

Pape had read stories of businesses that had essentially been shut down by Google—or suffered serious consequences such as layoffs and bankruptcy—due to the jockeying of the search engine.

“Picture yourself as a startup that launches a pilot project in Google’s hometown,” said Pape, “and 12 months later, they launch a ‘Get Your City Online’ campaign with chambers of commerce, and then they block your sites. What would you think?”

It was hard for Pape not to assume his company had been targeted because it was easy enough for Google to simply take down sites from search results.

“We realized just how much market power Google had,” Pape recalled. “And how their lack of transparency and responsiveness was absolutely dangerous to everyone who relies on the internet to connect with their customers and community.”


Fortunately, Pape’s company connected with a lawyer leading a Federal Trade Commission (FTC) investigation into Google’s monopolistic practices. Through the press, they put pressure on Google to resolve its search issues.

This was the genesis of Presearch, ‘the Switzerland of Search,’ a resource dedicated to a more open internet and a level playing field.

“The vision for Presearch is to build a framework that enables many different groups to build their own search engine with curated information and be rewarded for driving usage and improving the platform,” Pape told Observer.

But why is this so important?

“Because search is how we access the amazing resources on the web,” Pape continued. “It’s how we find things that we don’t already know about. It’s an incredibly powerful position for a single entity [Google] to occupy, as it has the power to shape perceptions, shift spending and literally make or break entire economies and political campaigns, and to determine what and how we think about the world.”

You have to realize that nothing is truly free.

Sure, we use Google for everything from looking for a local pet groomer to finding Tom Arnold’s IMDb page. (There are a few other things in between.) Google isn’t allowing us to search out of the goodness of its heart. When we use Google, we’re essentially partaking in a big market research project, in which our information is tracked, analyzed and commoditized. Basically, our profiles and search results are sold to the highest bidders. We are the product—built upon our usage. Have you taken the time to read Google’s lengthy terms of service agreement? I doubt it.

How else is Sergey Brin going to pay for his new heliport or pet llama?

Stupid free Google.

Google’s current model makes us passive consumers who are fed search results from a black box system into which none of us have any insight. Plus, all of those searches are stored, so good luck with any future political career if a hacker happens to get a hold of that information.

Presearch’s idea is to let the community look under the hood and actively participate in the system, using cryptocurrency to align participants’ incentives and create a ground-up, community-driven alternative to Google’s monopoly.

“Every time you search, you receive a fraction of a PRE token, which is our cryptocurrency,” explained Pape. “Active community members can also receive bonuses for helping to improve the platform, and everyone who refers a new user can earn up to 25 bonus PRE.”

Tokens can be swapped for other cryptocurrencies, such as Bitcoin, used to buy advertising, sold to advertisers or spent on merchandise via Presearch’s online platform.
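
To make those mechanics concrete, here is a minimal sketch of a reward ledger. It is purely illustrative: the per-search reward rate, the names and the bookkeeping are assumptions, not Presearch’s actual implementation; only the “fraction of a token per search” rule and the 25 PRE referral cap come from Pape’s description.

```python
# Hypothetical reward ledger inspired by Pape's description.
# REWARD_PER_SEARCH is an assumed value; the real rate may differ.
REWARD_PER_SEARCH = 0.25   # fraction of a PRE token credited per search
REFERRAL_BONUS_CAP = 25.0  # "up to 25 bonus PRE" per referred user

class RewardLedger:
    def __init__(self):
        self.balances = {}  # user id -> PRE token balance

    def record_search(self, user):
        """Credit a fraction of a PRE token for one search."""
        self.balances[user] = self.balances.get(user, 0.0) + REWARD_PER_SEARCH

    def record_referral(self, referrer, earned):
        """Credit a referral bonus, capped at 25 PRE per referred user."""
        bonus = min(earned, REFERRAL_BONUS_CAP)
        self.balances[referrer] = self.balances.get(referrer, 0.0) + bonus

ledger = RewardLedger()
ledger.record_search("alice")
ledger.record_referral("alice", earned=30.0)  # capped to 25 PRE
print(ledger.balances["alice"])               # 25.25
```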

Presearch’s ethos is to let users personalize the search engine rather than allowing analytics to be gamed against them, so people are shown what they actually want to see. Users can specify their preferences to access the information they want, rather than being enveloped in filter bubbles that reinforce their prejudices and bad behaviors simply to make them click on more ads.

“We want to empower people rather than control them,” Pape said. “The way to do that is to give them choices and make it easy for them to ‘change the channel,’ so to speak, if the program they’re being served isn’t resonating with them.”

Another thing to fear about Google, aside from the search engine being turned on its head and used as a surveillance tool in a not-so-distant dystopian future, is an idea mentioned in Jon Ronson’s book, So You’ve Been Publicly Shamed. People’s lives have been ruined by Google search results that live on forever after false, scandalous accusations.

How will Presearch safeguard us against this?

“We are looking at a potential model where people could stake their tokens to upvote or downvote results, and then enable community members to vote on those votes,” said Pape. “This would enable mechanisms to identify false information and provide penalties for those who promote it. This is definitely a tricky subject that we will involve the community in developing policies for.”

Pape’s vision is very much aligned with Pied Piper’s on HBO’s Silicon Valley.

“It is definitely pretty accurate… a little uncanny, actually,” Pape said after his staff made him watch the latest season. “It was easy to see where the show drew its inspiration from.”

But truth is stranger than fiction. “The problems a decentralized internet is solving are real, and they will become more and more apparent as the Big Tech companies continue to clamp down on the original free and open internet in favor of walled gardens and proprietary protocols,” he explained. “Hopefully the real decentralized web will be the liberating success that so many of us envision.”

Obviously an alternative to Google’s search monopoly is a good thing. And Pape feels that breaking up Google might help in the short term, but “introducing government control is just that—introducing more control,” Pape said. “We would rather offer a free market solution that enables people to make their own choices, which provides alignment of incentives and communities to create true alternatives to the current dominant forces.”

Presearch may or may not be the ultimate solution, but it’s a step in the right direction.


[Source: This article was published in techbullion.com By Linda S. Davis - Uploaded by the Association Member: Anna K. Sasaki]

A large research project can be overwhelming, but there are techniques that you can use to make your project more manageable. You simply need to start with a specific plan, and focus on effective search techniques that take advantage of all available resources. By working smarter rather than harder, you will not only finish your project more quickly, but you will also acquire higher quality information than you would be able to find through hours of unfocused researching. Follow the tips below to increase your efficiency and improve the quality of your work.

Try to Start with Broad Overviews

The best way to understand a topic is to start your research by reading a general overview. This will help you to focus your research question, lead you to valuable sources and give you context for understanding your topic. High school students and students in introductory courses can consider beginning a research project by reading encyclopedia articles. Students doing more advanced or specialized research should look for review articles in appropriate journals. In addition to helping you understand your subject, the resources section of an encyclopedia article or the extensive bibliography of a review article will provide you with quality sources on your topic without the need to search for them. This can save you hours of needless work.

Formulate Questions

If you go to the library or perform a computer search in order to research a large topic, like the American Revolution or genetic theory, you will be quickly overwhelmed. For this reason, you should formulate specific, focused questions to answer. By asking yourself how England’s involvement with other European powers influenced the American Revolution or how imaging techniques contributed to the development of genetic theory, for example, you will be able to focus your research and save yourself time.

Have a Plan

Before you start your research, have a goal in mind, and make a plan for reaching this goal. If you just go poking around the internet or the library, you are unlikely to get much accomplished. Instead of searching blindly, focus on answering the specific questions you have formulated, locating a certain number of resources, getting a broad overview of your topic or some other specific goal. Setting small, achievable goals will make your task less overwhelming and easier to complete.

Take Notes

Many students gather sources by collecting books, articles, and lists of bookmarks without reading or even skimming them until the project deadline looms. This creates a time crunch. To avoid this situation, spend time taking notes in a notebook, on note cards or on your computer. As you research, jot down applicable information from your sources, and note where this information is located. By taking notes as you go, you will be better able to gauge how much more research is necessary. You will also make writing your paper, preparing your presentation or completing your project a quicker and simpler undertaking.

Master Google Searches

Google is a search engine with many powerful features that allow you to find what you want quickly. Unfortunately, many students are unaware of these features, so they spend needless hours wading through pages of irrelevant search results. By spending a few minutes on Google’s tips pages, you can learn how to get the most out of internet searching.

Take Advantage of Top Lists

Most university and regional libraries have various subject-specific lists of resources on their web pages. These are well-organized, comprehensive listings of quality sources from each library’s specific collection of databases and e-resources, and they cover a large variety of topics. Rather than spending your time wading through substandard resources, peruse the lists offered by your library, and save yourself some time.

Ask a Librarian for Help

Reference librarians can help you find what you need quickly and teach you research tricks that will help you on your current project in addition to future projects. By asking for help, you can save yourself time and frustration. Even if you cannot go to the physical library, most libraries also offer consultations over the phone, by e-mail or through virtual chat platforms.


[Source: This article was published in nytimes.com By Whitson Gordon - Uploaded by the Association Member: Patrick Moore]

Even if you’re already a Google pro, these tricks will get you to your desired results even faster.

Like it or not, Google is most people’s portal to the internet. And when you’re searching for something simple — like the latest news about Iran — Google will usually get you what you want on the first try. But if you’re trying to find something a bit more niche, you may need to do some digging. Here are a few tricks to keep up your sleeve that will make life easier.

Use quotation marks to find a specific phrase

It’s one thing to search for a couple of words, like Sony HT-Z9F soundbar, and find the product(s) you’re seeking. But let’s say you need more specific information — like the dimensions of the speaker drivers inside that soundbar. Searching for HT-Z9F soundbar driver diameter does not return any pages that list that particular spec, nor does including the word inches. Instead, we need to think about how this would be phrased exactly on the page, and use quotation marks to narrow our search.

When you put quotation marks around a collection of words, it tells Google to look for the words only in that order. So, sony HT-Z9F inch drivers (don’t worry, capitalization doesn’t matter) will search for any page that has the words “inch” and “drivers” on it — but not necessarily together. Searching HT-Z9F soundbar “inch drivers”, on the other hand, narrows our search considerably, producing a result right at the top that lists the exact spec we’re looking for: 2.5-inch drivers. (If you can’t find the terms you searched for on the resulting page, press Ctrl+F on your keyboard — Command+F on a Mac — to locate your words on that page.)

Bonus tip: If you’re looking for a specific page but aren’t sure of the exact words it uses, you can put an asterisk inside those quotes to stand in for any word. For example, if you forgot the title of Taylor Swift’s dance-pop single from “1989,” you could search taylor swift “* it off” and find the “Shake It Off” lyrics you’re hunting down.
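
Under the hood, a quoted phrase is simply part of the query string in the search URL. As a minimal sketch (the helper below is illustrative, not an official Google API), here is how a quoted phrase survives URL encoding:

```python
from urllib.parse import quote_plus

def google_search_url(*terms):
    """Build a Google search URL; pass a quoted string for an exact phrase."""
    return "https://www.google.com/search?q=" + quote_plus(" ".join(terms))

# Loose terms: the words may appear anywhere on the page.
print(google_search_url("sony", "HT-Z9F", "inch", "drivers"))
# Exact phrase, with an asterisk standing in for a forgotten word.
print(google_search_url("taylor swift", '"* it off"'))
```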

Exclude words with the minus sign

It’s frustrating when a search returns oodles of results that have nothing to do with what you’re looking for. This is especially common with homonyms — words that are spelled and pronounced the same but have different meanings. For example, let’s say you’re searching for a music group to play at your wedding. Searching for wedding bands brings up a ton of results, but most are for wedding rings — often called bands — not musicians that play at wedding receptions. The minus sign is your friend here. Think of a word that would appear on all the irrelevant pages — in this case, “jewelry” or “jeweler” is probably a good bet — and include it with a minus sign in your search: wedding bands -jewelry. Just like that, you’ve got yourself a bunch of sites that review wedding bands across the country.

I also use this often for products with similarly-named siblings — say, Apple’s MacBook line, which includes the MacBook, MacBook Air, and MacBook Pro. Getting too many results for the Air and Pro? Just eliminate them from your search with macbook -air -pro and you’ll get more relevant results.
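
In query-string terms, an exclusion is just a term prefixed with a minus sign. Restating the same illustrative helper from above to show the pattern:

```python
from urllib.parse import quote_plus

def google_search_url(*terms):
    """Illustrative helper; terms prefixed with '-' are exclusions."""
    return "https://www.google.com/search?q=" + quote_plus(" ".join(terms))

print(google_search_url("wedding bands", "-jewelry"))
print(google_search_url("macbook", "-air", "-pro"))
```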

Narrow your search to a specific time period

If your head is spinning after that last one, here’s an easy tip for you. Occasionally, search results will consist of older articles that have ranked on a given topic but haven’t been updated to include recent changes. If you encounter this problem, you can put a date restriction on the results by clicking the Tools button under Google’s search bar, and then clicking the “Any Time” drop-down. You can narrow your results to the previous week, month, year, or a custom time frame.

Search your favorite sites with the “site:” operator

If you’re looking for an article you read a while back, but can’t find now — or if you specifically want to see what one of your most trusted sites has to say about a topic — you can use the site: operator to limit your search to that specific publication. (This is especially useful for sites that don’t have a search function — though it’s often better than a site’s built-in search bar, too.)

Let’s say I want to read about the Iran nuclear deal, but I prefer coverage from The New York Times. Instead of just Googling US iran deal for the latest news, I can search site:nytimes.com Iran deal to see coverage only from The Times. This also allows me to see everything The Times has done on the topic going back weeks or months, rather than my results getting cluttered with versions of today’s news from other publications.

Add search shortcuts to your browser’s address bar

Ready for a more advanced lesson? Tricks like the site: operator are great, but they take a while to type out — especially if you search for Times content regularly. You can save yourself precious seconds on every search by creating a short keyword for bits of text you search regularly, if your browser supports it, and most do. That way, instead of typing site:nytimes.com every time, you can just type nyt in your browser’s address bar, add your search terms, and get right to the good stuff.

To do this, perform an example search on Google, then copy the URL from the address bar. Using the above example, my URL is: https://www.google.com/search?q=site%3Anytimes.com+iran+deal

This is what we’ll use to create our shortcut. In Chrome, right-click the address bar, choose “Edit Search Engines,” and click “Add” to create a new one with nyt as the keyword. In Firefox, right-click the Bookmarks Bar and create a new bookmark instead with nyt as the keyword. Paste the search URL you copied earlier into the “Search Engine” or “Location” box, and replace your search terms with %s (making sure to leave in any terms you want to keep as part of the keyword). So, since I want my nyt shortcut to search site:nytimes.com and whatever search terms I add, my URL would look like this: https://www.google.com/search?q=site%3Anytimes.com+%s

See how I replaced iran+deal with %s in the URL? Now, whenever I type nyt into the address bar, I can search The New York Times for any terms I want. I use this for all kinds of common searches: sites I like (nyt searches site:nytimes %s), authors I trust (jk searches Jolie Kerr %s), or — if you want to get really advanced — other URL tricks, like getting driving directions from Google Maps (http://maps.google.com/maps?f=q&source=s_q&hl=en&q=from+123+main+street+to+%s).
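
What the browser does with that %s placeholder is a simple substitution: it URL-encodes whatever you type after the keyword and splices it into the saved template. A minimal sketch of the same logic (the shortcut table below mirrors the examples above and is illustrative only):

```python
from urllib.parse import quote_plus

# Hypothetical keyword -> search-template table.
SHORTCUTS = {
    "nyt": "https://www.google.com/search?q=site%3Anytimes.com+%s",
    "jk":  "https://www.google.com/search?q=Jolie+Kerr+%s",
}

def expand(address_bar_input):
    """Mimic address-bar keyword expansion: 'nyt iran deal' -> full URL."""
    keyword, _, terms = address_bar_input.partition(" ")
    return SHORTCUTS[keyword].replace("%s", quote_plus(terms))

print(expand("nyt iran deal"))
# -> https://www.google.com/search?q=site%3Anytimes.com+iran+deal
```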

Find the source of a photo with reverse image search

Finally, not all searches are made up of words. Sometimes, it can be handy to know where a certain photo came from, or to find a larger version of it. You probably know you can type a few words to find a photo with Google’s Image Search, but you might not have realized it works in the other direction too: Drag an image into Image Search and Google will find other versions of that photo for you. A few years ago, I was searching for an apartment, and found one that looked great — it had the number of bedrooms I needed, in the part of town I wanted to be in, and the photos looked nice. But I found it on one of those “members only” apartment listing sites, so I had to pay a monthly subscription in order to get the name, address and contact info of the complex. Not to be outdone, I dragged the building’s photo to my desktop, then dragged it into Google Images. Google immediately found another site that had used that photo: the building’s official website, where I could call or email and ask directly about open units for rent.

Google isn’t the only site that has this feature, either. TinEye is a similar tool with a few more options, if you’re trying to find where the image first appeared. eBay’s iPhone and Android apps also let you search by image, which is useful if you’re trying to find a rare piece of china with no markings, or something like that. It doesn’t always work, but when you’re in a bind, it’s worth a shot — and if nothing else, it may give you another clue to add to your search terms.


[Source: This article was published in heartland.org By Chris Talgo and Emma Kaden - Uploaded by the Association Member: Grace Irwin]

The U.S. Department of Justice announced it will launch a wide-ranging probe into possible antitrust behavior by social media and technology giants. Although no companies were specifically named, it’s not hard to guess which corporations will be in the limelight: Amazon, Facebook, and, of course, the mother of all technology titans, Google.

There is certainly a case to be made that these companies have been shady with private user data, stifled competition, and manipulated the flow of information to their benefit. But it’s worth considering whether or not a federal government investigation and possible destruction of these influential companies are really necessary.

As perhaps one of the most powerful companies in the world, Google has the most to lose if the federal government intervenes. According to research by Visual Capitalist, 90.8 percent of all internet searches are conducted via Google and its subsidiaries. For comparison’s sake, Google’s two main competitors — Bing and Yahoo! — comprise less than 3 percent of total searches.

Due to its overwhelming dominance of the search engine industry, Google has nearly complete control over the global flow of information. In other words, Google determines the results of almost all web-based inquiries.

Of course, this is a potentially dangerous situation. With this amount of control over the dispersal of information, Google has the unique ability to sway public opinion, impact economic outcomes, and influence any and all matters of public information. For instance, by altering search results, Google can bury content that it deems unworthy of the public’s view.

The truth is, not only can Google do these things, it already has done them. The tech giant has a long history of manipulating search results and promoting information based on political bias.

On its face, one can easily see how supporting the regulation and breakup of Google could serve the public good. If executed properly (unlike most government interventions), Google web searches would be free of bias and manipulation. The possible unintended consequences of such an intrusion, however, could dwarf any benefits it might bring.

The internet is the most highly innovative and adaptive medium ever developed. In less than two decades, it has brought about a revolution in most aspects of our daily lives, from how we conduct commerce and communicate to how we travel, learn, and access information. The primary reason for this breathtaking evolution is the complete lack of government regulation, intervention, and intrusion into the infrastructure of the internet.

Right now, Google serves as one of the primary pillars of the internet framework. Yes, Google is far from perfect — after all, it is run by humans — but it is an essential component to a thriving internet ecosystem. But this does not mean Google will forever serve as the foundation of the internet — 20 years ago, it didn’t even exist, and 20 years from now, something new will most likely take its place.

As tempting as it is to tinker with the internet and the companies that are currently fostering the dynamic growth and innovation that make the internet so unique, regulating such a complex and intricate system could lead to its downfall at worst and its petrification at best.

Wanting to keep Google from manipulating consumers is a noble notion, but this should happen from the bottom up, not the top down. Consumers should be the ultimate arbiters of which internet-based companies thrive.

Remember (for those who are old enough) that when the internet became mainstream, navigating it through search engines was extremely primitive and challenging. Early search engines such as AltaVista, Ask Jeeves, and Infoseek barely met consumer expectations.

Fast forward to 2019, and ponder how much more convenient Google has made everyday life. From optimized search capability to email to video sharing to navigation, Google provides an all-inclusive package of services that billions of people find useful — at this point in time. Someday, though, a company will surely produce a product superior to Google that protects user data, takes bias out of the equation, and allows for robust competition, all while maintaining and elevating the quality of service. No doubt customers will flock to it.

The awesome, rapid technology innovations of the past 20 years are due in large part to a lack of government regulation. Imagine what progress could be made in the years to come if the government refrains from overregulating and destroying internet companies. That’s not to say that the government shouldn’t take action against illegal activities, but overregulating this dynamic industry to solve trivial matters would do much more harm than good.

The government should take a laissez-faire approach to regulation, especially when it comes to the internet. Consumers should be able to shape industries according to their needs, wants, and desires without the heavy hand of government intervention.

[Originally Published at American Spectator]


[Source: This article was published in aei.org By Shane Tews - Uploaded by the Association Member: Jennifer Levin]

Mozilla announced last week that its Firefox browser will begin using the DNS over HTTPS (DoH) protocol by default in late September. Google plans to begin testing DoH in an upcoming version of Google Chrome in October.

To provide some context, it’s important to note that there are multiple pathways through which internet traffic runs across the world that are supported by numerous back-up structures managed by ISPs and enterprise systems.

The strength of these networks and the internet as a whole has been in the decentralized system of global servers that manage the ever-growing amount of internet traffic. Multiple servers provide redundancy and eliminate single points of failure, and the decentralized process allows many users to use the internet infrastructure without having just a few companies own the routes for the internet’s traffic.

Companies that provide these underlying services are responsible for the transport layer that gives the internet its robust nature. They are the navigators of web traffic from consumers to endpoint providers. These networks mitigate cybersecurity risks for web traffic by deploying cybersecurity tools, detecting and mitigating malware and botnet attacks, and more. They also deploy site blockers mandated by governments for schools and libraries, and parental controls on home networks.

DoH was designed to encrypt web-lookup traffic as part of a new privacy setting, and it fundamentally changes how traffic moves on the web. Under DoH, the Chrome or Firefox browser will send all DNS lookup traffic to a preferred DNS resolver by default, not by the user’s request. This enhances the browser’s knowledge of a user’s habits and interests. It will also obfuscate details about web traffic, breaking many of the Domain Name System (DNS) based controls around malware and monitoring: those queries will no longer be visible or detectable to the network operator, because the traffic passes directly to Google (in the case of Chrome) or Cloudflare (in the case of Firefox).
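
To make the mechanics concrete, here is a minimal DoH lookup in Python against Google’s public JSON resolver at dns.google (a documented endpoint; error handling is omitted for brevity). The DNS question rides inside an ordinary HTTPS request, which is exactly why intermediaries on the path can no longer observe it:

```python
import json
import urllib.request

def doh_lookup(hostname):
    """Resolve a hostname over HTTPS via Google's JSON DoH endpoint.

    Network operators on the path see only a TLS connection to dns.google,
    not the name being looked up.
    """
    url = "https://dns.google/resolve?name=" + hostname + "&type=A"
    with urllib.request.urlopen(url) as resp:
        reply = json.load(resp)
    return [record["data"] for record in reply.get("Answer", [])]

print(doh_lookup("example.com"))  # e.g. ['93.184.216.34']
```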

The re-engineering by Google’s Chrome and Mozilla’s Firefox browsers is thus looking to change the architecture of how their users resolve internet queries, making the browser the top of the pyramid rather than the traditional endpoint. This means Google and Mozilla are working (again) on making network operators such as internet service providers (ISPs) “dumb” pipes whose job will be to transmit and receive encrypted information that only Google’s Chrome browser, or the Firefox browser served by Cloudflare, will be able to see.

As I explained in a previous blog, there are significant concerns around changing the way traffic flows from the current decentralized-by-design process, to a company-specific, centralized process that pushes consumers’ web queries directly to a specific search engine. By nature, browsers are designed to serve up ads to users, not monitor or filter traffic for security concerns.

This change to the usual path of internet traffic will enhance the browsers’ consumer data collection and create security concerns regarding the operation of the network. Google sees the change to its Chrome browser and Android mobile operating system as a method to centralize all traffic and have it flow to their network first. This ensures that it runs under Google’s control, moving from Google’s search engine to the next stop, the actual web address the user wants to go to.

The security concerns arise from the fact that DoH in its current design disables many cybersecurity tools on user devices. Because web query traffic will go directly to the application layer of a specific browser, along a path chosen by the browser company rather than by the enterprise IT system or ISP, the monitoring filters on ISP or enterprise network servers will no longer see the DNS query traffic. DoH encryption means only the browser sees the traffic, bypassing standard security management tools.

This plan has network operators concerned about what will be affected, modified, or broken once this change takes place. What are the trade-offs? What one group calls “surveillance” another calls ad traffic for revenue. DNS was designed to be a decentralized network for efficiency. Now its engineers are concerned about concentrating so much traffic through an edge provider’s browser.

Why does this matter?

The advent of internet governance was meant to ensure a multi-stakeholder audience of the technical community, businesses, law enforcement, and advocacy groups for end users was engaged in any discussion around a change of the network architecture, as well as changes in policies for the use of the internet.  It was always the expectation that the networks comprising the backbone infrastructure would be a significant part of these discussions to ensure operational integrity and security for all internet users.

Allowing a few companies to gain control over even more internet traffic by making a simple change in how users request and receive data could be a game-changer for the entire system. Paul Vixie, one of the original engineers of the Domain Name System, recently stated that “DoH is incompatible with the basic architecture of the DNS because it moves control plane (signaling) messages to the data plane (message forwarding), and that’s a no-no.”

Now is an excellent time to hit the pause button on the DoH proposal and let internet operators do what they do best. It would be better for all internet users to ensure no harm to the underlying network will be done before making a significant change to the architecture of the digital economy’s engine.


[Source: This article was published in enca.com - Uploaded by the Association Member: Rene Meyer]


SAN FRANCISCO - Original reporting will be highlighted in Google’s search results, the company said as it announced changes to its algorithm.

The world’s largest search engine has come under increasing criticism from media outlets, mainly because of its algorithms - a set of instructions followed by computers - that newspapers have often blamed for plummeting online traffic and the industry’s decline.

Explaining some of the changes in a blog post, Google's vice president of news Richard Gingras said stories that were critically important and labor intensive - requiring experienced investigative skills, for example - would be promoted.

Articles that demonstrated “original, in-depth and investigative reporting,” would be given the highest possible rating by reviewers, he wrote on Thursday.

These reviewers - roughly 10,000 people whose feedback contributes to Google’s algorithm - will also determine the publisher’s overall reputation for original reporting, promoting outlets that have been awarded Pulitzer Prizes, for example.

It remains to be seen how such changes will affect news outlets, especially smaller online sites and local newspapers, which have borne the brunt of the changing media landscape.

And as noted by the technology website TechCrunch, it is hard to define exactly what original reporting is: many online outlets build on ‘scoops’ or exclusives with their own original information, a complexity an algorithm may have a hard time picking through.

The Verge - another technology publication - wrote that the emphasis on originality could exacerbate an already frenetic online news cycle by making it lucrative to get breaking news online even faster and without proper verification.

The change comes as Google continues to face criticism for its impact on the news media.

Many publishers say the tech giant’s algorithms - which remain a source of mysterious frustration for anyone outside Google - reward clickbait and allow investigative and original stories to disappear online.


[Source: This article was published in flipweb.org By Abhishek - Uploaded by the Association Member: Jay Harris]

One of the first questions that someone getting into SEO will have is how exactly Google ranks the websites you see in Google Search. Ranking a website means assigning it a position: the first URL you see in Google Search is ranked number 1, and so on. There are various factors involved in ranking websites on Google Search, and a website’s rank is not fixed once it has been decided; you can still move higher. So how does Google determine which URL of a website should come first and which should come lower?

Google’s John Mueller has now addressed this question, explaining in a video how Google picks a website’s URL for Search. John says there are site preference signals involved in determining which URL is shown; the most important, however, are the preference of the site and the preference of the user accessing it.

Here are the site preference signals (a short example follows the list):

  • Link rel=canonical annotations
  • Redirects
  • Internal linking
  • URL in the sitemap file
  • HTTPS preference
  • Nicer looking URLs
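
For instance, the rel=canonical signal is just a tag in a page’s markup naming the URL the site prefers. Here is a minimal, illustrative sketch of reading that signal from a page (Google’s real pipeline weighs it against all the other signals above):

```python
from html.parser import HTMLParser
import urllib.request

class CanonicalFinder(HTMLParser):
    """Collect the href of a <link rel="canonical" href="..."> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

def canonical_url(page_url):
    """Return the URL a page declares as canonical, or None."""
    with urllib.request.urlopen(page_url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical

print(canonical_url("https://example.com/"))
```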

One of the keys, as John Mueller has previously mentioned, is to remain consistent. While John did not explain exactly what he means by being consistent, it presumably means you should keep doing whatever you are doing. One of the best examples of being consistent is posting on your website every day in order to rank higher in search results. If you are not consistent, your website’s ranking might slip and you will have to start all over again. Apart from that, you have to be consistent when it comes to performing SEO as well; if you stop, your website will suffer in the long run.


[Source: This article was published in blogs.scientificamerican.com By Daniel M. Russell and Mario Callegaro - Uploaded by the Association Member: Anthony Frank]

Researchers who study how we use search engines share common mistakes, misperceptions and advice

In a cheery, sunshine-filled fourth-grade classroom in California, the teacher explained the assignment: write a short report about the history of the Belgian Congo at the end of the 19th century, when Belgium colonized this region of Africa. One of us (Russell) was there to help the students with their online research methods.

I watched in dismay as a young student slowly typed her query into a smartphone. This was not going to end well. She was trying to find out which city was the capital of the Belgian Congo during this time period. She reasonably searched [ capital Belgian Congo ] and in less than a second she discovered that the capital of the Democratic Republic of Congo is Kinshasa, a port town on the Congo River. She happily copied the answer into her worksheet.

But the student did not realize that the Democratic Republic of Congo is a completely different country than the Belgian Congo, which used to occupy the same area. The capital of that former country was Boma until 1926, when it was moved to Léopoldville (which was later renamed Kinshasa). Knowing which city was the capital during which time period is complicated in the Congo, so I was not terribly surprised by the girl’s mistake.

The deep problem here is that she blindly accepted the answer offered by the search engine as correct. She did not realize that there is a deeper history here.

We Google researchers know this is what many students do—they enter the first query that pops into their heads and run with the answer. Double checking and going deeper are skills that come only with a great deal of practice—and perhaps a bunch of answers marked wrong on important exams. Students often do not have a great deal of background knowledge to flag a result as potentially incorrect, so they are especially susceptible to misguided search results like this.

In fact, a 2016 report by Stanford University education researchers showed that most students are woefully unprepared to assess content they find on the web. For instance, the scientists found that 80 percent of students at U.S. universities are not able to determine if a given web site contains  credible information. And it is not just students; many adults share these difficulties.

If she had clicked through to the linked page, the girl probably would have started reading about the history of the Belgian Congo, and found out that it has had a few hundred years of wars, corruption, changes in rulers and shifts in governance. The name of the country changed at least six times in a century, but she never realized that because she only read the answer presented on the search engine results page.

Asking a question of a search engine is something people do several billion times each day. It is the way we find the phone number of the local pharmacy, check on sports scores, read the latest scholarly papers, look for news articles, find pieces of code, and shop. And although searchers look for true answers to their questions, the search engine returns results that are attuned to the query, rather than some external sense of what is true or not. So a search for proof of wrongdoing by a political candidate can return sites that purport to have this information, whether or not the sites or the information are credible. You really do get what you search for.

In many ways, search engines make our metacognitive skills come to the foreground. It is easy to do a search that plays into your confirmation bias—your tendency to think new information supports views you already hold. So good searchers actively seek out information that may conflict with their preconceived notions. They look for secondary sources of support, doing a second or third query to gain other perspectives on their topic. They are constantly aware of what their cognitive biases are, and greet whatever responses they receive from a search engine with healthy skepticism.

For the vast majority of us, most searches are successful. Search engines are powerful tools that can be incredibly helpful, but they also require a bit of understanding to find the information you are actually seeking. Small changes in how you search can go a long way toward finding better answers.

The Limits of Search

It is not surprising or uncommon that a short query may not accurately reflect what a searcher really wants to know. What is actually remarkable is how often a simple, brief query like [ nets ] or [ giants ] will give the right results. After all, both of those words have multiple meanings, and a search engine might conclude that searchers were looking for information on tools to catch butterflies, in the first case, or larger-than-life people in the second. Yet most users who type those words are seeking basketball- and football-related sites, and the first search results for those terms provide just that. Even the difference between a query like [the who] versus [a who] is striking. The first set of results are about a classic English rock band, whereas the second query returns references to a popular Dr. Seuss book.

But search engines sometimes seem to give the illusion that you can ask anything about anything and get the right answer. Just like the student in that example, however, most searchers overestimate the accuracy of search engines and their own searching skills. In fact, when Americans were asked to self-rate their searching ability by the Pew Research Center in 2012, 56 percent rated themselves as very confident in their ability to use a search engine to answer a question.

Not surprisingly, the highest confidence scores were for searchers with college degrees (64 percent were “very confident”—by contrast, 45 percent of those who did not have a college degree described themselves that way). Age affects this judgment as well, with 64 percent of those under 50 describing themselves as “very confident,” as opposed to only 40 percent of those older than 50. When talking about how successful they are in their searches, 29 percent reported that they can always find what they are looking for, and 62 percent said they are able to find an answer to their questions most of the time. In surveys, most people tell us that everything they want is online, and conversely, that if they cannot find something via a quick search, then it must not exist, must be out of date, or must not be of much value.

These are the most recent published results, but surveys done at Google in 2018 show that these insights from Pew still hold. What was true in 2012 is still true now: people have great confidence in their ability to search. The only significant change is in their success rates, which have crept up: 35 percent now say they can “always find” what they’re looking for, while 73 percent say they can find what they seek “most of the time.” This increase is largely due to improvements in the search engines, which improve their data coverage and algorithms every year.

What Good Searchers Do

As long as information needs are easy, simple searches work reasonably well. Most people actually do less than one search per day, and most of those searches are short and commonplace. The average query length on Google during 2016 was 2.3 words. Queries are often brief descriptions like: [ quiche recipe ] or [ calories in chocolate ] or [ parking Tulsa ].

And somewhat surprisingly, most searches have been done before. In an average day, less than 12 percent of all searches are completely novel—that is, most queries have already been entered by another searcher in the past day. By design, search engines have learned to associate short queries with the targets of those searches by tracking pages that are visited as a result of the query, making the results returned both faster and more accurate than they otherwise would have been.

A large fraction of queries are searches for another website (called navigational queries, which make up as much as 25 percent of all queries), or for a short factual piece of information (called informational queries, which are around 40 percent of all queries). However, complex search tasks often need more than a single query to find a satisfactory answer. So how can you do better searches? 

First, you can modify your query by changing a term in your search phrase, generally to make it more precise or by adding additional terms to reduce the number of off-topic results. Very experienced searchers often open multiple browser tabs or windows to pursue different avenues of research, usually investigating slightly different variations of the original query in parallel.

You can see good searchers rapidly trying different search queries in a row, rather than just being satisfied with what they get with the first search. This is especially true for searches that involve very ambiguous terms—a query like [animal food] has many possible interpretations. Good searchers modify the query to get to what they need quickly, such as [pet food] or [animal nutrition], depending on the underlying goal.

Choosing the best way to phrase your query means adding terms that:

  • are central to the topic (avoid peripheral terms that are off-topic)
  • you know the definition of (do not guess at a term if you are not certain)
  • leave common terms together in order ( [ chow pet ] is very different than [ pet chow ])
  • keep the query fairly short (you usually do not need more than two to five terms)

You can make your query more precise by limiting the scope of a search with special operators. The most powerful operators include double-quote marks (as in the query [ “exponential growth occurs when” ]), which find only documents containing that phrase in that specific order. Two other commonly used search operators are site: and filetype:. These let you search within only one website (such as [ site:ScientificAmerican.com ]) or for a particular file type, such as a PDF (example: [ filetype:pdf coral bleaching ]).
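
These operators compose: a query is still a single string, with each restriction simply added as another term. A small illustrative helper (not an official API) makes the pattern explicit:

```python
def scoped_query(terms, site=None, filetype=None):
    """Compose a query string with optional site: and filetype: restrictions."""
    parts = []
    if site:
        parts.append("site:" + site)
    if filetype:
        parts.append("filetype:" + filetype)
    parts.append(terms)
    return " ".join(parts)

print(scoped_query('"exponential growth occurs when"'))
print(scoped_query("coral bleaching", filetype="pdf"))
print(scoped_query("sea level rise", site="ScientificAmerican.com"))
```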

Second, try to understand the range of possible search options. Recently, search engines added the capability of searching for images that are similar to a given photo that you upload. A searcher who knows this can find photos online that have features resembling those in the original. By clicking through the similar images, a searcher can often find information about the object (or place) in the image. Searching for matches of my favorite fish photo can tell me not just what kind of fish it is, but also provide links to other fishing locations and ichthyological descriptions of this fish species.

Overall, expert searchers use all of the resources of the search engine and their browsers to search both deeply (by making query variations) and broadly (by having multiple tabs or windows open). Effective searchers also know how to limit a search to a particular website or to a particular kind of document, find a phrase (by using quote marks to delimit the phrase), and find text on a page (by using a text-find tool).

Third, learn some cool tricks. One is the find-text-on-page skill (that is, Command-F on Mac, Control-F on PC), which is unfamiliar to around 90 percent of the English-speaking, internet-using population in the US. In our surveys of thousands of web users, the large majority have to do a slow (and error-prone) visual scan for a string of text on a website. Knowing how to use text-finding commands speeds up your overall search time by about 12 percent (and is a skill that transfers to almost every other computer application).

Fourth, use your critical-thinking skills.  In one case study, we found that searchers looking for the number of teachers in New York state would often do a query for [number of teachers New York ], and then take the first result as their answer—never realizing that they were reading about the teacher population of New York City, not New York State. In another study we asked searchers to find the maximum weight a particular model of baby stroller could hold. How big could that baby be?

The answers we got back varied from two pounds to 250 pounds. At both ends of the spectrum, the answers make no sense (few babies in strollers weigh less than five pounds or more than 60 pounds), but inexperienced searchers just assumed that whatever numbers they found correctly answered their search questions. They did not read the context of the results with much care.  

Search engines are amazingly powerful tools that have transformed the way we think of research, but they can hurt more than help when we lack the skills to use them appropriately and evaluate what they tell us. Skilled searchers know that the ranking of results from a search engine is not a statement about objective truth, but about the best matching of the search query, term frequency, and the connectedness of web pages. Whether or not those results answer the searchers’ questions is still up for them to determine.


[Source: This article was published in ibtimes.co.uk By Anthony Cuthbertson - Uploaded by the Association Member: Robert Hensonw]

A search engine more powerful than Google has been developed by the US Defence Advanced Research Projects Agency (DARPA), capable of finding results within dark web networks such as Tor.

The Memex project was ostensibly developed for uncovering sex-trafficking rings; however, the platform can be used by law enforcement agencies to uncover all kinds of illegal activity taking place on the dark web, leading to concerns about internet privacy.

Thousands of sites that feature on dark web browsers like Tor and I2P can be scraped and indexed by Memex, as well as the millions of web pages ignored by popular search engines like Google and Bing on the so-called Deep Web.

The difference between the dark web and the deep web

The dark web is a section of the internet that requires specialist software tools to access, such as the Tor browser. Originally designed to protect privacy, it is often associated with illicit activities.

The deep web is a section of the open internet that is not indexed by search engines like Google - typically internal databases and forums within websites. It comprises around 95% of the internet.

Websites operating on the dark web, such as the former Silk Road black marketplace, purport to offer anonymity to their users through a form of encryption known as Onion Routing.

While users' identities and IP addresses will still not be revealed through Memex results, the use of an automated process to analyse content could uncover patterns and relationships that could potentially be used by law enforcement agencies to track and trace dark web users.

"We're envisioning a new paradigm for search that would tailor content, search results, and interface tools to individual users and specific subject areas, and not the other way round," said DARPA program manager Chris White.

"By inventing better methods for interacting with and sharing information, we want to improve search for everybody and individualise access to information. Ease of use for non-programmers is essential."

Memex achieves this by addressing the one-size-fits-all approach taken by mainstream search engines, which list results based on consumer advertising and ranking.


'The most intense surveillance state the world has literally ever seen'

The search engine is initially being used by the US Department of Defence to fight human trafficking and DARPA has stated on its website that the project's objectives do not involve deanonymising the dark web.

The statement reads: "The program is specifically not interested in proposals for the following: attributing anonymous services, deanonymising or attributing identity to servers or IP addresses, or accessing information not intended to be publicly available."

Despite this, White has revealed that Memex has been used to improve estimates on the number of services there are operating on the dark web.

"The best estimates there are, at any given time, between 30,000 and 40,000 hidden service Onion sites that have content on them that one could index," White told 60 Minutes earlier this month.

Internet freedom advocates have raised concerns based on the fact that DARPA has revealed very few details about how Memex actually works, which partners are involved and what projects beyond combating human trafficking are underway.

"What does it tell about a person, a group of people, or a program, when they are secretive and operate in the shadows?" author Cassius Methyl said in a post to Anti Media. "Why would a body of people doing benevolent work have to do that?

"I think keeping up with projects underway by DARPA is of critical importance. This is where the most outrageous and powerful weapons of war are being developed.

“These technologies carry the potential for the most intense surveillance/police state that the world has literally ever seen.”


[Source: This article was published in csoonline.com By Josh Fruhlinger- Uploaded by the Association Member: Eric Beaudoin] 

Catch a glimpse of what flourishes in the shadows of the internet.

Back in the 1970s, "darknet" wasn't an ominous term: it simply referred to networks that were isolated from the mainstream of ARPANET for security purposes. But as ARPANET became the internet and then swallowed up nearly all the other computer networks out there, the word came to identify areas that were connected to the internet but not quite of it, difficult to find if you didn't have a map.

The so-called dark web, a catch-all phrase covering the parts of the internet not indexed by search engines, is the stuff of grim legend. But like most legends, the reality is a bit more pedestrian. That's not to say that scary stuff isn't available on dark web websites, but some of the whispered horror stories you might've heard don't make up the bulk of the transactions there.

Here are ten things you might not know about the dark web.

New dark web sites pop up every day...

A 2015 white paper from threat intelligence firm Recorded Future examines the linkages between the Web you know and the darknet. The paths usually begin on sites like Pastebin, originally intended as an easy place to upload long code samples or other text but now often where links to the anonymous Tor network are stashed for a few days or hours for interested parties. 

While searching for dark web sites isn't as easy as using Google—the point is to be somewhat secretive, after all—there are ways to find out what's there.  The screenshot below was provided by Radware security researcher Daniel Smith, and he says it's the product of "automatic scripts that go out there and find new URLs, new onions, every day, and then list them. It's kind of like Geocities, but 2018"—a vibe that's helped along by pages with names like "My Deepweb Site," which you can see on the screenshot.

fresh onions

...and many are perfectly innocent

Matt Wilson, chief information security advisor at BTB Security, says that "there is a tame/lame side to the dark web that would probably surprise most people. You can exchange some cooking recipes—with video!—send email, or read a book. People use the dark web for these benign things for a variety of reasons: a sense of community, avoiding surveillance or tracking of internet habits, or just to do something in a different way."

It's worth remembering that what flourishes on darknet is material that's been banned elsewhere online. For example, in 2015, in the wake of the Chinese government cracking down on VPN connections through the so-called "great firewall," Chinese-language discussions started popping up on the darknet — mostly full of people who just wanted to talk to each other in peace.

Radware's Smith points out that there are a variety of news outlets on the dark web, ranging from the news website from the hacking group Anonymous to the New York Times, shown in the screenshot here, all catering to people in countries that censor the open internet.

[Screenshot: nytimes]

 

Some spaces are by invitation only

Of course, not everything is so innocent, or you wouldn't be bothering to read this article. Still, "you can't just fire up your Tor browser and request 10,000 credit card records, or passwords to your neighbor’s webcam," says Mukul Kumar, CISO and VP of Cyber Practice at Cavirin. "Most of the verified 'sensitive' data is only available to those that have been vetted or invited to certain groups."

How do you earn an invite into these kinds of dark web sites? "They're going to want to see history of crime," says Radware's Smith. "Basically it's like a mafia trust test. They want you to prove that you're not a researcher and you're not law enforcement. And a lot of those tests are going to be something that a researcher or law enforcement legally can't do."

There is bad stuff, and crackdowns mean it's harder to trust

As recently as last year, many dark web marketplaces for drugs and hacking services featured corporate-level customer service and customer reviews, making navigating simpler and safer for newbies. But now that law enforcement has begun to crack down on such sites, the experience is more chaotic and more dangerous.

"The whole idea of this darknet marketplace, where you have a peer review, where people are able to review drugs that they're buying from vendors and get up on a forum and say, 'Yes, this is real' or 'No, this actually hurt me'—that's been curtailed now that dark marketplaces have been taken offline," says Radware's Smith. "You're seeing third-party vendors open up their own shops, which are almost impossible to vet yourself personally. There's not going to be any reviews, there's not a lot of escrow services. And hence, by these takedowns, they've actually opened up a market for more scams to pop up."

Reviews can be wrong, products sold under false pretenses—and stakes are high

There are still sites where drugs are reviewed, says Radware's Smith, but keep in mind that they have to be taken with a huge grain of salt. A reviewer might get a high from something they bought online, but not understand what the drug was that provided it.

One reason these kinds of mistakes are made? Many dark web drug manufacturers will also purchase pill presses and dies, which retail for only a few hundred dollars and can create dangerous lookalike drugs. "One of the more recent scares that I could cite would be Red Devil Xanax," he said. "These were sold as some super Xanax bars, when in reality, they were nothing but horrible drugs designed to hurt you."

The dark web provides wholesale goods for enterprising local retailers...

Smith says that some traditional drug cartels make use of the dark web networks for distribution—"it takes away the middleman and allows the cartels to send from their own warehouses and distribute it if they want to"—but small-time operators can also provide the personal touch at the local level after buying drug chemicals wholesale from China or elsewhere from sites like the one in the screenshot here. "You know how there are lots of local IPA microbreweries?" he says. "We also have a lot of local micro-laboratories. In every city, there's probably at least one kid that's gotten smart and knows how to order drugs on the darknet, and make a small amount of drugs to sell to his local network."

[Screenshot: xanax]

 

...who make extensive use of the gig economy

Smith describes how the darknet intersects with the unregulated and distributed world of the gig economy to help distribute contraband. "Say I want to have something purchased from the darknet shipped to me," he says. "I'm not going to expose my real address, right? I would have something like that shipped to an AirBnB—an address that can be thrown away, a burner. The box shows up the day they rent it, then they put the product in an Uber and send it to another location. It becomes very difficult for law enforcement to track, especially if you're going across multiple counties."

Not everything is for sale on the dark web

We've spent a lot of time talking about drugs here for a reason. Smith calls narcotics "the physical cornerstone" of the dark web; "cybercrime—selling exploits and vulnerabilities, web application attacks—that's the digital cornerstone. Basically, I'd say a majority of the darknet is actually just drugs and kids talking about little crimes on forums."

Some of the scarier sounding stuff you hear about being for sale often turns out to be largely rumors. Take firearms, for instance: as Smith puts it, "it would be easier for a criminal to purchase a gun in real life versus the internet. Going to the darknet is adding an extra step that isn't necessary in the process. When you're dealing with real criminals, they're going to know someone that's selling a gun."

Specific niches are in

Still, there are some very specific darknet niche markets out there, even if they don't have the same footprint that narcotics does. One that Smith drew my attention to was the world of skimmers, devices that fit into the slots of legitimate credit and ATM card readers and grab your bank account data.

And, providing another example of how the darknet marries physical objects for sale with data for sale, the same sites also provide data sheets and manuals for various popular ATM models. Among the gems available in these sheets are the default passwords for many popular internet-connected models; we won't spill the beans here, but for many it's the same digit repeated five times.

[Screenshot: ATM skimmers]

 

It's still mimicking the corporate world

Despite the crackdown on larger marketplaces, many dark web sites are still doing their best to simulate the look and feel of more corporate sites.

[Screenshot: Elude]

 

The occasional swear word aside, for instance, the onion site for the Elude anonymous email service shown in this screenshot looks like it could come from any above-board company.

One odd feature of corporate software that has migrated to the dark web: the omnipresent software EULA. "A lot of times there's malware I'm looking at that offers terms of service that try to prevent researchers from buying it," he says. "And often I have to ask myself, 'Is this person really going to come out of the dark and try to sue someone for doing this?'"

And you can use the dark web to buy more dark web

And, to prove that any online service can, eventually, be used to bootstrap itself, we have this final screenshot from our tour: a dark web site that will sell you everything you need to start your own dark web site.

[Screenshot: docker]

 

Think of everything you can do there—until the next crackdown comes along.

