Protect yourself by learning about this mysterious digital world

Below the surface of the internet you recognize and use for everyday browsing lies a shadowy digital netherworld. According to a report by Cybersecurity Ventures, cybercrime is projected to cost the world more than $6 trillion annually by 2021. At the heart of most cybercrime is the Dark Web.

The Dark Web is making its way into the public sphere more and more, but much remains unclear and misunderstood about this mysterious digital world that most of us will never see. Here’s what you need to know:

Three Layers of the Web

The World Wide Web has three distinct layers. The first is the Surface Web, where most people do searches using standard browsers. The second is the Deep Web, which is not indexed in standard search engines and is accessed by logging in directly to a site; it often requires some form of authentication for access. Finally, there is the Dark Web, which is only accessible through specific browsers. The most common such browser, Tor, encrypts all traffic and allows users to remain anonymous.

Gaining access to Dark Web sites often requires an invitation which is offered only after a substantial vetting process. Purveyors of these sites want to keep out law enforcement, although “white hat” hackers (computer security experts) and law enforcement have successfully broken through. Some identity theft protection services provide Dark Web monitoring to see if your personal information, such as your credit card number, has been stolen. Often it is through monitoring the Dark Web that security professionals first become aware of massive data breaches, by spotting large troves of personal information offered for sale.

It is on these criminal Dark Web sites that all kinds of malware, like ransomware, are bought and sold. Other goods and services bought, sold and leased on these Dark Web cybercrime websites include login credentials to bank accounts, personal information stolen through data breaches, skimmers (devices to attack credit card processing equipment and ATMs) and ATM manuals that include default passwords.

Be Aware of Cybercrime Tools

Amazingly, the Dark Web sites have ratings and reviews, tech support, software updates, sales and loyalty programs just like regular retail websites. Many also offer money laundering services. Additionally, botnets (short for “robot network”) of compromised computers can be leased on the Dark Web to deliver malware as well as phishing and spear phishing emails (these appear to be sent from a trusted sender, but are seeking confidential information).

While the actual number of cybercriminal geniuses is relatively small, they’ve developed a lucrative business model. They create sophisticated malware, other cybercrime tools and their delivery systems, then sell or lease those tools to less sophisticated criminals.

The proliferation of ransomware attacks provides a good example of how this business model operates. Ransomware infects your computer and encrypts all of your data. Once your data has been encrypted, you are told that a ransom must be paid within a short period of time or your data will be destroyed. Ransomware attacks have increased dramatically in the past few years and are now the fastest-growing cybercrime.

Cybersecurity Ventures says companies are victimized by ransomware every 14 seconds, at a cost of $11.5 billion worldwide this year. While the creation and development of new ransomware strains requires great knowledge and skill, most ransomware attacks are being perpetrated by less sophisticated cybercriminals who purchase the ransomware on the Dark Web.

Phishing, and more targeted spear phishing, have long been the primary means of delivering malware, such as ransomware and the keystroke-logging malware used for identity theft. Phishing and spear phishing lure victims into clicking links within emails that download malware onto their computer systems.

Sophisticated cybercriminals now use artificial intelligence to gather personal information from social media such as Twitter, Facebook, Instagram and other sites to produce spear phishing emails with high success rates.

How to Protect Yourself

The best thing you can do to protect yourself from having your information turn up on the Dark Web is to avoid downloading the malware that can lead to your information being stolen or your computer being made a part of a botnet. Never click on any links in an email regardless of how legitimate the email may appear unless you have confirmed that the email is indeed legitimate.

Relying on security software is not enough to protect you, because the best security software is always at least a month behind the latest strains of malware. Regardless of how protective you are of your personal information, you are only as safe as the legitimate institutions that have your information.

In this era of constant data breaches, it is advisable to use an identity theft protection service that will monitor the Dark Web and alert you if your information appears there. There are also websites that offer guidance on what to do if this happens to you. These monitors are a small flashlight shedding a beam on a very dark section of the digital universe, and they may help you avoid major headaches before it's too late.

[Source: This article was published in nextavenue.org By Steve Weisman - Uploaded by the Association Member: David J. Redcliff]

Categorized in Deep Web

"In the future, everyone will be anonymous for 15 minutes." So said the artist Banksy, but following the rush to put everything online, from relationship status to holiday destinations, is it really possible to be anonymous - even briefly - in the internet age?

That saying, a twist on Andy Warhol's famous "15 minutes of fame" line, has been interpreted to mean many things by fans and critics alike. But it highlights the real difficulty of keeping anything private in the 21st Century.

"Today, we have more digital devices than ever before and they have more sensors that capture more data about us," says Prof Viktor Mayer-Schoenberger of the Oxford Internet Institute.

And it matters. According to a survey from the recruitment firm Careerbuilder, in the US last year 70% of companies used social media to screen job candidates, and 48% checked the social media activity of current staff.

Also, financial institutions can check social media profiles when deciding whether to hand out loans.

Meanwhile, companies create models of buying habits, political views and even use artificial intelligence to gauge future habits based on social media profiles.

One way to try to take control is to delete social media accounts, which some did after the Cambridge Analytica scandal, when 87 million people had their Facebook data secretly harvested for political advertising purposes.

While deleting social media accounts may be the most obvious way to remove personal data, this will not have any impact on data held by other companies.

Fortunately, in some countries the law offers protection.

In the European Union the General Data Protection Regulation (GDPR) includes the "right to be forgotten" - an individual's right to have their personal data removed.

In the UK, that right is policed by the Information Commissioner's Office (ICO). Last year it received 541 requests to have information removed from search engines, according to data shown to the BBC, up from 425 the year before, and 303 in 2016-17.

The actual figures may be higher, as the ICO says it often only becomes involved after an initial complaint made to the company that holds the information has been rejected.

But ICO's Suzanne Gordon says it is not clear-cut: "The GDPR has strengthened the rights of people to ask for an organisation to delete their personal data if they believe it is no longer necessary for it to be processed.

"However, this right is not absolute and in some cases must be balanced against other competing rights and interests, for example, freedom of expression."

The "right to be forgotten" shot to prominence in 2014 and led to a wide-range of requests for information to be removed - early ones came from an ex-politician seeking re-election, and a paedophile - but not all have to be accepted.

Companies and individuals that have the money can hire experts to help them out.

A whole industry is being built around "reputation defence" with firms harnessing technology to remove information - for a price - and bury bad news from search engines, for example.

One such company, Reputation Defender, founded in 2006, says it has a million customers including wealthy individuals, professionals and chief executives. It charges around £5,000 ($5,500) for its basic package.

It uses its own software to alter the results of Google searches about its clients, helping to lower less favourable stories in the results and promote more favourable ones instead.

"The technology focuses on what Google sees as important when indexing websites at the top or bottom of the search results," says Tony McChrystal, managing director.

"Generally, the two major areas Google prioritises are the credibility and authority the web asset has, and how users engage with the search results and the path Google sees each unique individual follow.

"We work to show Google that a greater volume of interest and activity is occurring on sites that we want to promote, whether they're new websites we've created, or established sites which already appear in the [Google results pages], while sites we are seeking to suppress show an overall lower percentage of interest."

The firm sets out to achieve its specified objective within 12 months.

"It's remarkably effective," he adds, "since 92% of people never venture past the first page of Google and more than 99% never go beyond page two."

Prof Mayer-Schoenberger points out that, while reputation defence companies may be effective, "it is hard to understand why only the rich that can afford the help of such experts should benefit and not everyone".

So can we ever completely get rid of every online trace?

"Simply put, no," says Rob Shavell, co-founder and chief executive of DeleteMe, a subscription service which aims to remove personal information from public online databases, data brokers, and search websites.

"You cannot be completely erased from the internet unless somehow all companies and individuals operating internet services were forced to fundamentally change how they operate.

"Putting in place strong sensible regulation and enforcement to allow consumers to have a say in how their personal information can be gathered, shared, and sold would go a long way to addressing the privacy imbalance we have now."

[Source: This article was published in bbc.com By Mark Smith - Uploaded by the Association Member: Jay Harris]

Categorized in Internet Privacy

Reverse image search is one of the most well-known and easiest digital investigative techniques, with two-click functionality of choosing “Search Google for image” in many web browsers. This method has also seen widespread use in popular culture, perhaps most notably in the MTV show Catfish, which exposes people in online relationships who use stolen photographs on their social media.

However, if you only use Google for reverse image searching, you will be disappointed more often than not. Limiting your search process to uploading a photograph in its original form to just images.google.com may give you useful results for the most obviously stolen or popular images, but for most any sophisticated research project, you need additional sites at your disposal — along with a lot of creativity.

This guide will walk through detailed strategies for using reverse image search in digital investigations, with an eye towards identifying people and locations, along with determining an image's progeny. After detailing the core differences between the search engines, this guide tests Yandex, Bing, and Google against five images showing different objects and from various regions of the world.

Beyond Google

The first and most important piece of advice on this topic cannot be stressed enough: Google reverse image search isn’t very good.

As of this guide’s publication date, the undisputed leader of reverse image search is the Russian site Yandex. After Yandex, the runners-up are Microsoft’s Bing and Google. A fourth service that could also be used in investigations is TinEye, but this site specializes in intellectual property violations and looks for exact duplicates of images.

Yandex

Yandex is by far the best reverse image search engine, with a scary-powerful ability to recognize faces, landscapes, and objects. This Russian site draws heavily upon user-generated content, such as tourist review sites (e.g. FourSquare and TripAdvisor) and social networks (e.g. dating sites), for remarkably accurate results with facial and landscape recognition queries.

Its strengths lie in photographs taken in a European or former-Soviet context. While photographs from North America, Africa, and other places may still return useful results on Yandex, you may find yourself frustrated by scrolling through results mostly from Russia, Ukraine, and eastern Europe rather than the country of your target images.

To use Yandex, go to images.yandex.com, then choose the camera icon on the right.

From there, you can either upload a saved image or type in the URL of one hosted online.
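If you want to script this step rather than click through the interface, the same query can be expressed as a URL. Below is a minimal sketch in Python; it assumes the unofficial rpt=imageview query parameter that the Yandex images page currently produces in the browser, which is not a documented API and could change at any time.

    # Build a Yandex reverse-image-search URL for an image hosted online.
    # The "rpt=imageview" parameter is an assumption based on the URLs the
    # web interface produces; it is not a documented, stable API.
    from urllib.parse import urlencode

    def yandex_reverse_search_url(image_url: str) -> str:
        params = urlencode({"rpt": "imageview", "url": image_url})
        return "https://yandex.com/images/search?" + params

    print(yandex_reverse_search_url("https://example.com/photo.jpg"))

Opening the printed URL in a browser should give the same results page as submitting the image address manually.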

If you get stuck with the Russian user interface, look out for Выберите файл (Choose file), Введите адрес картинки (Enter image address), and Найти (Search). After searching, look out for Похожие картинки (Similar images), and Ещё похожие (More similar).

The facial recognition algorithms used by Yandex are shockingly good. Not only will Yandex look for photographs that look similar to the one that has a face in it, but it will also look for other photographs of the same person (determined through matching facial similarities) with completely different lighting, background colors, and positions. While Google and Bing may just look for other photographs showing a person with similar clothes and general facial features, Yandex will search for those matches, and also other photographs of a facial match. Below, you can see how the three services searched the face of Sergey Dubinsky, a Russian suspect in the downing of MH17. Yandex found numerous photographs of Dubinsky from various sources (only two of the top results had unrelated people), with the result differing from the original image but showing the same person. Google had no luck at all, while Bing had a single result (fifth image, second row) that also showed Dubinsky.

Yandex is, obviously, a Russian service, and there are worries and suspicions about its ties (or potential future ties) to the Kremlin. While we at Bellingcat constantly use Yandex for its search capabilities, you may be a bit more paranoid than us. Use Yandex at your own risk, especially if you are also worried about using VK and other Russian services. If you aren't particularly paranoid, try searching an un-indexed photograph of yourself or someone you know, and see if Yandex can find you or your doppelganger online.

Bing

Over the past few years, Bing has caught up to Google in its reverse image search capabilities, but is still limited. Bing’s “Visual Search”, found at images.bing.com, is very easy to use, and offers a few interesting features not found elsewhere.

Within an image search, Bing allows you to crop a photograph (button below the source image) to focus on a specific element in said photograph, as seen below. The results with the cropped image will exclude the extraneous elements, focusing on the user-defined box. However, if the selected portion of the image is small, it is worth it to manually crop the photograph yourself and increase the resolution — low-resolution images (below 200×200) bring back poor results.
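The manual crop-and-upscale step is easy to script. Here is a minimal sketch using the Pillow imaging library; the file name and crop box are placeholder values, and the 200×200 floor reflects the observation above rather than any documented limit.

    # Crop a region of interest, then upscale it past the ~200x200 floor
    # below which reverse image searches tend to return poor results.
    # Requires Pillow (pip install Pillow); file names are placeholders.
    from PIL import Image

    img = Image.open("source.jpg")
    region = img.crop((100, 50, 260, 210))  # (left, upper, right, lower)

    # Upscale small crops with a high-quality resampling filter.
    if min(region.size) < 200:
        scale = 200 // min(region.size) + 1
        region = region.resize(
            (region.width * scale, region.height * scale), Image.LANCZOS)

    region.save("crop_upscaled.jpg")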

Below, a Google Street View image of a man walking a couple of pugs was cropped to focus on just the pooches, leading Bing to suggest the breed of dog visible in the photograph (the “Looks like” feature), along with visually similar results. These results mostly included pairs of dogs being walked, matching the source image, but did not always include only pugs; French bulldogs, English bulldogs, mastiffs, and others are mixed in.

Google

By far the most popular reverse image search engine, at images.google.com, Google is fine for most rudimentary reverse image searches. Some of these relatively simple queries include identifying well-known people in photographs, finding the source of images that have been shared quite a bit online, determining the name and creator of a piece of art, and so on. However, if you want to locate images that are not close to an exact copy of the one you are researching, you may be disappointed.

For example, when searching for the face of a man who tried to attack a BBC journalist at a Trump rally, Google can find the source of the cropped image, but cannot find any additional images of him, or even someone who bears a passing resemblance to him.

While Google was not very strong in finding other instances of this man’s face or similar-looking people, it still found the original, un-cropped version of the photograph the screenshot was taken from, showing some utility.

Five Test Cases

For testing out different reverse image search techniques and engines, a handful of images representing different types of investigations are used, including both original photographs (not previously uploaded online) and recycled ones. Because these photographs are included in this guide, the test cases will likely not work as intended in the future, as search engines will index these photographs and integrate them into their results. Thus, screenshots of the results as they appeared at the time of writing are included.

These test photographs include a number of different geographic regions to test the strength of search engines for source material in western Europe, eastern Europe, South America, southeast Asia, and the United States. With each of these photographs, I have also highlighted discrete objects within the image to test out the strengths and weaknesses for each search engine.

Feel free to download these photographs and run them through the search engines yourself to test out your skills.

Olisov Palace In Nizhny Novgorod, Russia (Original, not previously uploaded online)

Isolated: White SUV in Nizhny Novgorod

Isolated: Trailer in Nizhny Novgorod

Cityscape In Cebu, Philippines (Original, not previously uploaded online)

Isolated: Condominium complex, “The Padgett Place”

Isolated: “Waterfront Hotel”

Students From Bloomberg 2020 Ad (Screenshot from video)

Isolated: Student

Av. do Café In São Paulo, Brazil (Screenshot from Google Street View)

Isolated: Toca do Açaí

Isolated: Estacionamento (Parking)

Amsterdam Canal (Original, not previously uploaded online)

Isolated: Grey Heron

Isolated: Dutch Flag (also rotated 90 degrees clockwise)

Results

Each of these photographs were chosen in order to demonstrate the capabilities and limitations of the three search engines. While Yandex in particular may seem like it is working digital black magic at times, it is far from infallible and can struggle with some types of searches. For some ways to possibly overcome these limitations, I’ve detailed some creative search strategies at the end of this guide.

Nizhny Novgorod's Olisov Palace

Predictably, Yandex had no trouble identifying this Russian building. Along with photographs from a similar angle to our source photograph, Yandex also found images from other perspectives, including 90 degrees counter-clockwise (see the first two images in the third row) from the vantage point of the source image.

Yandex also had no trouble identifying the white SUV in the foreground of the photograph as a Nissan Juke.

Lastly, in the most challenging isolated search for this image, Yandex was unsuccessful in identifying the nondescript grey trailer in front of the building. A number of the results look like the one from the source image, but none are an actual match.

Bing had no success in identifying this structure. Nearly all of its results were from the United States and western Europe, showing houses with white/grey masonry or siding and brown roofs.

Likewise, Bing could not determine that the white SUV was a Nissan Juke, instead focusing on an array of other white SUVs and cars.

Lastly, Bing failed in identifying the grey trailer, focusing more on RVs and larger, grey campers.

Google's results for the full photograph are comically bad, returning the House television show and images with very little visual similarity.

Google successfully identified the white SUV as a Nissan Juke, even noting it in the text field search. As seen with Yandex, feeding the search engine an image from a similar perspective as popular reference materials — a side view of a car that resembles that of most advertisements — will best allow reverse image algorithms to work their magic.

Lastly, Google recognized what the grey trailer was (travel trailer / camper), but its “visually similar images” were far from it.

Scorecard: Yandex 2/3; Bing 0/3; Google 1/3

Cebu

Yandex was technically able to identify the cityscape as that of Cebu in the Philippines, but perhaps only by accident. The fourth result in the first row and the fourth result in the second row are of Cebu, but only the second photograph shows any of the same buildings as in the source image. Many of the results were also from southeast Asia (especially Thailand, which is a popular destination for Russian tourists), noting similar architectural styles, but none are from the same perspective as the source.

Of the two buildings isolated from the search (the Padgett Place and the Waterfront Hotel), Yandex was able to identify the latter, but not the former. The Padgett Place is a relatively unremarkable high-rise building filled with condos, while the Waterfront Hotel also has a casino inside, leading to an array of tourist photographs showing its more distinct architecture.

Bing did not have any results that were even in southeast Asia when searching for the Cebu cityscape, showing a severe geographic limitation to its indexed results.

Like Yandex, Bing was unable to identify the building on the left part of the source image.

Bing was unable to find the Waterfront Hotel, both when using Bing's cropping function (which brought back only low-resolution photographs) and when manually cropping and increasing the resolution of the building from the source image. It is worth noting that these two versions of the image, identical except for resolution, brought back dramatically different results.

As with Yandex, Google brought back a photograph of Cebu in its results, but without a strong resemblance to the source image. While Cebu was not in the thumbnails for the initial results, following through to “Visually similar images” will fetch an image of Cebu’s skyline as the eleventh result (third image in the second row below).

As with Yandex and Bing, Google was unable to identify the high-rise condo building on the left part of the source image. Google also had no success with the Waterfront Hotel image.

Scorecard: Yandex 4/6; Bing 0/6; Google 2/6

Bloomberg 2020 Student

Yandex found the source image from this Bloomberg campaign advertisement — a Getty Images stock photo. Along with this, Yandex also found versions of the photograph with filters applied (second result, first row) and additional photographs from the same stock photo series. Also, for some reason, porn.

When isolating just the face of the stock photo model, Yandex brought back a handful of other shots of the same guy (see last image in first row), plus images of the same stock photo set in the classroom (see the fourth image in the first row).

Bing had an interesting search result: it found the exact match of the stock photograph, and then brought back “Similar images” of other men in blue shirts. The “Pages with this” tab of the result provides a handy list of duplicate versions of this same image across the web.

Focusing on just the face of the stock photo model does not bring back any useful results, or provide the source image that it was taken from.

Google recognizes that the image used by the Bloomberg campaign is a stock photo, bringing back an exact result. Google will also provide other stock photos of people in blue shirts in class.

In isolating the student, Google will again return the source of the stock photo, but its visually similar images do not show the stock photo model, rather an array of other men with similar facial hair. We'll count this as a half-win: Google found the original image but provided no information on the specific model, as Yandex did.

Scorecard: Yandex 6/8; Bing 1/8; Google 3.5/8

Brazilian Street View

Yandex could not figure out that this image was snapped in Brazil, instead focusing on urban landscapes in Russia.

For the parking sign [Estacionamento], Yandex did not even come close.

Bing did not know that this street view image was taken in Brazil.

…nor did Bing recognize the parking sign

…or the Toca do Açaí logo.

Despite the fact that the image was directly taken from Google’s Street View, Google reverse image search did not recognize a photograph uploaded onto its own service.

Like Bing and Yandex, Google could not recognize the Portuguese parking sign.

Lastly, Google did not come close to identifying the Toca do Açaí logo, instead focusing on various types of wooden panels, showing how it focused on the backdrop of the image rather than the logo and words.

Scorecard: Yandex 7/11; Bing 1/11; Google 3.5/11

Amsterdam Canal

Yandex knew exactly where this photograph was taken in Amsterdam, finding other photographs taken in central Amsterdam, and even including ones with various types of birds in the frame.

Yandex correctly identified the bird in the foreground of the photograph as a grey heron (серая цапля), also bringing back an array of images of grey herons in a similar position and posture as the source image.

However, Yandex flunked the test of identifying the Dutch flag hanging in the background of the photograph. When rotating the image 90 degrees clockwise to present the flag in its normal pattern, Yandex was able to figure out that it was a flag, but did not return any Dutch flags in its results.

Bing only recognized that this image shows an urban landscape with water, with no results from Amsterdam.

Though Bing struggled with identifying an urban landscape, it correctly identified the bird as a grey heron, including a specialized “Looks like” result going to a page describing the bird.

However, like with Yandex, the Dutch flag was too confusing for Bing, both in its original and rotated forms.

Google noted that there was a reflection in the canal of the image, but went no further than this, focusing on various paved paths in cities and nothing from Amsterdam.

Google was close in the bird identification exercise, but just barely missed it — it is a grey, not great blue, heron.

Google was also unable to identify the Dutch flag. Though Yandex seemed to recognize that the image is a flag, Google’s algorithm focused on the windowsill framing the image and misidentified the flag as curtains.

Final Scorecard: Yandex 9/14; Bing 2/14; Google 3.5/14

Creative Searching

Even with the shortcomings described in this guide, there are a handful of methods to maximize your search process and game the search algorithms.

Specialized Sites

For one, you could use some other, more specialized search engines outside of the three detailed in this guide. The Cornell Lab’s Merlin Bird ID app, for example, is extremely accurate in identifying the type of birds in a photograph, or giving possible options. Additionally, though it isn’t an app and doesn’t let you reverse search a photograph, FlagID.org will let you manually enter information about a flag to figure out where it comes from. For example, with the Dutch flag that even Yandex struggled with, FlagID has no problem. After choosing a horizontal tricolor flag, we put in the colors visible in the image, then receive a series of options that include the Netherlands (along with other, similar-looking flags, such as the flag of Luxembourg).

Language Recognition

If you are looking at a foreign language with an orthography you don't recognize, try using OCR or Google Translate to make your life easier. You can use Google Translate's handwriting tool to detect the language* of a letter that you hand-write, or choose a language (if you already know it) and then write out the word yourself. Below, the name of a cafe (“Hedgehog in the Fog”) is written out with Google Translate's handwriting tool, giving the typed-out version of the word (Ёжик) that can be searched.

*Be warned that Google Translate is not very good at recognizing letters if you do not already know the language, though if you scroll through enough results, you can find your handwritten letter eventually.
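If the handwriting tool proves too fiddly, OCR can sometimes do the transcription for you. Below is a minimal sketch using pytesseract, a Python wrapper around the Tesseract OCR engine; it assumes Tesseract and its Russian language data are installed, and the file name is a placeholder. Accuracy on stylized signage varies, so treat the output as a starting point for a search rather than a certainty.

    # Transcribe foreign-language text in an image so it can be typed into
    # a search engine. Assumes Tesseract plus its Russian data pack are
    # installed (e.g. tesseract-ocr and tesseract-ocr-rus), and pytesseract
    # plus Pillow from pip. The file name is a placeholder.
    from PIL import Image
    import pytesseract

    text = pytesseract.image_to_string(Image.open("cafe_sign.jpg"), lang="rus")
    print(text)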

Pixelation And Blurring

As detailed in a brief Twitter thread, you can pixelate or blur elements of a photograph in order to trick the search engine to focus squarely on the background. In this photograph of Rudy Giuliani’s spokeswoman, uploading the exact image will not bring back results showing where it was taken.

However, if we blur out/pixelate the woman in the middle of the image, it will allow Yandex (and other search engines) to work their magic in matching up all of the other elements of the image: the chairs, paintings, chandelier, rug and wall patterns, and so on.
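This pixelation step can itself be scripted. Below is a minimal sketch with Pillow, which downscales and re-enlarges the selected region with nearest-neighbour sampling to produce hard pixel blocks; the file name and box coordinates are placeholders for the region you want the engine to ignore.

    # Pixelate a rectangular region so reverse image search focuses on the
    # background. Requires Pillow; file names and coordinates are placeholders.
    from PIL import Image

    img = Image.open("photo.jpg")
    box = (420, 160, 640, 560)  # (left, upper, right, lower)
    region = img.crop(box)

    # Shrink, then blow back up with NEAREST to get chunky pixel blocks.
    small = region.resize((8, 8), Image.NEAREST)
    region = small.resize(region.size, Image.NEAREST)

    img.paste(region, box)
    img.save("photo_pixelated.jpg")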

After this pixelation is carried out, Yandex knows exactly where the image was taken: a popular hotel in Vienna.

Conclusion

Reverse image search engines have progressed dramatically over the past decade, with no end in sight. Along with the ever-growing amount of indexed material, a number of search giants have enticed their users to sign up for image hosting services, such as Google Photos, giving these search algorithms an endless amount of material for machine learning. On top of this, facial recognition AI is entering the consumer space with products like FindClone and may already be used in some search algorithms, namely with Yandex. There are no publicly available facial recognition programs that use any Western social network, such as Facebook or Instagram, but perhaps it is only a matter of time until something like this emerges, dealing a major blow to online privacy while also (at that great cost) increasing digital research functionality.

If you skipped most of the article and are just looking for the bottom line, here are some easy-to-digest tips for reverse image searching:

  • Use Yandex first, second, and third, and then try Bing and Google if you still can't find your desired result.
  • If you are working with source imagery that is not from a Western or former Soviet country, then you may not have much luck. These search engines are hyper-focused on those areas, and struggle with photographs taken in South America, Central America/the Caribbean, Africa, and much of Asia.
  • Increase the resolution of your source image, even if it just means doubling or tripling the resolution until it's a pixelated mess. None of these search engines can do much with an image that is under 200×200.
  • Try cropping out elements of the image, or pixelating them if they trip up your results. Most of these search engines will focus on people and their faces like a heat-seeking missile, so pixelate them to focus on the background elements.
  • If all else fails, get really creative: mirror your image horizontally, add some color filters, or use the clone tool on your image editor to fill in elements of your image that are disrupting searches (a couple of these transforms are sketched below).
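The mirroring and color-filter tricks from the last tip are equally scriptable. Below is a minimal sketch with Pillow; whether these transforms actually shake loose better results depends entirely on the image, and the file names are placeholders.

    # Mirror an image and exaggerate its color saturation to produce
    # variants for reverse image searching. File names are placeholders.
    from PIL import Image, ImageEnhance, ImageOps

    img = Image.open("stubborn.jpg")

    ImageOps.mirror(img).save("stubborn_mirrored.jpg")  # horizontal flip
    ImageEnhance.Color(img).enhance(1.6).save("stubborn_recolored.jpg")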

[Source: This article was published in bellingcat.com By Aric Toler - Uploaded by the Association Member: Issac Avila] 

Categorized in Investigative Research

The internet is an iceberg. And, as you might guess, most of us only reckon with the tip. While the pages and media found via simple searches may seem unendingly huge at times, what is submerged and largely unseen – often referred to as the invisible web or deep web – is in fact far, far bigger.

THE SURFACE WEB

What we access every day through popular search engines like Google, Yahoo or Bing is referred to as the Surface Web. These familiar search engines crawl through tens of trillions of pages of available content (Google alone is said to have indexed more than 30 trillion web pages) and bring that content to us on demand. As big as this trove of information is, however, this represents only the tip of the iceberg.

Eric Schmidt, then the CEO of Google, was once asked to estimate the size of the World Wide Web. He estimated that of roughly 5 million terabytes of data, Google had indexed roughly 200 terabytes, or only 0.004% of the total internet.
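That percentage is easy to verify from the two figures given, as the quick calculation below shows.

    # Sanity-check the indexing claim: 200 terabytes indexed out of
    # roughly 5 million terabytes of data overall.
    indexed_tb = 200
    total_tb = 5_000_000
    print(f"{indexed_tb / total_tb * 100:.3f}%")  # prints 0.004%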

THE INVISIBLE WEB

Beneath the Surface Web is what is referred to as the Deep or Invisible Web. It comprises:

  • Private websites, such as VPNs (Virtual Private Networks) and sites that require passwords and logins
  • Limited-access content sites, which limit access in a technical way, such as using Captcha, the Robots Exclusion Standard or no-cache HTTP headers that prevent search engines from browsing or caching them (a quick way to check this is sketched after this list)
  • Unlinked content, with no hyperlinks pointing to it from other pages, which prevents web crawlers from finding it
  • Textual content, often encoded in image or video files or in specific file formats not handled by search engines
  • Dynamic content created for a single purpose and not part of a larger collection of items
  • Scripted content, pages only accessible through JavaScript, as well as content downloaded using Flash and Ajax solutions
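The Robots Exclusion Standard mentioned in the list above is machine-readable, so you can check for yourself whether a given page invites or blocks search engine crawlers. A minimal sketch using only Python's standard library follows; the URLs are placeholders.

    # Check whether a site's robots.txt allows crawlers to fetch a page.
    # Pages that disallow crawling are one slice of the invisible web.
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()
    print(rp.can_fetch("*", "https://example.com/members/archive.html"))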

There are many high-value collections to be found within the invisible web. Some of the material found there that most people would recognize and, potentially, find useful include:

  • Academic studies and papers
  • Blog platforms
  • Pages created but not yet published
  • Scientific research
  • Academic and corporate databases
  • Government publications
  • Electronic books
  • Bulletin boards
  • Mailing lists
  • Online card catalogs
  • Directories
  • Many subscription journals
  • Archived videos
  • Images

But knowing all these materials are out there, buried deep within the web doesn't really help the average user. What tools can we turn to in order to make sense of the invisible web? There really is no easy answer. Sure, the means to search and sort through massive amounts of invisible web information are out there, but many of these tools have an intense learning curve. This can mean sophisticated software that requires no small amount of computer savvy; it can mean energy-sucking search tools that require souped up computers to handle the task of combing through millions of pages of data; or, it can require the searching party to be unusually persistent – something most of us, with our expectations of instantaneous Google search success, won't be accustomed to.

All that being said, we can become acquainted with the invisible web by degrees. The many tools considered below will help you access a sizable slice of the invisible web's offerings. You will find we've identified a number of subject-specific databases and engines; tools with an established filter, making their searches much more narrow.

OPEN ACCESS JOURNAL DATABASES

Open access journal databases (OAJD) are compilations of free scholarly journals maintained in a manner that facilitates access by researchers and others seeking specific information or knowledge. Because these databases consist of unlinked content, they are located in the invisible web.

The vast majority of these journals are of the highest quality, with peer review and extensive vetting of content before publication. However, there has been a trend of journals accepting scholarship without adequate quality controls, under arrangements designed to make money for the publishers rather than to further scholarship. It is important to be careful and review the standards of the database and journals chosen. This helpful guide explains what to look for.

Below is a sample list of well-regarded and reputable databases.

  • "AGRIS" (International Information System for Agricultural Science and Technology) is a global, public domain database maintained in multiple languages by the Food and Agriculture Organization of the United Nations. They provide free access to agricultural research and information.
  • "BioMed Central" is the UK-based publisher of 258 peer-reviewed open access journals. Their published works span science, technology and medicine and include many well-regarded titles.
  • "Copernicus Publications" has been an open-access scientific publisher in Germany since 2001. They are strong supporters of the researchers who create these articles, providing top-level peer review and promotion for their work.
  • "DeGruyter Open" (formerly Versita Open) is one of Germany's leading publishers of open access content. Today DeGruyter Open (DGO) publishes about 400 owned and third-party scholarly journals and books across all major disciplines.
  • "Directory of Open Access Journals is focused on providing access only to those journals that employ the highest quality standards to guarantee content. They are presently a repository of 9,740 journals with more than 1.5 million articles from 133 countries.
  • "EDP Sciences" (Édition Diffusion Presse Sciences) is a France-based scientific publisher with an international mission. They publish more than 50 scientific journals, with some 60,000 published pages annually.
  • "Elsevier of Amsterdam is a world leader in advancing knowledge in the science, technology and health fields. They publish nearly 2,200 journals, including The Lancet and Cell, and over 25,000 book titles, including Gray's Anatomy and Nelson' s Pediatrics.
  • "Hindawi Publishing Corporation", based in Egypt, publishes 434 peer-reviewed, open access journals covering all areas of Science, Technology and Medicine, as well as a variety of Social Sciences.
  • "Journal Seek" (Genamics) touts itself as "the largest completely categorized database of freely available journal information available on the internet," with more than 100,000 titles currently. Categories range from Arts and Literature, through both hard- and soft-sciences, to Sports and Recreation.
  • "The Multidisciplinary Digital Publishing Institute" (MDPI), based in Switzerland, is a publisher of more than 110 peer-reviewed, open access journals covering arts, sciences, technology and medicine.
  • "Open Access Journals Search Engine" (OAJSE), based in India, is a search engine for open access journals from throughout the world, except for India. An extremely simple interface. Note: the site was last updated June 21, 2013.
  • "Open J-Gate" is an India-based e-journal database of millions of journal articles in open access domain. With a worldwide reach, Open J-Gate is updated every day with new academic, research and industry articles.
  • "Open Science Directory" contains about 13,000 scientific journals, with another 7,000 special programs titles.
  • "Springer Open" offers a roster of more than 160 peer-reviewed, open access journals, as well as their more recent addition of free access books, covering all scientific disciplines.
  • "Wiley Open Access", a subsidiary of New Jersey-based global publishers John Wiley & Sons, Inc., publishes peer reviewed open access journals specific to biological, chemical and health sciences.

INVISIBLE WEB SEARCH ENGINES

Your typical search engine's primary job is to locate the surface sites and downloads that make up much of the web as we know it. These searches are able to find an array of HTML documents, video and audio files and, essentially, any content that is heavily linked to or shared online. And often, these engines, Google chief among them, will find and organize this diversity of content every time you search.

The search engines that deliver results from the invisible web are distinctly different. Narrower in scope, these deep web engines tend to access only a single type of data. This is due to the fact that each type of data has the potential to offer up an outrageous number of results. An inexact deep web search would quickly turn into a needle in a haystack. That's why deep web searches tend to be more thoughtful in their initial query requirements.
Below is a list of popular invisible web search engines:

  • "Clusty" is a meta search engine that not only combines data from a variety of different source documents, but also creates "clustered" responses, automatically sorting by category.
  • "CompletePlanet" searches more than 70,000 databases and specialty search engines found only in the invisible web. A search engine as well-suited to casual searchers as it is to researchers.
  • "DigitalLibrarian": A Librarian's Choice of the Best of the Web is maintained by a real librarian. With an eclectic mix of some 45 broad categories, Digital Librarian offers data from categories as diverse as Activism/Non Profits and Railroads and Waterways.
  • "InfoMine" is another librarian-developed internet resource collection, this time from The Regents of the University of California.
  • "InternetArchive" has an eclectic array of categories, starting with the ‘Wayback Machine,' which allows the searcher to locate archived documents, and including an archive of Grateful Dead audience and soundboard recordings. They offer 6 million texts, 1.5 million videos, 1.9 million audio recordings and 126K live music concerts.
  • "The Internet Public Library" (ipl and ipl2) is a non-profit, student-run website at Drexel University. Students volunteer to act as librarians and respond to questions from visitors. Categories of data include those directed to Children and Teens.
  • "SurfWax" is a metasearch engine that offers "practical tools for Dynamic Search Navigation." It offers the option of grabbing results from multiple search engines at the same time, or even designing "SearchSets," which are individualized groups of sources that can be used over and over in searches.
  • "UC Santa Barbara Library" offers access to a diverse group of research databases useful to students, researchers and the casual searcher. It should be noted that many of these resources are password protected. Those that do not display a lock icon are publicly accessible.
  • "USA.gov" offers acess to a huge volume of information, including all types of forms, databases, and information sites representing most government agencies.
  • "Voice of the Shuttle" (VoS) offers access to a diverse assortment of sites, including literature, literary theory, philosophy, history and cultural studies, and includes the daily update of all things "cool."

SUBJECT-SPECIFIC DATABASES

The following lists pool together some mainstream and not so mainstream databases dedicated to particular fields and areas of interest. While only a handful of these tools are able to surface deep web materials, all of the search engines and collections we have highlighted are powerful, extensive bodies of work. Many of the resources these tools surface would likely be overlooked if the same query were made on one of the mainstream engines most users fall back on, like Bing, Yahoo and even Google.

Art & Design

  • "ArtNet" deals with pricing and sourcing work in the art market. They also keep track of the latest news and artists in the industry.
  • "The Metropolitan Museum of Art" site hosts an impressively interactive body of information on their collections, exhibitions, events and research.
  • "Musée du Louvre", the renowned museum, maintains a site filled with navigable sections covering its collections.
  • "The National Gallery of Art" premier museum of arts in our nation's capital, also maintains a site detailing the highlights, exhibitions and education efforts the institution oversees.
  • "Public Art Online" is a resource detailing sources, creators, prices, projects, legal issues, success stories, resources, education and all other aspects of the creation of public art.
  • "Smithsonian Art Inventories Catalog" is a subset of the Smithsonian Institution Research Information System (SIRIS). A browsable database of over 400,000 art inventory items held in public and private collections.
  • "Web Gallery of Art" is a searchable database of European art, containing nearly 34,000 reproductions. Additional database information includes artist biographies, period music and commentaries.

Business

  • "Better Business Bureau" (BBB) Information System Search allows consumers to locate the details of ratings, consumer experience, governmental action and more of both BBB accredited and non-accredited businesses.
  • "BPubs.com" is the business publications search engine. They offer more than 200 free subscriptions to business and trade publications.
  • "BusinessUSA" is an excellent and complete database of everything a new or experienced business owner or employer should know.
  • "EDGAR: U.S. Securities and Exchange Commission" contains a database of Securities and Exchange Commission. Posts copies of corporate filings from US businesses, press releases and public statements.
  • "Global Edge" delivers a comprehensive research tool for academics, students and businesspeople to seek out answers to international business questions.
  • "Hoover's", a subsidiary of Dun & Bradstreet, is one of the best known databases of American and International business. A complete source of company and industry information, especially useful for investors.
  • "The National Bureau of Economic Research is perhaps the leading private, non-partisan research organization dedicated to unbiased analysis of economic policy. This database maintains archives of research data, meetings, activities, working papers and publications.
  • "U.S. Department of Commerce", Bureau of Economic Analysis is the source of many of the economic statistics we hear in the news, including national income and product accounts (NIPAs), gross domestic product, consumer spending, balance of payments and much more.

Legal & Social Services

Science & Technology

  • "Environmental Protection Agency" rganizes the agency's laws and regulations, science and technology, and the many issues affecting the agency and its policies.
  • "National Science Digital Library" (NSDL) is a source for science, technology, engineering and mathematics educational data. It is funded by the National Science Foundation.
  • "Networked Computer Science Technical Reports Library (NCSTRL) was developed as a collaborative effort between NASA Langley, Virginia Tech, Old Dominion University and University of Virginia. It serves as an archive for submitted scientific abstracts and other research products.
  • "Science.gov" is a compendium of more than 60 US government scientific databases and more than 200 websites. Governed by the interagency Science.gov Alliance, this site provides access to a range of government scientific research data.
  • "Science Research" is a free, publicly available deep web search engine that purports to use a sophisticated technology that permits queries to more than 300 science and technology sites simultaneously, with the results collated, ranked and stripped of duplications.
  • "WebCASPAR" provides access to science and engineering data from a variety of US educational institutions. It incorporates a table builder, allowing a combined result from various National Science Foundation and National Center for Education Statistics data sources.
  • "WebCASPAR" World Wide Science is a global scientific gateway, comprised of US and international scientific databases. Because it is multilingual, it allows real-time search and translation of reporting from an extensive group of databases.

Healthcare

  • "Cases Database" is a searchable database of more than 32,000 peer-reviewed medical case reports from 270 journals covering a variety of medical conditions.
  • "Center for Disease Control" (CDC) WONDER's online databases permit access to the substantial public health data resources held by the CDC.
  • "HCUPnet" is an online query system for those seeking access to statistical data from the Agency for Healthcare Research and Quality.
  • "Healthy People" provides rolling 10-year national objectives and programs for improving the health of Americans. They currently operate under the Healthy People 2020 decennial agenda.
  • "National Center for Biotechnology Information" (NCBI) is an offshoot of the National Institutes of Health (NIH). This site provides access to some 65 databases from the various project categories currently being researched.
  • "OMIM" offers access to the combined research of many decades into genetics and genetic disorders. With daily updates, it represents perhaps the most complete single database of this sort of data.
  • "PubMed is a database of more than 23 million citations from the US National Library of Medicine and National Institutes of Health.
  • "TOXNET" is the access portal to the US Toxicology Data Network, an offshoot of the National Library of Medicine.
  • "U.S. National Library of Medicine" is a database of medical research, available grants, available resources. The site is maintained by the National Institutes of Health.
  • "World Health Organization" (WHO) is a comprehensive site covering the many initiatives the WHO is engaged in around the world.

[Source: This article was published in onlineuniversities.com By Philip Bump - Uploaded by the Association Member: Robert Henson]

Categorized in How to

Annotation of a doctored image shared by Rep. Paul A. Gosar on Twitter. (Original 2011 photo of President Barack Obama with then-Indian Prime Minister Manmohan Singh by Charles Dharapak/AP)

To a trained eye, the photo shared by Rep. Paul A. Gosar (R-Ariz.) on Monday was obviously fake.

At a glance, nothing necessarily seems amiss. It appears to be one of a thousand (a million?) photos of a president shaking a foreign leader’s hand in front of a phalanx of flags. It’s easy to imagine that, at some point, former president Barack Obama encountered this particular official and posed for a photo.

Except that the photo at issue is of Iranian President Hassan Rouhani, someone Obama never met. Had he done so, it would have been significant news, nearly as significant as President Trump’s various meetings with North Korean leader Kim Jong Un. Casual observers would be forgiven for not knowing all of this, much less who the person standing next to Obama happened to be. Most Americans couldn’t identify the current prime minister of India in a New York Times survey; the odds they would recognize the president of Iran seem low.

Again, though, there are obvious problems with the photo that should jump out quickly. There’s that odd, smeared star on the left-most American flag (identified as A in the graphic above). There’s Rouhani’s oddly short forearm (B). And then that big blotch of color between the two presidents (C), a weird pinkish-brown blob of unexpected uniformity.

Each of those glitches reflects where the original image — a 2011 photo of Obama with then-Indian Prime Minister Manmohan Singh — was modified. The truncated star was obscured by Singh’s turban. The blotch of color is an attempt to remove the circle from the middle of the Indian flag behind the leaders. The weird forearm is a function of the slightly different postures and sizes of the Indian and Iranian leaders.

President Barack Obama meets with Indian Prime Minister Manmohan Singh in Nusa Dua, on the island of Bali, Indonesia, on Nov. 18, 2011. (Charles Dharapak/AP)

Compared with the original, the difference is obvious. What it takes, of course, is looking.

Tools exist to determine whether a photo has been altered. It’s often more art than science, involving a range of probability more than a certain final answer. The University of California at Berkeley professor Hany Farid has written a book about detecting fake images and shared quick tips with The Washington Post.

  • Reverse image search. Save the photo to your computer and then drop it into Google Image Search. You’ll quickly see where it might have appeared before, useful if an image purports to be over a breaking news event. Or it might show sites that have debunked it.
  • Check fact-checking sites. This can be a useful tool by itself. Images of political significance have a habit of floating around for a while, deployed for various purposes. The fake Obama-Rouhani image, for example, has been around since at least 2015 — when it appeared in a video created by a political action committee supporting Sen. Ron Johnson (R-Wis.).
  • Know what’s hard to fake. In an article for Fast Company, Farid noted that some things, like complicated physical interactions, are harder to fake than photos of people standing side by side. Backgrounds are also often tricky; it’s hard to remove something from an image while accurately re-creating what the scene behind them would have looked like. (It’s not a coincidence that both the physical interaction and background of the “Rouhani” photo were clues that it was fake.)

But, again, you have to care whether you're passing along a fake photo. Gosar didn't. Presented with the image's inaccuracy by a reporter from the Intercept, Gosar replied via tweet that “no one said this wasn't photoshopped.”

“No one said the president of Iran was dead. No one said Obama met with Rouhani in person,” Gosar wrote to the “dim-witted reporter.” “The point remains to all but the dimmest: Obama coddled, appeased, nurtured and protected the worlds No. 1 sponsor of terror.”

As an argument, that may be evaluated on the merits. It is clearly the case, though, that Gosar had no qualms about sharing an edited image. He recognizes, in fact, that the photo is a lure for the point he wanted to make: Obama is bad.

That brings us to a more important point, one that demands a large-type introduction.

The Big Problem with social media

There exists a concept in social psychology called the “Dunning-Kruger effect.” You’ve probably heard of it; it’s a remarkable lens through which to consider a lot of what happens in American culture, including, specifically, politics and social media.

The idea is this: People who don’t know much about a subject necessarily don’t know how little they know. How could they? So after learning a little bit about the topic, a sudden confidence arises. Now knowing more than nothing and not knowing how little of the subject they know, people can feel as though they have some expertise. And then they offer it, even while dismissing actual experts.

“Their deficits leave them with a double burden,” David Dunning wrote in 2011 about the effect, named in part after his research. “Not only does their incomplete and misguided knowledge lead them to make mistakes, but those exact same deficits also prevent them from recognizing when they are making mistakes and other people choosing more wisely.”

The effect is often depicted in a graph like this. You learn a bit and feel more confident talking about it — and that increases and increases until, in a flash, you realize that there’s a lot more to it than you thought. Call it the “oh, wait” moment. Confidence plunges, slowly rebuilding as you learn more, and learn more about what you don’t know. This affects all of us, myself included.

The Dunning-Kruger confidence curve. (Philip Bump/The Washington Post)

Dunning’s effect is apparent on Twitter all the time. Here’s an example from this week, in which the “oh, wait” moment comes at the hands of an actual expert.


One value proposition for social media (and the Internet more broadly) is that this sort of Marshall-McLuhan-in-“Annie-Hall” moment can happen. People can inform themselves about reality, challenge themselves by accessing the vast scope of human knowledge and even be confronted directly by those in positions of expertise.

In reality, though, the effect of social media is often to create a chorus of people who are at a similar, overconfident point in the Dunning-Kruger curve. Another value of the Internet is its ability to create ad hoc like-minded communities, but that also means it can convene like-minded groups around wrong-minded opinions. It’s awfully hard to feel chastened or uninformed when any number of other people vocally share your view. (Why, one could fill hours on a major cable-news network simply by filling panels with people on the dashed-line part of the graph above!)

The Internet facilitates ignorance as readily as it does knowledge. It allows us to build reinforcements around our errors. It allows us to share a fake image and wave away concerns because the target of the image is a shared enemy of our in-group. Or, simply, to accept a faked image as real because you’re either unaware of obvious signs of fakery or unaware of the unlikely geopolitics that surrounds its implications.

I asked Farid, the fake-photo expert, how normal people lingering at the edge of an “oh, wait” moment might avoid sharing altered images.

“Slow down!” he replied. “Understand that most fake news/images/videos are designed to be sensational or outrageous and get you to respond quickly before you’ve had time to think. When you find yourself reacting viscerally, take a breath, slow down, and don’t be so quick to share/like/retweet.”

Unless, of course, your goals are both to be sensational and to get retweets. In that case, go ahead and share the image. You can always rationalize it later.

[Source: This article was published in washingtonpost.com By Philip Bump - Uploaded by the Association Member: Alex Gray]

Categorized in Investigative Research

Friends, you're going to wish you were still making the scene with a magazine after reading this sentence: Google's web trackers are all up in your fap time and there's pretty much nothing (except maybe using a more secure browser like Firefox, reading up on cybersecurity tips from the EFF, refusing to sign in to a Google account and never going online without the protection of a VPN) that anyone can do about it.

From The Verge:

Visitors to porn sites have a “fundamentally misleading sense of privacy,” warn the authors of a new study that examines how tracking software made by tech companies like Google and Facebook is deployed on adult websites.

The authors of the study analyzed 22,484 porn sites and found that 93 percent of them leak data to third parties, including when accessed via a browser’s “incognito” mode. This data presents a “unique and elevated risk,” warn the authors, as 45 percent of porn site URLs indicate the nature of the content, potentially revealing someone’s sexual preferences.

According to the study, trackers built by Google and its creepy always-watching-you subsidiaries were found on over 74% of the porn sites that researchers checked out... for purely scientific reasons, of course. And the fun doesn't stop there! Facebook's trackers appeared on 10% of the websites and, for the discerning surveillance aficionado, 24% of the sites the researchers checked in on were being stalked by Oracle. According to The Verge, "...the type of data collected by trackers varies... Sometimes this information seems anonymous, like the type of web browser you’re using, or your operating system, or screen resolution. But this data can be correlated to create a unique profile for an individual, a process known as ‘fingerprinting.’ Other times the information being collected is more obviously revealing, like a user’s IP address or their phone’s mobile identification number."
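To see why individually bland attributes become identifying in combination, here is a minimal, hypothetical sketch of the fingerprinting idea in Python. Real trackers collect many more signals (installed fonts, canvas rendering, plugins), but the principle is the same: hash the combination into a stable ID that needs no login and works even in incognito mode.

```python
import hashlib

def fingerprint(attrs):
    """Join 'anonymous' attributes in a fixed order and hash them
    into a stable ID that can follow one visitor across sites."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "2560x1440",
    "timezone": "UTC-5",
    "language": "en-US",
}
print(fingerprint(visitor))  # same setup => same ID, on every site that looks
```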

It's enough to give someone performance anxiety.

[Source: This article was published in boingboing.net By SEAMUS BELLAMY - Uploaded by the Association Member: Jay Harris]

Categorized in Search Engine

Google and Facebook collect information about us and then sell that data to advertisers. Websites deposit invisible “cookies” onto our computers and then record where we go online. Even our own government has been known to track us.

When it comes to digital privacy, it’s easy to feel hopeless. We’re mere mortals! We’re minuscule molecules in their machines! What power do we possibly have to fight back?

That was the question I posed to you, dear readers, in the previous “Crowdwise.”

Many of you responded with valuable but frequently repeated suggestions: Use a program that memorizes your passwords and makes every password different. Install an ad blocker in your web browser, like uBlock Origin. Read up on the latest internet scams. If you must use Facebook, visit its Privacy Settings page and limit its freedom to target ads to you.

What I sought, though, was non-obvious ideas.

It turns out that “digital privacy” means different things to different people.

“Everyone has different concerns,” wrote Jamie Winterton, a cybersecurity researcher at Arizona State University. “Are you worried about private messaging? Government surveillance? Third-party trackers on the web?” Addressing each of these concerns, she noted, requires different tools and techniques.

“The number one thing that people can do is to stop using Google,” wrote privacy consultant Bob Gellman. “If you use Gmail and use Google to search the web, Google knows more about you than any other institution. And that goes double if you use other Google services like Google Maps, Waze, Google Docs, etc.”

Like many other readers, he recommended DuckDuckGo, a rival web search engine. Its search results often aren’t as useful as Google’s, but it’s advertised not to track you or your searches.

And if you don’t use Gmail for email, what should you use? “I am a huge advocate for paying for your email account,” wrote Russian journalist Yuri Litvinenko. “It’s not about turning off ads, but giving your email providers as little incentive to peek into your inbox as possible.” ProtonMail, for example, costs $4 a month and offers a host of privacy features, including anonymous sign-up and end-to-end encryption.

The ads you see online are based on the sites, searches, and Facebook posts that get your interest. Some rebels, therefore, throw a wrench into the machinery — by demonstrating phony interests.

“Every once in a while, I Google something completely nutty just to mess with their algorithm,” wrote Shaun Breitbart. “You’d be surprised what sort of coupons CVS prints for me on the bottom of my receipt. They are clearly confused about both my age and my gender.”

It’s “akin to radio jamming,” noted Frank Paiano. “It does make for some interesting browsing, as ads for items we searched for follow us around like puppy dogs (including on The New York Times, by the way.)”

Barry Joseph uses a similar tactic when registering for an account on a new website. “I often switch my gender (I am a cisgender male), which delivers ads less relevant to me — although I must admit, the bra advertising can be distracting.”

He notes that there are side effects. “My friends occasionally get gendered notifications about me, such as ‘Wish her a happy birthday.’” But even that is a plus, leading to “interesting conversations about gender norms and expectations (so killing two birds with one digital stone here).”

It’s perfectly legitimate, by the way, to enjoy seeing ads that align with your interests. You could argue that they’re actually more useful than irrelevant ones.

But millions of others are creeped out by the tracking that produces those targeted ads.

If you’re in that category, Ms. Winterton recommended Ghostery, a free plug-in for most web browsers that “blocks the trackers and lists them by category,” she wrote. “Some sites have an amazing number of trackers whose only purpose is to record your behavior (sometimes across multiple sites) and pitch better advertisements.”

Most public Wi-Fi networks — in hotels, airports, coffee shops, and so on — are eavesdroppable, even if they require a password to connect. Nearby patrons, using their phones or laptops, can easily see everything you’re sending or receiving — email and website contents, for example — using free “sniffer” programs.

You don’t have to worry about Signal, WhatsApp and Apple’s iMessage, all of which encrypt your messages before they even leave your phone or laptop. Websites whose addresses begin with https are also safe; they, too, encrypt their data before it’s sent to your browser (and vice versa).

(Caution: Even if the site’s address begins with https, the bad guys can still see which sites you visit — say, https://www.NoseHairBraiding.com. They just can’t see what you do there once you’re connected.)
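A simplified illustration of that split, in Python: everything before the first slash is, roughly speaking, what an eavesdropper can learn; the rest is protected by the encryption. (The URL is the article's own joke example; in practice the hostname also leaks through DNS and the TLS handshake.)

```python
from urllib.parse import urlsplit

url = "https://www.NoseHairBraiding.com/orders?item=deluxe-trimmer"
parts = urlsplit(url)
print("Visible to eavesdroppers:", parts.hostname)           # which site you visit
print("Encrypted by HTTPS:     ", parts.path, parts.query)   # what you do there
```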

The solution, as recommended by Lauren Taubman and others: a Virtual Private Network program. These phone and computer apps encrypt everything you send or receive — and, as a bonus, mask your location. Wirecutter’s favorite VPN, TunnelBear, is available for Windows, Mac, Android, and iOS. It’s free for up to 500 megabytes a month, or $60 a year for up to five devices.

“I don’t like Apple’s phones, their operating systems, or their looks,” wrote Aaron Soice, “but the one thing Apple gets right is valuing your data security. Purely in terms of data, Apple serves you; Google serves you to the sharks.”

Apple’s privacy website reveals many examples: You don’t sign into Apple Maps or Safari (Apple’s web browser), so your searches and trips aren’t linked to you. Safari’s “don’t track me” features are turned on as the factory setting. When you buy something with Apple Pay, Apple receives no information about the item, the store, or the price.

Apple can afford to tout these features, explained software developer Joel Potischman, because it’s a hardware company. “Its business model depends on us giving them our money. Google and Facebook make their money by selling our info to other people.”

Mr. Potischman never registers with a new website using the “Sign in with Facebook” or “Sign in with Google” shortcut buttons. “They allow those companies to track you on other sites,” he wrote. Instead, he registers the long way, with an email address and password.

(And here’s Apple again: The “Sign in with Apple” button, new and not yet incorporated by many websites, is designed to offer the same one-click convenience — but with a promise not to track or profile you.)

My call for submissions drew some tips from a surprising respondent: Frank Abagnale, the former teenage con artist who was the subject of the 2002 movie “Catch Me if You Can.”

After his prison time, he began working for the F.B.I., giving talks on scam protection, and writing books. He’s donating all earnings from his latest book, “Scam Me If You Can,” to the AARP, in support of its efforts to educate older Americans about internet rip-offs.

His advice: “You never want to tell Facebook where you were born and your date of birth. That’s 98 percent of someone stealing your identity! And don’t use a straight-on photo of yourself — like a passport photo, driver’s license, graduation photo — that someone can use on a fake ID.”

Mr. Abagnale also notes that you should avoid sharing your personal data offline, too. “We give a lot of information away, not just on social media, but places we go where people automatically ask us all of these questions. ‘What magazines do you read?’ ‘What’s your job?’ ‘Do you earn between this and that amount of money?’”

Why answer if you don’t have to?

A few more suggestions:

  • “Create a different email address for every service you use,” wrote Matt McHenry. “Then you can tell which one has shared your info, and create filters to silence them if necessary.” (A sketch of one low-effort way to do this appears after this list.)
  • “Apps like Privacy and Token Virtual generate a disposable credit-card number with each purchase — so in case of a breach, your actual card isn’t compromised,” suggested Juan Garrido. (Bill Barnes agreed, pointing out the similar Shopsafe service offered with Bank of America’s Visa cards. “The number is dollar and time limited.”)
  • “Your advertisers won’t like to see this, so perhaps you won’t print it,” predicted Betsy Peto, “but I avoid using apps on my cellphone as much as possible. Instead, I go to the associated website in my phone’s browser: for example, www.dailybeast.com. My data is still tracked there, but not as much as it would be by the app.”
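For mail providers that support plus-addressing (Gmail does; many others don't), Mr. McHenry's per-service trick doesn't even require creating new accounts. A minimal sketch, with a hypothetical address:

```python
def service_alias(address, service):
    """Gmail-style plus addressing: mail to user+tag@domain still
    reaches user@domain, but the tag shows who shared your address."""
    user, domain = address.split("@")
    tag = "".join(c for c in service.lower() if c.isalnum())
    return f"{user}+{tag}@{domain}"

print(service_alias("jane.doe@gmail.com", "Daily Beast"))
# jane.doe+dailybeast@gmail.com
```

If spam starts arriving at a tagged address, you know exactly which service shared it, and one filter silences that tag.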

There is some good news: Tech companies are beginning to feel some pressure.

In 2016, the European Union passed the General Data Protection Regulation (G.D.P.R.), which took effect in 2018 and requires companies to explain what data they’re collecting — and to offer the option to edit or delete it. China, India, Japan, Brazil, South Korea, and Thailand have passed, or are considering, similar laws, and California’s Consumer Privacy Act takes effect on January 1.

In the meantime, enjoy these suggestions, as well as this bonus tip from privacy researcher Jamie Winterton:

“Oh yeah — and don’t use Facebook.”

For the next “Crowdwise”: We all know that it’s unclassy and cruel to break up with a romantic partner in a text message — or, worse, a tweet. (Well, we used to know that.) Yet requesting an unusual meeting at a sidewalk cafe might strike your partner as distressingly ominous.

[Source: This article was published in nytimes.com By David Pogue - Uploaded by the Association Member: Issac Avila]

Categorized in Search Engine

The Internet has made researching subjects deceptively effortless for students -- or so it may seem to them at first. Truth is, students who haven't been taught the skills to conduct good research will invariably come up short.

That's part of the argument made by Wheaton College Professor Alan Jacobs in The Atlantic, who says the ease of search and user interface of fee-based databases have failed to keep up with those of free search engines. In combination with the well-documented gaps in students’ search skills, he suggests that this creates a perfect storm for the abandonment of scholarly databases in favor of search engines. He concludes: “Maybe our greater emphasis shouldn’t be on training users to work with bad search tools, but to improve the search tools.”

His article responds to a larger, ongoing conversation about whether the ubiquity of Web search is good or bad for serious research. This false dichotomy short-circuits the real question: “What do students really need to know about an online search to do it well?” As long as we’re not talking about this question, we’re essentially ignoring the subtleties of Web search rather than teaching students how to do it expertly. So it’s not surprising that they don’t know how to come up with quality results. Regardless of the vehicle--fee databases or free search engines--we owe it to our students to teach them to search well.

So what are the hallmarks of a good online search education?

SKILL-BUILDING CURRICULUM. Search competency is a form of literacy, like learning a language or subject. Like any literacy, it requires having discrete skills as well as accumulating experience in how and when to use them. But this kind of intuition can't be taught in a day or even in a unit – it has to be built up through exercise and with the guidance of instructors while students take on research challenges. For example, during one search session, teachers can ask students to reflect on why they chose to click on one link over another. Another time, when using the Web together as a class, teachers can demonstrate how to look for a definition of an unfamiliar word. Thinking aloud when you search helps, as well.

A THOROUGH, MULTI-STEP APPROACH. Research is not a one-step process. It has distinct phases, each with its own requirements. The first stage is inquiry, the free exploration of a broad topic to discover an interesting avenue for further research, based on the student's curiosity. Web search, with its rich cross-linking and the simplicity of renewing a search with a single click, is ideally suited to this first open-ended stage. When students move on to a literature review, they seek the key points of authority on their topic, and pursue and identify the range of theories and perspectives on their subject. Bibliographies, blog posts, and various traditional and new sources help here. Finally, with evidence-gathering, students look for both primary- and secondary-source materials that build the evidence for new conclusions. The Web actually makes access to many -- but not all -- types of primary sources substantially easier than it has been in the past, and knowing which are available online and which must be sought in other collections is critical to students’ success. For example, a high school student studying Mohandas Gandhi may do background reading in Wikipedia and discover that Gandhi's worldview was influenced by Leo Tolstoy; use scholarly secondary sources to identify key analyses of their relationship; and then delve into online or print books to read their actual correspondence and draw an independent conclusion. At each step of the way, what the Web has to offer changes subtly.

TOOLS FOR UNDERSTANDING SOURCES. Some educators take on this difficult topic, but it's often framed as a simple black-and-white approach: “These types of sources are good. These types of sources are bad.” Such lessons often reject newer formats, such as blogs and wikis, and privilege older formats, such as books and newspaper articles. In truth, there are good and bad specimens of each, and each has its appropriate uses. What students need to be competent at is identifying the kind of source they're finding, decoding what types of evidence it can appropriately provide, and making an educated choice about whether it matches their task.

DEVELOPING THE SKILLS TO PREDICT, ASSESS, PROBLEM-SOLVE, AND ITERATE. It's important for students to ask themselves early on in their search, “When I type in these words, what do I expect to see in my results?” and then evaluate whether the results that appear match those expectations. Identifying problems or patterns in results is one of the most important skills educators can help students develop, along with evaluating credibility. When students understand that doing research requires more than a single search and a single result, they learn to leverage the information they find to construct tighter or deeper searches. Say a student learns that workers coming from other countries may send some of their earnings back to family members. An empowered searcher may look for information on [immigrants send money home], and notice that the term remittances appears in many results. An unskilled searcher would skip over words he doesn’t recognize, but the educated student can confirm the definition of remittance, then do another search, [remittances immigrants], which brings back more scholarly results.

TECHNICAL SKILLS FOR ADVANCED SEARCH. Knowing what tools and filters are available and how they work allows students to find what they seek, such as searching by color, domain, filetype, or date. Innovations in technology also provide opportunities to visualize data in new ways. But most fundamentally, good researchers remember that it takes a variety of sources to carry out scholarly research. They have the technical skills to access Web pages, but also books, journal articles, and people as they move through their research process.
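As a concrete illustration of those filters, the sketch below composes a query using two widely supported operators, site: and filetype:. Operator support varies by engine, and the domain shown is only an example, tied to the Gandhi-Tolstoy exercise above.

```python
# Compose an advanced query from a base phrase plus operator filters.
terms = "Gandhi Tolstoy correspondence"
filters = {"site": "loc.gov", "filetype": "pdf"}
query = terms + " " + " ".join(f"{k}:{v}" for k, v in filters.items())
print(query)  # Gandhi Tolstoy correspondence site:loc.gov filetype:pdf
```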

Centuries ago, the teacher Socrates famously argued against the idea that the written word could be used to transmit knowledge. This has been disproved over the years, as authors have developed conventions for communicating through the written word and educators have effectively taught students to extract that knowledge and make it their own. To prepare our students for the future, it's time for another such transition in the way we educate. When we don’t teach students how to manage their online research effectively, we create a self-perpetuating cycle of poor-quality results. To break that cycle, educators can engage students in an ongoing conversation about how to carry out excellent research online. In the long term, students with stronger critical thinking skills will be more effective at school, and in their lives.

[Source: This article was published in kqed.org By Tasha Bergson-Michelson - Uploaded by the Association Member: Patrick Moore]

Categorized in Search Engine

Overview | Do Internet search engines point us to the information that we need or confuse us with irrelevant or questionable information? How can Internet users improve their searches to find reliable information? What are some ways to perform effective searches? In this lesson, students conduct Web searches on open-ended questions and draw on their experiences to develop guides to searching effectively and finding reliable information online.

Materials | Computers with Internet access

Warm-Up | Invite students to share anecdotes about times when they used an Internet search engine to look for information and found something they were not expecting, or when they could not find what they were looking for.

After several students have shared, ask for a show of hands of students who have experienced frustration using an Internet search engine. Then ask: How often do you use search engines? Which ones do you use most? Why? What are the most common problems you face when searching? Do you consider yourself a skilled searcher? Do you have any search strategies? Do you search the Internet more for personal reasons and entertainment, or more for school? Do you believe that improving your Internet searching skills will benefit you academically? Socially? Personally?

Give students the following search assignment, from The New York Times article “Helping Children Find What They Need on the Internet”: “Which day [of the week] will the vice president’s birthday fall on next year?” (Alternatively, give students a multistep question that relates to your subject matter. For example, a geography teacher might ask “How many miles away is Shanghai?”) Tell students to type this question into Google, Bing or any other favorite search engine, and have them share the top results in real time. Did the answer appear? If not, what’s the next step to take to get this question answered?

Ask: What information do you need to be able to answer the question? Ideas might include the name of the vice president, the date of his birthday, and a copy of next year’s calendar. Have them try to find this information and keep working until they can answer the question. (You may want to add a competitive component to this activity, rewarding the student who finds the right answer fastest.)
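Once students have assembled those pieces, the final step is pure calendar arithmetic, and it can be checked mechanically. A minimal sketch in Python, using a hypothetical November 20 birthday for illustration:

```python
from datetime import date

# Hypothetical birthday; substitute whatever the search turns up.
next_year = date.today().year + 1
birthday = date(next_year, 11, 20)
print(birthday.strftime("%A"))  # day of the week, e.g. "Thursday"
```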

When one or more students have found the answer, have one student take the class through the steps he or she took to find the answer; if possible, do this on a screen so that everyone can watch. Along the way, ask probing questions. What keywords did you type into the search engine? Why did you choose these words? Which results did you click on? Why did you choose those sources over the others on the page? How many steps did it take? Are you sure the sources are reliable and that the answers are correct? How can you tell? How would you verify the information? If time permits, play around by using different keywords and clicking on different results, to see how the search for the answer to the question changes.

To end this activity, ask: What did you notice about the search to find the answer to this question? Did this exercise help you understand something new about Internet searching? If so, what?

When it comes to children, search engines have long focused on filtering out explicit material from results. But now, because increasing numbers of children are using search as a starting point for homework, exploration or entertainment, more engineers are looking to children for guidance on how to improve their tools.

Search engines are typically developed to be easy for everyone to use. Google, for example, uses the Arial typeface because it considers it more legible than other typefaces. But advocates for children and researchers say that more can be done technologically to make it easier for young people to retrieve information. What is at stake, they say, are the means to succeed in a new digital age.

Read the article with your class, using the questions below.

Questions | For discussion and reading comprehension:

  1. What problems does the article mention that children run into when they use search engines?
  2. What suggestions have been offered for how search engines can improve their product to lessen children’s problems searching?
  3. Do you search using keywords or questions? How does the article characterize these two types of searching?
  4. Have you tried using images or videos to search? How does the article characterize this type of searching?
  5. What advice would you give to Internet search engine developers for how they should improve their product? Do you think any of the improvements mentioned in the article are particularly promising? Why?

Activity | Before class, ask teachers of several different subjects for questions that they have asked or will ask students to research on the Internet. Alternatively, collect from students their own research questions – for another class or for a personal project, like I-Search. Be sure that the questions are sufficiently open-ended so that they cannot be answered definitively with a quick, simple search – they might contain an element of opinion or interpretation, rather than just be a matter of simple fact.

Put the class into pairs, and provide each pair with the following multipart task:

  • Seek to answer your assigned question by conducting an Internet search.
  • You must use different search engines and strategies, and keep track of how the search “goes” using the various resources and methods.
  • Once you find an answer that you are confident in, do another search to verify the information.
  • When you are finished, evaluate the reliability of all of the Internet resources that you used.
  • Prepare to tell the story of your search, including what worked and what didn’t, anything surprising that happened, things that would be good for other searchers to know, “lessons learned,” etc.

Provide pairs with the following resources to research their assigned topics. Let them know that these are starting points and that they may use additional resources.

Search Engines, Metasearch Engines, and Subject Directories:

Choosing Effective Search Words:

Evaluating Source Reliability:

When pairs have completed their research, bring the class together and invite pairs to share their stories. Then tell them that they will use their notes to create a page for a class guide, in booklet or wiki form, on how to use Internet search engines effectively for research, to be made available to the school community to help other students. As much as possible, the tips and guidance in the guide should be illustrated with the students’ stories and examples.

Tell students that their booklet/wiki entries should or might include the following, among other types of guidance and insight:

  • Ways and examples of using keywords and Boolean logic effectively (see the sketch after this list).
  • Ineffective examples of keyword searches that result in too much, too little or useless information.
  • Examples of how to sequence searches and why.
  • Sites they find that answer their question and how they can tell whether these pages are reliable.
  • Any information they found that was questionable or incorrect, where they found it, and how they discovered that it was wrong.
  • Why it is important to scroll past the top result to pages listed farther down the page or on a later page in order to find complete answers to the question.
  • How using different search engines yielded different results.
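To seed the Boolean-logic page of the guide, a few query variants for the same information need can be generated and compared. This is an illustrative sketch; exact syntax varies by search engine, though quotes, OR, and the minus sign are widely supported.

```python
# Boolean-style query variants for one information need: jaguar the animal.
queries = [
    '"jaguar habitat" rainforest',      # exact phrase narrows the topic
    "jaguar OR panther habitat",        # OR broadens to synonyms
    "jaguar habitat -car -football",    # minus removes other senses
]
for q in queries:
    print(q)
```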

In addition to the handbook or wiki, you might also have students make their own videos, à la the Google ad “Parisian Love,” chronicling their search.

Going Further | Students read the New York Times Magazine article “The Google Alphabet,” by Virginia Heffernan, who writes the column “The Medium,” and keep a tally of the number of advertisements and commercial sites that they see while doing schoolwork on the Internet for one or two days.

Then hold a class discussion on advertising and commercial interests on the Internet. If students are using the Internet to complete their homework, are schools requiring students to expose themselves to corporate advertisements in order to succeed academically? Do any ethical questions arise around the prevalence of corporate advertising in Web searching for academic purposes?

Alternatively or additionally, students develop ideas for the search engines of the future, like ways to use and find images, audio and video, rank results and so on, and “pitch” their ideas to classmates acting as search engine developers.

And for fun, students might try to come up with “Googlewhacks.”

Standards | From McREL, for Grades 6-12:

Technology
2. Knows the characteristics and uses of computer software programs.
3. Understands the relationships among science, technology, society, and the individual.

Language Arts
1. Demonstrates competence in the general skills and strategies of the writing process.
4. Gathers and uses information for research purposes.
7. Uses reading skills and strategies to understand and interpret a variety of informational texts.

Life Work
2. Uses various information sources, including those of a technical nature, to accomplish specific tasks.

[Source: This article was published in nytimes.com By Sarah Kavanagh And Holly Epstein Ojalvo - Uploaded by the Association Member: Rene Meyer]

Categorized in Search Engine

[Source: This article was published in observer.com By Harmon Leon - Uploaded by the Association Member: Paul L.]

On HBO’s Silicon Valley, the Pied Piper crew’s mission is to create a decentralized internet that cuts out intermediaries like Facebook, Google, and their fictional rival, Hooli. Surely a move that would make Hooli’s megalomaniac founder Gavin Belson (also fictional) furious.

In theory, no one owns the internet. No. Not Mark Zuckerberg, not Banksy, not annoying YouTube sensation Jake Paul either. No—none of these people own the internet because no one actually owns the internet.

But in practice, a small number of companies really control how we use the internet. Sure, you can pretty much publish whatever you want and slap up a website almost instantaneously, but without Google, good luck getting folks to find your site. More than 90 percent of general web searches are handled by the singular humongous search engine—Google.

If things go sour with you and Google, the search giant could make your life very difficult, almost making it appear like you’ve been washed off the entire internet planet. Google has positioned itself as pretty much the only game in town.

Colin Pape had that problem. He’s the founder of Presearch, a decentralized search engine powered by a community of roughly 1.1 million users. Presearch uses cryptocurrency tokens as an incentive to decentralize search. The origin story: before Pape started Presearch, Google tried to squash his business; well, not exactly squash it, but simply erase it from searches.

Let’s backtrack.

In 2008, Pape founded a company called ShopCity.com. The premise was to support communities and get their local businesses online, then spread that concept to other communities in a franchise-like model. In 2011, Pape’s company launched a local version in Google’s backyard of Mountain View, California.

End of story, right? No.

“We woke up one morning in July to find out that Google had demoted almost all of our sites onto page eight of the search results,” Pape explained. Pape and his crew thought it was some sort of mistake; still, the demotion of their sites was seriously hurting the businesses they represented, as well as their company. But something seemed fishy.

Pape had read stories of businesses that had essentially been shut down by Google—or suffered serious consequences such as layoffs and bankruptcy—due to the jockeying of the search engine.

“Picture yourself as a startup that launches a pilot project in Google’s hometown,” said Pape, “and 12 months later, they launch a ‘Get Your City Online’ campaign with chambers of commerce, and then they block your sites. What would you think?”

It was hard for Pape not to assume his company had been targeted because it was easy enough for Google to simply take down sites from search results.

“We realized just how much market power Google had,” Pape recalled. “And how their lack of transparency and responsiveness was absolutely dangerous to everyone who relies on the internet to connect with their customers and community.”

Google’s current search engine model makes us passive consumers who are fed search results from a black box system into which none of us have any insight. Chris Jackson/Getty Images

Fortunately, Pape’s company connected with a lawyer leading a Federal Trade Commission (FTC) investigation into Google’s monopolistic practices. Through the press, they put pressure on Google to resolve its search issues.

This was the genesis of Presearch, ‘the Switzerland of Search,’ a resource dedicated to a more open internet and a level playing field.

“The vision for Presearch is to build a framework that enables many different groups to build their own search engine with curated information and be rewarded for driving usage and improving the platform,” Pape told Observer.

But why is this so important?

“Because search is how we access the amazing resources on the web,” Pape continued. “It’s how we find things that we don’t already know about. It’s an incredibly powerful position for a single entity [Google] to occupy, as it has the power to shape perceptions, shift spending and literally make or break entire economies and political campaigns, and to determine what and how we think about the world.”

You have to realize that nothing is truly free.

Sure, we use Google for everything from looking for a local pet groomer to finding Tom Arnold’s IMDB page. (There are a few other things in between.) Google isn’t allowing us to search out of the goodness of its heart. When we use Google, we’re essentially partaking in a big market research project, in which our information is being tracked, analyzed and commoditized. Basically, our profiles and search results are sold to the highest bidders. We are the product—built upon our usage. Have you taken the time to read Google’s lengthy terms of service agreement? I doubt it.

How else is Sergey Brin going to pay for his new heliport or pet llama?

Stupid free Google.

Google’s current model makes us passive consumers who are fed search results from a black box system into which none of us have any insight. Plus, all of those searches are stored, so good luck with any future political career if a hacker happens to get a hold of that information.

Presearch’s idea is to let the community look under the hood and actively participate in the system, using cryptocurrency to align participants’ incentives and create a ground-up, community-driven alternative to Google’s monopoly.

“Every time you search, you receive a fraction of a PRE token, which is our cryptocurrency,” explained Pape. “Active community members can also receive bonuses for helping to improve the platform, and everyone who refers a new user can earn up to 25 bonus PRE.”

Tokens can be swapped for other cryptocurrencies, such as Bitcoin, used to buy advertising, sold to other advertisers or spent on merchandise via Presearch’s online platform.

Presearch’s ethos is to let users personalize the search engine rather than allowing analytics to be gamed against them. Users can specify their preferences to access the information they want, rather than being enveloped in filter bubbles that reinforce their prejudices and bad behaviors simply to make them click on more ads.


“We want to empower people rather than control them,” Pape said. “The way to do that is to give them choices and make it easy for them to ‘change the channel,’ so to speak, if the program they’re being served isn’t resonating with them.”

Another thing to fear about Google, aside from the search engine being turned on its head and used as a surveillance tool in a not-so-distant dystopian future, is an idea mentioned in Jon Ronson’s book, So You’ve Been Publicly Shamed. People’s lives have been ruined by Google search results that live on forever after false scandalous accusations.

How will Presearch safeguard us against this?

“We are looking at a potential model where people could stake their tokens to upvote or downvote results, and then enable community members to vote on those votes,” said Pape. “This would enable mechanisms to identify false information and provide penalties for those who promote it. This is definitely a tricky subject that we will involve the community in developing policies for.”
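Mechanically, such a stake-based scheme can be sketched in a few lines. The toy model below is an illustration of the general idea Pape describes, not Presearch's actual implementation: voters put tokens behind a vote on a result, and a later community ruling can reverse the vote and confiscate the stake.

```python
from collections import defaultdict

class ResultVoting:
    """Toy stake-weighted voting over search results."""

    def __init__(self):
        self.scores = defaultdict(float)  # url -> net staked score
        self.stakes = {}                  # (voter, url) -> signed stake

    def vote(self, voter, url, stake, up=True):
        """Stake tokens for (or against) a result's ranking."""
        signed = stake if up else -stake
        self.stakes[(voter, url)] = signed
        self.scores[url] += signed

    def slash(self, voter, url):
        """Community ruled the vote dishonest: undo it, forfeit the stake."""
        signed = self.stakes.pop((voter, url), 0)
        self.scores[url] -= signed
        return abs(signed)  # tokens confiscated

booth = ResultVoting()
booth.vote("alice", "https://example.com/story", stake=10, up=True)
booth.vote("bob", "https://example.com/story", stake=4, up=False)
print(booth.scores["https://example.com/story"])  # 6.0
print(booth.slash("alice", "https://example.com/story"))  # 10 forfeited
```

The hard part, as Pape concedes, is the governance layer that decides which votes get slashed, which is exactly what he says the community will help design.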

Pape’s vision is very much aligned with Pied Piper’s on HBO’s Silicon Valley.

“It is definitely pretty accurate… a little uncanny, actually,” Pape said after his staff made him watch the latest season. “It was easy to see where the show drew its inspiration from.”

But truth is stranger than fiction. “The problems a decentralized internet is solving are real, and will become more and more apparent as the Big Tech companies continue to clamp down on the original free and open internet in favor of walled gardens and proprietary protocols,” he explained. “Hopefully the real decentralized web will be the liberating success that so many of us envision.”

Obviously an alternative to Google’s search monopoly is a good thing. And Pape feels that breaking up Google might help in the short term, but “introducing government control is just that—introducing more control,” Pape said. “We would rather offer a free market solution that enables people to make their own choices, which provides alignment of incentives and communities to create true alternatives to the current dominant forces.”

Presearch may or may not be the ultimate solution, but it’s a step in the right direction.

Categorized in Search Engine