
An EPFL laboratory has developed DataShare Network, a decentralized search engine paired with a secure messaging system that allows investigative journalists to exchange information securely and anonymously. A scientific article on this subject will be presented at the Usenix Security Symposium, which will be held online from August 12 to 14.

The International Consortium of Investigative Journalists (ICIJ), which has over 200 members in 70 countries, has broken a number of important stories, particularly ones that expose medical fraud and tax evasion. One of its most famous investigations was the Panama Papers, a trove of millions of documents that revealed the existence of several hundred thousand shell companies whose owners included cultural figures, politicians, businesspeople and sports personalities. An investigation of this size is only possible through international cooperation between journalists. When sharing such sensitive files, however, a leak can jeopardize not only the story’s publication, but also the safety of the journalists and sources involved. At the ICIJ’s behest, EPFL’s Security and Privacy Engineering (SPRING) Lab recently developed DataShare Network, a fully anonymous, decentralized system for searching and exchanging information. A paper about it will be presented during the Usenix Security Symposium, a worldwide reference for specialists, which will be held online from 12 to 14 August.

Anonymity at every stage

Anonymity is the backbone of the system. Users can search and exchange information without revealing their identity, or the content of their queries, either to colleagues or to the ICIJ. The Consortium ensures that the system is running properly but remains unaware of any information exchange. It issues virtual secure tokens that journalists can attach to their messages and documents to prove to others that they are Consortium members. A centralized file management system would be too conspicuous a target for hackers; since the ICIJ does not have servers in various jurisdictions, documents are typically stored on its members’ servers or computers. Users provide only the elements that enable others to link to their investigation.
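The paper spells out the cryptography in full; as a loose illustration of the membership-token idea only, the sketch below uses a plain Ed25519 signature over a throwaway pseudonym (via Python's cryptography package). This is deliberately simpler and weaker than the anonymous-credential scheme the SPRING Lab actually built, since here the issuer sees the pseudonym it signs, but it shows how a recipient can check that a sender is a Consortium member without learning who the sender is.

    # Illustrative sketch only, not the DataShare Network protocol: a pseudonymous
    # membership token built from an ordinary Ed25519 signature.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    consortium_key = Ed25519PrivateKey.generate()   # held by the ICIJ
    consortium_pub = consortium_key.public_key()    # known to every member

    # A journalist presents a random pseudonym and receives a token:
    # the Consortium's signature over that pseudonym.
    pseudonym = os.urandom(16)
    token = consortium_key.sign(pseudonym)

    # Another member verifies the token against the Consortium's public key,
    # learning only that the sender is a member, not who they are.
    try:
        consortium_pub.verify(token, pseudonym)
        print("sender proved Consortium membership")
    except InvalidSignature:
        print("token rejected")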

[Source: This article was published in miragenews.com - Uploaded by the Association Member: Edna Thomas]

Categorized in Search Engine

Ohio and Washington emerged as new hotspots for internet crime in 2019, though California continues to lead with the largest online fraud victim losses and number of victims, according to research from the Center for Forensic Accounting in Florida Atlantic University's College of Business.

California online victim losses increased 27 percent from 2018 to $573.6 million in 2019. The number of victims in California increased by 2 percent to 50,000.

Florida ranked second in victim losses ($293 million) and also posted the largest annual increase in both victim losses and number of victims over the past five years. The average loss per victim in the Sunshine State grew from $4,700 in 2015 to $10,800 in 2019, while the average victim loss jumped 46 percent from 2018.

When victim losses are adjusted for population, Ohio had the largest loss rate in 2019 at $22.6 million per 1 million in population, rising sharply from $8.4 million in 2018. Washington had the highest victim rate at 1,720 per 1 million in population.

Ohio and Washington replaced North Carolina and Virginia, which ranked among the top states in 2018.

The other top states in the latest report were New York and Texas. The report is based on statistics from the FBI, which collects data from victims reporting alleged internet crimes.

"Fraudsters are getting more efficient at going after where the money is," said Michael Crain, DBA, director of FAU's Center for Forensic Accounting. "There doesn't seem to be any mitigation of the growing trend of online crime. The first line of defense from online fraud is not a technology solution or even law enforcement; it's user awareness. From a policy perspective, governments and other institutions should get the word out more so that individuals and organizations are more sensitive to online threats."

Crimes such as extortion, government impersonation and spoofing became more noticeable last year for their increases in victim losses and number of victims, according to the report. Business email compromise/email account compromise (BEC/EAC) remains the top internet crime in 2019 with reported losses of $1.8 billion, followed by confidence fraud/romance ($475 million) and spoofing ($300 million) schemes.

Spoofing, the falsifying of email contact information to make it appear to have originated from a trustworthy source, was the crime with the largest percentage increase in victim losses (330 percent) among the top states during 2019.

BEC/EAC, in which business or personal email accounts are hacked or spoofed to request wire transfers, accounted for 30 percent to 90 percent of all victim losses last year in the top states and has grown significantly since 2015.

In confidence fraud/romance, an online swindler pretends to be in a friendly, romantic or family relationship to win the trust of the victim to obtain money or possessions.

For online investment fraud, in which scammers often lure seniors with promises of high returns, California leads the top states with $37.8 million in victim losses, but Florida's population-adjusted loss rate of $1.1 million makes it the state where victims are likely to lose the most money.

A major problem is that most internet crime appears to originate outside the United States and the jurisdiction of U.S. authorities.

"Foreign sources of internet crimes on U.S. residents and businesses make it challenging for whether  levels can be reduced as the public becomes more connected and dependent on the internet," the report states.

[Source: This article was published in phys.org By Paul - Uploaded by the Association Member: James Gill]

Categorized in Online Research

“Kilos." A new dark web search engine that has quickly become the “Google” for cybercriminal marketplaces, forums and illicit products. Why is this new cybercriminal engine quickly becoming popular and what are the threats that security researchers and operations team face with Kilos? 

After the recent indictment of Larry Harmon, alleged operator of the Bitcoin tumbling service Helix and darknet search engine Grams, Digital Shadows decided to profile Kilos. According to the firm, "Kilos" emerged from the cybercriminal underground in November 2019 and has become one of the most sophisticated dark web search engines to date, having indexed more platforms and added more search functionalities than other search engines while introducing updates, new features, and services that ensure more security and anonymity for its users. Kilos also maintains a stronger human element not previously seen on other prominent dark web-based search engines, says a new Digital Shadows blog.

"Kilos possibly evolved from the well-known dark web search engine “Grams”, which ceased operations in 2017. Both Grams and Kilos are dark web search engines that clearly imitate the well-known design and functionalities of the Google search engine and, in a clever play on words, both follow a naming convention inspired by units of measure," writes the firm. 

Grams was launched in early April 2014 and back in the day, says Digital Shadows, "Grams was a revolutionary tool that allowed users to explore the darker corners of the Internet with relative ease. However, its index was somewhat limited. According to its administrator—whom Wired interviewed anonymously in April 2014—the team behind Grams did not “have the capabilities yet to spider all of the darknet” and had instead resolved to work on “making an automated site submitter for people to submit their sites and get listed” on the search engine."

Now, Kilos enters the cybercriminal sphere. "Though it can’t be conclusively confirmed whether Kilos has pivoted directly from Grams or whether the same administrator is behind both projects, the initial similarities are uncanny. The same popular search engine-like aesthetics have been applied and the naming convention has remained," says the blog. 

Why is Kilos more threatening than Grams? It is allowing users to perform even more specific searches from a larger index than Grams did, enabling users to search across six of the top dark web marketplaces for vendors, listings and reviews. These marketplaces include CannaHome, Cannazon, Cryptonia, Empire, Samsara and Versus.

 According to Digital Shadows, Kilos has already indexed the following from a total of seven marketplaces and six forums:

  • 553,994 forum posts
  • 68,860 listings
  • 2,844 vendors
  • 248,159 reviews

Since the site's creation in November 2019, the Digital Shadows team writes, the unprecedented amount of dark web content indexed by Kilos appears to increase by the day, providing invaluable insight into the contents, products, and vendors of current prominent cybercriminal markets and forums. This adds "a human element to the site not previously seen on dark web-based search engines, by allowing direct communication between the administrator and the users, and also between the users themselves," claims the blog.

New updates to the site include:

  1. A new type of CAPTCHA that prompts users to rank randomized product and vendor feedback by their level of positive or negative sentiment for added security. 
  2. A new Bitcoin mixer service called “Krumble”, which is now available in Beta mode, to ensure user anonymity compared with other Bitcoin mixers.
  3. Added features that allow for more direct communication, both between the users themselves and between users and the administrator. 
  4. A live chat function to allow users to discuss a variety of topics with each other. 

Digital Shadows warns that Kilos’ growing index, new features and additional services combined could allow Kilos to continue to grow and position itself as a natural first stop for an increasingly large user base - further increasing the amount of data readily available for threat actors and security researchers alike.

Harrison Van Riper, Threat Research, Team Lead at Digital Shadows, tells Security Magazine that, "Dark web search engines bring more visibility to criminal platforms which, in turn, direct more traffic and lead to more sales from marketplaces or forums, which could increase the risk to organizations. Criminals looking to find sensitive documents or credentials for sale on the dark web can use Kilos to search across different marketplaces to find their goods, increasing the likelihood of account takeovers or the impact of a data leakage, for example."

Van Riper notes that search engines have "transformed the way everyday people use the internet when they were introduced, giving users freedom to search for the exact information they were looking for. That same innovation translates to cybercriminals as well, a topic Digital Shadows heavily covered in our blog detailing the similarities between the real world and cybercriminal underground https://www.digitalshadows.com/blog-and-research/how-the-cybercriminal-underground-mirrors-the-real-world.  These sites were made intentionally difficult to find unless you already had an idea of where you were going to begin with, however, a search engine with the ability to look across multiple sources could give more malicious actors opportunity to conduct more attacks," he says. 

[Source: This article was published in securitymagazine.com - Uploaded by the Association Member: Jennifer Levin]

Categorized in Search Engine

In the popular consciousness, the dark web is mostly known as the meeting place of terrorists and extortionist hackers. While there are other, less malicious players afoot, corporations and organizations need to know the real dangers and how to protect against them.

Dark. Mysterious. A den of thieves. A front for freedom fighters. It is many things for many different kinds of people, all of whom by nature or necessity find themselves driven to the fringes of digital society. It is the dark web. 

There’s still plenty of misinformation floating around out there about this obscure corner of the internet. The average cyber citizen is unaware of its very existence. Even for those intimately familiar with the dark web, accurate predictions as to its behavior and future effect on broader internet culture have remained elusive; predictions foretelling its mainstreaming, for instance, seem less and less likely with each passing year. The problem is, this is one case where ignorance isn’t always bliss. Dark web relevance to the general population is becoming more painfully apparent with every breaking news story about yet another data breach.

The amount of personal information accessible via a web connection these days is staggering. Names, addresses, and phone numbers are only the tip of the iceberg. Credit card information, marital status, browsing histories, purchase histories, medical histories (a favorite target of hackers these days) and so much more—every bit and byte of this data is at risk of theft, ransom, exposure and exploitation. A person’s entire life can be up for sale on the dark web without them being any the wiser. That is until their credit card comes up overdrawn, or worse, a mysterious and threatening email graces their inbox threatening to expose some very private information.

But despite the fact that it is the individual being exposed, the ones who truly have to worry are those entities entrusted with storing the individual data of their millions of users. The dark web is a potential nightmare for banks, corporations, government bureaus, health care providers—pretty much any entity with large databases storing sensitive (i.e., valuable) information. Many of these entities are waking up to the dangers, some rudely so, and are too late to avoid paying out a hefty ransom or fine depending on how they handle the situation. Whatever the case, the true cost is often to the reputation of the entity itself, and it is sometimes unrecoverable.

It should be obvious at this point that the dark web cannot be ignored. The first step to taking it seriously is to understand what it is and where it came from.

The landscape

Perhaps the most common misconception regarding the dark web begins with the internet itself. Contrary to popular sentiment, Google does not know all. In fact, it is not even close. Sundar Pichai and his legions of Googlers only index pages they can access, which by current estimates hover in and around the 60 billion mark. Sounds like a lot, but in reality this is only the surface web, a paltry 0.2% to 0.25% of digital space.

Home for the bulk of our data, the other 99.75% is known as the deep web. Research on the deep web's size is somewhat dated, but the conditions those findings were based on suggest the size disparity has only grown, if it has changed at all.

Unlike the surface web, which is made up of all networked information discoverable via public internet browsing, the deep web is all networked information blocked and hidden from public browsing.

Take Amazon as an example. It has its product pages, curated specifically to customer browsing habits and seemingly eerily aware of conversations people have had around their Alexa—this is the surface web. But powering this streamlined customer experience are databases storing details for hundreds of millions of customers, including personally identifiable information (PII), credit card and billing information, purchase history, and the like. Then there are databases for the millions of vendors, warehouse databases, logistical databases, corporate intranet, and so on. All in all you are looking at a foundational data well some 400 to 500 times larger than the visible surface.

The dark web is technically a part of this deep web rubric, meeting the criteria of being hidden from indexing by standard search engines. And although microscopically small in comparison it can have an outsized effect on the overall superstructure, sort of like a virus or a cure, depending on how it is used. In the Amazon example, where the dark web fits in is that a portion of its members would like nothing better than to access its deep web data for any number of nefarious purposes, including sale, ransom, or just to sow a bit of plain old anarchic chaos.

Such activities do not interest all dark web users, of course, with many seeing anonymity as an opportunity to fight off corruption rather than be a part of it. The dark web is a complex place, and to fully appreciate this shadow war of villains and vigilantes, how it can affect millions of people every now and then when it spills over into the light, first you have to understand its origins.

Breaking down the numbers

Anonymity is not without its challenges when it comes to mapping out hard figures. The key is to focus on commerce, a clear and reliable demarcating line. For the most part, those only seeking anonymity can stick to hidden chat rooms and the like. However, if a user is looking to engage in illegal activity, in most instances they’re going to have to pay for it. Several past studies and more recent work provide workable insight when extrapolating along this logic path.

First, a 2013 study analyzing 2,618 services being offered found over 44% to involve illicit activity. That number jumped to 57% in a follow up study conducted in 2016. These studies alone project an accelerating upward trend. Short of a more recent comprehensive study, the tried and true investigative maxim of “follow the money” should suffice in convincing the rational mind that this number is only going to grow dramatically. Especially when comparing the $250 million in bitcoin spent in 2012 on the dark web with the projected $1 billion mark for 2019.

Origins and operation

It was the invention of none other than the U.S. military—the Navy, of all branches, if you’d believe it. Seeking an easy way for spy networks to communicate without having to lug heavy encryption equipment to remote and hostile corners of the globe, the U.S. Naval Research Laboratory (NRL) came up with an ingenious solution. Ditching the equipment, it created an overlay network of unique address protocols and a convoluted routing system, effectively masking both the source and destination of all its traffic. By forgoing the traditional DNS system and relying instead on software-specific browsers like Tor and Freenet and communication programs like I2P, among others, the network rendered dark web traffic invisible to traditional crawlers. Furthermore, with these browsers routing traffic through multiple user stations around the world, accurate tracking became extremely difficult. This solution afforded both flexibility and mobility for quick and easy insertion and extraction of human assets while securing sensitive communication to and from the field.

There was only one element missing. As co-creator Roger Dingledine explained, if only U.S. Department of Defense (DoD) personnel used the network it wouldn’t matter that source and destination were masked between multiple user stations. All users would be identifiable as part of the spy network. It would be like trying to hide a needle in a stack of needles. What the dark web needed was a haystack of non DoD users. And so in 2002 the software was made open source and anyone seeking the option to communicate and transact globally was invited to download it. Thousands of freedom-conscious people heeded the call and thus the dark web was born.

But freedom is morally ambiguous, granting expression to the best and worst urges of humanity. This is why security officers and senior executives in banks and businesses, insurance providers and intelligence agencies, all need to know who is using the dark web, what it is being used for, and how imminent is the threat it poses to their operations.

 [This article is originally published in calcalistech.com By riel Yosefi and Avraham Chaim Schneider - Uploaded by AIRS Member: Eric Beaudoin]

Categorized in Deep Web

A Boolean search, in the context of a search engine, is a type of search where you can use special words or symbols to limit, widen, or define your search.

This is possible through Boolean operators such as AND, OR, NOT, and NEAR, as well as the symbols + (add) and - (subtract).

When you include an operator in a Boolean search, you're either introducing flexibility to get a wider range of results, or you're defining limitations to reduce the number of unrelated results.

Most popular search engines support Boolean operators, but the simple search tool you'll find on a website probably doesn't.

Boolean Meaning

George Boole, an English mathematician from the 19th century, developed an algebraic method that he first described in his 1847 book, The Mathematical Analysis of Logic and expounded upon in his An Investigation of the Laws of Thought (1854).

Boolean algebra is fundamental to modern computing, and all major programming languages include it. It also figures heavily in statistical methods and set theory.

Today's database searches are largely based on Boolean logic, which allows us to specify parameters in detail—for example, combining terms to include while excluding others. Given that the internet is akin to a vast collection of information databases, Boolean concepts apply here as well.

Boolean Search Operators

For the purposes of a Boolean web search, these are the terms and symbols you need to know:

  • AND (symbol: +): All words must be present in the results. Example: football AND nfl
  • OR: Results can include any of the words. Example: paleo OR primal
  • NOT (symbol: -): Results include everything but the term that follows the operator. Example: diet NOT vegan
  • NEAR: The search terms must appear within a certain number of words of each other. Example: swedish NEAR minister

Note: Most search engines default to using the OR Boolean operator, meaning that you can type a bunch of words and it will search for any of them, but not necessarily all of them.

Tip: Not all search engines support these Boolean operators. For example, Google understands the - symbol but doesn't support NOT. Learn more about Boolean searches on Google for help.

Why Boolean Searches Are Helpful

When you perform a regular search, such as dog if you're looking for pictures of dogs, you'll get a massive number of results. A Boolean search would be beneficial here if you're looking for a specific dog breed or if you're not interested in seeing pictures for a specific type of dog.

Instead of just sifting through all the dog pictures, you could use the NOT operator to exclude pictures of poodles or boxers.

A Boolean search is particularly helpful after running an initial search. For instance, if you run a search that returns lots of results that pertain to the words you entered but don't actually reflect what you were looking for, you can start introducing Boolean operators to remove some of those results and explicitly add specific words.

To return to the dog example, consider this: you see lots of random dog pictures, so you add +park to see dogs in parks. But then you want to remove the results that have water, so you add -water. Immediately, you've cut down likely millions of results.
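The dog example maps directly onto simple set operations. The sketch below is a minimal illustration of that logic over a toy document collection, written in Python; it is not how any real search engine is implemented, but it shows why AND and NOT shrink the result set while OR widens it.

    # Toy illustration of Boolean filtering; real search engines use inverted
    # indexes and ranking, but the set logic is the same.
    docs = {
        1: "dog playing in the park",
        2: "dog swimming in water at the park",
        3: "poodle grooming tips",
        4: "cat sleeping indoors",
    }

    def matching(term):
        """Return the ids of documents whose text contains the term."""
        return {doc_id for doc_id, text in docs.items() if term in text.split()}

    print(matching("dog") & matching("park"))                        # AND: {1, 2}
    print((matching("dog") & matching("park")) - matching("water"))  # AND + NOT: {1}
    print(matching("poodle") | matching("cat"))                      # OR: {3, 4}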

More Boolean Search Examples

Below are some more examples of Boolean operators. Remember that you can combine them and utilize other advanced search options such as quotes to define phrases.

AND

free AND games

Helps find free games by including both words.

"video chat app" iOS AND Windows

Searches for video chat apps that can run on both Windows and iOS devices.

OR

"open houses" saturday OR sunday

Locate open houses that are open either day.

"best web browser" macOS OR Mac

If you're not sure how the article might be worded, you can try a search like this to cover both words.

NOT

2019 movies -horror

Finds movies mentioning 2019, but excludes all pages that have the word horror.

"paleo recipes" -sugar

Locates web pages about paleo recipes but ensures that none of them include the word sugar.

Note: Boolean operators need to be in all uppercase letters for the search engine to understand them as an operator and not a regular word.

[Source: This article was published in lifewire.com By Tim Fisher - Uploaded by the Association Member: Jason bourne] 

Categorized in Research Methods

[Source: This article was published in searchengineland.com By Awario - Uploaded by the Association Member: Robert Hensonw]

Boom! Someone just posted a tweet praising your product. On the other side of the world, an article featuring your company among the most promising startups of 2019 was published. Elsewhere, a Reddit user started a thread complaining about your customer care. A thousand miles away, a competitor posted an announcement about a new product they are building. 

What if you (and everyone on your team, from Social Media to PR to Product to Marketing) could have access to that data in real time?

That’s exactly where social listening steps in.

What is social media listening?

Social listening is the process of tracking mentions of certain words, phrases, or even complex queries across social media and the web, followed by an analysis of the data.

A typical word to track would be a brand name, but the possibilities of social media monitoring go way beyond that: you can monitor mentions of your competitors, industry, campaign hashtags, and even search for people who’re looking for office space in Seattle if that’s what you’re after.

Despite its name, social listening isn’t just about social media: many listening tools also monitor news websites, blogs, forums, and the rest of the web.

But that’s not the only reason why the concept can be confusing. Social listening goes by many different names: buzz analysis, social media measurement, brand monitoring, social media intelligence… and, last but not least, social media monitoring. And while these terms don’t exactly mean the same thing, you’ll often see them used interchangeably today.

The benefits of social listening

The exciting thing about social media listening is that it gives you access to invaluable insights on your customers, market, and competition: think of it as getting answers to questions that matter to your business, but without having to ask the actual questions.

There’s an infinite number of ways you can use this social media data; here are just a few obvious ones.

1. Reputation management.

A sentiment graph showcasing a reputation crisis. Screenshot from Awario.

This is one of the most common reasons companies use social listening. Businesses monitor mentions of their brand and products to track brand health and react to changes in the volume of mentions and sentiment early to prevent reputation crises.

2. Competitor analysis.

Social media share of voice for the airlines. Screenshot from the Aviation Industry 2019 report.

Social media monitoring tools empower you with an ability to track what’s being said about your competition on social networks, in the media, on forums and discussion boards, etc. 

This kind of intelligence is useful at every step of competitor analysis: from measuring Share of Voice and brand health metrics to benchmark them against your own, to learning what your rivals’ customers love and hate about their products (so you can improve yours), to discovering the influencers and publishers they partner with… The list goes on. For more ways to use social media monitoring for competitive intelligence, this thorough guide to competitor analysis comes heavily recommended.

3. Product feedback.

The topic cloud for Slack after its logo redesign. Screenshot from Awario.

By tracking what your clients are saying about your product online and monitoring key topics and sentiment, you can learn how they react to product changes, what they love about your product, and what they believe is missing from it. 

As a side perk, this kind of consumer intelligence will also let you learn more about your audience. By understanding their needs better and learning to speak their language, you’ll be able to improve your ad and website copy and enhance your messaging so that it resonates with your customers.

4. Customer service.

Recent tweets mentioning British Airways. Screenshot from Awario.

Let’s talk numbers.

Fewer than 30% of social media mentions of brands include their handle — that means that by not using a social listening tool you’re ignoring 70% of the conversations about your business. Given that 60% of consumers expect brands to respond within an hour and 68% of customers leave a company because of its unhelpful (or non-existent) customer service, not reacting to those conversations can cost your business actual money.

5. Lead generation.

Social media leads for smartwatch manufacturers. Screenshot from Awario.

While lead generation isn’t the primary use case for most social listening apps, some offer social selling add-ons that let you find potential customers on social media. For the nerdy, Boolean search is an extremely flexible way to search for prospects: it’s an advanced way to search for mentions that uses Boolean logic to let you create complex queries for any use case. Say, if you’re a NYC-based insurance company, you may want to set up Boolean alerts to look for people who’re about to move to New York so that you can reach out before they’re actually thinking about insurance. Neat, huh? 
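Query syntax differs from tool to tool, so the snippet below is only a generic illustration of what such an alert could look like. The operators are standard Boolean logic, but the exact grammar (quotes, parentheses, NOT versus a minus sign) is an assumption rather than any vendor's documented syntax.

    # Hypothetical Boolean alert for the NYC insurance example above.
    # The operators are generic; check your listening tool's own query syntax.
    alert_query = (
        '("moving to New York" OR "relocating to NYC" OR "moving to Brooklyn") '
        'AND (apartment OR lease OR "new place") '
        'NOT (hiring OR "job posting" OR "moving company")'
    )
    print(alert_query)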

6. PR.

Most influential news articles about KLM. Screenshot from Awario.

Social listening can help PR teams in more than one way. First, it lets you monitor when press releases and articles mentioning your company get published. Second, PR professionals can track mentions of competitors and industry keywords across the online media to find new platforms to get coverage on and journalists to partner with.

7. Influencer marketing.

Top influencers for Mixpanel. Screenshot from Awario.

Most social media monitoring tools will show you the impact, or reach, of your brand mentions. From there, you can find who your most influential brand advocates are. If you’re looking to find new influencers to partner with, all you need to do is create a social listening alert for your industry and see who the most influential people in your niche are. Lastly, make sure to take note of your competitors’ influencers — they will likely turn out to be a good fit for your brand as well.

8. Research.

Analytics for mentions of Brexit over the last month. Screenshot from Awario.

Social listening isn’t just for brands — it also lets you monitor what people are saying about any phenomenon online. Whether you’re a journalist writing an article on Brexit, a charity looking to evaluate the volume of conversations around a social cause, or an entrepreneur looking to start a business and doing market research, social listening software can help.

 

3 best social media listening tools

Now that we’re clear on the benefits of social media monitoring, let’s see what the best apps for social listening are. Here are our top 3 picks for every budget and company size.

1. Awario

Awario is a powerful social listening and analytics tool. With real-time search, a Boolean search mode, and extensive analytics, it’s one of the most popular choices for companies of any size.

Awario offers the best value for your buck. With it, you’ll get over 1,000 mentions for $1 — an amazing offer compared to similar tools. 

Key features: Boolean search, Sentiment Analysis, Topic clouds, real-time search.

Supported platforms: Facebook, Instagram, Twitter, YouTube, Reddit, news and blogs, the web.

Free trial: Try Awario free for 7 days by signing up here.

Pricing: Pricing starts at $29/mo for the Starter plan with 3 topics to monitor and 30,000 mentions/mo. The Pro plan ($89/mo) includes 15 topics and 150,000 mentions. Enterprise is $299/mo and comes with 50 topics and 500,000 mentions. If you choose to go with an annual option, you’ll get 2 months for free. 
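As a quick sanity check of the "over 1,000 mentions for $1" claim, the per-dollar figures fall straight out of the monthly plan numbers quoted above; the calculation below is ours, not the vendor's.

    # Mentions per dollar, computed from the monthly plan figures above.
    plans = {"Starter": (29, 30_000), "Pro": (89, 150_000), "Enterprise": (299, 500_000)}
    for name, (price_usd, mentions) in plans.items():
        print(f"{name}: about {mentions / price_usd:,.0f} mentions per dollar")
    # Starter ~1,034; Pro ~1,685; Enterprise ~1,672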

2. Tweetdeck

TweetDeck is a handy (and free) tool to manage your brand’s presence on Twitter. It lets you schedule tweets, manage several Twitter accounts, reply to DMs, and monitor mentions of anything across the platform — all in a very user-friendly, customizable dashboard. 

For social media monitoring, TweetDeck offers several powerful ways to search for mentions on Twitter with a variety of filters for you to use. You can then engage with the tweets without leaving the app. 

TweetDeck is mostly used for immediate engagement — the tool doesn’t offer any kind of analytics.

Key features: User-friendly layout, ability to schedule tweets, powerful search filters.

Supported platforms: Twitter.

Free trial: N/A

Pricing: Free.

3. Brandwatch

Brandwatch is an extremely robust social media intelligence tool. It doesn’t just let you monitor brand mentions on social: the tool comes with image recognition, API access, and customizable dashboards that cover just about any social listening metric you can think of. 

Brandwatch’s other product, Vizia, offers a way to visualize your social listening data and even combine it with insights from a number of other sources, including Google Analytics.

Key features: Powerful analytics, exportable visualizations, image recognition.

Supported platforms: Facebook, Twitter, Instagram, YouTube, Pinterest, Sina Weibo, VK, QQ, news and blogs, the web.

Free trial: No.

Pricing: Brandwatch is an Enterprise-level tool. Their most affordable Pro plan is offered at $800/month with 10,000 monthly mentions. Custom plans are available upon request.

Before you go

Social media is an invaluable source of insights and trends in consumer behavior but remember: social listening doesn’t end with the insights. It’s a continuous learning process — the end goal of which should be serving the customer better.

Categorized in Social

[Source: This article was published in halifaxtoday.ca By Ian Milligan - Uploaded by the Association Member: Deborah Tannen]

Today, and into the future, consulting archival documents increasingly means reading them on a screen

Our society’s historical record is undergoing a dramatic transformation.

Think of all the information that you create today that will be part of the record for tomorrow. More than half of the world’s population is online and may be doing at least some of the following: communicating by email, sharing thoughts on Twitter or social media or publishing on the web.

Governments and institutions are no different. The American National Archives and Records Administration, responsible for American official records, “will no longer take records in paper form after December 31, 2022.”

In Canada, under Library and Archives Canada’s Digital by 2017 plan, records are now preserved in the format that they were created in: that means a Word document or email will be part of our historical record as a digital object.

Traditionally, exploring archives meant largely physically collecting, searching and reviewing paper records. Today, and into the future, consulting archival documents increasingly means reading them on a screen.

This brings with it an opportunity — imagine being able to search for keywords across millions of documents, leading to radically faster search times — but also a challenge, as the number of electronic documents increases exponentially.

As I’ve argued in my recent book History in the Age of Abundance, digitized sources present extraordinary opportunities as well as daunting challenges for historians. Universities will need to incorporate new approaches to how they train historians, either through historical programs or newly-emerging interdisciplinary programs in the digital humanities.

The ever-growing scale and scope of digital records suggests technical challenges: historians need new skills to plumb these for meaning, trends, voices and other currents, to piece together an understanding of what happened in the past.

There are also ethical challenges, which, although not new in the field of history, now bear particular contemporary attention and scrutiny.

Historians have long relied on librarians and archivists to bring order to information. Part of their work has involved ethical choices about what to preserve, curate, catalogue and display and how to do so. Today, many digital sources are now at our fingertips — albeit in raw, often uncatalogued, format. Historians are entering uncharted territory.

Digital abundance

Traditionally, as the late, great American historian Roy Rosenzweig of George Mason University argued, historians operated in a scarcity-based economy: we wished we had more information about the past. Today, the hundreds of billions of web pages preserved at the Internet Archive alone represent more archival information than scholars have ever had access to. People who never before would have been included in archives are part of these collections.

Take web archiving, for example, which is the preservation of websites for future use. Since 2005, Library and Archives Canada’s web archiving program has collected over 36 terabytes of information with over 800 million items.

Even historians who study the Middle Ages or the 19th century are being affected by this dramatic transformation. They’re now frequently consulting records that began life as traditional parchment or paper but were subsequently digitized.

Historians’ digital literacy

Our research team at the University of Waterloo and York University, collaborating on the Archives Unleashed Project, uses sources like the GeoCities.com web archive. This is a collection of websites published by users between 1994 and 2009. We have some 186 million web pages to use, created by seven million users.

Our traditional approaches for examining historical sources simply won’t work on the scale of hundreds of millions of documents created by one website alone. We can’t read page by page nor can we simply count keywords or outsource our intellectual labor to a search engine like Google.

As historians examining these archives, we need a fundamental understanding of how records were produced, preserved and accessed. Such questions and modes of analysis are continuous with historians’ traditional training: Why were these records created? Who created or preserved them? And, what wasn’t preserved?

Second, historians who confront such voluminous data need to develop more contemporary skills to process it. Such skills can range from knowing how to take images of documents and make them searchable using Optical Character Recognition, to being able not only to count how often given terms appear, but also to examine what contexts they appear in and how concepts begin to appear alongside other concepts.
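As a small, concrete illustration of that tooling (a sketch assuming the Pillow and pytesseract packages plus a local Tesseract install, not a prescribed workflow), a historian could OCR a scanned page and then pull up every occurrence of a term with a few words of surrounding context:

    # Sketch: OCR a scanned document, then count a term and show it in context.
    from collections import Counter
    from PIL import Image
    import pytesseract

    text = pytesseract.image_to_string(Image.open("scanned_page.png"))  # placeholder file
    words = text.split()

    term = "Johnson"
    print(Counter(words)[term], "occurrences of", term)
    for i, w in enumerate(words):
        if w == term:
            print(" ".join(words[max(0, i - 5):i + 6]))  # five words of context either side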

You might be interested in finding the “Johnson” in “Boris Johnson,” but not the “Johnson & Johnson Company.” Just searching for “Johnson” is going to get a lot of misleading results: keyword searching won’t get you there. Yet emergent research in the field of natural language processing might!
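That is exactly the kind of distinction named-entity recognition handles. A minimal sketch with spaCy (one possible library among several; the entity labels it prints depend on the model) separates the person from the company:

    # Sketch: keep "Boris Johnson" (a person) while filtering out
    # "Johnson & Johnson" (an organization) using named-entity recognition.
    # Assumes spaCy and its small English model (en_core_web_sm) are installed.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Boris Johnson spoke to reporters while Johnson & Johnson issued a recall.")

    for ent in doc.ents:
        print(ent.text, ent.label_)   # expected: PERSON for the politician, ORG for the company

    people = [ent.text for ent in doc.ents if ent.label_ == "PERSON"]
    print(people)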

Historians need to develop basic algorithmic and data fluency. They don’t need to be programmers, but they do need to think about how code and data operates, how digital objects are stored and created and humans’ role at all stages.

Deep fake vs. history

As historical work is increasingly defined by digital records, historians can contribute to critical conversations around the role of algorithms and truth in the digital age. While both tech companies and some scholars have advanced the idea that technology and the internet will strengthen democratic participation, historical research can help uncover the impact of socio-economic power throughout communications and media history. Historians can also help amateurs parse the sea of historical information and sources now on the Web.

One of the defining skills of a historian is an understanding of historical context. Historians instinctively read documents, whether they are newspaper columns, government reports or tweets, and contextualise them in terms of not only who wrote them, but their environment, culture and time period.

As societies lose their physical paper trails and increasingly rely on digital information, historians, and their grasp of context, will become more important than ever.

As deepfakes — products of artificial intelligence that can alter images or video clips — increase in popularity online, both our media environment and our historical record will increasingly be full of misinformation.

Western societies’ traditional archives — such as those held by Library and Archives Canada or the National Archives and Records Administration — contain (and have always contained) misinformation, misrepresentation and biased worldviews, among other flaws.

Historians are specialists in critically reading documents and then seeking to confirm them. They synthesise their findings with a broad array of additional sources and voices. Historians tie together big pictures and findings, which helps us understand today’s world.

The work of a historian might look a lot different in the 21st century — exploring databases, parsing data — but the application of their fundamental skills of seeking context and accumulating knowledge will serve both society and them well in the digital age.

Categorized in Investigative Research

[Source: This article was published in globalnews.ca By Jessica Vomiero - Uploaded by the Association Member: Anna K. Sasaki]

Amid the frenzy of a cross-country RCMP manhunt for two young men who’ve been charged in one murder and are suspects in another double homicide, a photo of an individual who looked like one of the suspects began circulating online.

Bryer Schmegelsky, 18, and Kam McLeod, 19, have been charged with the second-degree murder of Leonard Dyck and are suspects in the double homicide of Lucas Fowler and Chyna Deese. The two men are currently on the run and police have issued nationwide warrants for their arrest.

The search has focused on northern Manitoba, where the men were believed to have been sighted on Monday. The photo was sent to police on Thursday evening by civilians, following an RCMP request that anyone with any information about the whereabouts of the suspects report it to police.

It depicts a young man who strikingly resembles the photos police released of McLeod, holding up a copy of the Winnipeg Sun featuring the two suspects on the front page.

RCMP say the man in the photo is not the suspect Kam McLeod. Experts say police always have to follow up on online rumours and pictures like this.

Police eventually determined that the photo did not depict either of the suspects.

“It appears to be an instance where a photo was taken and then ended up unintentionally circulated on social media,” RCMP Cpl. Julie Courchaine said at a press conference on Friday.

She also warned against sharing or creating rumours online.

“The spreading of false information in communities across Manitoba has created fear and panic,” she said.

While this particular photo did not show a suspect, the RCMP confirmed to Global News that their investigators follow up on “any and all tips” to determine their validity. Experts note that this mandate may force the RCMP to pull resources away from the primary investigation.

“They have to assign investigators to take a look at the information and then to follow up,” explained Kim Watt-Senner, who served as an RCMP officer for almost 20 years and is now a Fraser Lake, B.C., city councillor. “They physically have to send members out to try and either debunk or to corroborate that yes, this is, in fact, a bona fide lead.”

After seeing the photo, she noted that a trained eye would be able to see some distinct differences in the eyes and the facial structure, but “if a person wasn’t trained to look for certain things, I can see why the general public would think that was the suspect.”

She added that while she believes getting public input through digital channels is largely a good thing, it can also be negative.

“There’s a whole wave that happens after the information is shared on social media and the sharing of the posts and everything else, then it goes viral and it can go viral before the RCMP or the police have a chance to authenticate that information.”

While she knows through her experience as a Mountie that people are trying to help, “it can also impede the investigation, too.”

Near the beginning of the investigation, the RCMP appealed to the public for any information they had about Schmegelsky and McLeod, or the victims. Kenneth Gray, a retired FBI agent and lecturer at the University of New Haven, explained that the internet has also changed the way police respond when receiving public tips.

“Whenever you asked the public for assistance on a case and you start receiving tips, every one of those tips has to be examined to determine whether or not it is useful to solve whatever case you’re working on and that takes time,” said Gray.

“In this particular case with the photograph, it had to be examined to determine whether this was actually the suspect or whether it was just a lookalike that took vital resources that could have been being devoted to actually finding this guy.”

He explained that if he’d gone about verifying or debunking the photo himself, he’d attempt to determine where the information came from and trace that back to the person in the photo. He suggested performing an electronic search on the image to ultimately determine who is in the photograph.

In addition, the internet has added a new layer of complexity to screening public leads. With the advent of social media, “you get inundated with information that is coming from all over the place.”

“At one point, you would put out local information and you’d only get back local-type tips. But now, with the advent of the internet, tips can come in from all over the world. It casts such a large net that you get information from everywhere,” he said.

“That gives you a lot more noise.”

The model that most departments have pursued to deal with this, he said, is one that requires investigators to pursue all leads while setting priorities to determine which ones should be given the most resources.

While the widened reach that the internet affords can complicate things, some experts suggest that this isn’t always a negative thing.

Paul McKenna, the former director of the Ontario Provincial Police Academy and a former policing consultant for the Nova Scotia Department of Justice, agrees.

“All leads are potentially useful for the police until they are proven otherwise,” he said in a statement. “Every lead may hold something of value and police always remind the public that even the most apparently inconsequential thing may turn out to have relevance.”

Social media has played a role in a number of high-profile arrests over the years, including that of Brock Turner, who in 2016 was convicted of four counts of felony sexual assault and allegedly took photos of the naked victim and posted them on social media, and Melvin Colon, a gang member who was arrested in New York after police were given access to his online posts.

In this particular case, Watt-Senner explained that a command centre would likely be set up close to where the RCMP are stationed in Manitoba. She said that all information will be choreographed out of that command centre, where a commander will decipher the leads that come through.

“Those commanders would be tasked and trained on how to obtain information, filter information, and disseminate information and to choreograph the investigative avenues of that information in real-time,” Watt-Senner said.

She notes that the RCMP would likely have used facial recognition software to determine for certain whether the man depicted in the photo was, in fact, the suspect, and the software would also be used as citizens began to report sightings of the suspects.

“This is really integral. This is a really important part of the investigation, especially when you start to get sightings from different areas. That information would be sent to people that are specifically trained in facial recognition.”

While the investigation may take longer because of the higher volume of leads being received through digital channels, all three experts conclude that the good that comes from social media outweighs the bad.

 

Categorized in Investigative Research

[Source: This article was published in techdirt.com By Julia Angwin, ProPublica - Uploaded by the Association Member: Dana W. Jimenez]

The East German secret police, known as the Stasi, were an infamously intrusive force. They amassed dossiers on about one quarter of the population of the country during the Communist regime.

But their spycraft — while incredibly invasive — was also technologically primitive by today's standards. While researching my book Dragnet Nation, I obtained the above hand drawn social network graph and other files from the Stasi Archive in Berlin, where German citizens can see files kept about them and media can access some files, with the names of the people who were monitored removed.

The graphic shows forty-six connections, linking a target to various people (an "aunt," "Operational Case Jentzsch," presumably Bernd Jentzsch, an East German poet who defected to the West in 1976), places ("church"), and meetings ("by post, by phone, meeting in Hungary").

Gary Bruce, an associate professor of history at the University of Waterloo and the author of "The Firm: The Inside Story of the Stasi," helped me decode the graphic and other files. I was surprised at how crude the surveillance was. "Their main surveillance technology was mail, telephone, and informants," Bruce said.

Another file revealed a low-level surveillance operation called an IM-vorgang aimed at recruiting an unnamed target to become an informant. (The names of the targets were redacted; the names of the Stasi agents and informants were not.) In this case, the Stasi watched a rather boring high school student who lived with his mother and sister in a run-of-the-mill apartment. The Stasi obtained a report on him from the principal of his school and from a club where he was a member. But they didn't have much on him — I've seen Facebook profiles with far more information.

A third file documented a surveillance operation known as an OPK, for Operative Personenkontrolle, of a man who was writing oppositional poetry. The Stasi deployed three informants against him but did not steam open his mail or listen to his phone calls. The regime collapsed before the Stasi could do anything further.

I also obtained a file that contained an "observation report," in which Stasi agents recorded the movements of a forty-year-old man for two days — September 28 and 29, 1979. They watched him as he dropped off his laundry, loaded up his car with rolls of wallpaper, and drove a child in a car "obeying the speed limit," stopping for gas and delivering the wallpaper to an apartment building. The Stasi continued to follow the car as a woman drove the child back to Berlin.

The Stasi agent appears to have started following the target at 4:15 p.m. on a Friday evening. At 9:38 p.m., the target went into his apartment and turned out the lights. The agent stayed all night and handed over surveillance to another agent at 7:00 a.m. Saturday morning. That agent appears to have followed the target until 10:00 p.m. From today's perspective, this seems like a lot of work for very little information.

And yet, the Stasi files are an important reminder of what a repressive regime can do with so little information.

Categorized in Search Engine

[Source: This article was published in ibtimes.co.uk By Anthony Cuthbertson - Uploaded by the Association Member: Robert Hensonw]

A search engine more powerful than Google has been developed by the US Defense Advanced Research Projects Agency (DARPA), capable of finding results within dark web networks such as Tor.

The Memex project was ostensibly developed for uncovering sex-trafficking rings; however, the platform can be used by law enforcement agencies to uncover all kinds of illegal activity taking place on the dark web, leading to concerns surrounding internet privacy.

Thousands of sites on dark web networks like Tor and I2P can be scraped and indexed by Memex, as well as the millions of web pages on the so-called deep web that are ignored by popular search engines like Google and Bing.
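DARPA has not published Memex's crawling pipeline, but the basic mechanics of reaching an onion service programmatically are well known. The sketch below assumes a local Tor daemon with its SOCKS proxy on port 9050, plus the requests package (with SOCKS support) and beautifulsoup4; the onion address is a placeholder.

    # Sketch: fetch a hidden-service page through Tor's SOCKS proxy and extract
    # its text for indexing. Not Memex's actual pipeline.
    import requests
    from bs4 import BeautifulSoup

    proxies = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h lets Tor resolve .onion names
        "https": "socks5h://127.0.0.1:9050",
    }

    url = "http://examplexxxxxxxxxxxxxxxx.onion/"  # placeholder address
    response = requests.get(url, proxies=proxies, timeout=60)
    text = BeautifulSoup(response.text, "html.parser").get_text(separator=" ")
    print(text[:200])  # feed this into whatever index you maintain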

The difference between the dark web and the deep web

The dark web is a section of the internet that requires specialist software tools to access, such as the Tor browser. Originally designed to protect privacy, it is often associated with illicit activities.

The deep web is a section of the open internet that is not indexed by search engines like Google - typically internal databases and forums within websites. It comprises around 95% of the internet.

Websites operating on the dark web, such as the former Silk Road black marketplace, purport to offer anonymity to their users through a form of encryption known as Onion Routing.

While users' identities and IP addresses will still not be revealed through Memex results, the use of an automated process to analyse content could uncover patterns and relationships that could potentially be used by law enforcement agencies to track and trace dark web users.

"We're envisioning a new paradigm for search that would tailor content, search results, and interface tools to individual users and specific subject areas, and not the other way round," said DARPA program manager Chris White.

"By inventing better methods for interacting with and sharing information, we want to improve search for everybody and individualise access to information. Ease of use for non-programmers is essential."

Memex achieves this by addressing the one-size-fits-all approach taken by mainstream search engines, which list results based on consumer advertising and ranking.

Memex raises further concerns about internet surveillance.

'The most intense surveillance state the world has literally ever seen'

The search engine is initially being used by the US Department of Defence to fight human trafficking and DARPA has stated on its website that the project's objectives do not involve deanonymising the dark web.

The statement reads: "The program is specifically not interested in proposals for the following: attributing anonymous services, deanonymising or attributing identity to servers or IP addresses, or accessing information not intended to be publicly available."

Despite this, White has revealed that Memex has been used to improve estimates on the number of services there are operating on the dark web.

"The best estimates there are, at any given time, between 30,000 and 40,000 hidden service Onion sites that have content on them that one could index," White told 60 Minutes earlier this month.

Internet freedom advocates have raised concerns based on the fact that DARPA has revealed very few details about how Memex actually works, which partners are involved and what projects beyond combating human trafficking are underway.

"What does it tell about a person, a group of people, or a program, when they are secretive and operate in the shadows?" author Cassius Methyl said in a post to Anti Media. "Why would a body of people doing benevolent work have to do that?

"I think keeping up with projects underway by DARPA is of critical importance. This is where the most outrageous and powerful weapons of war are being developed.

"These technologies carry the potential for the most intense surveillance/ police state that the world has literally ever seen."

Categorized in Deep Web