Fermat, a collaborative, open-source technology project, has announced the launch of the Internet of People (IoP) Consortium, an initiative aimed at boosting academic research and encouraging university-led pilot projects related to the “person-to-person economy.”

The IoP is meant to allow people to hold direct control and ownership of their data and digital footprint. The project seeks to develop and provide individuals with the tools to freely interact electronically, both for social and commercial purposes, “without unnecessary third party interferences.”

The newly formed consortium will provide opportunities to universities and research institutions to develop and participate in innovative projects in that field. Current members include ELTE, Infota, Virtual Planet and Cyber Services PLC.

First pilot project

In March, the consortium launched its first pilot project through a research lab at ELTE, the largest and one of the most prestigious universities in Hungary, in cooperation with the EIT Digital Internet of Things Open Innovation Lab.

Focusing on the shipping industry, the pilot project found that with disintermediating technology, multinational companies in a wide range of verticals can significantly increase effectiveness and reduce costs. Technology that removes unnecessary intermediaries and creates a decentralised system improves privacy for both senders and receivers, allows on-demand contractors to better monitor failure situations, and helps smaller shipping companies enter the market.

“Our first project has already delivered important findings on the power of IoP technology,” Csendes said. “Though the study focused on the shipping industry, the technology developed could improve the logistics industry as a whole.”

The Internet of People

Fermat, an organization based in Switzerland, is in charge of building the decentralized infrastructure for the IoP, which includes an open social graph, a direct, peer-to-peer access channel to individual people, and a direct device-to-device communication layer.

The IoP is intended to be an information space where people’s profiles are identified by a public key and interlinked by profile relationship links. Profiles can be accessed via the Internet.

The project aims to empower people by giving them the freedom to administer their online privacy and to protect themselves from spying, censorship and data mining by establishing direct person-to-person interactions.

Speaking to CoinJournal, Fermat founder Luis Molina explained:

“The information on the Internet of People is controlled by end users using their profile private key, in the same way they control their bitcoin balances using their bitcoin private keys. This means that only them can modify the information of their profiles and the relationship with others profiles as well.”

As on Facebook, an individual is able to configure the privacy level of his or her profile and choose which information is public.

“A profile uploaded to the IoP does not mean that everyone can access all the information on it,” Molina said.

“The main difference is that when you upload your info to Facebook, Facebook is in control and they monetize your information using it for their own profit. On the other hand the Internet of People allows you to sell pieces of your private data or digital footprint on a global marketplace to whoever you choose and as many times you want, even the same piece of data.”

The IoP uses a new type of cryptographically secured data structure called the graphchain. The main difference between a graphchain and a blockchain is that the former is a cryptographically secured data structure in which no blocks or transactions have to be stored.

According to Molina, Fermat’s graphchain technology enables a global mapping of everybody with verified proof of how they are related, and also people-to-people and company-to-people interactions without going through intermediaries.
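The profile-and-link structure can be sketched roughly as follows. This is an illustrative stand-in, not Fermat's actual implementation: the `sign` helper is a hypothetical placeholder for a real digital signature scheme, and the key strings are invented.

```python
import hashlib

# Illustrative sketch only: profiles are identified by a public key and
# interlinked by signed relationship links, so only the holder of the
# matching private key can add or change links on a profile.
def sign(private_key: str, message: str) -> str:
    # Hypothetical stand-in for a real signature (e.g. Ed25519)
    return hashlib.sha256((private_key + message).encode()).hexdigest()

profiles: dict[str, list] = {}

def add_link(public_key: str, private_key: str, other_key: str) -> None:
    message = f"{public_key}->{other_key}"
    profiles.setdefault(public_key, []).append(
        {"to": other_key, "sig": sign(private_key, message)}
    )

add_link("alice-pub", "alice-priv", "bob-pub")
print(profiles["alice-pub"][0]["to"])  # bob-pub
```

Because each link carries a signature made with the profile owner's private key, anyone can verify who created a relationship without asking a central authority.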

Csendes said that the graphchain technology brings “endless business opportunities because of the additional network components and methodologies added on top of blockchain technology.”

“The IoP Consortium was formed in response to the need for concrete and developed use cases demonstrating this value,” he concluded.

Source: This article was published in coinjournal.net by Diana Ngo

Categorized in Online Research

In a bid to fight fake news and low-quality content, Google is updating its search algorithms. In addition to making improvements to search ranking, the search engine giant wants to offer people easier ways to directly report offensive or misleading content.

In a blog post, Google Vice President of Search Ben Gomes said that Google has improved its evaluation methods and made algorithmic updates to surface more authoritative content.

For the first time, users will be able to directly flag content that appears in Autocomplete and Featured Snippets in Google Search.


Autocomplete helps predict the searches people might be typing, while Featured Snippets appear at the top of search results showing a highlight of the information relevant to what people are looking for.

“Today, in a world where tens of thousands of pages are coming online every minute of every day, there are new ways that people try to game the system. The most high profile of these issues is the phenomenon of “fake news,” where content on the web has contributed to the spread of blatantly misleading, low quality, offensive or downright false information,” Gomes said in the blog.

Google has a team of evaluators – real people – to monitor the quality of Google’s search results. Their ratings will help the company gather data on the quality of its results and identify areas for improvements.


Last month, Google updated its Search Quality Rater Guidelines to provide more detailed examples of low-quality web pages for raters to appropriately flag, which can include misleading information, unexpected offensive results, hoaxes and unsupported conspiracy theories. “These guidelines will begin to help our algorithms in demoting such low-quality content and help us to make additional improvements over time,” Gomes said.

Featured Snippets


Meanwhile, Google recently updated its How Search Works site to provide detailed info to users and website owners about the technology behind Google Search.

This article was published in marketexclusive.com by David Zazoff


Categorized in Search Engine


As Congress and state governments continue to seek internet privacy regulations, and internet users are urged to take assertive action on the privacy, safety, and accuracy of content on the internet, services such as the Wayback Machine are working to preserve internet content.

The Internet Archive’s Wayback Machine is a digital archive of public pages on the World Wide Web.

Internet Archive began its work of copying the internet in 1996, and in 2001 rolled out the Wayback Machine. The program is a three-dimensional index that allows expansive public searches, over multiple time periods, of hundreds of billions of links, and the ability to access, use and link to historical documents.
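A time-dimensioned index of this kind can be sketched minimally; the URLs, capture dates and markup below are invented for illustration.

```python
from bisect import bisect_right

# Toy sketch of an archive index with a time dimension: each URL maps
# to dated snapshots, so a lookup can target a moment in the past.
archive = {
    "example.com": [
        ("19961101", "<html>version 1</html>"),
        ("20010315", "<html>version 2</html>"),
    ],
}

def snapshot(url: str, date: str):
    # Return the most recent capture at or before the requested date.
    captures = archive.get(url, [])
    i = bisect_right([d for d, _ in captures], date)
    return captures[i - 1][1] if i else None

print(snapshot("example.com", "20000101"))  # <html>version 1</html>
```

Keeping captures sorted by date means a binary search finds the right snapshot even when a URL has thousands of them.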

The average internet user may not need to use a service such as Wayback Machine in order to find what they need, but they can.

There are many available archives, such as those of the Smithsonian Institution, Encyclopædia Britannica and the Library of Congress. There are also similar services, such as WebCite, that offer access to internet libraries.

No matter which archive is used or what information sought, one skill is universally required to get desired information. The key is in the search terms.

Freelance writer Anthony Dejolde published an article, How to Google Like a Boss. Dejolde covered basic and advanced search operators, such as: use quotes (“) around a phrase for exact-match results, use a dash (-) before a word to exclude it from search results, and use a tilde (~) before a term to include results with synonyms.
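As a rough illustration of those operators, here are some example query strings (the queries are invented, and note that Google retired the ~ synonym operator in 2013, though behavior varies by search engine):

```python
# Example query strings using the operators described above.
exact_phrase = '"internet archive"'  # quotes force an exact-phrase match
exclusion = "jaguar -car"            # dash excludes a term from results
synonyms = "~inexpensive flights"    # tilde requested synonyms (historical)
print(exact_phrase, exclusion, synonyms)
```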

With the massive volume of information on the web, it is easy to underestimate the value of saving historical internet content as we do books. It is also easy to forget the value of a good search term for obtaining and verifying desired information and doing research, such as looking up and tracking solutions, in Congress and in the States, for regulating internet providers and securing internet privacy for all users.

Source : arapahoenews.com

Categorized in Search Engine

The internet, Facebook, smartphones and other technology might be a challenging new frontier for many seniors, but there are benefits to learning and embracing the evolving technology.

A study at UCLA showed that simply using search engines such as Google triggered key centers in the brains of middle-aged and older adults, areas that control complex reasoning and decision-making, according to a press release. Researchers involved said the results suggest that searching might help stimulate and possibly improve the function of the brain.

"The study results are encouraging, that emerging computerized technologies may have physiological effects and potential benefits for middle-aged and older adults," said principal investigator Dr. Gary Small, a professor at the Semel Institute for Neuroscience and Human Behavior at UCLA. He also holds UCLA's Parlow-Solomon Chair on Aging and directs the university's Memory and Aging Research Center.

You might be familiar with the notion that crosswords, word searches and other puzzles help keep the brain active. But as technology becomes more a part of our daily lives, the influence of computer use, including the web, also helps keep the mind engaged and may help preserve cognitive ability.

Study volunteers were between the ages of 55 and 76; half of them had search experience and half of them did not. Gender, age and education level were kept similar between the two groups, which performed web searches and book-reading tasks.

While all the participants showed significant brain activity during the book-reading task, internet searches were another matter. All the participants showed the same brain activity as in the book-reading task, but those familiar with online searches also showed activity "in the frontal, temporal and cingulate areas of the brain, which control decision-making and complex reasoning," the study revealed.

"Our most striking finding was that Internet searching appears to engage a greater extent of neural circuitry that is not activated during reading -- but only in those with prior internet experience," said Small.

He said the minimal brain activation found in the less-experienced group may be due to participants not quite grasping the strategies needed to successfully engage in an online search. That is common while learning a new activity, he said.

What does this mean? In addition to helping seniors keep up with ever-developing technology, being actively engaged with the internet can help stimulate brain activity as we age.

Those who haven't embraced the web might consider classes offered at senior centers or other locations. Or there's always a computer-savvy grandchild, who might provide an easy introduction.

Source : chieftain.com

Categorized in Search Engine

Every time you use a search engine to look something up on the internet, personally identifiable information is collected by all major search engines. The search terms submitted, as well as the time, date and geographical location of the computer carrying out the search, are logged and stored. The words you enter are often retained in your browser's search boxes, and your computer will normally cache the terms and pages you visit, so your search history can be retrieved by anyone with access to the hard disk. Do you really want search engines like Google or Bing to know everything you search for on the internet?

What information do search engines keep?

1) IP address: Your computer's IP address can be traced back to you through a reverse DNS lookup, with tools revealing not only your ISP but also your approximate location, such as your state or province.

2) Date and time: The exact date and time you searched for a given keyword are logged. The browser you use is normally also stored in the search engine's logs.

3) Query terms: The terms you searched for are stored.

4) Cookie ID: A unique code is embedded in a cookie that the search engine assigns to a particular computer. It lets the search engine tell whether requests came from that computer; as long as the identifying cookie is still stored in the browser, searches can be linked and traced back to you regardless of which IP address you use.
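Put together, the record a search engine keeps per query might look something like the following sketch. The field names and values are illustrative, not any vendor's actual schema.

```python
from datetime import datetime, timezone

# Illustrative per-query log record combining the four items above.
log_entry = {
    "ip": "203.0.113.42",         # traceable via reverse DNS lookup
    "timestamp": datetime(2017, 5, 1, 12, 30, tzinfo=timezone.utc).isoformat(),
    "user_agent": "Mozilla/5.0 (Windows NT 6.1)",
    "query": "privacy search engines",
    "cookie_id": "a1b2c3d4e5f6",  # links searches across sessions
}
print(log_entry["query"])
```

The cookie ID is what ties otherwise separate records together: even if your IP address changes, every entry with the same cookie ID belongs to the same browser.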

Note that, after some pressure from privacy groups, some major search engines have begun to mask users' IP addresses in their search logs, but this does not make your search history anonymous.

What information do search engines send to webmasters?

After you click on one of the results returned by the search engine, your search terms are passed along to the website's server logs. That webmaster will know what search terms you used to find the site, the referring URL and your IP address, as well as other data such as your browser, operating system and even your default browser language; all of this can help to identify you.


Privacy search engine Duck Duck Go

Your web browser automatically sends information about your user agent and IP address to the search engine, but Duck Duck Go will not store it at all. This information could be used to link you to your searches, and other search engines use it to show you more targeted advertising; Duck Duck Go goes out of its way to delete that data.

Duck Duck Go uses no cookies by default and does not work with any affiliate program that shares personally identifiable information such as name and address. Feedback at Duck Duck Go can also be given anonymously, without entering an email address in the form (it can be left blank). This privacy search engine also offers searching via its SSL website and plenty of customization options.

Duck Duck Go pulls results from Microsoft's Bing and Google search APIs, so much of what you get are results you could find on those search engines, with the added advantage that your personal privacy is respected while you search the internet. Duck Duck Go also has its own web crawler and web index. https://www.duckduckgo.com


Privacy search engines IxQuick & Startpage

IxQuick was awarded the first European Privacy Seal. The IxQuick privacy search engine will not record your IP address, and other data, such as your search queries, is deleted from the log files within a maximum of 48 hours, often sooner.

IxQuick uses the POST method to keep your search terms out of the server logs of the sites you reach from its results; the major search engines, on the other hand, use the GET method, which allows web servers to log what search terms you used to reach them.
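The difference is easy to see: with GET the query terms are embedded in the URL itself, which can then leak via the Referer header and server logs. The search domain below is invented for the example.

```python
from urllib.parse import urlencode

# With GET, the search terms end up in the URL itself...
params = {"q": "privacy search engines"}
get_url = "https://search.example/search?" + urlencode(params)
print(get_url)
# ...so any site you click through to can read them from the Referer
# header. With POST, the same parameters travel in the request body
# and never appear in the URL.
```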

You can use encrypted Secure Socket Layer (SSL) connections to carry out your searches, stopping your ISP from snooping on you. This is of vital importance if you are using a public computer in an internet cafe, a library or at work, where the network administrator can easily spy on your search terms. IxQuick uses a single anonymous cookie to remember the search preferences you save for your next visit; it will not use cookies with a unique ID as many other websites do. IxQuick also allows advanced search syntax and, being a metasearcher, pulls some of its results from other major search engines such as Bing, Ask and the Open Directory.

IxQuick also lets you visit a chosen page through a built-in proxy, so the webmaster's server logs will only see and log IxQuick's IP address, not yours.

I tested the IxQuick search proxy on my server, and it also spoofs your user agent and operating system, identifying itself as Google Chrome on Windows 7. This is good practice, as it makes it even more difficult to pin you down.

A reverse lookup of the IxQuick search proxy's Dutch IP identified it as “Webhosting customers,” making it obvious that it is a hosted proxy rather than an ISP, and the referring URL appeared blank in the server logs. Overall, their proxy for searching in privacy does a good job of keeping your privacy online. https://www.ixquick.com or https://www.startpage.com


Search engine Blippex

This search engine claims that it was built with privacy in mind: results are served peer-to-peer without a central server, and your computer's IP address, browser user agent string and referral page are not stored.

Your connection to the site is encrypted with TLS and the company is very detailed and clear about what information they collect about you.

Users can help Blippex by installing a browser extension that improves search results for other people based on how long you dwell on a site. The data is collected anonymously, and the add-on is not required to search Blippex; it is simply an extra for those who want to help build a search engine based on how popular a page is with people. Results that are not viewed within 90 days are deleted.
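A dwell-time ranking of this kind can be sketched in toy form: pages where users linger rank higher, and entries not viewed within 90 days expire. The URLs and numbers below are invented, and this is only a guess at the shape of such a system, not Blippex's actual algorithm.

```python
from datetime import date, timedelta

# Invented sample data: per-page dwell times (seconds) and last view.
index = {
    "example.com/deep-dive": {"dwell": [40, 90], "last_viewed": date(2017, 4, 1)},
    "example.com/clickbait": {"dwell": [5, 4],   "last_viewed": date(2017, 4, 2)},
    "example.com/stale":     {"dwell": [120],    "last_viewed": date(2016, 1, 1)},
}

def rank(pages, today):
    # Drop entries not viewed in the last 90 days, then sort the rest
    # by average dwell time, highest first.
    fresh = {u: d for u, d in pages.items()
             if today - d["last_viewed"] <= timedelta(days=90)}
    return sorted(fresh,
                  key=lambda u: -sum(fresh[u]["dwell"]) / len(fresh[u]["dwell"]))

print(rank(index, date(2017, 5, 1)))
```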

Blippex's privacy standards far surpass those of Google and Yahoo: the search engine does not show advertisements or sell users' data. To fund development, some results for commercial products might carry an embedded affiliate code so that Blippex can earn a commission. https://www.blippex.com


Usenet search engine BinSearch

This is not an anonymous search engine, but it is included on the list because it carries results that nobody else does. BinSearch specialises in crawling binary Usenet newsgroups, which are ignored by all major search engines. You can search Usenet posts by subject, filename or .nfo, and limit your search to a certain newsgroup or timeframe.

Due to the huge amount of data that Usenet carries, results are refreshed every few weeks and old ones dropped. BinSearch crawls thousands of groups, but it is not possible to index all of them, only the major newsgroups. http://www.binsearch.info


Disconnect.me search

A desktop and mobile app to search the internet with privacy, it can also be accessed via a website without installing anything. Disconnect.me relies on the major search engines, like Google and Bing, for your results, but it hides your computer's IP address when you perform a search: Disconnect has its own proxy server and forwards your search terms so that your own IP is never connected to those words.

Disconnect has deployed secure TLS to stop others from eavesdropping on the search terms you enter. You can use this search interface to anonymously search for words, images, news or videos, and a drop-down menu lets you select which country you would like to search from; different results are given depending on the virtual location you choose, a useful utility for SEO professionals who want to see search results from different locations without being there. https://search.disconnect.me


Tips to search the Internet with privacy

Do not accept any of the major search engines' cookies; they might be used to identify you later on. If you already have a Google or Bing search engine cookie on your computer, delete it.

Do not sign up for email at the same search engine where you regularly search, as your personal email address can potentially be tied to your search terms. Using Google and Gmail (both Google products) or Bing and Hotmail (both Microsoft products) together is not a good idea.

Mix up a variety of search engines; this will spread your search terms across different companies and servers. Varying the physical location you search from can also help: you can use a VPN or proxy to change your computer's IP address and apparent country, and delete all of your search engine cookies before starting a new private searching session.

Source : hacker10.com

Categorized in Search Engine

Internet access is so fundamental that it is starting to be considered a basic human right. However, access to the internet remains uneven. As more devices connect to the internet, more bandwidth is eaten up. Li-Fi is a new route to connectivity that will provide more bandwidth and speed once the technology is completely developed — and it’s very close.


Li-Fi uses an LED bulb’s modulated light signal instead of a modulated radio signal to send data and connect to the internet. The LiFi-X system from PureLiFi transmits data using waves in the visible portion of the electromagnetic spectrum that an LED bulb with a microchip generates. The LED light fixture and a dongle for a USB port comprise the LiFi-X system which delivers speeds of up to 42Mbps, up and down. The system is already in use as its parent company, PureLiFi, has been collaborating with tech companies around the world to trial and improve the technology.
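The underlying idea, encoding bits as rapid on/off light pulses, can be sketched crudely as follows. Real Li-Fi systems use far more sophisticated modulation schemes, so this toy on-off keying is only a conceptual illustration.

```python
# Toy on-off keying: 1 -> LED on, 0 -> LED off, one pulse per bit.
def to_light_pulses(data: bytes) -> list[int]:
    return [int(bit) for byte in data for bit in format(byte, "08b")]

def from_light_pulses(pulses: list[int]) -> bytes:
    # The receiver's photodetector reverses the process.
    bits = "".join(str(p) for p in pulses)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

pulses = to_light_pulses(b"hi")
print(from_light_pulses(pulses))  # b'hi'
```

In a real system the LED flickers millions of times per second, far too fast for the eye to notice, which is why the light appears steady while carrying data.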

A Li-Fi system offers a business many advantages, including improved security. Sending and receiving data through light means that access can be limited much more easily than with Wi-Fi because light does not penetrate walls. On the other hand, this also presents a challenge in terms of making Li-Fi as convenient as Wi-Fi. Smart architecture will be required to increase Li-Fi’s range, and dim LEDs will make it possible to have Li-Fi access that follows users and works even in the dark.

Li-Fi can also be applied in settings that are impossible for Wi-Fi. For example, Li-Fi is ideal for in-flight internet access and high security installations like petrochemical plants in which risk of sparks makes radio antennas too dangerous to be used.

The equipment for Li-Fi is too big to be used in mobile devices — but perhaps only for the moment. Miniaturizing the technology is one of the biggest goals for PureLiFi and, according to Digital Trends, a newly redesigned LiFi-X with a much smaller dongle is coming later this year to use for laptops. This version is still too large to fit into a smartphone, but since “LiFiCapability” language was found in iOS code for a future iPhone model, it seems likely that the smartphone version is coming.


Consumer demand for wireless data is pressuring existing Wi-Fi technology more every day. The ongoing, exponentially growing number of mobile devices in particular is expected to reach 11.6 billion by 2021 — exceeding the projected population of the planet at that time (7.8 billion). This translates into a monthly information level of about 35 quintillion (10^18) bytes — a level that will be unsustainable with current wireless infrastructure and technology.

Li-Fi can relieve this pressure because the visible light frequencies it uses are relatively underutilized. PureLiFi and other companies working to develop the technology are already partnering with businesses in the lighting industry to grow the lighting ecosystem now so that, hopefully, by the time Li-Fi tech is ready to go online at scale the infrastructure it needs will be ready.

In February 2016, Li-Fi technology sent data at up to 1Gbps in trials, 100 times faster than currently available Wi-Fi technology. These trial runs were obviously slower than the lab tests, which demonstrated that Li-Fi connections should be able to transmit up to 224 gigabits per second. By August, researchers were sending data 20 times faster than they did in February, and speeds are expected to continue to improve.

New smartphone and computer designs could incorporate this technology, perhaps in doubly innovative ways. For example, Li-Fi connectivity cells might also provide an opportunity for solar charging capabilities in smart devices. And, while it is unlikely that Li-Fi will entirely replace Wi-Fi, it will almost surely become the exclusive source of data transmission in high security areas, on planes, or in older buildings that disrupt Wi-Fi signals.

Source : malaysiandigest.com

Categorized in Search Engine

The internet can be a harsh place. It seems like for every feel-good story or picture of a puppy playing with a kitten, there are 1,000 trolls rummaging through the depths of their minds to post the most vile comments they can imagine. And if you’re a woman or person of color, well, multiply that troll army by 10.

But hey, that’s the internet, right? Except it doesn’t have to be that way. And it might not be for much longer if the folks at Google (GOOG, GOOGL) subsidiary Jigsaw have their way. A kind of high-powered startup inside Google’s parent company Alphabet, Jigsaw focuses on how technology can defend international human rights.

The toxicity of trolls

The company’s latest effort is called the Perspective API. Available Thursday, Feb. 23, Perspective is the result of Jigsaw’s Conversation AI project and uses Google’s machine learning technologies to provide online publishers with a tool that can automatically rank comments in their forums and comments sections based on the likelihood that they will cause someone to leave a conversation. Jigsaw refers to this as a “toxicity” ranking.

“At its core, Perspective is a tool that simply takes a comment and returns back this score from 0 to 100 based on how similar it is to things that other people have said that are toxic,” explained product manager CJ Adams.

Jigsaw doesn’t become the arbiter of what commenters can and can’t say in a publisher’s comment section, though. Perspective is only a tool that publishers use as they see fit. For example, they can give their readers the ability to filter comments based on their toxicity level, so they’ll only see non-toxic posts. Or the publisher could provide a kind of feedback mechanism that tells you if your comments are toxic.
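A publisher-side filter of the kind described might be sketched like this. Here `score_toxicity` is a hypothetical stand-in for the real Perspective API call, which requires an API key and uses machine learning rather than a word list; the threshold and comments are invented.

```python
# Hypothetical stand-in for a 0-100 toxicity score like Perspective's.
def score_toxicity(comment: str) -> int:
    toxic_words = {"stupid", "idiot", "hate"}
    hits = sum(word in comment.lower() for word in toxic_words)
    return min(100, hits * 50)

def filter_comments(comments, threshold=70):
    # Publishers pick the threshold; readers could tune it themselves
    # to see only comments below their chosen toxicity level.
    return [c for c in comments if score_toxicity(c) < threshold]

print(filter_comments(["Great reporting!", "You're a stupid idiot"]))
```

The key design point is that the score, not the filtering decision, is what the API returns: each publisher decides how to act on it.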

The tool won’t stop you from submitting toxic comments, but it will provide you with the nudge to rethink what you’re writing.

Perspective isn’t just a bad word filter, though. Google’s machine learning actually gives the tool the ability to understand context. So it will eventually be able to tell the difference between telling someone a vacuum cleaner can really suck and that they suck at life.

Perspective still makes mistakes, as I witnessed during a brief demo. But the more comments and information it’s fed, the more it can learn about how to better understand the nuances of human communication.

Jigsaw’s global efforts

In its little over a year of existence, Jigsaw has implemented a series of projects designed to improve the lives of internet users around the world. Project Shield, for example, is a free service that protects news sites from distributed denial of service (DDoS) attacks. Redirect Method uses Adwords targeting tools to help refute ISIS’ online recruitment messages, while Montage helps researchers sort through thousands of hours of YouTube videos to find evidence of potential war crimes.

“We wake up and come to work everyday to try to find ways to use technology to make people around the world safer,” Jigsaw President Jared Cohen said. “We are at this nexus between international security and business.”

Cohen said Jigsaw’s engineers travel around the world to meet with internet users vulnerable to harassment and other online-based rights abuses, such as individuals promoting free speech or opposing authoritarian regimes, to understand their unique challenges. And one of the biggest problems, Cohen explained, has been online harassment.

Trolls aren’t always just cruel

Dealing with trolls is par for the course in the US. But in other countries, harassment in comment sections and forums can have political implications.

“In lots of parts of the world where we spend time [harassment] takes on a political motivation, sectarian motivation, ethnic motivation and it’s all sort kind of heightened and exacerbated,” Cohen explained.

But with Perspective, Jigsaw can start to cut down on those forms of harassing comments, and bring more people into online conversations.

“Our goal is to get as many people to rejoin conversations as possible and also to get people who everyday are sort of entering the gauntlet of toxicity to have an opportunity to see that environment improve,” said Cohen.

The path to a better internet?

Jigsaw is already working with The New York Times and Wikipedia to improve their commenting systems. At The New York Times, the Perspective API is being used to let The Gray Lady enable more commenting sections on its articles.

Prior to using Perspective, The Times relied on employees to manually read and filter comments from the paper’s online articles. As a result, just 10% of stories could have comments activated. The Times is using Perspective to create an open source tool that will help reviewers run through comments more quickly and open up a larger number of stories to comments.

Wikipedia, meanwhile, has been using Perspective to detect personal attacks on its volunteer editors, something Jigsaw and the online encyclopedia recently published a paper on.

With the release of Perspective, publishers and developers around the world can take advantage of Google technologies to improve their users’ experiences. And the conversation filtering won’t just stop hateful comments. Cohen said the company is also working to provide publishers and their readers with the ability to filter out comments that are off-topic or generally don’t contribute to conversations.

If Perspective takes off, and a number of publications end up using the technology, the internet could one day have far fewer trolls lurking in its midst.

Source: https://www.yahoo.com/tech/how-google-is-fighting-the-war-on-internet-trolls-123048658.html

Categorized in How to

"Dad, what happens when you die?" "I don't know, son. Nobody knows for sure." "Why don't you ask Google?"

Of course, Google isn't clever enough to tell us whether there is life after death, but the word "google" does crop up in conversation more often than either "clever" or "death", according to researchers at the UK's University of Lancaster.

It took just two decades for Google to reach this cultural ubiquity, from its humble beginnings as a student project at Stanford University in California.

It is hard to remember just how bad search technology was before Google. In 1998, for example, if you typed "cars" into Lycos - then a leading search engine - you would get a results page filled with porn websites.

Why? Owners of porn websites inserted many mentions of popular search terms such as "cars" in tiny text or in white on a white background.

The Lycos algorithm saw many mentions of "cars", and concluded the page would be interesting to someone searching for "cars". In the Google era, this seems almost laughably simplistic.

First, download the internet

But Google's founders Larry Page and Sergey Brin were not, initially, interested in designing a better way to search.

Their Stanford project had a more scholarly motivation.

In academia, how often a published paper is cited is a measure of its credibility, and if it is cited by papers that themselves are cited many times, that bestows even more credibility.

Mr Page and Mr Brin realised that if they could find a way to analyse all the links on the nascent world wide web, they could rank the credibility of each web page in any given subject.
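That link-analysis idea became the PageRank algorithm, and a toy version of the iteration looks like this. The three-page graph is invented; 0.85 is the damping factor used in the original PageRank paper.

```python
# Each page's score flows to the pages it links to, so a page is
# credible if credible pages link to it.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
d = 0.85  # damping factor
rank = {page: 1 / len(links) for page in links}
for _ in range(50):
    rank = {
        page: (1 - d) / len(links)
        + d * sum(rank[q] / len(links[q]) for q in links if page in links[q])
        for page in links
    }
print(max(rank, key=rank.get))  # C: the only page linked by two others
```

A porn site stuffed with hidden "cars" text gets no inbound links from genuine car pages, so under this scheme its score stays low no matter how many times the word appears.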

To do this, they first had to download the entire internet.

This caused some consternation. It gobbled up nearly half of Stanford's bandwidth. Irate webmasters showered the university with complaints that Google's crawler was overloading their servers.

[Image: Sergey Brin (L) and Larry Page (R) were trying to map the credibility of academic papers when they devised Google. Copyright: Getty Images]

But as Mr Page and Mr Brin refined their algorithm, it became clear they had discovered a vastly better way to search the web.

Porn websites with tiny text saying "cars cars cars" don't get many links from other websites that discuss cars. If you searched Google for "cars", its analysis would be likely to yield results about… cars.
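The link-analysis insight can be sketched as a tiny power-iteration PageRank: rank flows along links, so a page linked to by well-linked pages rises to the top (a simplified sketch of the published algorithm; the page names and link graph are made up):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute rank along outgoing links (simplified PageRank)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling page passes no rank on
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# A page cited by well-cited pages outranks a keyword-stuffed orphan.
links = {
    "car_review": ["car_forum"],
    "car_forum": ["car_review"],
    "car_news": ["car_review"],
    "spam_cars": [],  # nobody links to it; stuffing keywords doesn't help
}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # car_review
```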

Mr Page and Mr Brin quickly attracted investors, and Google went from student project to private company. It is now among the world's biggest, bringing in profits in the tens of billions of dollars.

But for the first few years, Mr Page and Mr Brin burned through money without knowing how or if they would make it back. They were not alone.

During the dotcom boom, shares in loss-making internet companies traded at absurd prices, in anticipation that they would eventually figure out viable business models.

Speed and transparency

Google found its model in 2001: pay-per-click advertising. Advertisers pay Google when someone clicks through to their website, having searched for specified terms. Google displays the highest-bidders' ads alongside its "organic" search results.

From an advertiser's perspective, the appeal is clear: you pay only when you reach people who have demonstrated an interest in your offering.

It is much more efficient than paying to advertise in a newspaper.

[Image: Newspapers have seen a significant decline in display advertising. Copyright: Thinkstock]

Even if its readership matches your target demographic, inevitably most people who see your newspaper advert won't be interested in what you are selling.

No wonder newspaper advertising revenue has fallen off a cliff.

The media's scramble for new business models is one obvious economic impact of Google search.

But the invention of functional search technology has created value in many ways. A few years ago, McKinsey tried to list the most important.

One is timesaving. Studies suggest that googling is about three times as quick as finding information in a library, even discounting the time spent getting there.

Likewise, finding a business online is about three times faster than using a printed directory such as the Yellow Pages.

[Image: Traditional directories such as the Yellow Pages have struggled to compete with online search tools. Copyright: Getty Images]

McKinsey put the productivity gains of this into the hundreds of billions.

Another benefit is price transparency - economist jargon for being able to stand in a shop, take out your phone, google a product you're thinking of buying, see if it's available more cheaply elsewhere, and then use that knowledge to haggle. That's annoying for the shop, but helpful for the customer.

Then there are "long tail" effects. In physical shops, it makes no sense to display aisle after aisle of obscure products that will be bought only rarely - they focus on a limited range of bestsellers instead.

But a decent search facility makes it easy to find a needle in the product haystack, and that has enabled the rise of online shops offering more variety.

Customers with specific desires are more likely to find exactly what they want, rather than settling for the nearest thing available in the local supermarket. And entrepreneurs can launch niche products, more confident they will find a market.

This all sounds like excellent news for consumers and businesses.

Natural monopoly?

But there is a problem.

Google dominates the search market, handling close to 90% of searches worldwide. Many businesses rely on ranking highly in its organic search results.

And Google constantly tweaks the algorithm that determines those rankings.

Google gives general advice about how to do well, but it is not transparent about how it ranks results - not least because that would give away the information necessary to game the system. We would be back to searching for cars and getting porn.

[Image: Google explains how its search works in principle but guards the details of its all-important algorithms. Copyright: Google]

You don't have to look far online (thanks, Google) to find business owners and search strategy consultants gnashing their teeth over the company's power to make or break them.

If Google thinks you are employing tactics it considers unacceptable, it will downgrade you.

One blogger complains that Google is "judge, jury and executioner".

"You get penalised on suspicion of breaking the rules, [and] you don't even know what the rules are," they say.

Trying to figure out how to please Google's algorithm is rather like trying to appease an omnipotent, capricious and ultimately unknowable god.

You may say as long as Google's top results are useful to searchers, it's tough luck on those who rank lower - and if those results stop being useful, then some other pair of students at Stanford will spot the gap in the market and dream up a better way. Right?

Maybe - or maybe not. Search was a competitive business in the late 1990s. But now, it may be a natural monopoly - in other words, an industry that is extremely hard for a second entrant to succeed in.

The reason? Among the best ways to improve the usefulness of search results is to analyse which links were ultimately clicked by people who previously performed the same search, as well as what the user has searched for before.

Google has far more of that data than anyone else. That suggests the company may continue to shape our access to knowledge for generations to come.
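The feedback loop described here can be sketched as re-ranking results by historical click-through rate for the same query; the advantage compounds because only the incumbent holds the click logs (a hypothetical illustration, with made-up URLs and counts):

```python
def rerank_by_clicks(results, click_log, query):
    """Boost results that past users clicked for the same query."""
    def ctr(url):
        shown, clicked = click_log.get((query, url), (0, 0))
        return clicked / shown if shown else 0.0
    return sorted(results, key=ctr, reverse=True)

# (query, url) -> (times shown, times clicked), from historical logs
click_log = {
    ("cars", "reviews.example"): (1000, 400),
    ("cars", "listings.example"): (1000, 150),
}
results = ["listings.example", "reviews.example"]
print(rerank_by_clicks(results, click_log, "cars"))
# ['reviews.example', 'listings.example']
```

A new entrant with no click history has every `ctr` at zero and gains nothing from this signal.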

Author: Tim Harford

Source: BBC.com


One of the men accused of running the Hamilton Ponzi scheme is no stranger to criminal probes.

In January, the U.S. Securities and Exchange Commission charged Joseph Meli and Matthew Harriton with perpetrating a $97 million Ponzi scheme involving tickets for Hamilton and the planned Broadway run of Harry Potter and the Cursed Child. Over 138 individuals, including billionaires Paul Tudor Jones and Michael Dell, invested in the suspected scam.

According to the government, the defendants claimed to have an agreement with Jeffrey Seller, the producer of Hamilton, to procure 35,000 tickets to the Tony Award-winning musical. Investor funds were sought in order to purchase the block of tickets, which the two men said would be resold at a profit. The backers were promised the return of their investments within eight months, as well as a 10% annualized return and 50% of residual profits.

In addition, federal authorities allege, the defendants purported to have a similar agreement to buy 250,000 tickets to the planned Broadway production of Harry Potter and the Cursed Child. Cash was raised in order to purchase the block of tickets for $62.5 million, and investors were promised the return of their investments and a pro rata share of certain profits.

However, federal authorities insist that the defendants never had a deal with either show, and no investor money was ever used to purchase blocks of tickets. Instead, less than 14% of the funds were used to pay entities engaged in the ticket sales or live entertainment business, and over $74 million was diverted “to perpetuate a Ponzi scheme and to enrich themselves and certain family members and others.”

The complaint indicates that the entrusted funds were spent on expensive jewelry, private school tuition, summer camps, automobiles, private club memberships, travel expenses and casino bills. One of the men also bought a $3 million house in East Hampton using cash from the scheme.

Paul Ryan, a former government lawyer, told Bloomberg that “the idea that there were blocks of Hamilton tickets available for purchase should have been a giveaway.” No one could get their hands on a large set of tickets.

But exercising simple due diligence should have revealed another potential red flag. One of the defendants, Matthew Harriton, was reported to be a subject in a large white-collar criminal investigation two decades ago.

His father, Richard Harriton, was banned from the securities sector back in 2000 for helping a boiler-room business stay afloat while evading its net capital requirements. He served as the president of Bear Stearns' clearing subsidiary firm, and the government found that, “[t]o protect [the subsidiary] from having to absorb large losses, [the subsidiary], at Harriton's direction, charged unauthorized trades to [A.R.] Baron customers, liquidated property in customer accounts to pay for unauthorized trades, refused to return customer property that had been liquidated to pay for unauthorized trades and disregarded customer instructions.” But, like most defendants, Richard Harriton did not admit or deny the findings in his settlement with the U.S. Securities and Exchange Commission.

One of the other brokerage firms which cleared its trades through Richard Harriton at Bear Stearns was Sterling Foster, which defrauded thousands of customers, and inspired the popular crime film Boiler Room. Rooney Pace, an old friend of Richard Harriton who was banned from the securities business, secretly controlled the firm and crafted illegal arrangements that allowed insiders to sell their restricted shares in small companies when the firms went public.

The government also alleged that Sterling Foster engaged in rampant stock manipulation, and The New York Times reported that “Mr. Harriton's son Matthew was closely involved in three of the five companies whose shares, prosecutors say, were manipulated by Mr. Pace and his colleagues.” One of the firms, where Matthew Harriton served as the Chief Financial Officer, for instance, raised $5 million in its initial public offering before its stock price plunged from $13.25 to $0.03.

The Manhattan District Attorney’s Office launched an investigation into Matthew Harriton. “One question has been whether Mr. Pace and some associates granted business favors to the younger Mr. Harriton as part of an effort to get Bear Stearns, through the elder Mr. Harriton, to clear trades for Sterling Foster,” observed The Wall Street Journal.

Nothing ever came of the investigation, and Matthew Harriton was never charged with a crime.

Yet, some of the most sophisticated investors on Wall Street should have taken a moment to peek into his past. Reports of the probe might have made them more skeptical of him and his investment offer that apparently was too good to be true.

Author: Marc Hershberg

Source: https://www.forbes.com/sites/marchershberg/2017/03/13/internet-search-would-have-revealed-past-probe-into-alleged-broadway-scammer/2/#ac98c23691fd

