
[Source: This article was published in enca.com - Uploaded by the Association Member: Rene Meyer]

In this file photo illustration taken on July 10, 2019, the Google logo is seen on a computer in Washington, DC.

SAN FRANCISCO - Original reporting will be highlighted in Google’s search results, the company said as it announced changes to its algorithm.

The world’s largest search engine has come under increasing criticism from media outlets, mainly because of its algorithms - the sets of instructions followed by its computers - which newspapers have often blamed for plummeting online traffic and the industry’s decline.

Explaining some of the changes in a blog post, Google's vice president of news Richard Gingras said stories that were critically important and labor intensive -- requiring experienced investigative skills, for example -- would be promoted.

Articles that demonstrated “original, in-depth and investigative reporting” would be given the highest possible rating by reviewers, he wrote on Thursday.

These reviewers - roughly 10,000 people whose feedback contributes to Google’s algorithm - will also determine the publisher’s overall reputation for original reporting, promoting outlets that have been awarded Pulitzer Prizes, for example.

It remains to be seen how such changes will affect news outlets, especially smaller online sites and local newspapers, which have borne the brunt of the changing media landscape.

And as noted by the technology website TechCrunch, it is hard to define exactly what original reporting is: many online outlets build on ‘scoops’ or exclusives with their own original information, a complexity an algorithm may have a hard time picking through.

The Verge - another technology publication - wrote that the emphasis on originality could exacerbate an already frenetic online news cycle by making it lucrative to get breaking news online even faster and without proper verification.

The change comes as Google continues to face criticism for its impact on the news media.

Many publishers say the tech giant’s algorithms - which remain a source of mystery and frustration for anyone outside Google - reward clickbait and allow investigative and original stories to disappear online.

Categorized in Search Engine

[Source: This article was published in flipweb.org By Abhishek - Uploaded by the Association Member: Jay Harris]

One of the first questions anyone getting into SEO asks is how exactly Google ranks the websites you see in Google Search. Ranking a website means assigning it a position: the first URL you see in Google Search is ranked number 1, and so on. There are various factors involved in ranking websites in Google Search, and a site’s rank is not fixed once it has been decided - it can still move up. This raises the question of how Google determines which URL of a website should come first and which should be lower.

To answer this, Google’s John Mueller has addressed the question in a video explaining how Google picks a website’s URL for Search. John explains that there are site preference signals involved in determining which URL is shown. The most important signals are the preference of the site itself and the preference of the user accessing the site.

Here are the Site preference signals:

  • Link rel=canonical annotations
  • Redirects
  • Internal linking
  • URL in the sitemap file
  • HTTPS preference
  • Nicer looking URLs
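
As a rough illustration of how a few of these signals can be inspected for a single page, the sketch below fetches a URL, follows any redirects, and looks for a rel=canonical annotation and an HTTPS preference. It is a minimal example only: it assumes Python with the third-party requests library installed, and https://example.com/page is a placeholder URL, not a real endpoint.

    # Minimal sketch: inspect a few URL-preference signals for one page.
    # Assumes the third-party "requests" library; the URL is a placeholder.
    import re
    import requests

    def inspect_url_signals(url):
        response = requests.get(url, allow_redirects=True, timeout=10)

        # Redirect signal: the final URL after any 301/302 hops.
        final_url = response.url

        # rel=canonical annotation in the returned HTML, if present.
        canonical = None
        link_tag = re.search(r'<link[^>]*rel=["\']canonical["\'][^>]*>',
                             response.text, re.IGNORECASE)
        if link_tag:
            href = re.search(r'href=["\']([^"\']+)["\']', link_tag.group(0))
            canonical = href.group(1) if href else None

        return {
            "requested": url,
            "resolved_after_redirects": final_url,
            "rel_canonical": canonical,
            "served_over_https": final_url.startswith("https://"),
        }

    print(inspect_url_signals("https://example.com/page"))

Checking the sitemap entry and internal links for the same page would complete the picture; the general idea, consistent with the advice below, is that these signals should all point at the same preferred URL.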

One of the keys, as John Mueller has previously mentioned, is to remain consistent. While John did not explain exactly what he means by being consistent, it presumably means you should keep doing whatever you are already doing. One of the best examples of being consistent is posting on your website every day in order to rank higher in search results. If you are not consistent, your website’s ranking might be lost and you will have to start all over again. Apart from that, you also have to be consistent when it comes to performing SEO. If you stop, your website will suffer in the long run.

Categorized in Search Engine

[Source: This article was published in blogs.scientificamerican.com By Daniel M. Russell and Mario Callegaro - Uploaded by the Association Member: Anthony Frank]

Researchers who study how we use search engines share common mistakes, misperceptions and advice

In a cheery, sunshine-filled fourth-grade classroom in California, the teacher explained the assignment: write a short report about the history of the Belgian Congo at the end of the 19th century, when Belgium colonized this region of Africa. One of us (Russell) was there to help the students with their online research methods.

I watched in dismay as a young student slowly typed her query into a smartphone. This was not going to end well. She was trying to find out which city was the capital of the Belgian Congo during this time period. She reasonably searched [ capital Belgian Congo ] and in less than a second she discovered that the capital of the Democratic Republic of Congo is Kinshasa, a port town on the Congo River. She happily copied the answer into her worksheet.

But the student did not realize that the Democratic Republic of Congo is a completely different country than the Belgian Congo, which used to occupy the same area. The capital of that former country was Boma until 1926, when it was moved to Léopoldville (which was later renamed Kinshasa). Knowing which city was the capital during which time period is complicated in the Congo, so I was not terribly surprised by the girl’s mistake.

The deep problem here is that she blindly accepted the answer offered by the search engine as correct. She did not realize that there is a deeper history here.

We Google researchers know this is what many students do—they enter the first query that pops into their heads and run with the answer. Double checking and going deeper are skills that come only with a great deal of practice—and perhaps a bunch of answers marked wrong on important exams. Students often do not have a great deal of background knowledge to flag a result as potentially incorrect, so they are especially susceptible to misguided search results like this.

In fact, a 2016 report by Stanford University education researchers showed that most students are woefully unprepared to assess content they find on the web. For instance, the scientists found that 80 percent of students at U.S. universities are not able to determine if a given web site contains  credible information. And it is not just students; many adults share these difficulties.

If she had clicked through to the linked page, the girl probably would have started reading about the history of the Belgian Congo, and found out that it has had a few hundred years of wars, corruption, changes in rulers and shifts in governance. The name of the country changed at least six times in a century, but she never realized that because she only read the answer presented on the search engine results page.

Asking a question of a search engine is something people do several billion times each day. It is the way we find the phone number of the local pharmacy, check on sports scores, read the latest scholarly papers, look for news articles, find pieces of code, and shop. And although searchers look for true answers to their questions, the search engine returns results that are attuned to the query, rather than some external sense of what is true or not. So a search for proof of wrongdoing by a political candidate can return sites that purport to have this information, whether or not the sites or the information are credible. You really do get what you search for.

In many ways, search engines make our metacognitive skills come to the foreground. It is easy to do a search that plays into your confirmation bias—your tendency to think new information supports views you already hold. So good searchers actively seek out information that may conflict with their preconceived notions. They look for secondary sources of support, doing a second or third query to gain other perspectives on their topic. They are constantly aware of what their cognitive biases are, and greet whatever responses they receive from a search engine with healthy skepticism.

For the vast majority of us, most searches are successful. Search engines are powerful tools that can be incredibly helpful, but they also require a bit of understanding to find the information you are actually seeking. Small changes in how you search can go a long way toward finding better answers.

The Limits of Search

It is not surprising or uncommon that a short query may not accurately reflect what a searcher really wants to know. What is actually remarkable is how often a simple, brief query like [ nets ] or [ giants ] will give the right results. After all, both of those words have multiple meanings, and a search engine might conclude that searchers were looking for information on tools to catch butterflies, in the first case, or larger-than-life people in the second. Yet most users who type those words are seeking basketball- and football-related sites, and the first search results for those terms provide just that. Even the difference between a query like [the who] versus [a who] is striking. The first set of results are about a classic English rock band, whereas the second query returns references to a popular Dr. Seuss book.

But search engines sometimes seem to give the illusion that you can ask anything about anything and get the right answer. Just like the student in that example, however, most searchers overestimate the accuracy of search engines and their own searching skills. In fact, when Americans were asked to self-rate their searching ability by the Pew Research Center in 2012, 56 percent rated themselves as very confident in their ability to use a search engine to answer a question.

Not surprisingly, the highest confidence scores were for searchers with at least some college education (64 percent were “very confident”—by contrast, 45 percent of those who did not have a college degree described themselves that way). Age affects this judgment as well, with 64 percent of those under 50 describing themselves as “very confident,” as opposed to only 40 percent of those older than 50. When talking about how successful they are in their searches, 29 percent reported that they can always find what they are looking for, and 62 percent said they are able to find an answer to their questions most of the time. In surveys, most people tell us that everything they want is online, and conversely, that if they cannot find something via a quick search, then it must not exist, it might be out of date, or it might not be of much value.

These are the most recent published results, but we have seen in surveys done at Google in 2018 that these insights from Pew still hold. What was true in 2012 is still true now: people have great confidence in their ability to search. The only significant change is in their success rates, which have crept up: 35 percent now say they can "always find" what they're looking for, while 73 percent say they can find what they seek "most of the time." This increase is largely due to improvements in the search engines, which improve their data coverage and algorithms every year.

What Good Searchers Do

As long as information needs are easy, simple searches work reasonably well. Most people actually do less than one search per day, and most of those searches are short and commonplace. The average query length on Google during 2016 was 2.3 words. Queries are often brief descriptions like: [ quiche recipe ] or [ calories in chocolate ] or [ parking Tulsa ].

And somewhat surprisingly, most searches have been done before. On an average day, fewer than 12 percent of all searches are completely novel—that is, most queries have already been entered by another searcher in the past day. By design, search engines have learned to associate short queries with the targets of those searches by tracking which pages are visited as a result of the query, making the results returned both faster and more accurate than they otherwise would be.
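
As a toy illustration of that idea (not Google's actual system, and with an invented click log), the sketch below aggregates query-to-click pairs so that each query becomes associated with the page past searchers visited most often.

    # Toy illustration only: associate each query with its most-clicked page.
    # The click log is invented for the example; real systems are far more complex.
    from collections import Counter, defaultdict

    click_log = [
        ("quiche recipe", "https://example.com/classic-quiche"),
        ("quiche recipe", "https://example.com/classic-quiche"),
        ("quiche recipe", "https://example.org/crustless-quiche"),
        ("parking tulsa", "https://example.com/tulsa-parking-map"),
    ]

    clicks_per_query = defaultdict(Counter)
    for query, clicked_url in click_log:
        clicks_per_query[query][clicked_url] += 1

    # For each query, report the page most often chosen by past searchers.
    for query, counts in clicks_per_query.items():
        top_url, top_clicks = counts.most_common(1)[0]
        print(f"{query!r} -> {top_url} ({top_clicks} clicks)")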

A large fraction of queries are searches for another website (called navigational queries, which make up as much as 25 percent of all queries), or for a short factual piece of information (called informational queries, which are around 40 percent of all queries). However, complex search tasks often need more than a single query to find a satisfactory answer. So how can you do better searches? 

First, you can modify your query by changing a term in your search phrase, generally to make it more precise or by adding additional terms to reduce the number of off-topic results. Very experienced searchers often open multiple browser tabs or windows to pursue different avenues of research, usually investigating slightly different variations of the original query in parallel.

You can see good searchers rapidly trying different search queries in a row, rather than just being satisfied with what they get with the first search. This is especially true for searches that involve very ambiguous terms—a query like [animal food] has many possible interpretations. Good searchers modify the query to get to what they need quickly, such as [pet food] or [animal nutrition], depending on the underlying goal.

Choosing the best way to phrase your query means adding terms that:

  • are central to the topic (avoid peripheral terms that are off-topic)
  • you know the definition of (do not guess at a term if you are not certain)
  • leave common terms together in order ( [ chow pet ] is very different than [ pet chow ])
  • keep the query fairly short (you usually do not need more than two to five terms)

You can make your query more precise by limiting the scope of a search with special operators. The most powerful operators are things such as double-quote marks (as in the query [ “exponential growth occurs when” ]), which find only documents containing that phrase in that specific order. Two other commonly used search operators are site: and filetype:. These let you search within only one web site (such as [ site:ScientificAmerican.com ]) or for a particular filetype, such as a PDF file (example: [ filetype:pdf coral bleaching ]).
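
For readers who like to script this, here is a minimal sketch of composing a query with the operators just described and turning it into a search URL. The operator syntax comes from the article above; the helper function and the google.com/search URL are illustrative assumptions, not an official API.

    # Minimal sketch: compose a query using the operators described above.
    # The helper function and the search URL are illustrative only.
    from urllib.parse import urlencode

    def build_query(phrase=None, site=None, filetype=None, extra_terms=()):
        parts = []
        if phrase:
            parts.append(f'"{phrase}"')           # exact-phrase match
        if site:
            parts.append(f"site:{site}")          # restrict to one web site
        if filetype:
            parts.append(f"filetype:{filetype}")  # restrict to a file type
        parts.extend(extra_terms)
        return " ".join(parts)

    query = build_query(site="ScientificAmerican.com",
                        filetype="pdf",
                        extra_terms=["coral", "bleaching"])
    print(query)  # site:ScientificAmerican.com filetype:pdf coral bleaching
    print("https://www.google.com/search?" + urlencode({"q": query}))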

Second, try to understand the range of possible search options. Recently, search engines added the capability of searching for images that are similar to a given photo that you upload. A searcher who knows this can find photos online that have features that resemble those in the original. By clicking through the similar images, a searcher can often find information about the object (or place) in the image. Searching for matches of my favorite fish photo can tell me not just what kind of fish it is, but also provide links to other fishing locations and ichthyological descriptions of this fish species.

Overall, expert searchers use all of the resources of the search engine and their browsers to search both deeply (by making query variations) and broadly (by having multiple tabs or windows open). Effective searchers also know how to limit a search to a particular website or to a particular kind of document, find a phrase (by using quote marks to delimit the phrase), and find text on a page (by using a text-find tool).

Third, learn some cool tricks. One is the find-text-on-page skill (that is, Command-F on Mac, Control-F on PC), which is unfamiliar to around 90 percent of the English-speaking, Internet-using population in the US. In our surveys of thousands of web users, the large majority have to do a slow (and errorful) visual scan for a string of text on a web site. Knowing how to use text-finding commands speeds up your overall search time by about 12 percent (and is a skill that transfers to almost every other computer application).

Fourth, use your critical-thinking skills.  In one case study, we found that searchers looking for the number of teachers in New York state would often do a query for [number of teachers New York ], and then take the first result as their answer—never realizing that they were reading about the teacher population of New York City, not New York State. In another study we asked searchers to find the maximum weight a particular model of baby stroller could hold. How big could that baby be?

The answers we got back varied from two pounds to 250 pounds. At both ends of the spectrum, the answers make no sense (few babies in strollers weigh less than five pounds or more than 60 pounds), but inexperienced searchers just assumed that whatever numbers they found correctly answered their search questions. They did not read the context of the results with much care.  

Search engines are amazingly powerful tools that have transformed the way we think of research, but they can hurt more than help when we lack the skills to use them appropriately and evaluate what they tell us. Skilled searchers know that the ranking of results from a search engine is not a statement about objective truth, but about the best matching of the search query, term frequency, and the connectedness of web pages. Whether or not those results answer the searchers’ questions is still up to them to determine.

Categorized in Internet Search

[Source: This article was published in ibtimes.co.uk By Anthony Cuthbertson - Uploaded by the Association Member: Robert Hensonw]

A search engine more powerful than Google has been developed by the US Defence Advanced Research Projects Agency (DARPA), capable of finding results within dark web networks such as Tor.

The Memex project was ostensibly developed for uncovering sex-trafficking rings; however, the platform can be used by law enforcement agencies to uncover all kinds of illegal activity taking place on the dark web, leading to concerns surrounding internet privacy.

Thousands of sites that feature on dark web browsers like Tor and I2P can be scraped and indexed by Memex, as well as the millions of web pages ignored by popular search engines like Google and Bing on the so-called Deep Web.

The difference between the dark web and the deep web

The dark web is a section of the internet that requires specialist software tools to access, such as the Tor browser. Originally designed to protect privacy, it is often associated with illicit activities.

The deep web is a section of the open internet that is not indexed by search engines like Google - typically internal databases and forums within websites. It comprises around 95% of the internet.

Websites operating on the dark web, such as the former Silk Road black marketplace, purport to offer anonymity to their users through a form of encryption known as Onion Routing.

While users' identities and IP addresses will still not be revealed through Memex results, the use of an automated process to analyse content could uncover patterns and relationships that could potentially be used by law enforcement agencies to track and trace dark web users.

"We're envisioning a new paradigm for search that would tailor content, search results, and interface tools to individual users and specific subject areas, and not the other way round," said DARPA program manager Chris White.

"By inventing better methods for interacting with and sharing information, we want to improve search for everybody and individualise access to information. Ease of use for non-programmers is essential."

Memex achieves this by addressing the one-size-fits-all approach taken by mainstream search engines, which list results based on consumer advertising and ranking.

[Image caption: Memex raises further concerns about internet surveillance]

'The most intense surveillance state the world has literally ever seen'

The search engine is initially being used by the US Department of Defence to fight human trafficking and DARPA has stated on its website that the project's objectives do not involve deanonymising the dark web.

The statement reads: "The program is specifically not interested in proposals for the following: attributing anonymous services, deanonymising or attributing identity to servers or IP addresses, or accessing information not intended to be publicly available."

Despite this, White has revealed that Memex has been used to improve estimates of the number of hidden services operating on the dark web.

"The best estimates there are, at any given time, between 30,000 and 40,000 hidden service Onion sites that have content on them that one could index," White told 60 Minutes earlier this month.

Internet freedom advocates have raised concerns based on the fact that DARPA has revealed very few details about how Memex actually works, which partners are involved and what projects beyond combating human trafficking are underway.

"What does it tell about a person, a group of people, or a program, when they are secretive and operate in the shadows?" author Cassius Methyl said in a post to Anti Media. "Why would a body of people doing benevolent work have to do that?

"I think keeping up with projects underway by DARPA is of critical importance. This is where the most outrageous and powerful weapons of war are being developed.

"These technologies carry the potential for the most intense surveillance/ police state that the world has literally ever seen."

Categorized in Deep Web

[Source: This article was published in csoonline.com By Josh Fruhlinger - Uploaded by the Association Member: Eric Beaudoin]

Catch a glimpse of what flourishes in the shadows of the internet.

Back in the 1970s, "darknet" wasn't an ominous term: it simply referred to networks that were isolated from the mainstream of ARPANET for security purposes. But as ARPANET became the internet and then swallowed up nearly all the other computer networks out there, the word came to identify areas that were connected to the internet but not quite of it, difficult to find if you didn't have a map.

The so-called dark web, a catch-all phrase covering the parts of the internet not indexed by search engines, is the stuff of grim legend. But like most legends, the reality is a bit more pedestrian. That's not to say that scary stuff isn't available on dark web websites, but some of the whispered horror stories you might've heard don't make up the bulk of the transactions there.

Here are ten things you might not know about the dark web.

New dark web sites pop up every day...

A 2015 white paper from threat intelligence firm Recorded Future examines the linkages between the Web you know and the darknet. The paths usually begin on sites like Pastebin, originally intended as an easy place to upload long code samples or other text but now often where links to the anonymous Tor network are stashed for a few days or hours for interested parties. 

While searching for dark web sites isn't as easy as using Google—the point is to be somewhat secretive, after all—there are ways to find out what's there.  The screenshot below was provided by Radware security researcher Daniel Smith, and he says it's the product of "automatic scripts that go out there and find new URLs, new onions, every day, and then list them. It's kind of like Geocities, but 2018"—a vibe that's helped along by pages with names like "My Deepweb Site," which you can see on the screenshot.


..and many are perfectly innocent

Matt Wilson, chief information security advisor at BTB Security, says that "there is a tame/lame side to the dark web that would probably surprise most people. You can exchange some cooking recipes—with video!—send email, or read a book. People use the dark web for these benign things for a variety of reasons: a sense of community, avoiding surveillance or tracking of internet habits, or just to do something in a different way."

It's worth remembering that what flourishes on the darknet is material that's been banned elsewhere online. For example, in 2015, in the wake of the Chinese government cracking down on VPN connections through the so-called "great firewall," Chinese-language discussions started popping up on the darknet — mostly full of people who just wanted to talk to each other in peace.

Radware's Smith points out that there are a variety of news outlets on the dark web, ranging from the news website from the hacking group Anonymous to the New York Times, shown in the screenshot here, all catering to people in countries that censor the open internet.


Some spaces are by invitation only

Of course, not everything is so innocent, or you wouldn't be bothering to read this article. Still, "you can't just fire up your Tor browser and request 10,000 credit card records, or passwords to your neighbor’s webcam," says Mukul Kumar, CISO and VP of Cyber Practice at Cavirin. "Most of the verified 'sensitive' data is only available to those that have been vetted or invited to certain groups."

How do you earn an invite into these kinds of dark web sites? "They're going to want to see history of crime," says Radware's Smith. "Basically it's like a mafia trust test. They want you to prove that you're not a researcher and you're not law enforcement. And a lot of those tests are going to be something that a researcher or law enforcement legally can't do."

There is bad stuff, and crackdowns mean it's harder to trust

As recently as last year, many dark web marketplaces for drugs and hacking services featured corporate-level customer service and customer reviews, making navigating simpler and safer for newbies. But now that law enforcement has begun to crack down on such sites, the experience is more chaotic and more dangerous.

"The whole idea of this darknet marketplace, where you have a peer review, where people are able to review drugs that they're buying from vendors and get up on a forum and say, 'Yes, this is real' or 'No, this actually hurt me'—that's been curtailed now that dark marketplaces have been taken offline," says Radware's Smith. "You're seeing third-party vendors open up their own shops, which are almost impossible to vet yourself personally. There's not going to be any reviews, there's not a lot of escrow services. And hence, by these takedowns, they've actually opened up a market for more scams to pop up."

Reviews can be wrong, products sold under false pretenses—and stakes are high

There are still sites where drugs are reviewed, says Radware's Smith, but keep in mind that they have to be taken with a huge grain of salt. A reviewer might get a high from something they bought online, but not understand what the drug was that provided it.

One reason these kinds of mistakes are made? Many dark web drug manufacturers will also purchase pill presses and dies, which retail for only a few hundred dollars and can create dangerous lookalike drugs. "One of the more recent scares that I could cite would be Red Devil Xanax," he said. "These were sold as some super Xanax bars, when in reality, they were nothing but horrible drugs designed to hurt you."

The dark web provides wholesale goods for enterprising local retailers...

Smith says that some traditional drug cartels make use of the dark web networks for distribution—"it takes away the middleman and allows the cartels to send from their own warehouses and distribute it if they want to"—but small-time operators can also provide the personal touch at the local level after buying drug chemicals wholesale from China or elsewhere from sites like the one in the screenshot here. "You know how there are lots of local IPA microbreweries?" he says. "We also have a lot of local micro-laboratories. In every city, there's probably at least one kid that's gotten smart and knows how to order drugs on the darknet, and make a small amount of drugs to sell to his local network."


...who make extensive use of the gig economy

Smith describes how the darknet intersects with the unregulated and distributed world of the gig economy to help distribute contraband. "Say I want to have something purchased from the darknet shipped to me," he says. "I'm not going to expose my real address, right? I would have something like that shipped to an AirBnB—an address that can be thrown away, a burner. The box shows up the day they rent it, then they put the product in an Uber and send it to another location. It becomes very difficult for law enforcement to track, especially if you're going across multiple counties."

Not everything is for sale on the dark web

We've spent a lot of time talking about drugs here for a reason. Smith calls narcotics "the physical cornerstone" of the dark web; "cybercrime—selling exploits and vulnerabilities, web application attacks—that's the digital cornerstone. Basically, I'd say a majority of the darknet is actually just drugs and kids talking about little crimes on forums."

Some of the scarier sounding stuff you hear about being for sale often turns out to be largely rumors. Take firearms, for instance: as Smith puts it, "it would be easier for a criminal to purchase a gun in real life versus the internet. Going to the darknet is adding an extra step that isn't necessary in the process. When you're dealing with real criminals, they're going to know someone that's selling a gun."

Specific niches are in

Still, there are some very specific darknet niche markets out there, even if they don't have the same footprint that narcotics does. One that Smith drew my attention to was the world of skimmers, devices that fit into the slots of legitimate credit and ATM card readers and grab your bank account data.

And, providing another example of how the darknet marries physical objects for sale with data for sale, the same sites also provide data sheets and manuals for various popular ATM models. Among the gems available in these sheets are the default passwords for many popular internet-connected models; we won't spill the beans here, but for many it's the same digit repeated five times.


It's still mimicking the corporate world

Despite the crackdown on larger marketplaces, many dark web sites are still doing their best to simulate the look and feel of more corporate sites.


The occasional swear word aside, for instance, the onion site for the Elude anonymous email service shown in this screenshot looks like it could come from any above-board company.

One odd feature of corporate software that has migrated to the dark web: the omnipresent software EULA. "A lot of times there's malware I'm looking at that offers terms of service that try to prevent researchers from buying it," he says. "And often I have to ask myself, 'Is this person really going to come out of the dark and try to sue someone for doing this?'"

And you can use the dark web to buy more dark web

And, to prove that any online service can, eventually, be used to bootstrap itself, we have this final screenshot from our tour: a dark web site that will sell you everything you need to start your own dark web site.

 

Think of everything you can do there—until the next crackdown comes along.

Categorized in Internet Privacy

[Source: This article was Published in theverge.com BY James Vincent - Uploaded by the Association Member: Jennifer Levin] 

A ‘tsunami’ of cheap AI content could cause problems for search engines

Over the past year, AI systems have made huge strides in their ability to generate convincing text, churning out everything from song lyrics to short stories. Experts have warned that these tools could be used to spread political disinformation, but there’s another target that’s equally plausible and potentially more lucrative: gaming Google.

Instead of being used to create fake news, AI could churn out infinite blogs, websites, and marketing spam. The content would be cheap to produce and stuffed full of relevant keywords. But like most AI-generated text, it would only have surface meaning, with little correspondence to the real world. It would be the information equivalent of empty calories, but still potentially difficult for a search engine to distinguish from the real thing.

Just take a look at this blog post answering the question: “What Photo Filters are Best for Instagram Marketing?” At first glance, it seems legitimate, with a bland introduction followed by quotes from various marketing types. But read a little more closely and you realize it references magazines, people, and — crucially — Instagram filters that don’t exist:

You might not think that a mumford brush would be a good filter for an Insta story. Not so, said Amy Freeborn, the director of communications at National Recording Technician magazine. Freeborn’s picks include Finder (a blue stripe that makes her account look like an older block of pixels), Plus and Cartwheel (which she says makes your picture look like a topographical map of a town.

The rest of the site is full of similar posts, covering topics like “How to Write Clickbait Headlines” and “Why is Content Strategy Important?” But every post is AI-generated, right down to the authors’ profile pictures. It’s all the creation of content marketing agency Fractl, who says it’s a demonstration of the “massive implications” AI text generation has for the business of search engine optimization, or SEO.

“Because [AI systems] enable content creation at essentially unlimited scale, and content that humans and search engines alike will have difficulty discerning [...] we feel it is an incredibly important topic with far too little discussion currently,” Fractl partner Kristin Tynski tells The Verge.

To write the blog posts, Fractl used an open source tool named Grover, made by the Allen Institute for Artificial Intelligence. Tynski says the company is not using AI to generate posts for clients, but that this doesn’t mean others won’t. “I think we will see what we have always seen,” she says. “Blackhats will use subversive tactics to gain a competitive advantage.”

The history of SEO certainly supports this prediction. It’s always been a cat and mouse game, with unscrupulous players trying whatever methods they can to attract as many eyeballs as possible while gatekeepers like Google sort the wheat from the chaff.

As Tynski explains in a blog post of her own, past examples of this dynamic include the “article spinning” trend, which started 10 to 15 years ago. Article spinners use automated tools to rewrite existing content, finding and replacing words so that the reconstituted material looks original. Google and other search engines responded with new filters and metrics to weed out these mad-lib blogs, but it was hardly an overnight fix.

AI text generation will make the article spinning “look like child’s play,” writes Tynski, allowing for “a massive tsunami of computer-generated content across every niche imaginable.”

Mike Blumenthal, an SEO consultant and expert, says these tools will certainly attract spammers, especially considering their ability to generate text on a massive scale. “The problem that AI-written content presents, at least for web search, is that it can potentially drive the cost of this content production way down,” Blumenthal tells The Verge.

And if the spammers’ aim is simply to generate traffic, then fake news articles could be perfect for this, too. Although we often worry about the political motivations of fake news merchants, in most interviews the people who create and share this content claim they do it for the ad revenue. That doesn’t stop it being politically damaging.

The key question, then, is: can we reliably detect AI-generated text? Rowan Zellers of the Allen Institute for AI says the answer is a firm “yes,” at least for now. Zellers and his colleagues were responsible for creating Grover, the tool Fractl used for its fake blog posts, and were able to also engineer a system that can spot Grover-generated text with 92 percent accuracy.

“We’re a pretty long way away from AI being able to generate whole news articles that are undetectable,” Zellers tells The Verge. “So right now, in my mind, is the perfect opportunity for researchers to study this problem, because it’s not totally dangerous.”

Spotting fake AI text isn’t too hard, says Zellers, because it has a number of linguistic and grammatical tells. He gives the example of AI’s tendency to re-use certain phrases and nouns. “They repeat things ... because it’s safer to do that rather than inventing a new entity,” says Zellers. It’s like a child learning to speak, trotting out the same words and phrases over and over without considering the diminishing returns.
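
As a very rough illustration of that "repeated phrases" tell, here is a toy heuristic that counts how often each three-word phrase recurs in a passage. It is only a sketch of the general idea, not how Grover's 92-percent-accurate discriminator actually works; heavy repetition is at best one weak signal worth a closer look.

    # Toy heuristic only: flag heavy repetition of three-word phrases.
    # This is not Grover's detector; it just illustrates the "repeated
    # phrases" tell mentioned above.
    import re
    from collections import Counter

    def repeated_trigrams(text, min_count=2):
        words = re.findall(r"[a-z']+", text.lower())
        trigrams = (" ".join(t) for t in zip(words, words[1:], words[2:]))
        counts = Counter(trigrams)
        return {phrase: n for phrase, n in counts.items() if n >= min_count}

    sample = ("The filter makes your picture look great. The filter makes "
              "your picture look like a map. The filter makes your day.")
    print(repeated_trigrams(sample))  # e.g. {'the filter makes': 3, ...}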

However, as we’ve seen with visual deep fakes, just because we can build technology that spots this content, that doesn’t mean it’s not a danger. Integrating detectors into the infrastructure of the internet is a huge task, and the scale of the online world means that even detectors with high accuracy levels will make a sizable number of mistakes.

Google did not respond to queries on this topic, including the question of whether or not it’s working on systems that can spot AI-generated text. (It’s a good bet that it is, though, considering Google engineers are at the cutting-edge of this field.) Instead, the company sent a boilerplate reply saying that it’s been fighting spam for decades, and always keeps up with the latest tactics.

SEO expert Blumenthal agrees, and says Google has long proved it can react to “a changing technical landscape.” But, he also says a shift in how we find information online might also make AI spam less of a problem.

More and more web searches are made via proxies like Siri and Alexa, says Blumenthal, meaning gatekeepers like Google only have to generate “one (or two or three) great answers” rather than dozens of relevant links. Of course, this emphasis on the “one true answer” has its own problems, but it certainly minimizes the risk from high-volume spam.

The end-game of all this could be even more interesting though. AI-text generation is advancing in quality extremely quickly, and experts in the field think it could lead to some incredible breakthroughs. After all, if we can create a program that can read and generate text with human-level accuracy, it could gorge itself on the internet and become the ultimate AI assistant.

“It may be the case that in the next few years this tech gets so amazingly good, that AI-generated content actually provides near-human or even human-level value,” says Tynski. In which case, she says, referencing an Xkcd comic, it would be “problem solved.” Because if you’ve created an AI that can generate factually-correct text that’s indistinguishable from content written by humans, why bother with the humans at all?

Categorized in Search Engine

 [Source: This article was Published in mirror.co.uk BY Sophie Curtis - Uploaded by the Association Member: Issac Avila]

Google now lets you automatically delete your location history after a fixed period of time

It probably comes as no surprise that Google keeps track of everywhere you go via the apps you use on your smartphone.

This information is used to give you more personalised experiences, like maps and recommendations based on places you've visited, real-time traffic updates about your commute, help to find your phone and more targeted ads.

But while these things can be useful, you may not feel comfortable with the idea of Google holding on to that information indefinitely.

In the past, if you chose to enable Location History, the only way to delete that data was to go into your app settings and remove it manually.

But Google recently introduced a new setting that allows you to automatically delete your location history after a fixed period of time.

There are currently only two options - automatically deleting your Location History after three months or after 18 months - but it beats leaving a trail of information that you might not want Google or others to see.

Here's how to automatically delete your Location History on Android and iOS:

  1. Open the Google Maps app
  2. In the top left, tap the Menu icon and select "Your timeline".
  3. In the top right, tap the More icon and select "Settings and privacy".
  4. Scroll to "Location settings".
  5. Tap "Automatically delete Location History".
  6. Follow the on-screen instructions.

If you'd prefer to turn off Location History altogether, you can do so in the "Location History" section of your Google Account.

 

You can also set time limits on how long Google can keep your Web & App Activity, which includes data about websites you visit and apps you use.

Google uses this data to give you faster searches, better recommendations and more personalised experiences in Maps, Search and other Google services.

Again, you have the option to automatically delete this data after three or 18 months.

  1. Open the Gmail app.
  2. In the top left, tap the Menu icon and select "Settings".
  3. Select your account and then tap "Manage your Google Account".
  4. At the top, tap Data & personalisation.
  5. Under "Activity controls", tap "Web & App Activity".
  6. Tap "Manage activity".
  7. At the top right, tap the More icon and then select "Keep activity for".
  8. Tap the option for how long you want to keep your activity and then tap "Next".
  9. Confirm to save your choice.

The new tools are part of Google's efforts to give users more control over their data.

The company has also introduced "incognito mode" in a number of its smartphone apps, which stops Google tracking your activity.

It is also putting pressure on web and app developers to be more transparent about their use of cookies so that users can make more informed choices about whether to accept them.

Categorized in Internet Privacy

[Source: This article was Published in exchangewire.com By Mathew Broughton - Uploaded by the Association Member: Eric Beaudoin]

Talk of Google, along with their domination of the digital ad ecosystem, would not be on the lips of those in ad tech were it not for their original product: the Google Search engine.

Despite negative press coverage and EU fines, some estimates suggest the behemoth continues to enjoy a market share of just under 90% in the UK search market. However, there have been rumblings of discontent from publishers, which populate the results pages, about how they have been treated by the California-based giant.

This anger, combined with concerns over GDPR and copyright law violations, has prompted the launch of new ‘disruptive’ search engines designed to address these concerns. But will these have any effect on Google’s stranglehold on the global search industry? ExchangeWire details the problems publishers are experiencing with Google along with some of the new players in the search market, what effect they have had thus far, and how advertisers could capitalize on privacy-focused competition in the search market.

Google vs publishers

Publishers have experienced margin squeezes for years, whilst Google’s sales have simultaneously skyrocketed, with parent company Alphabet’s revenue reaching USD$36.3bn (£28.7bn) in the first quarter of 2019 alone. Many content producers also feel dismay towards Google’s ‘enhanced search listings’, as these essentially scrape content from their sites and show it in their search results, eliminating the need for users to visit their site, and in turn their monetization opportunity.

Recent changes to the design of the search results page, at least on mobile devices, which are seemingly aimed at making the differences between ads and organic listings even more subtle (an effect which is particularly noticeable on local listings) will also prove perturbing for the publishers which do not use Google paid search listings.

DuckDuckGo: The quack grows louder

Perhaps the best-known disruptive search engine is DuckDuckGo, which markets itself on protecting user privacy whilst also refining results by excluding low-quality sources such as content mills. In an attempt to address privacy concerns, and in recognition of anti-competitive investigations, Google has added DuckDuckGo to Chrome as a default search engine option in over 60 markets, including the UK, US, Australia and South Africa. Further reflecting its increased presence in the search market, DuckDuckGo’s quack has become louder recently, adding momentum to recent calls to transform the toothless ‘Do Not Track’ option into a meaningful protection of user privacy, as originally intended.

Qwant: Local search engines fighting Google

Qwant is a France-based search engine which, similar to DuckDuckGo, preserves user privacy by not tracking their queries. Several similar locally-based engines have been rolled out across Europe, including Mojeek (UK) and Unbubble (Germany). Whilst they currently only occupy a small percentage (~6%) of the French search market, Qwant’s market share has grown consistently year-on-year since their launch in 2013, to the extent that they are now challenging established players such as Yahoo! in the country. In recognition of their desire to increase their growth across Europe, whilst continuing to operate in a privacy-focused manner, Qwant has recently partnered with Microsoft to leverage their various tech solutions. A further sign of their growing level of gravitas is the French government’s decision to eschew Chrome in favour of their engine.

Ahrefs: The 90/10 profit share model

A respected provider of performance-monitoring tools within search, Ahrefs is now working on directly competing with Google with their own engine, according to a series of tweets from founder & CEO Dmitry Gerasimenko. Whilst a commitment to privacy will please users, content creators will be more interested in the proposed profit-share model, whereby 90% of the prospective search revenue will be given to the publisher. Though there is every chance that this tweet-stage idea will never come to fruition, the Singapore-based firm already has impressive crawling capabilities which are easily transferable for indexing, so it is worth examining in the future.

Opportunity for advertisers

With the launch of Google privacy tools, along with stricter forms of intelligent tracking prevention (ITP) on the Safari and Firefox browsers, discussions have abounded within the advertising industry on whether budgets will be realigned away from display and video towards fully contextual methods such as keyword-based search. Stricter implementation of GDPR and the prospective launch of similar privacy legislation across the globe will further the argument that advertisers need to examine privacy-focused solutions.

Naturally, these factors will compromise advertisers who rely on third-party targeting methods and tracking user activity across the internet, meaning they need to identify ways of diversifying their offering. Though they have a comparatively tiny market share, disruptive search engines represent a potential opportunity for brands and advertisers to experiment with privacy-compliant search advertising.

Categorized in Search Engine

[Source: This article was Published in hannity.com By Hannity Staff - Uploaded by the Association Member: Logan Hochstetler]

Google and other American tech companies were thrust into the national spotlight in recent weeks, with critics claiming the platforms are intentionally censoring conservative voices, “shadow-banning” leading personalities, and impacting American elections in an unprecedented way.

In another explosive exposé, Project Veritas Founder James O’Keefe revealed senior Google officials vowing to prevent the “Trump Situation” from occurring again during the 2020 elections.

The controversy dates back much further. In the fall of 2018, The SEO Tribunal published an article detailing 63 “fascinating Google search statistics.”

The article shows the planet’s largest search engine handles more than 63,000 requests per second, owns more than 90% of the global market share, and generated $95 billion in ad sales during 2017.

1. Google receives over 63,000 searches per second on any given day.

(Source: SearchEngineLand)

That is the average number of searches Google handles, which translates into at least 2 trillion searches per year, 3.8 million searches per minute, 228 million searches per hour, and 5.6 billion searches per day. Pretty impressive, right?
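
For anyone who wants to sanity-check those figures, the arithmetic below starts from the 63,000-searches-per-second number; the small gaps against the rounded figures quoted above come from rounding.

    # Quick arithmetic check, starting from ~63,000 searches per second.
    per_second = 63_000
    per_minute = per_second * 60      # 3,780,000     (quoted as ~3.8 million)
    per_hour   = per_minute * 60      # 226,800,000   (quoted as ~228 million)
    per_day    = per_hour * 24        # 5,443,200,000 (quoted as ~5.6 billion)
    per_year   = per_day * 365        # ~1.99 trillion (at least 2 trillion)
    print(per_minute, per_hour, per_day, per_year)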

2. 15% of all searches have never been searched before on Google.

(Source: SearchEngineLand)

Out of trillions of searches every year, 15% of these queries have never been seen by Google before. Such queries mostly relate to day-to-day activities, news, and trends, as confirmed by Google search stats.

3. Google takes over 200 factors into account before delivering you the best results to any query in a fraction of a second.

(Source: Backlinko)

Of course, some of them are rather controversial, and others may vary significantly, but there are also those that are proven and important, such as content and backlinks.

4. Google’s ad revenue amounted to almost $95.4 billion in 2017.

(Source: Statista)

According to recent Google stats, that is 25% up from 2016. The search giant saw nearly 22% ad revenue growth in the fourth quarter alone.

5. Google owns about 200 companies.

(Source: Investopedia)

That is, on average, as if they’ve been acquiring more than one company per week since 2010. Among them are companies involved in mapping, telecommunications, robotics, video broadcasting, and advertising.

6. Google’s signature email product has a 27% share of the global email client market.

(Source: Litmus)

This is up by 7% since 2016.

7. Upon going public, Google was valued at $27 billion, according to company figures.

(Source: Entrepreneur)

More specifically, the company sold over 19 million shares of stock at $85 per share. In other words, it was valued as highly as General Motors.

8. The net US digital display ad revenue for Google was $5.24 billion in 2017.

(Source: Emarketer)

Google statistics show that this number is significantly lower than Facebook, which made $16.33 billion, but much higher than Snapchat, which brought in $770 million from digital display ads.

9. Google has a market value of $739 billion.

(Source: Statista)

As of May 2018, the search market leader has a market value of $739 billion, coming behind Apple, which has a market value of $924 billion, Amazon, which has a market value of $783 billion, and Microsoft, which has a market value of $753 billion.

10. Google’s owner, Alphabet, reported an 84% rise in profits for the last quarter.

(Source: The Guardian)

The rising global privacy concerns didn’t affect Google’s profits. According to Thomson Reuters I/B/E/S, the quarterly profit of $9.4 billion exceeded estimates of $6.56 billion. Additionally, the price for clicks and views of ads sold by Google rose in its favor mostly due to advertisers who pursued ad slots on its search engine, YouTube video service, and partner apps and websites.

Read the full list at The SEO Tribunal.

Categorized in Search Engine

 [Source: This article was Published in searchenginejournal.com By Barry Schwartz - Uploaded by the Association Member: Martin Grossner]

Google says the June 3 update is not a major one, but keep an eye out for how your results will be impacted.

Google has just announced that tomorrow it will be releasing a new broad core search algorithm update. These core updates impact how search results are ranked and listed in the Google search results.

Here is Google’s tweet:

[Embedded tweet from @searchliaison]

Previous updates. Google has done previous core updates. In fact, it does one every couple of months or so. The last core update was released in March 2019. You can see our coverage of the previous updates over here.

Why pre-announce this one? Google said the community has been asking Google to be more proactive when it comes to these changes. Danny Sullivan, Google search liaison, said there is nothing specifically “big” about this update compared to previous updates. Google is being proactive about notifying site owners and SEOs, Sullivan said, so people aren’t left “scratching their heads after-the-fact.”

casey markee

When is it going live? Monday, June 3, Google will make this new core update live. The exact timing is not known yet, but Google will also tweet tomorrow when it does go live.


Google’s previous advice. Google has previously shared this advice around broad core algorithm updates:

“Each day, Google usually releases one or more changes designed to improve our results. Some are focused around specific improvements. Some are broad changes. Last week, we released a broad core algorithm update. We do these routinely several times per year.

As with any update, some sites may note drops or gains. There’s nothing wrong with pages that may now perform less well. Instead, it’s that changes to our systems are benefiting pages that were previously under-rewarded.

There’s no ‘fix’ for pages that may perform less well other than to remain focused on building great content. Over time, it may be that your content may rise relative to other pages.”

 

Categorized in Search Engine
