
There's an old joke among Windows users: "Internet Explorer is the best browser to download a better browser with."

In other words, Internet Explorer — Microsoft's old flagship internet browser — has been around for years, and few people actually like it. That's a big reason why in 2015 Microsoft released Edge, their new and improved browser.

Microsoft has made a big effort with Edge to improve the browsing experience, and it's paid off. Microsoft Edge has enough features and benefits that it's actually a real alternative to more popular browsers like Chrome or Firefox.

This is especially true with Edge's most recent update, which overhauled how the browser runs.

Here's everything you need to know about Microsoft Edge, including what it offers and how to download it on your PC, Mac, iPhone, or Android device.

Microsoft Edge, explained

The newest version of Edge is what's called a "Chromium" browser. This means that it can run hundreds of extensions that were originally meant for Google Chrome users. This includes screen readers, in-browser games, productivity tools, and more. 

This is in addition to the extensions already in the Microsoft Store, which you can also use. If you can think of a feature you'd like the browser to have, there's probably an extension for it.

You can find the Extensions menu by clicking the three dots at the top-right and clicking "Extensions." 

If you sign up for a free Microsoft account, you can sync your bookmarks, history, passwords, and more. This means that if you use Edge on a different computer, you'll have all of your browsing data available in moments.

Like with Google Chrome, you can sync your browsing information to your email account. 

Reviews have also said that this new version of Edge runs faster than previous versions, putting it about on par with Chrome and Firefox.

If you'd like to give Microsoft Edge a try, you can download it from the Microsoft Edge website.

The page should automatically detect whether you're using a Mac, PC, iPhone, or Android device. If you think the page has gotten it wrong, click the arrow next to the "Download" button to see all the available versions.


[Source: This article was published in businessinsider.com By Ross James - Uploaded by the Association Member: Alex Gray]


Unlock censorship- and AWS-resistant websites.

Unstoppable Domains today launches its native, censorship-resistant crypto browser. Users can now surf the decentralized web and send crypto payments directly to site addresses ending in .zil or .crypto.

Blockchain Domains Built on Ethereum

Like domain names used for surfing the traditional internet, Unstoppable Domains offers enthusiasts an opportunity to host a site on the Ethereum and Zilliqa blockchains.

Accessing these sites is also straightforward for those unfamiliar with blockchain-based technologies. Users simply add .crypto or .zil, like .com, to a corresponding Unstoppable Domain to navigate to different portions of the decentralized Internet. 
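To make the resolution model concrete, here is a minimal, purely hypothetical sketch in Python. A blockchain domain is essentially a name mapped on-chain to records such as a content hash and payment addresses, which the browser reads instead of querying DNS; the registry dictionary, record keys and values below are invented for illustration and are not the Unstoppable Domains API.

```python
from typing import Optional

# Toy illustration only: a real resolver reads these records from the Ethereum
# or Zilliqa blockchain; this in-memory dictionary and its keys are hypothetical.
HYPOTHETICAL_REGISTRY = {
    "example.crypto": {
        "content_hash": "QmExampleContentHash",  # e.g. an IPFS hash for the site's files
        "payment.BTC": "bc1qexampleaddress",     # a payment address attached to the name
    },
}

def resolve(domain: str, record: str) -> Optional[str]:
    """Return the requested record for a blockchain domain, or None if unset."""
    return HYPOTHETICAL_REGISTRY.get(domain, {}).get(record)

if __name__ == "__main__":
    # A browser would fetch the content behind this hash; a wallet would pay this address.
    print(resolve("example.crypto", "content_hash"))
    print(resolve("example.crypto", "payment.BTC"))
```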

What initially began as a mechanism for easily remembering cryptocurrency addresses has now turned into a suite of products from the San Francisco-based team. Previously, sending cryptocurrencies from wallet to wallet required users to either memorize a 40-character string of letters, numbers and symbols or copy and paste it. 

The former is cumbersome, while the latter has proven risky. 

In 2018, Bleeping Computer reported a type of malware that would monitor users’ machines for cryptocurrency addresses. When one was detected, the malware would swap in the attacker’s address whenever the user attempted to copy and paste it. This way, funds would be sent directly to the attacker rather than the intended recipient. 

Thanks to upstarts like Unstoppable Domains and ENS Domains, both issues can be mitigated. After uncensorable payments, the second step has been to build out uncensorable domains. 


Insofar as many of the world’s most popular sites are built on centralized services like Amazon Web Services (AWS), taking down a website is not difficult. In the unlikely case that AWS ever shut down, much of the internet as we know it would also disappear. Conversely, websites built on a blockchain are protected from seizures and from being stripped of content. 

Many websites that use either a .crypto or a .zil are already available. When users download the Unstoppable Browser, they can use the Chromium-based browser to visit sites like cryptolark.crypto or timdraper.crypto.

Interested parties can download the browser for either Windows or macOS today. 

Cosmos Dev Leaves Tendermint, Cites “Untenable” CEO as Reason

Zaki Manian walks after internal conflict.


Tendermint Labs director Zaki Manian has resigned from his post. Tendermint is a core contributor to the Cosmos blockchain network.

Zaki Manian’s Recent Hint at a Departure

In early February, Manian tweeted his discontent with Tendermint CEO Jae Kwon, saying the co-founder “has obsessively focused on Virgo while neglecting and under resourcing IBC… threw a painstakingly planned hiring and resource improvement proposal out the window to become @BitcoinJaesus.”

He labeled the CEO’s conduct “an untenable distraction.”


Manian intends to continue working on Cosmos, telling Decrypt:

“There are people inside the company that want to portray this as a power struggle between me and Jae, and this as an outcome and me threatening x, y and z. But it was really me saying I don’t see a way in this arrangement to get the work done. And the best way to get the work done was for me to leave.”

Tendermint Continues Development Work

Tendermint is yet to comment on the high-profile departure. The company’s vision “to create open networks in order to manage conflict and empower people to align on universal goals to enact positive societal and environmental change” appears to have come unstuck at its own workplace. 

However, it does have a slate of over 100 projects in the Cosmos and Tendermint ecosystems. Tendermint is the consensus protocol on top of which the Cosmos interoperability network was built.

How the departure of Cosmos’ lead developer will affect the relationship between the two projects remains unclear.

China Sees Red: FCoin Transaction Fee Costs the Exchange Millions

Unique business model costs exchange its business.


Chinese exchange FCoin today announced insolvency following internal “technical difficulties.” The platform’s founder has already announced a new project to help pay back the multi-million dollar capital reserve.

“The Route to Hell Is Paved with Goodwill”

So reads the first line of an ominous Reddit post from FCoin’s founder on Feb. 17.

The announcement from Jian Zhang, formerly the CTO of Huobi, indicated that the exchange would not be able to process user withdrawals because it had become insolvent. “It is expected that the scale of non-payment is between [7,000 and 13,000] BTC,” said the executive. 

The culprit behind such malpractice was the very mechanism that helped FCoin briefly become a top exchange in 2018. 

The exchange leveraged a unique “transaction-fee mining” reward to bootstrap adoption. In practice, this meant that for every trading fee paid on FCoin, users would be reimbursed in full in the exchange’s native token, FToken (FT). 
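A rough sketch, with invented numbers, of why this model invites abuse: if the full fee comes back as FT, trading is effectively free while FT holds its price, so wash trading to farm the rebate (and inflate volume) costs nothing. The fee rate and prices below are illustrative, not FCoin's actual schedule.

```python
# Illustrative numbers only; FCoin's real fee schedule and FT prices are not from the article.
def net_trading_cost(trade_value: float, fee_rate: float,
                     rebate_ratio: float, ft_price_paid: float, ft_price_sold: float) -> float:
    """Net cost of a trade when the fee is rebated in the exchange token (FT)."""
    fee_paid = trade_value * fee_rate                        # fee charged, in dollar terms
    ft_received = (fee_paid * rebate_ratio) / ft_price_paid  # rebate credited in FT
    return fee_paid - ft_received * ft_price_sold            # net cost after selling the FT

# With a 100% rebate and a stable FT price, trading is free; hence the wash-trading volume.
print(net_trading_cost(10_000, 0.001, 1.0, 0.25, 0.25))  # 0.0
# Once FT's price falls, the rebate no longer covers the fee.
print(net_trading_cost(10_000, 0.001, 1.0, 0.25, 0.05))  # 8.0
```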

Users quickly flocked to the exchange, pumping the price of FT and inflating the exchange’s volumes on CoinMarketCap. At its peak, the exchange overtook the likes of OKEx and Binance in reported volume. 

Zhang indicated that the team raked in between $150 and $200 million at this time, with payouts to “old FCoin users” as high as 6,000 Bitcoin. Soon, however, this very mechanism became the exchange’s downfall. 

In its short existence, FCoin had been periodically paying out slightly more to users than it could afford. The team did not notice this discrepancy due to poor analytical tools for measuring payouts. Even after it began buying back FT with company funds, a user base eager to exploit the underdeveloped business model had far outpaced the team’s ability to save a sinking ship. 

Still, in an act of good faith, Zhang is determined to pay out all remaining withdrawal requests.

Over the next two to three months, the founder will fulfill all email withdrawal requests as part one of a two-part plan. The second part relies on the success of a “new project,” said Zhang. He added: 

“Once the new project is on track, I will begin the long-term mail withdrawal process, which may take 1-3 years. In addition, for the other losses of FT and FMEX investors, I am also willing to use the profit of the new project to compensate. The specific calculation method will be discussed with you at the beginning of the compensation.”

At the time of writing, FT finished trading at ~$0.04, down from a high of nearly $0.30 in May 2019. FCoin reports a 24-hour volume in BTC/USDT of  approximately $115 million, according to CoinMarketCap. FT is the seventh highest-traded coin on the platform. 

Binance Cloud to Offer Exchange-in-a-Box Infrastructure Service

Binance set to enter the white-label market


Binance is set to develop white-label crypto exchange infrastructure for use by smaller exchanges, allowing them to focus on regulatory compliance.

Binance Cloud Service Extends to Exchange Infrastructure

Binance’s cloud service was already hinted at by the exchange’s CEO, Changpeng “CZ” Zhao, during an “ask me anything” session on Feb. 8. Their white-label exchange infrastructure will provide spot market and futures trading, bank API integrations, and fiat-to-cryptocurrency exchange services. 

Exchanges will be able to rebrand the software infrastructure, to be hosted on Binance Cloud, to suit the needs of their local markets. A statement from the company explained:

“The Binance Cloud service is an all-in-one solution, featuring an easy-to-use dashboard that allows customers to manage funds, trading pairs and coin listings, as well as multilingual support, depth-sharing with the Binance.com global exchange, and more opportunities to collaborate with the ecosystem.”

White-Label Exchanges Nothing New

White-label crypto exchange infrastructure is not new to the industry. The current market leader is AlphaPoint, which claims to provide infrastructure to “over 100 exchange operators.”

Binance’s entry into the white-label market appears to be in line with the giant’s determination to redefine money and expand cryptocurrency access and services to a worldwide audience.

With Binance-powered matching engines, security, and liquidity solutions, new exchanges would be able to access instant workability and scalability. Startup exchanges have historically faced daunting setup costs, with many failing to gain significant market traction.

The 32nd largest exchange by 24-hour volume, according to CoinMarketCap, had less than half a million dollars in trading activity for the day at press time.


[Source: This article was published in cryptobriefing.com By Liam Kelly - Uploaded by the Association Member: Bridget Miller]


This was a pretty busy week: we may have had a Google search algorithm update this week, and maybe, just maybe, Forbes got hit hard by it. Google is probably going to revert the favicon and black ad label user interface; lots of tests are going on now. Bing hides the ad label as well, so it isn’t just Google. I posted a summary of everything you need to know about the Google featured snippet deduplication change, including that Google might be giving us performance data on them, images in featured snippets may change, and Google will move the right-side featured snippet to the top; until then it has stopped deduplicating the right-side featured snippets. Google Search Console launched a new removals tool with a new set of features. Google may have issues indexing international pages. Google says it treats links in PDFs as nofollowed links, but that contradicts earlier statements. Google said schema markup will continue to get more complicated. Google said not to translate your image URLs. I shared a fun People Also Ask that looks like an ad, but is not an ad. Google Assistant Actions do not give you a ranking boost. Google is still using Chrome 41 as the user agent when requesting resources, but not for rendering. Google Ads switched all campaign types to standard delivery. Google My Business suspensions are at an all-time high. Google Chrome is testing hiding URLs on the search results page. Google is hiring an SEO. I posted two vlogs this week, one with Thom Craver and one with Lisa Barone. Oh, and if you want to help sponsor those vlogs, go to patreon.com/barryschwartz. That was the search news this week at the Search Engine Roundtable.

Make sure to subscribe to our video feed or subscribe directly on iTunes to be notified of these updates and download the video in the background. Here is the YouTube version of the feed:


 [Source: This article was published in seroundtable.com By Barry Schwartz - Uploaded by the Association Member: Olivia Russell]


Google is enhancing its Collections in Search feature, making it easy to revisit groups of similar pages.

Similar to the activity cards in search results, introduced last year, Google’s Collections feature allows users to manually create groups of like pages.

Now, using AI, Google will automatically group together similar pages in a collection. This feature is compatible with content related to activities like cooking, shopping, and hobbies.


This upgrade to collections will be useful in the event you want to go back and look at pages that weren’t manually saved. Mona Vajolahi, a Google Search Product Manager, states in an announcement:

“Remember that chicken parmesan recipe you found online last week? Or that rain jacket you discovered when you were researching camping gear? Sometimes when you find something on Search, you’re not quite ready to take the next step, like cooking a meal or making a purchase. And if you’re like me, you might not save every page you want to revisit later.”

These automatically generated collections can be saved to keep forever, or disregarded if not useful. They can be accessed any time from the Collections tab in the Google app, or through the Google.com side menu in a mobile browser.

Once a collection is saved, users can discover even more similar pages by tapping the “Find More” button. Google is also adding a collaboration feature that allows users to share collections and create them together with other people.

Auto-generated collections will start to appear for US English users this week. The ability to see related content will launch in the coming weeks.

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Logan Hochstetler]


It’s not paid inclusion, but it is paid occlusion

Happy Friday to you! I have been reflecting a bit on the controversy du jour: Google’s redesigned search results. Google is trying to foreground sourcing and URLs, but in the process it made its results look more like ads, or vice versa. Bottom line: Google’s ads just look like search results now.

I’m thinking about it because I have to admit that I don’t personally hate the new favicon-plus-URL structure. But I think that might be because I am not a normal consumer of web content. I’ve been on the web since the late ‘90s and I parse information out of URLs kind of without thinking about it. (In fact, the relative decline of valuable information getting encoded into the URL is a thing that makes me sad.)

I admit that I am not a normal user. I set up custom Chrome searches and export them to my other browsers. I know what SERP means and the term kind of slips out in regular conversation sometimes. I have opinions about AMP and its URL and caching structure. I’m a weirdo.

To that weirdo, Google’s design makes perfect sense, and it’s possible it might do the same for regular folk. The new layout for search results is ugly at first glance — but then Google was always ugly until relatively recently. I very quickly learned to unconsciously take in the information from the top favicon and URL-esque info without it really distracting me.

...Which is basically the problem. Google’s using that same design language to identify its ads instead of much more obvious, visually distinct methods. It’s consistent, I guess, but it also feels deceptive.

Recode’s Peter Kafka recently interviewed Buzzfeed CEO Jonah Peretti, and Peretti said something really insightful: what if Google’s ads really aren’t that good? What if Google is just taking credit for clicks on ads just because people would have been searching for that stuff anyway? I’ve been thinking about it all day: what if Google ads actually aren’t that effective and the only reason they make so much is billions of people use Google?

The pressure to make them more effective would be fairly strong, then, wouldn’t it? And it would get increasingly hard to resist that pressure over time.

I am old enough to remember using the search engines before Google. I didn’t know how bad their search technology was compared to what was to come, but I did have to bounce between several of them to find what I wanted. Knowing what was a good search for WebCrawler and what was good for Yahoo was one of my Power User Of The Internet skills.

So when Google hit, I didn’t realize how powerful and good the PageRank technology was right away. What I noticed right away is that I could trust the search results to be “organic” instead of paid and that there were no dark patterns tricking me into clicking on an ad.

One of the reasons Google won search in the first place with old people like me was that in addition to its superior technology, it drew a harder line against allowing paid advertisements into its search results than its competitors.

With other search engines, there was the problem of “paid inclusion,” which is the rare business practice that does exactly what the phrase means. You never really knew if what you were seeing was the result of a web-crawling bot or a business deal.

This new ad layout doesn’t cross that line, but it’s definitely problematic and it definitely reduces my trust in Google’s results. It’s not so much paid inclusion as paid occlusion.

Today, I still trust Google to not allow business dealings to affect the rankings of its organic results, but how much does that matter if most people can’t visually tell the difference at first glance? And how much does that matter when certain sections of Google, like hotels and flights, do use paid inclusion? And how much does that matter when business dealings very likely do affect the outcome of what you get when you use the next generation of search, the Google Assistant?

And most of all: if Google is willing to visually muddle ads, how long until its users lose trust in the algorithm itself? With this change, Google is becoming what it once sought to overcome: AltaVista.


[Source: This article was published in theverge.com By Barry Schwartz - Uploaded by the Association Member: James Gill]


Now that the Google January 2020 core update is mostly rolled out, we have asked several data providers to send us what they found with this Google search update. All of the data providers agree that this core update was a big one and impacted a large number of web sites.

The facts. What we know from Google, as we previously reported, is that the January 2020 core update started to roll out around 12:00 PM ET on Monday, January 13th. That rollout was “mostly done” by Thursday morning, on January 16th. We also know that this was a global update, and was not specific to any region, language or category of web sites. It is a classic “broad core update.”

What the tools are seeing. We have gone to third-party data companies asking them what their data shows about this update.

RankRanger. Mordy Oberstein from RankRanger said, “the YMYL (your money, your life) niches got hit very hard.” “This is a huge update,” he added. “There is massive movement at the top of the SERP for the Health and Finance niches and incredible increases for all niches when looking at the top 10 results overall.”

Here is a chart showing the rank volatility broken down by industry and the position of those rankings:


“Excluding the Retail niche, which according to what I am seeing was perhaps a focus of the December 6th update, the January 2020 core update was a far larger update across the board and at every ranking position,” Mordy Oberstein added. “However, when looking at the top 10 results overall during the core update, the Retail niche started to separate itself from the levels of volatility seen in December as well.”

SEMRush. Yulia Ibragimova from SEMRush said “We can see that the latest Google Update was quite big and was noticed almost in every category.” The most volatile categories according to SEMRush, outside of Sports and News, were Online communities, Games, Arts & Entertainments, and Finance. But Yulia Ibragimova added that all categories saw major changes and “we can assume that this update wasn’t aimed to any particular topics,” she told us.

SEMRush makes a lot of this data available on its website, but it also sent us additional data on this update.

Here is the volatility by category by mobile vs desktop search results:


The top ten winners according to SEMRush were Dictionary.com, Hadith of the Day, Discogs, ABSFairings, X-Rates, TechCrunch, ShutterStock, 247Patience, GettyImages and LiveScores.com. The top ten losers were mp3-youtube.download, TotalJerkFace.com, GenVideos.io, Tuffy, TripSavvy, Honolulu.gov, NaughtyFind, Local.com, RuthChris and Local-First.org.

Sistrix. Johannes Beus from Sistrix posted their analysis of this core update. He said “Domains that relate to YMYL (Your Money, Your Life) topics have been re-evaluated by the search algorithm and gain or lose visibility as a whole. Domains that have previously been affected by such updates are more likely to be affected again. The absolute fluctuations appear to be decreasing with each update – Google is now becoming more certain of its assessment and does not deviate as much from the previous assessment.”

Here is the Sistrix chart showing the change:


According to Sistrix, the big winners were goal.com, onhealth.com, CarGurus, verywellhealth.com, Fandango, Times Of Israel, Royal.uk, and WestField. The big losers were CarMagazine.co.uk, Box Office Mojo, SkySports, ArnoldClark.com, CarBuyer.co.uk, History Extra, Evans Halshaw, and NHS Inform.

SearchMetrics. Marcus Tober, the founder of SearchMetrics, told us “the January Core Update seems to revert some changes for the better or worse depending on who you are. It’s another core update where thin content got penalized and where Google put an emphasis in YMYL. The update doesn’t seem to affect as many pages as with the March or September update in 2019. But has similar characteristics.”

Here are some specific examples SearchMetrics shared. First, Onhealth.com won during the March 2019 core update, lost during the September 2019 update, and won again in a big way during the January 2020 core update:


Verywellhealth.com, meanwhile, was a loser during multiple core updates:


Draxe.com, which has been up and down during core updates, seems to be a big winner in this update at +83%, but in previous core updates it got hit hard:


The big winners according to SearchMetrics were etsy.com, cargurus.com, verywellhealth.com, overstock.com, addictinggames.com, onhealth.com, bigfishgames.com and health.com. The big losers were tmz.com, academy.com, kbhgames.com, orbitz.com, silvergames.com, autolist.com, etonline.com, trovit.com and pampers.com.

What to do if you are hit. Google has given advice on what to consider if you are negatively impacted by a core update in the past. There aren’t specific actions to take to recover, and in fact, a negative rankings impact may not signal anything is wrong with your pages. However, Google has offered a list of questions to consider if your site is hit by a core update.

Why we care. It is often hard to isolate what you need to do to reverse any algorithmic hit your site may have seen. When it comes to Google core updates, it is even harder to do so. What this data, along with previous experience and advice, has shown us is that these core updates are broad and cover a lot of overall quality issues. The data above reinforces this. So if your site was hit by a core update, it is often recommended to step back from it all, take a wider view of your overall web site and see what you can do to improve the site overall.

[Source: This article was published in searchengineland.com By Barry Schwartz - Uploaded by the Association Member: Edna Thomas]


If you’re looking for data, your search should start here. Google’s Dataset Search just launched as a full-fledged search tool, and it’s about as good as you’d expect. Google does a masterful job of collating all kinds of datasets from all across the internet with useful info like publication data, authors and file types available before you even click through. From NFL stats from the ’70s to catch records of great white sharks in the northwest Pacific, it seems to have it all. There are about 25 million datasets available now — actually just “a fraction of datasets on the web,” Google told the Verge — but more will be available as data hosts update their metadata.

Is there a word for that? Last week, as I took what must have been my hundredth Uber or Lyft ride at the tail end of two weeks of travel, I publicly wondered if there was a word for the specific type of small talk you make with a rideshare driver. (There isn’t, but I tip my hat to my former editor, Anne Glover, for whipping “chauffeurenfreude” together.) Different languages often feature unique words that capture seemingly indescribable feelings or experiences that don’t translate well at all. Here’s a website that keeps track of them.

This messaging app will self-destruct in 10 seconds. Literally. Well, not literally. There’s no explosion. But with Yap, messages (between up to six people) exist only until you type your next message, taking “ephemeral” to a whole new level. It seems to me that this is more of a proof of concept that shows the internet doesn’t have to be forever (imagine that!) and less of an actual useful tool for journalists. But the folks who subscribe to this newsletter are smart cookies. Prove me wrong.

Facebook just gave you access to some more of what it knows about you. Because of multi-site logins and Facebook ads, Facebook receives all kinds of information about users’ activities on other apps and websites. With the new Off-Facebook Activity tool, you can see and control exactly where that happens. “You might be shocked or at least a little embarrassed by what you find in there,” writes Washington Post tech columnist Geoffrey A. Fowler, and he couldn’t be more right — by piecing info together from my history, you can tell that I have a chronic bad habit of ordering late-night Domino’s pizza.

SPONSORED: Looking for an expert source for a story? Find and request an interview with academics from top universities on the Coursera | Expert Network, a new, free tool built for journalists. The Expert Network highlights those who can speak to the week’s trending news stories and showcases their perspectives on topical issues in short audio and video clips. Quickly and easily access a diverse set of subject matter experts at experts.coursera.org today.

If you needed another reminder to use caution online, here it is. The Tampa Bay Times, which Poynter owns, was the latest news organization to be hit by a nasty ransomware attack. The Times reported that it is unclear how the attack was carried out, so I can’t give you specific tips for avoiding a similar fate, but it’s a good reminder that any organization is only as safe as its weakest link. There are tools that can help — a good password manager and a well-placed firewall, for starters — but exercising good internet safety hygiene is the best first step. Be skeptical of emails from unknown senders, especially those with attachments. Keep your operating systems and software updated. And don’t use weak passwords (and especially don’t use the same weak passwords across multiple websites).

Weird news is often harmful to the most vulnerable members of society. I cringe every time I see a “Florida Man” story (my colleague Al Tompkins lays out why that is here), but many stories labeled “weird” or “dumb/stupid criminals” capitalize on human misery. Some of these stories may seem funny, but at whose expense?

Here’s a tool that displays every road in a city. It’s an interesting way to look at any metropolitan area, town or hamlet — from the world’s biggest city of Chongqing, China (population: 30 million), all the way down to my humble hometown of Gasport, New York (population: 1,248). Plus, you can export each one as a .png or editable .svg file. (Just a warning: Smaller locales seem to take a long time to load, if they even do at all.)

Bookmark this publishing tool in case it’s the next best thing (it probably is). The founding CEO of Chartbeat, a ubiquitous real-time analytics tool for newsrooms, is back at it with a new project. It’s called Scroll and it massively improves the reading experience by removing ads and loading pages faster. My colleague, Rick Edmonds, has more about its founder and the future of the platform.

WikiHow’s bizarre art has been plastered all over the internet since 2005. Many of its pieces feature odd scenes that would probably never happen in real life. You’ve probably seen them repurposed in meme form. Here’s the strange story about how they’re made (and yes, it features some human misery, though we’re not making fun of it here).

[This article is originally published in bleepingcomputer.com By Lawrence Abrams - Uploaded by AIRS Member: Eric Beaudoin]


Michael struggles to find the search results he’s looking for, and would like some tips for better Googling

Want to search like a pro? These tips will help you up your Googling game using the advanced tools to narrow down your results. Photograph: Alastair Pike/AFP via Getty Images
Last week’s column mentioned search skills. I’m sometimes on the third page of results before I get to what I was really looking for. I’m sure a few simple tips would find these results on page 1. All advice welcome. Michael

Google achieved its amazing popularity by de-skilling search. Suddenly, people who were not very good at searching – which is almost everyone – could get good results without entering long, complex searches. Partly this was because Google knew which pages were most important, based on its PageRank algorithm, and it knew which pages were most effective, because users quickly bounced back from websites that didn’t deliver what they wanted.
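For readers who have only ever heard the name, here is a minimal sketch of the idea behind PageRank: a page's score is roughly the chance a random surfer lands on it, so pages with many (and well-ranked) inbound links score higher. The toy link graph and damping factor are illustrative, not Google's production system.

```python
# Minimal PageRank power iteration over a toy link graph (illustrative only).
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # simplification: pages with no outlinks just leak rank here
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy_web))  # "c", with two inbound links, ends up with the highest score
```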

Later, Google added personalisation based on factors such as your location, your previous searches, your visits to other websites, and other things it knew about you. This created a backlash from people with privacy concerns, because your searches into physical and mental health issues, legal and social problems, relationships and so on can reveal more about you than you want anyone else – or even a machine – to know.

When talking about avoiding “the creepy line”, former Google boss Eric Schmidt said: “We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”

Google hasn’t got to that point, yet, but it does want to save you from typing. Today, Google does this through a combination of auto-complete search suggestions, Answer Boxes, and “People also ask” boxes, which show related questions along with their “feature snippets”. As a result, Google is much less likely to achieve its stated aim of sending you to another website. According to Jumpshot research, about half of browser-based searches no longer result in a click, and about 6% go to Google-owned properties such as YouTube and Maps.

You could get upset about Google scraping websites such as Wikipedia for information and then keeping their traffic, but this is the way the world is going. Typing queries into a browser is becoming redundant as more people use voice recognition on smartphones or ask the virtual assistant on their smart speakers. Voice queries need direct answers, not pages of links.

So, I can give you some search tips, but they may not be as useful as they were when I wrote about them in January 2004 – or perhaps not for as long.

Advanced Search for everyone
 Google’s advanced search page is the tool to properly drill down into the results. Photograph: Samuel Gibbs/The Guardian

The easiest way to create advanced search queries in Google is to use the form on the Advanced Search page, though I suspect very few people do. You can type different words, phrases or numbers that you want to include or exclude into the various boxes. When you run the search, it converts your input into a single string using search shortcuts such as quotation marks (to find an exact word or phrase) and minus signs (to exclude words).

You can also use the form to narrow your search to a particular language, region, website or domain, or to a type of file, how recently it was published and so on. Of course, nobody wants to fill in forms. However, using the forms will teach you most of the commands mentioned below, and it’s a fallback if you forget any.
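As a rough illustration of what the form is doing, the sketch below maps a few Advanced Search-style fields onto the single query string described above, adding the quotation marks and minus signs for you. The function and field names are just descriptive; they are not Google's actual form parameters.

```python
# Sketch: turning Advanced Search-style fields into one query string with operators.
def build_query(all_words="", exact_phrase="", none_of_these="", site=""):
    parts = []
    if all_words:
        parts.append(all_words)
    if exact_phrase:
        parts.append(f'"{exact_phrase}"')                         # exact word or phrase
    parts += [f"-{word}" for word in none_of_these.split()]       # words to exclude
    if site:
        parts.append(f"site:{site}")                              # limit to one website or domain
    return " ".join(parts)

print(build_query(all_words="jaguar", none_of_these="car"))
# jaguar -car
print(build_query(exact_phrase="I for one welcome our new insect overlords"))
# "I for one welcome our new insect overlords"
```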

Happily, many commands work on other search engines too, so skills are transferable.

Use quotation marks
 Quotation marks can be a powerful tool to specify exact search terms. Photograph: IKEA

If you are looking for something specific, quotation marks are invaluable. Putting quotation marks around single words tells the search engine that you definitely want them to appear on every page it finds, rather than using close matches or synonyms. Google will, of course, ignore this, but at least the results page will tell you which word it has ignored. You can click on that word to insist, but you will get fewer or perhaps no results.

Putting a whole phrase in inverted commas has the same effect, and is useful for finding quotations, people’s names, book and film titles, or particular phrases.

You can also use an asterisk as a wildcard to find matching phrases. For example, The Simpsons episode, Deep Space Homer, popularised the phrase: “I for one welcome our new insect overlords”. Searching for “I for one welcome our new * overlords” finds other overlords such as aliens, cephalopods, computers, robots and squirrels.

Nowadays, Google’s RankBrain is pretty good at recognising titles and common phrases without quote marks, even if they include “stop words” such as a, at, that, the and this. You don’t need quotation marks to search for the Force, The Who or The Smiths.

However, it also uses synonyms rather than strictly following your keywords. It can be quicker to use minus signs to exclude words you don’t want than add terms that are already implied. One example is jaguar -car.

Use site commands

 Using the ‘site:’ command can be a powerful tool for quickly searching a particular website. Photograph: Samuel Gibbs/The Guardian

Google also has a site: command that lets you limit your search to a particular website or, with a minus sign (-site:), exclude it. This command uses the site’s uniform resource locator or URL.

For example, if you wanted to find something on the Guardian’s website, you would type site:theguardian.com (no space after the colon) alongside your search words.

You may not need to search the whole site. For example, site:theguardian.com/technology/askjack will search the Ask Jack posts that are online, though it doesn’t search all the ancient texts (continued on p94).

There are several similar commands. For example, inurl: will search for or exclude words that appear in URLs. This is handy because many sites now pack their URLs with keywords as part of their SEO (search-engine optimisation). You can also search for intitle: to find words in titles.

Web pages can include incidental references to all sorts of things, including plugs for unrelated stories. All of these will duly turn up in text searches. But if your search word is part of the URL or the title, it should be one of the page’s main topics.

You can also use site: and inurl: commands to limit searches to include, or exclude, whole groups of websites. For example, either site:co.uk or inurl:co.uk will search matching UK websites, though many UK sites now have .com addresses. Similarly, site:ac.uk and inurl:ac.uk will find pages from British educational institutions, while inurl:edu and site:edu will find American ones. Using inurl:ac.uk OR inurl:edu (the Boolean command must be in caps) will find pages from both. Using site:gov.uk will find British government websites, and inurl:https will search secure websites. There are lots of options for inventive searchers.

Google Search can also find different types of file, using either filetype: or ext: (for file extension). These include office documents (docx, pptx, xlsx, rtf, odt, odp, odx etc) and pdf files. Results depend heavily on the topic. For example, a search for picasso filetype:pdf is more productive than one for stormzy.
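To show how these commands combine, here is a small sketch that composes site:, inurl: and filetype: terms into a Google search URL. The operators are the ones described above; the helper function itself is just an illustration.

```python
from urllib.parse import quote_plus

# Sketch: composing the operators described above into a search URL.
def operator_query(terms, site=None, inurl=None, filetype=None):
    parts = list(terms)
    if site:
        parts.append(f"site:{site}")
    if inurl:
        parts.append(f"inurl:{inurl}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    return "https://www.google.com/search?q=" + quote_plus(" ".join(parts))

print(operator_query(["picasso"], filetype="pdf"))
print(operator_query(["ask jack"], site="theguardian.com"))
```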

Make it a date

 Narrowing your search by date can find older pieces. Photograph: Samuel Gibbs/The Guardian

We often want up-to-date results, particularly in technology where things that used to be true are not true any more. After you have run a search, you can use Google’s time settings to filter the results, or use new search terms. To do this, click Tools, click the down arrow next to “Any time”, and use the dropdown menu to pick a time period between “Past hour” and “Past year”.

Last week, I was complaining that Google’s “freshness algorithm” could serve up lots of blog-spam, burying far more useful hits. Depending on the topic, you can use a custom time range to get less fresh but perhaps more useful results.

Custom time settings are even more useful for finding contemporary coverage of events, which might be a company’s public launch, a sporting event, or something else. Human memories are good at rewriting history, but contemporaneous reports can provide a more accurate picture.

However, custom date ranges have disappeared from mobile, the daterange: command no longer seems to work in search boxes, and “sort by date” has gone except in news searches. Instead, this year, Google introduced before: and after: commands to do the same job. For example, you could search for “Apple iPod” before:2002-05-31 after:2001-10-15 for a bit of nostalgia. The date formats are very forgiving, so one day we may all prefer it.
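A final sketch, following on from the example above: the before: and after: operators accept ISO-style dates, so they are easy to generate programmatically when you want contemporaneous coverage of a known event.

```python
from datetime import date

# Sketch: appending before:/after: operators for a custom date range.
def date_range_query(query: str, start: date, end: date) -> str:
    return f"{query} after:{start.isoformat()} before:{end.isoformat()}"

print(date_range_query('"Apple iPod"', date(2001, 10, 15), date(2002, 5, 31)))
# "Apple iPod" after:2001-10-15 before:2002-05-31
```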

 [Source: This article was published in theguardian.com - Uploaded by the Association Member: Carol R. Venuti] 


Earlier today, Google announced that it would be redesigning the redesign of its search results as a response to withering criticism from politicians, consumers and the press over the way in which search results displays were made to look like ads.

Google makes money when users of its search service click on ads. It doesn’t make money when people click on an unpaid search result. Making ads look like search results makes Google more money.

It’s also a pretty evil (or at least unethical) business decision by a company whose mantra was “Don’t be evil” (although they gave that up in 2018).

 

Users began noticing the changes to search results last week, and at least one user flagged the changes earlier this week.

There's something strange about the recent design change to google search results, favicons and extra header text: they all look like ads, which is perhaps the point?

Google responded with a bit of doublespeak from its corporate account about how the redesign was intended to achieve the opposite effect of what it was actually doing.

“Last year, our search results on mobile gained a new look. That’s now rolling out to desktop results this week, presenting site domain names and brand icons prominently, along with a bolded ‘Ad’ label for ads,” the company wrote.

Senator Mark Warner (D-VA) took a break from impeachment hearings to talk to The Washington Post about just how bad the new search redesign was.

“We’ve seen multiple instances over the last few years where Google has made paid advertisements ever more indistinguishable from organic search results,” Warner told the Post. “This is yet another example of a platform exploiting its bottleneck power for commercial gain, to the detriment of both consumers and also small businesses.”

Google’s changes to its search results happened despite the fact that the company is already being investigated by every state in the country for antitrust violations.

For Google, the rationale is simple. The company’s advertising revenues aren’t growing the way they used to, and the company is looking at a slowdown in its core business. To try and juice the numbers, dark patterns present an attractive way forward.

Indeed, Google’s using the same tricks that it once battled to become the premier search service in the U.S. When the company first launched its search service, ads were clearly demarcated and separated from actual search results returned by Google’s algorithm. Over time, the separation between what was an ad and what wasn’t became increasingly blurred.

 
Color fade: A history of Google ad labeling in search results http://selnd.com/2adRCdU

“Search results were near-instant and they were just a page of links and summaries – perfection with nothing to add or take away,” user experience expert Harry Brignull (and founder of the watchdog website darkpatterns.org) said of the original Google search results in an interview with TechCrunch.

“The back-propagation algorithm they introduced had never been used to index the web before, and it instantly left the competition in the dust. It was proof that engineers could disrupt the rules of the web without needing any suit-wearing executives. Strip out all the crap. Do one thing and do it well.”

“As Google’s ambitions changed, the tinted box started to fade. It’s completely gone now,” Brignull added.

The company acknowledged that its latest experiment might have gone too far in its latest statement and noted that it will “experiment further” on how it displays results.

 [Source: This article was published in techcrunch.com By Jonathan Shieber - Uploaded by the Association Member: Joshua Simon]


"In the future, everyone will be anonymous for 15 minutes." So said the artist Banksy, but following the rush to put everything online, from relationship status to holiday destinations, is it really possible to be anonymous - even briefly - in the internet age?

That saying, a twist on Andy Warhol's famous "15 minutes of fame" line, has been interpreted to mean many things by fans and critics alike. But it highlights the real difficulty of keeping anything private in the 21st Century.

"Today, we have more digital devices than ever before and they have more sensors that capture more data about us," says Prof Viktor Mayer-Schoenberger of the Oxford Internet Institute.

And it matters. According to a survey from the recruitment firm Careerbuilder, in the US last year 70% of companies used social media to screen job candidates, and 48% checked the social media activity of current staff.

Also, financial institutions can check social media profiles when deciding whether to hand out loans.


Meanwhile, companies create models of buying habits, political views and even use artificial intelligence to gauge future habits based on social media profiles.

One way to try to take control is to delete social media accounts, which some did after the Cambridge Analytica scandal, when 87 million people had their Facebook data secretly harvested for political advertising purposes.

While deleting social media accounts may be the most obvious way to remove personal data, this will not have any impact on data held by other companies.

Fortunately, in some countries the law offers protection.

In the European Union the General Data Protection Regulation (GDPR) includes the "right to be forgotten" - an individual's right to have their personal data removed.

In the UK, that is policed by the Information Commissioner's Office (ICO). Last year it received 541 requests to have information removed from search engines, according to data shown to the BBC, up from 425 the year before and 303 in 2016-17.

The actual figures may be higher, as the ICO says it often only becomes involved after an initial complaint to the company that holds the information has been rejected.

But ICO's Suzanne Gordon says it is not clear-cut: "The GDPR has strengthened the rights of people to ask for an organisation to delete their personal data if they believe it is no longer necessary for it to be processed.

"However, this right is not absolute and in some cases must be balanced against other competing rights and interests, for example, freedom of expression."

The "right to be forgotten" shot to prominence in 2014 and led to a wide-range of requests for information to be removed - early ones came from an ex-politician seeking re-election, and a paedophile - but not all have to be accepted.

Companies and individuals that have the money can hire experts to help them out.

A whole industry is being built around "reputation defence" with firms harnessing technology to remove information - for a price - and bury bad news from search engines, for example.

One such company, Reputation Defender, founded in 2006, says it has a million customers including wealthy individuals, professionals and chief executives. It charges around £5,000 ($5,500) for its basic package.

It uses its own software to alter the results of Google searches about its clients, helping to lower less favourable stories in the results and promote more favourable ones instead.


"The technology focuses on what Google sees as important when indexing websites at the top or bottom of the search results," says Tony McChrystal, managing director.

"Generally, the two major areas Google prioritises are the credibility and authority the web asset has, and how users engage with the search results and the path Google sees each unique individual follow.

"We work to show Google that a greater volume of interest and activity is occurring on sites that we want to promote, whether they're new websites we've created, or established sites which already appear in the [Google results pages], while sites we are seeking to suppress show an overall lower percentage of interest."

The firm sets out to achieve its specified objective within 12 months.

"It's remarkably effective," he adds, "since 92% of people never venture past the first page of Google and more than 99% never go beyond page two."

Prof Mayer-Schoenberger points out that, while reputation defence companies may be effective, "it is hard to understand why only the rich that can afford the help of such experts should benefit and not everyone".


So can we ever completely get rid of every online trace?

"Simply put, no," says Rob Shavell, co-founder and chief executive of DeleteMe, a subscription service which aims to remove personal information from public online databases, data brokers, and search websites.

"You cannot be completely erased from the internet unless somehow all companies and individuals operating internet services were forced to fundamentally change how they operate.

"Putting in place strong sensible regulation and enforcement to allow consumers to have a say in how their personal information can be gathered, shared, and sold would go a long way to addressing the privacy imbalance we have now."

[Source: This article was published in bbc.com By Mark Smith - Uploaded by the Association Member: Jay Harris]

