
[This article is originally published in vox.com written by Kara Swisher - Uploaded by AIRS Member: Jay Harris]

Forces have been unleashed that seem out of control. But is it the end of the beginning or the beginning of the end?

Exactly a decade ago, Walt Mossberg and I declared the end of Web 2.0 and the beginning of its next iteration: Web 3.0.

There was a recession messing badly with the tech sector at the time, which we dubbed the Econalypse. But we decided to make a loud prediction right before our seventh All Things Digital conference anyway because we saw that the digital tidal wave sweeping the world just wasn’t stopping.

And we said so:

[W]hat’s the seminal development that’s ushering in the era of Web 3.0? It’s the real arrival, after years of false predictions, of the thin client, running clean, simple software, against cloud-based data and services. The poster children for this new era have been the Apple iPhone and iPod Touch, which have sold 37 million units in less than two years and attracted 35,000 apps and one billion app downloads in just nine months.

The excitement and energy around the iPhone and the Touch — and the software and services being written for them — remind us of the formative years of the PC and PC software, in the early 1980s, or the early days of the Web in the mid-1990s.

It’s a big deal.

But this is not just about one company, one platform or even one form factor. No, this new phenomenon is about handheld computers from many companies, with software platforms and distribution mechanisms tightly tied to cloud-based services, whether they are multi-player games, e-commerce offerings or corporate databases.

Some of these handheld computers will make phone calls, but others won’t. Some will fit in a pocket, but others will be tablets or even laptop-type clamshells. But, like the iPhone, all will be fusions of clever new hardware, innovative client software and powerful server-based components.

Pretty prescient, right? Smartphones and the app universe — spawned by the Apple iPhone release in June 2007 — gave rise to a new era of innovation, ushering in a spate of companies dependent on mobile. It was, as I have written before, a Cambrian explosion. Would there be Uber, Lyft, Tinder, and so many others without the mobile phone? Would companies like Facebook, Twitter, and Slack have grown so large in its absence?

Now, I must declare that revolution officially over. After a very long run, Web 3.0 is on its last legs and needs to get out of the way for what’s next.

And what’s that? Well, as it turns out, a lot of things that include tech, politics, social mores, and more that have gained traction over the past few years and that are mashing together in ways that are still sorting themselves out.

That’s why I did a PowerPoint — and trust me when I tell you, I hate PowerPoints — to try to get my own head around it. I edit it almost constantly, as new ideas pop into my head, and am always trying to make new connections between and among the key trends.

Let’s start with the title that I first put on it: “It’s not just Amazon (well, you need to be scared of them too).” Sure, it could be catchier, but it’s that for a good reason. Over the past few years, there has been a big focus on individual companies — especially Amazon, Facebook, and Google — rather than the overall trends they represent around the toxic impacts of tech. While each has grown into a powerful and sometimes troublesome entity, the changes that are happening now across tech are much bigger than any one company. We’re talking about pervasive ecosystem changes.

That’s because tech companies come and go, even the titans, and the natural instinct is to name the company rather than see the bigger ideas that their actions are shaping — and that are shaping them.

Let’s start with the most important trend: artificial intelligence. The others include robotics and automation; self-driving; endless choice; privacy under assault, when data is gold; continuous partial hacking; continuous partial attention; and political and social unrest.

Starting with AI, there’s a line that I think pretty much encapsulates the one thing you absolutely need to know about the future of machine learning that I have used again and again: Everything that can be digitized will be digitized.

Full stop.

What do I mean by that? Computing as we know it is being changed by AI and machine learning. It is already everywhere — from when you talk to Apple’s Siri or Amazon’s Alexa or see your list of movies on Netflix or interact with a chatbot. Its use, both generally and for highly specific things, has and will continue to impact innumerable fields, resulting in massive job disruption.


Economists and consulting firms have long predicted that AI will change workplaces and the workforce, but I think they’ve been underselling it. For example, when a Google deep learning program can quickly study a super-complex multiplayer strategy game and then kick the crap out of the best human players on the planet, perhaps we should think hard about where it will go next.

Whether AI kills jobs or adds them is a matter of much debate, but either way it will lead to things like multi-careers, more gig-oriented work, and a need for reeducation.

On the downside, which jobs will be impacted? It’s not just factory workers, burger flippers, and long-haul truckers. Highly paid lawyers, skilled doctors (don’t let your daughter be a radiologist), and, yes, even lowly journalists will need to find new lines of work. And those tectonic workplace realignments will only become more profound as the AI becomes inevitably — and exponentially — better.

To thrive in this environment will require being in a profession that is creative, where analog interactions are critical — one that cannot be easily made digital. Think art, think the caring professions, think anything in which being human trumps cyborg. And as AI becomes ever smarter, it will make sense to let it do more and more, even as we become ever less so.

It is a path humanity is already on, of course: When was the last time you ever read a map rather than got directions from Google? Or cracked a book to find an errant fact? It’ll be like that for so many things we do, as normal practices change to reflect and take advantage of the convenience and precision of AI.

We’ll also soon see the effects of radical advances in robotics and automation, which will probably be more behind the scenes than having a robot maid in our home (though we will get to that). Yes, the way we wage war is changing dramatically, but it’s not the killer robots we think of when we envision a world in which the Terminator movies become a reality. As Elon Musk told Walt and me in an interview several years ago, these AI-powered robots and platforms will think of us more like house cats than enemies. So I guess that’s a plus.

If advances in sentience and responsiveness are as dramatic as I think they might be, the implications will be even more interesting — and problematic. Will we someday have to contemplate their rights? It’s been a constant in science fiction, but now we seem to be really on the doorstep of rethinking what it means to be human in an era when biotechnology — whether implanting chips that could enhance intelligence, altering genetic code to eliminate disease, or wearing exoskeletons that enhance human strength — is on the cutting edge.

Currently, companies like the FDA-approved ReWalk are focused on using these devices to allow those with spinal cord injuries to walk again, but there are many military and factory experiments ongoing. These “exo suits” and other mechanized clothing are only the vanguard of a body-enhancing movement that is likely to get even larger in the coming decades. Where this innovation emerges is also a consideration, since tech’s increasingly dominant player — China — is making strong moves in the sector.

The next trend will be around transportation, especially the use of cars and trucks. I recently called them the “horses of tomorrow” in a New York Times column, noting that I would never buy and likely never own another car as long as I live.

While many reacted to the piece by proclaiming that Americans will never give up car ownership, especially in rural areas, I am confident that a combination of ride-sharing, autonomous vehicles, and breakthroughs in other modes of transport will lead to profound changes that will impact the entire ecosystem and economy, especially as much of the human race lives near major metropolitan areas. We will need to build smarter cities as this shift happens.

While regulatory and technical issues are still complex, such obstacles are most certainly not insurmountable as we rethink our relationship with the entire transportation system and are forced to make changes as the planet becomes more congested and more impacted by climate change. To me, the change in this sector will have reverberations far beyond any others, despite what will seem like a slow rollout.

What is moving a lot faster is the concept of everything-on-demand, from instant delivery of goods to the perfect anticipation of needs thanks to AI-enhanced computing. That’s already here in many ways, seen most clearly in the leaps made by services like Amazon Prime. Again, the impact here is massive, fundamentally changing the way we shop and consume. Hunting and gathering are dead: large-footprint stores will be gone, replaced by those that can differentiate themselves from commodity-based goods. I’d bet that in a decade your favorite little boutique will still be there long after a giant Target is gone.

(Of course, all this does not mean there will not be a backlash over these changes since they increase consumerism and emissions and, you know, your box pile at home.)

This will be true across all sectors, including media. The old and failed concept of push — in which information was thrown at you, clogging the early internet’s pipes — will return. In that paradigm, you will not pick but be picked for; you will not seek, but it will be found. This is already happening, but it will intensify. Obviously, this has great societal implications, and we are already seeing the downside of screen addiction that is bringing social unrest, depression, and a kind of ennui about human interaction.

That trend has been and will be fueled by the end of any true semblance of privacy, a process that is already well on its way. I do not mean to belabor an issue that has gotten a lot of attention already, except to repeat what former Sun Microsystems CEO Scott McNealy said two decades ago: “You have zero privacy anyway. Get over it.” Yep. You and your data have been the fuel of the internet age, and that will not abate.

Can regulation save us? There has been some pushback to this grim state of affairs, especially in Europe with its General Data Protection Regulation — but also in Silicon Valley’s backyard, with California’s Consumer Privacy Act set to go into effect next year. But there is no national privacy bill in the US — and don’t get your hopes up, either, Bernie Bros. Meanwhile, countries like China push the envelope further by building what are essentially surveillance economies using facial recognition and intense monitoring of their citizens’ every move and keystroke.


That, of course, leaves us deeply vulnerable to even more hacking as we are jacked into the system in ways that were heretofore impossible. And this hacking will not be hard to pull off — just ask any systems engineer or coder — as an increasingly interconnected Internet of Things brings porous platforms into our kitchens, cars, and wallets. The system was built for access and connectivity; malevolent players can simply use the tools on offer. In this fight, companies like Facebook are now battling nation-states, even if they themselves have become the digital equivalent of that.

The will to win this fight — even define its terms — has been weak on the consumer side, but incursions in the 2016 election have shown us the real goal of some of these hackers: to disrupt our society and create discord. The Russians lost the Cold War, but they have proven to be quite a bit better at the Cyber War. Increasingly, as even more nefarious players ramp up, that could mean more attacks on basic infrastructure, from electric grids to phone systems. And as our devices are ever more connected, battling it will be like pushing back the ocean.

Ten years ago, when Walt and I wrote our Web 3.0 missive, the iPhone was just a couple of years old, a novelty for rich people that could barely make a call. Think how quaint that all sounds now. But that one device unleashed a never-ending revolution of change, each cycle accelerating faster than the last. These technologies have unmoored people from their communities and removed once-sacred societal strictures. This is the first time in history that people have been able to talk to each other without gatekeepers or any other mechanisms of control.

It is not going well. And we now are surprised when we realize, for example, how so many have been radicalized by videos on YouTube or 8chan message boards. But with new and more immersive technologies just around the corner, you haven’t seen anything yet.

If it appears as if the forces of evil are winning here, it is because they are.

That might seem dire. Well, it is. While it is critical that we now put in some guardrails to make these profound developments less unsettling, having not done so at the start — the original sin of pushing growth over everything else — presents humanity with a massive challenge. If change is the constant, ever-morphing as we move to control it, how can we manage what we have invented?

There’s only one way as far as I can tell. Long ago, Steve Jobs launched a marketing campaign that urged people to “Think Different.” That has never been more true, except I would adjust it slightly. To face the modern age — and the future we have created but don’t yet understand — we not only have to think different as all these new technologies roll out ever more quickly. We have to be different.

If that is a cliffhanger, so be it, because a cliff is exactly where we are.

Categorized in Science & Tech

[This article is originally published in searchengineland.com - Uploaded by AIRS Member: Rene Meyer]

By multiple measures, Google is the internet’s most popular search engine. But Google’s not only a web search engine. Images, videos, mobile content — even Google TV!

Major Google Services

Google releases a dizzying array of new products and product updates on a regular basis, and Search Engine Land keeps you up-to-date with all the news. Here are just a few of our popular Google categories, where you can read past coverage:

Google: Our “everything” category, this lists all stories we’ve written about Google, regardless of subtopic.

Google Web Search: Our stories about Google’s web search engine, including changes and new features. Also see: Google: OneBox, Plus Box & Direct Answers, Google: Universal Search and Google: User Interface.

Google SEO: Articles from us about getting listed for free via SEO in Google’s search engine. Also see the related category of Google Webmaster Central.

Google AdWords: Our coverage of Google’s paid search advertising program.

Google AdSense: Stories about Google’s ad program for publishers, which allows content owners to carry Google ads and earn money.

Google Maps & Local: Coverage of Google Maps, which allows you to locate places, businesses, get directions and much more. Also see Google Earth for coverage of Google’s mapping application.

Google Street View: Articles about Google’s popular yet controversial Street View system that uses cars to take photos of homes and businesses, which are then made available through Google Maps.

Google YouTube & Video: Articles about Google’s YouTube service, which allows anyone to upload video content. YouTube also has so much search traffic that it stands out as a major search engine of its own.

Google Logos: Google loves to have special logos for holidays and to commemorate special events. We track some of the special “Google Doodles,” as the company calls them. Also see our retrospective story, Those Special Google Logos, Sliced & Diced, Over The Years.

Also see our special guide for searchers, How To Use Google To Search.

Google Resources

Further below is a full list of additional Google topics that we track. But first, here are a few sites that track Google in-depth.

First up is Google’s own Official Google Blog. Google also has many other blogs for individual products, which are listed on the official blog. This feed keeps you up-to-date on any official blog post, from any of Google’s blogs. Google also had a traditional press release area.

Beyond official Googledom are a number of news sites that track Google particularly in-depth. These include: Dirson (in Spanish), eWeek’s Google Watch, Google Blogoscoped, Google Operating System, John Battelle, Search Engine Land, Search Engine Roundtable, WebProNews and ZDNet Googling Google.

The Full Google List

We said Google is more than just a web search engine, right? Below is the full list of various Google search and search-related products that we track. Click any link to see our stories in that particular area:

  • Google (stories from all categories below, combined)
  • Google: Accounts & Profiles
  • Google: Acquisitions
  • Google: Ad Planner
  • Google: AdSense
  • Google: AdWords
  • Google: Alerts
  • Google: Analytics
  • Google: APIs
  • Google: Apps For Your Domain
  • Google: Audio Ads
  • Google: Base
  • Google: Blog Search
  • Google: Blogger
  • Google: Book Search
  • Google: Browsers
  • Google: Business Issues
  • Google: Buzz
  • Google: Calendar
  • Google: Checkout
  • Google: Chrome
  • Google: Code Search
  • Google: Content Central
  • Google: Critics
  • Google: Custom Search Engine
  • Google: Dashboard
  • Google: Definitions
  • Google: Desktop
  • Google: Discussions
  • Google: Docs & Spreadsheets
  • Google: Domains
  • Google: DoubleClick
  • Google: Earth
  • Google: Editions
  • Google: Employees
  • Google: Enterprise Search
  • Google: FeedBurner
  • Google: Feeds
  • Google: Finance
  • Google: Gadgets
  • Google: Gears
  • Google: General
  • Google: Gmail
  • Google: Groups
  • Google: Health
  • Google: iGoogle
  • Google: Images
  • Google: Internet Access
  • Google: Jet
  • Google: Knol
  • Google: Labs
  • Google: Legal
  • Google: Logos
  • Google: Maps & Local
  • Google: Marketing
  • Google: Mobile
  • Google: Moderator
  • Google: Music
  • Google: News
  • Google: Offices
  • Google: OneBox, Plus Box & Direct Answers
  • Google: OpenSocial
  • Google: Orkut
  • Google: Other
  • Google: Other Ads
  • Google: Outside US
  • Google: Parodies
  • Google: Partnerships
  • Google: Patents
  • Google: Personalized Search
  • Google: Picasa
  • Google: Place Pages
  • Google: Print Ads & AdSense For Newspapers
  • Google: Product Search
  • Google: Q & A
  • Google: Reader
  • Google: Real Time Search
  • Google: Search Customization
  • Google: SearchWiki
  • Google: Security
  • Google: SEO
  • Google: Sidewiki
  • Google: Sitelinks
  • Google: Social Search
  • Google: SpyView
  • Google: Squared
  • Google: Street View
  • Google: Suggest
  • Google: Toolbar
  • Google: Transit
  • Google: Translate
  • Google: Trends
  • Google: TV
  • Google: Universal Search
  • Google: User Interface
  • Google: Voice Search
  • Google: Web History & Search History
  • Google: Web Search
  • Google: Webmaster Central
  • Google: Website Optimizer
  • Google: YouTube & Video

Categorized in Search Engine

[This article is originally published in searchengineland.com written By Greg Sterling - Uploaded by AIRS Member: Rene Meyer]

Google's response is that significant SERP personalization is 'a myth.'

A new study (via Wired) from Google rival DuckDuckGo charges that Google search personalization is contributing to “filter bubbles.” Google disputes this and says that search personalization is mostly a myth.

The notion of filter bubbles in social media or search has been a controversial topic since the term was coined a number of years ago by Eli Pariser to describe how relevance algorithms tend to reinforce users’ existing beliefs and biases.

Significant variation in results. The DuckDuckGo study had 87 U.S. adults search for “gun control,” “immigration,” and “vaccinations” at the same time on June 24, 2018. They searched first in private (incognito) mode and then in normal, non-private browsing mode. Most of the queries were done on the desktop; a smaller percentage were on mobile devices. It was a small test in terms of the number of participants and query volume.

Below are the top-level findings according to DuckDuckGo’s discussion:

  • Most people saw results unique to them, even when logged out and in private browsing mode.
  • Google included links for some participants that it did not include for others.
  • They saw significant variation within the News and Videos infoboxes.
  • Private browsing mode and being logged out of Google offered almost zero filter bubble protection.

The DuckDuckGo post offers a more in-depth discussion of the findings, as well as the raw data for download.
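To make that comparison concrete, here is a minimal sketch, not the study’s actual code, of how variation between two participants’ result lists for the same query could be quantified. The URL lists and function names are invented for illustration.

```python
# Minimal sketch of the comparison idea: given the ordered result URLs two
# participants saw for the same query, measure how much the sets and the
# orderings differ. The URL lists below are invented for illustration.

def jaccard_overlap(results_a, results_b):
    """Share of distinct URLs that appear in both result lists."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def mean_rank_shift(results_a, results_b):
    """Average absolute rank difference for URLs that both participants saw."""
    pos_b = {url: i for i, url in enumerate(results_b)}
    shared = [url for url in results_a if url in pos_b]
    if not shared:
        return None
    return sum(abs(i - pos_b[url])
               for i, url in enumerate(results_a) if url in pos_b) / len(shared)

participant_1 = ["site-a.com", "site-b.com", "site-c.com", "site-d.com"]
participant_2 = ["site-b.com", "site-a.com", "site-e.com", "site-d.com"]

print(jaccard_overlap(participant_1, participant_2))  # 0.6
print(mean_rank_shift(participant_1, participant_2))  # ~0.67
```

Low overlap and large rank shifts across many participants would look like the “unique results” the study reports; values near 1.0 and 0 would look like Google’s “mostly no personalization” position.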

Comparing the variation in search results

My test found minor differences. I searched “gun control,” “immigration,” and “vaccinations” in private mode and non-incognito mode. I didn’t find the results to be substantially different, though there were some differences in the SERP.

In the case of “immigration” (above), you can see there’s an ad in the incognito results but none in the non-private results. The normal results also feature a larger Knowledge Panel and “people also ask” search suggestions, which didn’t appear on the first page in the incognito results but did appear in subsequent searches on the same term.

Google Search Liaison Danny Sullivan responded to the study with a series of tweets explaining that there’s very limited personalization in search results but that the company does show different results because of location, language differences, platform and time (on occasion). He said, “Over the years, a myth has developed that Google Search personalizes so much that for the same query, different people might get significantly different results from each other. This isn’t the case. Results can differ, but usually for non-personalized reasons.”

In September this year, Google told CNBC that it essentially doesn’t personalize search results.

Why you should care. Non-personalized search results make the job of SEO practitioners easier because they can better determine the performance of their tactics. Google doesn’t consider “localization” to be personalization, although many SEOs would argue that it is. On mobile devices, proximity is widely seen as a dominant local ranking factor.

Categorized in Search Engine

[This article is originally published in seroundtable.com written by Barry Schwartz - Uploaded by AIRS Member: Jason Bourne]

John Mueller from Google explained on Twitter the difference between the lastmod timestamp in an XML sitemap and the date on a web page. John said the sitemap lastmod is when the page as a whole was last changed, for crawling/indexing purposes. The date on the page is the date to be associated with the primary content on the page.

John Mueller first said, "A page can change without its primary content changing." He added that he doesn't think "crawling needs to be synced to the date associated with the content." The example he gave was that "site redesigns or site moves are pretty clearly disconnected from the content date."
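As a rough illustration of that distinction, the sketch below builds a sitemap entry whose lastmod reflects the last change to the page as a whole, while the date tied to the primary content is tracked separately. The URL and dates are invented.

```python
# Sketch of the distinction above: <lastmod> reflects when the page as a
# whole last changed (e.g. a redesign touched the HTML), while the visible
# date stays tied to the primary content. URL and dates are invented.
from datetime import date

page = {
    "url": "https://example.com/guide-to-xml-sitemaps",
    "content_date": date(2017, 3, 2),        # date shown on the page itself
    "page_last_changed": date(2019, 1, 15),  # last change to the page as a whole
}

sitemap_entry = (
    "<url>\n"
    f"  <loc>{page['url']}</loc>\n"
    f"  <lastmod>{page['page_last_changed'].isoformat()}</lastmod>\n"
    "</url>"
)
print(sitemap_entry)
```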

He then added this tweet:

[Embedded tweet from John Mueller]

So there you have it.

Forum discussion at Twitter.

Categorized in Search Engine

[This article is originally published in theguardian.com written by Carole Cadwalladr - Uploaded by AIRS Member: Jennifer Levin]

Tech-savvy rightwingers have been able to ‘game’ the algorithms of internet giants and create a new reality where Hitler is a good guy, Jews are evil and… Donald Trump becomes president

Here’s what you don’t want to do late on a Sunday night. You do not want to type seven letters into Google. That’s all I did. I typed: “a-r-e”. And then “j-e-w-s”. Since 2008, Google has attempted to predict what question you might be asking and offers you a choice. And this is what it did. It offered me a choice of potential questions it thought I might want to ask: “are Jews a race?”, “are Jews white?”, “are Jews Christians?”, and finally, “are Jews evil?”

Are Jews evil? It’s not a question I’ve ever thought of asking. I hadn’t gone looking for it. But there it was. I press enter. A page of results appears. This was Google’s question. And this was Google’s answer: Jews are evil. Because there, on my screen, was the proof: an entire page of results, nine out of 10 of which “confirm” this. The top result, from a site called Listovative, has the headline: “Top 10 Major Reasons Why People Hate Jews.” I click on it: “Jews today have taken over marketing, militia, medicinal, technological, media, industrial, cinema challenges, etc and continue to face the worlds [sic] envy through unexplained success stories given their inglorious past and vermin like repression all over Europe.”

Google searches. It’s the verb, to Google. It’s what we all do, all the time, whenever we want to know anything. We Google it. The site handles at least 63,000 searches a second, 5.5bn a day. Its mission as a company, the one-line overview that has informed the company since its foundation and is still the banner headline on its corporate website today, is to “organize the world’s information and make it universally accessible and useful”. It strives to give you the best, most relevant results. And in this instance the third-best, most relevant result to the search query “are Jews… ” is a link to an article from stormfront.org, a neo-Nazi website. The fifth is a YouTube video: “Why the Jews are Evil. Why we are against them.”

The sixth is from Yahoo Answers: “Why are Jews so evil?” The seventh result is: “Jews are demonic souls from a different world.” And the 10th is from jesus-is-saviour.com: “Judaism is Satanic!”

There’s one result in the 10 that offers a different point of view. It’s a link to a rather dense, scholarly book review from thetabletmag.com, a Jewish magazine, with the unfortunately misleading headline: “Why Literally Everybody In the World Hates Jews.”

I feel like I’ve fallen down a wormhole, entered some parallel universe where black is white, and good is bad. Though later, I think that perhaps what I’ve actually done is scraped the topsoil off the surface of 2016 and found one of the underground springs that have been quietly nurturing it. It’s been there all the time, of course. Just a few keystrokes away… on our laptops, our tablets, our phones. This isn’t a secret Nazi cell lurking in the shadows. It’s hiding in plain sight.

[Screenshot: Google search results for “are women”]

Stories about fake news on Facebook have dominated certain sections of the press for weeks following the American presidential election, but arguably this is even more powerful, more insidious. Frank Pasquale, professor of law at the University of Maryland, and one of the leading academic figures calling for tech companies to be more open and transparent, calls the results “very profound, very troubling”.

He came across a similar instance in 2006 when, “If you typed ‘Jew’ in Google, the first result was jewwatch.org. It was ‘look out for these awful Jews who are ruining your life’. And the Anti-Defamation League went after them and so they put an asterisk next to it which said: ‘These search results may be disturbing but this is an automated process.’ But what you’re showing – and I’m very glad you are documenting it and screenshotting it – is that despite the fact they have vastly researched this problem, it has gotten vastly worse.”

And the ordering of search results does influence people, says Martin Moore, director of the Centre for the Study of Media, Communication and Power at King’s College, London, who has written at length on the impact of the big tech companies on our civic and political spheres. “There’s large-scale, statistically significant research into the impact of search results on political views. And the way in which you see the results and the types of results you see on the page necessarily has an impact on your perspective.” Fake news, he says, has simply “revealed a much bigger problem. These companies are so powerful and so committed to disruption. They thought they were disrupting politics but in a positive way. They hadn’t thought about the downsides. These tools offer remarkable empowerment, but there’s a dark side to it. It enables people to do very cynical, damaging things.”

Google is knowledge. It’s where you go to find things out. And evil Jews are just the start of it. There are also evil women. I didn’t go looking for them either. This is what I type: “a-r-e w-o-m-e-n”. And Google offers me just two choices, the first of which is: “Are women evil?” I press return. Yes, they are. Every one of the 10 results “confirms” that they are, including the top one, from a site called sheddingoftheego.com, which is boxed out and highlighted: “Every woman has some degree of prostitute in her. Every woman has a little evil in her… Women don’t love men, they love what they can do for them. It is within reason to say women feel attraction but they cannot love men.”

Next I type: “a-r-e m-u-s-l-i-m-s”. And Google suggests I should ask: “Are Muslims bad?” And here’s what I find out: yes, they are. That’s what the top result says and six of the others. Without typing anything else, simply putting the cursor in the search box, Google offers me two new searches and I go for the first, “Islam is bad for society”. In the next list of suggestions, I’m offered: “Islam must be destroyed.”

Jews are evil. Muslims need to be eradicated. And Hitler? Do you want to know about Hitler? Let’s Google it. “Was Hitler bad?” I type. And here’s Google’s top result: “10 Reasons Why Hitler Was One Of The Good Guys”. I click on the link: “He never wanted to kill any Jews”; “he cared about conditions for Jews in the work camps”; “he implemented social and cultural reform.” Eight out of the other 10 search results agree: Hitler really wasn’t that bad.

A few days later, I talk to Danny Sullivan, the founding editor of SearchEngineLand.com. He’s been recommended to me by several academics as one of the most knowledgeable experts on search. Am I just being naive, I ask him? Should I have known this was out there? “No, you’re not being naive,” he says. “This is awful. It’s horrible. It’s the equivalent of going into a library and asking a librarian about Judaism and being handed 10 books of hate. Google is doing a horrible, horrible job of delivering answers here. It can and should do better.”

He’s surprised too. “I thought they stopped offering autocomplete suggestions for religions in 2011.” And then he types “are women” into his own computer. “Good lord! That answer at the top. It’s a featured result. It’s called a ‘direct answer’. This is supposed to be indisputable. It’s Google’s highest endorsement.” That every woman has some degree of prostitute in her? “Yes. This is Google’s algorithm going terribly wrong.”

I contacted Google about its seemingly malfunctioning autocomplete suggestions and received the following response: “Our search results are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs – as a company, we strongly value a diversity of perspectives, ideas, and cultures.”

Google isn’t just a search engine, of course. Search was the foundation of the company, but that was just the beginning. Alphabet, Google’s parent company, now has the greatest concentration of artificial intelligence experts in the world. It is expanding into healthcare, transportation, energy. It’s able to attract the world’s top computer scientists, physicists, and engineers. It’s bought hundreds of start-ups, including Calico, whose stated mission is to “cure death”, and DeepMind, which aims to “solve intelligence”.

And 20 years ago it didn’t even exist. When Tony Blair became prime minister, it wasn’t possible to Google him: the search engine had yet to be invented. The company was only founded in 1998 and Facebook didn’t appear until 2004. Google’s founders Sergey Brin and Larry Page are still only 43. Mark Zuckerberg of Facebook is 32. Everything they’ve done, the world they’ve remade, has been done in the blink of an eye.

But it seems the implications of the power and reach of these companies are only now seeping into the public consciousness. I ask Rebecca MacKinnon, director of the Ranking Digital Rights project at the New America Foundation, whether it was the recent furore over fake news that woke people up to the danger of ceding our rights as citizens to corporations. “It’s kind of weird right now,” she says, “because people are finally saying, ‘Gee, Facebook and Google really have a lot of power’ like it’s this big revelation. And it’s like, ‘D’oh.’”

MacKinnon has particular expertise in how authoritarian governments adapt to the internet and bend it to their purposes. “China and Russia are a cautionary tale for us. I think what happens is that it goes back and forth. So during the Arab spring, it seemed like the good guys were further ahead. And now it seems like the bad guys are. Pro-democracy activists are using the internet more than ever but at the same time, the adversary has gotten so much more skilled.”

Last week Jonathan Albright, an assistant professor of communications at Elon University in North Carolina, published the first detailed research on how rightwing websites had spread their message. “I took a list of these fake news sites that was circulating, I had an initial list of 306 of them and I used a tool – like the one Google uses – to scrape them for links and then I mapped them. So I looked at where the links went – into YouTube and Facebook, and between each other, millions of them… and I just couldn’t believe what I was seeing.

“They have created a web that is bleeding through on to our web. This isn’t a conspiracy. There isn’t one person who’s created this. It’s a vast system of hundreds of different sites that are using all the same tricks that all websites use. They’re sending out thousands of links to other sites and together this has created a vast satellite system of rightwing news and propaganda that has completely surrounded the mainstream media system.”

He found 23,000 pages and 1.3m hyperlinks. “And Facebook is just the amplification device. When you look at it in 3D, it actually looks like a virus. And Facebook was just one of the hosts for the virus that helps it spread faster. You can see the New York Times in there and the Washington Post and then you can see how there’s a vast, vast network surrounding them. The best way of describing it is as an ecosystem. This really goes way beyond individual sites or individual stories. What this map shows is the distribution network and you can see that it’s surrounding and actually choking the mainstream news ecosystem.”
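Albright’s exact tooling isn’t published here, but the general technique he describes, crawling a seed list of sites, extracting their outbound links, and mapping the resulting network, can be sketched roughly as follows. The seed URLs are placeholders, not sites from his study.

```python
# Rough sketch of the general technique described above: fetch a seed list
# of sites, extract their outbound links, and build a directed graph of
# which domains link to which. Seed URLs are placeholders for illustration.
import requests
import networkx as nx
from bs4 import BeautifulSoup
from urllib.parse import urlparse, urljoin

seed_sites = ["https://example-site-1.com", "https://example-site-2.com"]

graph = nx.DiGraph()
for site in seed_sites:
    try:
        html = requests.get(site, timeout=10).text
    except requests.RequestException:
        continue
    source_domain = urlparse(site).netloc
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        target_domain = urlparse(urljoin(site, a["href"])).netloc
        if target_domain and target_domain != source_domain:
            graph.add_edge(source_domain, target_domain)

# The resulting graph can then be exported for visualisation (e.g. in Gephi).
print(graph.number_of_nodes(), "domains,", graph.number_of_edges(), "links")
```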

Charlie Beckett, a professor in the school of media and communications at LSE, tells me: “We’ve been arguing for some time now that plurality of news media is good. Diversity is good. Critiquing the mainstream media is good. But now… it’s gone wildly out of control. What Jonathan Albright’s research has shown is that this isn’t a byproduct of the internet. And it’s not even being done for commercial reasons. It’s motivated by ideology, by people who are quite deliberately trying to destabilise the internet.”

Albright’s map also provides a clue to understanding the Google search results I found. What these rightwing news sites have done, he explains, is what most commercial websites try to do. They try to find the tricks that will move them up Google’s PageRank system. They try and “game” the algorithm. And what his map shows is how well they’re doing that.

That’s what my searches are showing too. That the right has colonised the digital space around these subjects – Muslims, women, Jews, the Holocaust, black people – far more effectively than the liberal left.

“It’s an information war,” says Albright. “That’s what I keep coming back to.”

But it’s where it goes from here that’s truly frightening. I ask him how it can be stopped. “I don’t know. I’m not sure it can be. It’s a network. It’s far more powerful than any one actor.”

So, it’s almost got a life of its own? “Yes, and it’s learning. Every day, it’s getting stronger.”

The more people who search for information about Jews, the more people will see links to hate sites, and the more they click on those links (very few people click on to the second page of results) the more traffic the sites will get, the more links they will accrue and the more authoritative they will appear. This is an entirely circular knowledge economy that has only one outcome: an amplification of the message. Jews are evil. Women are evil. Islam must be destroyed. Hitler was one of the good guys.
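That feedback loop is, in simplified form, what link-based ranking formalizes. The toy calculation below is a textbook-style PageRank iteration, not Google’s actual production algorithm, run on an invented link graph in which a fringe site accrues links from a small cluster of allied pages.

```python
# Toy, textbook-style PageRank iteration (not Google's production ranking):
# every inbound link raises a page's score, and a higher score makes the
# links that page hands out worth more, so the loop is self-reinforcing.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                if target in new_rank:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Invented example: a fringe site accrues links from a cluster of allied pages.
links = {
    "fringe_site": ["allied_page_1"],
    "allied_page_1": ["fringe_site", "allied_page_2"],
    "allied_page_2": ["fringe_site", "allied_page_1"],
    "unrelated_blog": ["fringe_site"],
}
print(pagerank(links))  # fringe_site ends up with the highest score
```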

And the constellation of websites that Albright found – a sort of shadow internet – has another function. More than just spreading rightwing ideology, they are being used to track and monitor and influence anyone who comes across their content. “I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages. This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go.”

Cambridge Analytica, an American-owned company based in London, was employed by both the Vote Leave campaign and the Trump campaign. Dominic Cummings, the campaign director of Vote Leave, has made few public announcements since the Brexit referendum but he did say this: “If you want to make big improvements in communication, my advice is – hire physicists.”

Steve Bannon, the founder of Breitbart News and the newly appointed chief strategist to Trump, is on Cambridge Analytica’s board and it has emerged that the company is in talks to undertake political messaging work for the Trump administration. It claims to have built psychological profiles using 5,000 separate pieces of data on 220 million American voters. It knows their quirks and nuances and daily habits and can target them individually.

“It’s all done completely opaquely and they can spend as much money as they like on particular locations because you can focus on a five-mile radius or even a single demographic. Fake news is important but it’s only one part of it. These companies have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.”

Did such micro-targeted propaganda – currently legal – swing the Brexit vote? We have no way of knowing. Did the same methods used by Cambridge Analytica help Trump to victory? Again, we have no way of knowing. This is all happening in complete darkness. We have no way of knowing how our personal data is being mined and used to influence us. We don’t realise that the Facebook page we are looking at, the Google page, the ads that we are seeing, the search results we are using, are all being personalised to us. We don’t see it because we have nothing to compare it to. And it is not being monitored or recorded. It is not being regulated. We are inside a machine and we simply have no way of seeing the controls. Most of the time, we don’t even realise that there are controls.

Rebecca MacKinnon says that most of us consider the internet to be like “the air that we breathe and the water that we drink”. It surrounds us. We use it. And we don’t question it. “But this is not a natural landscape. Programmers and executives and editors and designers, they make this landscape. They are human beings and they all make choices.”

But we don’t know what choices they are making. Neither Google nor Facebook makes its algorithms public. Why did my Google search return nine out of 10 search results that claim Jews are evil? We don’t know and we have no way of knowing. Their systems are what Frank Pasquale describes as “black boxes”. He calls Google and Facebook “a terrifying duopoly of power” and has been leading a growing movement of academics who are calling for “algorithmic accountability”. “We need to have regular audits of these systems,” he says. “We need people in these companies to be accountable. In the US, under the Digital Millennium Copyright Act, every company has to have a spokesman you can reach. And this is what needs to happen. They need to respond to complaints about hate speech, about bias.”

Is bias built into the system? Does it affect the kind of results that I was seeing? “There’s all sorts of bias about what counts as a legitimate source of information and how that’s weighted. There’s enormous commercial bias. And when you look at the personnel, they are young, white and perhaps Asian, but not black or Hispanic and they are overwhelmingly men. The worldview of young wealthy white men informs all these judgments.”

Later, I speak to Robert Epstein, a research psychologist at the American Institute for Behavioural Research and Technology, and the author of the study that Martin Moore told me about (and that Google has publicly criticised), showing how search-rank results affect voting patterns. On the other end of the phone, he repeats one of the searches I did. He types “do blacks…” into Google.

“Look at that. I haven’t even hit a button and it’s automatically populated the page with answers to the query: ‘Do blacks commit more crimes?’ And look, I could have been going to ask all sorts of questions. ‘Do blacks excel at sports’, or anything. And it’s only given me two choices and these aren’t simply search-based or the most searched terms right now. Google used to use that but now they use an algorithm that looks at other things. Now, let me look at Bing and Yahoo. I’m on Yahoo and I have 10 suggestions, not one of which is ‘Do black people commit more crime?’

“And people don’t question this. Google isn’t just offering a suggestion. This is a negative suggestion and we know that negative suggestions, depending on lots of things, can draw between five and 15 more clicks. And this is all programmed. And it could be programmed differently.”

What Epstein’s work has shown is that the contents of a page of search results can influence people’s views and opinions. The type and order of search rankings was shown to influence voters in India in double-blind trials. There were similar results relating to the search suggestions you are offered.

“The general public are completely in the dark about very fundamental issues regarding online search and influence. We are talking about the most powerful mind-control machine ever invented in the history of the human race. And people don’t even notice it.”

Damien Tambini, an associate professor at the London School of Economics, who focuses on media regulation, says that we lack any sort of framework to deal with the potential impact of these companies on the democratic process. “We have structures that deal with powerful media corporations. We have competition laws. But these companies are not being held responsible. There are no powers to get Google or Facebook to disclose anything. There’s an editorial function to Google and Facebook but it’s being done by sophisticated algorithms. They say it’s machines not editors. But that’s simply a mechanised editorial function.”

And the companies, says John Naughton, the Observer columnist and a senior research fellow at Cambridge University, are terrified of acquiring editorial responsibilities they don’t want. “Though they can and regularly do tweak the results in all sorts of ways.”

Certainly, the results about Google on Google don’t seem entirely neutral. Google “Is Google racist?” and the featured result – the Google answer boxed out at the top of the page – is quite clear: no. It is not.

But the enormity and complexity of having two global companies of a kind we have never seen before influencing so many areas of our lives is such, says Naughton, that “we don’t even have the mental apparatus to even know what the problems are”.

And this is especially true of the future. Google and Facebook are at the forefront of AI. They are going to own the future. And the rest of us can barely start to frame the sorts of questions we ought to be asking. “Politicians don’t think long term. And corporations don’t think long term because they’re focused on the next quarterly results and that’s what makes Google and Facebook interesting and different. They are absolutely thinking long term. They have the resources, the money, and the ambition to do whatever they want.

“They want to digitize every book in the world: they do it. They want to build a self-driving car: they do it. The fact that people are reading about these fake news stories and realising that this could have an effect on politics and elections, it’s like, ‘Which planet have you been living on?’ For Christ’s sake, this is obvious.”

“The internet is among the few things that humans have built that they don’t understand.” It is “the largest experiment involving anarchy in history. Hundreds of millions of people are, each minute, creating and consuming an untold amount of digital content in an online world that is not truly bound by terrestrial laws.” The internet as a lawless anarchic state? A massive human experiment with no checks and balances and untold potential consequences? What kind of digital doom-monger would say such a thing? Step forward, Eric Schmidt – Google’s chairman. Those are the first lines of the book The New Digital Age, which he wrote with Jared Cohen.

And what next? Rebecca MacKinnon’s research has shown how authoritarian regimes reshape the internet for their own purposes. Is that what’s going to happen with Silicon Valley and Trump? As Martin Moore points out, the president-elect claimed that Apple chief executive Tim Cook called to congratulate him soon after his election victory. “And there will undoubtedly be pressure on them to collaborate,” says Moore.

Journalism is failing in the face of such change and is only going to fail further. New platforms have put a bomb under the financial model – advertising – resources are shrinking, traffic is increasingly dependent on them, and publishers have no access, no insight at all, into what these platforms are doing in their headquarters, their labs. And now they are moving beyond the digital world into the physical. The next frontiers are healthcare, transportation, energy. And just as Google is a near-monopoly for search, its ambition to own and control the physical infrastructure of our lives is what’s coming next. It already owns our data and with it our identity. What will it mean when it moves into all the other areas of our lives?

“At the moment, there’s a distance when you Google ‘Jews are’ and get ‘Jews are evil’,” says Julia Powles, a researcher at Cambridge on technology and law. “But when you move into the physical realm, and these concepts become part of the tools being deployed when you navigate around your city or influence how people are employed, I think that has really pernicious consequences.”

The headline was that DeepMind was going to work with the NHS to develop an app that would provide early warning for sufferers of kidney disease. And it is, but DeepMind’s ambitions – “to solve intelligence” – go way beyond that. The entire history of 2 million NHS patients is, for artificial intelligence researchers, a treasure trove. And its entry into the NHS – providing useful services in exchange for our personal data – is another massive step in its power and influence in every part of our lives.

Because the stage beyond search is prediction. Google wants to know what you want before you know yourself. “That’s the next stage,” says Martin Moore. “We talk about the omniscience of these tech giants, but that omniscience takes a huge step forward again if they are able to predict. And that’s where they want to go. To predict diseases in health. It’s really, really problematic.”

For the nearly 20 years that Google has been in existence, our view of the company has been inflected by the youth and liberal outlook of its founders. Ditto Facebook, whose mission, Zuckerberg said, was not to be “a company. It was built to accomplish a social mission to make the world more open and connected.”

It would be interesting to know how he thinks that’s working out. Donald Trump is connecting through exactly the same technology platforms that supposedly helped fuel the Arab spring; connecting to racists and xenophobes. And Facebook and Google are amplifying and spreading that message. And us too – the mainstream media. Our outrage is just another node on Jonathan Albright’s data map.

“The more we argue with them, the more they know about us,” he says. “It all feeds into a circular system. What we’re seeing here is a new era of network propaganda.”

We are all points on that map. And our complicity, our credulity, being consumers not concerned citizens, is an essential part of that process. And what happens next is down to us. “I would say that everybody has been really naive and we need to reset ourselves to a much more cynical place and proceed on that basis,” is Rebecca MacKinnon’s advice. “There is no doubt that where we are now is a very bad place. But it’s we as a society who have jointly created this problem. And if we want to get to a better place, when it comes to having an information ecosystem that serves human rights and democracy instead of destroying it, we have to share responsibility for that.”

Are Jews evil? How do you want that question answered? This is our internet. Not Google’s. Not Facebook’s. Not rightwing propagandists’. And we’re the only ones who can reclaim it.

Categorized in Search Engine

[This article is originally published in london.edu written by Matt Baldwin - Uploaded by AIRS Member: Jasper Solander]

When you walk into a new office for that crunch job interview, several thoughts will pop into your head. While planning for those tough questions, you’ll also be working out how to fit in.

According to new research from Thomas Mussweiler, London Business School (LBS) Professor of Organisational Behaviour, the desire to be accepted is driven by social comparison – a process which takes place almost involuntarily – where you compare yourself to other people.

Much of Professor Mussweiler’s research focuses on social comparison processes – examining how we compare ourselves to others and thereby change our self-image, motivation, and performance. Work in this field has shown that social comparison facilitates thinking in personal perception, emotion, attitudes and problem-solving.

Now, with co-author Dr. Matt Baldwin, Post-Doctoral Research Fellow at the Social Cognition Center in Cologne, he has published a new paper, The culture of social comparison, in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), which maps US social comparison data from Google, available through Google Correlate.

The paper looks at search frequencies for a variety of emotion-related words that indicate social comparison such as ‘jealous’ and ‘pride’. Using a novel and innovative technique meant a lot of background work went into validating the method used in the paper, to ensure the results captured psychologically relevant reactions.

Why compare?

Humans’ ability to coordinate behavior economically and politically across distance and time is unique. To keep these processes running smoothly, we look to others as a comparison to create standards of behavior. 

On an individual level, everyone compares themselves to peers almost all of the time. Social comparison is an involuntary mechanism about understanding yourself and others. For example, when starting a new job, the newcomer looks around to gauge what everyone else is wearing.

Mussweiler explains: “The fundamental human tendency to look to others for social cues about what to think, and how to feel and behave, can give rise to a range of emotions. But it has also enabled humankind to thrive in a highly complex and increasingly interconnected, social world. We now understand much more about what drives that tendency.

“In socially tight situations like the first day in a new office or a job interview, there is a strong driver to know, and to mimic, what others do.”

Hanging tight and loose

The degree to which a society expects individuals to fit in is defined as ‘tight’ or ‘loose’. A tight society has a lot of conventions and punishes people for breaking them, whereas a loose society is more relaxed and has fewer rules. The scientific study of tight and loose cultures has received some recent attention in social psychology, but this is the first time search term data has been used by researchers to analyze this cultural dimension. One hypothesis the paper explores is whether people in tight cultures are more likely to look to their peers for cues on how to behave, in turn shedding light on the role comparative behavior has in creating cultural norms.

Tight and loose cultures have their own strengths and weaknesses and are considered to have developed in response to environmental pressures that might threaten our species’ survival. It is hypothesized that tight cultures emerged in response to stress like famine and disease. In these challenging environments, you need strict rules and a high degree of behavioral regulation and restriction. In other words, a society under pressure can’t afford to allow people to act however they please because doing so can endanger the group.

Drawing upon the existing research, the paper includes a Culture of Social Comparison Map of the US, showing which states are ‘tight’ and which are ‘loose’. The map is the first time tightness has been linked to making more social comparisons. 

The interactive map compares each US state on this scale of tight through to loose. The southern states led by Mississippi scored as the tightest. Oregon and the north broadly ranked the loosest.

Why use Google to examine social comparison?

Using Google Correlate is valid in this study because these are real queries, from real people, about the world they live in. The challenge was separating the relevant searches from the irrelevant.

“While social comparison has been studied in the lab, the lab has a certain artificiality,” observes Professor Mussweiler. “Typically the social comparative information provided in the laboratory is not the same as the genuine information individuals seek in the real world.

“Nowadays people seek comparative information about others on the internet so that is what makes this data valuable.”

Researchers have previously used the Google Correlate database to map the spread of flu through search queries. Using a similar correlation method, this paper demonstrates that Google searches related to social comparison are more frequent in tight cultures than in loose cultures. 
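A minimal sketch of that correlation step might look like the following, assuming per-state tightness scores and normalised search frequencies are already in hand. All numbers are invented placeholders, not the paper’s data.

```python
# Minimal sketch of the correlation approach described above: relate each
# state's relative search frequency for a comparison-related term to its
# cultural "tightness" score. All numbers below are invented placeholders.
import numpy as np

# Hypothetical per-state values (same state order in both arrays).
tightness_score  = np.array([78.0, 64.5, 51.2, 39.8, 27.3])
search_frequency = np.array([1.32, 1.10, 0.97, 0.84, 0.71])  # normalised query volume

r = np.corrcoef(tightness_score, search_frequency)[0, 1]
print(f"Pearson r = {r:.2f}")  # a positive r means tighter states search more
```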

The implication of the findings is that social comparison is the link between ecological threat, the broader social behavior it creates and the outcomes for individuals. A conclusion which could be drawn is that comparative thinking is not just a building block of our own thoughts but of society as a whole.

The next steps for the methodology are to use it to explore how social comparison varies over time and in response to different events like disease outbreaks, natural disasters, terror threats, and elections.

Categorized in Search Engine

Source: This article was published on computerworld.com by Mike Elgan - Contributed by Member: Dorothy Allen

If you think a search engine exists as an index to the internet, it’s time to update your thinking.

This column is not about politics. It makes no political judgments and takes no political positions. No, really! Stay with me here.

President Trump this week slammed Google, claiming that the company “rigged” Google News Search results to favor stories and news organizations critical of the president.

To drive home his claim about bias, Trump posted a video on Twitter this week with the hashtag #StopTheBias (which, at the time I wrote this, had 4.36 million views), claiming that Google promoted President Barack Obama’s State of the Union addresses, but stopped the practice when Trump took office.

In a statement issued to the press, a Google spokesperson said that the company did not promote on its homepage either Obama’s or Trump’s first “State of the Union” address, because technically those are considered mere “addresses to a joint session” of Congress, the idea being that brand-new presidents are not in a position to report on the “state of the nation.” Google also claimed that it did promote Trump’s second and most recent State of the Union, a claim that screenshots found on social media and pages captured by the Wayback Machine appear to confirm.

The facts around this incident are being funneled into ongoing, rancorous online political debates, which, in my opinion, isn’t particularly interesting.

What is interesting is the Big Question this conflict brings to the surface.

What is a search engine?

A search engine can be four things.

  • An index to the internet

When Google first launched its search engine in 1998, it was clear what a search engine was: an index of the internet.

Google’s killer innovation was its ability to rank pages in a way that was supposed to reflect the relative relevance or importance of each result.

Both the results and the ranking were supposed to be a reflection or a snapshot of the internet itself, not an index to the information out there in the real world.

  • An arbiter of what’s true

In this view, Google Search would favor information that’s objectively true and de-emphasize links to content that’s objectively untrue.

  • An objective source of information

The objective source idea is that Google makes an attempt to present all sides of contentious issues and all sources of information, without favoring any ideas or sources.

  • A customized, personalized source of information

The personalized source concept says that a search engine gives each user a different set of results based on what that user wants regardless of what’s true, what’s happening on the internet or any other factor.

This is all pretty abstract, so here’s a clarifying thought experiment.

When someone searches Google to find out the shape of the Earth, how should Google approach that query? It depends on what Google believes a search engine is.

(Note that it’s likely that flat-Earth proponents generate, link to and chatter about the idea that the Earth is flat more than people who believe it’s spherical. Let’s assume for the sake of argument that, objectively, the content and activity on the actual internet favors the flat-Earth idea.)

If a search engine is supposed to be an index to the internet, then search results for the shape of the Earth should favor the flat-Earth idea.

If a search engine is supposed to be an arbiter of what’s true, then search results should favor the spherical-Earth idea.

If a search engine is supposed to be an objective source of information, then search results should provide a balanced result that equally represents both flat- and spherical-Earth theories.

And if a search engine is supposed to be a customized, personalized source of information, then the results should favor either the flat-Earth idea or the spherical-Earth idea, depending on who is doing the searching.
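
To make the thought experiment concrete, here is a small, purely illustrative Python sketch, using invented scores and labels, that shows how the same two documents could be ordered differently depending on which of the four conceptions a search engine adopts. Nothing here reflects how Google actually ranks pages.

    # Illustrative only: four ways to order the same results for "shape of the Earth".
    # Document fields and scores are invented, not real ranking signals.
    docs = [
        {"title": "The Earth is flat",      "web_activity": 0.9, "is_true": False, "stance": "flat"},
        {"title": "The Earth is spherical", "web_activity": 0.4, "is_true": True,  "stance": "sphere"},
    ]

    def rank_index_of_internet(docs):
        # Model 1: mirror the internet's own activity, whatever it says.
        return sorted(docs, key=lambda d: d["web_activity"], reverse=True)

    def rank_arbiter_of_truth(docs):
        # Model 2: put objectively true content first.
        return sorted(docs, key=lambda d: d["is_true"], reverse=True)

    def rank_objective_source(docs):
        # Model 3: interleave stances so every side is represented.
        by_stance = {}
        for d in docs:
            by_stance.setdefault(d["stance"], []).append(d)
        groups = list(by_stance.values())
        return [g[i] for i in range(max(map(len, groups))) for g in groups if i < len(g)]

    def rank_personalized(docs, user_prefers):
        # Model 4: favor whatever this particular user already believes.
        return sorted(docs, key=lambda d: d["stance"] == user_prefers, reverse=True)

    for model, ordered in [
        ("index of the internet", rank_index_of_internet(docs)),
        ("arbiter of truth", rank_arbiter_of_truth(docs)),
        ("objective source", rank_objective_source(docs)),
        ("personalized", rank_personalized(docs, user_prefers="flat")),
    ]:
        print(model, "->", [d["title"] for d in ordered])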

I use the shape of the Earth as a proxy or stand-in for the real searches people conduct.

For example, searches for your company, product, brand or even yourself are still subject to the same confusion over what a search engine is supposed to be.

When your customers, prospective business partners, employees or future prospective employees and others search for information about your organization, what results should they get? Should those results reflect what’s “true,” what’s false but popular, or what’s neutral between the two? Or should it depend on who’s doing the searching?

The truth is that Google tries to make Google Search all four of these things at the same time.

Adding to the complexity of the problem is the fact that search engine results are governed by algorithms, which are trade secrets that are constantly changing.

If you were to ask people, I suspect that most would say that Google Search should be Model No. 1 — an index to the internet — and not get involved in deciding what’s true, what’s false or what’s the answer the user wants to hear.

And yet the world increasingly demands that Google embrace Model No. 2 — to be an arbiter of what’s true.

Governments won’t tolerate an accurate index

Trump has claimed repeatedly that, in general, news media coverage is biased against him. If that’s true, and if Google News Search was a passive index of what the media is actually reporting, wouldn’t it be reasonable for Trump to expect anti-Trump coverage on Google News Search?

By slamming Google News Search as “rigged,” Trump appears to reveal an expectation that Google News should reflect what’s happening in the real world as he sees it, rather than what’s happening on news media websites.

Or it reveals an expectation that, regardless of the weight of activity favoring news sources Trump believes are biased against him, Google News Search should provide a balanced and neutral representation of all opinions and sources equally.

The rejection of the search-engine-as-internet-index model is common among governments and political leaders worldwide.

One famous example is the “right to be forgotten” idea, which has been put into practice as law in both the European Union and Argentina. The idea is that information on the internet can unfairly stigmatize a person, and citizens have the right for that information to be “forgotten,” which is to say made non-existent in search engine results.

Let’s say, for example, that a prominent person files for bankruptcy, and that 100 news sites and blogs on the internet record the fact. Twenty years later, well after the person has restored financial solvency, the old information is still available and findable via search engines, causing unfounded stigmatization.

A successful right-to-be-forgotten petition can remove reference to those pages from search results. The pages still exist, but the search engines don’t link to them when anyone searches for the person’s name.

The advocates of right-to-be-forgotten laws clearly believe that a search engine exists to reflect the real world as it is, or as it should be, and does not exist to reflect the internet as it is.

Google was recently caught in a controversy over a reported return to the Chinese market with a custom China-only search engine that would censor internet content in the same way the Chinese government requires of domestic sites. Hundreds of Google employees signed a letter in protest.

Google wants to “return” to the Chinese market. The Chinese government would not allow Google to operate a search engine accessible to Chinese citizens that accurately reflected what’s actually on the internet.

The examples go on and on.

What governments tend to have in common is that, in political circles, it’s very difficult to find anyone advocating the index-to-the-internet conception of what a search engine should be.

Why the search-engine-as-index idea is dead

Google’s self-stated mission is to “organize the world’s information and make it universally accessible and useful.”

Nebulous, yes. But for the purposes of this column, it’s telling that Google says that its mission is to organize, not the internet’s information, but the “world’s.”

The reality is that people search Google Search and other search engines because they want information about the world, not because they want information about what the internet collectively “thinks.”

And, in any event, the point is growing moot.

What the internet “thinks” is increasingly being gamed and manipulated by propagandists, bots, fake news, trolls, conspiracy theorists, and hackers.

Accurately reflecting all this manipulated information in search engines is valuable only to the manipulators.

Also: With each passing day, more information “searching” is happening via virtual assistants such as Google Assistant, Siri, Cortana, and Alexa.

In other words, virtual assistants are becoming the new search engines.

With augmented reality glasses and other highly mobile sources of information, search engines such as Google will increasingly have to become arbiters of what’s true, or supposed to be true, because the public will increasingly demand a single answer to its questions.

That’s why the old initiatives for your company’s presence on the internet — SEO, marketing, social media strategy and all the rest — have new urgency.

With each passing day, search engines exist less to index the internet and more to decide for us all what’s “true” and what’s “not true.”

It’s time to redouble your efforts to make sure that what Google thinks is true about your company really is true.

Categorized in Search Engine

Source: This article was published on universalclass.com - Contributed by Member: Bridget Miller

The Internet is often the first place many people go when they need to do research. Though this might be the first place to look for basic information, the key to using the Internet wisely begins with understanding how the Internet works and how it can work for you.

How Internet Search Engines Work

An Internet search engine is akin to an online library. Stored across millions of domains are pieces of information you can use for your research.

However, you need to begin somewhere.

Browser: The browser is the entryway to your Internet searches. From it you can use a variety of search engines to help you begin your research, including:

  • Google
  • Microsoft's Bing
  • Ask
  • Yahoo!
  • Dogpile
  • AltaVista
  • AOL Search

No matter which search engine you decide to use, you will find a vast collection of resources. Many people prefer one search engine above all others, and you might choose to do the same.

In collecting your information, assess how quickly the search engine can get your needed materials and then choose the search engine that works consistently for you. It is much easier to use one search engine than to use several.

While search engines are complex in the way they arrange their information, this is the basic setup.

  • Domain name: At the base, each Web site has its own domain name, which appears in its URL. For example, www.apple.com is the name of Apple's Web site. If you type this name into a browser or search engine, you will find a listing for the Apple site; if you type a different spelling into a Web browser, you will not reach it.
  • Domain details: After the domain name, you might see additional words, usually after a forward slash (/). These allow the site to break into additional pages so a person can reach different pieces of information.
  • Subpages: Within those pages might be even more subpages, helping you further refine your search and find the results that you need to complete your research.
  • Keywords: Search engines operate much like a catalog computer at a library. You can type in a word related to your topic, the title of a book, an author, a question, or any number of other words to find related results. Search engines rank sites by how closely their keywords match the query and by how often those keywords appear on the sites. For example, when you want to look something up about dieting, you do not type in "carrot"; you type in "diet" or "dieting." Search engines use complicated algorithms to determine which keywords best match which Web sites.
  • Popularity: What you might not realize is that search engines also rank Web sites based on how popular they are with users. For example, when you look up weight loss, you might find a site about the health-related aspects of weight loss rather than an actual weight loss plan. Why? More people chose that Web site over weight-loss product sites, so the search engine ranks it higher. These popularity rankings can differ between search engines and can change over the course of a week, depending on a Web site's popularity. (A minimal sketch of how keyword matching and popularity can combine into a ranking appears after this list.)
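
For readers who want to see the mechanics, the sketch below builds a toy index of made-up pages and ranks them by combining keyword matches with a popularity score. Real search engines use far more elaborate, and secret, algorithms; every page, keyword list, score, and weight here is invented purely for illustration.

    # Toy ranking: keyword matches weighted by a popularity score.
    # Pages, keyword lists, and popularity numbers are all invented.
    pages = {
        "healthyeats.example/weight-loss": {"keywords": ["diet", "weight", "health"], "popularity": 0.9},
        "dietpills.example/buy-now":       {"keywords": ["diet", "pills"],            "popularity": 0.3},
        "gardening.example/carrots":       {"keywords": ["carrot", "garden"],         "popularity": 0.6},
    }

    def search(query, pages, popularity_weight=0.5):
        terms = query.lower().split()
        results = []
        for url, page in pages.items():
            keyword_hits = sum(term in page["keywords"] for term in terms)
            if keyword_hits == 0:
                continue                      # ignore pages with no matching keywords
            score = keyword_hits + popularity_weight * page["popularity"]
            results.append((score, url))
        return [url for score, url in sorted(results, reverse=True)]

    print(search("diet", pages))
    # The health site outranks the product site because of its higher popularity score.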

Now that you know how a search engine basically operates, you can begin to see how to work with it to find the pages and Web sites you need for your research. Though you might have a clear idea of the questions you need to answer, you still have to work with the search engine to find the best possible information.

The Internet has a lot of information, and the main part of your research process will be sifting through your findings to determine what is useful. 

Search Engine Strategies

When you first use a search engine to look up the answer to a question or to begin a research project, you will notice something: some of the results you receive are relevant and some are not. This happens because each search engine has its own rules for how results are ranked and listed.

To maximize the efficiency of your searches, you need to use strategies that surface the most relevant results first. This will reduce your research time and ensure the sites on the list actually help you with your project.

  • One-word search: The simplest way to use a search engine is to type in one word that is crucial to your search. This might be a word that is in your research title or a certain item you need to know more about to be prepared for a presentation.
  • One-phrase search: If there is a phrase that is often associated with your main topic, you can use it in search engines.
  • Multiple term search: When you want to make your search as specific as possible, you might want to type in as many keywords as possible to make sure you are narrowing the results. For example, instead of "diet," you might type in "diet healthy vegetarian."
  • Quotation marks: If you want the search engine to match a word or phrase exactly as you typed it, surround it with quotation marks. This tells the search engine that you want only results containing that exact wording.
  • "AND": One of the Boolean operators is "AND," which is a way to tell the search engine that you want to include multiple words in the search engine results. For example, if you want to talk about salt and pepper, then you might type in "salt AND pepper." This will lead to results that include both of the keywords.
  • "NOT": If you have a term you need to research, but you do not want another term associated with it, then you would use another Boolean operator. For example, you want to research "pepper NOT salt." This will exclude any results that include salt.
  • "OR": The last used Boolean operator is "OR." If you are not sure what you need to include, but you need to include both terms, you might put "salt OR pepper." Your results might include one or the other or both keywords.
  • Use common terms: If you need to do some research on sweatshirts, it might be better to use the word "sweatshirt" instead of "hoodie." Think about the most basic term associated with the idea you need to research.
  • Synonyms: You may also want to use synonyms of the topic you need to research if you cannot find the original word online. You can turn to a thesaurus for help finding synonyms.
  • Related terms: You may also want to create a list of related words that can help you begin to find more research results. When talking about an engagement, for example, you might include "diamond ring" in your search list, too.
  • List the most significant word first: When you have a list of words you will use in your search engine, type in the most important word first. This will ensure the search engine focuses on the most important term.
  • Asterisks: When you are not quite sure how to spell a word or you are missing a part of a phrase, you can use an asterisk to tell the search engine you need help. For example, if you are not sure what Shakespeare's important quote in Hamlet was, you might type "to be * to be." This would return results that answer your question.
  • Question marks: If you are not sure about your keywords or a part of the phrase you are typing into the search engine, then use a question mark.
  • Plus (+) sign: You can also use a plus sign to link together the keywords you want included in the search. For example, you might use "peanut+butter+jelly." (See the sketch after this list for one way to assemble these operators in code.)
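
These operators can also be assembled programmatically. The sketch below is nothing more than Python string formatting; which operators a given search engine actually honors varies, so treat the output as a starting point and confirm against that engine's help pages.

    # Build query strings using the operators described above.
    # Which operators a given search engine honors varies; check its help pages.

    def exact_phrase(phrase):
        return f'"{phrase}"'                      # quotation marks: match the exact wording

    def all_of(*terms):
        return " AND ".join(terms)                # AND: every term must appear

    def any_of(*terms):
        return " OR ".join(terms)                 # OR: either term may appear

    def exclude(query, term, use_minus=False):
        # "NOT" as described above; many engines (Google, for example) use a leading minus instead
        return f"{query} -{term}" if use_minus else f"{query} NOT {term}"

    print(all_of("salt", "pepper"))               # salt AND pepper
    print(any_of("salt", "pepper"))               # salt OR pepper
    print(exclude("pepper", "salt"))              # pepper NOT salt
    print(exact_phrase("to be or not to be"))     # "to be or not to be"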

It can also help to review the help section of your search engine to see what types of search options it offers. Because the search engines all operate differently, you need to make sure you are playing by their rules to get the best results.

Advanced Search Engine Strategies

When you want to make sure that your search engine is giving you the best results, you can use the strategies above, or you can continue to boost your results by using these more advanced research strategies:

  • Use the "advanced results" option. Some search engines, including Google, offer an advanced results option. When you are unable to find results you need for your research, extend your research into that section. The more boxes you can fill out here, the more you will be able to refine your results.
  • Use another language. If your results might be listed under a different language or in another country, make sure to list other possible languages the text might be in.
  • Specify the date. When you need to have results from a certain time period, add the date or the time period of the results you want to see.
  • Specify the file format. You might want to find a certain document online, but without specifying the type of document, this can be tricky. Instead, add in whether you need a .doc, .docx, .pdf, .ppt, .pptx, or other type of file to refine your results.
  • Specify the type of site. You can also make sure you are getting authoritative sites by restricting results to certain domains, for example with operators such as "site:edu" or "site:gov" alongside your keywords. This qualifies your results so you see only college and university Web sites or sites run by government agencies. (A short example of combining these operators follows this list.)
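
Several of these refinements map onto query operators, notably site: and filetype:, which major engines such as Google and Bing support. The helper below simply concatenates them into a query string; the topic and the chosen domain suffix are made-up examples.

    # Compose an advanced query using the site: and filetype: operators.
    # Topic and domain suffix are illustrative examples.

    def advanced_query(terms, site=None, filetype=None):
        parts = [terms]
        if site:
            parts.append(f"site:{site}")          # restrict results to a domain or suffix
        if filetype:
            parts.append(f"filetype:{filetype}")  # restrict results to a document type
        return " ".join(parts)

    print(advanced_query("climate adaptation report", site="gov", filetype="pdf"))
    # -> climate adaptation report site:gov filetype:pdf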

The more you refine your search, the more effective your results will be. The better your research, the better your results.

Potential Problems with Internet Research

While more people use the Internet than ever before for their research, this is not without its troubles. The Internet contains valuable information, but it also contains information that has not been well-researched.

Another set of problems occurs when a person uses the Internet for all research.

Here are some ideas to consider:

  • Choose respected sites. It is best to choose Web sites that have been used for years and that are run by a team of experts. At the very least, the Web site should have some sort of expertise or have a board of editors that helps ensure that information on the site is accurate.
  • Consider the objectivity of the Web site. When you read a Web site about the benefits of beef, look to see who is sponsoring the site. If a beef company is sponsoring the site, you might want to look at the information more carefully. While a site may not be lying about the information it posts, the site might be influenced by its sponsors.
  • Realize that some publications cannot be posted online. Some journals and articles cannot be posted online because of copyright restrictions. Some articles can only be found in print at libraries.
  • Notice that some publications are limited online. Many publications are limiting the content they have online. When this is the case, you might only be able to find a portion of the content you need.
  • Some research can only be obtained online via memberships. Some online journals and magazines list their latest issue's contents, but a person needs to subscribe as a member to access the full articles.

The Internet is one research tool, but it is not the only research tool. Instead of looking at the Internet as the only way to find what you need, look at the Internet as a helpful starting point.

You might be able to find the basic information you need, but do not limit yourself to just this research tool.

Categorized in Search Engine

Online Methods to Investigate the Who, Where, and When of a Person. Another great list by Internet search expert Henk Van Ess.

Searching the Deep Web, by Giannina Segnini. Beginning with advanced tips on sophisticated Google searches, this presentation at GIJC17 by the director of Columbia University Journalism School’s Data Journalism Program moves into using Google as a bridge to the Deep Web using a drug trafficking example. Discusses tracking the container, the ship, and customs. Plus, Facebook research and more.

Tools, Useful Links & Resources, by Raymond Joseph, a journalist and trainer with South Africa’s Southern Tip Media. Six packed pages of information on Twitter, social media, verification, domain and IP information, worldwide phonebooks, and more. In a related GIJC17 presentation, Joseph described “How to be Digital Detective.”

IntelTechniques is prepared by Michael Bazzell, a former US government computer crime investigator and now an author and trainer. See the conveniently organized resources in left column under “Tools.” (A Jan. 2, 2018, blog post discusses newly added material.)

Investigate with Document Cloud, by Doug Haddix, Executive Director, Investigative Reporters and Editors. A guide to using 1.6 million public documents shared by journalists, analyzing and highlighting your own documents, collaborating with others, managing document workflows and sharing your work online.

Malachy Browne’s Toolkit. More than 80 links to open source investigative tools by one of the best open-source sleuths in the business. When this New York Times senior story producer flashed this slide at the end of his packed GIJC17 session, nearly everyone requested access.

Social Media Sleuthing, by Michael Salzwedel. “Not Hacking, Not Illegal,” begins this presentation from GIJC17 by a founding partner and trainer at Social Weaver.

Finding Former Employees, by James Mintz. “10 Tips on Investigative Reporting’s Most Powerful Move: Contacting Formers,” according to veteran private investigator Mintz, founder and president of The Mintz Group.

Investigative Research Links from Margot Williams. The former research editor at The Intercept offers an array of suggestions, from “Effective Google Searching” to a list of “Research Guru” sites.

Bellingcat’s Digital Forensics Tools, a wide variety of resources for maps, geo-based searches, images, social media, transport, data visualization, experts and more.

List of Tools for Social Media Research, a tipsheet from piqd.de’s Frederik Fischer at GIJC15.

SPJ Journalist’s Toolbox from the Society of Professional Journalists in the US, curated by Mike Reilley. Includes an extensive list of, well, tools.

How to find an academic research paper, by David Trilling, a staff writer for Journalist’s Resource, based at Harvard’s Shorenstein Center on Media, Politics and Public Policy.

Using deep web search engines for academic and scholarly research, an article by Chris Stobing in VPN & Privacy, a publication of Comparitech.com, a UK company that aims to help consumers make more savvy decisions when they subscribe to tech services such as VPNs.

Step by step guide to safely accessing the darknet and deep web, an article by Paul Bischoff, also in VPN & Privacy from Comparitech.com.

Research Beyond Google: 56 Authoritative, Invisible, and Comprehensive Resources, a resource from Open Education Database, a US firm that provides a comprehensive online education directory for both free and for-credit learning options.

The Engine Room, a US-based international NGO, created an Introduction to Web Resources that includes a section on making copies of information to protect it from being lost or changed.

Awesome Public Datasets, a very large community-built compilation organized by topic.

Online Research Tools and Investigative Techniques by the BBC’s ace online sleuth Paul Myers has long been a starting point for online research by GIJN readers. His website, Research Clinic, is rich in research links and “study materials.”

Source: This article was published on gijn.org

Categorized in Online Research

When reading Wikipedia’s entry on the 1992 Ten Commandments of Computer Ethics, you can easily substitute “Internet” for “computer,” and it’s amazing what you see. For example, the 1st Commandment becomes “You shall not use the Internet to harm other people.” Here are all Ten Commandments of Internet Ethics (with my minor edits):

  1. You shall not use the Internet to harm other people.
  2. You shall not interfere with other people’s Internet work.
  3. You shall not snoop around in other people’s Internet files.
  4. You shall not use the Internet to steal.
  5. You shall not use the Internet to bear false witness.
  6. You shall not copy or use proprietary software for which you have not paid (without permission).
  7. You shall not use other people’s Internet resources without authorization or proper compensation.
  8. You shall not appropriate other people’s intellectual output.
  9. You shall think about the social consequences of the program you are writing or the system you are designing.
  10. You shall always use the Internet in ways that ensure consideration and respect for your fellow humans.

For those of us who used the Internet in 1992, it’s great to see that the 1992 ethics of the Internet (from the Computer Ethics Institute) still apply in 2016!

Source: This article was published on vogelitlawblog.com by Peter S. Vogel

Categorized in Internet Ethics