
Source: This article was published on top10-websitehosting.co.uk by Georgie Peru - Contributed by Member: Issac Avila

The internet holds a wealth of information and has literally billions of users worldwide, and it also produces some genuinely interesting statistics and facts. For example, half of the U.K.’s population would be willing to receive their online shopping via drone. This information may seem strange, quirky even; however, it’s relevant in one form or another.

Whether you’re an internet user, website owner, or run a business online, it’s important to know what’s ‘going on’ around the internet, what’s trending, and what’s not. In order to help you succeed in 2018, we’ve put together a helpful and interesting selection of internet facts and statistics for you to gawp at, and share with others!

Facts and Statistics

The Internet – 2018

  • As of 1st January 2018, the total number of internet users worldwide was 4,156,932,140 (that’s over 4 billion users)
  • 2 billion of the world’s internet users are located in Asia, which accounts for roughly half of the world’s total internet users
  • In January 2018, data reveals that 3.2 billion internet users were also social media users
  • As of January 2018, the world’s population was estimated to be around 7,634,758,428. Over half of the world’s population is using the internet
  • On 10th April 2018, there were over 1.8 billion websites recorded on the internet
  • In 2018, China had the most active internet users in the world, at 772 million. In the year 2000, this figure was around 22.5 million
  • Some of 2018’s top Google searches included iPhone 8, iPhone X, How to buy Bitcoin, and Ed Sheeran

Social Media – 2018

  • As of January 2018, Facebook alone had 2.2 billion monthly active users. Facebook was the first social media website to reach over 1 billion accounts
  • YouTube users in 2018 have surpassed the 1.5 billion mark, making YouTube the most popular website for viewing and uploading videos in the world
  • There are now over 3.1 billion social media users worldwide in 2018, which is an increase of around 13% compared to 2017
  • Comparing January 2018 to January 2017 figures, Saudi Arabia is the country with the largest social media usage increase at an estimated 32%
  • Instagram is most popular in the USA and Spain, accounting for around 15% of total social media usage in these countries in 2018
  • In France, Snapchat is the second most popular social media platform in 2018, used by around 18% of users countrywide
  • Facebook continues to be the fastest-growing social media network, with an increase of around 527 million users over the last 2 years, followed closely by WhatsApp and Instagram at around 400 million each
  • In 2018, 90% of businesses are using social media actively
  • 91% of social media users are using their mobile phones, tablets, and smart devices to access social media channels
  • Nearly 40% of users would prefer to spend more money with companies and businesses that engage on social media

Websites and Web Hosting – 2018

  • As of 2018, WordPress powers 28% of the world wide web with over 15.5 billion page views each month
  • Apache hosting servers are used by 46.9% of all available websites, followed closely by Nginx at 37.8%
  • In 2018, 52.2% of website traffic was accessed and generated via mobile phones
  • Since 2013, website traffic accessed by mobile phones has increased by 36%
  • As of January 2018, Japan’s share of website traffic mainly comes from laptops and desktop computers at a measured 69%, compared to 27% on mobile phones
  • With over a billion voice search queries per month, voice is estimated to be a high trending digital marketing strategy in 2018
  • Google is the most popular search engine and visited website recorded in 2018, with over 3.5 billion searches each day
  • Website loading times are now considered a ranking factor in Google

eCommerce – 2018

  • In the U.K. for 2018, ZenCart has the biggest market share with over 17% of .uk web address extensions using the software provider
  • In the U.S. as of February 2018, over 133 million mobile users used the Amazon app, compared to 72 million users accessing the Walmart app
  • Nearly 80% of online shopping results in abandoned carts
  • 2018 sees a 13% increase in eCommerce sales since 2016, with the majority of sales being recorded in the U.S. and China
  • 80% of U.K. buyers research products online before purchasing them online or offline
  • Under 33% of U.K. consumers are willing to pay more for faster delivery, but 50% said they would be willing to accept delivery via drone
  • An estimated 600,000 commercial drones will be in use by the end of 2018 in the U.K. alone

Domain Names – 2018

  • As of April 2018, there are just over 132 million registered .com domain names
  • In the month of January 2018 alone, there were 9 million registered .uk domains
  • 68 million copyright infringing URLs were requested to be removed by Google in January 2018, with 4shared.com being the highest targeted website
  • 46.5% of websites use .com as their top-level domains
  • Approximately 75% of websites registered are not active but have parked domains
  • From 1993 to 2018, the number of hosts in the domain name system (DNS) has grown by several orders of magnitude, reaching over 1 billion

References:

  1. https://www.internetworldstats.com/stats.htm
  2. https://www.statista.com/statistics/617136/digital-population-worldwide/
  3. http://www.internetlivestats.com/
  4. https://techviral.net/top-popular-google-searches-2018/
  5. https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/
  6. https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/
  7. https://coschedule.com/blog/social-media-statistics/
  8. https://wordpress.com/about/
  9. https://w3techs.com/technologies/overview/web_server/all
  10. https://www.lifewire.com/most-popular-sites-3483140
  11. https://www.statista.com/statistics/685438/e-commerce-software-provider-market-share-in-the-uk/
  12. https://www.appnova.com/6-important-uk-ecommerce-statistics-help-plan-2018/
  13. https://www.statdns.com/
  14. http://www.internetlivestats.com/total-number-of-websites/

Published in Online Research

Source: This article was published on hub.packtpub.com by Sugandha Lahoti - Contributed by Member: Carol R. Venuti

Google has launched Dataset Search, a search engine for finding datasets on the internet. This search engine will be a companion of sorts to Google Scholar, the company’s popular search engine for academic studies and reports. Google Dataset Search will allow users to search through datasets across thousands of repositories on the Web whether it be on a publisher’s site, a digital library, or an author’s personal web page.

Google’s Dataset Search scrapes government databases, public sources, digital libraries, and personal websites to track down the datasets. It also supports multiple languages and will add support for even more soon. The initial release of Dataset Search will cover the environmental and social sciences, government data, and datasets from news organizations like ProPublica. It may soon expand to include more sources.

Google has developed guidelines for dataset providers to describe their data in a way that Google can better understand the content of their pages. Anything published with structured data using schema.org markup, or similar equivalents described by the W3C, will be indexed by this search engine. Google also mentioned that Dataset Search will improve as long as data publishers are willing to provide good metadata. If publishers use these open standards to describe their data, more users will find the data they are looking for.
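To make this concrete, a dataset page typically describes itself with schema.org/Dataset markup embedded as JSON-LD. The sketch below builds such a description in Python and prints the JSON-LD payload; the dataset name, URL, and publisher are hypothetical examples, not from the article, and real pages would embed the output inside a `<script type="application/ld+json">` tag:

```python
import json

# Minimal schema.org/Dataset description. All values here are hypothetical;
# the field names follow the schema.org vocabulary that Google's dataset
# guidelines build on.
dataset = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example City Air Quality Measurements",    # hypothetical
    "description": "Hourly PM2.5 readings collected by a city sensor network.",
    "url": "https://example.org/datasets/air-quality",  # hypothetical
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "keywords": ["air quality", "PM2.5", "environment"],
    "creator": {"@type": "Organization", "name": "Example City Open Data"},
}

# Serialize to the JSON-LD string that would be embedded in the page's HTML.
jsonld = json.dumps(dataset, indent=2)
print(jsonld)
```

Crawlers that understand schema.org can then read the dataset's name, license, and keywords without parsing the page's visible prose.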

Natasha Noy, a research scientist at Google AI who helped create Dataset Search, says that “the aim is to unify the tens of thousands of different repositories for datasets online. We want to make that data discoverable, but keep it where it is.”

Ed Kearns, Chief Data Officer at NOAA, is a strong supporter of this project and helped NOAA make many of their datasets searchable in this tool. “This type of search has long been the dream for many researchers in the open data and science communities,” he said.

Published in Search Engine

Online research involves collecting information from the internet. It saves cost, is impactful, and offers ease of access. Online research is valuable for gathering information, and tools such as questionnaires, online surveys, polls, and focus groups aid market research. You can conduct market research with little or no investment for e-commerce development.

Search Engine Optimization makes sure that your research is discoverable. If your research is highly ranked, more people will find, read, and cite it.

Steps to improve the visibility of your research include:

  1. The title gives the reader a clear idea of what the research is about. The title is the first thing a reader sees. Make your research title relevant and consistent. Use a search engine friendly title. Make sure your title provides a solution.
  2. Keywords are key concepts in your research output. They index your article and make sure your research is found quickly. Use keywords that are relevant and common to your research field. Places to use relevant keywords include title, heading, description tags, abstract, graphics, main body text and file name of the document.
  3. The abstract convinces readers to read an article and helps your work be returned in a search.
  4. When others cite your research your visibility and reputation will increase. Citing your earlier works will also improve how search engines rank your research.
  5. External links from your research to blogs, personal webpage, and social networking sites will make your research more visible.
  6. The type of graphics you use affects your ranking. Use vectors such as .svg, .eps, .as and .ps. Vectors improve your research optimization.
  7. Make sure you are consistent with your name across all publications. Be distinguishable from others.
  8. Use social media sites such as Facebook, Twitter, and Instagram to publicize your research. Inform everyone. Share your links everywhere.
  9. Make sure your research is on a platform indexed properly by search engines.
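To illustrate step 2's advice on keyword placement, here is a small Python sketch that assembles the HTML head elements where keywords are commonly placed (title, description tag, keywords tag). The paper title and keywords are invented for illustration only:

```python
# Sketch: build the HTML <head> elements where research keywords commonly
# appear. All titles and keywords below are invented for illustration.
def seo_head(title, description, keywords):
    keyword_list = ", ".join(keywords)
    return "\n".join([
        "<head>",
        f"  <title>{title}</title>",
        f'  <meta name="description" content="{description}">',
        f'  <meta name="keywords" content="{keyword_list}">',
        "</head>",
    ])

head = seo_head(
    title="Deep Learning for Crop Yield Prediction",  # hypothetical paper
    description="A study applying convolutional networks to satellite imagery.",
    keywords=["deep learning", "crop yield", "remote sensing"],
)
print(head)
```

The same keywords would then be repeated, where natural, in the abstract, headings, body text, and the document's file name, as the steps above recommend.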

Online research is developing and can take place in email, chat rooms, instant messaging and web pages.  Online research is done for customer satisfaction, product testing, audience targeting and database mining.

Ethical dilemmas in online research include:

  1. How to get informed consent from the participants being researched?
  2. What constitutes privacy in online research?
  3. How can researchers prove the real identity of participants?
  4. When is covert observation justifiable?

Knowing how to choose resources when doing online research can help you avoid wasted time.

WAYS TO MAKE ONLINE RESEARCH EASY AND EFFECTIVE

  1. Ask: Find out which resources knowledgeable people recommend for your research topic. An expert can point you to valuable online journals or websites.
  2. Fact from fiction: Know which sites are best for your research topic. Make sure the websites you have chosen are valuable and up to date. Sites with .edu and .gov are usually safe. If you use a .org website, make sure it is proper, reliable, and credible. If you use a .com site, check whether the site advertises; bias is a possibility.

Social media sites, blogs, and personal websites will give you personal opinions and not facts.

  3. Search Smartly: Use established search engines. Use specific terms. Try alternative searches. Use search operators or advanced search. Know the best sites.
  4. Focus: Do not be distracted when conducting online research. Stay focused and away from social media sites.
  5. Cite Properly: Cite the source properly. Do not just copy and paste, for plagiarism can affect your work.

When conducting research, use legitimate and trustworthy resources. Sites that can help you find reliable articles and journals include:

  1. BioMedCentral
  2. Artcyclopedia
  3. FindArticles.com
  4. Digital History
  5. Infomine
  6. Internet Public Library
  7. Internet History Sourcebooks
  8. Librarians Internet Index
  9. Intute
  10. Library of Congress
  11. Project Gutenberg
  12. Perseus Digital Library
  13. Research Guide for Students.

No matter what you are researching the internet is a valuable tool. Use sites wisely and you will get all the information you need.

ONLINE RESEARCH METHODS

  1. Online focus group: This is used for business-to-business service research, consumer research, and political research. Pre-selected participants who represent specific interests are invited as part of the focus group.
  2. Online interview: This is done using computer-mediated communication (CMC) such as SMS or email. Online interviews are synchronous or asynchronous. In synchronous interviews, responses are received in real-time, for example in online chat interviews. In asynchronous interviews, responses are not in real-time, as with email interviews. Online interviews use feedback about topics to gain insight into participants’ attitudes, experiences, or ideas.
  3. Online qualitative research: This includes blogs, communities, and mobile diaries. It saves cost and time and is convenient. Respondents for online qualitative research can be recruited from surveys, databases, or panels.
  4. Social network analysis: This has gained acceptance. With social network analysis, researchers can measure the relationships between people, groups, organizations, URLs, and so on.

Other methods of online research include cyber-ethnography, online content analysis, and Web-based experiments.

TYPES OF ONLINE RESEARCH

  1. Customer satisfaction research: This occurs through phone calls or emails. Customers are asked to give feedback on their experience with a product, service or an organization.
  2. New product research: This is carried out by testing a new product with a group of selected individuals and immediately collecting feedback.
  3. Brand loyalty: This research seeks to find out what attracts customers to a brand. The research is to maintain or improve a brand.
  4. Employee satisfaction research: With this research, you can learn what employees think about working for your organization. The morale of your organization can contribute to its productivity.

When conducting online research, ask open-ended questions and show urgency, but be tolerant.

Written by Junaid Ali Qureshi, a digital marketing specialist who has helped several businesses gain traffic, outperform the competition, and generate profitable leads. His current ventures include Progostech, Magentodevelopers.online, eLabelz, Smart Leads.ae, Progos Tech, and eCig.

Published in Online Research

Source: This article was published on theverge.com by Dami Lee - Contributed by Member: Olivia Russell

There’s no mention of ‘fake news,’ though

There are more young people online than ever in our current age of misinformation, and Facebook is developing resources to help youths better navigate the internet in a positive, responsible way. Facebook has launched a Digital Literacy Library in partnership with the Youth and Media team at the Berkman Klein Center for Internet & Society at Harvard University. The interactive lessons and videos can be downloaded for free, and they’re meant to be used in the classroom, in after-school programs, or at home.

Created from more than 10 years of academic research and “built in consultation with teens,” the curriculum is divided into five themes: Privacy and Reputation, Identity Exploration, Positive Behavior, Security, and Community Engagement. There are 18 lessons in total, available in English; there are plans to add 45 more languages. Lessons can be divided into three different age groups between 11 and 18, and they cover everything from having healthy relationships online (group activities include discussing scenarios like “over-texting”) to recognizing phishing scams.

The Digital Literacy Library is part of Facebook’s Safety Center as well as a larger effort to provide digital literacy skills to nonprofits, small businesses, and community colleges. Though it feels like a step in the right direction, curiously missing from the lesson plans are any mentions of “fake news.” Facebook has worked on a news literacy campaign with the aim of reducing the spread of false news before. But given the company’s recent announcements admitting to the discovery of “inauthentic” social media campaigns ahead of the midterm elections, it’s strange that the literacy library doesn’t call attention to spotting potential problems on its own platform.

Published in Social

Source: This article was published on econsultancy.com by Rebecca Sentance - Contributed by Member: William A. Woods

What does the future hold for voice search? If you search the web for these words – or a version of them – you’ll encounter no shortage of grand predictions.

“By 2020, 30% of web browsing sessions will be done without a screen.” Or, “By 2020, 50% of all searches will be conducted via voice.” (I’ll come back to that one in a second). Or, “2017 will be the year of voice search.” Oops, looks like we might have missed the boat on that last one.

The great thing about the future is that no-one can know exactly what’s going to happen, but you can have fun throwing out wild predictions, which most people will have forgotten about by the time we actually get there.

That’s why you get so many sweeping, ambitious, and often contradictory forecasts doing the rounds – especially with a sexy, futuristic technology like voice. It doesn’t do anyone any real harm unless for some reason your company has decided to stake its entire marketing budget on optimizing for the 50% of the populace who are predicted to be using voice search by 2020.

However, in this state of voice search series, I’ve set out to take a realistic look at voice search in 2018, beyond the hype, to determine what opportunities it really presents for marketers. But when it comes to predicting the future, things get a little murkier.

I've made some cautious predictions to the tune of assuming that if smart speaker ownership increases over the coming years, voice search volume will also likely increase; or that mobile voice search might be dropping away as smart speaker voice search catches on.

In this article, though, I'll be looking at where voice search as a whole could be going: not just on mobile, or on smart speakers, but of any kind. What is the likelihood that voice search will go "mainstream" to the point that it makes up as substantial a portion of overall search volume as is predicted? What are the obstacles to that? And what does this mean for the future of voice optimisation?

Will half of all searches by 2020 really be voice searches?

I'm going to start by looking at one of the most popular predictions that is cited in relation to voice search: "By 2020, 50% of all searches will be carried out via voice."

This statistic is popularly attributed to comScore, but as is often the case with stats, things have become a little distorted in the retelling. The original prediction behind this stat actually came from Andrew Ng, then Chief Scientist at Baidu. In an exclusive interview with Fast Company in September 2014, he stated that "In five years' time, at least 50% of all searches are going to be either through images or speech."

The quote was then popularised by Mary Meeker, who included it on a timeline of voice search in her Internet Trends 2016 Report, with "2020" as the year by which this prediction was slated to come true.

So, not just voice search, but voice and visual search. This makes things a little trickier to benchmark, not least because we don't have any statistics yet on how many searches are carried out through images. (I'm assuming this would include the likes of Google Lens and Pinterest Lens, as well as Google reverse image search).

Let's assume for the sake of argument that 35% of Ng's predicted 50% of searches will be voice search, since voice technology is that bit more widespread and well-supported, while visual search is still largely in its infancy. How far along are we towards reaching that benchmark?

I'm going to be generous here and count voice queries of every kind in my calculations, even though as I indicated in Part 1, only around 20% of these searches can actually be ranked for. Around 60% of Google searches are carried out on mobile (per Hitwise), so if we use Google's most recent stat that 1 in every 5 mobile searches is carried out via voice, that means about 12% of all Google searches (420 million searches) are mobile voice queries.

In Part 2 I estimated that another 26.4 million queries are carried out via smart speakers, which is an additional 0.75% - so in total that makes 12.75% of searches, or if we're rounding up, 13% of Google searches that are voice queries.

This means that the number of voice queries on Google would need to increase by another 22 percentage points over the next year and a half for Ng's prediction to come true. To reach 50% - the stat most often cited by voice enthusiasts as to why voice is so crucial to optimise for - we would need to find an additional 1.3 billion voice searches per day from somewhere.
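The back-of-envelope arithmetic above can be reproduced in a few lines. Every input below is a figure quoted in this article (3.5 billion daily Google searches, 60% mobile share, 1-in-5 mobile voice rate, 26.4 million daily smart speaker queries):

```python
# Reproduce the article's voice search arithmetic from the figures it quotes.
daily_searches = 3.5e9          # Google searches per day (quoted above)
mobile_share = 0.60             # share of searches on mobile (Hitwise)
voice_share_of_mobile = 0.20    # 1 in 5 mobile searches is voice (Google)

mobile_voice_share = mobile_share * voice_share_of_mobile   # 0.12, i.e. 12%
mobile_voice_daily = mobile_voice_share * daily_searches    # ~420 million/day

smart_speaker_daily = 26.4e6                                # estimate from Part 2
smart_speaker_share = smart_speaker_daily / daily_searches  # ~0.75%

total_voice_share = mobile_voice_share + smart_speaker_share  # ~12.75%, round to 13%

# Gap to the two benchmarks discussed in the text.
gap_to_35 = 0.35 - total_voice_share                        # ~22 percentage points
extra_for_50 = (0.50 - total_voice_share) * daily_searches  # ~1.3 billion extra/day

print(f"voice share today: {total_voice_share:.2%}")
print(f"gap to 35% benchmark: {gap_to_35:.2%}")
print(f"extra daily searches needed for 50%: {extra_for_50 / 1e9:.2f} billion")
```

Running the numbers confirms the text: roughly 13% of searches are voice today, a 22-point gap to the 35% benchmark, and about 1.3 billion additional daily voice searches needed to hit 50%.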

That's nearly ten times the number of smart speakers predicted to ship to the US over the next three years. Even if you believe that smart speakers will single-handedly bring voice search into the mainstream, it's a tall order.

So okay, we've established that voice enthusiasts might need to cool their jets a bit when it comes to the adoption of voice search. But if we return to (our interpretation of) Andrew Ng's prediction that 35% of searches by 2020 will be voice, what is going to make the volume of voice search leap up those remaining 22 percentage points in less than two years?

Is it sheer volume of voice device ownership? Is it the increasing normalisation of speaking aloud to a device in public? Or is it something else?

Ng made another prediction, via Twitter this time, in December 2016 which gives us a clue as to his thinking in this regard. He wrote, "As speech-recognition accuracy goes from 95% to 99%, we'll go from barely using it to using all the time!"

So, Andrew Ng believes that sheer accuracy of recognition is what will take voice search into the mainstream. 95% word recognition is actually the same threshold of accuracy as human speech (Google officially reached this threshold last year, to great excitement), so Ng is holding machines to a higher standard than humans – which is fair enough, since we tend to approach new technology and machine interfaces with a higher degree of scepticism, and are less forgiving of errors. In order to win us over, they have to really wow us.

But is pure voice recognition the only barrier to voice search going mainstream? Let's consider the user experience of voice search.

The UX problems with voice

As I mentioned in our last installment, on natural language and conversational search, when using voice interfaces we tend to hold the same expectations that we have for a conversation with a human being.

We expect machines to respond in a human way, seamlessly and intuitively carrying on the exchange; when they don't, bringing us up short with an "I'm sorry, I don't understand the question," we're thrown off and turned off.

This explains why voice recognition is weighted so highly as a measure of success for voice interfaces, but it's not the only important factor. Often, understanding you still isn't enough to produce the right response; many voice commands depend on specific phrasing to activate, meaning that you can still be brought up short if you don't know exactly what to utter to achieve the result you want.

The internet is full of examples of what happens when our voice assistants don't quite understand the question.

Or what about if you misspeak – the verbal equivalent of a typo? When typing, you can just delete and retype your query before you submit, but when speaking, there's no way to take back the last word or phrase you uttered. Instead, you have to wait for the device to respond, give you an error, and then start again.

If this happens multiple times, it can prompt the user to give up in exasperation. Writing for Gizmodo, Chris Thomson paints a vivid picture of the frustration experienced by users with speech impediments when trying to use voice-activated smart speakers.

One of the major reasons that voice interfaces are heralded as the future of technology is because speaking your query or command aloud is supposed to be so much faster and more frictionless than typing it. At the moment, though, that's far from being the case.

However, while they might be preventing the uptake of voice interfaces (which is intrinsically linked to the adoption of voice search) at the moment, these are all issues that could reasonably be solved in the future as the technology advances. None of them are deal-breakers.

For me, the real deal-breaker when it comes to voice search, and the reason why I believe it will never see widespread adoption in its present state, is this: it doesn't do what it's supposed to.

One result to rule them all?

Think back for a moment to what web search is designed to do. Though we take it for granted nowadays, before search engines came along, there was no systematic way to find web pages and navigate the world wide web. You had to know the web address of a site already in order to visit it, and the early "weblogs" (blogs) often contained lists of interesting sites that web users had found on their travels.

Web search changed all that by doing the hard work for users – pulling in information about what websites were out there, and presenting it to users so that they could navigate the web more easily. This last part is the issue that I'm getting at, in a sidelong sort of way: so that they could navigate the web.

Contrast that with what voice search currently does: it responds to a query from the user with a single, definitive result. It might be possible to follow up that query with subsequent searches, or to carry out an action (e.g. ordering pizza, hearing a recipe, receiving directions), but otherwise, the voice journey stops there. You can't browse the web using your Amazon Echo. You can use your smartphone, but for all intents and purposes, that's just mobile search. Nothing about that experience is unique to voice search.

This is the reason why voice search is only ever used for general knowledge queries or retrieving specific pieces of information: it's inherently hampered by an inability to explore the web.

It's why voice search in its present state is mostly a novelty: not just because voice devices themselves are a novelty, but because it's difficult to really search with it.


Even when voice devices like smart speakers catch on and become part of people's daily lives, it's because of their other capabilities, not because of search. Search is always incidental.

This is also why Google, Amazon and other makers of smart speakers are more interested in expanding the commands that their devices respond to and the places they can respond to them. For them, that is the future of voice.

What does this mean for voice search?

What true voice search could sound like

I see two possible future scenarios for voice search.

One, voice search remains as a "single search result" tool which is mostly useful for fact-finding exercises and questions that have a definitive answer, in which case there will always be a limit to how big voice search can get, and voice will only ever be a minor channel in the grand scheme of search and SEO. Marketers should recognise the role that it plays in their overall search strategy (if any), think about the use cases realistically, and optimise for those – or not – if it makes sense to.

Or two, voice search develops into a genuine tool for searching the web. This might involve a user being initially read the top result for their search, and then being presented with the option to hear more search results – perhaps three or four, to keep things concise.

If they then want to hear content from one of the results, they can instruct the voice assistant to navigate to that webpage, and then proceed to listen to an audio version of the news article, blog post, Wikipedia page, or other websites that they've chosen.

Duane Forrester, VP Insights at Yext, envisages just such an eventuality during a wide-ranging video discussion on the future of voice search with Stone Temple Consulting's Eric Enge and PeakActivity's Brent Csutoras. The whole discussion is excellent and well worth a watch or a read (the transcript is available beneath the video).

Duane Forrester: We may see a resurgence in [long-form content] a couple of years from now if our voice assistants are now reading these things out loud.

Brent Csutoras: Sure. Like an audible.

Duane: Exactly, like a built-in native Audible: “I’m on this page, do you want me to read it?” “Yes, read it out loud to me.” There we go.

Brent: Yes because in that sense, I’m going to want to hear more. I’m driving down the street and want to hear about what’s happening and I want to hear follow up pieces.

Duane: It immediately converts every single website, every page of content, every blog, it immediately converts all of those into on-demand podcasts. That’s a cool idea, it’s a cool adaptation. I’m not sure if we’ll get there. We will when we get to the point of having a digital agent. But that’s still years in the future.

At first, I was sceptical of the idea that people would ever want to consume web content primarily via audio. Surely it would be slower and less convenient than visually scanning the same information?

Then I thought about the fast-growing popularity of podcasts and audiobooks and realized that the audio web could fit into our lives in many of the same ways that other types of audio have – especially if voice devices become as omnipresent as many tech and marketing pundits are predicting they will.

Is this a distant future? Perhaps. But this is how I imagine voice search truly entering the mainstream, the same way that web search did: as a means of exploring the web.

The future of voice search might not be Google

What surprises me is that for all the hype surrounding voice search and its possibilities, hardly anyone has pointed out the obvious drawback of the single search result or considered what it could mean for voice adoption.

An article by Marieke van de Rakt of Yoast highlights it as an obstacle but believes that screen connectivity is the answer. This is a possibility, especially as Google and Amazon are now equipping their smart speakers with screens - but I think that requiring a screen removes some of the convenience of voice as a user interface, one that can be interacted with while doing other things (like driving) without pulling the user's attention away.

For the most part, however, it seems to me that marketers and SEOs have been too content to just follow Google's lead (and Bing's, because realistically, where Google goes, Bing will follow) when it comes to things like voice search. Is Google presenting the user with a single search result? Everyone optimize for single search results; the future of search will be one answer!

Why? What about that makes for a good user experience? Is this what search was meant to do?

I understand letting Google set the agenda when it comes to SEO more broadly because realistically it's so dominant that any SEO strategy has to mainly cater to Google. However, I don't think we should assume that Google will remain the leader of search in every new, emerging area like voice or visual search.

Oh, Google is doing its best to stay on top, and there's no denying that it's taken an early lead; its speech recognition and conversational search capabilities are currently second to none. But Google isn't the hot young start-up that it was when it came along and challenged the web search status quo. It's much bigger now, and has investors to answer to.

Google makes a huge amount of revenue from its search and advertising empire; its primary interest is in maintaining that. One search result suits Google just fine, if it means that users won't leave its walled garden.

Marketers and SEOs should remember that Google wasn't always the king of web search; other web search engines entered the game first, and were very popular – but Google changed the game because its way of doing search was so much better, and users loved it. Eventually, the other search engines couldn't compete.

The same thing could easily happen with voice search.

The logos of some of the early search engines that Google out-competed in its quest for web search dominance.

The future of voice optimisation

So where does that leave the future of voice optimisation?

Many of these eventualities seem like far-off possibilities at best, and there’s no way of being certain how they will pan out. How should marketers go about optimising for voice now and in the near future?

Though I’ve taken a fairly sceptical stance throughout this series, I do believe that voice is worth optimising for. However, the opportunity around voice search specifically is limited, and so I believe that brands should consider all the options for being present on voice as a whole – whether that’s on mobile, as a mobile voice search result, or on smart speakers, as an Alexa Skill or Google Home Action – and pursue whatever strategy makes most sense for their brand.

I’m interested in seeing us move away from thinking about voice and voice devices as a search channel, and more as a general marketing channel that it’s possible to be present on in various different ways – like social media.

It’s still extremely early days for this technology, and while the potential is huge, there are still many things we don’t know about what the future of voice will look like, so it’s important not to jump the gun.

Brent Csutoras sums things up extremely well in the future of voice search discussion:

This is an important technology I really think you should pay attention to. What I worry about is that people start feeling like they have to be involved, right? It’s like, “Oh crap, I don’t want to be left behind.”

What I would say is that in this space, it’s like the example of Instagram. Everybody wanted to have an Instagram account and they had nothing visual to show, so they just started creating crap to show it. If you have something that fits for voice search right now, then you should absolutely take the steps that you can to participate with it. If you don’t, then definitely just pay attention to it.

This space is going to open up, it is going to provide an opportunity for just about everyone, so stay abreast of what’s happening in this space, what’s the technology, and start envisioning your company in that space, and then wait until you have that opportunity to make that a reality. But don’t overstress yourself and feel like you’re failing because you’re not in the space right now.

Published in Search Engine

WE FACE a crisis of computing. The very devices that were supposed to augment our minds now harvest them for profit. How did we get here?

Most of us only know the oft-told mythology featuring industrious nerds who sparked a revolution in the garages of California. The heroes of the epic: Jobs, Gates, Musk, and the rest of the cast. Earlier this year, Mark Zuckerberg, hawker of neo-Esperantist bromides about “connectivity as panacea” and leader of one of the largest media distribution channels on the planet, excused himself by recounting to senators an “aw shucks” tale of building Facebook in his dorm room. Silicon Valley myths aren’t just used to rationalize bad behavior. These business school tales end up restricting how we imagine our future, limiting it to the caprices of eccentric billionaires and market forces.

What we need instead of myths are engaging, popular histories of computing and the internet, lest we remain blind to the long view.

At first blush, Yasha Levine’s Surveillance Valley: The Secret Military History of the Internet (2018) seems to fit the bill. A former editor of The eXile, a Moscow-based tabloid newspaper, and investigative reporter for PandoDaily, Levine has made a career out of writing about the dark side of tech. In this book, he traces the intellectual and institutional origins of the internet. He then focuses on the privatization of the network, the creation of Google, and revelations of NSA surveillance. And, in the final part of his book, he turns his attention to Tor and the crypto community.

He remains unremittingly dark, however, claiming that these technologies were developed from the beginning with surveillance in mind and that their origins are tangled up with counterinsurgency research in the Third World. This leads him to a damning conclusion: “The Internet was developed as a weapon and remains a weapon today.”

To be sure, these constitute provocative theses, ones that attempt to confront not only the standard Silicon Valley story, but also established lore among the small group of scholars who study the history of computing. He falls short, however, of backing up his claims with sufficient evidence. Indeed, he flirts with creating a mythology of his own — one that I believe risks marginalizing the most relevant lessons from the history of computing.

The scholarly history is not widely known and worth relaying here in brief. The internet and what today we consider personal computing came out of a unique, government-funded research community that took off in the early 1960s. Keep in mind that, in the preceding decade, “computers” were radically different from what we know today. Hulking machines, they existed to crunch numbers for scientists, researchers, and civil servants. “Programs” consisted of punched cards fed into room-sized devices that would process them one at a time. Computer time was tedious and riddled with frustration. A researcher working with census data might have to queue up behind dozens of other users, book time to run her cards through, and would only know about a mistake when the whole process was over.

Users, along with IBM, remained steadfast in believing that these so-called “batch processing” systems were really what computers were for. Any progress, they believed, would entail building bigger, faster, better versions of the same thing.

But that’s obviously not what we have today. From a small research community emerged an entirely different set of goals, loosely described as “interactive computing.” As the term suggests, using computers would no longer be restricted to a static one-way process but would be dynamically interactive. According to the standard histories, the man most responsible for defining these new goals was J. C. R. Licklider. A psychologist specializing in psychoacoustics, he had worked on early computing research, becoming a vocal proponent of interactive computing. His 1960 essay “Man-Computer Symbiosis” outlined how computers might even go so far as to augment the human mind.

It just so happened that funding was available. Three years earlier, in 1957, the Soviet launch of Sputnik had sent the US military into a panic. Partially in response, the Department of Defense (DoD) created a new agency for basic and applied technological research called the Advanced Research Projects Agency (ARPA, known today as DARPA). The agency threw large sums of money at all sorts of possible — and dubious — research avenues, from psychological operations to weather control. Licklider was appointed to head the Command and Control and Behavioral Sciences divisions, presumably because of his background in both psychology and computing.

At ARPA, he enjoyed relative freedom in addition to plenty of cash, which enabled him to fund projects in computing whose military relevance was decidedly tenuous. He established a nationwide, multi-generational network of researchers who shared his vision. As a result, almost every significant advance in the field from the 1960s through the early 1970s was, in some form or another, funded or influenced by the community he helped establish.

Its members realized that the big computers scattered around university campuses needed to communicate with one another, much as Licklider had discussed in his 1960 paper. In 1967, one of his successors at ARPA, Robert Taylor, formally funded the development of a research network called the ARPANET. At first the network spanned only a handful of universities across the country. By the early 1980s, it had grown to include hundreds of nodes. Finally, through a rather convoluted trajectory involving international organizations, standards committees, national politics, and technological adoption, the ARPANET evolved in the early 1990s into the internet as we know it.

Levine believes that he has unearthed several new pieces of evidence that undercut parts of this early history, leading him to conclude that the internet has been a surveillance platform from its inception.

The first piece of evidence he cites comes by way of ARPA’s Project Agile. A counterinsurgency research effort in Southeast Asia during the Vietnam War, it was notorious for its defoliation program that developed chemicals like Agent Orange. It also involved social science research and data collection under the guidance of an intelligence operative named William Godel, head of ARPA’s classified efforts under the Office of Foreign Developments. On more than one occasion, Levine asserts or at least suggests that Licklider and Godel’s efforts were somehow insidiously intertwined and that Licklider’s computing research in his division of ARPA had something to do with Project Agile. Despite arguing that this is clear from “pages and pages of released and declassified government files,” Levine cites only one such document as supporting evidence for this claim. It shows how Godel, who at one point had surplus funds, transferred money from his group to Licklider’s department when the latter was over budget.

This doesn’t pass the sniff test. Given the freewheeling nature of ARPA’s funding and management in the early days, such a transfer should come as no surprise. On its own, it doesn’t suggest a direct link in terms of research efforts. Years later, Taylor asked his boss at ARPA to fund the ARPANET — and, after a 20-minute conversation, he received $1 million in funds transferred from ballistic missile research. No one would seriously suggest that ARPANET and ballistic missile research were somehow closely “intertwined” because of this.

Sharon Weinberger’s recent history of ARPA, The Imagineers of War: The Untold Story of DARPA, The Pentagon Agency that Changed the World (2017), which Levine cites, makes clear what is already known from the established history. “Newcomers like Licklider were essentially making up the rules as they went along,” and were “given broad berth to establish research programs that might be tied only tangentially to a larger Pentagon goal.” Licklider took nearly every chance he could to transform his ostensible behavioral science group into an interactive computing research group. Most people in wider ARPA, let alone the DoD, had no idea what Licklider’s researchers were up to. His Command and Control division was even renamed the more descriptive Information Processing Techniques Office (IPTO).

Licklider was certainly involved in several aspects of counterinsurgency research. Annie Jacobsen, in her book The Pentagon’s Brain: An Uncensored History of DARPA, America’s Top-Secret Military Research Agency (2015), describes how he attended meetings discussing strategic hamlets in Southeast Asia and collaborated on proposals with others who conducted Cold War social science research. And Levine mentions Licklider’s involvement with a symposium that addressed how computers might be useful in conducting counterinsurgency work.

But Levine only points to one specific ARPA-funded computing research project that might have had something to do with counterinsurgency. In 1969, Licklider — no longer at ARPA — championed a proposal for a constellation of research efforts to develop statistical analysis and database software for social scientists. The Cambridge Project, as it was called, was a joint effort between Harvard and MIT. Formed at the height of the antiwar movement, when all DoD funding was viewed as suspicious, it was greeted with outrage by student demonstrators. As Levine mentions, students on campuses across the country viewed computers as large, bureaucratic, war-making machines that supported the military-industrial complex.

Levine makes a big deal of the Cambridge Project, but is there really a concrete connection between surveillance, counterinsurgency, computer networking, and this research effort? If there is, he doesn’t present it in the book. Instead, he relies heavily on an article in the Harvard Crimson by a student activist. He doesn’t even directly quote from the project proposal itself, which should contain at least one or two damning lines. Instead, he lists types of “data banks” the project would build, including ones on youth movements, minority integration in multicultural societies, and public opinion polls, among others. The project ran for five years but Levine never tells us what it was actually used for.

It’s worth pointing out that the DoD was the only organization that was funding computing research in a manner that could lead to real breakthroughs. Licklider and others needed to present military justification for their work, no matter how thin. In addition, as the 1960s came to a close, Congress was tightening its purse strings, which was another reason to trump up their relevance. It’s odd that an investigative reporter like Levine, ever suspicious of the standard line, should take the claims of these proposals at face value.

I spoke with John Klensin, a member of the Cambridge Project steering committee who was involved from the beginning. He has no memory of such data banks. “There was never any central archive or effort to build one,” he told me. He worked closely with Licklider and other key members of the project, and he distinctly recalls the tense atmosphere on campuses at the time, even down to the smell of tear gas. Oddly enough, he says some people worked for him by day and protested the project by night, believing that others elsewhere must be doing unethical work. According to Klensin, the Cambridge Project conducted “zero classified research.” It produced general purpose software and published its reports publicly. Some of them are available online, but Levine doesn’t cite them at all. An ARPA commissioned study of its own funding history even concluded that, while the project had been a “technical success” whose systems were “applicable to a wide variety of disciplines,” behavioral scientists hadn’t benefited much from it. Until Levine or someone else can produce documents demonstrating that the project was designed for, or even used in, counterinsurgency or surveillance efforts, we’ll have to take Klensin at his word.

As for the ARPANET, Levine only provides one source of evidence for his claim that, from its earliest days, the experimental computer network was involved in some kind of surveillance activity. He has dug up an NBC News report from the 1970s that describes how intelligence gathered in previous years (as part of an effort to create dossiers of domestic protestors) had been transferred across a new network of computer systems within the Department of Defense.

This report was read into the Congressional record during joint hearings on Surveillance Technology in 1975. But what’s clear from the subsequent testimony of Assistant Deputy Secretary of Defense David Cooke is that the NBC reporter had likely confused several computer systems and networks across various government agencies. The story’s lone named source claims to have seen the data structure used for the files when they arrived at MIT. It is indeed an interesting account, but it remains unclear what was transferred, across which system, and what he saw. This incident hardly shows “how military and intelligence agencies used the network technology to spy on Americans in the first version of the Internet,” as Levine claims.

The ARPANET was not a classified system — anyone with an appropriately funded research project could use it. “ARPANET was a general purpose communication network. It is a distortion to conflate this communication system’s development with the various projects that made use of its facilities,” Vint Cerf, creator of the internet protocol, told me. Cerf concedes, however, that a “secured capability” was created early on, “presumably used to communicate classified information across the network.” That should not be surprising, as the government ran the project. But Levine’s evidence merely shows that surveillance information gathered elsewhere might have been transferred across the network. Does that count as having surveillance “baked in,” as he says, to the early internet?

Levine’s early history suffers most from viewing ARPA or even the military as a single monolithic entity. In the absence of hard evidence, he employs a jackhammer of willful insinuations as described above, pounding toward a questionable conclusion. Others have noted this tendency. He disingenuously writes that, four years ago, a review of Julian Assange’s book in this very publication accused him of being funded by the CIA, when in fact its author had merely suggested that Levine was prone to conspiracy theories.

It’s a shame because today’s internet is undoubtedly a surveillance platform, both for governments and the companies whose cash crop is our collective mind. To suggest this was always the case means ignoring the effects of the hysterical national response to 9/11, which granted unprecedented funding and power to private intelligence contractors. Such dependence on private companies was itself part of a broader free market turn in national politics from the 1970s onward, which tightened funds for basic research in computing and other technical fields — and cemented the idea that private companies, rather than government-funded research, would take charge of inventing the future. Today’s comparatively incremental technical progress is the result. In The Utopia of Rules (2015), anthropologist David Graeber describes this phenomenon as a turn away from investment in technologies promoting “the possibility of alternative futures” to investment in those that “furthered labor discipline and social control.” As a result, instead of mind-enhancing devices that might have the same sort of effect as, say, mass literacy, we have a precarious gig economy and a convenience-addled relationship with reality.

Levine recognizes a tinge of this in his account of the rise of Google, the first large tech company to build a business model for profiting from user data. “Something in technology pushed other companies in the same direction. It happened just about everywhere,” he writes, though he doesn’t say what the “something” is. But the lesson to remember from history is that companies on their own are incapable of big inventions like personal computing or the internet. The quarterly pressure for earnings and “innovations” leads them toward unimaginative profit-driven developments, some of them harmful.

This is why Levine’s unsupported suspicion of government-funded computing research, regardless of the context, is counterproductive. The lessons of ARPA prove inconvenient for mythologizing Silicon Valley. They show a simple truth: in order to achieve serious invention and progress — in computers or any other advanced technology — you have to pay intelligent people to screw around with minimal interference, accept that most ideas won’t pan out, and extend this play period to longer stretches of time than the pressures of corporate finance allow. As science historian Mitchell Waldrop once wrote, the polio vaccine might never have existed otherwise; it was “discovered only after years of failure, frustration, and blind alleys, none of which could have been justified by cost/benefit analysis.” Left to corporate interests, the world would instead “have gotten the best iron lungs you ever saw.”

Computing for the benefit of the public is a more important concept now than ever. In fact, Levine agrees, writing, “The more we understand and democratize the Internet, the more we can deploy its power in the service of democratic and humanistic values.” Power in the computing world is wildly unbalanced — each of us mediated by and dependent on, indeed addicted to, invasive systems whose functionality we barely understand. Silicon Valley only exacerbates this imbalance, in the same manner that oil companies exacerbate climate change or the financialization of the economy exacerbates inequality. Today’s technology is flashy, sexy, and downright irresistible. But, while we need a cure for the ills of late-stage capitalism, our gadgets are merely “the best iron lungs you ever saw.”

Source: This article was published on lareviewofbooks.org By Eric Gade

Published in Online Research

Consumers do enjoy the convenience of the apps they use but are individually overwhelmed when it comes to defending their privacy.

When it comes to our collective sense of internet privacy, 2018 is definitely the year of awareness. It’s funny that it took Facebook’s unholy partnership with a little-known data-mining consulting firm named Cambridge Analytica to raise the alarm. After all, there were already abundant examples of how our information was being used by unidentified forces on the web. It really took nothing more than writing the words "Cabo San Lucas" as part of a throwaway line in some personal email to a friend to initiate a slew of Cabo resort ads and Sammy Hagar’s face plastering the perimeters of our social media feeds.

In 2018, it’s never been more clear that when we embrace technological developments, all of which make our lives easier, we are truly taking hold of a double-edged sword. But has our awakening come a little too late? As a society, are we already so hooked on the conveniences internet-enabled technologies provide us that we’re hard-pressed to claim we want control of our personal data back?

It’s an interesting question. Our digital marketing firm recently conducted a survey to better understand how people feel about internet privacy issues and the new movement to re-establish control over what app providers and social networks do with our personal information.

Given the current media environment and scary headlines regarding online security breaches, the poll results, at least on the surface, were fairly predictable. According to our study, web users overwhelmingly object to how our information is being shared with and used by third-party vendors. No surprise here, a whopping 90 percent of those polled were very concerned about internet privacy. In a classic example of "Oh, how the mighty have fallen," Facebook and Google have suddenly landed in the ranks of the companies we trust the least, with only 3 percent and 4 percent of us, respectively, claiming to have any faith in how they handled our information.

Despite consumers’ apparent concern about online security, the survey results also revealed participants do very little to safeguard their information online, especially if doing so comes at the cost of convenience and time. In fact, 60 percent of them download apps without reading terms and conditions and close to one in five (17 percent) report that they’ll keep an app they like, even if it does breach their privacy by tracking their whereabouts.

While the survey reveals only 18 percent say they are “very confident” when it comes to trusting retail sites with their personal information, the sector is still on track to exceed $410 billion in e-commerce spending this year. This, despite more than half (54 percent) reporting they feel less secure purchasing from online retailers after reading about online breach after online breach.

What's become apparent from our survey is that while people are clearly dissatisfied with the state of internet privacy, they feel uninspired or simply ill-equipped to do anything about it. It appears many are hooked on the conveniences online living affords them and resigned to the loss of privacy if that’s what it costs to play.

The findings are not unique to our survey. In a recent Harvard Business School study, people who were told the ads appearing in their social media timelines had been selected specifically based on their internet search histories showed far less engagement with the ads, compared to a control group who didn't know how they'd been targeted. The study revealed that the actual act of company transparency, coming clean about the marketing tactics employed, dissuaded user response in the end.

As is the case with innocent schoolchildren, the world is a far better place when we believe there is an omniscient Santa Claus who magically knows our secret desires, instead of it being a crafty gift exchange rigged by the parents who clearly know the contents of our wish list. We say we want safeguards and privacy. We say we want transparency. But when it comes to a World Wide Web, where all the cookies have been deleted and our social media timeline knows nothing about us, the user experience becomes less fluid.

The irony is that almost two-thirds (63 percent) of those polled in our survey don’t believe that companies having access to our personal information leads to a better, more personalized online experience at all, which is the chief reason companies like Facebook cite for wanting our personal information in the first place. And yet, when an app we've installed doesn't let us tag our location to a post, inform us when a friend has tagged us in a photo, or alert us that the widget we were searching for is on sale this week, we feel slighted by our brave new world.

With the introduction of GDPR regulations this summer, the European Union has taken, collectively, the important first steps toward regaining some of the online privacy that we, as individuals, have been unable to take. GDPR casts the first stone at the Goliath that’s had free rein leveraging our personal information against us. By doling out harsh penalties and fines for those who abuse our private stats -- or at least those who aren’t abundantly transparent as to how they intend to use those stats -- the EU, and by extension, those countries conducting online business with them, has finally initiated a movement to curtail the hitherto laissez-faire practices of commercial internet enterprises. For this cyberspace Wild West, there’s finally a new sheriff in town.

I imagine that our survey takers applaud this action, although only about 25 percent were even aware of GDPR. At least on paper, the legislation has given us back some control over the privacy rights we’ve been letting slip away since we first signed up for a MySpace account. Will this new regulation affect our user experience on the internet? More than half of our respondents don’t think so, and perhaps, for now, we are on the way toward a balancing point between the information that makes us easier to market to and the information that’s been being used for any purpose under the sun. It’s time to leverage this important first step, and stay vigilant of its effectiveness with a goal of gaining back even more privacy while online.

Source: This article was published on entrepreneur.com By Brian Byer

Published in Internet Privacy

The kids of today are comfortable in the digital space. They use digital diaries and textbooks at school, communicate via instant messaging, and play games on mobile devices.

However, as much as the Internet is an incredible resource, access to it can be dangerous for children. Parents who want their children to spend time online safely and productively need to understand the basic concepts of digital security and the associated threats, and be able to explain them to their children.

With this in mind, Kaspersky Lab compiles an annual report, based on statistics received from its solutions and modules with child protection features, which examines the online activities of children around the world.

Video content

According to the report, globally, video content made up 17% of Internet searches. Although many videos watched as a result of these searches may be harmless, it is still possible for children to accidentally end up watching videos that contain harmful or inappropriate content.

The report presents search results in the ten most popular languages over the last six months. The data shows that the 'video and audio' category, which covers requests related to any video content, streaming services, video bloggers, series, and movies, is the most regularly 'Googled', making up 17% of the total requests.

Second and third places go to translation (14%) and communication (10%) Web sites respectively. Gaming Websites sit in fourth place, generating only 9% of the total search requests.

Kaspersky Lab has also noted a clear language difference for search requests. "For example, video and music Web sites are typically searched for in English, which can be explained by the fact that the majority of movies, TV series and musical groups have English names. Spanish-speaking kids carry out more requests for translation sites, while communication services are mostly searched for in Russian."

Chinese-speaking children look for education services, while French kids are more interested in sport and games Web sites. German children dominate in the "shopping" category, Japanese kids search for Anime, and the highest number of search requests for pornography are in Arabic.

Anna Larkina, a Web-content analysis expert at Kaspersky Lab, says children around the world have varying interests and online behaviors, but what links them all is their need to be protected online from potentially harmful content.

"Children looking for animated content could accidentally open a porn video. Or they could start searching for innocent videos and unintentionally end up on Web sites containing violent content, both of which could have a long-term impact on their impressionable and vulnerable minds," she says.

A local view

In addition to analyzing searches, the report also delves into the types of Web sites children visit, or attempt to visit, which contain potentially harmful content that falls under one of the 14 pre-set categories, which cover Internet communication sites, adult content, narcotics, computer games, gambling and many others.

The data revealed that in South Africa, communication sites (such as social media, messengers, or e-mails) were the most popular category, accounting for 69% of pages visited.

However, the percentage for this category is dropping each year, as mobile devices play an increasingly large role in children's online activities.

The second most popular category of Web sites visited in SA is 'software, audio, and video', accounting for 17%. Web sites with this content have become significantly more popular since last year, when the category ranked only fifth globally, at 6%.

Rounding out the top four are electronic commerce (4.2%) and, new compared to this time last year, Web sites about alcohol, tobacco, and narcotics (3.9%).

Education

Irrespective of what children are doing online, it is important for parents not to leave their children's digital activities unattended, says Larkina.

"While it is important to trust your children and educate them about how to behave safely online, even your good advice cannot protect them from something unexpectedly showing up on the screen. That's why advanced security solutions are key to ensuring children have positive online experiences, rather than harmful ones," she concludes.

Source: This article was published itweb.co.za

Published in Internet Privacy


The internet contains at least 4.5 billion websites that have been indexed by search engines, according to one Dutch researcher. That huge number barely scratches the surface of what's really out there, however. The rest is known as the deep web, which is 400 to 500 times larger than the surface internet, according to some estimates.

What Makes The Deep Web ... Deep?

It's not deep like sad, non-rhyming poetry, nor is it deep like the unexplored depths of the ocean. The deep web is actually so accessible that you use it every time you check your email. What sets it apart is that its sites can't be reached via search engine; the "let me Google that for you" meme, delightful as it is, doesn't apply. You need to know the URL or have access permissions to view a deep-web site.
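One mechanism that keeps a page out of search results is a robots.txt file telling crawlers to stay away (most deep-web pages are instead hidden behind logins or simply unlinked, but the effect is the same: no index entry, no Google result). As an illustrative sketch, Python's standard-library `urllib.robotparser` can check what a rule like this permits; the paths and domain here are made up for the example:

```python
# Sketch: how a "Disallow" rule in robots.txt keeps a crawler away from
# certain paths. The domain and paths below are hypothetical examples.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A well-behaved search-engine crawler would skip the disallowed path:
print(rp.can_fetch("*", "https://example.com/private/report"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True
```

Pages excluded this way (or gated behind a login form) never enter a search engine's index, which is exactly what makes them "deep" rather than hidden in any exotic sense.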

The deep web is about as mundane as the surface web, really — it's just wrapped in a thin layer of secrecy. Mostly, it's emails, social media profiles, subscription sites like Netflix, and anything you need to fill out a form to access. But because the deep web is hidden from search engines, some people use it for more nefarious purposes.

Welcome to the Dark Web

The dark web and the deep web aren't synonymous. The dark web is a sliver of the deep web made up of encrypted sites. Here, near-total anonymity reigns. Encrypted sites lack the DNS and IP addresses that usually make websites identifiable. More confusing still: To access them, users have to use encrypting software that masks their IP addresses, making the users hard to identify, too.

Unsurprisingly, many dark-web sites specialize in illegal goods and services. The now-defunct Silk Road, for instance, was an online drug store — and not in the CVS sense. When its creator, Ross Ulbricht, was arrested in 2013, Silk Road had 12,000 listings for everything from weed to heroin. (Ulbricht was sentenced to life in prison.) The dark web also provides shady resources for hitmen, terrorists, and other criminals; overall, its illicit marketplaces generate more than $500,000 per day. Just accessing the dark web can set off red flags at the FBI.

This is ironic, since Tor, the most popular software for making and accessing dark websites, was originally created by the U.S. Navy. Even today, Tor is funded by the U.S. government. Washington isn't secretly supporting the online heroin trade, though — there are actually plenty of other, less shady uses for Tor's encrypting services. When activists speak out against authoritarian regimes, for instance, Tor can help them protect their privacy; the same goes for whistleblowers, and Average Joes spooked by Facebook's forthcoming eye-tracking feature. Never forget: Tor can also get you into ... dark web bookclubs? If you're into that.
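To make the mechanics concrete: Tor exposes a local SOCKS proxy, and applications reach .onion sites by routing their traffic through it. The following is a minimal sketch, not a working Tor client; it assumes a Tor daemon is already running on its default port (9050), and the helper function is a rough format check for modern (v3) onion hostnames, which are 56 base32 characters followed by ".onion":

```python
# Sketch (illustrative only): pointing an application's traffic at a
# local Tor SOCKS proxy. Assumes the Tor daemon is listening on its
# default port, 9050. "socks5h" means hostnames are resolved through
# the proxy, which is required for .onion addresses.

TOR_SOCKS = "socks5h://127.0.0.1:9050"
proxies = {"http": TOR_SOCKS, "https": TOR_SOCKS}

BASE32_ALPHABET = "abcdefghijklmnopqrstuvwxyz234567"

def looks_like_onion(host: str) -> bool:
    """Rough check for a v3 .onion hostname (56 base32 chars + '.onion')."""
    if not host.endswith(".onion"):
        return False
    label = host[: -len(".onion")]
    return len(label) == 56 and all(c in BASE32_ALPHABET for c in label)

# With the third-party `requests` library, a dark-web request would look like:
#   requests.get("http://<56-char-label>.onion/", proxies=proxies)
```

The point of the sketch is that nothing about the request itself is exotic; the anonymity comes entirely from the proxy layer, which hides both the client's IP address and the server's location.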

 Source: This article was published curiosity.com

Published in Deep Web

There are so many different tools and technologies available on the internet today, and so many associated terms and concepts. As I think about topics to focus on here in the coming months, I want to make sure we're touching on the most important ones. What are the most important internet technologies for educators to be aware of, and informed about?

I'm sure many people would probably come up with a slightly different list, but based on my observations and experiences, and feedback from faculty at my institution, I have selected the following technologies. I do not mean to imply that every educator should be expected to use all of these technologies in the classroom, but rather that every educator should understand what these are, the potential they have in the classroom, and how their students may already be using them.

1. Video and Podcasting – One of the most widely adopted internet technologies for use in instructional settings is video streaming. Between YouTube, TeacherTube, EduTube, and many other video hosting sites, there are an abundance of lectures, how-to videos, and supporting materials available in the form of web-based video. Podcasting has also been used to provide similar offerings of audio materials through popular sites like iTunes. [Click here to learn more about video hosting for education, or here to learn more about podcasting for education.]

2. Presentation Tools – This category is vast and rich. There are hundreds (perhaps thousands) of tools on the Internet that can be used to create and share presentations, from simple PowerPoint slide players like SlideShare to multimedia timeline tools like Vuvox and OneTrueMedia. These tools can be used to support classroom teaching or distance learning, or for student reports and presentations.


3. Collaboration & Brainstorming Tools – This is another wide ranging category, including thought-organizing tools like mindmap and bubbl.us, and collaborative tools like web based interactive whiteboards and Google Documents. Additionally, some of the other tools in this list, such as wikis and virtual worlds, also serve as collaboration tools.

4. Blogs & Blogging – Bloggers and many other regular Internet users are well aware of blogs and blogging, but there are many other professionals who really are not frequenters of the “blogosphere”. In addition to a basic familiarity with this technology, educators should be aware of sites like Blogger and WordPress, where users can quickly and easily create their own blogs for free.

5. Wikis – The use of Wikis in educational settings is growing every day. Sites like Wetpaint and others allow users to create free wiki web sites and are a great way to get started with using wikis for educational applications. [Click here to learn more about the use of Wikis in education].

6. Social Networking – All educators should have a basic understanding of sites like Facebook and MySpace and how they are used. This doesn't mean they need accounts on these sites (and many educators would recommend against using these sites to communicate with their students), but they should understand what they are and how they are being used. Educators should also be aware of the professional social networking site LinkedIn.

7. IM – A large percentage of students use IM regularly, via AIM, the IM aggregator site Meebo (which lets users combine messaging from AIM, Yahoo, MySpace, Facebook, and other services), or other tools. It behooves educators to be aware of this, and I have even come across various articles about using IM within the classroom setting (such as this one from Educause).

8. Twitter – This listing is really focused on technologies, not specific applications, but this application is currently just too popular to ignore. You should at least understand what it is and the fundamentals of how it is used. [Click here for some insight into how Twitter can be used in education.]

9. Virtual Worlds – This technology has received a lot of press, with SecondLife being the clear leader thus far in this application area. In my experience, the use of SecondLife has been somewhat constrained by high bandwidth and processing power requirements, but this also means that there is still considerable room for increased adoption of the application as systems continue to become more powerful and high-speed bandwidth more prevalent. Active Worlds is one of a number of competing technologies, and provides a "universe" dedicated to education that has been popular with educators.

10. RSS Feeds – RSS allows users to create their own "push" data streams (that is, define data flows you want coming to you automatically, rather than having to go and "pull" the information with a Google search or other browsing effort). RSS feeds enable you to take advantage of streams of published content that will be waiting in your inbox, or in an RSS reader, when you want them. There are RSS feeds available for many topics and many web sites.
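Under the hood, an RSS feed is just an XML document that a reader polls and parses. As a minimal sketch using only Python's standard library, here is how a reader might pull the headlines out of an RSS 2.0 feed; the feed content below is an invented example (a real reader would fetch the XML from the feed's URL with `urllib.request` instead):

```python
# Minimal sketch: extracting headlines from an RSS 2.0 feed using only
# the standard library. The feed XML is inline and hypothetical; a real
# reader would download it from the feed's URL on a schedule.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def headlines(rss_xml: str) -> list[str]:
    """Return the <title> of every <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(headlines(SAMPLE_FEED))  # → ['First post', 'Second post']
```

This is the whole "push" illusion: the reader re-fetches the XML periodically and surfaces anything new, so content appears to arrive on its own.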

While many readers may have their own interpretation of which technologies are essential for educators to be aware of, I think this is a great list to get started with. Of course, this list will require updating over time, as technologies change and as educators' uses of these technologies evolve. As always, reader input is welcomed. What do you think? Is this a good top 10? Would you like to see some other technologies listed here? Feel free to comment and offer your insights, please. Thanks!

Source: This article was published emergingedtech.com Kelly Walsh

Published in Science & Tech
