Jay Harris

As Google increasingly incorporates direct answers and other types of featured snippets into search results pages, columnist Andrew Shotland points out that businesses may want to get smarter about marking up their pages.

I have been noticing a lot of Google Answer Boxes showing up for queries with local intent these days. My recent post, “Are You Doing Local Answers SEO?”, pointed out this fantastic result HomeAdvisor is getting for “replace furnace” queries:

Replace Your Damn Furnace Already

When clients get these local answer boxes, they often perform significantly better than regular #1 organic listings. In our opinion, these seem to be driven primarily by the following factors:

Domain/page authority
Text that appears to answer the query
Easy-to-understand page structures (broken up into sections that target specific queries, tables, prices and so on). Schema is not necessary here, but it helps.
For more of a deep dive on how these work, see Mark Traphagen’s excellent summary of last year’s SMX West panel on The Growth of Answers SEO.

But I am not here to talk about how great answer boxes are. I am here to talk about this result that recently popped up for “university of illinois apartments”:

Google Answer Boxes Gone Wild

At first glance, you might think this was a basic list of apartments for rent near the university. But if you look closer at the grid of data, you will see that it looks more like part of a calendar, which is pretty useless.

Many searchers may look past this and just click on the link, but this got me thinking that I really don’t want Google controlling what parts of my site get shown in the SERPs, particularly when it looks more like a Lack of Knowledge Box.

Imagine if some unsavory user comments on the page appeared in the answer box. Not only would this be a useless result, but it might also damage your brand. The apartments result might make some searchers think ApartmentFinder is a bad site. So what went wrong here?

If you examine the ApartmentFinder URL in the answer box, you’ll notice that it doesn’t display any calendar in the UI. But if you search the code for “calendar,” you’ll see:

Calendar Code

This shows that there is some kind of calendaring app in a contact form.

As you can see from the next screen shot, the first Contact button that appears on the page is fairly close to the h1 that reads, “81 apartments for rent near the University of Illinois”:

Calendar Contact

And if you click on the Contact button, you get a pop-up form with a calendar:

Calendar Pop Up

It seems that Google is:

assuming the query deserves a more complex list of results than the standard SERP;
looking for the data closest to the strongest instance of the query text on the page (the h1); and
taking the first thing that looks like a table of data and putting it on the SERP. (I’m sure it’s more complicated than that, but the toy sketch below captures the apparent gist.)
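To make that guess concrete, here is a toy sketch in Python using the BeautifulSoup library. This is my speculation about the behavior, not Google’s actual algorithm: find the heading that best matches the query, then grab the first table-like element after it.

```python
# Toy sketch of the hypothesized answer-box heuristic -- speculation,
# not Google's actual algorithm.
from bs4 import BeautifulSoup


def guess_answer_table(html: str, query: str):
    """Find the heading that best matches the query, then return the
    first table that follows it (which may be a hidden calendar!)."""
    soup = BeautifulSoup(html, "html.parser")
    words = set(query.lower().split())

    headings = soup.find_all(["h1", "h2", "h3"])
    if not headings:
        return None

    # The strongest instance of the query text on the page.
    best = max(headings,
               key=lambda h: len(words & set(h.get_text().lower().split())))

    # The first thing after it that looks like a table of data.
    return best.find_next("table")


html = """
<h1>81 Apartments for Rent near the University of Illinois</h1>
<form><table><tr><td>Mon</td><td>Tue</td></tr></table></form>
"""
print(guess_answer_table(html, "university of illinois apartments"))
```

Run against markup like ApartmentFinder’s, a heuristic of this shape would happily return the contact form’s calendar table.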

So what can you do to avoid this?

Mark up your data with schema.org markup, as in the example after this list. That gives you the best chance of keeping Google from getting your info wrong. (On that note, the Schema.org site itself is kind of a drag to use. Try Google’s own Structured Data documentation instead; it has all of the schema guidance you’ll need, plus some material that isn’t on Schema.org.)
Make sure the content you want to appear in answer boxes sits closest to the on-page text that most strongly matches the query (often the h1, but it could be a subheading as well). If possible, create multiple subheadings that target different queries (e.g., “cheap apartments for rent,” “pet friendly apartments,” and so on). For more on why this matters, check out Dave Davies’ great take on Google engineer Paul Haahr’s recent SMX West presentation on how Google works. And while you’re at it, Rae Hoffman’s take on it is pretty great, too.
Put your content in a simple table on the page, or at least make it easy for Google to build its own. Because ApartmentFinder doesn’t label each listing on that page with its type, it is hard for Google to show a table of, say, one-bedroom apartments for rent at specific prices. Just adding “1BR” in text to each one-bedroom result may be enough to fix the problem.
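To illustrate the first recommendation, here is a minimal, hypothetical JSON-LD snippet for a single listing. All names and values are invented for illustration; check Google’s structured data documentation for the exact types and properties it actually supports:

```html
<!-- Hypothetical example: one apartment listing marked up with schema.org
     JSON-LD. All values are invented for illustration. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Offer",
  "price": "850",
  "priceCurrency": "USD",
  "itemOffered": {
    "@type": "Apartment",
    "name": "1BR apartment near the University of Illinois",
    "numberOfRooms": 1,
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "Champaign",
      "addressRegion": "IL"
    }
  }
}
</script>
```

Marking each listing up this way (note the explicit numberOfRooms) gives Google unambiguous structure to draw on, rather than leaving it to guess from nearby widgets.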

Figuring out how to impact the answer box displays is akin to what we all went through trying to figure out how to influence what shows up for titles, descriptions and rich snippets. It can take a bit of trial and error, but when it works, it can be the answer to your SEO prayers.

Source:  http://searchengineland.com/dont-trust-google-structure-local-data-246585

Wednesday, 18 May 2016 13:09

What Am I Searching (And Do I Care?)

Indexed discovery services are a lot like Google: you can search simply and get results quickly, but the sources you are searching tend to be mysterious. Do you know if you are searching specialized sources or generic sources? Authoritative or with an agenda? When a researcher pushes the search button, they get whatever results are deemed relevant from whatever sources are included, and they can’t limit their search to only the sources that matter to them.

 

For some researchers, knowing the sources behind the search really makes no difference to them at all. To these researchers, often undergraduates, it’s the results that count. Most results nowadays do show the source, publication or journal that result is from. This makes it somewhat easier to eyeball a page of results, disregard those from irrelevant sources, or select results as appropriate if they are from an authoritative source. But that research methodology seems inefficient to say the least.

 

Serious researchers, on the other hand, want to know what they are searching. If they know that their information will most likely be in three or four specific resources out of the 20 or 30 their organization subscribes to, then why should they wade through a massive results list or spend one iota of extra time filtering out the extra sources to view their nuggets of information? (Quick answer: they shouldn’t.)

 

To begin to combat this resource transparency problem, libraries are creating separate web pages of source lists and descriptions for serious researchers. Who is the provider? What is the resource and what information exactly does the resource provide? These pages also include the categories of sources searched such as ebooks, articles, multimedia collections, etc. While these lists of resources are certainly helpful to document, is it fair to ask researchers to reference a separate web page to understand what digital content is included in their search, particularly when they are urgently trying to find something from a particular source of information? Or should we ask researchers to disregard knowing what sources they are searching and to just pay attention to the results?  Neither of these seems appropriate in this day and age.

 

Most of us know the benefits of a single search of all resources. One search improves the efficiency of searching disparate sources, makes comparing and contrasting results faster, and provides an opportunity to save, export or email selected results. However, Explorit Everywhere! goes one step further by lending transparency of sources to researchers so they can search even faster. One of our customers mentioned that they moved from a well-known discovery service because they were frustrated with all of the news results that were returned. It didn’t help that their researchers couldn’t select specific sources to search, particularly when their searches always seemed to bring back less-than-relevant results.

 

Explorit Everywhere! helps to narrow a search up front with not only the standard Advanced Search fields, but a list of sources to pick and choose from. A researcher looking to search in four specific sources doesn’t want to run a search against 25 sources. They can narrow the playing field and home in on the needle in the haystack faster. And from the results page, they can limit to each individual source to view only those results, in the order that the source ranked them. A serious researcher’s dream? That’s what we’ve heard.
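The mechanics behind that kind of up-front source selection are easy to sketch. Below is a generic illustration in Python of source-scoped federated search; it is not Explorit Everywhere!’s actual code or API, and the source names are invented. Only the selected sources are queried, and each result stays tagged with, and ordered by, its source:

```python
# Generic sketch of source-scoped federated search -- illustrative only,
# not Explorit Everywhere!'s implementation. Source names are invented.
from concurrent.futures import ThreadPoolExecutor

# Stand-in connectors: each maps a query to that source's ranked results.
SOURCES = {
    "Medical journals": lambda q: [f"Journal article on {q} #{i}" for i in (1, 2)],
    "Patent office":    lambda q: [f"Patent mentioning {q} #{i}" for i in (1, 2)],
    "News wire":        lambda q: [f"News story on {q} #{i}" for i in (1, 2)],
}

def federated_search(query, selected_sources):
    """Search only the sources the researcher picked, in parallel,
    preserving each source's own ranking of its results."""
    def run(name):
        return name, SOURCES[name](query)
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(run, selected_sources))

# Narrow the search to two of the available sources up front.
results = federated_search("gene therapy", ["Medical journals", "Patent office"])
for source, hits in results.items():
    print(source, hits)
```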

 

Not all researchers care about drilling down into individual sources like this. But in Explorit Everywhere! the option is there to search the broad or narrow path. We even filter out the rocks.

 

Source:  http://www.deepwebtech.com/2016/03/what-am-i-searching-and-do-i-care/


Google's Self-Driving Car Project and Fiat Chrysler last week announced that they would integrate autonomous vehicle technology into 2017 Chrysler Pacifica Hybrid minivans as part of Google's testing program.

It is the first time Google has worked directly with a car manufacturer to integrate its self-driving technology into a passenger vehicle.

 

Google will add 100 Chrysler Pacifica Hybrid vehicles, designed and engineered by Fiat Chrysler, to its existing self-driving test program -- more than doubling the number of cars participating in the program. Google will integrate the sensors and computers the vehicles use to navigate roads without a driver.

 

Both companies will place a portion of their engineering teams in a facility in southeastern Michigan to speed up the development of self-driving cars.

 

Safer Roads

 

"The opportunity to work closely with FCA engineers will accelerate our efforts to develop a fully self-driving car that will make our roads safer and bring everyday destinations within reach for those who cannot drive," said John Krafcik, CEO of Google's Self-Driving Car Project.

 

Self-driving technology has the potential to help prevent many of the roughly 33,000 auto-related deaths that occur each year, 94 percent of which are due to human error, the companies said.

 

Google is testing self-driving cars in four U.S. cities: Mountain View, Calif.; Austin, Texas; Kirkland, Wash.; and Phoenix. Google's self-driving team will test the self-driving minivans on its private test track in California, prior to being deployed on public roads, the company said.

 

Google won't sell the vehicles being tested with the autonomous technology. However, the team is studying how community members perceive and interact with the autonomous vehicles, and it will use those findings to smooth out vehicle behavior so the minivans feel more natural to people both inside and outside them, Google's Self-Driving Car Project said in a statement provided to TechNewsWorld by spokesperson Lauren Barriere.

 

Google Steps Ahead

 

The announcement means Google has taken a huge leap forward ahead of the competition in the development of self-driving cars, according to Colin Bird, senior analyst at IHS.

 

"Google is on the vanguard of deploying self-driving, driverless car software," he told TechNewsWorld. "The main issue they were facing was who was going to license it for the vehicles, as Google has shown no indication of wanting to make a vehicle themselves."

 

The collaboration suggests that Fiat Chrysler would be interested in deploying Google's L5 technology -- driverless and requiring no human intervention -- when the system is commercialized, Bird suggested.

 

Until now, Google has been using modified Lexus and Toyota SUVs and hybrids as well as 100 pod cars developed by its own engineers.

 

The Chrysler Pacifica minivan could become part of an autonomous on-demand network of vehicles through a car-as-a-service model, Bird said. The minivan is "space optimized, features plenty of seats, and previous FCA minivan models have been modified to be wheelchair accessible."

 

Other Chrysler minivan models have been integral components in car-for-hire fleets, he noted.

 

Industry-Wide Race

 

The Google announcement marks the latest advance in the rush to develop autonomous vehicles.

 

Late last month, Google announced an alliance with Ford, Uber, Lyft and Volvo called the Self-Driving Coalition for Safer Streets, designed to promote the safety of autonomous vehicles. David Strickland, formerly of the U.S. National Highway Traffic Safety Administration, was named the national spokesperson for the coalition.

 

Apple last month reportedly hired Chris Porritt, former vice president of vehicle engineering at Tesla, to head up its Project Titan top-secret car program in Germany.

 

Source: http://www.technewsworld.com/story/83482.html

The prospective scale of the Internet of Things (IoT) has the potential to fill anyone looking from the outside with the technical equivalent of agoraphobia. However, from the inside, the view is very different. Looked at in detail, it is a series of intricate threads being aligned by a complex array of organizations.

As with any new technological epoch, questions around shape, ownership and regulation are starting to rise. Imagine trying to build the Internet again. It’s like that, but at a bigger scale.

The first hurdle is that of technological standards. We are at a pivotal moment in the development of the IoT. As the diversity of connected things grows, so does the potential risk of each “thing” being unable to talk to the others.

This begins with networking standards. From ZigBee to Z-Wave, EnOcean, Bluetooth LE or SigFox and LoRa, there are simply too many competing and incompatible networking standards from which to choose. Luckily enough, things seem to be converging and consolidating.

Moreover, the already well-established alliances are regrouping. First in the indoors world, where ZigBee 3.0 is getting closer to Google’s Thread, albeit still challenged by the Bluetooth consortium, which is about to release the Bluetooth mesh standard. More interestingly, the Wi-Fi Alliance is working on IEEE 802.11ah, known as HaLow. All three standards specifically target lower power requirements and better range, tailored for the IoT.

Similarly, in the outdoors world, the Next Generation Mobile Networks (NGMN) Alliance (working closely with the well-established GSMA, which rules the world of mobile standards) is working on an important piece of the puzzle for the world of smart things: 5G. With increased data rates, lower latency and better coverage, 5G will be vital for handling the multitude of individual connections, and it will be a serious global competitor to existing LPWANs (Low-Power Wide-Area Networks) such as SigFox and LoRa.

Security is one of the biggest barriers preventing mainstream consumer IoT adoption.

Whilst trials are currently taking place, commercial deployment is not expected until 2020. Before this can happen, spectrum auctions must be completed: typically a government-refereed scrap between technology and telecoms companies, with battle lines drawn on price. It’s important to put an early stake in the ground with regulators to ensure sufficient spectrum is available at a cost that encourages the IoT to flourish, instead of leaving it at the mercy of inflated wholesale prices.

But the challenge doesn’t stop at the network level; the data or application level is also a big part of the game. The divergence in application protocols is only being compounded as tech giants begin to make a bid to capture the space. Apple HomeKit, Google Weave and a number of other initiatives are attempting to promote their own ecosystems, each with their own commercial agendas.

Left to evolve in an unmanaged way, we’ll end up with separate disparate approaches that will inexcusably restrict the ability of the IoT to operate as an open ecosystem. This is a movie we’ve seen before.

The web has already been through this messy process, eventually standardizing itself through Darwinian selection among technologies and practices of use. The web provided a simple and scalable application layer for the Internet: a set of standards that any node of the Internet could use, whatever physical technology it used to connect.

The web is what made the Internet useful and ultimately successful. This is why a Web of Things (WoT) approach is essential. Such an approach has substantial support already: a Web Thing Model has recently been submitted to the W3C, based on research done by a mixture of tech giants, startups and academic institutes. These are early, tentative steps toward an open and singular vision for the IoT.
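To give a flavor of what a uniform application layer buys you, here is a small illustrative sketch in Python. The description format is invented for this example and is not the actual Web Thing Model submission: the idea is simply that a device publishes a self-describing document, so any client that speaks HTTP and JSON can discover its capabilities, regardless of the radio underneath.

```python
# Purely illustrative "web thing" self-description -- an invented format,
# not the actual Web Thing Model submitted to the W3C.
import json

lamp = {
    "name": "Living-room lamp",
    "properties": {
        "on":         {"type": "boolean", "href": "/things/lamp/on"},
        "brightness": {"type": "integer", "minimum": 0, "maximum": 100,
                       "href": "/things/lamp/brightness"},
    },
}

# Whether the lamp connects over ZigBee, BLE or LoRa, a web client only
# needs to parse this document to learn what the device can do.
print(json.dumps(lamp, indent=2))
```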

The resolution of this issue opens up the possibility of a vast collaborative network, where uniform data can optimize a wide array of existing processes. However, as data gradually becomes the most valuable asset of a slew of once-inanimate objects, what does this mean for the legacy companies that build these products and have had no previous data strategy?

The tech sector is comfortable with sharing and using such information, but for companies that have their grounding in making everything from light bulbs to cars, this is a new concept. Such organizations have traditionally had a much more closed operational approach, treating data like intellectual property — something to be locked away.

 

To change this requires a cultural shift inside any business. Whilst this is not insurmountable by any means, it brings to the fore the need to effect a change in mind-set inside the boardroom. For such a sea change to happen, it will require education, human resources and technology investment.

The security of a smart object is only as strong as its weakest connected link.

Security is one of the biggest barriers preventing mainstream consumer IoT adoption. A Fortinet survey found that 68 percent of global homeowners are concerned about a data breach from a connected device. And they should be: Take a quick look at Shodan, an IoT search engine that gives you instantaneous access to thousands of unsecured IoT devices, baby monitors included! In 2015, the U.S. Federal Trade Commission stated that “perceived risks to privacy and security…undermine the consumer confidence necessary for technologies to meet their full potential.”

For manufacturers to boost consumer confidence, they must be able to demonstrate that their products are secure, something that seems to have come under increasing pressure lately. The problem with security is that it is simply never achieved. Security is a constant battle against the clock, deploying patches and improvements as they come.

This can clearly be overwhelming for product manufacturers. Relying on an established IoT platform that has implemented comprehensive, robust security methodologies, and that can guide them through such a complex area, is therefore a wise move.

Consumers also share some responsibility for the security of their data: using strong passwords for product user accounts and on Internet-facing devices like routers and smart devices; using encryption (such as WPA2) when setting up Wi-Fi networks; and installing software updates promptly.

However, as consumer adoption of IoT rises, it is critical for manufacturers to ensure that the security of smart, connected products is at the heart of their IoT strategy. After all, the security of a smart object is only as strong as its weakest connected link.

Coupled with security, emergent issues around data privacy, sharing and usage will become something everyone will have to tackle, not just tech companies. In the data-driven world of IoT, the data that gets shared is more personal and intimate than in the current digital economy.

For example, consumers can trade, through their bathroom scales, protected data such as health and medical information, perhaps for a better health insurance premium. But what happens if a consumer is supposed to lose weight, and ends up gaining it instead? What control can consumers exert over access to their data, and what are the consequences?

Consumers should be empowered with granular data-sharing controls (not all-or-nothing sharing), and should be able to monetize the data they own and generate. Consumers should also have a “contract” with a product manufacturer that adjusts over time, whether actively or automatically, and that spells out the implications both of a break in data sharing and of data that turns out to be unfavorable.
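A minimal sketch of what granular control could mean in practice (a hypothetical policy structure, invented for illustration): each kind of reading carries its own sharing decision, and anything without explicit consent never leaves the device.

```python
# Hypothetical per-field consent policy -- the opposite of all-or-nothing
# sharing. Field names and values are invented for illustration.
consent = {
    "weight": True,        # traded with the insurer for a better premium
    "heart_rate": False,   # stays on the device
    "location": False,
}

def shareable(reading: dict) -> dict:
    """Keep only the fields the user has explicitly opted to share."""
    return {k: v for k, v in reading.items() if consent.get(k, False)}

reading = {"weight": 82.5, "heart_rate": 64, "location": "51.5,-0.1"}
print(shareable(reading))   # -> {'weight': 82.5}
```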

The onus here also lies on regulators to ensure that legal frameworks are in place to build trust into the heart of the IoT from the very beginning. The industry needs to embrace this and embark on an open and honest dialogue with users from the start. Informed consent will never be more important, as data and metadata from connected devices can build a hyper-personalized picture of individuals.

Brands would be wise to understand that the coming influx of consumer data is a potential revenue stream that must be protected and nurtured. As such, the perception of privacy and respect is paramount for long-term engagement with customers. So much so that product manufacturers will likely start changing their business models to create data-sharing incentives, and perhaps even give their products away for free.

Due to its massive potential, the Internet of Things is advancing apace, driven largely by technology companies and academic institutions. However, only through wide-scale education and collaboration beyond this group will it truly hit full stride and make our processes, our resource utilization and, ultimately, our lives better.

Source:  http://techcrunch.com/2016/02/25/the-politics-of-the-internet-of-things/ 

Researchers at nine UK universities will work together over the next three years on a £23m ($33.5m) project to explore the privacy, ethics, and security of the Internet of Things.

 

The project is part of 'IoTUK', a three-year, £40m government programme to boost the adoption of IoT technologies and services by business and the public sector. The Petras group of universities is led by UCL with Imperial College London, University of Oxford, University of Warwick, Lancaster University, University of Southampton, University of Surrey, University of Edinburgh, and Cardiff University, plus 47 partners from industry and the public sector.

 

Professor Philip Nelson, chief executive of the UK's Engineering and Physical Sciences Research Council, said in the not-too-distant future almost all our daily lives will be connected to the digital world, while physical objects and devices will be able to interact with each other, ourselves, and the wider virtual world.

 

"But, before this can happen, there must be trust and confidence in how the Internet of Things works, its security, and its resilience," he said.

The research will focus on five themes: privacy and trust; safety and security; harnessing economic value; standards, governance, and policy; and adoption and acceptability. Each will be looked at from a technical point of view and the impact on society.

 

Initial projects include large-scale experiments at the Queen Elizabeth Olympic Park; the cybersecurity of low-power body sensors and implants; understanding how individuals and companies can increase IoT security through better day-to-day practices; and ensuring that connected smart meters are not a threat to home security.

 

It's still early days for the IoT but already concerns have surfaced about the security and privacy of the technology, and how the data generated by, for example, fitness monitors or other home systems can be used by the companies that collect it.

 

Source: http://www.zdnet.com/article/researchers-investigate-the-ethics-of-the-internet-of-things/

IBM Research is making its quantum processor available to the public via IBM's cloud to any desktop or mobile device.

"This moment represents the birth of quantum cloud computing," Arvind Krishna, senior vice president and director of IBM Research, said in a statement today. "Quantum computers are very different from today's computers, not only in what they look like and are made of, but more importantly in what they can do. Quantum computing is becoming a reality and it will extend computation far
The cloud-enabled quantum computing platform, dubbed the IBM Quantum Experience, is designed to let people use individual quantum bits, also known as qubits, to run algorithms and experiments on IBM's quantum processor.

 

Jay Gambetta, manager of Theory of Quantum Computing and Information at IBM, told Computerworld that the public use of Quantum Experience will be free.

 

"Since this is open to the public, there is no organization or business that will have priority," said Gambetta. "There are several opportunities for material and drug design, optimization, and other commercially important applications where quantum computing promises to offer significant value beyond what classical computers can offer."


Charles King, an analyst with Pund-IT, Inc., said IBM's 5-qubit processor should be powerful enough to handle a variety of research and other computations.

 

"I personally believe this is a very big deal," he added. "First and foremost, it should significantly broaden interest in and work around quantum computing. At this point, those efforts are mainly being performed by researchers associated with companies and labs able to afford highly experimental and highly expensive quantum technologies."

 

King also noted that providing public access should help validate work being done on quantum computing algorithms and applications, which previously could only be run in simulations.

 

"The project demonstrates that IBM's concepts around quantum processors work, can be reproduced and are stable enough to support cloud-based access and services," said King. "If the project succeeds and leads to a clearer understanding of quantum computing, as well as workable larger systems, it will definitely be remembered as a game changer."

 

Earl Joseph, an IDC analyst, noted that in addition to fully building a quantum computer, the big challenge is figuring out how to program it. IBM’s move to engage the public should help with that.

 

“This experiment provides the opportunity for a large group of people to start to learn how to program quantum computers, which will help to develop ways to use this new type of technology,” said Joseph. “Hopefully, it will help to motivate students to go into quantum computing programming as a field of research…. It’s a milestone in allowing a larger number of people around the world to get their hands on this.”

Richard Doherty, an analyst with The Envisioneering Group, called the IBM move a potential game changer.

 

“Quantum computers may be the most compelling, rich-data, cognitive engines for decades to come,” he said. “Our eagerness to solve business, and societal IT and calculation challenges seems limitless. Data farms and smart data demand quantum computing power. If you make it, they will come. IBM and the public get to establish this.”

 

Although D-Wave Systems Inc., a Canadian company, has said it's built a quantum computer and Google and NASA are testing their own quantum hardware, many in the computer industry and the world of physics say a full-scale quantum computer has not yet been created.

IBM isn't saying it's built a quantum computer. What it has are quantum processors, which are much smaller than a full-scale computer.

 

According to IBM, four to five qubits is the minimum number required to support quantum algorithms and simple applications. IBM's quantum processor contains five qubits.

 

The company noted that its scientists think in the next 10 years they'll have medium-sized quantum processors of 50 to 100 qubits, which they believe will be capable of tapping into quantum physics.

 

At 50 qubits, IBM contends, classical computers could not compete with it in terms of speed when running complex calculations.
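A back-of-the-envelope calculation (my arithmetic, not IBM's) suggests why: simulating n qubits on a classical machine means tracking 2^n complex amplitudes, and 2^50 is roughly 1.1 quadrillion. At 16 bytes per amplitude, storing a single 50-qubit state vector would take about 18 petabytes of memory, before performing any computation on it.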

A quantum computer uses qubits instead of the bits used in classical computers. A qubit can be both a one and a zero at the same time, a state known as superposition. Because of this, a quantum machine doesn't have to work through possibilities one at a time; in effect, it can weigh many possibilities at once.

 

That means quantum machines should be able to work on problems requiring complex and massive calculations much faster.
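A tiny simulation makes the idea concrete. This is textbook single-qubit math in NumPy (my sketch, not IBM's code): a Hadamard gate puts a qubit into an equal superposition of one and zero, and a measurement collapses it to a single bit.

```python
# Minimal single-qubit simulation in NumPy -- textbook quantum mechanics,
# not IBM's code. A qubit's state is a 2-vector of complex amplitudes.
import numpy as np

zero = np.array([1, 0], dtype=complex)        # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

psi = H @ zero                # equal superposition of |0> and |1>
probs = np.abs(psi) ** 2      # Born rule: probability = |amplitude|^2
print(probs)                  # -> [0.5 0.5]

# Measuring collapses the superposition to a definite 0 or 1.
print(np.random.choice([0, 1], p=probs))
```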

Scientists hope quantum computers will eventually be used to find distant habitable planets, create greater computer security and find a cure for cancer and heart disease.

 

IBM's current quantum processor is being housed at the IBM T.J. Watson Research Center in New York.

"By giving hands-on access to IBM's experimental quantum systems, the IBM Quantum Experience will make it easier for researchers and the scientific community to accelerate innovations in the quantum field, and help discover new applications for this technology," said Krishna.

 

Source: http://www.computerworld.com/article/3065422/cloud-computing/ibm-makes-quantum-computing-available-in-the-cloud.html

Life is full of big decisions: getting married, buying a home, picking your default Web browser.

I’m serious. Think about where you spend the majority of your time on your computer or phone. It’s inside those four WWW walls.

“Apps will kill the Web!” prognosticators proclaimed, as if Achilles and his Greek army were invading. Yet on our app-packed smartphones and tablets, the browser is still the first stop to look something up. Not that you can always do that quickly: From typing URLs to managing tabs, our browsing problems only get worse on the small screen. On our more spacious laptops and desktops, the browser has become the home of our apps—our email, calendar, word processor, photo library and more.

If browsers have never been more important, why are you using the wrong one? Nearly 40% of computer-based Web surfers still use Microsoft’s Internet Explorer, according to NetMarketshare. You realize that browser is not only sluggish but about as secure as a camping tent, right?

I’m not saying there is a perfect browser—except for my dog, named Browser, that is. But the best one for any device should nail the four S’s: simplicity, stamina, speed and security. A fifth S would be syncing—in a perfect world, all our gadgets would share browser settings, bookmarks and history.

After testing multiple browsers on many computers and smartphones, I’ve determined which ones you should be using—and found shortcuts to use with them.

Windows Computers: Chrome or Edge

If you’re using Internet Explorer on a Windows 7 or Windows 8 computer, please stop reading and go download Google’s Chrome. Once you see how much faster and cleaner it is, you’ll want to celebrate with cocktails. Don’t worry about leaving bookmarks behind; they’re coming too. (Just follow these transfer instructions.)

Not even Microsoft wants you to use outdated Internet Explorer anymore. It’s why Windows 10 comes with Edge, a brand new browser with an intuitive, modern interface. Goodbye ugly buttons and cluttered toolbars! It’s also why choosing a browser on Windows 10 is tough.

In industry benchmarks and my own speed tests, Edge and Chrome were neck and neck for first place. Firefox and Opera—two clunky yet long-surviving third-party browsers—trailed. Internet Explorer barely placed.

Yet unlike Chrome, Edge doesn’t hog so much of a computer’s power. On a Web-browsing battery test, the Dell XPS 13 lasted an hour longer with Edge than with Chrome. When streaming Netflix, it lasted two full hours longer. And security experts say Edge is as secure as Chrome.

So finally, Windows 10 PC buyers don’t need to download a new browser? Not exactly. Edge is still too rough around the...edges. Since it’s new, Web developers haven’t really focused on it, so Web apps can be slow or erratic. Plus, it doesn’t support feature-adding extensions. In Chrome, I use a calendar, to-do list and tab manager. Microsoft is adding extensions in the next version.

I suggest you use Chrome on Windows 10. The exception: Edge will eke out better performance on underpowered Windows 10 laptops and tablets.

Mac Computers: Safari or Chrome

Google’s Chrome has long been my default browser on Apple laptops, but my tests all proved this was a poor life decision. Apple’s Safari consistently scored 10% to 15% higher on speed tests. On systems with the weakest processors, like the new MacBook, Chrome occasionally rendered the system unusably slow.


 

Yet again, the less-taxing browser led to noticeably better battery life. On a Web surfing test with the MacBook and a 13-inch MacBook Pro, Safari provided one more hour of battery life than Chrome. In a Netflix streaming test, the results were even more drastic: When streaming “Daredevil” on the MacBook Pro, Safari beat Chrome by two hours.

Chrome may be the top browser on the market, but its power hunger can make you want to avoid it entirely. Chrome product management director Rahul Roy-Chowdhury says Mac and Windows performance has become a big area of focus. Before every Chrome update, thousands of tests are run on many different Mac and Windows devices, he says.

On more powerful desktops or laptops, I’d still likely opt for Chrome. In the latest Mac OS X release, El Capitan, Safari has borrowed most of Chrome’s best features—including pinned tabs—yet Chrome still has a larger variety of extensions. Chrome is also easier to use when you’ve got dozens of tabs fighting for your attention, thanks to those tiny website icons appearing on each tab.

Some may be wary of using Chrome because of Google’s use of private data to improve its search experiences. But keep in mind that if you use Google’s search or other services in any browser, you’ll likely log in and be tracked anyway. Google provides full details on what Chrome does and doesn’t collect here. Most browsers, including Chrome, have no-tracking privacy modes.

 
iPhone and iPad: Safari
 
 
There are loads of things I love about third-party browsers for the iPhone or iPad. I love how Dolphin lets you swipe to see your open tabs. I love Opera Mini’s data-saving features. I love the simple layout on Chrome and Firefox.

 

But Apple doesn’t let you change your default browser, so none of that matters. Whenever you click a link in your email or text messages, Safari and only Safari will launch. Apple says it helps maintain an integrated experience. (Also, third-party iOS browsers including Chrome have to use Safari’s browsing engine, so there aren’t performance advantages to using them.)

So yes, Safari is the best browser to use on the iPhone and iPad. If you also use Safari on your Mac, you can easily sync your tabs, bookmarks and other settings across devices. Hit the tab button and scroll down to see them listed.

 
Android Smartphones, Tablets: Chrome

On Android, since Google supports changing your default browser, your choices are vast. In addition to Chrome and Opera, there’s Firefox, Dolphin and Puffin—not zoo animals, actual browser options.


In speed tests, Firefox, Puffin and Opera often beat Chrome, yet I didn’t find those speed improvements to outweigh Chrome’s superior interface and Android integration: For easy access, your tabs can even appear alongside open apps in the app switcher.

Additionally, if you’re also using Chrome on your laptop or desktop, it seamlessly syncs your open sites, settings and passwords with your phone or tablet. (On Samsung phones, make sure you’re using Chrome and not Samsung’s own browser.)

Chrome has a data-saver feature, like the others, which compresses and optimizes parts of a site while you’re on a cellular connection. Chrome, however, doesn’t support ad or content blockers on mobile. If that’s important to you, try Firefox, my second pick for Android users.

Write to Joanna Stern.

Source : http://www.wsj.com/articles/find-the-best-web-browser-for-your-devices-a-review-of-chrome-safari-and-edge-1462297625?mod=ST1

 

Saturday, 24 October 2015 18:10

Using the Internet for Research

The World Wide Web is an extraordinary resource for gaining access to information of all kinds, including historical materials, and each day a greater number of sources become available online. The advantages the internet offers students are tremendous; so much so that some may be tempted to bypass the library entirely and conduct all of their research on the web. The History Department wants CU students to pursue knowledge with every tool available, including the internet, so long as they do so judiciously.

It is important to know that the Web is an unregulated resource. Because anyone, even people with no expertise at all in your subject, can post anything at any time, many unreliable sources exist on the internet. Many sources on the web have proven to be unreliable, biased, and inaccurate. Too much reliance on the web could do more damage than good: checking the reliability and accuracy of information taken from random sites can take more time than going to the library, and using unchecked information from such sources could have a detrimental impact on your final grade.

The key is to learn how to use the web to your best advantage.

  • To determine the best application of internet sources to your particular assignment it is strongly recommended that students talk with their instructors. Ask what internet sources will make your research and learning experience most productive.
  • Just as there are countless questionable and unreliable sources on the web, there are a growing number of newspapers, journals, archives, historical societies, libraries, colleges, and universities that are making their holdings available to all. One invaluable source is the Library of Congress (www.loc.gov), which has made millions of sources, written and visual, accessible. Instructors and library research staff can help students locate many similar sites.
  • The internet should never be your only source when doing research. The best option for students is always the university’s libraries. Students should begin any research project by (1) familiarizing themselves with resources held in Norlin and other libraries around campus; and (2) accessing internet-based resources through the CU Library gateway.
  • A web-based tutorial, which will instruct library users on how to conduct web-based research, is available to everyone. It will show you: the difference between scholarly and popular sources, how to identify keywords, how to conduct searches on a library’s catalogue and through article databases, how to evaluate the integrity of sources, and how to use the information you find legally and ethically. The tutorial can be found at: http://ucblibraries.colorado.edu/pwr/public_tutorial/home.htm
  • History students can go to a page designed especially for them. This link will give you access to subject guides in history as well as introduce you to reliable internet and CU library resources: http://ucblibraries.colorado.edu/research/subjectguides/history.
  • The library maintains a page of electronic resources, including searchable databases such as JSTOR and EEBO, so that students can take advantage of the considerable resources available to members of the university community.

     Source:  http://www.colorado.edu/history/undergraduates/paper-guidelines/using-internet-research

 
