Ever since the days of MySpace, it has been clear that social media platforms have a bright future ahead. That became even clearer when Facebook launched and became an instant hit.

Since then, the social media world has expanded rapidly. People use it to post and share a variety of content, and messenger apps are increasingly used for direct online communication.

Social Media Trends

Statistics show that as of April 2016, Facebook stood as the number one social network with 1.6 billion active users. The data underscores that user engagement holds the key to the success of social media channels such as Facebook, Instagram, Twitter, and more.


User Engagement

In simple terms, user engagement is crucial: the average Facebook user spends over 36 hours on the platform every month.

Realizing that user or customer engagement can ensure success, businesses worldwide recognized it as a lucrative opportunity to extend their reach and client base. This is also one of the reasons social media marketing became huge, driving demand for internet marketing directors, digital strategists, content marketing managers, online community managers, social media strategists, online marketing specialists, and social media managers.

Social media platforms became new competitive grounds where companies built awareness and an online presence. In fact, many businesses used their official social media profiles to provide better customer service. This trend is growing fast because it lets businesses connect with both existing customers and prospects.

Understanding the relevance of knowledge management

Information is one of the most important resources any organization has. How information and operational knowledge are used and distributed can lead an organization to success. Efficient handling of these resources within an organization can be ensured by the use of a knowledge base. Effective knowledge base management is not limited to storing practices; it also involves identifying the best ones and developing efficient ways to put them to use for the benefit of stakeholders.


The benefits of having a knowledge base are many. Companies that have fully adopted this approach have witnessed a jump in productivity, improved workflows, a shortened onboarding phase, and better collaboration. These benefits are tied to an internal knowledge base, but an external knowledge base can also help strengthen client relationships. User guides, quick resolutions to common problems, and an engaging learning environment are some of the things an external knowledge base can deliver, which boosts customer satisfaction and increases the chances of referrals and mentions.

Today we have access to top-notch knowledge management solutions and to in-depth guides on how to build a knowledge base. The one thing that can still act as a hindrance is company culture.

Best practices tend to come from the most experienced workers, who usually play a key role in solving problems through collaboration and deduction. An organization therefore has to find a way to motivate such employees to share their knowledge with colleagues through the knowledge base. This is one of the key reasons knowledge management is fast gaining ground: companies have reported improvements across all departments after implementing it.

How can social media transform knowledge management?

The form and origin of the data found on social media and in a knowledge base are completely different. The only social aspect of data in knowledge management systems appears during the approval process, when an appointed employee reviews the credibility of the data source and its value for the organization before publishing it via the knowledge base software. This data can then be altered by an update that has to follow the same approval process.

On social media, things work differently. The social aspect removes boundaries and makes content interactive. Content becomes a key part of social interaction, and its meaning can shift as the discussion progresses, with every reply adding value. But how can the power of this information found on social media be harnessed in a knowledge base?

The solution is enterprise social computing, an emerging trend. Leaders in the artificial intelligence field have developed APIs that can be used to manage unstructured content such as that found on social media. Associating and categorizing unstructured data will provide companies with new insights into how to improve and increase their chances of success.
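To make the idea concrete, here is a minimal sketch of routing unstructured social posts into knowledge-base categories. Real enterprise systems use far more sophisticated NLP/AI services; the category names and keyword sets below are purely illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical keyword map: in practice these categories would be
# learned or curated, not hard-coded.
CATEGORY_KEYWORDS = {
    "billing": {"invoice", "refund", "charge", "payment"},
    "technical": {"error", "crash", "bug", "login"},
    "feedback": {"love", "great", "disappointed", "suggestion"},
}

def categorize(post):
    """Return the category whose keywords best match the post, if any."""
    words = set(post.lower().split())
    # Score each category by keyword overlap with the post.
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"

print(categorize("Got an error at login after the latest update"))  # technical
```

Once posts are tagged this way, they can be filed into the knowledge base with far less manual cleaning and validation.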


This newly compiled data can easily be integrated into the knowledge base without much time lost on cleaning, validating, and categorizing it. Social media is unlikely to push the knowledge base completely out of the picture, but it is safe to assume it will pose serious challenges to the practice.

Author : Robin Singh

Source : http://www.business2community.com/social-media/knowledge-management-age-social-media-01783155#dw6342D8vy7hKfxc.97

Categorized in Social

A researcher in Russia has made more than 48 million journal articles - almost every single peer-reviewed paper ever published - freely available online. And she's now refusing to shut the site down, despite a court injunction and a lawsuit from Elsevier, one of the world's biggest publishers.

For those of you who aren't already using it, the site in question is Sci-Hub, and it's sort of like a Pirate Bay of the science world. It was established in 2011 by neuroscientist Alexandra Elbakyan, who was frustrated that she couldn't afford to access the articles needed for her research, and it's since gone viral, with hundreds of thousands of papers being downloaded daily. But at the end of last year, the site was ordered to be taken down by a New York district court - a ruling that Elbakyan has decided to fight, triggering a debate over who really owns science. 


"Payment of $32 is just insane when you need to skim or read tens or hundreds of these papers to do research. I obtained these papers by pirating them," Elbakyan told Torrent Freak last year."Everyone should have access to knowledge regardless of their income or affiliation. And that’s absolutely legal."

If it sounds like a modern day Robin Hood struggle, that's because it kinda is. But in this story, it's not just the poor who don't have access to scientific papers - journal subscriptions have become so expensive that leading universities such as Harvard and Cornell have admitted they can no longer afford them. Researchers have also taken a stand - with 15,000 scientists vowing to boycott publisher Elsevier in part for its excessive paywall fees.

Don't get us wrong, journal publishers have also done a whole lot of good - they've encouraged better research thanks to peer review, and before the Internet, they were crucial to the dissemination of knowledge.

But in recent years, more and more people are beginning to question whether they're still helping the progress of science. In fact, in some cases, the 'publish or perish' mentality is creating more problems than solutions, with a growing number of predatory publishers now charging researchers to have their work published - often without any proper peer review process or even editing.

"They feel pressured to do this," Elbakyan wrote in an open letter to the New York judge last year. "If a researcher wants to be recognised, make a career - he or she needs to have publications in such journals."

That's where Sci-Hub comes into the picture. The site works in two stages. First of all when you search for a paper, Sci-Hub tries to immediately download it from fellow pirate database LibGen. If that doesn't work, Sci-Hub is able to bypass journal paywalls thanks to a range of access keys that have been donated by anonymous academics (thank you, science spies).

This means that Sci-Hub can instantly access any paper published by the big guys, including JSTOR, Springer, Sage, and Elsevier, and deliver it to you for free within seconds. The site then automatically sends a copy of that paper to LibGen, to help share the love. 



It's an ingenious system, as Simon Oxenham explains for Big Think:

"In one fell swoop, a network has been created that likely has a greater level of access to science than any individual university, or even government for that matter, anywhere in the world. Sci-Hub represents the sum of countless different universities' institutional access - literally a world of knowledge."

That's all well and good for us users, but understandably, the big publishers are pissed off. Last year, a New York court delivered an injunction against Sci-Hub, making its domain unavailable (something Elbakyan dodged by switching to a new location), and the site is also being sued by Elsevier for "irreparable harm" - a case that experts predict could win Elsevier between $750 and $150,000 for each pirated article. Even at the lowest estimate, that would quickly add up to millions in damages.

But Elbakyan is not only standing her ground, she's come out swinging, claiming that it's Elsevier that has the illegal business model.

"I think Elsevier’s business model is itself illegal," she told Torrent Freak,referring to article 27 of the UN Declaration of Human Rights, which states that"everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits".

She also explains that the academic publishing situation is different to the music or film industry, where pirating is ripping off creators. "All papers on their website are written by researchers, and researchers do not receive money from what Elsevier collects. That is very different from the music or movie industry, where creators receive money from each copy sold," she said.

Elbakyan hopes that the lawsuit will set a precedent, and make it very clear to the scientific world either way who owns their ideas.

"If Elsevier manages to shut down our projects or force them into the darknet, that will demonstrate an important idea: that the public does not have the right to knowledge," she said. "We have to win over Elsevier and other publishers and show that what these commercial companies are doing is fundamentally wrong."

To be fair, Elbakyan is somewhat protected by the fact that she's in Russia and doesn't have any US assets, so even if Elsevier wins their lawsuit, it's going to be pretty hard for them to get the money.

Still, it's a bold move, and we're pretty interested to see how this fight turns out - because if there's one thing the world needs more of, it's scientific knowledge. In the meantime, Sci-Hub is still up and accessible for anyone who wants to use it, and Elbakyan has no plans to change that anytime soon.


Source : http://www.sciencealert.com/this-woman-has-illegally-uploaded-millions-of-journal-articles-in-an-attempt-to-open-up-science

Categorized in Online Research

By now the majority of searchers on Google are familiar with the Knowledge Graph box, which appears on the right side of the search results page and highlights information about various entities (companies, people, places, things and so on).

In the past, I’ve written about how to both obtain and optimize a Knowledge Graph result. In my attempts to learn more and get a better understanding of Google’s Knowledge Graph, I’ve discovered some cool hacks and tricks. While these tricks do not have any significant practical implications, they can serve as a way to better understand how Google’s Knowledge Graph works, which can be used to optimize your personal or company Knowledge Graph results.

Before getting into the tricks, let’s break down some important items. For the examples below, I used Donald Trump.


Google Knowledge Graph API

The Google Knowledge Graph API is a tool that can be used to get insight into your Knowledge Graph result. It provides details about classification of the entity, score and ranking. It also provides a machine-generated identifier (MID), which is a unique code assigned to each entity.
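As a quick illustration of how such a lookup works, the snippet below builds a request URL for Google's Knowledge Graph Search API (`kgsearch.googleapis.com/v1/entities:search`). The `YOUR_API_KEY` value is a placeholder; a real key has to be created in the Google Cloud console, and the JSON response carries each entity's MID in its `@id` field along with a `resultScore`.

```python
from urllib.parse import urlencode

KG_API_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def build_kg_query(query, api_key, limit=1):
    """Build a request URL for the Knowledge Graph Search API.

    The JSON-LD response lists matching entities, each with an
    `@id` such as "kg:/m/0cqt90" and a ranking `resultScore`.
    """
    params = {"query": query, "key": api_key, "limit": limit}
    return f"{KG_API_ENDPOINT}?{urlencode(params)}"

# "YOUR_API_KEY" is a placeholder, not a working credential.
print(build_kg_query("donald trump", "YOUR_API_KEY"))
```

Fetching that URL (with a valid key) returns the entity details the article describes: classification, score, and the MID used in the tricks below.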

Google Knowledge Graph API

MID (machine-generated identifier)

Machine-generated identifiers, or MIDs, are unique entity IDs that were originally generated from Freebase. Although Freebase was ultimately shut down and the data was migrated to Wikidata, Google is still using the Freebase identifier codes for entities. This could eventually change to Wikidata Qids, but we have to wait and see what Google will do.


As you can see below, in the Knowledge Graph API results for “donald trump,” the president’s entity MID is /m/0cqt90.

Donald Trump MID Entity

Normal search engine results page (SERP)

Here is what the normal SERP looks like when a search for “Donald Trump” is completed. As you can see, Trump is an entity and has a detailed Knowledge Graph panel appearing alongside his search results.


Donald Trump SERP

Knowledge Graph tricks

Knowledge Graph-only SERP

By inserting the following modifiers into the search URL, we can have just the Knowledge Graph Panel appear on the SERP, with no paid or organic listings.


  • kponly  (a command that tweaks the SERPs to only show the Knowledge Graph panel)
  • kgmid= (a command that allows the SERP to be filtered specifically for the specified entity; simply add the MID number after the equal sign)

Original URL: https://www.google.com/search?q=donald+trump

Modified URL: https://www.google.com/search?q=donald+trump&kponly&kgmid=/m/0cqt90
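A small helper makes it easy to assemble these modified URLs. This is a sketch of the trick described above, assuming the undocumented `kponly` and `kgmid` parameters keep behaving as the article shows; the optional `lang` argument covers the `hl` language variant as well.

```python
from urllib.parse import urlencode

def knowledge_panel_url(query, mid, kp_only=False, lang=None):
    """Build a Google search URL pinned to a specific Knowledge Graph entity.

    `mid` is the entity's machine-generated identifier (e.g. "/m/0cqt90").
    `kp_only` appends the bare `kponly` flag so only the panel is shown;
    `lang` sets the `hl` language code (e.g. "es").
    """
    url = "https://www.google.com/search?" + urlencode({"q": query})
    if kp_only:
        url += "&kponly"      # flag takes no value
    url += "&kgmid=" + mid    # left unencoded so the /m/ path survives
    if lang:
        url += "&hl=" + lang
    return url

print(knowledge_panel_url("donald trump", "/m/0cqt90", kp_only=True))
# https://www.google.com/search?q=donald+trump&kponly&kgmid=/m/0cqt90
```

Passing a different `query` with the same `mid` reproduces the later tricks, such as forcing Trump's panel onto a "homer simpson" search.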

Donald Trump Knowledge Graph Result

Change the language of content in the Knowledge Graph

Adding the hl string to the URL and specifying a language code will change the language of the Knowledge Graph SERP. Here is an example of the content changed to Spanish.


Original URL: https://www.google.com/search?q=donald+trump

Modified URL: https://www.google.com/search?q=donald+trump&kponly&kgmid=/m/0cqt90&hl=es

Donald Trump Spanish

Change related images in the Knowledge Graph panel

Another trick that can be done is to change the related images within the Knowledge Graph panel to items you specify in your search query. For example, a regular search for “donald trump mexico” does not return a Knowledge Graph result.

Donald Trump Mexico No Knowledge Graph

However, by appending the MID to the search results URL, we can generate a Knowledge Graph result that contains images relevant to the search query.

Original URL: https://www.google.com/search?q=donald+trump+mexico

Modified URL: https://www.google.com/search?q=donald+trump+mexico&kgmid=/m/0cqt90

Donald Trump Mexico with Knowledge Graph

Generate a Knowledge Graph panel for an entity for any search query

This trick is my favorite, as it’s great for a prank or joke. Applying the same tactic from the previous example allows you to have a Knowledge Graph panel for your specified MID appear on the SERP for any search query — even if your Knowledge Graph entity is not related to your search.

Original URL: https://www.google.com/search?q=homer+simpson

Modified URL: https://www.google.com/search?q=homer+simpson&kgmid=/m/0cqt90

Donald Trump Homer Simpson

Note that this trick will override an existing Knowledge Graph panel for an entity. Without the “hacked” URL, a search for “Homer Simpson” generates a Knowledge Graph result for Homer Simpson, as expected.


I hope this helps shed some light on how some parts of Google’s Knowledge Graph work. It can be useful if you’re redirecting users to a Google SERP result for reasons related to business or humor.

Author : Tony Edward

Source : http://searchengineland.com/cool-tricks-hack-googles-knowledge-graph-results-featuring-donald-trump-268231

Categorized in Search Engine

Who invented the refrigerator? When was the Pleistocene era? How long do dolphins live?

Chances are, you don’t know the answers to these questions, but at least one of them made you think about Googling the answer. Type any of those questions into Google and you’ll see a small box above the conventional list-based search results, which concisely answers your question and links back to the source that provided it—all without you having to click any search results.

Some search queries even come with a full box of information on your chosen subject, off to the right side, such as the cast and crew of popular movies or a brief synopsis of a politician’s career.


Enter the Knowledge Graph

Already, we’re starting to take these revolutionary information sources for granted, but they only exist thanks to Google’s provision of “rich answers” and the Knowledge Graph, Google’s intelligent internal encyclopedia of information. These major search developments are bringing more information to users than ever before, and faster than ever before, but they’re also complicating the world of search engine optimization (SEO) and digital advertising. For example, these advancements may lead to lower click-through rates for some organic search results, or a lower return on “general information” content.

But how will rich answers and the Knowledge Graph change from here? Based on Google’s past and a reasonable expectation of technological progress to come, I have seven predictions:

1. Spoken answers will rise in popularity.

As of last year, about 20 percent of all mobile queries were voice searches. That number has grown consistently alongside the prevalence of mobile devices, and it continues to grow today. I expect even steeper growth as voice recognition software grows more sophisticated and users become more trusting. When that happens, spoken answers - delivered as dialogue-like responses - will become more popular in turn. That means Google's visual layout will become less relevant, and fewer and fewer users will rely on traditional SERPs for their needs.

2. Rich answers may soon completely take over.

Since their inception, the prevalence of rich answers in search queries has grown tremendously, with occasional bursts of growth corresponding to increases in Google’s capacity. You’ve likely noticed this yourself, as your general-knowledge queries have become faster and easier to address with a simple search. This growth rate is unlikely to wane anytime soon, and in the next few years, I anticipate the majority of queries will return some kind of rich answer. Even hyper-specific questions won’t be exempt. Why? Because Google wants to keep you on its own domain as much as possible - in order to expose you to more advertising, which makes it money. When you click a result in its search results, you wind up on someone else’s domain - not Google’s.

3. Answers will extend beyond simple responses.

Google is also making strides in expanding the types of content offered in search results. It already provides a simple calculator, unit conversions, and translations between languages, and I’d be willing to bet the Knowledge Graph is ready to do even more for consumers. SERPs’ built-in functionalities are about to experience a leap forward in both diversity and sophistication, especially as app streaming and other app-centric display technologies begin to emerge.


4. Rich answers may branch into a separate category of search.

This is more speculative, but it’s possible that the demand for rich answers grows so great that it splinters into a separate category of search altogether. Google search algorithms may branch users into main categories based on intent, with rich answers provided to those in need of a quick answer and traditional SERP listings for those interested in a specific company website, or more information in general. In line with this, we may see the development of a competing search engine that specializes in the provision of this kind of information.

5. User preferences may soon become a variable.

Your social media apps know an uncomfortable amount of information about you. Your search apps potentially know even more. Google has remained the dominant search engine in the world in part because of its commitment to personalized search results. Soon, your search history and personal preferences may begin to factor more heavily into the type of rich answers you see (and how often you see them).

6. Smart homes will impact future development greatly.

With the rise of Google Home (and competitors like Amazon Echo), I suspect that user search habits will change significantly. Consumers will be making even more in-the-moment queries, and search engines will need to provide faster, conversational, and personalized responses. Once smart home technology hits a certain popularity threshold, its effects on search habits will propel rich answers and the Knowledge Graph into completely new territory.

7. The Knowledge Graph will start running itself.

Google is a big fan of machine learning, and RankBrain is an early indication that one day, Google’s search algorithm may be able to evaluate and update itself. It’s not only feasible, but likely, that similar machine learning technologies will start to dictate the future of the Knowledge Graph, sharply increasing its curve of development and making it a truly “intelligent” archive of online information. From that point on, its future is, by definition, unpredictable.

Rich answers and the Knowledge Graph aren’t going to cripple your SEO strategy, and they aren’t going to permanently take over the internet—at least not in any way that limits your potential as a digital marketer. Instead of being feared or avoided, they should simply be considered, or even taken advantage of.

Adjusting your SEO strategy isn’t strictly about linear progress; it’s about adapting to new circumstances, and these are some of the latest that should be on your radar. Keep watch for the changes to come, and remain flexible enough to compensate for them.

Author : Jayson DeMers

Source : http://www.forbes.com/sites/jaysondemers/2017/01/27/7-predictions-on-the-future-of-rich-answers-and-the-google-knowledge-graph/#5c99bece41e1

Categorized in Search Engine

The world’s most extensive “knowledge graph” may not be at Google. Vertical search site FindTheBest (FTB) has rebranded and relaunched as Graphiq, a data visualization or knowledge graph search engine.

Using a huge volume of data sources, automation and human editorial oversight, the company says that it has created “the world’s deepest and most interconnected Knowledge Graph, featuring 1,000 collections, 1 billion entities, 120 billion attributes, and 25 billion curated relationships.”

Founder Kevin O’Connor told me that FTB’s experience creating roughly 18 distinct search verticals loaded with structured data was the foundation for the new site. There’s considerable sophistication behind the scenes, enabling the new site to dynamically generate 10 billion data visualizations.

Here are a few examples:


While it’s difficult to determine, Graphiq may offer the widest selection of structured data (and associated visualizations) anywhere online. The data and graphics range from international GDP comparisons to healthcare stats to the historical popularity of US baby names and well beyond.


FTB’s original vertical search sites are not being promoted, but they can still be found in general search results. For example, people will still be able to use and search for houses on the company’s real estate vertical (FindTheHome), look up credit card rates on its financial site (Credio) or do research on colleges and universities (StartClass). Indeed, many of the Graphiq visualizations click through to these underlying vertical comparison sites.

While the FTB vertical engines are consumer-facing, Graphiq is positioned very differently and is directed mainly toward journalists, researchers, publishers and enterprises. However, this is merely one business model expression of the underlying data model.

Publishers can use Graphiq’s charts as content, and journalists can sign up for alerts and research or embed graphics in their stories. In this sense, Graphiq is not that far removed from Nate Silver’s FiveThirtyEight.com.


Graphiq says it has done enterprise integrations with AOL/The Huffington Post, MSN, Hearst and several other large publishers. There are also WordPress plugins and custom integrations.

In these enterprise integrations, Graphiq will generate or recommend data and charts for stories based on an automated analysis of content. For example, the system might suggest a visualization like the above about a story written on Obamacare enrollment. Beyond this, users can simply search or browse the data in a more conventional way.

FindTheBest began as an effort to improve upon Google and provide structured comparison information and “answers not links.” It had terrific data but struggled to generate consumer awareness and create a brand. This shift (or expansion, as the company explains it) offers a more comprehensive enterprise-facing tool that offers immediate and obvious value. But there are many other interesting ways the underlying technology and data could be used.

Author : Greg Sterling

Source : http://searchengineland.com/graphiq-search-findthebest-pivots-to-become-knowledge-graph-engine-227660

Categorized in Search Engine

The most mysterious technological object on the planet should have been destroyed at least three times.

First, the device made it through a violent shipwreck in the Mediterranean Sea. Then, it sat submerged in salt water on a sandy cliff 200 feet below the surface of the ocean for more than two millennia. After it was hauled back to dry land in the year 1901, the object was forgotten for nearly a year. A lump of corroded bronze and shredded wood, it was left to rot in an ordinary crate in the open courtyard of the National Archaeological Museum in Athens.

It should have disintegrated. It almost did.

At the time, museum workers were focused on other things. The bizarre events that led to the object’s discovery began in the autumn of 1900, when fishermen diving for sea sponges off the coast of Antikythera, Greece, came face to face with a ghastly sight. The seabed they searched wasn’t dotted with sponges. It was strewn with bodies.

The first sponge diver to resurface was panicked by what he’d seen. There were too many men and horses for him to count; presumably they’d been doomed in a shipwreck. Except they weren’t corpses, after all. The bodies were statues, part of an astounding collection of ancient works, a blockbuster archaeological find.


Over the course of the next 10 months, divers recovered scores of marble and bronze artifacts from the Antikythera shipwreck, which today remains the largest ancient ship ever found. Nearly all of the ship's equipment was massively oversized—including tremendous hull planks more than 4 inches thick. (They left behind even more treasures than they collected, opting to scrap the recovery after one man died of the bends and two others were paralyzed.) The shipwreck made headlines around the world—in part because it yielded several rare bronze statues, which scholars believed might be the work of Lysippos or Praxiteles, two of the most important Classical Greek sculptors of the fourth century B.C.E., according to newspaper reports at the time.  

But the divers had dredged up something even more precious. They wouldn’t realize it until nearly a year later, when museum curators peered into a forgotten crate in an Athens courtyard and began to examine the hunk of oxidized metal inside.

The corroded device still bore faded inscriptions and it appeared to have the guts of a clock, mechanics that didn’t make any sense. After all, the lump had been found among the wreckage of a ship that sailed the Mediterranean more than 1,000 years before timekeeping gearwork first appeared in Medieval Europe. When the ship went down, no one on the planet was supposed to have had complex scientific instruments—what was this thing?

It came to be known as the Antikythera Mechanism. In the decades that followed, with ever more sophisticated technology to guide them, researchers would begin to understand how the peculiar device once worked. Today, the mechanism is often described as the world’s oldest computer—more precisely, it seemed to be an analog machine for modeling and predicting astronomical and calendrical patterns. Even before it was lost, the device must have been a treasure. When it was new, the mechanism was a turn-crank marvel housed in a rectangular wooden case, like a mantel clock, with two dials on the back. Instead of having two hands to tell the time on the front, the mechanism had seven hands for displaying the movement of celestial bodies—the sun, the moon, Mercury, Venus, Mars, Jupiter, and Saturn. The planets were represented by tiny spheres that could themselves rotate, with the moon painted black and silvery white to depict its phases.

Yet the mystery of the mechanism is only partly solved. No one knows who made it, how many others like it were made, or where it was going when the ship carrying it sank. More than a century since it was discovered, the Antikythera Mechanism remains one of the strangest objects that has survived from the ancient world.

“We know what it did, but we don’t know exactly why they wanted it to do that, what it was used for, and the context in which it was used,” said Jo Marchant, the author of Decoding the Heavens: A 2,000-Year-Old Computer and the Century-Long Search to Discover Its Secrets. “We don’t know whether it was a teaching instrument in a school, or if a rich person would have had this on their dining table, whether it had religious importance, whether it had an astrological meaning - just what it meant to people.”

The prevailing theory today is that the mechanism was manufactured in Rhodes, perhaps for a buyer in Greece. Marine archaeologists and other researchers who have studied the Antikythera shipwreck believe the vessel was a gargantuan grain transporter, packed with valuable works of art, technology, and other luxury goods likely intended for trade, that set sail around 70 B.C.E. (Scholars suspect that grain would have been a natural, useful packing material.) It’s possible that the ship carried many strange and wonderful automata. One of the statues recovered from the site appears to have once stood on an automated pedestal.

Those who have studied the shipwreck believe the vessel could have carried several twins of the Antikythera Mechanism. The mechanism as it was recovered is split into three pieces and represents only a portion of the device as it was built. Scholars believe the rest of it was either destroyed, or is still on the seafloor, covered in sand. “Clearly, this mechanism wasn’t a one-off,” Marchant told me. “It was too sophisticated. It must be part of a whole tradition of these mechanisms.”

“What I believe is that it cannot be just one mechanism and there must be more of them somewhere else,” said Theotokis Theodoulou, an archaeologist and the head of Underwater Antiquities for Greece’s Ministry of Culture. “The Antikythera shipwreck could be such a site.”

Another possibility is more startling: What if other objects like the Antikythera Mechanism have already been discovered and forgotten? There may well be documented evidence of such finds somewhere in the world, in the vast archives of human research, scholarly and otherwise, but simply no way to search for them. Until now.

* * *

Scholars have long wrestled with “undiscovered public knowledge,” a problem that occurs when researchers arrive at conclusions independently from one another, creating fragments of understanding that are “logically related but never retrieved, brought together, [or] interpreted,” as Don Swanson wrote in an influential 1986 essay introducing the concept. “That is,” he wrote, “not only do we seek what we do not understand, we often do not even know at what level an understanding might be achieved.” In other words, on top of everything we don’t know, there’s everything we don’t know that we already know.

Solving this problem, Swanson argued, would require efforts “no less profound than trying to formalize human language, creativity, or inventiveness.” Thirty years after he published his essay, we no longer have to rely on human contrivances alone. Now, with the ubiquity of the internet and the rise of machine learning, a new kind of solution is beginning to take shape. The infrastructure of the web, built to link one resource to the next, was the beginning. The next wave of information systems promises to more deeply establish links between people, ideas, and artifacts that have, so far, remained out of reach—by drawing connections between information and objects that have come unmoored from context and history.

A simple Google search for “Antikythera Mechanism” turns up about 351,000 results, the first several pages of which are news articles, a Wikipedia page, and a few academic papers. These results offer decent context for what the device is, and the mystery surrounding it, but none of them go very deep. It would take quite a bit of additional reading and searching, for instance, to get to the 10th-century Arabic manuscript, discovered in the 1970s, that some researchers believe is proof that the Antikythera Mechanism directly influenced the development of modern clockwork, more than a millennium after the shipwreck at Antikythera.

Discovery in the online realm is powered by a mix of human curiosity and algorithmic inquiry, a dynamic that is reflected in the earliest language of the internet. The web was built to be explored not just by people, but by machines. As humans surf the web, they’re aided by algorithms doing the work beneath the surface, sequenced to monitor and rank an ever-swelling current of information for pluckable treasures. The search engine’s cultural status has evolved with the dramatic expansion of the web. Once a mere organizer of information, Google is now treated as an oracle.


The tipping point for this perception came sometime between 1993 and 1995, as the total number of websites online grew from about 130 to nearly 24,000. In 1994, for instance, a web search for the word “culinary” turned up nothing, according to a New York Times story published the following year. Within months, the same search yielded 800 websites. Search “culinary” today and you get 97 million results. There are, as of this writing, billions upon billions of webpages across more than 1 billion websites online, according to Internet Live Stats, and the galactic growth of the web over the course of the past two decades has required search engines to become smarter and faster as a result.

Google won the first battle of the search engines because of its obsession with relevancy, using a variety of weighted factors, such as a site’s quality or popularity, to influence the order of search results as they appear on a person’s screen. It wasn’t so long ago that this was a groundbreaking approach to search filtering. Algorithmic sorting was, in the year 2000, “‘the new nuclear bomb’ of the search-engine world,” Danny Sullivan, the technologist and founder of the website Search Engine Land, told The New York Times that year. But Google had already been thinking this way since its inception. Google’s “I’m Feeling Lucky” button was introduced when the search giant was still in beta, in 1998, as a way of communicating that it knew, down to a single search result, how to deliver what people wanted to find. (The button was designed to take people directly to whichever website Google determined was most relevant to their search, instead of showing them a list of 10 possible options.)
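The weighted-factor approach described above can be illustrated with a short sketch. The factor names and weights here are invented for illustration; they are not Google’s actual signals or formula:

```python
# Toy illustration of weighted relevance ranking: each result is scored
# as a weighted sum of factors (text match, popularity, quality), and
# results are ordered by score. Factor names and weights are invented.

WEIGHTS = {"text_match": 0.5, "popularity": 0.3, "quality": 0.2}

def relevance(result):
    """Weighted sum of a result's factor scores (each in [0, 1])."""
    return sum(WEIGHTS[f] * result[f] for f in WEIGHTS)

def rank(results):
    """Order results from most to least relevant."""
    return sorted(results, key=relevance, reverse=True)

pages = [
    {"url": "a.example", "text_match": 0.9, "popularity": 0.2, "quality": 0.5},
    {"url": "b.example", "text_match": 0.6, "popularity": 0.9, "quality": 0.8},
]

ranked = rank(pages)
print([p["url"] for p in ranked])  # b.example outscores a.example (0.73 vs 0.61)
```

In these terms, the “I’m Feeling Lucky” button simply returns `ranked[0]`, the single result the scoring function trusts most.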

In its success, Google became the embodiment of a decades-long dream among information scientists to reorder the world’s data in ways that would make all of human knowledge more accessible. The search giant is still constantly tweaking its methods to meet the demands of a data-flooded digital world. Google now uses machine learning—as part of its RankBrain search system—in every single query it processes, a Google engineer told the tech site Backchannel earlier this year.

Using machines to find meaning in vast sets of data has been one of the great promises of the computing age since long before the internet was built. In his prescient essay, “As We May Think,” published by The Atlantic in 1945, the influential engineer and inventor Vannevar Bush imagined a future in which machines could handle tasks of logic by consulting large troves of connected data. His essay would prove instrumental in influencing early hypertext—which in turn helped shape the linked infrastructure of the web as we know it.

Bush envisioned sophisticated “selection devices” that would be able to comb through dense information and yield the relevant bits quickly and accurately. At the center of all this was what Bush called the Memex, his idea for a deep indexing system that could consolidate and search mammoth collections of information in various formats—including text, photocells, microfilm, and audio. The Memex, he argued, would be a technological solution to an almost existential problem: The totality of recorded human knowledge was constantly growing, but the tools for consulting this ever-swelling record remained “totally inadequate.” Instead, he looked to the intricate pathways of the human mind to inspire the architecture of a fantastical new system.

The Memex remains among Bush’s best-known contributions to modern computing, alongside the computers he himself built in the 1920s and 1930s. Those machines, called differential analyzers, involved wheel-and-disc mechanisms designed to solve equations—a new kind of computational complexity in the 20th century, but based on much older inventions. “This idea is far from original,” he wrote in 1931, “...utilizing complex mechanical interrelationships as substitutes for intricate processes of reasoning owes its inception to an inventor of calculus itself.” Bush was referring to Gottfried Wilhelm Leibniz, the 17th-century philosopher and mathematician.

What Bush did not realize was that the predecessor for his machine was far, far older than Leibniz. The oldest known analog computer is the device found at Antikythera.

* * *

The island of Antikythera often appears as just a fleck on the map, if it’s pictured at all, in the cool waters that separate Cape Malea and Crete between the Aegean Sea and the Mediterranean.

In 1953, the ocean explorer Jacques Cousteau and his crew, voyaging on the research vessel Calypso, found themselves in this region. Windy seas had forced them to take shelter at Kythera, an island about 22 miles northwest of Antikythera. It was there that a little boy named John told Cousteau and his colleagues about what was hidden in the choppy waters nearby. “John introduced us to two fishermen who claimed to have knowledge of a sunken city, which is something every diver dreams about,” the legendary diver Frédéric Dumas wrote in his 1972 book, 30 Centuries Under the Sea. “So we were quickly back in the sea again.”

The next morning, locals agreed to lead the divers to the wreck site, where Dumas was the first to go down. “The water was so transparent that I felt as if it might let me fall right down the cliff, which extended vertically to a group of fallen boulders a hundred sixty feet below,” he wrote in his book. “Although I saw no trace of the wreck, I was sure it was there.”

Dumas’s certainty came in part from his appreciation for the local network of knowledge he’d stumbled upon—the kind of information that would have been difficult if not impossible to get from any other source at the time. (Today, Google can take the casual web explorer to a virtual pushpin on a map, showing where the Antikythera shipwreck is located.) “The excavation in 1901 was still the most important event in the history of the island, and it was unlikely that the fishermen, who lived by tradition, could have forgotten the location, especially when they had the cliff to go by, and not just some remote landmarks or a certain distance out to sea.”

“For some inexplicable reason,” he added, “I felt that the terrain was not in its natural, unspoiled state.”

In subsequent dives, he and his colleagues found bits of pottery, amphoras, decanters, a fragment of an ancient anchor, and other scattered debris. At one point, they used a makeshift vacuum-like device, made from a sheet-metal pipe, to suck up artifacts from the wreck more efficiently—a destructive practice that makes today’s archaeologists cringe. Dumas remembered the wreck site as both lovely and unnerving. Even at dusk, when the waters seemed “black and uninviting,” soft light filtered down to the boulders below. “The rocks had taken on a disturbingly somber appearance and the sand had become more luminous,” he wrote.


“After the tomb of Tutankhamen was opened, some superstitious individuals remarked that all the scientists who had worked on the project died from unnatural causes,” Dumas wrote. “I wouldn’t go so far as to say the same about ancient wrecks, but it is true that such ships, with their air of mystery and promise of lost treasures, fascinate the average diver and cause him to lose the sangfroid that is so necessary in underwater operations.” Dumas remained convinced that vast treasures from the ship remained at the site—including, he thought, the other half of a strange mechanism, almost like an “astronomical clock” which he and Cousteau had gone to see in Athens. The rest of the device, he surmised, was still in the sand amid the rest of the 2,000-year-old wreckage.

After a few weeks in the region, the crew moved on to Sicilian waters, leaving the mystery of the mechanism behind. From there, it would be more than two decades before Dumas and Cousteau returned to Antikythera, this time to conduct a full excavation of the wreck. In 1976, using the most sophisticated diving technology available at the time, the team discovered hundreds of artifacts—a cache of pottery, bronze ship nails, ornate glassware, gold jewelry, ancient coins, gemstones, an oil lamp, a marble hand, even a human skull. They sifted the sand in search of gearwork, hoping to find more mechanisms or even pieces of the original. There was nothing.

* * *

If the Antikythera Mechanism has a twin somewhere in the world—a device that’s been discovered and forgotten, or perhaps never fully appreciated for what it is—how can researchers even begin to look for it?

“Before the Antikythera Mechanism, not one single gearwheel had ever been found from antiquity, nor indeed any example of an accurate pointer or scale,” Marchant wrote in her book. “Apart from the Antikythera Mechanism, they still haven’t.”

That might be about to change. The search engine as we know it now is undergoing a period of radical reinvention, in processing power and in structure, and is likely to be transformed even more dramatically in the years to come. “[Today’s] search engines were a fantastic instrument to get you to where the information is,” said Ruggero Gramatica, the founder and CEO of the search app Yewno, “but often it’s not about searching, but also discovering something that you don’t know you’re looking for.”

Yewno resembles a search engine—you use it to search for information, after all—but its structure is network-like rather than list-based the way Google’s is. The idea is to return search results that illustrate relationships between different relevant resources—mapping out connections between people, events, and concepts affiliated with the search. (You can choose how many related concepts you want to see when you search, anywhere from fewer than 20 to more than 100.)
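A network-shaped result set of this kind can be sketched as a short walk over a concept graph. The graph below is a tiny hand-made example, not Yewno’s actual data or API:

```python
# Minimal sketch of graph-shaped search results: instead of a ranked
# list, a query returns a concept's neighborhood in a concept graph.
# The graph and its entries are invented for illustration.
from collections import deque

CONCEPT_GRAPH = {
    "Antikythera Mechanism": ["astrolabe", "Rhodes", "clockwork"],
    "astrolabe": ["Antikythera Mechanism", "Islamic astronomy"],
    "clockwork": ["Antikythera Mechanism", "Leibniz"],
    "Rhodes": ["Antikythera Mechanism"],
    "Islamic astronomy": ["astrolabe"],
    "Leibniz": ["clockwork"],
}

def related(concept, limit=5):
    """Breadth-first walk outward from a concept, up to `limit` neighbors."""
    seen, queue, results = {concept}, deque([concept]), []
    while queue and len(results) < limit:
        for neighbor in CONCEPT_GRAPH.get(queue.popleft(), []):
            if neighbor not in seen and len(results) < limit:
                seen.add(neighbor)
                results.append(neighbor)
                queue.append(neighbor)
    return results

print(related("Antikythera Mechanism", limit=4))
```

Raising the limit walks further out along the graph, which is what lets such a system surface concepts the searcher never thought to ask about.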

Yewno, which was built primarily for academic researchers, is populated by tens of millions of books and journal articles from nearly two dozen well-known publishers, including Springer, Nature, MIT Press, and JSTOR. Gramatica says Yewno’s database will swell to 78 million papers and documents by the end of the year, and will keep growing from there.

“What algorithms can help us do is process the whole information and delve into the knowledge to create something that is very similar to an inference,” he told me. “So when you are looking for something … thinking laterally—not just sequentially, but in a cross-disciplinary way—so you can connect things that are apparently unrelated. That is basically where we see the whole area of information processing going from now on.”

If there is any hope of finding new information about the Antikythera Mechanism—or, for that matter, any additional devices like it—it is likely that machines, working alongside human researchers, will play a pivotal role.

Just as Vannevar Bush envisioned, engineers are building computer models of neural networks, machines that mimic the elegance and complexity of human thought. But there are still many challenges ahead. Sourcing is a big one. Even a database built from tens of millions of well-vetted books and articles isn’t comprehensive. And there’s still the question of how the results from these new search engines ought to appear to the person searching. A simple graph that shows a connect-the-dots web of related resources and ideas is one way. A more sophisticated map-like interface is another—“like Google Maps,” Gramatica offers—but you’d still lose scale and context as you zoom in and out.

“In terms of how to visualize it, that is one of the biggest challenges. We need to move away from the list-of-links approach, like the traditional search engine, because otherwise you’re back to the same situation where you need to click, and read, and click, and another window opens, and another window, and another window—and you don’t let your brain see the whole connection.”


“In 10 years, I think we’re going to be offering an instrument where the ability to unearth information and correlate information is done for you,” he adds. “And basically you will ask a machine to generate an inference.” In this way, a search for “Antikythera Mechanism” might not only lead you to surprisingly relevant, long-lost manuscripts—but actually pose a theory that explains how the device is connected to such documents.


People who are thinking deeply about the future of search tend to agree that this sort of machine inference will be possible, yet there’s still no straightforward path to such a system. For all the promise and sophistication of machine learning systems, inference computing is only in its infancy. Computers can carry out massive contextualization tasks like facial recognition, but there are still many limitations to even the most impressive systems. Nevertheless, once machines can help process and catalogue huge troves of text—a not-too-distant inevitability in machine learning, many computer scientists say—it seems likely that a flood of previously forgotten artifacts will emerge from the depths of various archives.

Consider a discovery that occurred in 2012, for example, when a crucial document from American history surfaced after having been lost for nearly 150 years. It was a medical report on President Abraham Lincoln’s condition, written by the first doctor to arrive at Ford’s Theatre after Lincoln was shot. The document had been sent to the surgeon general shortly after Lincoln’s death. It had the potential to change the way scholars understood one of the darkest moments in American history.

It wasn’t actually lost, though. “No, it was in a box of other incoming correspondence to the Surgeon General, filed alphabetically under ‘L’ for Leale, [the name of the doctor who wrote it],” Suzanne Fischer, a historian of technology and science, wrote for The Atlantic in 2012. “In short, this document that had been excavated from the depths of the earth with great physical effort was right where it was supposed to be.”

The trouble was with how the document had been catalogued. “This is because archivists catalogue not at ‘item level,’ a description of every piece of paper, which would take millennia, but at ‘collection level,’ a description of the shape of the collection, who owned it, and what kinds of things it contains. With the volume of materials, some collections may be undescribed or even described wrongly.”
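The gap between the two cataloguing levels can be sketched directly. The records below are invented for illustration—only the sender name Leale comes from the story above, and the second sender is entirely hypothetical:

```python
# Sketch of the cataloguing gap: a collection-level record describes
# only the box, so a search for "Leale" finds nothing; an item-level
# index names each document, so the same search succeeds.
# All records here are invented for illustration.

collection_level = {
    "title": "Incoming correspondence to the Surgeon General",
    "arrangement": "alphabetical by sender",
    "extent": "12 boxes",
}

item_level = [
    {"box": "L", "sender": "Charles Leale",
     "subject": "Report on President Lincoln's condition"},
    {"box": "L", "sender": "John Letterman",  # hypothetical second item
     "subject": "Supply requisition"},
]

def search(records, term):
    """Case-insensitive substring match over every field of every record."""
    term = term.lower()
    return [r for r in records if any(term in str(v).lower() for v in r.values())]

print(search([collection_level], "Leale"))  # [] -- the box-level record never mentions Leale
print(search(item_level, "Leale"))          # finds the report
```

Item-level description makes the document findable, but as the passage notes, producing it by hand for every piece of paper would take millennia, which is why machine cataloguing matters.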

But the bigger problem was this: “No one knew it existed, so how to locate it was beside the point,” Helena Iles Papaioannou, the researcher who found the document, wrote in a response to Fischer.

In the case of the Lincoln report, a human researcher happened upon the document. In the future, such serendipity may not be necessary. A machine that scrapes vast catalogues of text for context would be able to comb archived collections at the item level. (Of course, this would require digitization of the physical document, but that’s another issue.) “I don’t think machines are going to completely supplant us, but they’re certainly going to augment our ability to discover things,” said Sam Arbesman, a scientist who studies complexity and the future of knowledge. “There are going to be more and more of these human-machine partnerships, especially in the realm of innovation and discovery.”

The structural underpinnings for these sorts of partnerships are already being built at the institutional level. For several years, the Library of Congress has been working with several universities—including Stanford, Cornell, Harvard, Princeton, and Columbia—on a project it calls BIBFRAME, a next-generation cataloguing system that will ultimately replace the current electronic system that most libraries use. The outgoing system, built on MARC records—short for MAchine-Readable Cataloging record—was what replaced physical card catalogues in the 1970s. Today’s electronic records are designed such that you can trace any descriptive element from one record—an author’s name, for example—to other records stored in the same format. But BIBFRAME will go much deeper, producing links that reveal connections about any number of other elements related to a book or resource, including items from the web. The new system is built for the Internet Age, and meant to meet expectations about how people search for information online. “[The existing system] is self-contained and library-oriented, and we need to get something that is conversant with the larger information community,” said Beacher Wiggins, the library’s director for acquisitions and bibliographic access. With BIBFRAME, the idea is to use “the same language that the browser community and the internet community uses,” so that the library stays linked to outside resources even as browser technology changes.
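The contrast between a flat record and a linked one can be sketched in miniature. The identifiers and URIs below are illustrative stand-ins, not actual BIBFRAME vocabulary, though the numeric tags are loosely modeled on MARC’s field-tag style:

```python
# Sketch of flat vs linked cataloguing. A MARC-style record is a
# self-contained bag of tagged fields; a linked record expresses the
# same facts as subject-predicate-object triples, so an entity such as
# an author can be followed across records and out to the wider web.
# Identifiers and URIs are invented for illustration.

marc_like = {
    "100": "Price, Derek de Solla",   # author field, locked inside this record
    "245": "Gears from the Greeks",   # title field
}

triples = [
    ("work:gears-from-the-greeks", "title", "Gears from the Greeks"),
    ("work:gears-from-the-greeks", "author", "person:derek-de-solla-price"),
    ("person:derek-de-solla-price", "name", "Price, Derek de Solla"),
    ("person:derek-de-solla-price", "sameAs", "http://example.org/price"),
]

def follow(subject):
    """All facts attached to one entity -- the linking a flat record can't do."""
    return [(p, o) for s, p, o in triples if s == subject]

# From the work, hop to the author entity and gather everything known about it:
author = dict(follow("work:gears-from-the-greeks"))["author"]
print(follow(author))
```

In the flat record, the author is just a string; in the linked version, the author is an entity of its own, which is the “linking part of the data environment” Wiggins describes.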

It’s easy to see how such a system could accelerate major discoveries. In the 1950s, it took years for the Yale University historian Derek de Solla Price to work his way back through various manuscripts and scientific documents and eventually, having stumbled upon the Antikythera Mechanism, rewrite the history of modern clockwork. Price’s research spanned thousands of years of technological history. The Antikythera Mechanism, Price concluded, was not just a miracle of early gearwork but represented the very origin of modern machinery. While much of what he discovered came from his own direct observations of the mechanism and his ability to contextualize other findings related to it, consider how much more he might have learned if a computer had helped him comb through millions of documents in the first place. The right algorithm could find the thread between the wheel, the astrolabe, the sundial, and the Antikythera Mechanism—then produce a web of resources illustrating those connections.

It won’t be long before the public can begin ferreting out information this way themselves. In March, the Library of Congress completed its first pilot program for the new BIBFRAME system—which included transferring some 10,000 records to the new format. Now, it’s preparing for another test run, set to begin in early 2017. As part of the next pilot, the library is also developing specifications for other institutions who want to convert their data into the new format. The more data involved, the more powerful BIBFRAME becomes. The Library of Congress alone plans to convert around 20 million records to the new format within the next five years. “But keep in mind,” Wiggins says, “the library itself has 162 million items and all of those are not covered by MARC records even. Then you start thinking about the entire collection of MARC records in the world and you get into the hundreds of millions. How do you manage that? How do you have everyone who has a repository of MARC data come on aboard? I suppose in an ideal world, the goal is to convert all of them, but we know that won’t happen.”


“The value that I see going forward is the linking part of the data environment,” Wiggins added. “You start searching at one point, but you may be linked to things you didn’t know existed because of how another institution has listed it. This new system will show the relationship there. That’s going to be the piece that makes this transformative. It is the linking that is going to be transformative.”

The idea for linking information this way can be traced back more than 70 years, all the way to Bush’s Memex. But none of it would be possible without new technology. Machine learning and artificial intelligence will change the way people search, but the search environments themselves will evolve, too. Already, computer scientists are building search functionalities into virtual reality. In other words, the future of human knowledge—how we discover and contextualize what we know—depends almost entirely on tools and digital spaces that are rapidly changing and will continue to change.

* * *

The field of marine archaeology, still in its infancy, began at Antikythera. Though the sponge divers in 1901 were able to recover great treasures without modern SCUBA gear, they really only ever glimpsed the environment of the wreck. More than a century later, divers have exhaustively searched this undersea world, with robot crawlers, 3-D mapping, closed-circuit rebreathers, and an astronaut-like exosuit, among other technologies. All of the divers who have searched the site over the years have themselves become a crucial and “very symbolic” part of the wreck’s significance, says Theodoulou, of Greece’s Ministry of Culture.

Today, the story of the wreck and those who have sought to understand it is told in a scatter of objects lost and found on the sandy sea shelf below the cliffs of Antikythera—and in the knowledge of the local folks who have led explorers to the site. “And it’s all embedded in this framework of technology,” Theodoulou told me. “The technology used over time to approach the site and the technological knowledge that the cargo itself provides to us.” That includes the heavy bronze helmets divers used in 1901, the early SCUBA equipment Cousteau used in 1953, a new kind of dredging tool that slurped up artifacts in 1976, all the way up to the advanced mapping software and high-tech diving suits of the past decade.

“We’ve got this feeling that we’re walking in the footsteps of giants, and that’s really cool,” said Brendan Foley, a marine archaeologist from Woods Hole Oceanographic Institution who has dived the wreck site multiple times. On one dive at Antikythera, for instance, Foley and his colleagues recovered a remarkably well preserved dinner plate—not an ancient artifact, they later realized, but likely a remnant from the dive mission in 1901. “We feel a direct connection to those sponge divers, and some of the things we’ve found that are most evocative are not the ancient artifacts but have to do with the 1901 and 1953 expeditions.”

“I’m absolutely convinced that knowledge is a big chain starting from the long past, from the neolithic times, even earlier, and reaching our times,” Theodoulou told me. “Rings of this chain have been broken in some places, but the chain is the same. You just have to find the pieces and bind them together. And the mechanism is the absolute, tangible example. It is so sophisticated that it could not be just a chance example, a chance find.”

Searching for lost information about the device is, in its own way, as much of a challenge as searching the seabed for fragments of the mechanism itself. But while many researchers are holding out hope that another mechanism might be found in the ocean at Antikythera, it’s more likely that a similar device from the same era might be found elsewhere—or that other ancient artifacts or records might help fill in gaps of understanding about the existing device.

Researchers have long explored a possible link between the mechanism’s design and ancient Babylonian astronomical data. There are hints, too, in the writings of Cicero about the existence of a device that could reproduce the motions of the sun, moon, and planets. Later, around 400 C.E., the poet Claudian wrote of a “bold invention” of “human wit” that used a “toy moon” and other spheres to mimic nature. Researchers now believe that the use of gearwork to model celestial bodies was common among Islamic engineers in later centuries—and perhaps as part of a tradition inherited from the ancient Greeks. Several researchers believe that Archimedes’ treatise on sphere-making, a long-lost manuscript that’s referenced in existing works, could shed light on the origin of the Antikythera Mechanism. But it may never be found. The ancient documents that survive today aren’t always the best quality, in large part because the people who choose what to save over the course of many generations have different goals and value systems than the historians who come after them.

Surviving artifacts, especially anything made from bronze like the mechanism, are even harder to come by. Many such objects were melted down to make weapons and ammunition. We know from historic records that there were thousands—maybe even millions—of large bronze statues in ancient Greece. “Pliny wrote that there were 3,000 in the streets of Rhodes city alone, and this was in the first century A.D.,” Marchant wrote. Today, in the National Archaeological Museum in Athens, which boasts one of the best collections of statues from this era on the planet, there are only 10.

“All but one,” Marchant wrote, “are from shipwrecks.”

* * *

Time erases most everything and everyone, eventually. Any effort to understand the past is based entirely on incomplete records. And because it is impossible to standardize the language used to catalogue what’s left, or to fully index what is found, humans are unable to search through our own vast repositories of knowledge.

To discover hidden gems in existing stores of human knowledge, Swanson wrote in his 1986 essay, we would need a massive thesaurus—one that describes “all relationships that people know about and then determine, for each search, which among those relationships” are actually relevant. “To build such a universal thesaurus entails no less than modeling all of human knowledge,” he wrote. It would be an impossible task—not least of all because, “to use such a thesaurus, one would have to retrieve relevant information from it, so a second universal thesaurus would be needed as a retrieval aid to the first, and so on ad infinitum. The builder of a thesaurus is, in principle, lost in an infinite regress.”

There’s some hope yet. Artificially intelligent systems are already creating and distilling robust models of human knowledge, but they’ll still be constrained by the datasets that feed into them. So there will be some degree of luck involved if, for instance, a machine happens upon an ancient document that reveals the whereabouts of more machines like the Antikythera Mechanism, or determines who built the one found on the Mediterranean seafloor so many decades ago. At the same time, the evolution of information systems makes remarkable discoveries seem more possible now than ever before. “All I can say is there are an awful lot of manuscripts that have never been read, let alone translated,” Marchant told me. “I think it really is a reminder of how much we don’t know.”


“Think how many other types of technology there must have been that we don’t know about,” she added. “What I find fascinating is this: We see this ancient technology and initially it seems it was lost, and we’re like, ‘Where did it go?’ But then you look and you see the threads, connecting it through history—of a sundial or the 13th-century astrolabe. So it survived and played a key role in stimulating the tech we take for granted. The way different cultures use things in different ways, technology can become almost unrecognizable, but the kernel of that technology lives on.”

The richest source for new information about the mechanism may, for example, be waiting for researchers in old Islamic manuscripts—thousands of documents that have never been catalogued or translated by anyone with the technical expertise to appreciate what they might contain.

Or, perhaps the mystery of the mechanism will never be solved.

No amount of technology or depth of curiosity can bring back what’s forever lost. This is why searching is, and will always be, a “necessarily uncertain” endeavor, as Swanson put it. Searching for lost knowledge is its own kind of science, but ultimately an incomplete one. “In that sense,” Swanson wrote, “there are no limits to either science or information retrieval. But then, too, there are no final answers.”

And yet people keep searching, sifting through the sands of time for traces to the past. They continue looking, in dank archives and distant oceans, against all odds of discovery. We search because we must, because in every direction, stretching back to the beginning of human history, is the irresistible possibility that we might yet find a strange new sliver of who we were, and better understand what we have become.

Source: http://www.theatlantic.com/


Not only can your smartphone be hacked, it can be done very easily without your knowledge.

"At the end of the day, everything is hackable. What I am surprised about is that people sometimes forget that it's so easy to hack into these devices," said Adi Sharabani, the co-founder of mobile security company Skycure, who used to work for Israeli Intelligence. 

Even if a malicious attacker cannot get into your phone, they can try to get the sensitive data stored inside, including contacts, places visited and e-mails.

"It's important to realize that the services your smartphone relies on are a much more attractive target to attackers. So for example, the photo leak that happened from iCloud where a bunch of celebrities had their photos posted all over the Internet is the perfect example," said Alex McGeorge, the head of threat intelligence at cybersecurity company Immunity, Inc.

Often, the hack or data breach occurs without the consumer's knowledge, according to Sharabani.


And it's not just consumers that criminals target. With the rise of smartphones and tablets in the workplace, hackers attempt to attack enterprises through vulnerabilities in mobile devices.

Both Sharabani and McGeorge perform attack simulations for clients and find that these hacking demonstrations usually go undetected.

"It's usually very rare that a breach that originated through a mobile device or is just contained to a mobile device is likely to be detected by a corporation's incident response team," McGeorge said.

And Sharabani agrees. He says he's still waiting for someone to call him and say that their IT department identified the attack demonstration.

"No one knows," he said. "And the fact that organizations do not know how many of their mobile devices encountered an attack in the last month is a problem."

But there is a silver lining, according to the wireless industry.

"The U.S. has one of the lowest malware infection rates in the world thanks to the entire wireless ecosystem working together and individually to vigilantly protect consumers," said John Marinho, vice president of technology & cybersecurity at CTIA, the wireless association. CTIA is an industry group which represents both phone carriers and manufacturers.

Here are the three ways a smartphone is most likely to be breached.


Unsecured Wi-Fi

Wi-Fi in public places, such as cafes and airports, can be unsecured, letting malicious actors view everything you do while connected.

"Someone is trying to gain access to your email, to your password. They are trying to gain access to all of your contacts, who you meet with, where and when. Do you approve? So me, as a security expert, I always click cancel," Sharabani said.

To know if you're on an unsecured connection, pay attention to the warning messages your device gives you. On iPhones, a warning will come up saying that the server identity cannot be verified and asking if you still want to connect. You will be prompted to click "continue" before you can join the Wi-Fi.

Despite the warning, "92 percent of people click continue on this screen," according to Sharabani.

"Your phone actually has a lot of really good built-in technology to warn you when you are going to make a poor security decision. And what we found through our general penetration testing practice, and talking to some of our customers, is that people are very conditioned to just click through whatever warnings come up because they want the content," said McGeorge.

To protect yourself, be careful when connecting to free Wi-Fi and avoid sharing sensitive information.
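For readers curious what the "server identity cannot be verified" warning actually checks, here is a minimal Python sketch (not from the article, and simplified relative to what a phone does) of the certificate validation a device performs before trusting a connection; `check_server_identity` is a hypothetical helper name.

```python
import socket
import ssl


def check_server_identity(hostname: str, port: int = 443) -> bool:
    """Return True if the server presents a certificate that the default
    trust store accepts for this hostname; False otherwise.

    This is roughly the check a device performs before showing (or not
    showing) a "server identity cannot be verified" warning.
    """
    # The default context requires a valid certificate chain and a
    # hostname that matches the certificate.
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True  # handshake succeeded, identity verified
    except (ssl.SSLError, OSError):
        # Certificate rejected, hostname mismatch, or connection failure.
        return False
```

A rogue hotspot that intercepts traffic cannot present a valid certificate for the site you asked for, so this check fails; clicking "continue" past the warning is the equivalent of skipping it.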

Operating system flaws

Despite the best intentions of smartphone manufacturers, vulnerabilities are found which could let attackers in.


"We see that, on average, more than one vulnerability is publicly disclosed every day, and 10 percent of those are critical vulnerabilities: vulnerabilities that allow someone to remotely gain access to your device and control it," Sharabani said.

Device manufacturers release operating system updates frequently to protect users. 

"All of those updates have really important security fixes in them, and people worry, 'Maybe this is going to impact how I use my phone, or maybe my phone isn't compatible.' They need to apply those updates as soon as they come out," said McGeorge.

Experts advise you to install operating system updates as soon as they are available. Once updates are released, hackers know about the vulnerabilities and attempt to breach out-of-date devices.

Malicious apps

Applications add functionality to a smartphone but also increase the risk of a data breach, especially if they are downloaded from websites or messages instead of an app store. Hidden inside applications, even ones that work, can be malicious code that lets hackers steal data.

"The app ecosystem of mobile phones is enormous. Neither Apple nor Google can possibly look through every single app on their store and determine if it's malicious or not," said McGeorge.

To protect yourself, McGeorge advises you limit the number of apps you install.

"Having more apps increases what we call the attack surface on your phone. What that means is there are more lines of code, and therefore a higher chance that there is going to be a security-critical bug in that amount of code," he said.

McGeorge also suggests you think about who the app developer is and if you really need the app.

Skycure's Sharabani suggests you look at the warning messages when installing applications.

"Read those messages that we are prompted with, which sometimes say, 'This app will have access to your email. Would you agree?'" he said.

Bottom line, according to Sharabani, there is no such thing as being 100 percent secure. But there are many ways to reduce the risk and make it harder for hackers to invade your smartphone.

In a statement sent by e-mail, an Apple spokesman said, "We've built safeguards into iOS to help warn users of potentially harmful content… We also encourage our customers to download from only a trusted source like the App Store and to pay attention to the warnings that we've put in place before they choose to download and install untrusted content."

And Google, which oversees Android, said it has also added additional privacy and security controls.


"Last year, we launched a privacy / security controls 'hub' called My Account. Since then, 1 billion people have used this and just last week we added a new feature called Find your phone. It's a series of controls that enable you to secure your phone (Android or iPhone) and your Google account if your device is misplaced, lost, stolen, etc," a spokesman said in an e-mail.

Source : http://www.cnbc.com/

Author : Jennifer Schlesinger

Categorized in Science & Tech


Is the Internet making us stupid? I’ve gone from being tired of this question to being more and more confused by it.

A recent poll says that about two-thirds of Americans think that the Net is indeed making us stupid. But I wonder what the percentage would be if the question were “Is the Internet making you stupid?”

We all can point to ways in which it is indeed making us stupider in some sense. I know that I check in with some sites too frequently, looking for distraction from work. Some of the sites are nothing to be ashamed of, such as Slate, Google News, Twitter, BoingBoing, some friends’ blogs, DailyKos. (Yeah, I’m a Democrat. Surprise!) Some are less dignified: Reddit, HuffingtonPost, BuzzFeed sometimes. And then there are the sites I don’t even want to mention in public because they reflect poorly on me. OK, fine: Yes, I’ve been to Gawker more than once.

Ready to be delved

We also all—or maybe just most of us—spend time bouncing around the Web as if it were a global pinball table. One link leads to another and then to another. Sometimes the topics are worth knowing about, and sometimes they’re just mental itches that spawn more itches every time we scratch. Often I can’t remember how I got there and sometimes I don’t even remember where I started and why. I suspect I’m not alone in that.

So, in those ways the Internet is making me stupider by wasting my time. Except that often those meandering excursions widen my world. So, maybe it’s not making me quite as stupid as it could.

But, when I look at what I do with the Internet, the idea that it’s overall making me stupider is ridiculous. Not only can I get answers to questions instantly, I can do actual research. Whom do the French credit with inventing the airplane and how do they view the Wright brothers? What was the mechanism that governed the speed at which a dial on an old phone returns to its initial state, and why was it necessary? Why did the Greeks think that history overall declines? Whatever you want to delve into, the Internet is ready to be delved.


Every level of explorer

Before the Internet, this was hard to do. With the Internet it’s so easy that we now complain about being distracted. But it’s by no means always pointless distraction. Getting easier answers encourages more questions. Those questions lead to new areas to explore where almost always you’ll find information written for every level of explorer. The threshold for discovery has been reduced to the movement of your clicking finger measured in millimeters. Your curiosity has been unleashed.

If you disagree, if you think the Internet is making you stupider, then stop using it. But of course you won’t. You with the Internet is much smarter than you without the Internet. Isn’t that true for just about all of us?

So, if it’s true that most of us act as if the Net is making us smarter, why do two-thirds of Americans think it’s making us dumber? The answer, I believe, comes from recognizing that people are really saying that the Internet is making them stupid. You know,them.

Them and us

Who is this them? At our worst, we define them by race, gender or other irrelevancies. But putting such prejudices aside, the them are people we feel can't navigate the Internet without getting lost or fooled. For example, they are children who think it's fine to copy and paste from the Net into their homework. So, yes, parents and teachers need not only to teach students to think critically but to enjoy doing so.

Still, the question that gets asked isn’t “Is the Net making students stupid?” The them is broader than that. I suspect that when we think of a stupid them, we’re imagining someone with whom we disagree deeply. The them denies science, distrusts intellectual inquiry and votes for people we consider to be crazy, stupid or both.

But the mystery still remains, because those people—the them—also believe that the Net is making them smarter. After all, that’s how they found all that important (but wrong) information about, say, climate change or vaccinations. So then just about everyone should be thinking that the Net makes them smarter. If we all think the Net’s making us smarter, why are we so ready to disparage human knowledge’s greatest gift to itself?


Might it be that our sense that the Net is making other people dumber masks the recognition that all belief is based on networks of believers, authorities and works? Even ours. The Net has made visible a weakness of all human knowledge: It lives within systems of coherence constantly buttressed by other mere mortals. Perhaps that exposure brings us to condemn the Net’s effect on everyone else but us.

Source : http://www.kmworld.com/


Categorized in Science & Tech
