[Source: This article was published in theverge.com By Adi Robertson - Uploaded by the Association Member: Jay Harris]

Last weekend, in the hours after a deadly Texas church shooting, Google search promoted false reports about the suspect, suggesting that he was a radical communist affiliated with the antifa movement. The claims popped up in Google’s “Popular on Twitter” module, which made them prominently visible — although not the top results — in a search for the alleged killer’s name. Of course, this was just the latest instance of a long-standing problem, one in a string of similar missteps. As usual, Google promised to improve its search results, while the offending tweets disappeared. But telling Google to retrain its algorithms, as appropriate as that demand is, doesn’t solve the bigger issue: the search engine’s monopoly on truth.

Surveys suggest that, at least in theory, very few people unconditionally believe news from social media. But faith in search engines — a field long dominated by Google — appears consistently high. A 2017 Edelman survey found that 64 percent of respondents trusted search engines for news and information, a slight increase from the 61 percent who did in 2012, and notably more than the 57 percent who trusted traditional media. (Another 2012 survey, from Pew Research Center, found that 66 percent of people believed search engines were “fair and unbiased,” almost the same proportion that did in 2005.) Researcher danah boyd has suggested that media literacy training conflated doing independent research with simply using search engines. Instead of learning to evaluate sources, “[students] heard that Google was trustworthy and Wikipedia was not.”

GOOGLE SEARCH IS A TOOL, NOT AN EXPERT

Google encourages this perception, as do competitors like Amazon and Apple — especially as their products depend more and more on virtual assistants. Though Google’s text-based search page is a flawed system, at least it presents Google search as what it is: a directory for the larger internet — and, at a more basic level, a useful tool for humans to master.

Google Assistant turns search into a trusted companion dispensing expert advice. The service has emphasized the idea that people shouldn’t have to learn special commands to “talk” to a computer, and demos of products like Google Home show off Assistant’s prowess at analyzing the context of simple spoken questions, then guessing exactly what users want. When bad information inevitably slips through, hearing it authoritatively spoken aloud is even more jarring than seeing it on a page.

Even if search results are overwhelmingly accurate, highlighting just a few bad ones around topics like mass shootings is a major problem — especially if people are primed to believe that anything Google says is true. And for every advance Google makes to improve its results, there’s a host of people waiting to game the new system, forcing it to adapt again.

NOT ALL FEATURES ARE WORTH SAVING

Simply shaming Google over bad search results might actually play into its mythos, even if the goal is to hold the company accountable. It reinforces a framing where Google search’s ideal final state is a godlike, omniscient benefactor, not just a well-designed product. Yes, Google search should get better at avoiding obvious fakery, and at not creating a faux-neutral system that presents conspiracy theories next to hard reporting. But we should be wary of overemphasizing its ability, or that of any other technological system, to act as an arbiter of what’s real.

Alongside pushing Google to stop “fake news,” we should be looking for ways to limit trust in, and reliance on, search algorithms themselves. That might mean seeking handpicked video playlists instead of searching YouTube Kids, which recently drew criticism for surfacing inappropriate videos. It could mean focusing on reestablishing trust in human-led news curation, even though that, too, has produced its own share of dangerous misinformation. It could mean pushing Google to kill, not improve, features that fail in predictable and damaging ways. At the very least, I’ve proposed that Google rename or abolish the Top Stories carousel, which offers legitimacy to certain pages without vetting their accuracy. Reducing the prominence of “Popular on Twitter” might make sense, too, unless Google clearly commits to strong human-led quality control.

The past year has made web platforms’ tremendous influence clearer than ever. Congress recently grilled Google, Facebook, and other tech companies over their role in spreading Russian propaganda during the presidential election. A report from The Verge revealed that unscrupulous rehab centers used Google to target people seeking addiction treatment. Simple design decisions can strip out the warning signs of a spammy news source. We have to hold these systems to a high standard. But when something like search screws up, we can’t just tell Google to offer the right answers. We have to operate on the assumption that it won’t ever have them.

Categorized in Search Engine

[This article is originally published in blogs.scientificamerican.com written by Daniel M. Russell and Mario Callegaro - Uploaded by AIRS Member: Rene Meyer] 

Researchers who study how we use search engines share common mistakes, misperceptions, and advice

In a cheery, sunshine-filled fourth-grade classroom in California, the teacher explained the assignment: write a short report about the history of the Belgian Congo at the end of the 19th century, when Belgium colonized this region of Africa. One of us (Russell) was there to help the students with their online research methods.

I watched in dismay as a young student slowly typed her query into a smartphone. This was not going to end well. She was trying to find out which city was the capital of the Belgian Congo during this time period. She reasonably searched [ capital Belgian Congo ] and in less than a second, she discovered that the capital of the Democratic Republic of Congo is Kinshasa, a port town on the Congo River. She happily copied the answer into her worksheet.

But the student did not realize that the Democratic Republic of Congo is a completely different country than the Belgian Congo, which used to occupy the same area. The capital of that former country was Boma until 1926 when it was moved to Léopoldville (which was later renamed Kinshasa). Knowing which city was the capital during which time period is complicated in the Congo, so I was not terribly surprised by the girl’s mistake.

The deep problem here is that she blindly accepted the answer offered by the search engine as correct. She did not realize that there is a deeper history here.

We Google researchers know this is what many students do—they enter the first query that pops into their heads and run with the answer. Double-checking and going deeper are skills that come only with a great deal of practice—and perhaps a bunch of answers marked wrong on important exams. Students often do not have a great deal of background knowledge to flag a result as potentially incorrect, so they are especially susceptible to misguided search results like this.

In fact, a 2016 report by Stanford University education researchers showed that most students are woefully unprepared to assess content they find on the web. For instance, the scientists found that 80 percent of students at U.S. universities are not able to determine if a given web site contains credible information. And it is not just students; many adults share these difficulties.

If she had clicked through to the linked page, the girl probably would have started reading about the history of the Belgian Congo, and found out that it has had a few hundred years of wars, corruption, changes in rulers and shifts in governance. The name of the country changed at least six times in a century, but she never realized that because she only read the answer presented on the search engine results page.

Asking a question of a search engine is something people do several billion times each day. It is the way we find the phone number of the local pharmacy, check on sports scores, read the latest scholarly papers, look for news articles, find pieces of code, and shop. And although searchers look for true answers to their questions, the search engine returns results that are attuned to the query, rather than some external sense of what is true or not. So a search for proof of wrongdoing by a political candidate can return sites that purport to have this information, whether or not the sites or the information are credible. You really do get what you search for.

In many ways, search engines make our metacognitive skills come to the foreground. It is easy to do a search that plays into your confirmation bias—your tendency to think new information supports views you already hold. So good searchers actively seek out information that may conflict with their preconceived notions. They look for secondary sources of support, doing a second or third query to gain other perspectives on their topic. They are constantly aware of what their cognitive biases are, and greet whatever responses they receive from a search engine with healthy skepticism.

For the vast majority of us, most searches are successful. Search engines are powerful tools that can be incredibly helpful, but they also require a bit of understanding to find the information you are actually seeking. Small changes in how you search can go a long way toward finding better answers.

The Limits of Search

It is not surprising or uncommon that a short query may not accurately reflect what a searcher really wants to know. What is actually remarkable is how often a simple, brief query like [ nets ] or [ giants ] will give the right results. After all, both of those words have multiple meanings, and a search engine might conclude that searchers were looking for information on tools to catch butterflies, in the first case, or larger-than-life people in the second. Yet most users who type those words are seeking basketball- and football-related sites, and the first search results for those terms provide just that. Even the difference between a query like [ the who ] versus [ a who ] is striking. The first set of results is about a classic English rock band, whereas the second query returns references to a popular Dr. Seuss book.

But search engines sometimes seem to give the illusion that you can ask anything about anything and get the right answer. Just like the student in that example, however, most searchers overestimate the accuracy of search engines and their own searching skills. In fact, when Americans were asked to self-rate their searching ability by the Pew Research Center in 2012, 56 percent rated themselves as very confident in their ability to use a search engine to answer a question.

Not surprisingly, the highest confidence scores were for searchers with college degrees (64 percent were “very confident”—by contrast, 45 percent of those without a college degree described themselves that way). Age affects this judgment as well, with 64 percent of those under 50 describing themselves as “very confident,” as opposed to only 40 percent older than 50. When talking about how successful they are in their searches, 29 percent reported that they can always find what they are looking for, and 62 percent said they are able to find an answer to their questions most of the time. In surveys, most people tell us that everything they want is online, and conversely, if they cannot find something via a quick search, then it must not exist, it might be out of date, or it might not be of much value.

These are the most recent published results, but we have seen in surveys done at Google in 2018 that these insights from Pew still hold. What was true in 2012 is still largely true now: people have great confidence in their ability to search. The only significant change is in their success rates, which have crept up: 35 percent now say they can "always find" what they're looking for, while 73 percent say they can find what they seek "most of the time." This increase is largely due to improvements in the search engines, which improve their data coverage and algorithms every year.

What Good Searchers Do

As long as information needs are easy, simple searches work reasonably well. Most people actually do less than one search per day, and most of those searches are short and commonplace. The average query length on Google during 2016 was 2.3 words. Queries are often brief descriptions like: [ quiche recipe ] or [ calories in chocolate ] or [ parking Tulsa ].

And somewhat surprisingly, most searches have been done before. In an average day, less than 12 percent of all searches are completely novel—that is, most queries have already been entered by another searcher in the past day. By design, search engines have learned to associate short queries with the targets of those searches by tracking pages that are visited as a result of the query, making the results returned both faster and more accurate than they otherwise would have been.
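That query-to-page association can be pictured with a toy sketch like the one below, which simply tallies which pages users click after issuing a query and orders future results by those counts. It is purely illustrative (the click log, URLs, and function names are invented), and real ranking systems use far more signals than clicks.

    from collections import defaultdict

    # Toy click log of (query, clicked_url) pairs; the data is invented.
    click_log = [
        ("quiche recipe", "https://example.com/classic-quiche"),
        ("quiche recipe", "https://example.com/classic-quiche"),
        ("quiche recipe", "https://example.org/crustless-quiche"),
        ("parking tulsa", "https://example.net/tulsa-garages"),
    ]

    # Count clicks per (query, url): more clicks means a stronger association.
    clicks = defaultdict(lambda: defaultdict(int))
    for query, url in click_log:
        clicks[query][url] += 1

    def rank_by_past_clicks(query):
        """Return candidate URLs for a query, ordered by historical clicks."""
        return sorted(clicks[query], key=clicks[query].get, reverse=True)

    print(rank_by_past_clicks("quiche recipe"))
    # ['https://example.com/classic-quiche', 'https://example.org/crustless-quiche']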

A large fraction of queries are searches for another website (called navigational queries, which make up as much as 25 percent of all queries), or for a short factual piece of information (called informational queries, which are around 40 percent of all queries). However, complex search tasks often need more than a single query to find a satisfactory answer. So how can you do better searches? 

First, you can modify your query by changing a term in your search phrase, generally to make it more precise or by adding additional terms to reduce the number of off-topic results. Very experienced searchers often open multiple browser tabs or windows to pursue different avenues of research, usually investigating slightly different variations of the original query in parallel.

You can see good searchers rapidly trying different search queries in a row, rather than just being satisfied with what they get with the first search. This is especially true for searches that involve very ambiguous terms—a query like [animal food] has many possible interpretations. Good searchers modify the query to get to what they need quickly, such as [pet food] or [animal nutrition], depending on the underlying goal.

Choosing the best way to phrase your query means adding terms that:

  • are central to the topic (avoid peripheral terms that are off-topic)
  • you know the definition of (do not guess at a term if you are not certain)
  • leave common terms together in order ( [ chow pet ] is very different than [ pet chow ])
  • keep the query fairly short (you usually do not need more than two to five terms)

You can make your query more precise by limiting the scope of a search with special operators. The most powerful operators include double-quote marks (as in the query [ “exponential growth occurs when” ]), which find only documents containing that phrase in that specific order. Two other commonly used search operators are site: and filetype:. These let you search within only one website (such as [ site:ScientificAmerican.com ]) or for a particular file type, such as a PDF file (example: [ filetype:pdf coral bleaching ]).
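Because operator-laden queries are just strings, they can also be composed programmatically. The sketch below is a rough illustration only: the helper functions are made up for this example, and it uses nothing beyond the operators named above (quoted phrases, site: and filetype:) plus the common q= parameter of a Google search URL.

    from urllib.parse import quote_plus

    def phrase(text):
        # Exact-phrase search: wrap the words in double quotes.
        return f'"{text}"'

    def restrict(query, site=None, filetype=None):
        # Append site: and/or filetype: operators to a base query.
        parts = [query]
        if site:
            parts.append(f"site:{site}")
        if filetype:
            parts.append(f"filetype:{filetype}")
        return " ".join(parts)

    q1 = phrase("exponential growth occurs when")
    q2 = restrict("coral bleaching", filetype="pdf")
    q3 = restrict("ocean acidification", site="ScientificAmerican.com")

    # A finished query can be percent-encoded and dropped into a search URL.
    url = "https://www.google.com/search?q=" + quote_plus(q2)
    print(q1, q2, q3, url, sep="\n")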

Second, try to understand the range of possible search options. Recently, search engines added the capability of searching for images that are similar to a photo you upload. A searcher who knows this can find photos online that have features resembling those in the original. By clicking through the similar images, a searcher can often find information about the object (or place) in the image. Searching for matches of my favorite fish photo can tell me not just what kind of fish it is, but also provide links to fishing locations and ichthyological descriptions of this fish species.

Overall, expert searchers use all of the resources of the search engine and their browsers to search both deeply (by making query variations) and broadly (by having multiple tabs or windows open). Effective searchers also know how to limit a search to a particular website or to a particular kind of document, find a phrase (by using quote marks to delimit the phrase), and find text on a page (by using a text-find tool).

Third, learn some cool tricks. One is the find-text-on-page skill (that is, Command-F on Mac, Control-F on PC), which is unfamiliar to around 90 percent of the English-speaking, Internet-using population in the US. In our surveys of thousands of web users, the large majority have to do a slow (and errorful) visual scan for a string of text on a web site. Knowing how to use text-finding commands speeds up your overall search time by about 12 percent (and is a skill that transfers to almost every other computer application).

Fourth, use your critical-thinking skills.  In one case study, we found that searchers looking for the number of teachers in New York state would often do a query for [number of teachers New York ], and then take the first result as their answer—never realizing that they were reading about the teacher population of New York City, not New York State. In another study, we asked searchers to find the maximum weight a particular model of baby stroller could hold. How big could that baby be?

The answers we got back varied from two pounds to 250 pounds. At both ends of the spectrum, the answers make no sense (few babies in strollers weigh less than five pounds or more than 60 pounds), but inexperienced searchers just assumed that whatever numbers they found correctly answered their search questions. They did not read the context of the results with much care.  

Search engines are amazingly powerful tools that have transformed the way we think of research, but they can hurt more than help when we lack the skills to use them appropriately and evaluate what they tell us. Skilled searchers know that the ranking of results from a search engine is not a statement about objective truth, but about the best matching of the search query, term frequency, and the connectedness of web pages. Whether or not those results answer the searchers’ questions is still up for them to determine.

Categorized in Search Engine

Source: This article was published hindustantimes.com By Karen Weise and Sarah Frier - Contributed by Member: David J. Redcliff

For scholars, the scale of Facebook’s 2.2 billion users provides an irresistible way to investigate how human nature may play out on, and be shaped by, the social network.

The professor was incredulous. David Craig had been studying the rise of entertainment on social media for several years when a Facebook Inc. employee he didn’t know emailed him last December, asking about his research. “I thought I was being pumped,” Craig said. The company flew him to Menlo Park and offered him $25,000 to fund his ongoing projects, with no obligation to do anything in return. This was definitely not normal, but after checking with his school, University of Southern California, Craig took the gift. “Hell, yes, it was generous to get an out-of-the-blue offer to support our work, with no strings,” he said. “It’s not all so black and white that they are villains.”

Other academics got these gifts, too. One, who said she had $25,000 deposited in her research account recently without signing a single document, spoke to a reporter hoping maybe the journalist could help explain it. Another professor said one of his former students got an unsolicited monetary offer from Facebook, and he had to assure the recipient it wasn’t a scam. The professor surmised that Facebook uses the gifts as a low-cost way to build connections that could lead to closer collaboration later. He also thinks Facebook “happily lives in the ambiguity” of the unusual arrangement. If researchers truly understood that the funding has no strings, “people would feel less obligated to interact with them,” he said.

The free gifts are just one of the little-known and complicated ways Facebook works with academic researchers. For scholars, the scale of Facebook’s 2.2 billion users provides an irresistible way to investigate how human nature may play out on, and be shaped by, the social network. For Facebook, the motivations to work with outside academics are far thornier, and it’s Facebook that decides who gets access to its data to examine its impact on society. “Just from a business standpoint, people won’t want to be on Facebook if Facebook is not positive for them in their lives,” said Rob Sherman, Facebook’s deputy chief privacy officer. “We also have a broader responsibility to make sure that we’re having the right impact on society.”

The company’s long been conflicted about how to work with social scientists, and now runs several programs, each reflecting the contorted relationship Facebook has with external scrutiny. The collaborations have become even more complicated in the aftermath of the Cambridge Analytica scandal, which was set off by revelations that a professor who once collaborated with Facebook’s in-house researchers used data collected separately to influence elections.

“Historically the focus of our research has been on product development, on doing things that help us understand how people are using Facebook and build improvements to Facebook,” Sherman said. Facebook’s heard more from academics and non-profits recently who say “because of the expertise that we have, and the data that Facebook stores, we have an opportunity to contribute to generalizable knowledge and to answer some of these broader social questions,” he said. “So you’ve seen us begin to invest more heavily in social science research and in answering some of these questions.”

Facebook has a corporate culture that reveres research. The company builds its product based on internal data on user behaviour, surveys and focus groups. More than a hundred Ph.D.-level researchers work on Facebook’s in-house core data science team, and employees say the information that points to growth has had more of an impact on the company’s direction than Chief Executive Officer Mark Zuckerberg’s ideas.

Facebook is far more hesitant to work with outsiders; it risks unflattering findings, leaks of proprietary information, and privacy breaches. But Facebook likes it when external research proves that Facebook is great. And in the fierce talent wars of Silicon Valley, working with professors can make it easier to recruit their students.

It can also improve the bottom line. In 2016, when Facebook changed the “like” button into a set of emojis that better captured user expression—and feelings for advertisers—it did so with the help of Dacher Keltner, a psychology professor at the University of California, Berkeley, who’s an expert in compassion and emotions. Keltner’s Greater Good Science Center continues to work closely with the company. And this January, Facebook made research the centerpiece of a major change to its news feed algorithm. In studies published with academics at several universities, Facebook found that people who used social media actively—commenting on friends’ posts, setting up events—were likely to see a positive impact on mental health, while those who used it passively might feel depressed. In reaction, Facebook declared it would spend more time encouraging “meaningful interaction.” Of course, the more people engage with Facebook, the more data it collects for advertisers.

The company has stopped short of pursuing deeper research on the potentially negative fallout of its power. According to its public database of published research, Facebook’s written more than 180 public papers about artificial intelligence but just one study about elections, based on an experiment Facebook ran on 61 million users to mobilize voters in the Congressional midterms back in 2010. Facebook’s Sherman said, “We’ve certainly been doing a lot of work over the past couple of months, particularly to expand the areas where we’re looking.”

Facebook’s first peer-reviewed papers with outside scholars were published in 2009, and almost a decade into producing academic work, it still wavers over how to structure the arrangements. It’s given out the smaller unrestricted gifts. But those gifts don’t come with access to Facebook’s data, at least initially. The company is more restrictive about who can mine or survey its users. It looks for research projects that dovetail with its business goals.

Some academics cycle through one-year fellowships while pursuing doctorate degrees, and others get paid for consulting projects, which never get published.

When Facebook does provide data to researchers, it retains the right to veto or edit the paper before publication. None of the professors Bloomberg spoke with knew of cases when Facebook prohibited a publication, though many said the arrangement inevitably leads academics to propose investigations less likely to be challenged. “Researchers focus on things that don’t create a moral hazard,” said Dean Eckles, a former Facebook data scientist now at the MIT Sloan School of Management. Without a guaranteed right to publish, Eckles said, researchers inevitably shy away from potentially critical work. That means some of the most burning societal questions may go unprobed.

Facebook also almost always pairs outsiders with in-house researchers. This ensures scholars have a partner who’s intimately familiar with Facebook’s vast data, but some who’ve worked with Facebook say this also creates a selection bias about what gets studied. “Stuff still comes out, but only the immensely positive, happy stories—the goody-goody research that they could show off,” said one social scientist who worked as a researcher at Facebook. For example, he pointed out that the company’s published widely on issues related to well-being, or what makes people feel good and fulfilled, which is positive for Facebook’s public image and product. “The question is: ‘What’s not coming out?,’” he said.

Facebook argues its body of work on well-being does have broad importance. “Because we are a social product that has large distribution within society, it is both about societal issues as well as the product,” said David Ginsberg, Facebook’s director of research. Other social networks have smaller research ambitions, but have tried more open approaches. This spring, Twitter Inc. asked for proposals to measure the health of conversations on its platform, and Microsoft Corp.’s LinkedIn is running a multi-year programme to have researchers use its data to understand how to improve the economic opportunities of workers. Facebook has issued public calls for technical research, but until the past few months, hasn’t done so for social sciences. Yet it has solicited in that area, albeit quietly: Last summer, one scholarly association begged discretion when sharing information on a Facebook pilot project to study tech’s impact in developing economies. Its email read, “Facebook is not widely publicizing the program.”

In 2014, the prestigious Proceedings of the National Academy of Sciences published a massive study, co-authored by two Facebook researchers and an outside academic, that found emotions were “contagious” online, that people who saw sad posts were more likely to make sad posts. The catch: the results came from an experiment run on 689,003 Facebook users, where researchers secretly tweaked the algorithm of Facebook’s news feed to show some cheerier content than others. People were angry, protesting that they didn’t give Facebook permission to manipulate their emotions.

The company first said people allowed such studies by agreeing to its terms of service, and then eventually apologized. While the academic journal didn’t retract the paper, it issued an “Editorial Expression of Concern.”

To get federal research funding, universities must run testing on humans through what’s known as an institutional review board (IRB), which includes at least one outside expert, approves the ethics of the study, and ensures subjects provide informed consent. Companies don’t have to run research through IRBs. The emotional-contagion study fell through the cracks.

The outcry profoundly changed Facebook’s research operations, creating a review process that was more formal and cautious. It set up a pseudo-IRB of its own, which doesn’t include an outside expert but does have policy and PR staff. Facebook also created a new public database of its published research, which lists more than 470 papers. But that database now has a notable omission—a December 2015 paper two Facebook employees co-wrote with Aleksandr Kogan, the professor at the heart of the Cambridge Analytica scandal. Facebook said it believes the study was inadvertently never posted and is working to ensure other papers aren’t left off in the future.

In March, Gary King, a Harvard University political science professor, met with some Facebook executives about trying to get the company to share more data with academics. It wasn’t the first time he’d made his case, but he left the meeting with no commitment.

A few days later, the Cambridge Analytica scandal broke, and soon Facebook was on the phone with King. Maybe it was time to cooperate, at least to understand what happens in elections. Since then, King and a Stanford University law professor have developed a complicated new structure to give more researchers access to Facebook’s data on the elections and let scholars publish whatever they find. The resulting structure is baroque, involving a new “commission” of scholars Facebook will help pick, an outside academic council that will award research projects, and seven independent U.S. foundations to fund the work. “Negotiating this was kind of like the Arab-Israel peace treaty, but with a lot more partners,” King said.

The new effort, which has yet to propose its first research project, is the most open approach Facebook’s taken yet. “We hope that will be a model that replicates not just within Facebook but across the industry,” Facebook’s Ginsberg said. “It’s a way to make data available for social science research in a way that means that it’s both independent and maintains privacy.” But the new approach will also face an uphill battle to prove its credibility. The new Facebook research project came together under the company’s public relations and policy team, not its research group of PhDs trained in ethics and research design. More than 200 scholars from the Association of Internet Researchers, a global group of interdisciplinary academics, have signed a letter saying the effort is too limited in the questions it’s asking, and also that it risks replicating what sociologists call the “Matthew effect,” where only scholars from elite universities—like Harvard and Stanford—get an inside track.

“Facebook’s new initiative is set up in such a way that it will select projects that address known problems in an area known to be problematic,” the academics wrote. The research effort, the letter said, also won’t let the world—or Facebook, for that matter—get ahead of the next big problem.

Categorized in Social

Source: This article was published helpnetsecurity.com - Contributed by Member: Corey Parker

Ben-Gurion University of the Negev and University of Washington researchers have developed a new generic method to detect fake accounts on most types of social networks, including Facebook and Twitter.

According to their new study in Social Network Analysis and Mining, the new method is based on the assumption that fake accounts tend to establish improbable links to other users in the networks.

“With recent disturbing news about failures to safeguard user privacy, and targeted use of social media by Russia to influence elections, rooting out fake users has never been of greater importance,” explains Dima Kagan, the study’s lead researcher, from the BGU Department of Software and Information Systems Engineering.

“We tested our algorithm on simulated and real-world datasets on 10 different social networks and it performed well on both.”

The algorithm consists of two main iterations based on machine-learning algorithms. The first constructs a link prediction classifier that can estimate, with high accuracy, the probability of a link existing between two users.

The second iteration generates a new set of meta-features based on the features created by the link prediction classifier. Lastly, the researchers used these meta-features and constructed a generic classifier that can detect fake profiles in a variety of online social networks.
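The paper's code is not reproduced here, but the two-stage idea can be sketched roughly in Python with networkx and scikit-learn. Everything below is illustrative rather than the authors' implementation: the synthetic graph, the choice of topological features, and the per-node meta-features (statistics of predicted link probabilities) are assumptions made for the sketch.

    import networkx as nx
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Synthetic social graph: tight communities of "real" users, plus a few
    # "fake" accounts that attach to random users (improbable links).
    G = nx.connected_caveman_graph(10, 8)      # 80 real users in cliques
    real_nodes = list(G.nodes)
    fake_nodes = list(range(80, 90))
    for f in fake_nodes:
        G.add_node(f)
        for target in rng.choice(real_nodes, size=6, replace=False):
            G.add_edge(f, int(target))

    def pair_features(graph, u, v):
        """Simple topological features for a candidate link (u, v)."""
        cn = len(list(nx.common_neighbors(graph, u, v)))
        jac = next(nx.jaccard_coefficient(graph, [(u, v)]))[2]
        return [cn, jac, graph.degree(u), graph.degree(v)]

    # Stage 1: link-prediction classifier (existing edges vs. random non-edges).
    pos = list(G.edges)
    non_edges = list(nx.non_edges(G))
    neg_idx = rng.choice(len(non_edges), size=len(pos), replace=False)
    neg = [non_edges[i] for i in neg_idx]
    X1 = np.array([pair_features(G, u, v) for u, v in pos + neg])
    y1 = np.array([1] * len(pos) + [0] * len(neg))
    link_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X1, y1)

    # Stage 2: per-account meta-features = statistics of the predicted
    # probabilities of that account's actual links. Fake accounts tend to
    # have links that the stage-1 model considers improbable.
    def meta_features(node):
        probs = [link_clf.predict_proba([pair_features(G, node, nbr)])[0, 1]
                 for nbr in G.neighbors(node)]
        return [np.mean(probs), np.min(probs), np.median(probs)]

    nodes = real_nodes + fake_nodes
    X2 = np.array([meta_features(n) for n in nodes])
    y2 = np.array([0] * len(real_nodes) + [1] * len(fake_nodes))
    profile_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X2, y2)
    print("training accuracy:", profile_clf.score(X2, y2))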


“Overall, the results demonstrated that in a real-life friendship scenario we can detect people who have the strongest friendship ties as well as malicious users, even on Twitter,” the researchers say. “Our method outperforms other anomaly detection methods and we believe that it has considerable potential for a wide range of applications particularly in the cyber-security arena.”

Other researchers who contributed are Dr. Michael Fire of the University of Washington (a former Ben-Gurion U. doctoral student) and Prof. Yuval Elovici, director of Cyber@BGU and a member of the BGU Department of Software and Information Systems Engineering.

The Ben-Gurion University researchers previously developed the Social Privacy Protector (SPP) Facebook app to help users evaluate their friends list in seconds and identify which connections have few or no mutual links and might be “fake” profiles.

Categorized in Social

Conducting academic research is a critical process. You cannot rely solely on the information you get on the web, because many search results are irrelevant or unrelated to your topic. To ensure that you gather only genuine facts and credible data for your academic papers, check out only the most trusted and most useful resources for your research.

Here's a list of the best free academic search engines that can help you in your research journey.

Google Scholar

Google Scholar is a customized search engine specifically designed for students, educators and anyone else involved in academics. It allows users to find credible information, search journals, and save sources to a personal library. If you need help with your essays, citations for your thesis, or material for other research, this easy-to-use resource can quickly find citation-worthy sources for your academic writing.

iSEEK - Education

iSeek education is a go-to search engine for students, scholars and educators. It is one of the widely used search tools for academic research online. iSeek offers safe, smart, and reliable resources for your paper writing. Using this tool will help you save time, effort and energy in getting your written work done quickly.

Educational Resources Information Center - ERIC

ERIC is a comprehensive online digital library funded by the Institute of Education Sciences of the U.S. Department of Education. It provides a database of education research and information for students, educators, librarians and the public. ERIC contains around 1.3 million articles, and users can search for anything education-related such as journals, books, research papers, various reports, dissertations, policy papers, and other academic materials.

Virtual Learning Resources Center - VLRC

If you're looking for high-quality educational sites to explore, you must check out VLRC. This learning resource center is the best place to go when you're in search of useful research materials and accurate information for your academic requirements. It has a collection of more than 10,000 indexed webpages covering all subject areas.

Internet Archive

Internet Archive, a non-profit digital library, gives users free access to cultural artifacts and historical collections in digital format. It contains millions of free books, music recordings, software titles, texts, audio files, and moving images. This search engine makes capturing, managing and searching content easier for you, without requiring any technical expertise or hosting facilities.

Infotopia

Infotopia is a safe-search alternative to Google that surfaces information and reference sites on subjects including art, social sciences, history, languages, literature, science and technology, and many more.

Source: This article was published hastac.org By Amber Stanley

Categorized in Search Engine

Researchers are wielding the same strange properties that drive quantum computers to create hack-proof forms of data encryption.

Recent advances in quantum computers may soon give hackers access to machines powerful enough to crack even the toughest of standard internet security codes. With these codes broken, all of our online data -- from medical records to bank transactions -- could be vulnerable to attack.

To fight back against the future threat, researchers are wielding the same strange properties that drive quantum computers to create theoretically hack-proof forms of quantum data encryption.

And now, these quantum encryption techniques may be one step closer to wide-scale use thanks to a new system developed by scientists at Duke University, The Ohio State University and Oak Ridge National Laboratory. Their system is capable of creating and distributing encryption codes at megabit-per-second rates, which is five to 10 times faster than existing methods and on par with current internet speeds when running several systems in parallel.

The researchers demonstrate that the technique is secure from common attacks, even in the face of equipment flaws that could open up leaks.

“We are now likely to have a functioning quantum computer that might be able to start breaking the existing cryptographic codes in the near future,” said Daniel Gauthier, a professor of physics at The Ohio State University. “We really need to be thinking hard now of different techniques that we could use for trying to secure the internet.”

The results appear online Nov. 24 in Science Advances.

To a hacker, our online purchases, bank transactions and medical records all look like gibberish due to ciphers called encryption keys. Personal information sent over the web is first scrambled using one of these keys, and then unscrambled by the receiver using the same key. 

For this system to work, both parties must have access to the same key, and it must be kept secret. Quantum key distribution (QKD) takes advantage of one of the fundamental properties of quantum mechanics -- measuring tiny bits of matter like electrons or photons automatically changes their properties -- to exchange keys in a way that immediately alerts both parties to the existence of a security breach. 
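The shared-key principle described in the last two paragraphs is ordinary symmetric encryption, and it can be illustrated in a few lines with the Fernet recipe from the Python cryptography package (a standard software scheme, not the quantum hardware discussed in this article); the message and variable names are invented.

    from cryptography.fernet import Fernet

    # One shared secret key: whoever holds it can both encrypt and decrypt.
    # QKD's whole job is getting this key to both parties without exposing it.
    shared_key = Fernet.generate_key()

    sender = Fernet(shared_key)
    token = sender.encrypt(b"transfer $100 to account 12345")  # gibberish in transit

    receiver = Fernet(shared_key)   # must hold the *same* key
    print(receiver.decrypt(token))  # b'transfer $100 to account 12345'

    # Anyone without shared_key cannot read the message; anyone with it can.
    # That is why the key exchange itself is the step that must be protected.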

Though QKD was first theorized in 1984 and implemented shortly thereafter, the technologies to support its wide-scale use are only now coming online. Companies in Europe now sell laser-based systems for QKD, and in a highly-publicized event last summer, China used a satellite to send a quantum key to two land-based stations located 1200 km apart.

The problem with many of these systems, said Nurul Taimur Islam, a graduate student in physics at Duke, is that they can only transmit keys at relatively low rates -- between tens to hundreds of kilobits per second -- which are too slow for most practical uses on the internet.

“At these rates, quantum-secure encryption systems cannot support some basic daily tasks, such as hosting an encrypted telephone call or video streaming,” Islam said.

Like many QKD systems, Islam’s key transmitter uses a weakened laser to encode information on individual photons of light. But they found a way to pack more information onto each photon, making their technique faster.

By adjusting the time at which the photon is released, and a property of the photon called the phase, their system can encode two bits of information per photon instead of one. This trick, paired with high-speed detectors developed by Clinton Cahall, graduate student in electrical and computer engineering, and Jungsang Kim, professor of electrical and computer engineering at Duke, powers their system to transmit keys five to 10 times faster than other methods.

“It was changing these additional properties of the photon that allowed us to almost double the secure key rate that we were able to obtain if we hadn’t done that,” said Gauthier, who began the work as a professor of physics at Duke before moving to OSU.
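The scaling behind that near-doubling is simple arithmetic: the raw key rate is roughly the detected-photon rate multiplied by the bits carried per photon. The numbers below are made up purely to show the proportion; the real secure rate also depends on loss, error correction and privacy amplification.

    # Hypothetical detection rate, for illustration only.
    detected_photons_per_second = 1_000_000

    bits_per_photon_standard = 1  # e.g., one property (phase) per photon
    bits_per_photon_qudit = 2     # time bin + phase, as described above

    raw_standard = detected_photons_per_second * bits_per_photon_standard
    raw_qudit = detected_photons_per_second * bits_per_photon_qudit

    print(f"standard: {raw_standard / 1e6:.1f} Mbit/s raw")
    print(f"qudit:    {raw_qudit / 1e6:.1f} Mbit/s raw (2x before overheads)")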


In a perfect world, QKD would be perfectly secure. Any attempt to hack a key exchange would leave errors on the transmission that could be easily spotted by the receiver. But real-world implementations of QKD require imperfect equipment, and these imperfections open up leaks that hackers can exploit.

The researchers carefully characterized the limitations of each piece of equipment they used. They then worked with Charles Lim, currently a professor of electrical and computer engineering at the National University of Singapore, to incorporate these experimental flaws into the theory.

“We wanted to identify every experimental flaw in the system, and include these flaws in the theory so that we could ensure our system is secure and there is no potential side-channel attack,” Islam said.

Though their transmitter requires some specialty parts, all of the components are currently available commercially. Encryption keys encoded in photons of light can be sent over existing optical fiber lines that burrow under cities, making it relatively straightforward to integrate their transmitter and receiver into the current internet infrastructure.

“All of this equipment, apart from the single-photon detectors, exist in the telecommunications industry, and with some engineering we could probably fit the entire transmitter and receiver in a box as big as a computer CPU,” Islam said.

This research was supported by the Office of Naval Research Multidisciplinary University Research Initiative program on Wavelength-Agile QKD in a AQ12 Marine Environment (N00014-13-1-0627) and the Defense Advanced Research Projects Agency Defense Sciences Office Information in a Photon program. Additional support was provided by Oak Ridge National Laboratory, operated by UT-Battelle for the U.S. Department of Energy under contract no. DE-AC05-00OR22725, and National University of Singapore startup grant R-263-000-C78-133/731.

CITATION:  "Provably Secure and High-Rate Quantum Key Distribution With Time-Bin Qudits," Nurul T. Islam, Charles Ci Wen Lim, Clinton Cahall, Jungsang Kim and Daniel J. Gauthier. Science Advances, Nov. 24, 2017. DOI: 10.1126/sciadv.1701491

Source: This article was published today.duke.edu By Kara Manke

Categorized in Internet Privacy

The internet is humongous. Finding what you need means selecting from among millions, and sometimes trillions, of search results. However, no one can say for sure that you have found the right information. Is the information reliable and accurate? Would another set of results be better, or more relevant to the query? While the Internet keeps growing every single minute, the clutter makes it even harder to keep up, and more valuable information keeps getting buried underneath it. Unfortunately, the larger the internet grows, the harder it gets to find what you need.

Think of search engines and browsers as a set of information search tools that fetch what you need from the Internet. But a tool is only as good as the job it gets done. Google, Bing, Yahoo and the like are generic tools for Internet search: they perform a fit-all-search-types job. Their results throw tons of web pages at you, which makes selection much harder and accuracy lower.


A simple solution to deal with too much information on the Internet is out there, but only if you care to pay attention – here is a List of Over 1500 Search Engines and Directories to cut your research time in half.


There exists a whole new world of Internet search tools that are job-specific and find the information you need through filtered, precise searching. They draw on the same World Wide Web and look through the same web pages as the main search engines, only better. These search tools are split into Specialized Search Engines and Online Directories.

The Specialized Search Engines are built to drill down into a more specific type of information. They return filtered, less cluttered search results compared with the leading search engines such as Google, Bing and Yahoo. What makes them unique is their built-in ability to use powerful customized filters, and sometimes their own databases, to deliver the type of information you need in specific file formats.

Advanced Research Method

We will classify Specialized Search Engines into Meta-crawlers (or Meta-Search Engines) and Specialized Content Search Engines.

Unlike conventional search engines, Meta-crawlers don't crawl the web themselves, and they do not build their own web page indexes; instead, they collect (aggregate) search snippets from several mainstream search engines (Google, Bing, Yahoo and similar) all at once. They don't have proprietary search technology or the large and expensive infrastructure that the main search engines do. The Meta-crawler aggregates the results and displays them on its own search result pages. In short, they usually concentrate on front-end technologies such as the user interface experience and novel ways of displaying the information. They generate revenue by displaying ads and give the user the option to search for images, audio, video, news and more, simulating a typical search browsing experience.
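That aggregation role amounts to merging several ranked lists into one. The sketch below is a toy meta-search aggregator, not any real meta-crawler's code: the per-engine fetchers are stubs returning canned results, and reciprocal-rank scoring is just one simple, common way to combine rankings.

    from collections import defaultdict

    # Stub "engines": a real meta-crawler would issue HTTP queries to
    # Google, Bing, Yahoo, etc. These return canned, hypothetical rankings.
    def engine_a(query):
        return ["https://a.example/1", "https://b.example/2", "https://c.example/3"]

    def engine_b(query):
        return ["https://b.example/2", "https://a.example/1", "https://d.example/4"]

    def metasearch(query, engines):
        """Merge ranked lists with reciprocal-rank scoring: a URL ranked
        highly by several engines floats to the top of the merged list."""
        scores = defaultdict(float)
        for engine in engines:
            for rank, url in enumerate(engine(query), start=1):
                scores[url] += 1.0 / rank
        return sorted(scores, key=scores.get, reverse=True)

    print(metasearch("coral bleaching", [engine_a, engine_b]))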

Some well-known Meta-Crawlers and other search tools to explore:

  • Ixquick - A meta-search engine with options for choosing what the results should be based on. It respects information privacy, and results are opened in an Ixquick proxy window.
  • Google - Considered the first stop by many Web searchers. Has a large index, and results are known for their high relevancy. Includes the ability to search for images and products, among other features.
  • Bing - General web search engine from Microsoft.
  • Google Scholar - One of Google's specialized search tools, Google Scholar focuses primarily on information from scholarly and peer-reviewed sources. By using the Scholar Preferences page, you can link back to URI's subscriptions for access to many otherwise fee-based articles.
  • DuckDuckGo - A general search engine with a focus on user privacy.
  • Yahoo! - A combination search engine and human-compiled directory, Yahoo also allows you to search for images, Yellow Page listings, and products.
  • Internet Public Library - A collection of carefully selected and arranged Internet reference sources, online texts, online periodicals, and library-related links. Includes IPL original resources such as Associations on the Net, the Online Literary Criticism Collection, POTUS: Presidents of the United States, and Stately Knowledge (facts about the states).
  • URI Libraries' Internet Resources - A collection of links collected and maintained by the URI librarians. It is arranged by subject, like our online databases, and provides access to free internet resources to enhance your learning and research options.
  • Carrot Search - A meta-search engine based on a variety of search engines. Has clickable topic links and diagrams to narrow down search results.
  • iBoogie - A meta-search engine with customizable search type tabs. Search rankings have an emphasis on clusters.
  • iSeek - The meta-search results are from a compilation of authoritative resources from university, government, and established non-commercial providers.
  • PDF Search Engine - Searches for documents with extensions such as .doc, .pdf, .chm, .rtf, and .txt.

The Specialized Content Search Engine focuses on a specific segment of online content, which is why these are also called Topical (Subject-Specific) Search Engines. The content area may be based on topicality, media, content type or genre; beyond that, the source of the material and the function the engine performs in transforming it are what define its specialty.

We can go a bit further and split these into three groups.

Information Contribution – The information source can be data collected from Public Contribution Resource Engines, such as social media contributions and reference platforms such as wikis; examples are YouTube, Vimeo, LinkedIn, Facebook and Reddit. The other type is the Private Contribution Resource Engine, whose searchable database is created internally by the search engine vendor; examples are Netflix (movies), Reuters (news content), TinEye (image repository) and LexisNexis (legal information).

Specialized Function - These are search engines programmed to perform a type of service that is proprietary and unique. They execute tasks that involve collecting web content as information and working on it with algorithms of their own, adding value to the results they produce.

Examples of such search engines include the Wayback Machine, which provides and maintains a historical record of website pages that are no longer available online; Alexa Analytics, which performs web analytics, measures traffic on websites and provides performance metrics; and Wolfram Alpha, which is more than a search engine: it gives you access to the world's facts and data and calculates answers across a range of topics.

Information Category (Subject-Specific Material) - This is where the search is subject-specific, based on the kind of information it retrieves, which the engine obtains through special arrangements with outside sources on a consistent basis. Some examples are found under the broader headings below:

  • Yellow Pages and phone directories
  • People search
  • Government databases and archives
  • Public libraries
  • News bureaus, online journals, and magazines
  • International organizations

A web directory, or link directory, is a well-organized catalog on the World Wide Web: a collection of data organized into categories and subcategories. A directory specializes in linking to other websites and categorizing those links. The web directory is not a search engine, and it does not show numerous web pages from a keyword search. Instead, it exhibits a list of website links by category and subcategory. Most web directory entries are not found by web crawlers but are compiled and reviewed by humans. This categorization encompasses the whole website instead of a single page or a set of keywords, and websites are often limited to inclusion in only a few categories. Web directories often allow site owners to submit their site for listing and have editors review submissions for fitness.
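The category-and-subcategory catalog described above is essentially a small tree that is browsed rather than searched. A minimal sketch, with invented categories and site links:

    # A minimal web-directory structure: categories -> subcategories -> links.
    directory = {
        "Science": {
            "Biology": ["https://example.org/marine-biology-portal"],
            "Physics": ["https://example.org/physics-news"],
        },
        "Arts": {
            "Literature": ["https://example.org/poetry-archive"],
        },
    }

    def list_sites(category, subcategory):
        """Browse by category path instead of by keyword search."""
        return directory.get(category, {}).get(subcategory, [])

    print(list_sites("Science", "Biology"))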

The directories are distinguished into two broad categories.

Public Directories, which do not require user registration or a fee; and Private Directories, which require online registration and may or may not charge a fee for inclusion in their listings.

Public Directories can cover General Topics, or they can be Subject-Based or Domain-Specific.

The General Topics Directory carries popular reference subjects, interests, content domains and their subcategories. Examples include DMOZ (the largest directory of the Web; its open content is mirrored at many sites and powered the Google Directory until July 20, 2011), the A1 Web Directory Organization (a general web directory that lists various quality sites under traditional categories and relevant subcategories), and phpLD, the PHP Link Directory script (released to the public as a free directory script in 2006 and still offered as a free download).

The Subject-Based or Domain-Specific Public Directories are subject- and topic-focused. Among the more famous of these are Hotfrog (a commercial web directory providing websites categorized topically and regionally), the Librarians' Index to the Internet (a directory listing program from the Library of California) and OpenDOAR (an authoritative directory of academic open-access repositories).

Private Directories require online registration and may charge a fee for inclusion in their listings.

Examples of paid commercial versions:

  • Starting Point Directory - $99/Yr
  • Yelp Business Directory - $100/Yr
  • Manta.com - $299/Yr

Some directories require registration as a member, employee, student or subscriber. Examples of these types are found in:

  • Government Employees Websites (Government Secure Portals)
  • Library Networks (Private, Public and Local Libraries)
  • Bureaus, Public Records Access, Legal Documents, Courts Data, Medical Records

The Association of Internet Research Specialists (AIRS) has compiled a comprehensive list it calls "Internet Information Resources." There you will find an extensive collection of search engines and interesting information resources for avid Internet research enthusiasts, especially those who seek serious information without the hassle of sifting through the many pages of the unfiltered Internet. Alternatively, one can search through Phil Bradley's website or The Search Engine List, both of which have interesting links to the many alternatives to typical search engines out there.

 Author: Naveed Manzoor [Toronto, Ontario] 

Categorized in Online Research

What you will be doing.  Companies need the latest business statistics on different companies and markets that they have a stake in.  They need to know about the competition and how to better manage their enterprises.  The one thing they don’t have is the time or resources to do it themselves.

As a freelance Internet researcher, you will be the one who digs and finds this information for the companies in question. You also put this information into usable formats for the company.   

How to start.  Research can be done for any company in any field.  If you choose one niche, you will learn over time which search engines yield the most promising information.  You can create a database of sites and the types of information that can be obtained from them.

Establish yourself by taking freelance jobs as an Internet researcher.  To gain the experience and a reputation, you may have to take assignments in various fields.  Sites like Elance hire professionals like you to help clients with their information gathering. 

Starting costs.  Since you will be working on the Internet as your primary source of information, you will need a computer that can handle the load.  The computer needs all of the latest software for report writing, spreadsheets, presentations, and any other format in which the client may want to receive the information that you’ve found. 

Your home office should be comfortable and functional.  It needs to have a telephone, fax machine, copier, laser printer, and a comfy chair.  You also need money for advertising materials to get your name out there.  Look to spend around $2,000. 

Skills needed.  You need to like research and be good at assessing the value of information at a glance.  A good researcher uses well worn paths to find their information.  Some researchers get bogged down in too much information and have a tough time sorting it all out.  With each new project, you will learn to only chase down leads that are relevant to the specifics of the assignment and throw the others out.  


Marketing.  Networking is important here.  Focus your marketing on the area where you want to concentrate your research.  Use business associations to develop a list of contacts for mass mailings.  Highlight your area of expertise and list some past work.

Start your own website to attract customers.  Advertise on as many sites and forums as you can.  Offer discounts for the first research project to gain a client’s trust and the promise of future business. 

Research can be an interesting business.  You uncover bits of information that could mean good news for your clients.  Pretty soon, you’ll be able to find anything for anyone. 

Source: This article was published internetbasedmoms.com

Categorized in Online Research

THE WARNINGS CONSUMERS hear from information security pros tend to focus on trust: Don't click web links or attachments from an untrusted sender. Only install applications from a trusted source or from a trusted app store. But lately, devious hackers have been targeting their attacks further up the software supply chain, sneaking malware into downloads from even trusted vendors, long before you ever click to install.

On Monday, Cisco's Talos security research division revealed that hackers sabotaged the ultra-popular, free computer-cleanup tool CCleaner for at least the last month, inserting a backdoor into updates to the application that landed in millions of personal computers. That attack betrayed basic consumer trust in CCleaner-developer Avast, and software firms more broadly, by lacing a legitimate program with malware—one distributed by a security company, no less.

It's also part of an increasingly common pattern. Three times in the last three months, hackers have exploited the digital supply chain to plant tainted code that hides in software companies' own systems of installation and updates, hijacking those trusted channels to stealthily spread their malicious code.

"There's a concerning trend in these supply-chain attacks," says Craig Williams, the head of Cisco's Talos team. "Attackers are realizing that if they find these soft targets, companies without a lot of security practices, they can hijack that customer base and use it as their own malware install base...And the more we see it, the more attackers will be attracted to it."

According to Avast, the tainted version of the CCleaner app had been installed 2.27 million times from when the software was first sabotaged in August until last week, when a beta version of a Cisco network monitoring tool discovered the rogue app acting suspiciously on a customer's network. (Israeli security firm Morphisec alerted Avast to the problem even earlier, in mid-August.) Avast cryptographically signs installations and updates for CCleaner, so that no imposter can spoof its downloads without possessing an unforgeable cryptographic key. But the hackers had apparently infiltrated Avast's software development or distribution process before that signature occurred, so that the antivirus firm was essentially putting its stamp of approval on malware, and pushing it out to consumers.
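As a rough illustration of what a consumer-side check can and cannot do here, below is a minimal Python sketch that verifies a downloaded installer against a checksum published out of band; the file name and the expected digest are placeholders. Note the limitation the CCleaner case exposes: if attackers compromise the vendor's own build or signing pipeline, the "official" hash or signature will match the tainted file, so a check like this cannot catch that case.

import hashlib

# Placeholder value - in practice, take the digest from the vendor's site.
EXPECTED_SHA256 = "0" * 64

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("ccsetup.exe")  # hypothetical local file name
if digest != EXPECTED_SHA256:
    raise SystemExit("Digest mismatch - do not run this installer.")
print("Digest matches the published value.")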

That attack comes two months after hackers used a similar supply-chain vulnerability to deliver a massively damaging outbreak of destructive software known as NotPetya to hundreds of targets centered in Ukraine, but also branching out to other European countries and the US. That software, which posed as ransomware but is widely believed to have in fact been a data-wiping disruption tool, commandeered the update mechanism of an obscure—but popular in Ukraine—piece of accounting software known as MeDoc. Using that update mechanism as an infection point and then spreading through corporate networks, NotPetya paralyzed operations at hundreds of companies, from Ukrainian banks and power plants, to Danish shipping conglomerate Maersk, to US pharmaceutical giant Merck.

One month later, researchers at Russian security firm Kaspersky discovered another supply chain attack they called "Shadowpad": Hackers had smuggled a backdoor capable of downloading malware into hundreds of banks, energy, and drug companies via corrupted software distributed by the South Korea-based firm Netsarang, which sells enterprise and network management tools. “ShadowPad is an example of how dangerous and wide-scale a successful supply-chain attack can be," Kaspersky analyst Igor Soumenkov wrote at the time. "Given the opportunities for reach and data collection it gives to the attackers, most likely it will be reproduced again and again with some other widely used software component." (Kaspersky itself is dealing with its own software trust problem: The Department of Homeland Security has banned its use in US government agencies, and retail giant Best Buy has pulled its software from shelves, due to suspicions that it too could be abused by Kaspersky's suspected associates in the Russian government.)

Supply-chain attacks have intermittently surfaced for years. But the summer's repeated incidents point to an uptick, says Jake Williams, a researcher and consultant at security firm Rendition Infosec. "We have a reliance on open-source or widely distributed software where the distribution points are themselves vulnerable," says Williams. "That’s becoming the new low-hanging fruit."

Williams argues that the move up the supply chain may be due in part to improved security for consumers, and to companies cutting off some other easy routes to infection. Firewalls are near-universal, finding hackable vulnerabilities in applications like Microsoft Office or PDF readers isn't as easy as it used to be, and companies are increasingly—though not always—installing security patches in a timely manner. "People are getting better about general security," Williams says. "But these software supply-chain attacks break all the models. They pass antivirus and basic security checks. And sometimes patching is the attack vector."

'People trust companies, and when they're compromised like this it really breaks that trust. It punishes good behavior.' —Craig Williams, Cisco Talos

In some recent cases, hackers have moved yet another link up the chain, attacking not just software companies instead of consumers, but the development tools used by those companies' programmers. In late 2015, hackers distributed a fake version of the Apple developer tool Xcode on sites frequented by Chinese developers. Those tools injected malicious code known as XcodeGhost into 39 iOS apps, many of which passed Apple's App Store review, resulting in the largest-ever outbreak of iOS malware. And just last week, a similar—but less serious—problem hit Python developers, when the Slovakian government warned that a Python code repository known as Python Package Index, or PyPI, had been loaded with malicious code.
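For the package-repository case specifically, one low-effort defense a Python team could try is an allow-list check for look-alike names. The sketch below is an assumption-laden example rather than any official PyPI feature: the allow-list is invented for illustration, and the comparison simply flags installed distributions whose names closely resemble, but do not match, something you meant to install.

import difflib
from importlib import metadata  # Python 3.8+

# Illustrative allow-list of the distributions this project intends to use.
ALLOWED = {"requests", "urllib3", "numpy", "pandas"}

installed = {dist.metadata["Name"].lower() for dist in metadata.distributions()}
for name in sorted(installed - ALLOWED):
    near = difflib.get_close_matches(name, ALLOWED, n=1, cutoff=0.8)
    if near:
        print(f"'{name}' is not on the allow-list but looks like '{near[0]}' - review it.")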

These kinds of supply-chain attacks are especially insidious because they violate every basic mantra of computer security for consumers, says Cisco's Craig Williams, potentially leaving those who stick to known, trusted sources of software just as vulnerable as those who click and install more promiscuously. That goes double when the proximate source of malware is a security company like Avast. "People trust companies, and when they're compromised like this it really breaks that trust," says Williams. "It punishes good behavior."

These attacks leave consumers, Williams says, with few options to protect themselves. At best, you can try to vaguely suss out the internal security practices of the companies whose software you use, or read up on different applications to determine if they're created with security practices that would prevent them from being corrupted.

But for the average internet user, that information is hardly accessible or transparent. Ultimately, the responsibility for protecting those users from the growing rash of supply-chain attacks will have to move up the supply chain, too—to the companies whose own vulnerabilities have been passed down to their trusting customers.

Source: This article was published in wired.com By Andy Greenberg

The vulnerability exists within Microsoft's own antimalware protection engine, but thankfully there's already a fix.

Apple may now be the richest company, but it's Microsoft's operating system that still loads on most of the desktops and laptops around the world. So when a major security bug is discovered, it's important that it gets fixed quickly. And Google researchers recently discovered a really serious one in Windows Defender, of all places.

The bug was discovered by Google Project Zero vulnerability researchers Tavis Ormandy and Natalie Silvanovich. As the tweet by Ormandy below notes, this is the "worst Windows remote code exec" bug discovered as far as he can remember.

 

Tavis Ormandy (@taviso): "I think @natashenka and I just discovered the worst Windows remote code exec in recent memory. This is crazy bad. Report on the way."

 

The vulnerability allows remote code execution if the Microsoft Malware Protection Engine "scans a specially crafted file." If successful, the attacker is then able to run whatever code they like on the breached system as well as using it to start infecting other Windows machines.

According to Engadget, the vulnerability is present on Windows 7, 8.1, RT and Windows 10, meaning just about everyone running Windows is vulnerable.

So you won't be surprised to hear that Microsoft marked the bug as Critical and already has a fix available to close the security hole. It should be applied to your system automatically over the next few days, or you can manually trigger a Windows Update to install the patch now.
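If you would rather confirm the update landed than wait, one option is to check the Malware Protection Engine version that Windows reports. The Python sketch below shells out to the built-in Get-MpComputerStatus PowerShell cmdlet; the minimum version it compares against is an assumption and should be replaced with the first fixed engine version listed in Microsoft's advisory.

import subprocess

# Assumption: replace with the first fixed engine version from Microsoft's advisory.
MIN_FIXED_ENGINE = "1.1.13704.0"

out = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "(Get-MpComputerStatus).AMEngineVersion"],
    capture_output=True, text=True, check=True,
)
current = out.stdout.strip()

def parse(version):
    """Turn a dotted version string into a list of integers for comparison."""
    return [int(part) for part in version.split(".")]

print("Installed engine version:", current)
if parse(current) < parse(MIN_FIXED_ENGINE):
    print("Engine is older than the fixed version - run Windows Update.")
else:
    print("Engine is at or above the fixed version.")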

Source: This article was published in pcmag.com By Matthew Humphries
