
By default, Edge is configured to use Microsoft’s Bing for web searches. Oddly, Google Search is not an option. But you can fix that if you know the trick.

Note: This tip is derived from the Windows 10 Field Guide, which is now being updated for the Windows 10 Creators Update.

If you navigate to Edge’s Advanced Settings interface (Settings and More (“…”) > Settings > View advanced settings > Change search engine, under “Search in the address bar with”), you will discover something unexpected: Google Search is not even an option.

To fix this, use Edge to navigate to Google.com (or your favorite Internet search engine’s website). Then, return to the Change search engine option in Advanced settings.

Now, select “Google Search (discovered)” (or whichever search engine you visited) and then the Set as default button at the bottom of this pane. Future address bar searches will use that search engine instead of Bing.

Note: There is one major downside to this change. Certain Edge features, like Quick answers, will no longer work if you change the search engine from Bing.

Author : Paul Thurrott

Source : https://www.thurrott.com/windows/windows-10/107298/windows-10-tip-change-microsoft-edge-search-engine

Could a synthetic magnetic bubble, like a mini-magnetosphere, protect a crewed mission to Mars from cosmic radiation, and would the energy cost be prohibitively high?

As much as some folks are keen on sending people to Mars as soon as possible, it’s become obvious that protecting any astronauts from an unsafe level of radiation before they even get to Mars is going to be a tricky business. There are two main problems for astronauts leaving our home planet. One is cosmic rays, which are usually turbo-speed protons from outside of our solar system; some cosmic rays are blocked by Earth's magnetosphere, and the remainder are usually stopped by our atmosphere. The other problem comes directly from the Sun itself, which also flings electrons and protons in our direction in the solar wind.

The solar wind is mostly stopped by our magnetosphere, but if you’re going out a bit further, you won’t have that protection.

[Image: The solar wind is a stream of particles, mainly protons and electrons, flowing from the sun's atmosphere at a speed of about 1 million mph. Credit: NASA's Scientific Visualization Studio and the MAVEN Science Team]

The solar wind is usually relatively easy to protect yourself from; with a slightly thicker wall than the bare minimum on your spacecraft, you can usually protect your crew members from a solar wind-related battering. However, cosmic rays are harder to stop. The protons which make up cosmic rays typically carry more energy, so shielding has to be more robust. The second problem with cosmic rays is that sometimes they’re more than just a proton; they can be an entire helium nucleus (two protons and two neutrons), making them a projectile that’s both very high speed and four times the mass of a solar wind particle. These enormous cosmic rays can break apart, at an atomic level, the material they crash into, filling the interior of your spacecraft with radiation, which is not great for anyone trying to live in there.

Once a spacecraft leaves the Earth’s protective bubble, not only does the cosmic ray dose increase dramatically, but you’ve also got a much less protected place to deal with the solar wind. And if the Sun decides to unleash a solar flare in your direction, you’ve got an awful lot of protons coming your way from the Sun, in addition to the Galaxy in general pelting you with helium nuclei.

[Image: Enlil model run of the July 23, 2012 CME and events leading up to it. This view is a 'top-down' view in the plane of Earth's orbit.]

Unprotected, a solar flare can rapidly give you radiation sickness, which makes you tired and also makes you vomit. Fortunately for all involved, most spacecraft have thick enough walls that the crew should be protected from solar flares, but it’s generally considered good practice to reduce all possible risks. On the other hand, cosmic rays are not so easily stopped.
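The scale of that energy gap can be illustrated with a quick back-of-envelope calculation. The sketch below is not from the article; the ~1 GeV value is a typical figure for a galactic cosmic ray proton, used here only for comparison with a solar wind proton moving at roughly one million mph.

```python
# Illustrative comparison: kinetic energy of a solar wind proton vs. a
# typical galactic cosmic ray proton (~1 GeV, an assumed textbook value).
PROTON_MASS_KG = 1.67e-27
EV_PER_JOULE = 1 / 1.602e-19

solar_wind_speed_m_s = 4.5e5                      # ~1 million mph
ke_joules = 0.5 * PROTON_MASS_KG * solar_wind_speed_m_s**2
ke_ev = ke_joules * EV_PER_JOULE                  # roughly 1,000 eV (1 keV)

cosmic_ray_ev = 1e9                               # ~1 GeV cosmic ray proton

print(f"solar wind proton: ~{ke_ev:,.0f} eV")
print(f"cosmic ray proton: ~{cosmic_ray_ev:,.0f} eV")
print(f"ratio: ~{cosmic_ray_ev / ke_ev:,.0f}x")   # about a million times more energy
```

On these rough numbers, a cosmic ray proton carries on the order of a million times the kinetic energy of a solar wind proton, which is why a slightly thicker wall is enough for the solar wind but not for cosmic rays.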

Because cosmic rays are fundamentally charged particles, a miniature magnetosphere surrounding the spacecraft would be an effective way of keeping them away from both your crew and the walls of the spacecraft; if this could be built into a spacecraft, you wouldn’t need to bulk up the outer surfaces of the craft for radiation protection. However, actually doing so is a bit beyond us at the moment. A number of magnet configurations have been proposed, and a recent simulation of three different styles indicated that magnetic shielding could, in fact, reduce the overall radiation dose an astronaut would receive. This is not a given, because to create such a magnetic field, you need to add extra stuff to your spacecraft; the more mass you have, the more stuff Galactic cosmic rays can bash into, filling your craft with extra radiation. However, these portable magnetospheres are only just in the design phase - the next big steps will be building them, making them lighter and easier to power, and making sure they work the way we hoped they would. At this point, all we can really say is that it should be possible. We'll have to wait and see if it's also practical.

Author : Jillian Scudder

Source : https://www.forbes.com/sites/jillianscudder/2017/03/19/astroquizzical-magnetosphere-travel-mars/#3d83bb8351c8


The Next Frontier Of Internet And Search

Search is an everyday part of our lives. From searching for the ingredients to make breakfast, to looking for travel routes, to things as obscure as finding a dog-sitter - we are used to searching for things every day, so much so that over the past ten years the number of hours spent on the Internet has doubled to an average of 20 hours per week. However, during that time, the technology employed by search engines has not changed drastically and has remained reliant on keyword-based search. This kind of search picks out main words, disregards connective words and, in turn, provides users with pages and pages of results, many of which are not relevant.

In recent years, developments in mobile technology and voice search have changed the way people seek out information, and as a result, the way we search has evolved. However, some of the major search engines are yet to catch up.

So, what are the key trends that we can expect to see revolutionise the way we search?

The future of search appears to be in the algorithms behind the technology. Semantic search, or natural language search, is being hailed as the ‘holy grail,’ but in the search of the future, new methods will prevail that provide better results thanks to improved algorithms and their ability to organise information. These methods will also utilise new technological approaches such as “natural intelligence” or “human language search,” rather than artificial intelligence and natural language search.

The difference among the search types is this: keyword search only picks out the words that it thinks are relevant; natural language search is closer to how the human brain processes information; and the human language search that we practise is exact matching between questions and answers, as happens in interactions between human beings.

The technology behind the human language search approach allows users to type in words or terms comprising a number of questions, in sequences that replicate the dialogue that occurs between human beings. For example, instead of carrying out three different searches for UK golf courses, train stations and hotels under £300, users would simply type in “which UK hotels under £300 have golf courses and are near a train station”. This would immediately provide them with accurate results by returning, in a single view, information about hotels, golf courses and train stations, as the sketch below illustrates.
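As a purely illustrative sketch (the hotel data and the search function are made up for this example, not part of any real search engine), a single combined query can replace three separate keyword searches:

```python
# Hypothetical hotel records; in a real engine these would come from an index.
hotels = [
    {"name": "Fairway Lodge", "price": 250, "golf_course": True,  "km_to_station": 0.8},
    {"name": "City Rest",     "price": 120, "golf_course": False, "km_to_station": 0.2},
    {"name": "Links Manor",   "price": 340, "golf_course": True,  "km_to_station": 3.5},
]

def search(hotels, max_price, needs_golf, max_km_to_station):
    """Answer 'which UK hotels under £300 have golf courses and are near a train station'
    in one pass, instead of three separate keyword searches."""
    return [
        h for h in hotels
        if h["price"] < max_price
        and h["golf_course"] == needs_golf
        and h["km_to_station"] <= max_km_to_station
    ]

print(search(hotels, max_price=300, needs_golf=True, max_km_to_station=1.0))
# -> [{'name': 'Fairway Lodge', ...}]  one view combining all three constraints
```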

In an ‘always on, always connected’ world, where people demand instantaneous results, the answers to a search must be precise, complete and immediately accessible.

The humanisation of technology, and in particular of search, can be attributed to this new direction that the future of search is taking.

The aforementioned technology is transforming search and introducing new trends like the human language search approach. Yet the gap between using personal devices and using traditional search engines is yet to be fully bridged. This quest for an effective, true to life search engine that is identical to the way humans think is the holy grail, the online equivalent of the scientific search for a cure for cancer. In recent years there has been a handful of search engines trying and succeeding in mirroring these search techniques, and we can expect to see them launch into and dominate the consumer market. 



The emergence of IoT and Big Data has resulted in increasing amounts of data being produced, and it’s predicted that by the year 2020, about 1.7 megabytes of new information will be created every second for every human being on the planet. All of this additional information means that search needs to be streamlined so that users can filter through the ‘noise’ and efficiently find what they are looking for. Search engines will need to be far more proficient to allow everyday users to effectively navigate the minefield of additional information.

Another key trend we can expect to see in the future of search is the shift from ‘search engines’ to ‘search platforms’, meaning that they will have a wider use. They will provide tools, services and a level of precision that is not currently available, and will be designed for the organisation and management of information. Essential to this is the simplification of results, presenting all the relevant findings on just one page, instead of the hundreds of result pages that we are used to being offered.

Ultimately, what the future holds is unknown, as the amount of time that we spend online increases, and technology becomes an innate part of our lives. It is expected that the desktop versions of search engines that we have become accustomed to will start to copy their mobile counterparts by embracing new methods and techniques like the human language search approach, thus providing accurate results. Fortunately these shifts are already being witnessed within the business sphere, and we can expect to see them being offered to the rest of society within a number of years, if not sooner.

Author : Gianpiero Lotito

Source : http://www.huffingtonpost.co.uk/gianpiero-lotito/the-next-frontier-of-inte_b_14738538.html

Washington (AFP) - Google said Monday it was working to fix a search algorithm glitch that produced "inappropriate and misleading" results from its search engine and connected speaker.

The internet giant reacted after a blog post highlighted unsubstantiated search results indicating former president Barack Obama was planning a "coup d'etat" and that four former US presidents were members of the Ku Klux Klan.

The weekend post from Search Engine Land editor Danny Sullivan found Google delivered "terribly wrong" answers to some queries in its "one true answer" box at the top of search results and in queries to its Google Home speaker.

"The problematic examples I review don't appear to have been deliberate attempts," Sullivan wrote. "Rather, they seem to be the result of Google's algorithms and machine learning making bad selections."

Sullivan said when he asked the speaker if US Republicans were the same as Nazis, it answered in the affirmative.

Similarly, he cited an example in which Google's search engine listed four former US presidents as "active and known" KKK members, even though there has been no conclusive historical evidence supporting that.

The news comes amid a growing controversy over "fake news" circulating online via Google or Facebook, and efforts by the internet giants to weed out hoaxes and misinformation.

In a statement to AFP, Google said its boxed results at the top of a search query, known as "featured snippets," are based on an algorithmic formula.

"Unfortunately, there are instances when we feature a site with inappropriate or misleading content," Google's statement said.

"When we are alerted to a featured snippet that violates our policies, we work quickly to remove them, which we have done in this instance. We apologize for any offense this may have caused."

Google also noted it includes a "feedback" link under these snippets that can allow the search giant to flag or remove inappropriate content.

Source: https://www.yahoo.com/tech/google-vows-fix-inappropriate-search-results-161030628.html

Anonymity networks offer individuals living under repressive regimes protection from surveillance of their internet use. But recently divulged vulnerabilities in the most popular of these networks - Tor - have urged computer scientists to bring forth more secure anonymity schemes.

An all-new anonymity scheme that offers strong security guarantees, but utilizes bandwidth more efficiently than its predecessors, is in the works.

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory, in collaboration with the École Polytechnique Fédérale de Lausanne, will present the new scheme at the Privacy Enhancing Technologies Symposium this month.

During experiments, the researchers' system required only one-tenth as much time as current systems to transfer a large file between anonymous users, according to a post on MIT's official website.

Albert Kwon, the first author on the new paper and a graduate student in electrical engineering and computer science, said that as the basic use case, the team thought of doing anonymous file-sharing, where both the sending and the receiving ends didn't know each other.

This was done keeping in mind that honeypotting and other similar things - in which spies offer services via an anonymity network in a bid to entice its users - are real challenges. "But we also studied applications in microblogging," Kwon said - something like Twitter where a user can opt to anonymously broadcast his/her message to everyone.

The system designed by Kwon in collaboration with his coauthors - Bryan Ford SM '02 PhD '08, an associate professor of computer and communication sciences at the École Polytechnique Fédérale de Lausanne; David Lazar, a graduate student in electrical engineering and computer science; and his adviser Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science at MIT - makes use of an array of existing cryptographic techniques, but combines them in a novel manner.

For a lot of people, the internet can seem like a frightening and intimidating place, and all they seek is help feeling safer online, especially while performing an array of tasks such as making an online purchase, Anonhq reported.

Shell game

A series of servers known as a 'mixnet' is the core of the system. Just before passing a received message on to the next server, each server rearranges the order in which it received the messages. For instance, if messages from Tom, Bob and Rob reach the first server in the order A, B, C, that server would then forward them to the second server in a completely different order, something like C, B, A. The second server would do the same before sending them to the third, and so on.

Even if an attacker somehow manages to track the messages' point of origin, he or she will not be able to decipher which was which by the time they emerge from the last server. The new system is called 'Riffle' after this reshuffling of the messages.
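To make the shell game concrete, here is a minimal, illustrative Python sketch (not Riffle's actual code) in which each server in the chain simply re-orders the batch of messages before forwarding it:

```python
# Minimal mixnet "shell game" sketch: each server independently shuffles the
# batch it receives, so the output order reveals nothing about the input order.
import random

def mixnet_pass(messages, num_servers=3, seed=None):
    """Pass a batch of messages through a chain of shuffling servers."""
    rng = random.Random(seed)
    batch = list(messages)
    for server in range(num_servers):
        rng.shuffle(batch)                         # this hop's re-ordering
        print(f"server {server + 1} forwards: {batch}")
    return batch

# Messages A, B, C arrive at the first server; by the time they leave the last
# server, an observer cannot tell which output position maps to which sender.
mixnet_pass(["A", "B", "C"])
```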

Public proof

In a bid to curb message tampering, Riffle makes use of a technique dubbed a verifiable shuffle.

Thanks to onion encryption, the messages forwarded by each server do not resemble the ones it receives, because it has peeled off a layer of encryption. However, the encryption can be done in a way that allows the server to generate mathematical evidence that the messages it sends are indeed credible manipulations of the ones it receives.

In order to verify the proof, it has to be checked against copies of messages received by the server. Basically, with Riffle, users send their primary messages to all the servers in the mixnet at the same time. Servers then independently check for manipulation.

As long as one server in the mixnet continues to be uncompromised by an attacker, Riffle is cryptographically secure.
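For illustration only, here is a hedged Python sketch of the onion-layering idea, using the third-party cryptography package's Fernet symmetric encryption as a stand-in for the public-key encryption a real deployment would use; the verifiable-shuffle proofs described above are omitted.

```python
# Onion-layering sketch for a three-server mixnet (illustrative, not Riffle).
# pip install cryptography
from cryptography.fernet import Fernet

# Each server holds its own key; the sender knows how to encrypt to all of them.
server_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys) -> bytes:
    """Sender: encrypt for the last server first, so the first server peels first."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(message: bytes, keys) -> bytes:
    """Servers: each hop strips exactly one layer, so the ciphertext it forwards
    no longer resembles the ciphertext it received."""
    for key in keys:
        message = Fernet(key).decrypt(message)
    return message

onion = wrap(b"hello, anonymously", server_keys)
assert peel(onion, server_keys) == b"hello, anonymously"
```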

Author : Vinay Patel

Source : http://www.universityherald.com/articles/34093/20160712/mit-massachusetts-institute-of-technology-researchschool-of-engineering-computer-science-and-artificial-intelligence-laboratory-csail-computer-science-and-technology-cyber-security.htm

How can online research tools aid the work of investigative reporters and others looking into transnational financial flows, corporate structures and other illicit activities of organized crime and global business?

Google and the Organized Crime and Corruption Reporting Project (OCCRP) brought together a small group of investigative journalists and technologists from around the world to examine the answers to this question at their first Investigathon in London last month.

While Google staff demonstrated how to use some of their tools, such as Image Search and Fusion Tables, the star of the event was the Investigative Dashboard, a project of OCCRP developed by investigative journalist Paul Radu and media strategist Justin Arenstein. The platform connects journalists investigating transnational stories with researchers who are familiar with a particular geographic region. These researchers then retrieve information about companies or individuals from local registries and databases.

Dashboard-inspired group work sessions at the Investigathon highlighted the importance of data from company registries and official gazettes in investigative journalism. Knowing which companies share board members or have subsidiaries in other countries is crucial to understanding potential conflicts of interest--and potentially dodgy business schemes.

The workshop highlighted the need for newsrooms to put this data on companies and control structures into context, alongside other evidence, over the course of an investigation. While such data are available in a globally consolidated form through paid services, few investigative reporters or news outlets are able to afford their steep fees.

OpenCorporates, an open data-powered alternative, makes data freely available, but the start-up doesn't yet cover all jurisdictions of interest. And while OpenCorporates, and other sites such as the OffshoreLeaks database, OpenSpending and OpenSecrets increasingly offer newsworthy data for reporters to mine, there needs to be another, investigation-specific layer on top of these sites.

Such a layer of tools would combine data about persons and companies from public data platforms with other evidence that research uncovers. This layer would also store information about the source and reliability of the data, and allow journalists to analyze the combined dataset to discover signs of illicit or illegal behavior.

While such tools for integrating and analyzing investigative evidence are available for those with deep pockets, there is also a need for an open source solution. This is why one of the goals of my ICFJ Knight International Journalism Fellowship is to build out grano, a toolkit for investigative journalists.

Starting with an influence-mapping tool, similar to Poderopedia and LittleSis, our first goal is to create an involuntary face book of the companies, people and institutions of interest to journalistic investigations. Once that basic infrastructure has matured, we want to take the step from making a database to making a workbench. grano will help investigators create a coherent representation of their investigations, by combining data from public sources with the evidence they have collected themselves. This will create both an analytical tool and a re-usable memory, either to be shared within institutions or even in public.

 


 

As a first prototype, we're building out connectedAfrica, which we'll launch in South Africa. This collaboration between the African Network of Centers for Investigative Reporting (ANCIR) and the Institute for Security Studies will begin by providing a public database focused on South Africa's top politicians, detailing their business and political involvements. Our goal is to give users a way to explore connections between individuals, companies and public institutions, and to analyze the network for signs of dodgy behavior.

Based on the feedback from this experiment, we're hoping to turn grano into a product that can support users who want to create similar structures for their own investigations - whether they are looking at company networks, political finance, government licensing, procurement, court cases, policy making - or all of these combined.

How do you think such tools might support your investigative reporting? Please share your thoughts in the comments.

Friedrich Lindenberg is an ICFJ Knight International Journalism Fellow who works with journalists and watchdog organizations to develop data resources and investigative tools.

Global media innovation coverage related to the projects and partners of the ICFJ Knight International Journalism Fellows on IJNet is supported by the John S. and James L. Knight Foundation and edited by Jennifer Dorroh.

Author : Friedrich Lindenberg

Source : http://ijnet.org/en/blog/how-can-online-research-tools-aid-work-investigative-reporters

Finding free and legal images to accompany your web content has never been difficult, thanks to Creative Commons. The nonprofit organization offers copyright licenses that creators can use to share their work more broadly, while putting them in control of where and how their work can be used, how it should be attributed and more. Now the organization is making it easier to access this content with a new search engine, CC Search, launched into beta this morning.

Larger image search engines, like those from Google and Flickr, have for years offered tools to filter for CC-licensed images, but Creative Commons’ own search tool continues to have a sizable audience of its own. The organization says that nearly 60,000 users search its site every month. But it believed it needed to do better, in terms of making the commons more accessible.

“There is no ‘front door’ to the commons, and the tools people need to curate, share, and remix works aren’t yet available,” writes Ryan Merkley, Creative Commons CEO on the organization’s blog. “We want to make the commons more usable, and this is our next step in that direction.”


While Creative Commons licenses can be used across a variety of media, including video, audio, music and much more, the search engine for now only focuses on images given that they comprise half of the total commons.

The engine pulls in photos from Flickr, 500px, Rijksmuseum, the New York Public Library and the Metropolitan Museum of Art as its initial sources. The latter was added just today, to coincide with the launch, and brings 200,000 more images to the service.

In total, there are roughly 9,477,000 images available at the time of launch, though the exact figures will vary at times.

In addition to having a more modern look-and-feel, the new CC Search lets you narrow searches by license type, title, creator, tags, collection and type of institution. It also includes social features, letting you make and share lists of favorite images, as well as add tags and favorites to individual items. Plus, you can save your searches for quick access in the future.


The engine also makes it easier to apply the necessary attribution, when available, by offering pre-formatted attribution text you can click to copy and paste.

As a beta, the organization says it’s looking to now gain feedback from users about the new product, which it will use to help guide the next steps, including things like forthcoming features, what media types to support next and which repositories should be added. Creative Commons says it’s already focused on bringing to future CC Search releases the full content of the Europeana collection, a selected subset from DPLA and a larger subset of the Flickr Commons.

Other additions in the works include more tools to customize shared lists, a way to search from your own curated material, the ability for trusted users to push metadata (e.g. tags) back to the larger collection, and more advanced search tools, like search by color, drill down into tags, and search public lists.

Being able to more easily search the commons has been a focus over the years for smaller services, too, which long ago launched their own dedicated CC search tools, like Compfight or Openphoto, for example. But it makes sense to have an advanced search feature that lives on Creative Commons’ website as well — and one that, in time, will expand beyond just images.

“This is a significant moment for CC, as we’ve always wanted to be able to do more to help people find and use the commons and make connections with each other as they create new things,” noted Merkley, in the announcement.

The beta search engine is available at ccsearch.creativecommons.org.

Source : https://techcrunch.com/2017/02/07/creative-commons-unveils-a-new-photo-search-engine-with-filters-lists-social-sharing/


Curing disease with scientific literature

Chan Zuckerberg Initiative acquires AI-powered science search engine Meta

Meta, a search engine powered by artificial intelligence (AI), is in the spotlight after news of its recent acquisition by the Chan Zuckerberg Initiative (CZI). The CZI has the implicit mission of “curing all disease,” unlike other charitable foundations that tend to be disease-specific.

By acquiring this powerful scientific search engine and providing it for free to all researchers, the CZI steps closer towards achieving its main goals: “To foster collaboration between teams of scientists and labs across multiple universities over long periods of time, to focus on developing tools that are geared toward eradicating diseases rather than simply treating them, and to improve and expand scientific funding writ large.”

In a blog post, Meta co-founder Molyneux writes that the idea for his company stemmed from his attempt to address a common hurdle faced by many researchers: literature overload. “So I teamed up with my sister, Amy Molyneux — who happened to be an incredible developer with experience developing large-scale online platforms — and together we took on the challenge of building an online platform that would solve the problem of literature overload and allow people to stream and discover the literature in a more organic way,” wrote Molyneux. “Sciencescape, and eventually Meta, was born from that challenge.”

Meta’s impact on the research experience

In an interview with The Varsity, Michael Guerzhoy, a machine learning lecturer at U of T’s Department of Computer Science, noted Meta’s potential.

“There was enormous progress over the past ten or so years in terms of making the body of human knowledge searchable and accessible. You used to hear stories about graduate students working on a research idea for a long while before accidentally discovering that someone already published the same idea. Finding research papers, both old and new, that are relevant to your research used to be laborious.
“Meta aims to facilitate and, more excitingly, automate these processes,” said Guerzhoy.

For all students interested in pursuing a career in machine learning, Guerzhoy imparted the following advice:

“For people who are just starting university, my advice is to learn as much math as possible.” Guerzhoy notes that math is fundamentally what’s behind the “magic” of machine learning. To him, “it’s kind of amazing that you can use calculus and linear algebra to get insights from data.” He advises students to “take the most challenging math and statistics courses that you can handle, take an interest in them, and make sure that you deeply understand the material.”

Guerzhoy makes it clear that data is everywhere. “We are surrounded by data — election results, sports stats, financial news, etc.”
However, Guerzhoy makes it clear that theory needs to be practiced by students. “Once you have got a bit of a background in machine learning — through taking a U of T course, or perhaps through taking one of the several excellent online courses that are available because you couldn’t wait until your third or fourth year, start working on machine learning projects,” he suggests.

“In the past several years, a lot of excellent machine learning frameworks, such as Google’s TensorFlow, have become available,” notes Guerzhoy. “They make it relatively easy to build machine learning systems that perform very well. Right now, someone who successfully implemented an interesting machine learning system and can talk about it during a job interview would not have trouble getting an offer. Especially if you are considering graduate school, try to work on a course project or a summer project with a research faculty member.”
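As a small, self-contained illustration of Guerzhoy's point that calculus and linear algebra extract insight from data (not tied to Meta, TensorFlow, or any particular course, and using synthetic data), the following sketch fits a line to noisy points with gradient descent using nothing but NumPy:

```python
# Tiny gradient-descent linear regression on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, size=100)   # true slope 3, intercept 2

w, b = 0.0, 0.0                 # parameters to learn
lr = 0.01                       # learning rate
for _ in range(2000):
    error = (w * x + b) - y
    grad_w = 2 * np.mean(error * x)   # d(mean squared error)/dw
    grad_b = 2 * np.mean(error)       # d(mean squared error)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned slope {w:.2f}, intercept {b:.2f}")   # close to 3 and 2
```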

News about Meta has generated a lot of excitement and interest about AI among the U of T community.

“It’s great to see that U of T is still a hub for AI innovation,” says Matthew Scicluna, Statistics Lecturer at UTM and a machine learning enthusiast. “It’s very encouraging for future students who want to study machine learning, or use it in their business.”

A previous version of this article stated that “Meta has the implicit mission of ‘curing all disease.'” In fact, the Chan Zuckerberg Initiative (CZI) has this as their implicit mission.

Author : Sheryl Anne Montano

Source : http://thevarsity.ca/2017/02/06/curing-disease-with-scientific-literature/

We live in the information age, and that’s reflected in the emergence of content marketing as a significant part of business marketing strategies. Create the right content and you can grow your business organically, as people find it via search engines and social media. Providing valuable content also establishes your business as an authority in its field, enhancing its reputation.

However, it’s time consuming to generate content and develop a marketing action plan for that content. If you get it wrong, you could end up investing a large chunk of valuable time on a failed campaign. That’s why it helps to incorporate content marketing tools to strengthen your efforts.

Here are five content marketing tools that can take your strategy to the next level.

BuzzSumo

One of the most difficult aspects of content marketing is consistently coming up with ideas for new content. BuzzSumo uses analytics to show you what content is trending and where. You can see what is being shared the most across any of the major social media networks and filter the results by the type of content (blog posts, infographics, videos, etc.).

Want to know which topics will get you the best results? BuzzSumo can show you topics related to your business that are the most popular right now, helping you create content that is more likely to trend. This tool’s analytics show you where your advertising is best spent to bring in more of your target audience. It also has information on which influencers are getting the most traction in your market, so you can find the right people to promote your content.


Curata

Curata is a content curation software that simplifies the process of finding and publishing content. It offers two products: a content creation software and a data-driven content marketing platform. If you’re having trouble sticking to a publishing schedule or curating content, Curata is a huge help. It draws and organizes content from hundreds of thousands of sources, and it has a self-learning engine so you get better results the more you use it.

You can publish content you find through Curata across any of your channels in one click. This makes it easy to maintain a steady stream of content without needing to generate the content yourself. Curata also analyzes the results of the content with your audience, showing you the outcome of your efforts and giving you the information you need to update your strategy when necessary.


Buffer

The most effective way to start building your business’ social media accounts is through automation, and when it comes to platforms, it’s hard to beat Buffer’s ease of use and excellent features. Buffer works with Twitter, Facebook, LinkedIn, Google+, Instagram, and Pinterest, so you can manage all your accounts in one place.

Posting consistently is a key factor in building a social media following, but that’s also a big-time commitment, especially when you’re attempting to build an audience on multiple social networks. Instead of periodically posting on social media throughout the day, use Buffer to set up a schedule of posts. You can use their Chrome browser extension and mobile app to add content to your queue at any time.

Not only does Buffer save you time, it also uses analytics to determine the best times of day to post content. Its Twitter analytics are especially useful, as it checks when your followers are active and analyzes the engagement your tweets get.


Contently

With the rising number of businesses employing remote workers, managing everyone in a global content marketing operation can be a challenge. Contently helps keep your content marketing team organized, whether it consists of remote employees, freelance creatives, or both.

Everyone involved in a project can stay connected and collaborate with in-line commenting, email notifications, and a messaging system. The platform has cloud storage for everything related to your content, allowing you and your team to access it 24/7.

Using Contently, you’re able to view assignments on a dashboard, set up deadlines, track progress, and approve content. The platform also has tools for obtaining legal approval of completed content and sending invoices. In addition to its powerful content management capabilities, Contently can provide suggestions to help you generate ideas for new pieces of content.

Outbrain

 

As anyone with content marketing experience knows, the saying “if you build it, they will come” doesn’t apply to online content. Getting your content in front of an audience is a challenge in and of itself. Outbrain is the leading content discovery platform on the web, so it can significantly grow your audience through increased visibility. It is a pay-to-play tool, but if you have the money in your marketing budget, this is a simple and effective way to get your content out there.

Outbrain is able to promote just about any type of content you want, whether that’s an article, video, infographic, or something else. Your content is placed on popular local and national sites as a promoted suggestion. When readers finish an article and are looking for something new to check out, they’ll see a link to your content.


There’s only so much time in the day, and with all that goes into effective content marketing, you could easily spend it all on mundane tasks. A more efficient and effective solution is to use the powerful content marketing tools listed above to streamline as much of the process as possible. This leaves you with more time to put towards your overall content marketing strategy.

The analytics available through these five content marketing tools are also extremely useful, as they break down what type of content is working well and show you the results of your content. Having more information speeds up and improves your decision-making process, so you can maximize returns on all your marketing efforts.

Author: Jill Phillips
Source: http://www.business2community.com/brandviews/mainstreethost/5-tools-will-help-create-amazing-content-01751531#io0Uodrg0eB1KrUJ.97

 

India 2nd in seeking user information after the US with 6,324 such requests to Facebook alone in H1 2016

During the first half of calendar year 2016, social networking site Facebook removed access to 2,034 pieces of content based on requests by the Indian government and other agencies. This was starkly lower than the 14,971 pieces of content restricted during the previous six-month period, and the 15,155 restricted in the year-ago period.

To justify content restrictions, Facebook has said: “We restricted access to content in India in response to legal requests from law enforcement agencies and the India Computer Emergency Response Team (CERT-In) within the Ministry of Communications and Information Technology. The majority of content restricted was alleged to violate local laws against anti-religious speech and hate speech.”

Section 79(3)(b) of the Information Technology Act, 2000, requires all stakeholders to take down or block access to content when demanded by the government.

However, in March 2015, the Supreme Court (SC) laid down the interpretation of the aforementioned section: “Section 79 is valid subject to Section 79 (3) (b) being read down to mean that an intermediary upon receiving actual knowledge from a court order or on being notified by the appropriate government or its agency that unlawful acts relatable to Article 19 (2) are going to be committed then fails to expeditiously remove or disable access to such material.”

Facebook also said that following the court order, it ceased acting on legal requests to remove access to content unless received by way of a binding court order and/or a notification by an authorised agency which conformed to the constitutional safeguards as directed by the SC.

However, India still requests major technology companies such as Facebook, Twitter, Google, and Apple for data about certain users.

In the January-June period of 2016, Facebook received 6,324 requests from India about its users, and in 53.59 per cent of the cases, the social networking company yielded with some data. This is higher as compared with 5,561 in the second half of 2015, and 5,115 requests in the first half of 2015. According to the latest available data, India was the second in requests that went to Facebook, behind the US with 23,854 requests and closely followed by the UK with 5,469 requests.

Most major technology companies have been making this data public under their ‘transparency reports’ periodically after, in 2013, Edward Snowden leaked files revealing global mass surveillance being conducted by governments through these websites.

So, why do governments seek information about internet users?

Facebook has said: “As part of official investigations, government officials sometimes request data about people who use Facebook. The vast majority of these requests relate to criminal cases, such as robberies or kidnappings. In many cases, the government is requesting basic subscriber information, such as name and length of service. Requests may also ask for IP address logs or account content.”

The company has also said that it checks for the legal sufficiency in every request, and often shares only “basic subscriber information”. Sometimes, these firms also receive fake court orders. Google has claimed it received four fake court orders from India in 2012 alone.
“From time to time, we receive falsified court orders. We do examine the legitimacy of the documents that we receive, and if we determine that a court order is false, we will not comply with it,” Google said.

Similarly, in case certain content posted on websites such as Facebook, Twitter and Google is in contravention of the local laws, the governments send requests to these firms asking them to restrict such content.

For instance, the global content restriction requests spiked during the first half of 2016, compared with the second half of 2015 due to one such reason. Chris Sonderby, Facebook’s deputy general counsel, said: “As for content restriction requests, the number of items restricted for violating local law decreased 83 per cent from 55,827 to 9,663. Last cycle’s figures had been elevated primarily by French content restrictions of a single image from the November 13, 2015, terrorist attacks.”

The processes of responding to government requests for user data also vary from country to country. In the US, where the highest number of requests were received by most platforms during January-June 2016 — 23,854 to Facebook, 2,520 to Twitter, 1,363 to Apple — several legal processes are used at the federal, state, and local levels with most common ones being search warrants and subpoenas.

“Outside the US, we ask the request to be properly issued, for example, through a mutual legal assistance treaty or a form of international process known as a letter rogatory, except in the case of certain emergencies,” professional networking platform LinkedIn said.

But, what are considered as emergencies?

“In rare circumstances involving imminent serious bodily harm or death, we will consider responding to an emergency request for data. These requests must be submitted using the Emergency Disclosure Request Form included in LinkedIn’s Law Enforcement Data Request … and must be signed under penalty of perjury by a law enforcement agent,” LinkedIn said. Apart from these corporate giants, there are other independent organisations as well that release regular reports and databases about requests made by governments for user information and content removal from websites.

A website called Lumen publishes requests and notices sent by various stakeholders to companies, including those with claims of copyrighted content.

While some of the technology companies have a policy to notify the users about their information being shared with the government, exceptions to these policies also remain.

“Twitter’s policy is to notify users of requests for their account information, which includes a copy of the request, prior to disclosure unless we are prohibited from doing so. Exceptions to prior notice may include exigent or counterproductive circumstances (e.g., emergencies regarding imminent threat to life; child sexual exploitation; terrorism). We may also provide post-notice to affected users when prior notice is prohibited,” micro-blogging website Twitter has said.

Author: Pranav Mukul
Source: http://indianexpress.com/article/technology/tech-news-technology/online-surveillance-seeking-user-info-a-rising-global-phenomenon-4451087
