Videos have revolutionized the world of Internet marketing, and their popularity is only slated to grow. More than 60% of businesses use videos as a marketing tool today, a 66% increase from last year. A survey conducted by Wyzowl reports that nearly 91% of these businesses plan to either increase or maintain their spending this year. So what is it that makes videos so powerful in marketing? And what can you do to make sure your marketing and promotional videos reach your target audience? Here’s a look at three quick and easy ways to make your video content as search engine friendly as possible.

69% of Consumers Choose to Watch Video Content

If you are a complete novice to video marketing, you first need to understand why videos are so much more effective than plain text at improving sales and driving conversions. Watching a video simply doesn’t demand as much reasoning as deriving meaning from reading. The human brain is often said to process visual information as much as 60,000 times faster than text. Add to that the natural human tendency to avoid laborious cognitive strain, and the fact that people are more emotionally affected by what they see in a video than by what they read, and it is no wonder that the love for videos runs so wide and deep.

Consumer data show that 69% of people would choose a video over text to learn about a product or a service. Similarly, if given an option between talking to a customer support team and watching an explainer video for solving a problem, 68% of people would choose the latter option. Moreover, nearly three-quarters of the people who watch an explainer video end up buying the product or service.

Not surprising when you consider the fact that more than 75% of businesses that have used videos for marketing and promotion say that their videos have given them a good return on their investment. While 93% say that these videos have helped their customers understand their product or service better, 62% believe that videos have increased the amount of organic traffic their websites receive.

3 Ways to Make Videos More Search Engine Friendly

If amazing content were all you needed to rise above your competitors and rank higher in search engine results, life would be so much easier! Unfortunately, that is not the case. In addition to basic search engine optimization techniques, such as using descriptive file names, adding relevant keywords to your title, meta tags, and description, and using the comments section to its full potential, there are other ways to maximize the popularity of your videos.

#1 The Magic of Closed Captions

A study conducted by Discovery Digital Networks in 2013 found a marked increase in views for YouTube videos that had closed captions compared to videos that did not. Since search engines cannot “watch” your videos to understand what they are about, they depend on accompanying text data such as closed captions for indexing, which makes it more likely that your videos will rank higher in search engine results if you add captions.

While you have the added benefit of making your videos accessible to millions of people who are deaf or have hearing disabilities, they are not the only ones who use closed captions. About 80% of the people who use closed captions do so because it makes it easier for them to enjoy videos in an unfavorable environment like a noisy train or a quiet library.
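By way of a rough sketch (an illustration added here, not something the original study describes), closed captions for web video are commonly written in the WebVTT format, which players such as YouTube and HTML5 video can read. The short Python example below builds a minimal WebVTT file from a list of timed caption cues; the caption text and timings are invented:

```python
def to_timestamp(seconds):
    """Format a time in seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    total_ms = round(seconds * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def build_webvtt(cues):
    """Build a WebVTT caption file from (start, end, text) tuples."""
    lines = ["WEBVTT", ""]  # required header, then a blank line
    for start, end, text in cues:
        lines.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")
        lines.append(text)
        lines.append("")  # blank line separates cues
    return "\n".join(lines)

captions = [
    (0.0, 2.5, "Welcome to our product tour."),
    (2.5, 6.0, "In this video we'll cover three quick setup steps."),
]
print(build_webvtt(captions))
```

Uploading a file like this alongside a video gives search engines the same text data that auto-generated captions would, but with full control over accuracy.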

#2 Use Transcripts on Web Pages 

Transcripts offer yet another useful way to help your videos climb in search engine rankings. Like captions, transcripts are used by search engines to index your webpage and thus contribute to an organic increase in web traffic. In fact, a study conducted by Liveclicker found that adding transcripts to web pages increased revenue by about 16%. Transcripts also make the content more useful for audience members who would like to skim the text or refer to it later, as well as for viewers with hearing disabilities.
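As a hypothetical illustration (added here, not prescribed by the original article), one common way to expose a transcript to search engines is to include it in the page’s structured data. The Python sketch below generates a schema.org VideoObject JSON-LD block with a transcript property; the video name, description, and transcript text are made up:

```python
import json

def video_jsonld(name, description, transcript):
    """Build a schema.org VideoObject JSON-LD string with an embedded transcript."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "VideoObject",
        "name": name,
        "description": description,
        "transcript": transcript,
    }, indent=2)

snippet = video_jsonld(
    "Product explainer",
    "A two-minute walkthrough of our service.",
    "Hi, and welcome. In this video we'll show you how to get started.",
)

# The JSON-LD block is typically placed in the page head inside a script tag.
print(f'<script type="application/ld+json">\n{snippet}\n</script>')
```

Because the transcript then lives in the page markup itself, crawlers can index the spoken content even though they cannot watch the video.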

#3 Add Multi-lingual Subtitles to Your Video Content

Of course, we cannot disregard the importance of adding subtitles to your videos. Having your closed captions translated into multiple languages improves accessibility and helps thousands of viewers from around the world enjoy your videos. If numbers are what it takes to convince you, studies have indicated that the addition of subtitles to a video can increase viewer engagement time by more than 40%. Subtitles have also been shown to increase the number of people who watch a video to completion by 80%.

If you find yourself intrigued by all these possibilities for improving your video search engine rankings, don’t forget to check out this brand new infographic from Take 1 Transcription for tons of useful info along with some amazing facts and figures about video marketing!


Source : tubularinsights.com


Categorized in Online Research

A new startup, Justice Toolbox, Inc., today released its online search engine to help everyday people find lawyers (http://www.justicetoolbox.com). The search engine uses data mined from official state court records to compute and display how many cases a lawyer has won and lost and the lawyer’s approximate win rate. All of this is done on a per case type basis, so that, for example, consumers can see how often lawyers win in traffic cases separately from divorce cases. No other products on the market allow consumers to check how often lawyers actually win cases.

The search engine uses data from close to 5 million state court records, includes over 70,000 lawyers, and allows searching for 180 case types. It is free to use.

The core technology was developed in secrecy over the last year and involves a custom-designed artificial intelligence (AI) program that analyzes each court record, formally called a “docket,” which is the official record of events in a case. Dockets are commonly used by lawyers, though they are incomprehensible to everyday people due to legal jargon. Justice Toolbox’s AI reads and understands these dockets as a lawyer would, in order to determine the case outcome, case type, and the attorneys involved.

“Choosing counsel is one of the most important decisions a person can make in their legal case,” said Bryant Lee, Founder and CEO of Justice Toolbox, Inc. “People should have the benefit of seeing an attorney’s public court records before making that decision.”

The startup is the brainchild of Lee, an attorney and Harvard Law School graduate who previously worked at one of Washington, D.C.’s largest law firms, Covington & Burling LLP. He was inspired to create Justice Toolbox because attorneys at his firm would routinely send company-wide emails asking for suggestions on attorneys to use for everyday issues. He realized that even lawyers had no idea how to find other lawyers, and that a technological solution was needed. As a lawyer, Lee regularly read court dockets and believed that he could develop an AI program to do the same thing.

Justice Toolbox is based in Bethesda, Maryland and is initially launching with data from Maryland and District of Columbia courts. It plans to expand to more cities and states across the country by next year.

Source:  prweb.com

Categorized in Search Engine

Look out Google Scholar—there’s a new kid on the block. Semantic Scholar, a free, online tool developed under the guidance of Microsoft cofounder Paul Allen, is using machine learning and other aspects of artificial intelligence (AI) to make the monumental task of parsing the scientific literature less onerous. Launched last year, Semantic Scholar can now comb through 10 million published research papers, its creators announced last week (November 11). “This is a game changer,” Andrew Huberman, a neurobiologist at Stanford University not involved in the project, told Nature. “It leads you through what is otherwise a pretty dense jungle of information.”

When the nonprofit Allen Institute for Artificial Intelligence (AI2) launched Semantic Scholar last November, the search engine indexed 3 million published research articles in the field of computer science. The service now searches 10 million papers in both computer science and neuroscience. “Semantic Scholar puts AI at the service of the scientific community,” Oren Etzioni, chief executive officer of AI2, said in the statement. “The brain continues to mystify the scientific and medical research community and harbors some of the diseases that are the most challenging to cure. Our hope is that the field of neuroscience can benefit from AI methods to ensure the best and most relevant studies are easily queried so medical research can move with maximum speed and efficiency.”

The main benefit of Semantic Scholar, which its creators say will soon be expanded to include the full biomedical literature, is that the AI-driven engine is able to understand the content and context of scientific papers, searching figures within an article, for example, rather than just listing its abstract and raw bibliographic data.

But early reports suggest that Semantic Scholar isn’t yet 100 percent debugged. “Looking at ‘most influential publications’ sometimes gives strange results,” Sam Gershman, a Harvard University computational neuroscientist, told ScienceInsider. “For example, none of the most influential articles listed for [University of California, Berkeley, psychologist] Thomas Griffiths fall into his top five most cited articles.”

Warts and all, ScienceInsider recently took the search engine for a spin, instructing it to rank the top 10 most influential neuroscientists based on an analysis of their citation histories.

Source:  the-scientist.com

Categorized in Search Engine

A new artificial intelligence-based search engine may be an important step towards search results produced by a machine that “thinks” through answers rather than just indexing information.

Semantic Scholar is an AI-based search engine created by the non-profit Allen Institute for Artificial Intelligence, which is based in Seattle. The big thinkers there plan to trump their main competitor, Google Scholar, by upping their total research article database from 3 million at launch to over 10 million early next year.

“This is a game changer,” says Andrew Huberman, a neurobiologist at Stanford University. “It leads you through what is otherwise a pretty dense jungle of information.”

Semantic Scholar, which currently caters mainly to neuroscience and computer science, uses machine learning, semantic analysis, and machine vision (a process used primarily in robot guidance and industrial inspection) to identify connections between relevant research papers. Google Scholar, meanwhile, works like a general search engine and simply searches for keywords.

But Semantic Scholar has a long way to go. It has a meager database compared to Google Scholar, which indexes more than 200 million articles and can even access information locked behind paywalls. Most researchers will probably end up using both, at least in the short term.

“The one I still use the most is Google Scholar,” says Jose Manuel Gómez-Pérez, who works on semantic searching for the software company Expert System in Madrid. “But there is a lot of potential here.”

So where is this all going? Those who have been binge-watching the new HBO hit “Westworld”, which is set in a futuristic amusement park populated by synthetic androids, may get a little creeped out by the endgame here. The goal for the Allen Institute for Artificial Intelligence is to create an AI system that can read and understand scientific text and form its own hypotheses. This obviously has huge and wide-ranging implications, and we trust that the answer to the question “What is the best Thai restaurant in Cleveland?” will not be “Kill all humans”.

Semantic Scholar is not the only search engine of its kind. Microsoft quietly released its own AI scholarly research tool, called Microsoft Academic, in May of this year. Microsoft Academic is the replacement for an older product called Microsoft Academic Search, which was shelved in 2012.

At least to date, this field is characterized by cooperation as much as competition. Semantic Scholar, Microsoft Academic, and a few smaller AI search engines share search algorithms so as to hasten the development of current AI capabilities. Microsoft Academic uses the Bing database, so it has access to over 160 million publications, and is also toying with the idea of making the engine user-customizable.

“The Microsoft Academic phoenix is undeniably growing wings,” says Anne-Wil Harzing, who studies science metrics at Middlesex University in the UK.

But one expert says the machine learning aspect of AI search engines could prove problematic. Jeff Clune, a computer scientist at the University of Wyoming in Laramie told Science recently that he liked Semantic Scholar and found it to be a fun and useful service, but said he was at the same time worried that it was a “black box” of information.

“Will people understand where the numbers are coming from?” he asked.

Source:  cantechletter.com

Categorized in Search Engine

A Hitwise post digs into the behavior of searchers and sees whether they prefer searching in the singular or plural. Using the term "laptop" versus "laptops," it is clear that "laptops" is the winner in search. When investigating nine other terms, the following was discovered:

[W]hile the results are not conclusive, it does seem that plural terms are better at sending traffic to retailers than singular terms. Two thirds of the products tested performed better as plurals, with technology products in particular skewing in favour of an added ‘s’.

Of course, as one member on Sphinn noted, this is specific to traffic, not necessarily conversions. However, it's a good first stop. Now can someone compile a report on the conversions? ;)

Forum discussion continues at Sphinn.

Source:  seroundtable.com

Categorized in Search Engine

Facebook has responded to widespread criticism of how its Newsfeed algorithm disseminates and amplifies misinformation in the wake of the Trump victory in the US presidential election yesterday.

Multiple commentators were quick to point a finger of blame at Facebook’s role in the election campaign, arguing the tech giant has been hugely irresponsible given the role its platform now plays as a major media source, and specifically by enabling bogus stories to proliferate — many examples of which were seen circulating in the Facebook Newsfeed during the campaign.

Last week Buzzfeed reported on an entire cottage industry of web users in Macedonia generating fake news stories related to Trump vs Clinton in order to inject them into Facebook’s Newsfeed as a way to drive viral views and generate ad revenue from lucrative US eyeballs.

This enterprise has apparently been wildly successful for the teens involved, with some reportedly managing to pull in between $3,000 and $5,000 per month thanks to the power of Facebook’s amplification algorithm.

That’s a pretty hefty economic incentive to game an algorithm.

As TC’s Sarah Perez wrote yesterday, the social network has become “an outsize player in crafting our understanding of the events that take place around us”.

In a statement sent to TechCrunch responding to a series of questions we put to the company (see below for what we asked), Adam Mosseri, VP of product management at Facebook, conceded the company does need to do more to tackle this problem — although he did not give any indication of how it plans to address the issue.

Here’s his statement in full:

We take misinformation on Facebook very seriously. We value authentic communication, and hear consistently from those who use Facebook that they prefer not to see misinformation. In Newsfeed we use various signals based on community feedback to determine which posts are likely to contain inaccurate information, and reduce their distribution. In Trending we look at a variety of signals to help make sure the topics being shown are reflective of real-world events, and take additional steps to prevent false or misleading content from appearing. Despite these efforts we understand there’s so much more we need to do, and that is why it’s important that we keep improving our ability to detect misinformation. We’re committed to continuing to work on this issue and improve the experiences on our platform.

Facebook has previously been criticized for firing the human editors it used to employ to curate its trending news section. The replacement algorithm it switched to was quickly shown to be trivially easy to fool.

Yet the company continues to self-define as a technology platform, deliberately eschewing wider editorial responsibility for the content its algorithms distribute, in favor of applying a narrow and universal set of community standards and/or trying to find engineering solutions to filter the Newsfeed. An increasingly irresponsible position, given Facebook’s increasingly powerful position as a source and amplifier of ‘news’ (or, as it sometimes turns out to be, propaganda clickbait).

Pew research earlier this year found that a majority of U.S. adults (62 per cent) now get news via social media. And while Facebook is not the only social media outfit in town, nor the only one where fake news can spread (see also: Twitter), it is by far the dominant such platform in the US and in many other markets.

Beyond literal fake news spread via Facebook’s click-hungry platform, the wider issue is the filter bubble its preference-fed Newsfeed algorithms use to encircle individuals as they work to spoonfeed them more clicks — and thus keep users spinning inside concentric circles of opinion, unexposed to alternative points of view.

That’s clearly very bad for empathy, diversity and for a cohesive society.

The filter bubble has been a much-discussed concern — for multiple years — but the consequences of algorithmically denuding the spectrum of available opinion, whilst simultaneously cranking open the Overton window along the axis of an individual’s own particular viewpoint, are perhaps becoming increasingly apparent this year, as social divisions seem to loom larger, noisier and uglier than in recent memory — at the very least as played out on social media.

We know the medium is the message. And on social media we know the message is inherently personal. So letting algorithms manage and control what is often highly emotive messaging makes it look rather like there’s a very large tech giant asleep at the wheel.

Questions we put to Facebook:

  • How does Facebook respond to criticism of its Newsfeed algorithm amplifying fake news during the US election, thereby contributing negatively to misinformation campaigns and ultimately helping drive support for Donald Trump’s election?
  • Does Facebook have a specific response to Buzzfeed’s investigation of websites in Macedonia being used to generate large numbers of fake news stories that were placed into the Newsfeed?
  • What steps will Facebook be taking to prevent fake news being amplified and propagated on its platform in future?
  • Does the company accept any responsibility for the propagation of fake news via its platform?
  • Will Facebook be reversing its position and hiring human editors and journalists to prevent the trivial gaming of its news algorithms?
  • Does Facebook accept that as increasing numbers of people use its platform as a main news source it has a civic duty to accept editorial responsibility for the content it is broadcasting?
  • Any general comment from Facebook on Trump’s election?

Source:  techcrunch.com

Categorized in Social

In a stunning display of willful ignorance, Google appears not to realize it is a monopolist.

With 93% of the search market in Europe and 76% of the smartphone market (88% worldwide), Google is obligated to behave a bit differently from its competitors, which is exactly why the EU is accusing it of abusing its market dominance.

In its response to the allegations, Google spent much time comparing itself to its non-monopolist competitors, including Windows Phone, which has less than 4% market share. In doing so it clearly fails to understand the issue caused by forcing OEMs to pre-install its suite of apps and prominently feature its search bar in order to get access to the essential Google Play store and other foundational proprietary components needed by Play Store apps.

Real Estate

Google explained that they needed to keep a tight rein on their OEMs to prevent fragmentation, and that their pre-installed apps were only a third of the usual pre-installed load. This, of course, ignores complaints that Google retaliated against OEMs who did not toe the line, including preventing them from releasing both Google Play and AOSP devices.

Additionally, Google explained that they needed to pre-install the Google Search bar to keep Google’s version of Android free, tacitly confirming that search is their major revenue driver from Android. Given that Google has a search monopoly, using its profits from search to undercut proprietary competitors like Windows Phone is exactly the kind of monopoly behaviour (using profits from one monopoly to create another) that is illegal.

Google writes:

Finally, distributing products like Google Search together with Google Play permits us to offer our entire suite for free — as opposed to, for example, charging upfront licensing fees. This free distribution is an efficient solution for everyone — it lowers prices for phone makers and consumers, while still letting us sustain our substantial investment in Android and Play.

In the early 2000s, Microsoft could equally have said that bundling Internet Explorer with Windows and preventing OEMs from selling both Windows and Linux PCs was efficient for computer users and the market.

In short, Google’s excuses would not have stood 10 years ago, and with its search and phone monopolies even more secure now, they certainly will not stand today. Google should be required to give access to its Play store to any competitor under reasonable terms, for example allowing Microsoft to install it on a mobile OS of its choice without being required to install Google’s other suite of apps and search bar, and allowing other companies to provide a real choice to the market.

Google – you are a monopolist. Start behaving like one.

Source:  mspoweruser.com

Categorized in Search Engine

In a blog post yesterday, Google rebuffed antitrust charges by the European Union accusing the search giant of monopolistic practices with its Android operating system.

The EU said that by requiring hardware manufacturers to pre-install Google apps under “restrictive licensing practices,” Google was closing the doors to rival search engines and browsers trying to enter the market.

Google, however, says Android is an open source platform that has helped to significantly lower costs for device manufacturers that use the operating system for free — albeit after agreeing to Google’s terms.

Google also points to the fact that Apple pre-installs its own apps on the iPhone, as Microsoft does with Windows. And Android doesn’t block device manufacturers from pre-installing competing services next to Google apps, nor does it block users from deleting Google apps.

But those who have complained to the EU about Google’s restrictive contracts see it differently. One of the industry organizations that lodged a complaint, Fairsearch — which represents competitors Microsoft, Nokia and Oracle — says Google locks phone manufacturers into a web of contracts that effectively force them to install Google apps.

Apps, after all, are how phones collect user data, and that’s how Google sells ads.

Google is in the throes of two other antitrust complaints with the EU. One involves accusations that the company favors its own search results in its online shopping service over its rivals. The other alleges the online search giant abuses its market power by offering its online advertising on third-party websites that use Google’s search engine.

If the European Union concludes Google is in violation of its antitrust rules, it could fine the company up to $7.5 billion, or 10 percent of its annual revenue.

The case has similarities to the European Commission’s earlier antitrust action against Microsoft. In 2009, regulators claimed that because Internet Explorer was bundled with Windows, it unfairly held a disproportionate share of the browser market in Europe. In the end, Microsoft paid $3.4 billion in fines.

Source:  recode.net

Categorized in Search Engine

Performance marketing platform Criteo, best known for its programmatic display ad retargeting capabilities, is expanding into search. The new product, Criteo Predictive Search, is aimed at bringing predictive optimization to Google Shopping campaigns.

Google Shopping continues to grow, with same-store sales increasing between 30 and 50 percent over the past year, according to ChannelAdvisor. Recent research by Engel Research Partners, commissioned by Criteo, found Google Shopping now accounts for 21 percent of the average retailer’s digital marketing budget. While there has been significant innovation on the engine side, Jason Lehmbeck, general manager of search at Criteo, said in an interview that retailers are looking for partners to help turn the added complexity that’s come with this innovation into more sales. A lot of the tools just haven’t kept up, he said.

Criteo Predictive Search is designed to automate the entire optimization process for Google Shopping campaigns without increasing the retailer’s cost per order. It uses machine learning to identify better matching and bidding opportunities down to the product level. Every aspect of the campaign, from structure to remarketing to bids, is modified automatically through the system. The predictive technology learns, in part, from Criteo’s existing access to and analysis of the more than 1.2 billion users it delivers advertising to per month, across more than four billion SKUs.



Roughly 30 retailers, including Teleflora, Camping World and Revolve, have been testing the product in closed beta. Criteo says early testers have seen as much as a 22–49 percent lift in revenue at constant cost. The product is priced on a revenue share model, and there are no annual contracts or charges as a percentage of spend.

Criteo Predictive Search is launching in the US on Tuesday, with more markets to follow in 2017. It does not encompass Local Inventory Ads at this time.

Source:  searchengineland.com

Categorized in Search Engine

Want to know about California’s Proposition 63, a measure to control gun ammunition sales and large magazines, which is on the state ballot this month? Google has the answer. It’s a “deceptive ballot initiative that will criminalize millions of law-abiding Californians.” So much for balanced search results.

Google presents that answer at the very top of its results, when searching for “Prop 63” or “Proposition 63,” as shown below:


To answer “What’s happening here,” as Medium CEO Ev Williams asked when spotting this four days ago: for all its smarts, Google is still pretty dumb.

Over the years, Google has increased the frequency of showing direct answers in its search results — something it calls “Featured Snippets.” The idea is that mobile users especially want fast facts, not to have to click through to a website.

That’s even a potential advantage in its forthcoming Google Home assistant. It should allow Google Home to answer questions well beyond rival Amazon Echo, because Google will rely on the entire web rather than a more limited set of curated resources, especially Wikipedia.

For example, here’s Google Home answering a question it found from the web, making use of featured snippets, that I’d previously tested with Amazon Echo, which couldn’t do it:

This is a question Amazon Echo couldn't answer for me yesterday that Google Home got thanks to the web & featured snippets 


To get these answers, Google effectively guesses (even with all that machine learning) at which site it thinks might have a definitive answer. But the downside is that when Google goes wide beyond curated sources, it makes mistakes. God only loves Christians. Dinosaurs are an indoctrination tool. A not-safe-for-work answer for eating sushi. These are real things that Google featured snippets have gotten wrong in the past.

Heck, Google still will tell you that Barack Obama is “King of the United States” based on our own article about how a featured snippet originally screwed up this answer.


These types of mistakes are embarrassing in web search results. They’re going to be even worse with Google Home, where Google will start reading aloud some of these crazy answers without at least the back-up of other search results. Potentially, that could even hinder the product.

Indeed, as Google Home increases the attention that featured snippets get, it’s likely that we’ll see companies actively working to spam them (more than they do now), or even rival groups trying to secure their “side” as the preferred answer that Google gives.

We’ve asked Google for comment and will update if one comes.

Source:  searchengineland.com

Categorized in Search Engine


The Association of Internet Research Specialists is the world's leading community for Internet Research Specialists, providing a unified platform that delivers education, training, and certification for online research.
