Jay Harris

The fact that Google gathers personal data on its users through a variety of its services is in no way recent news. It has long been known that the internet giant maintains a database storing its users' various search patterns and habits.

This data is even publicly accessible to the user via their Google email and password. The explanation, or rather excuse, for collecting it is that the data is carefully guarded and that ultimately it cannot be used to compromise the user in any way.

They continue to explain that the data is used to better the services provided by Google and to further enhance Google’s advertising capabilities by offering relevant ads during internet browsing.

Google strongly denies claims that said data is shared with third parties, mainly law enforcement agencies, without the user's knowledge or approval. All of this shows a considerable lack of interest in users' online anonymity.

If all of the above is true, then why are our voice search queries recorded by Google's virtual assistant on both Android and Windows 10? Every time somebody uses a voice command to search for something on the web, the query is recorded and later even transcribed and saved in that database.

Luckily, this database is also readily accessible by the user and can even be altered; deleting it completely is the recommended course of action in order to preserve our online anonymity. After that, there is a way to turn off the permission for recording any further voice commands.

Turning off Google Voice Command Recording

  • First, we will have to follow this link in order to log into the recordings database.
  • After entering our email and password, we will be presented with the list of recordings representing our voice commands to Google’s virtual assistant, along with their transcriptions.
  • In the left sidebar, there is a “Delete Activity By” button that will allow us to delete entries from the database.
  • After clicking on it, we should choose “All Time” from the “Delete by Date” dropdown menu.
  • By clicking “Delete,” the process will be completed and all of the records, including our voice command recordings, will be deleted.

Turning off Further Recording

While this has deleted all of the records that Google has acquired about us since we started using their services, it still does not prevent them from continuing to collect new ones. Luckily, this can be disabled as well, and here is how:

  • Again, we will have to visit this link.
  • Instead of clicking the “Delete Activity By” button, this time, go to “Activity Control.”
  • Here you will be presented with several switches; turn all of them off.

By following the above steps, you have successfully denied Google permission to track and log your online activity, and in turn increased your online anonymity.

Google and Entering the Dark Web

While we should already be aware that it is not possible to access the dark web using Google Chrome, there are still some ways that Google can record our dark web searches, which are mentioned on Tor’s official webpage.

As it stands, while using Tor, it is important that other browsers, like Google Chrome in this instance, are turned off.

It is also advisable not to use the Google search engine with Tor and to log out of any Google-connected accounts, like Gmail or YouTube, to make sure that our online anonymity is secure.

One last thing to note is that some search queries, like information regarding hidden services, will “mark” us to law enforcement and in extreme cases prompt them to actively monitor our internet usage and online activity.

While this is not tied specifically or solely to Google and its services, it is still advisable to consider using other browsers or at least a strong VPN.

Secure Alternatives to Google Chrome

Tor

Many people believe that tools for online anonymity like Tor are used only by a select few, usually assumed to be hackers.

For quite a long time now Tor has been the #1 browser for people looking to increase their online anonymity. It is also the only way to access hidden services located on the dark web.

Tor uses an ever-expanding network of nodes — computers mostly owned by volunteers who offer them to help build up the Tor network.

This network serves as an intermediary between the user and the internet, making it hard to connect a user’s IP address to their searches.
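For the technically inclined, here is a minimal sketch of what “using Tor as an intermediary” looks like in practice: a Python script that relays an HTTP request through a locally running Tor client. It assumes Tor is already installed and listening on its default SOCKS port (9050), and that the requests library is installed with SOCKS support (pip install "requests[socks]").

```python
# Minimal sketch: route an HTTP request through a local Tor client.
# Assumes Tor is running and listening on its default SOCKS port 9050.
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"  # socks5h = resolve DNS inside Tor too

def fetch_via_tor(url: str) -> str:
    """Fetch a URL with the traffic relayed through the Tor network."""
    proxies = {"http": TOR_PROXY, "https": TOR_PROXY}
    response = requests.get(url, proxies=proxies, timeout=30)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # check.torproject.org reports whether the request arrived via Tor.
    print(fetch_via_tor("https://check.torproject.org/api/ip"))
```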

The downside of Tor actually lies in its popularity, which makes it the most heavily monitored secure browser.

Despite this, Tor’s security is hardly compromised, and it will continue its work uninterrupted as long as there is a community to back it up.

Epic

Epic is an awesome browser if you want to keep your web browsing as tightly secure as possible.

There are as many routes taken to ensure online anonymity as there are browsers specializing in it. Epic is one of the more popular choices, and its philosophy is offering security through minimalism.

It is based on Chromium; if we had to compare it to anything, it would be a heavily stripped-down Google Chrome. An interesting feature of Epic is that it reroutes all of its users’ searches through the company’s servers.

While this does increase online anonymity by making it harder to connect our IP to our specific searches, it also slows down our searches, though not by enough to make the trade-off unworthwhile.

Another downside is the lack of malware and phishing prevention systems, but these threats can be avoided anyway by using a bit of common sense.

Cocoon Browsing

Cocoon makes the Web a better place by protecting your online privacy.

While not technically a browser, but rather a browser plugin, Cocoon Browsing offers some of the best online anonymity features on the web.

Aside from offering online anonymity through secure browsing, it also has built-in features like anti-Facebook tracking and end-to-end encrypted connection.

The only downside to this service is that it requires a monthly or yearly subscription. There are two versions of the service, Cocoon and Cocoon+, costing $1.49 and $2.49 per month respectively.

Conclusion

With every passing day, big corporations are gathering more and more data on their clients, and while said information is kept secure and confidential, it is only a matter of time before it falls into the wrong hands.

In the end, many people will find that being monitored on the internet is intrusive to their online anonymity, despite the fact that the data logged is not accessible to the public.

While this data may provide some added comfort and utility when searching the web, it is still advisable to at least turn off all the tracking permissions in our browsers, if not to switch to an anonymity-focused alternative.

Source : darkwebnews

What is the Deep Web? What is the Dark Web? These are questions that tend to arise when we hear the term in many popular spy movies and TV shows today. It is a term that is used to describe a collection of websites that are visible but are very masterfully hidden from the general public. Most of these websites hide their IP addresses, which makes it impossible to identify who is behind the screen. 

These websites can’t be accessed via your standard browsers like Chrome and Firefox, and they are not indexed on Google or Yahoo either. Some of these websites are useful and some I would not dare visit again. You can buy almost anything illegal on many of these websites; you can buy a gun or a kilo of marijuana from Mexico. You can hire a hitman or even buy yourself a fake identity. The Deep Web is a murky place and I happened to explore it at length to find out what the fuck goes on in there.

How Does it Work?

Almost every website on the Dark Web uses an encryption tool called Tor. Tor is also distributed as a browser, and many of these websites use a .onion domain, something you cannot access through any regular browser. Tor, or onion routing, is free software originally developed by the United States Naval Research Lab for anonymous communication. It is known as onion routing because traffic is wrapped in layers of encryption, like the layers of an onion, with one layer peeled away at each relay.
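To make the “layers” idea concrete, here is a toy Python sketch of onion-style layered encryption. This is an illustration of the concept only, not the real Tor protocol: the three relay keys and the message are hypothetical, and real onion routing also handles routing headers, circuit negotiation and much more.

```python
# Toy illustration of onion-style layered encryption (NOT the real Tor protocol).
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]  # three hypothetical relays

def wrap(message: bytes, keys) -> bytes:
    # Encrypt for the last relay first, so the first relay's layer is outermost.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(message: bytes, key: bytes) -> bytes:
    # Each relay removes exactly one layer with its own key.
    return Fernet(key).decrypt(message)

onion = wrap(b"hello hidden service", relay_keys)
for key in relay_keys:  # relays peel layers in path order
    onion = peel(onion, key)
print(onion)  # b'hello hidden service'
```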

What is The Dark Web?

The internet that we know of consists of 8 billion websites from all over the world, and what’s shocking is that traffic to websites like Facebook, Google, Amazon and any other page that uses HTTPS or a .com domain contributes only about 4 percent of web content on the internet. These websites sit on the ‘surface web’. The other 96 percent of the digital universe resides predominantly in the deep web, on websites that are protected by passwords and cannot be traced to their owners.

Not all dark websites on the Deep Web use Tor encryption. Some, like the all-new Silk Road Reloaded, use a similar service called I2P. Silk Road was (maybe still is) a website for buying and selling recreational drugs. It was the first modern black-market website; it caught too much heat, and its owner was arrested and recently incarcerated. We will be talking about online marketplaces on the Deep Web a bit later.

Is it the Deep Web or The Dark Web? 

You might think the terms are interchangeable, but they differ in definition. They don’t mean the same thing: the Deep Web is a term for ALL websites present in the network, including ‘Dark Web’ sites, and much of the Deep Web consists of boring data that is hidden for mundane reasons.

However, it is exciting and scary to talk about Dark Websites in general. 

What Is The Dark Web And What The Hell Is Going On In There?

The Dark web is a part of the Deep Web and it requires specialised tools and equipment to access. These websites are deep underground and the owners of the websites have very good reason to keep their content hidden. 

Because of its nature, we cannot possibly fathom or determine how many websites actually exist with malicious content, but as I was researching the Deep Web, I came across some horrific websites that I would like to elaborate on. 

Online Black Market Marketplaces

The most visited websites on the Deep Web are mostly marketplaces that sell illegal drugs, pharmaceutical products and even pirated games. According to Trend Micro, an internet security firm, the users who visit these websites typically buy and sell recreational drugs.

The Deep Web provides a platform for anonymity, and that is the best motivation for people to engage in illegal activities. There is a cybercriminal underground, and its activities can have drastic effects in the real world. Recently, the founder of Silk Road, Ross William Ulbricht, better known as Dread Pirate Roberts, was accused of money laundering, murder for hire, computer hacking, conspiracy to traffic fraudulent identities and conspiracy to traffic narcotics. You might be asking why. That’s because his website, Silk Road, enabled people to sell all of these services. Think of Silk Road as the Amazon of illegal substances and services.

Illegal drugs are easily accessible on the Deep Web, and their availability varies a lot. Many of these websites sell cocaine, cannabis and psychedelics, among others.

There’s even a search engine called ‘Grams’ that indexes and allows people to easily search Deep Web sites that sell illegal drugs. Hell, they even mimicked the Google logo to set themselves apart from other competing websites.  

Money Laundering and Counterfeiting 

On the Deep Web, you never use your regular credit card or debit card to buy stuff. Hell, you don’t even use PayPal for these services. A virtual currency called Bitcoin is the dominant mode of payment; it is a currency designed with anonymity in mind, and it is ideal for illegal activities because it operates outside the control of traditional banks and governments.

There are services available on the Deep Web that make it even harder for authorities to track your Bitcoin transactions. These services mix your Bitcoins through spidery networks of wallets and return them to you. Of course, they charge a small processing fee, but this way the coins become nearly impossible to track.

Bitcoins can be exchanged for real cash; however, there is also a wide availability of fraudulent currency on the Deep Web. This counterfeit cash can be bought in bulk or on a per-order basis. The bills are almost identical to the real thing and are made of 100 percent cotton-linen paper, which is used in most paper money today.

These bills even have the appropriate watermark to make them look legitimate. Most of them can still be detected by an infrared scanner of the kind you commonly see in banks today, but that does not prevent people from buying or selling them. These websites offer $20 bills for half the price, and other websites also offer euros and yen.

Guns 

According to research by Carnegie Mellon University (CMU), the most popular items sold on the dark web are illegal drugs. MDMA and marijuana are the most popular items sold; however, the sale of guns and other weapons is catching up. There is a dedicated website called The Armory, where users can buy all kinds of illegal firearms and explosives. And get this... they ship all over the world!

These sites have made it hard for authorities to effectively monitor them and it seems like the situation is not going to get better anytime soon. 

Passports and Citizenships 

Owning an American or an EU passport is one of the most valuable assets when it comes to travel and citizenship benefits. Being an American or an EU citizen certainly has its perks: a passport not only lets you cross borders, but as a citizen you can open bank accounts, apply for loans, purchase property and even claim state benefits. Unique documents like passports are faked and sold on the dark web, and there are plenty of websites that claim to sell identical passports and driver’s licenses. Prices vary from country to country.

In fact, the founder of Silk Road, Dread Pirate Roberts, was caught because he ordered dozens of fake IDs on the deep web in order to hide his identity from the FBI. These fake IDs were intercepted by the FBI and showed how extensive and accurate some of these documents can be.

Child Pornography 

I am not even going to dignify this topic with a full-blown paragraph, since it is simply inhumane and disgusting to even talk about. Child pornography is present in stupendous quantity, and it needs to stop right now. Just make sure you don’t click on that Twitter logo if you ever decide to explore the dark web.

The Deep Web was invented with the sole purpose of fulfilling a genuine need for freedom and anonymity. It is used by governments to communicate with each other during a crisis, and journalists use it to leak documents that wouldn’t normally be available on the surface web. Civilians used it during the Egyptian crisis, which suggests that the Deep Web has far more use for good than evil.

Still, cybercrime has emerged as a dominant form of usage for users from across the globe. The Deep Web is the platform of obscurity and protection these cybercriminals need in order to operate. It gives them an edge over law enforcement, and they have been polluting a space that might be the future of anonymity in due time.

We here at MensXP do not endorse the Dark Web, and we do not encourage you to take part in any illegal activity or immoral behaviour.

Source : mensxp

We asked Google's Gary Illyes what SEOs and webmasters should think about as this year closes.

Every year we like to get a Googler who is close with the ranking and search quality team to give us future thinking points to relay to the search marketing community. In part two of our interview with Gary Illyes of Google, we asked him that question.

After a little bit of coaxing, Illyes told us three things:

(1) Machine learning

(2) AMP

(3) Structured data

He said:

Well I guess you can guess that we are going to focus more and more on machine learning. Pretty much everywhere in search. But it will not take over the core algorithm. So that’s one thing.

The other thing is that there is a very strong push for AMP everywhere. And you can expect more launches around that.

Structured data, again this is getting picked up more and more by the leads. So that would be another thing.

It is interesting that Illyes said the core algorithm will not be taken over by machine learning — that is a notable point. AMP is obvious, and structured data is as well.

Barry Schwartz: Gary, one of my favorite parts of Matt Cutts’s presentations at SMX Advanced toward the end of the year, and at some other conferences, was that one of the slides always gave webmasters and SEOs what’s coming — what the Google search quality team is looking into for the upcoming year. And we’re kind of at the end of the year now, and I was wondering if you have any of those to share.

Gary Illyes: Come on, it’s early October. I understand that they started, like, pushing the pumpkin spice thing, but it’s really not the end of the year.

Danny Sullivan: I mean, you guys take all of December off and work your way back from the end of the year. And might I add, like, December 4th you’ll be like, here’s the end. It’s not like in January Google Trends goes, here’s what happened this year — those are pushed out in December. Well, one thing — surely you’ve got one thing. One forward-looking statement that you can give for us to make stock purchases.

Gary Illyes: Actually you are in luck because I have a presentation or keynote for Pubcon and I will have a slide for what’s coming.

Danny Sullivan: Well let’s have that slide. Because this isn’t going to air until after…

Barry Schwartz: Can you share anything? One thing?

Danny Sullivan: One thing, this isn’t, like this isn’t live.

Gary Illyes: Oh sure.

Well I guess you can guess that we are going to focus more and more on machine learning. Pretty much everywhere in search. But it will not take over the core algorithm. So that’s one thing. The other thing is that there is a very strong push for AMP everywhere. And you can expect more launches around that. Structured data, again this is getting picked up more and more by the leads. So that would be another thing.

Source : searchengineland

Wednesday, 19 October 2016 15:57

5-step guide to a killer marketing strategy

Benjamin Franklin once said, “If you fail to plan, you plan to fail.” Just as you plan for other aspects of your business, such as product, operations and inventory, marketing requires some extensive planning as well. Plot your marketing strategies ahead of time and it may drive your business from breaking even to breaking records!

If you can’t get your product out there to your customers, there’s really no point in continuing the line of work. So, no matter the size of your business, spend some time to think about your business and craft a robust, killer marketing strategy. There are many factors to consider, but let’s look at what you should focus on.

1. Identify your customer persona

Smart businesses always determine their niche market first. Surveys, focus groups and website metrics are great ways to get to know your audience and develop a detailed customer profile. Stay tuned to your customers’ needs, and make them feel valued with intelligent, relevant content — this helps set the stage for long-term relationships. If you’re bootstrapping, this can be a powerful marketing technique!

2. Know your competition

Entering new markets involves a great deal of research about the market landscape, which includes your competition. Instead of trial and error—which is a rapid path to burnout—look at what your competitors are doing. Pinpoint both their strengths and weaknesses and determine what you can do to edge ahead of the rest. What makes you special to your customers? Why is your product different and better? Always remain differentiated and don’t get lost in advertising clutter and spam. Know who you’re up against, and outsmart them.

3. Set realistic goals

Imagine a desolate desert. You’re wandering aimlessly from one mirage to another. If you don’t know where you’re going, how will you know when you get there? The same goes for marketing. Setting goals is the starting point of any plan. Your marketing targets should be realistic and attainable, and depending on the industry and business stage you’re in, each goal should impact the bottom line.

4. It’s all about tactics

Game plans are important. They guide you and keep you focused on the key aspects of your business. Spend some time figuring out the exact marketing tasks that can help you achieve your goals. And be specific. For instance, if your goal is to generate online leads, then video content marketing tasks may become part of your key activity list. Dive deeper into the details. YouTube videos? Webinars? Testimonial videos? Video EDMs? Get your tactics right and you’ll be well on your way to a home run!

5. Stay ahead of the curve

Technology evolves. Markets change. Algorithms adjust. Users adapt. We live in a dynamic environment where changes are happening constantly. Take social media, for example: Facebook and Instagram, two of the major social media platforms that businesses use, constantly change their algorithms, which can alter how marketers maintain their reach and interaction with customers on those sites. Whether we like it or not, change is the only constant. Regularly review your marketing strategy and revise it as necessary. It is really more of a process than a plan.

Learn from seasoned professionals

Marketing doesn’t have to be rocket science. With the right marketing strategy, it can be your best sales asset.

They say learning is a never-ending process, and you can always do with more knowledge. That’s exactly what Tech in Asia Jakarta’s Marketing Stage aims to provide. Glean new insights from experts at Hubspot, LINE, Edelman, and more as they share actionable takeaways on Day 1 of our Jakarta conference on November 16 and 17. Check out our panel of speakers below:

With one pass, get access to all 6 content stages and so much more! Passes are currently going fast, at a 10 percent discount (code: tiajkt10). Promotion ends on October 28 – get your passes today before it’s too late!

Source : telegraph

Google employs some of the world’s smartest researchers in deep learning and artificial intelligence, so it’s not a bad idea to listen to what they have to say about the space. One of those researchers, senior research scientist Greg Corrado, spoke at RE:WORK’s Deep Learning Summit on Thursday in San Francisco and gave some advice on when, why and how to use deep learning.

His talk was pragmatic and potentially very useful for folks who have heard about deep learning and how great it is — well, at computer vision, language understanding and speech recognition, at least — and are now wondering whether they should try using it for something. The TL;DR version is “maybe,” but here’s a little more nuanced advice from Corrado’s talk.

(And, of course, if you want to learn even more about deep learning, you can attend Gigaom’s Structure Data conference in March and our inaugural Structure Intelligence conference in September. You can also watch the presentations from our Future of AI meetup, which was held in late 2014.)

1. It’s not always necessary, even if it would work

Probably the most-useful piece of advice Corrado gave is that deep learning isn’t necessarily the best approach to solving a problem, even if it would offer the best results. Presently, it’s computationally expensive (in all meanings of the word), it often requires a lot of data (more on that later) and probably requires some in-house expertise if you’re building systems yourself.

So while deep learning might ultimately work well on pattern-recognition tasks on structured data — fraud detection, stock-market prediction or analyzing sales pipelines, for example — Corrado said it’s easier to justify in the areas where it’s already widely used. “In machine perception, deep learning is so much better than the second-best approach that it’s hard to argue with,” he explained, while the gap between deep learning and other options is not so great in other applications.
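To illustrate Corrado’s point, here is a small, hedged experiment using scikit-learn: on a modest structured dataset, a plain logistic-regression baseline is compared against a small neural network. The dataset and the model sizes here are arbitrary choices for illustration only; the takeaway is that when the gap is small, the simpler, cheaper model is easier to justify.

```python
# Hedged illustration: baseline vs. small neural net on modest structured data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple, cheap baseline.
baseline = LogisticRegression(max_iter=2000).fit(X_train, y_train)
# Small neural network; sizes are arbitrary for the demo.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", baseline.score(X_test, y_test))
print("small neural net accuracy:   ", net.score(X_test, y_test))
```

If the two scores land close together, the extra computational and engineering cost of the deep model is hard to argue for — exactly the trade-off Corrado describes outside of machine perception.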

That being said, I found myself in multiple conversations at the event centered around the opportunity to soup up existing enterprise software markets with deep learning and met a few startups trying to do it. In an on-stage interview I did with Baidu’s Andrew Ng (who worked alongside Corrado on the Google Brain project) earlier in the day, he noted how deep learning is currently powering some ad serving at Baidu and suggested that data center operations (something Google is actually exploring) might be a good fit.

2. You don’t have to be Google to do it

Even when companies do decide to take on deep learning work, they don’t need to aim for systems as big as those at Google or Facebook or Baidu, Corrado said. “The answer is definitely not,” he reiterated. “. . . You only need an engine big enough for the rocket fuel available.”

The rocket analogy is a reference to something Ng said in our interview, explaining the tight relationship between systems design and data volume in deep learning environments. Corrado explained that Google needs a huge system because it’s working with huge volumes of data and needs to be able to move quickly as its research evolves. But if you know what you want to do or don’t have major time constraints, he said, smaller systems could work just fine.

For getting started, he added later, a desktop computer could actually work provided it has a sufficiently capable GPU.
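As a hedged illustration of that starting point, a quick check like the following (assuming PyTorch is installed) tells you whether a desktop machine has a usable CUDA GPU before you commit to local experiments:

```python
# Quick check for a usable CUDA GPU on a desktop machine (assumes PyTorch).
import torch

if torch.cuda.is_available():
    print("GPU found:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU found; training will fall back to the CPU.")
```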

3. But you probably need a lot of data

However, Corrado cautioned, it’s no joke that training deep learning models really does take a lot of data — ideally, as much as you can get your hands on. If he’s advising executives on when they should consider deep learning, it pretty much comes down to (a) whether they’re trying to solve a machine perception problem and/or (b) whether they have “a mountain of data.”

If they don’t have a mountain of data, he might suggest they get one. At least 100 trainable observations per feature you want to train is a good start, he said, adding that it’s conceivable to waste months of effort trying to optimize a model that would have been solved a lot quicker if you had just spent some time gathering training data early on.
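Taking that rule of thumb at face value, the sizing arithmetic is trivial. The helper below is just an illustrative back-of-the-envelope calculator based on Corrado’s “100 observations per feature” remark, not an official formula:

```python
# Back-of-the-envelope dataset sizing from Corrado's rule of thumb:
# at least 100 trainable observations per feature.
def min_training_examples(num_features: int, per_feature: int = 100) -> int:
    return num_features * per_feature

for features in (10, 100, 1_000):
    print(f"{features:>5} features -> at least "
          f"{min_training_examples(features):,} examples")
```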

Corrado said he views his job not as building intelligent computers (artificial intelligence) or building computers that can learn (machine learning), but as building computers that can learn to be intelligent. And, he said, “You have to have a lot of data in order for that to work.”

4. It’s not really based on the brain

Corrado received his Ph.D. in neuroscience and worked on IBM’s SyNAPSE neurosynaptic chip before coming to Google, and says he feels confident in saying that deep learning is only loosely based on how the brain works. And that’s based on what little we know about the brain to begin with.

Earlier in the day, Ng said about the same thing. To drive the point home, he noted that while many researchers believe we learn in an unsupervised manner, most production deep learning models today are still trained in a supervised manner. That is, they analyze lots of labeled images, speech samples or whatever in order to learn what it is.

And comparisons to the brain, while easier than nuanced explanations, tend to lead to overinflated connotations about what deep learning is or might be capable of. “This analogy,” Corrado said, “is now officially overhyped.”

Source : gigaom

When we asked Google's Gary Illyes about Penguin, he said SEOs should focus on where their links come from for the most part, but they have less to worry about now that Penguin devalues those links, as opposed to demoting the site.

Here’s another nugget of information learned from the “A conversation with Google’s Gary Illyes (part 1)” podcast at Marketing Land, our sister site: Penguin is billed as a “web spam” algorithm, but it indeed focuses mostly on “link spam.” Google has continually told webmasters that this is a web spam algorithm, yet every webmaster and SEO focuses mostly on links. Google’s Gary Illyes said their focus is right — that they should be mostly concerned with links when tackling Penguin issues.

Gary Illyes made a point to clarify that it isn’t just the link, but rather the “source site” the link is coming from. Google said Penguin is based on “the source site, not on the target site.” You want your links to come from quality sources, as opposed to low-quality sources.

As one example, Gary said he recently looked at a negative SEO case submitted to him and found that the majority of the links were on “empty profile pages, forum profile pages.” When he looked at those links, the new Penguin algorithm was already “discounting” them — devaluing those links.

“The good thing is that it is discounting the links, basically ignoring the links instead of the demoting,” Gary Illyes added. 
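As an illustration of the distinction (my own toy model, not Google’s actual algorithm), the difference between demoting and devaluing can be sketched like this: a demoting scorer lets spammy links actively subtract from a site’s link score, while a devaluing scorer simply ignores them.

```python
# Toy model of "demoting" vs. "devaluing" bad links (illustration only,
# not Google's algorithm). Link weights and the penalty are made up.
links = [
    {"source": "well-known industry blog", "spammy": False, "weight": 1.0},
    {"source": "empty forum profile page", "spammy": True,  "weight": 1.0},
    {"source": "empty forum profile page", "spammy": True,  "weight": 1.0},
]

def score_demoting(links, penalty=2.0):
    # Old-style behaviour: each spammy link subtracts from the score.
    return sum(l["weight"] if not l["spammy"] else -penalty for l in links)

def score_devaluing(links):
    # New-style behaviour: spammy links are simply ignored (weight zero).
    return sum(l["weight"] for l in links if not l["spammy"])

print("demoting: ", score_demoting(links))   # 1.0 - 2.0 - 2.0 = -3.0
print("devaluing:", score_devaluing(links))  # 1.0
```

Under devaluing, the spammy links neither help nor hurt — which is why, as Illyes notes below, placing them is unlikely to have any effect unless it is overdone.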

Barry Schwartz: You also talked about web spam versus link spam and Penguin. I know John Mueller specifically called it out again, in the original Penguin blog post that you had posted, that you said this is specifically a web spam algorithm. But every SEO that I know focuses just on link spam regarding Penguin. And I know when you initially started talking about this on our podcast just now, you said it’s mostly around really really bad links. Is that accurate to say when you talk about Penguin, [that] typically it’s around really, really bad links and not other types of web spam?

Gary Illyes: It’s not just links. It looks at a bunch of different things related to the source site. Links is just the most visible thing and the one that we decided to talk most about, because we already talked about links in general.

But it looks at different things on the source site, not on the target site, and then makes decisions based on those special signals.

I don’t actually want to reveal more of those spam signals because I think they would be pretty — I wouldn’t say easy to spam, but they would be easy to mess with. And I really don’t want that.

But there are quite a few hints in the original, the old Penguin article.

Barry Schwartz: Can you mention one of those hints that is in the article?

Gary Illyes: I would rather not. I know that you can make pretty good assumptions. So I would just let you make assumptions.

Danny Sullivan: If you were making assumptions, how would you make those assumptions?

Gary Illyes: I try not to make assumptions. I try to make decisions based on data.

Barry Schwartz: Should we be focusing on the link spam aspect of it for Penguin? Obviously, focus on all the “make best quality sites,” yada-yada-yada, but we talk about Penguin as reporters, and we’re telling people that SEOs are like Penguin specialists or something like that — they only focus on link spam. Is that wrong? I mean, should they?

Gary Illyes: I think that’s the main thing that they should focus on.

See where it is coming from, and then make a decision based on the source site — whether they want that link or not.

Well, for example, I was looking at a negative SEO case just yesterday or two days ago. Basically, the content owner placed hundreds of links on empty profile pages, forum profile pages. Those links, with the new Penguin, were discounted. But if you looked at the page, it was pretty obvious that the links were placed there for a very specific reason, and that’s to game the ranking algorithms — not just Google’s, but any other ranking algorithm that uses links. If you look at a page, you can make a pretty easy decision on whether to disavow or remove that link or not. And that’s what Penguin is doing. It’s looking at signals on the source page — basically, what kind of page it is, what could be the purpose of that link — and then making a decision based on that, whether to discount those things or not.

The good thing is that it is discounting the links, basically ignoring the links instead of the demoting.

So in general, unless people are overdoing it, it’s unlikely that they will actually feel any sort of effect by placing those. But again, if they are overdoing it, then the manual actions team might take a deeper look.

You can listen to part one of the interview at Marketing Land.

Source : searchengineland

Thursday, 13 October 2016 23:12

Why more women are in top internet jobs

Ant Financial Service Group, the financial arm of e-commerce giant Alibaba, recently appointed Eric Jing as chief executive.

Lucy Peng, who has been at the helm of Ant Financial since its founding two years ago, will remain chairman.

The move is said to pave the way for a Hong Kong initial public offering in 2017.

Peng, 43, a former economics professor, is one of 18 first-generation team members of Alibaba.

She had served as Alibaba’s chief people officer (CPO) for a long time before heading Ant Financial in 2013.

As the CPO, Peng focused on how to instill the right corporate culture and values in the group’s massive staff of nearly 40,000 people.

Peng is widely regarded as No. 3 in Alibaba behind Jack Ma and Group vice chairman Cai Chongxin.

Previously, Alibaba faced a serious credibility issue caused by fake goods and wrongdoing by merchants on its website.

Peng was the key person behind the effort to resolve the crisis.

Peng is well known for her patience, attention to detail and excellent communication skills.

It is said that she would e-mail back and forth with technical staff a hundred times over a minor product or service.

In a letter to employees announcing the personnel change, Ma wrote that Peng has demonstrated “outstanding leadership using unique insight as a woman”, and was a “rock-steady presence in the face of changing times”.

In the past, the internet world was dominated by men, who have traditionally dominated science and engineering.

In fact, most programmers are male, and the founders of the world’s top 10 internet firms are all men.

However, more women are serving in key positions such as CEO, COO or CPO at internet giants these days.

Peng, for instance, joined the ranks of the most powerful women in internet giants, including Facebook chief operating officer Sheryl Sandberg, YouTube CEO Susan Wojcicki and Yahoo CEO Marissa Mayer.

In the early stage of the internet industry, technology was at the core of competitiveness.

A company that could develop a unique technology could often beat its rivals, and engineers and program developers were assigned the top posts.

Nowadays, all internet giants have cloud, big data, e-commerce, social networking platforms and artificial intelligence.

It’s not easy for ordinary customers to tell the difference. Brand image, company values and user experience are increasingly making the difference.

Women leaders usually do better in these areas.

Samsung’s Galaxy Note 7 saga is a good example.

The poor handling of the explosion cases of the new smartphone may in fact have something to do with Samsung’s all-male leadership team.

The CEO, CFO, COO, as well as the nine-person board of the Korean tech giant are all male.

By contrast, its archrival Apple has two women on the board. Angela Ahrendts, former Burberry CEO, now serves as Apple’s vice president of retail and online stores.

Source : ejinsight.com

Even if your user downloads your app, which has app indexing deployed, Google will show them the AMP page over your app page.

At SMX East yesterday, Adam Greenberg, head of Global Product Partnerships at Google, gave a talk about AMP. He said during the question and answer time that AMP pages will override app deep links for the “foreseeable future.”

Last week, we covered how, when Google began rolling out AMP to the core mobile results, it quietly added to its changelog that AMP pages will trump app deep links. In short, that means when a user installs a publisher’s app, does a search on the mobile phone where the app resides and clicks on a link in the Google mobile results that could open the app, Google will instead show the AMP page — not the content within the app the user installed.

Google has made several large pushes with App Indexing through the years. These were incentives to get developers to add deep links and App Indexing to their apps — such as installing apps from the mobile results, app indexing support for iOS apps, a ranking boost for deploying app indexing, Google Search Console reporting and so much more.

But now, if your website has both deployed app indexing and AMP, your app indexing won’t be doing much for you to drive more visits to your native iOS or Android app.

Google told us they “have found that AMP helps us deliver” on a better user experience “because it is consistently fast and reliable.” Google added, “AMP uses 10x less data than a non-AMP page.” Google told us that “people really like AMP” and are “more likely to click on a result when it’s presented in AMP format versus non-AMP.”

Google also told us that they “support both approaches,” but “with AMP — and the ability to deliver a result on Google Search in a median time of less than a second — we know we can provide that reliable and consistently fast experience.”

Personally, as a publisher who has deployed virtually everything Google has asked developers to deploy — from specialized Google Custom Search features to authorship, app indexing, AMP, mobile-friendly pages, HTTPS and more — I find this a bit discouraging, to say the least.

I think if a user has downloaded the app, keeps the app on their device and consumes content within the app, that user would prefer seeing the content within the publisher’s app versus on a lightweight AMP page. But Google clearly disagrees with my personal opinion on this matter.

Original source of this article is searchengineland

Monday, 29 August 2016 13:13

Mozilla invests in browser Cliqz

Mozilla made a strategic investment in Cliqz, maker of an iOS and Android browser with a built-in search engine, “to enable innovation of privacy-focused search experiences”.

Mark Mayo, SVP of Mozilla Firefox, said Cliqz’s products “align with the Mozilla mission. We are proud to help advance the privacy-focused innovation from Cliqz through this strategic investment in their company”.

Cliqz is based in Munich and is majority-owned by international media and technology company Hubert Burda Media.

The Cliqz for Firefox Add-on is already available as a free download. It adds to Firefox “an innovative quick search engine as well as privacy and safety enhancements such as anti-tracking”, said Mozilla.

Cliqz quick search is available in Cliqz’s browsers for Windows, Mac, Linux, Android and iOS. The desktop and iOS versions are built on Mozilla Firefox open source technology and offer built-in privacy and safety features.

Cliqz quick search is optimised for the German language and shows website suggestions, news and information to enable users to search quickly.

Cliqz claims that while conventional search engines primarily work with data on the content, structure and linking of websites, it works instead with statistical data on actual search queries and website visits.

It has developed a technology capable of collecting this information and then building a web index out of it, something it calls the ‘Human Web’.

What’s more, Cliqz’s “privacy-by-design” architecture guarantees that no personal data or personally identifiable information is transmitted to or saved on its servers.

Jean-Paul Schmetz, founder and managing director at Cliqz, said Mozilla is the ideal company to work with because both parties believe in an open internet where people have control over their data.

“Data and search are our core competencies and it makes us proud to contribute our search and privacy technologies to the Mozilla ecosystem,” he said.

Source : http://www.mobileworldlive.com/apps/news-apps/mozilla-invests-in-browser-cliqz/

We’re all a bit worried about the terrifying surveillance state that becomes possible when you cross omnipresent cameras with reliable facial recognition — but a new study suggests that some of the best algorithms are far from infallible when it comes to sorting through a million or more faces.

The University of Washington’s MegaFace Challenge is an open competition among public facial recognition algorithms that’s been running since late last year. The idea is to see how systems that outperform humans on sets of thousands of images do when the database size is increased by an order of magnitude or two.

See, while many of the systems out there learn to find faces by perusing millions or even hundreds of millions of photos, the actual testing has often been done on sets like the Labeled Faces in the Wild one, with 13,000 images ideal for this kind of thing. But real-world circumstances are likely to differ.

“We’re the first to suggest that face recs algorithms should be tested at ‘planet-scale,'” wrote the study’s lead author, Ira Kemelmacher-Shlizerman, in an email to TechCrunch. “I think that many will agree it’s important. The big problem is to create a public dataset and benchmark (where people can compete on the same data). Creating a benchmark is typically a lot of work but a big boost to a research area.”

The researchers started with existing labeled image sets of people — one set consisting of celebrities from various angles, another of individuals with widely varying ages. They added noise to this signal in the form of “distractors,” faces scraped from Creative Commons licensed photos on Flickr.

They ran the test with as few as 10 distractors or as many as a million — essentially, the number of needles stayed the same but they piled on the hay.
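To see why piling on hay matters, here is a hedged toy recreation of the setup using synthetic embeddings and a nearest-neighbour matcher. All the numbers here (embedding size, noise level, gallery size) are arbitrary stand-ins, not MegaFace’s actual parameters; the point is that rank-1 identification accuracy typically degrades as the distractor pool grows.

```python
# Toy version of the MegaFace idea: fixed gallery, growing distractor pool.
# Embeddings are synthetic; parameters are arbitrary stand-ins.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
dim, n_people, noise = 32, 50, 0.7

identities = rng.normal(size=(n_people, dim))
gallery = identities + noise * rng.normal(size=identities.shape)  # enrolled photos
probes = identities + noise * rng.normal(size=identities.shape)   # query photos

for n_distractors in (10, 1_000, 100_000):
    distractors = rng.normal(size=(n_distractors, dim))
    haystack = np.vstack([gallery, distractors])
    # Rank-1 identification: the nearest haystack vector must be the
    # probe's own gallery entry (gallery entries come first in haystack).
    dists = cdist(probes, haystack)
    accuracy = (dists.argmin(axis=1) == np.arange(n_people)).mean()
    print(f"{n_distractors:>7} distractors -> rank-1 accuracy {accuracy:.2f}")
```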

The results show a few surprisingly tenacious algorithms: The clear victor for the age-varied set is Google’s FaceNet, while it and Russia’s N-TechLab are neck and neck in the celebrity database. (SIAT MMLab, from Shenzhen, China, gets honorable mention.)

Conspicuously absent is Facebook’s DeepFace, which in all likelihood would be a serious contender. But as participation is voluntary and Facebook hasn’t released its system publicly, its performance on MegaFace remains a mystery.

Both leaders showed a steady decline as more distractors were added, although efficacy doesn’t fall off quite as fast as the logarithmic scale on the graphs makes it look. The ultra-high accuracy rate touted by Google in its FaceNet paper doesn’t survive past 10,000 distractors, and by the time there are a million, despite a hefty lead, it’s not accurate enough to serve much of a purpose.

Still, getting three out of four right with a million distractors is impressive — but that success rate wouldn’t hold water in court or as a security product. It seems we still have a ways to go before that surveillance state becomes a reality — that one in particular, anyway.

The researchers’ work will be presented a week from today at the Conference on Computer Vision and Pattern Recognition in Las Vegas.

Source : https://techcrunch.com/2016/06/23/facial-recognition-systems-stumble-when-confronted-with-million-face-database/
