
PimEyes markets its service as a tool to protect privacy and prevent the misuse of images

Ever wondered where you appear on the internet? Now, a facial recognition website claims you can upload a picture of anyone and the site will find that same person’s images all around the internet.

PimEyes, a Polish facial recognition website, is a free tool that allows anyone to upload a photo of a person’s face and find more images of that person from publicly accessible websites like Tumblr, YouTube, WordPress blogs, and news outlets.

In essence, it’s not so different from the service provided by Clearview AI, which is currently being used by police and law enforcement agencies around the world. PimEyes’ facial recognition engine doesn’t seem as powerful as Clearview AI’s app is supposed to be. And unlike Clearview AI, it does not scrape most social media sites.

PimEyes markets its service as a tool to protect privacy and prevent the misuse of images. But there’s no guarantee that someone will upload their own face, making it equally powerful for anyone trying to stalk someone else. The company did not respond to a request for comment.

PimEyes monetizes facial recognition by charging for a premium tier, which allows users to see which websites are hosting images of their faces and gives them the ability to set alerts for when new images are uploaded. The PimEyes premium tiers also allow up to 25 saved alerts, meaning one person could be alerted to newly uploaded images of up to 25 people across the internet. PimEyes has also opened up its service for developers to search its database, with pricing for up to 100 million searches per month.

Facial recognition search sites are rare but not new. In 2016, Russian tech company NtechLab launched FindFace, which offered similar search functionality, until shutting it down in a pivot to state surveillance. Founders described it as a way to find women a person wanted to date.

“You could just upload a photo of a movie star you like, or your ex, and then find 10 girls who look similar to her and send them messages,” cofounder Alexander Kabakov told The Guardian.


While Google’s reverse image search also has some capability to find similar faces, it doesn’t use specific facial recognition technology, the company told OneZero earlier this year.

“Search term suggestions rely on aggregate metadata associated with images on the web that are similar to the same composition, background, and non-biometric attributes of a particular image,” a company spokesperson wrote in February. If you upload a photo of yourself with a blank background, for example, Google may surface similarly composed portraits of other people who look nothing like you.

PimEyes also writes on its website that it has special contracts available for law enforcement that can search “darknet websites,” and its algorithms are also built into at least one other company’s application. PimEyes works with Paliscope, software aimed at law enforcement investigators, to provide facial recognition inside documents and videos. Paliscope says it has recently partnered with 4theOne Foundation, which seeks to find and recover trafficked children.

There are still many open questions about PimEyes, like exactly how it obtains data on people’s faces, its contracts with law enforcement, and the accuracy of its algorithms.

PimEyes markets itself as a solution for customers worried about where their photos appear online. The company suggests contacting websites where images are hosted and asking them to remove images. But because anyone can search for anyone, services like PimEyes may generate more privacy issues than they solve.

[Source: This article was published in onezero.medium.com By Dave Gershgorn - Uploaded by the Association Member: Grace Irwin]

Categorized in Search Engine

Privacy-preserving AI techniques could allow researchers to extract insights from sensitive data, if cost and complexity barriers can be overcome. But as the concept of privacy-preserving artificial intelligence matures, data volumes and complexity keep growing. This year, the size of the digital universe could hit 44 zettabytes, according to the World Economic Forum. That sum is 40 times more bytes than the number of stars in the observable universe. And by 2025, IDC projects that number could nearly double.

More Data, More Privacy Problems

While the explosion in data volume, together with declining computation costs, has driven interest in artificial intelligence, a significant portion of data poses potential privacy and cybersecurity questions. Regulatory and cybersecurity issues concerning data abound. AI researchers are constrained by data quality and availability. Databases that would enable them, for instance, to shed light on common diseases or stamp out financial fraud — an estimated $5 trillion global problem — are difficult to obtain. Conversely, innocuous datasets like ImageNet have driven machine learning advances because they are freely available.

A traditional strategy to protect sensitive data is to anonymize it, stripping out confidential information. “Most of the privacy regulations have a clause that permits sufficiently anonymizing it instead of deleting data at request,” said Lisa Donchak, associate partner at McKinsey.

But the catch is, the explosion of data makes the task of re-identifying individuals in masked datasets progressively easier. The goal of protecting privacy is getting “harder and harder to solve because there are so many data snippets available,” said Zulfikar Ramzan, chief technology officer at RSA.

The Internet of Things (IoT) complicates the picture. Connected sensors, found in everything from surveillance cameras to industrial plants to fitness trackers, collect troves of sensitive data. With the appropriate privacy protections in place, such data could be a gold mine for AI research. But security and privacy concerns stand in the way.

Addressing such hurdles requires two things. First, a framework providing user controls and rights on the front-end protects data coming into a database. “That includes specifying who has access to my data and for what purpose,” said Casimir Wierzynski, senior director of AI products at Intel. Second, it requires sufficient data protection, including encrypting data while it is at rest or in transit. The latter is arguably a thornier challenge.
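As a minimal illustration of that second requirement, the sketch below encrypts a record before it is written to disk using Python's third-party cryptography package; the record contents, file name, and key handling are hypothetical examples, and in practice the key would live in a key-management service rather than beside the data it protects.

# Minimal sketch: encrypting sensitive data "at rest" (assumes `pip install cryptography`)
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, fetch this from a key-management service
fernet = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "example"}'   # hypothetical sensitive record

# Encrypt before persisting, so only ciphertext ever touches the disk.
with open("record.enc", "wb") as f:
    f.write(fernet.encrypt(record))

# Later, an authorized process holding the key can recover the plaintext.
with open("record.enc", "rb") as f:
    restored = fernet.decrypt(f.read())
assert restored == record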

[Source: This article was published in urgentcomm.com By Brian Buntz - Uploaded by the Association Member: Bridget Miller]

Categorized in Internet Privacy

Reports published by Market Research Inc on the Mobile Analytics Software and Tools market are spread out over several pages and provide the latest industry data, future market trends, and the products and end users driving revenue growth and profitability. The reports list and study key competitors and provide a strategic analysis of the key factors affecting market dynamics. This report begins with an overview of the Mobile Analytics Software and Tools market and tracks its development. It provides a comprehensive analysis of all regional and major-player segments, offering insight into current market conditions and future market opportunities along with drivers, trending segments, consumer behavior, price factors, and market performance estimates over the forecast period.

Request a pdf copy of this report at
https://www.marketresearchinc.com/request-sample.php?id=30725

Key Strategic Manufacturers:
Adobe Analytics, Pendo, Amplitude Analytics, CleverTap, AppsFlyer, Branch, Heap, Mixpanel, Smartlook, Crashlytics, Instabug, Sentry, Raygun, Bugsee, QuincyKit

The report gives a complete insight into this industry, combining qualitative and quantitative analysis of the market with prime development trends, competitive analysis, and the vital factors that are predominant in the Mobile Analytics Software and Tools market.

The report also targets local markets and key players who have adopted important strategies for business development. The data in the report is presented in statistical form to help you understand the mechanics. The Mobile Analytics Software and Tools market report gathers thorough information from proven research methodologies and dedicated sources in many industries.

Avail 40% Discount on this report at
https://www.marketresearchinc.com/ask-for-discount.php?id=30725

Key Objectives of Mobile Analytics Software and Tools Market Report:
– Study of the annual revenues and market developments of the major players that supply Mobile Analytics Software and Tools
– Analysis of the demand for Mobile Analytics Software and Tools by component
– Assessment of future trends and growth of architecture in the Mobile Analytics Software and Tools market
– Assessment of the Mobile Analytics Software and Tools market with respect to the type of application
– Study of the market trends in various regions and countries, by component, of the Mobile Analytics Software and Tools market
– Study of contracts and developments related to the Mobile Analytics Software and Tools market by key players across different regions
– Finalization of overall market sizes by triangulating the supply-side data, which includes product developments, supply chain, and annual revenues of companies supplying Mobile Analytics Software and Tools across the globe.

Furthermore, the years considered for the study are as follows:

Historical year – 2015-2019

Base year – 2019

Forecast period – 2020 to 2026

Table of Contents:

Mobile Analytics Software and Tools Market Research Report
Chapter 1: Industry Overview
Chapter 2: Analysis of Revenue by Classifications
Chapter 3: Analysis of Revenue by Regions and Applications
Chapter 4: Analysis of Industry Key Manufacturers
Chapter 5: Marketing Trader or Distributor Analysis of the Market
Chapter 6: Analysis of Market Revenue and Market Status
Chapter 7: Development Trend of the Mobile Analytics Software and Tools Market

Continue for TOC………

 If You Have Any Query, Ask Our Experts:
https://www.marketresearchinc.com/enquiry-before-buying.php?id=30725

 About Us

Market Research Inc is farsighted in its view and covers massive ground in global research. Local or global, we keep a close check on both markets. Trends and concurrent assessments sometimes overlap and influence each other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product or service become the best it can be with our informed approach.

[Source: This article was published in coleofduty.com By - Uploaded by the Association Member: Anthony Frank]

Crowdfunding has become the de facto way to support individual ventures and philanthropic efforts. But as crowdfunding platforms have risen to prominence, they’ve also attracted malicious actors who take advantage of unsuspecting donors. Last August, a report from The Verge investigated the Dragonfly Futurefön, a decade-long fraud operation that cost victims nearly $6 million and caught the attention of the FBI. Two years ago, the U.S. Federal Trade Commission announced it was looking into a campaign for a Wi-Fi-enabled, battery-powered backpack that disappeared with more than $700,000.

GoFundMe previously said fraudulent campaigns make up less than 0.1% of all those on its platform, but with millions of new projects launching each year, many bad actors are able to avoid detection. To help catch them, researchers at University College London, Telefonica Research, and the London School of Economics devised an AI system that takes into account textual and image-based features to classify fraudulent crowdfunding behavior at the moment of publication. They claim it’s up to 90.14% accurate at distinguishing between fraudulent and legitimate crowdfunding behavior, even without any user or donation activity.

While two of the largest crowdfunding platforms on the web — GoFundMe and Kickstarter — employ forms of automation to spot potential fraud, neither claims to take the AI-driven approach advocated by the study coauthors. A spokesperson for GoFundMe told VentureBeat the company relies on the “dedicated experts” on its trust and safety team, who use technology “on par with the financial industry” and community reports to spot fraudulent campaigns. To do this, they look at things like:

  • Whether the campaign abides by the terms of service
  • Whether it provides enough information for donors
  • Whether it’s plagiarized
  • Who started the campaign
  • Who is withdrawing funds
  • Who should be receiving funds

Kickstarter says it doesn’t use AI or machine learning tools to prevent fraud, excepting proprietary automated tools, and that the majority of its investigative work is performed manually by looking at what signals surface and analyzing them to guide any action taken. A spokesperson told VentureBeat that in 2018 Kickstarter’s team suspended 354 projects and 509,487 accounts and banned 5,397 users for violating the company’s rules and guidelines — 8 times as many as it suspended in 2017.

The researchers would argue those efforts don’t go far enough. “We find that fraud is a small percentage of the crowdfunding ecosystem, but an insidious problem. It corrodes the trust ecosystem on which these platforms operate, endangering the support that thousands of people receive year on year,” they wrote. “[Crowdfunding platforms aren’t properly] incentivized to combat fraud among users and the campaigns they launch: On the one hand, a platform’s revenue is directly proportional to the number of transactions performed (since the platform charges a fixed amount per donation); on the other hand, if a platform is transparent with respect to how much fraud it has, it may discourage potential donors from participating.”

To build a corpus that could be used to “teach” the above-mentioned system to pick out fraudulent campaigns, the researchers sourced entries from GoFraudMe, a resource that aims to catalog fraudulent cases on the platform. They then created two manually annotated data sets focusing on the health domain, where the monetary and emotional stakes tend to be high. One set contained 191 campaigns from GoFundMe’s medical category, while the other contained 350 campaigns from different crowdfunding platforms (Indiegogo, GoFundMe, MightyCause, Fundrazr, and Fundly) that were directly related to organ transplants.

Human annotators labeled each of the roughly 700 campaigns in the corpora as “fraud” or “not-fraud” according to guidelines that included factors like evidence of contradictory information, a lack of engagement on the part of donors, and participation of the creator in other campaigns. Next, the researchers examined different textual and visual cues that might inform the system’s analysis:

  • Sentiment analysis: The team extracted the sentiments and tones expressed in campaign descriptions using IBM’s Watson natural language processing service. They computed the sentiment as a probability across five emotions (sadness, joy, fear, disgust, and anger) before analyzing confidence scores for seven possible tones (frustration, satisfaction, excitement, politeness, impoliteness, sadness, and sympathy).
  • Complexity and language choice: Operating on the assumption that fraudsters prefer simpler language and shorter sentences, the researchers checked language complexity and word choice in the campaign descriptions. They looked at both a series of readability scores and language features like function words, personal pronouns, and average syllables per word, as well as the total number of characters.
  • Form of the text: The coauthors examined the visual structure of campaign text, looking at things like whether the letters were all lowercase or all uppercase and the number of emojis in the text.
  • Word importance and named-entity recognition: The team computed word importance for the text in the campaign description, revealing similarities (and dissimilarities) among campaigns. They also identified proper nouns, numeric entities, and currencies in the text and assigned them to a finite set of categories.
  • Emotion representation: The researchers repurposed a pretrained AI model to classify campaign images as evoking one of eight emotions (amusement, anger, awe, contentment, disgust, excitement, fear, and sadness) by fine-tuning it on 23,000 emotion-labeled images from Flickr and Instagram.
  • Appearance and semantic representation: Using another AI model, the researchers extracted image appearance representations that provided a description of each image, like dominant colors, the textures of the edges of segments, and the presence of certain objects. They also used a face detector algorithm to estimate the number of faces present in each image.

After boiling many thousands of possible features down to 71 textual and 501 visual variables, the researchers used them to train a machine learning model to automatically detect fraudulent campaigns. Arriving at this ensemble model required building sub-models to classify images and text as fraudulent or not fraudulent and combining the results into a single score for each campaign.
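The article doesn’t reproduce the researchers’ exact pipeline, but a minimal sketch of the general idea, combining a few hypothetical textual features with a placeholder image sub-model score in scikit-learn, might look like the following; the feature choices, toy data, and scores are illustrative stand-ins, not the study’s actual implementation.

# Hypothetical sketch of an ensemble fraud classifier in the spirit described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def text_features(description):
    # A few simple textual cues: length, average word length, uppercase ratio, exclamation marks.
    words = description.split()
    avg_word_len = float(np.mean([len(w) for w in words])) if words else 0.0
    upper_ratio = sum(c.isupper() for c in description) / max(len(description), 1)
    return [len(description), avg_word_len, upper_ratio, description.count("!")]

# Toy data: two campaign descriptions, a placeholder image sub-model score per campaign,
# and labels (1 = fraud, 0 = not fraud). A real system would train on held-out folds.
descriptions = ["URGENT!!! Send money NOW", "Fundraiser for my sister's surgery at City Hospital"]
X_text = np.array([text_features(d) for d in descriptions])
X_img = np.array([[0.8], [0.1]])
y = np.array([1, 0])

# Text sub-model turns textual features into a fraud probability.
text_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_text, y)
text_scores = text_model.predict_proba(X_text)[:, 1].reshape(-1, 1)

# Meta-model combines the text and image sub-model scores into a single score per campaign.
ensemble = LogisticRegression().fit(np.hstack([text_scores, X_img]), y)
print(ensemble.predict_proba(np.hstack([text_scores, X_img]))[:, 1])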

The coauthors claim their approach revealed peculiar trends, like the fact that legitimate campaigns are more likely to have images with at least one face compared with fraudulent campaigns. On the other hand, fraudulent campaigns are generally more desperate in their appeals, in contrast with legitimate campaigns’ descriptiveness and openness about circumstances.

“In recent years, crowdfunding has emerged as a means of making personal appeals for financial support to members of the public … The community trusts that the individual who requests support, whatever the task, is doing so without malicious intent,” the researchers wrote. “However, time and again, fraudulent cases come to light, ranging from fake objectives to embezzlement. Fraudsters often fly under the radar and defraud people of what adds up to tens of millions, under the guise of crowdfunding support, enabled by small individual donations. Detecting and preventing fraud is thus an adversarial problem. Inevitably, perpetrators adapt and attempt to bypass whatever system is deployed to prevent their malicious schemes.”

It’s possible that the system might be latching onto certain features in making its predictions, exhibiting a bias that’s not obvious at first glance. That’s why the coauthors plan to improve it by taking into account sources of labeling bias and test its robustness against unlabeled medically related campaigns across crowdfunding platforms.

“This is a significant step in building a system that is preemptive (e.g., a browser plugin) as opposed to reactive,” they wrote. “We believe our method could help build trust in this ecosystem by allowing potential donors to vet campaigns before contributing.”

[Source: This article was published in venturebeat.com By Kyle Wiggers - Uploaded by the Association Member: Jeremy Frink]

Categorized in Investigative Research

An update to Google Images creates a new way for site owners to drive traffic with their photos.

Google is adding more context to photos in image search results, which presents site owners with a new opportunity to earn traffic.

Launching this week, a new feature in Google Images surfaces quick facts about what’s being shown in photos.

Information about people, places or things related to the image is pulled from Google’s Knowledge Graph and displayed underneath photos when they’re clicked on.


More Context = More Clicks?

Google says this update is intended to help searchers explore topics in more detail.

One of the ways searchers can explore topics in more detail is by visiting the web page where the image is featured.

The added context is likely to make images more appealing to click on. It’s almost like Google added meta descriptions to image search results.

However, it’s not quite the same as that, because the images and the facts appearing underneath come from different sources.


Results in Google Images are sourced from sites all over the web, but the corresponding facts for each image are pulled from the Knowledge Graph.

In the examples shared by Google, you can see how the image comes from the website where it’s hosted while the additional info is taken from another source.

On one hand, that gives site owners little control over the information that is displayed under their images in search results.

On the other hand, Google is giving searchers more information about images that could potentially drive more clicks to the image source.

Perhaps the best part of this update is it requires no action on the part of site owners. Google will enhance your image search snippets all on its own.

Another Traffic Opportunity

If you’re fortunate enough to have content included in Google’s Knowledge Graph, then there are now more opportunities to have those links surfaced in search results.

Contrary to how it may seem at times, Wikipedia is not the only source of information in Google’s Knowledge Graph. Google draws from hundreds of sites across the web to compile billions of facts.

After all, there are over 500 billion facts about five billion entities in the Knowledge Graph – they can’t all come from Wikipedia.

An official Google help page states:

“Facts in the Knowledge Graph come from a variety of sources that compile factual information. In addition to public sources, we license data to provide information such as sports scores, stock prices, and weather forecasts.

We also receive factual information directly from content owners in various ways, including from those who suggest changes to knowledge panels they’ve claimed.”

As Google says, site owners can submit information to the Knowledge Graph by claiming a knowledge panel.

That’s not something everyone can do, however, as they either have to be an entity featured in a knowledge panel or represent one.

But this is still worth mentioning as it’s low-hanging fruit for those who have the opportunity to claim a knowledge panel and haven’t yet.

Claiming your business’s knowledge panel is a must-do if you haven’t done so already. Local businesses stand to gain the most from this update.

That’s especially true if yours is the sort of business that would have photos of it published on the web.

Then your Knowledge Graph information, with a link, could potentially be surfaced underneath those images.

[Source: This article was published in searchenginejournal.com By Matt Southern - Uploaded by the Association Member: Deborah Tannen]

Categorized in Search Engine

In 2020, phishing is just about the most common kind of cyberattack on businesses and individuals alike. 56% of IT decision-makers say that phishing attacks are the top security threat they are facing, and 32% of hacks involve phishing. Here is what video phishing is and how you can protect yourself.

Phishing is no longer limited to emails from Nigerian princes offering the recipients massive returns on investments.

Many phishing messages and websites have become so sophisticated that users can no longer recognize them without specific training. Google now blacklists an average of 50,000 websites for phishing every week.

On the upside, the ways you can protect yourself from phishing attacks have also evolved in recent years. They range from keeping firewall software up to date to using secure platforms such as cloud-based business phone services.

A new threat is looming on the horizon: video phishing.

Driven by technological advances, artificial intelligence, and machine learning, this new trend has the potential of causing catastrophic security breaches.

Keep reading to find out what video phishing is, what it looks like, and how you can protect yourself.

How does Video Phishing work?

Surprise! Elon Musk is interrupting your Zoom call.

Sounds fake? It is.

But it looks disturbingly real.


The demo video shows Avatarify, a tool built by a researcher to turn users into celebrities in real time during Zoom or Skype calls. Its creator, Ali Aliev, says the program’s purpose was to have some fun during the COVID-19 lockdown by surprising friends on video conferences as Albert Einstein, Eminem, or the Mona Lisa.

The technology behind donning someone else’s animated face like a mask is called deepfaking.

Deepfakes are a relatively new application of machine learning tools. These tools generate realistic faces by analyzing thousands of videos and images of a target’s face and extracting patterns for common expressions and movements. Those patterns can then be projected onto anybody, effectively morphing them into someone else.

You could use the image of Elon Musk. Or President Obama. In fact, a deepfake video of the former President calling his successor ‘a total and complete dips**t’ went viral in 2018.

The implications of this technology for cybersecurity are wide-reaching and potentially disastrous.

Because deepfakes go beyond trolling your friends or putting insults in a famous person’s mouth: you won’t know whether it’s a friend being funny or something far more dangerous, video phishing.

What Are the Dangers of Video Phishing?

According to CNN, the majority of deepfake videos on the internet as of the end of 2019 were pornography. In total, about 15,000 such videos were counted. That might not seem like much, considering the vastness of the internet.

The reason for these rather limited numbers is that generating convincing deepfakes takes a fair amount of computational power. Avatarify, for example, needs a high-end gaming PC to run properly.

But lower-quality applications have already been developed, like a face-swapping app that was banned again fairly quickly.

It is a question of time before deepfake technology becomes widely available. And widely used for cybercrime.

Some of these scams have already been documented, and you can find them on YouTube.

In one case, hackers used similar technology to deepfake the voices of chief executive officers and sent voicemail messages to executives. They succeeded in effecting a transfer of a mind-boggling $243,000.

In still another case, three men were arrested in Israel for swindling a businessman out of $8 million by impersonating the French foreign minister.

Experts are already warning about other possible ways deepfake videos could be used to defraud people of funds. One scenario, for example, is extortion. Hackers could threaten to release a video containing content damaging to a person’s or business’s reputation. Such content could range from outright pornography to the CEO of a business endorsing racist views.

As experience has shown, that can be disastrous. For businesses, even the regular type of ‘fake news’ can have catastrophic impacts on business relationships, and even on their stock market value.

“Those kinds of things can put a company out of business through reputation damage,” Chris Kennedy of the AI cyber-security platform AttackIQ said in a recent interview with Forbes. “We’re hitting the tipping point in which technology is taking advantage of the biggest human weakness, we’re over-trusting.”

How to Defend Yourself against Deepfake Video Phishing

Today, maintaining a high cybersecurity standard is more important than ever. With online life proliferating during the COVID-19 crisis, scams and phishing attacks have flourished as well.

The good news about video phishing is that, as of 2020, the technology is still relatively new and the case numbers relatively low. That means individuals and companies have time to prepare and to disseminate information to guard against such attacks.

Know the essential defense moves

As the most basic kind of defense, be careful if you receive an unsolicited video call, particularly from somebody famous or in a position of authority. Never trust caller IDs, hang up immediately, and do not share any information on such calls.

If you receive a video message that could be authentic but you are uncertain about it, you can use software to find out whether what you are looking at is a deepfake. For example, companies such as Deeptrace offer software capable of recognizing AI-generated video content.

Apart from that, some low-tech ways to ward off video phishing are agreeing on code words to use when communicating about sensitive information via video, using a second communication channel to confirm information, or asking security questions that your interlocutor can only answer if they are the real thing.

Basically, pretend you’re in an old James Bond film. ‘In London, April’s a Spring month’ and all that.

Final Thoughts

Using AI to morph into somebody else and extract sensitive information may still sound futuristic. But it’s only a question of time until video phishing hits the mainstream.

As technology advances and AI and machine learning applications that can copy a person’s face and voice become widely available, the number of deepfake scams is set to go through the roof.

[Source: This article was published in digitalmarketnews.com By Kanheya Singh - Uploaded by the Association Member: Issac Avila]

Categorized in Deep Web

Google has made some substantial new changes to its “How Google Search Works” documentation for website owners. And as always when Google makes changes to important documents with an impact on SEO, such as How Search Works and the Quality Rater Guidelines, there are some key insights SEOs can glean from the new changes Google has made.

Of particular note: Google details how it views a “document” as potentially comprising more than one webpage, what Google considers primary and secondary crawls, and an update to its reference to “more than 200 ranking factors,” which has been present in this document since 2013.

But here are the changes and what they mean for SEOs.

Contents

  • 1 Crawling
    • 1.1 Improving Your Crawling
  • 2 The Long Version
  • 3 Crawling
    • 3.1 How does Google find a page?
    • 3.2 Improving Your Crawling
  • 4 Indexing
    • 4.1 Improving your Indexing
      • 4.1.1 What is a document?
  • 5 Serving Results
  • 6 Final Thoughts

Crawling

Google has greatly expanded this section.

They made a slight change to wording, with “some pages are known because Google has already crawled them before” changed to “some pages are known because Google has already visited them before.”   This is a fairly minor change, primarily because Google decided to include an expanded section detailing what crawling actually is.

Google removed:

This process of discovery is called crawling.

The removal of the crawling definition was simply because it was redundant.  In Google’s expanded crawling section, they included a much more detailed definition and description of crawling instead.

The added definition:

Once Google discovers a page URL, it visits, or crawls, the page to find out what’s on it. Google renders the page and analyzes both the text and non-text content and overall visual layout to decide where it should appear in Search results. The better that Google can understand your site, the better we can match it to people who are looking for your content.

There is still great debate about how much page layout is taken into account.  The page layout algorithm was released many years ago to penalize content pushed well below the fold in order to increase the odds that a visitor would click on an advertisement appearing above the fold instead.  But with more traffic moving to mobile, and the addition of mobile-first indexing, above-the-fold versus below-the-fold layout seemingly became less important.

When it comes to page layout and mobile first, Google says:

Don’t let ads harm your mobile page ranking. Follow the Better Ads Standard when displaying ads on mobile devices. For example, ads at the top of the page can take up too much room on a mobile device, which is a bad user experience.

But in How Google Search Works, Google is specifically calling attention to the “overall visual layout” with “where it should appear in Search results.”

It also brings attention to “non-text” content.  While the most obvious example is image content, the reference is quite open ended.  Could this refer to OCR as well, which we know Google has been dabbling in?

Improving Your Crawling

Google has also significantly expanded the “to improve your site crawling” section.

Google has added this point:

Verify that Google can reach the pages on your site, and that they look correct. Google accesses the web as an anonymous user (a user with no passwords or information). Google should also be able to see all the images and other elements of the page to be able to understand it correctly. You can do a quick check by typing your page URL in the Mobile-Friendly test tool.

This is a good point: many new site owners end up accidentally blocking Googlebot from crawling, or don’t realize their site is set to be viewable only by logged-in users.  It makes clear that site owners should try viewing their site without being logged into it, to see whether there are any unexpected accessibility or other issues that aren’t noticeable when logged in as an admin or high-level user.

Also recommending site owners check their site via the Mobile-Friendly testing tool is good, since even seasoned SEOs use the tool to quickly see if there are Googlebot specific issues with how Google is able to see, render and crawl a specific webpage – or a competitor’s page.

Google expanded their specific note about submitting a single page to the index.

If you’ve created or updated a single page, you can submit an individual URL to Google. To tell Google about many new or updated pages at once, use a sitemap.

Previously, it just mentioned submitting changes to a single page using the submit URL tool.  This clarifies for those who are newer to SEO that they do not need to submit every single new or updated page to Google individually, and that using sitemaps is the best way to do that.  There have definitely been new site owners who add each page to Google using that tool because they don’t realize sitemaps are a thing.  Part of this is that WordPress is such a prevalent way to create a new website, yet it does not have native support for sitemaps (yet), so site owners need to either install a dedicated sitemaps plugin or use one of the many SEO tool plugins that offer sitemaps as a feature.
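For site owners who are new to this, a sitemap is just an XML file listing the URLs you want Google to know about; a minimal example (with placeholder URLs) looks like the snippet below, and most sitemap plugins generate the equivalent automatically.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2020-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/new-or-updated-page</loc>
    <lastmod>2020-06-15</lastmod>
  </url>
</urlset>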

The new wording also covers newly created pages, instead of just the previous reference to “changes to a single page.”

Google has also changed the section about asking Google to crawl only one page.  It now references what Google views as a “small site”: according to Google, a smaller site is one with fewer than 1,000 pages.

Google also stresses the importance of a strong navigation structure, even for sites it considers “small.”  It says site owners of small sites can just submit their homepage to Google, “provided that Google can reach all your other pages by following a path of links that start from your homepage.”

With so many sites being on WordPress, it is less likely that there will be random orphaned pages that are not accessible by following links from the homepage.  But depending on the specific WordPress theme used, there can sometimes be orphaned pages that were created but never manually added to the pages menu.  In these cases, if a sitemap is used as well, those pages shouldn’t be missed even if they are not directly linked from the homepage.

In the “get your page linked to by another page” section, Google has added that “links in advertisements, links that you pay for in other sites, links in comments, or other links that don’t follow the Google Webmaster Guidelines won’t be followed by Google.”  A small change, but Google is making it clear that this is Google-specific: those links won’t be followed by Google, but they might be followed by other search engines.

But perhaps the most telling part of this is at the end of the crawling section, Google adds:

Google doesn’t accept payment to crawl a site more frequently, or rank it higher. If anyone tells you otherwise, they’re wrong.

Scammy SEO companies have long guaranteed first-position rankings on Google, promised to increase rankings, or required payment to submit a site to Google.  And with the ambiguous Google Partner badge for AdWords, many use the Google Partners badge to imply they are certified by Google for SEO and organic ranking purposes.  That said, most of those reading How Search Works are probably already aware of this.  But it is nice to see Google put this in writing again, for times when SEOs need to prove to clients that there is no “pay to win” option outside of AdWords, or simply to show someone who might be falling for a scammy SEO company’s claims about Google rankings.

The Long Version

Google then gets into what they call the “long version” of How Google Search Works, with more details on the above sections, covering more nuances that impact SEO.

Crawling

Google has changed how they refer to the “algorithmic process”.  Previously, it stated “Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often and how many pages to fetch from each site.”  Curiously, they removed the reference to “computer programs”, which provoked the question about which computer programs exactly Google was using.

The new updated version simply states:

Googlebot uses an algorithmic process to determine which sites to crawl, how often, and how many pages to fetch from each site.

Google also updated the wording for the crawl process, changing “augmented with sitemap data” to “augmented by sitemap data.”

Google also changed its reference to Googlebot “detecting” links on a page to “finding” links, and changed Googlebot visiting “each of these websites” to the much more specific “page”.  The second change makes the description more accurate and specific for webmasters, since Google visiting a website won’t necessarily mean it crawls all links on all pages.

Previously it read:

As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl.

Now it reads:

When Googlebot visits a page it finds links on the page and adds them to its list of pages to crawl.

Google has added a new section about using Chrome to crawl:

During the crawl, Google renders the page using a recent version of Chrome. As part of the rendering process, it runs any page scripts it finds. If your site uses dynamically-generated content, be sure that you follow the JavaScript SEO basics.

By referencing a recent version of Chrome, this addition is clarifying the change from last year where Googlebot was finally upgraded to the latest version of Chromium for crawling, an update from Google only crawling with Chrome 41 for years.

Google also notes it runs “any page scripts it finds,” and advises site owners to be aware of possible crawl issues as a result of using dynamically-generated content with the use of JavaScript, specifying that site owners should ensure they follow their JavaScript SEO basics.

Google also details the primary and secondary crawls, something that has caused much confusion since Google first revealed them, and the details in this How Google Search Works document describe them differently than some SEOs previously interpreted them.

Here is the entire new section for primary and secondary crawls:

Primary crawl / secondary crawl

Google uses two different crawlers for crawling websites: a mobile crawler and a desktop crawler. Each crawler type simulates a user visiting your page with a device of that type.

Google uses one crawler type (mobile or desktop) as the primary crawler for your site. All pages on your site that are crawled by Google are crawled using the primary crawler. The primary crawler for all new websites is the mobile crawler.

In addition, Google recrawls a few pages on your site with the other crawler type (mobile or desktop). This is called the secondary crawl, and is done to see how well your site works with the other device type.

In this section, Google refers to primary and secondary crawls as being specific to its two crawlers: the mobile crawler and the desktop crawler.  Many SEOs think of primary and secondary crawling in reference to Googlebot making two passes over a page, where JavaScript is rendered on the secondary crawl.  So while Google clarifies its use of desktop and mobile Googlebots, the language here can cause confusion for those who use these terms in the JavaScript sense.  To be clear, Google’s reference to primary and secondary crawls has nothing to do with JavaScript rendering; it refers only to how Google uses both mobile and desktop Googlebots to crawl and check a page.

What Google is clarifying in this specific reference to primary and secondary crawl is that Google is using two crawlers – both mobile and desktop versions of Googlebot – and will crawl sites using a combination of both.

Google did specifically state that new websites are crawled with the mobile crawler in its “Mobile-First Indexing Best Practices” document as of July 2019.  But this is the first time it has made an appearance in the How Google Search Works document.

Google does go into more detail about how it uses both the desktop and mobile Googlebots, particularly for sites that are currently considered mobile first by Google.  It wasn’t clear just how much Google was checking desktop versions of sites if they were mobile first, and there have been some who have tried to take advantage of this by presenting a spammier version to desktop users, or in some cases completely different content.  But Google is confirming it is still checking the alternate version of the page with their crawlers.

So sites that are mobile first will see some of their pages crawled with the desktop crawler.  However, it still isn’t clear how Google handles cases where they are vastly different, especially when done for spam reasons, as there doesn’t seem to be any penalty for doing so, aside from a possible spam manual action if it is checked or a spam report is submitted.  And this would have been a perfect opportunity to be clearer about how Google will handle pages with vastly different content depending on whether it is viewed on desktop or on mobile.  Even in the mobile friendly documents, Google only warns about ranking differences if content is on the desktop version of the page but is missing on the mobile version of the page.

How does Google find a page?

Google has removed this section entirely from the new version of the document.

Here is what was included in it:

How does Google find a page?

Google uses many techniques to find a page, including:

  • Following links from other sites or pages
  • Reading sitemaps

It isn’t clear why Google removed this specifically.  It is slightly redundant, but it was also missing the option of submitting a URL.

Improving Your Crawling

Google makes the use of hreflang a bit clearer by providing more detail, especially for those who might just be learning what hreflang is and how it works.

Formerly it said “Use hreflang to point to alternate language pages.”  Now it states “Use hreflang to point to alternate versions of your page in other languages.”

Not a huge change, but a bit clearer.
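For anyone still learning hreflang, the annotation itself is a set of link elements in the head of each language version, with every version listing itself and its alternates; the URLs below are placeholders.

<!-- The same set of tags appears on each language version of the page -->
<link rel="alternate" hreflang="en" href="https://www.example.com/page" />
<link rel="alternate" hreflang="de" href="https://www.example.com/de/page" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/page" />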

Google has also added two new points, providing more detail about ensuring Googlebot is able to access everything on the page, not just the textual content.

First, Google added:

Be sure that Google can access the key pages, and also the important resources (images, CSS files, scripts) needed to render the page properly.

So Google is stressing the importance of ensuring it can access all the important content.  It is also specifically calling attention to other types of elements on the page that it needs access to in order to properly crawl the page, including images, CSS, and scripts.  Webmasters who went through the whole “mobile first indexing” launch are fairly familiar with issues around blocked files, especially CSS and scripts, which some CMSes blocked Googlebot from crawling by default.
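A quick place to check is the site’s robots.txt. The hypothetical before-and-after below shows the kind of rule that causes the problem and a safer alternative; the paths are examples only.

# Problematic: Googlebot cannot fetch the CSS and scripts needed to render the page
User-agent: *
Disallow: /assets/css/
Disallow: /assets/js/

# Safer: keep private areas blocked, but leave rendering resources crawlable
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php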

But newer site owners might not realize this is possible, or that they might be doing it.  It would have been nice to see Google add specific information on how those newer to SEO can check for this, particularly for those who also might not be clear on what exactly “rendering” means.

Google also added:

Confirm that Google can access and render your page properly by running the URL Inspection tool on the live page.

Here Google does add specific information about using the URL Inspection tool to see what site owners are blocking, or which content is causing issues when Google tries to render it.  I think these last two new points could have been combined and made slightly clearer about how site owners can use the tool to check for all these issues.

Indexing

Google has made significant changes to this section as well, and it starts off by making major changes to the first paragraph.  Here is the original version:

Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, we process information included in key content tags and attributes, such as title tags and alt attributes.

The updated version now reads:

Googlebot processes each page it crawls in order to understand the content of the page. This includes processing the textual content, key content tags and attributes, such as title tags and alt attributes, images, videos, and more.

Google no longer states it processes pages to “compile a massive index of all the words it sees and their location on each page.”  This was always a curious way for them to call attention to the fact they are simply indexing all words it comes across and their position on a page, when in reality it is a lot more complex than that.  So it definitely clears that up.

They have also added that they are processing “textual content,” which is basically calling attention to the fact that Google indexes the words on the page, something that was assumed by everyone.  But it does differentiate that from the new addition later in the paragraph regarding images, videos, and more.

Previously, Google simply made reference to attributes such as title and alt tags and attributes.  But now it is getting more granular, specifically referring to “images, videos and more.”  However, this does mean Google is considering images, videos and “more” to understand the content on the page, which could affect rankings.

Improving your Indexing

Google changed “read our SEO guide for more tips” to “Read our basic SEO guide and advanced user guide for more tips.”

What is a document?

Google has added a massive section here called “What is a document?”  It talks specifically about how Google determines what is a document, but also includes details about how Google views multiple pages with identical content as a single document, even with different URLs, and how it determines canonicals.

First, here is the first part of this new section:

What is a “document”?

Internally, Google represents the web as an (enormous) set of documents. Each document represents one or more web pages. These pages are either identical or very similar, but are essentially the same content, reachable by different URLs. The different URLs in a document can lead to exactly the same page (for instance, example.com/dresses/summer/1234 and example.com?product=1234 might show the same page), or the same page with small variations intended for users on different devices (for example, example.com/mypage for desktop users and m.example.com/mypage for mobile users).

Google chooses one of the URLs in a document and defines it as the document’s canonical URL. The document’s canonical URL is the one that Google crawls and indexes most often; the other URLs are considered duplicates or alternates, and may occasionally be crawled, or served according to the user request: for instance, if a document’s canonical URL is the mobile URL, Google will still probably serve the desktop (alternate) URL for users searching on desktop.

Most reports in Search Console attribute data to the document’s canonical URL. Some tools (such as the Inspect URL tool) support testing alternate URLs, but inspecting the canonical URL should provide information about the alternate URLs as well.

You can tell Google which URL you prefer to be canonical, but Google may choose a different canonical for various reasons.

So the tl;dr is that Google will view pages with identical or near-identical content as the same document, regardless of how many of them there are.  Seasoned SEOs know this as internal duplicate content.

Google also states that when it determines pages are duplicates, they may not be crawled as often.  This is important to note for site owners who are working to de-duplicate content that Google is treating as duplicate.  It makes it more important to submit those URLs to be recrawled, or to give the newly de-duplicated pages links from the homepage, in order to ensure Google recrawls and indexes the new content and de-dupes them properly.

It also brings up an important note about desktop versus mobile: Google will still likely serve the desktop version of a page instead of the mobile version to desktop users when a site has two different URLs for the same page, where one is designed for mobile users and the other for desktop.  While many websites have switched to serving the same URL and content for both using responsive design, some sites still run two completely different sites and URL structures for desktop and mobile users.

Google also mentions that you can tell Google which URL you prefer it to use as the canonical, but states it can choose a different URL “for various reasons.”  While Google doesn’t detail why it might choose a different canonical than the one the site owner specifies, it is usually due to http versus https, whether a page is included in a sitemap, page quality, the pages appearing to be completely different and thus not candidates for canonicalization, or significant incoming links to the non-canonical URL.
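For reference, the signal itself is a single link element in the head of each duplicate or alternate URL; in the separate-mobile-URL case described above, the mobile page points its canonical at the desktop page and the desktop page declares the mobile alternate. The URLs below are placeholders.

<!-- On https://m.example.com/mypage (mobile version) -->
<link rel="canonical" href="https://www.example.com/mypage" />

<!-- On https://www.example.com/mypage (desktop version) -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="https://m.example.com/mypage" />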

Google has also included definitions for many of the terms used by SEOs and in Google Search Console.

Document: A collection of similar pages. Has a canonical URL, and possibly alternate URLs, if your site has duplicate pages. URLs in the document can be from the same or different organization (the root domain, for example “google” in www.google.com). Google chooses the best URL to show in Search results according to the platform (mobile/desktop), user language‡ or location, and many other variables. Google discovers related pages on your site by organic crawling, or by site-implemented features such as redirects or tags. Related pages on other organizations can only be marked as alternates if explicitly coded by your site (through redirects or link tags).

Again, Google is talking about the fact that a single document can encompass more than a single URL, as Google considers a single document to potentially include many duplicate or near-duplicate pages as well as pages assigned via canonical.  Google specifically mentions “alternates” that appear on other sites, which can only be considered alternates if the site owner explicitly codes them as such.  And Google will choose the best URL from within the document to show.

But it fails to mention that Google can also consider pages on other sites to be duplicates and will not show those duplicates, even though they aren’t from the same site, something site owners see happen frequently when someone steals content and the stolen version sometimes ranks over the original.

There was a notation added for the above, dealing with hreflang.

Pages with the same content in different languages are stored in different documents that reference each other using hreflang tags; this is why it’s important to use hreflang tags for translated content.

Google shows that it doesn’t include identical content under the same “document” when it is simply in a different language, which is interesting.  But Google is stressing the importance of using hreflang in these cases.

URL: The URL used to reach a given piece of content on a site. The site might resolve different URLs to the same page.

Pretty self-explanatory, although it does reference the fact that different URLs can resolve to the same page, presumably via redirects or aliases.

Page: A given web page, reached by one or more URLs. There can be different versions of a page, depending on the user’s platform (mobile, desktop, tablet, and so on).

Also pretty self-explanatory, bringing up the specific point that visitors can be served different versions of the same page, such as when viewing it on a mobile device versus a desktop computer.

Version: One variation of the page, typically categorized as “mobile,” “desktop,” and “AMP” (although AMP can itself have mobile and desktop versions). Each version can have a different URL (example.com vs m.example.com) or the same URL (if your site uses dynamic serving or responsive web design, the same URL can show different versions of the same page) depending on your site configuration. Language variations are not considered different versions, but different documents.

This simply clarifies in greater detail the different versions of a page, and how Google typically categorizes them as “mobile,” “desktop,” and “AMP.”

Canonical page or URL: The URL that Google considers as most representative of the document. Google always crawls this URL; duplicate URLs in the document are occasionally crawled as well.

Google states here again that non-canonical pages are not crawled as often as the main canonical that a site owner assigns to a group of pages.  Google does not specifically mention here that it sometimes chooses a different page as the canonical, even when a specific page has been designated as such.

Alternate/duplicate page or URL: The document URL that Google might occasionally crawl. Google also serves these URLs if they are appropriate to the user and request (for example, an alternate URL for desktop users will be served for desktop requests rather than a canonical mobile URL).

The key takeaway here is that Google “might” occasionally crawl the site’s duplicate or alternate page.  And here Google stresses that it will serve these alternate URLs “if they are appropriate.”  It is unfortunate it doesn’t go into greater detail about why it might serve these pages instead of the canonical, outside of the mention of desktop versus mobile, as we have seen many cases where Google picks a page other than the canonical to show, for a myriad of reasons.

Google also fails to mention how this impacts duplicate content found on other sites, as we do know Google will crawl those less often as well.

Site: Usually used as a synonym for a website (a conceptually related set of web pages), but sometimes used as a synonym for a Search Console property, although a property can actually be defined as only part of a site. A site can span subdomains (and even domains, for properly linked AMP pages).

It is interesting to note here what Google considers a website (a conceptually related set of webpages) and how it relates to a Google Search Console property, as “a property can actually be defined as only part of a site.”

Google also makes mention that AMP pages, which can technically appear on a different domain, are considered part of the main site.

Serving Results

Google has made a pretty interesting change here regarding its ranking factors.  Previously, Google stated:

Relevancy is determined by over 200 factors, and we always work on improving our algorithm.

Google has now replaced this “over 200 factors” figure with a less specific one.

Relevancy is determined by hundreds of factors, and we always work on improving our algorithm.

The “over 200 factors” figure in How Google Search Works dates back to 2013, when the document was launched, although at that time it also made reference to PageRank (“Relevancy is determined by over 200 factors, one of which is the PageRank for a given page”), which Google removed when it redesigned the document in 2018.

While Google doesn’t go into specifics on the number anymore, it can be assumed that a significant number of ranking factors have been added since 2013, when the figure was first claimed in this document.  But I am sure some SEOs will be disappointed that we don’t get a shiny new number like “over 500” ranking factors to obsess over.

Final Thoughts

There are some pretty significant changes made to this document that SEOs can gain a bit of insight from.

Google’s description of what it considers a document, and how it relates to other identical or near-identical pages on a site, is interesting, as is Google’s crawling behavior towards the pages within a document that it considers alternates.  While this behavior has often been noted, it is more concrete information on how site owners should handle duplicate and near-duplicate pages, particularly when they are trying to differentiate those pages so they are crawled and indexed as their own documents.

They added a lot of useful advice for newer site owners – things such as checking a site without being logged in and how to submit both pages and sites to Google – which is particularly helpful with so many new websites coming online this year due to the global pandemic.

The mention of what Google considers a “small site” is interesting because it gives a more concrete reference point for how Google sees large versus small sites.  For some, a small site could mean under 30 pages, with the idea of a site with millions of pages being unfathomable.  And the reinforcement of strong navigation, even for “small sites,” is useful ammunition for dealing with site owners and clients who might push for navigation that is more aesthetic than practical, both for usability and for SEO.

The primary and secondary crawl additions will probably cause some confusion for those who think of primary and secondary in terms of how Google processes scripts on a page when it crawls it.  But it is nice to have more concrete information on how and when Google will crawl using the alternate version of Googlebot for sites that are usually crawled with either the mobile Googlebot or the desktop one.

Lastly, the change from “over 200 ranking factors” to a less specific, but presumably much higher, number will disappoint some SEOs who liked having a specific count of potential ranking factors to point to.

[Source: This article was published in thesempost.com By JENNIFER SLEGG - Uploaded by the Association Member: Barbara larson]

Categorized in Search Techniques

I haven't been the biggest fan of Google Images since it removed direct image links, but the service has been working on a few useful features behind the scenes. Starting this week, contextual information about images will appear when you tap on them, similar to what you would get from regular web searches.

"When you search for an image on mobile in the U.S.," Google wrote in a blog post, "you might see information from the Knowledge Graph related to the result. That information would include people, places or things related to the image from the Knowledge Graph’s database of billions of facts, helping you explore the topic more."


Unlike with web searches, Images can display multiple Knowledge Graph information panels for a single result. Google says the feature combines data from the web page and Google Lens-style deep learning to determine what information to display.

The feature is going live in the Google Android app, as well as the mobile web version of Google Images.

[Source: This article was published in androidpolice.com By Corbin Davenport - Uploaded by the Association Member: Jeremy Frink]

Categorized in Search Engine

There are more than 15 billion stolen account credentials circulating on criminal forums within the dark web, a new study has revealed.

Researchers at cyber security firm Digital Shadows discovered usernames, passwords and other login information for everything from online bank accounts, to music and video streaming services.

The majority of exposed credentials belong to consumers rather than businesses, the researchers found, resulting from hundreds of thousands of data breaches.

Unsurprisingly, the most expensive credentials for sale were those for bank and financial services. The average listing for these was £56 on the dark web – a section of the internet notorious for criminal activity that is only accessible using specialist software.

“The sheer number of credentials available is staggering,” said Rick Holland, CISO at Digital Shadows.

“Some of these exposed accounts can have (or have access to) incredibly sensitive information. Details exposed from one breach could be re-used to compromise accounts used elsewhere.”

Mr Holland said that his firm had alerted its customers to around 27 million credentials over the past one-and-a-half years that could directly affect them.

The number of stolen credentials has risen by more than 300 per cent since 2018, due to a surge in data breaches. An estimated 100,000 separate breaches have taken place over the last two years.

Among the credentials for sale were those that granted access to accounts within organisations, with usernames containing the word "invoice" or "invoices" among the most popular listings.

Digital Shadows said it was unable to confirm the validity of the data that the vendors purport to own without purchasing it. The researchers said that listings included those for large corporations and government organisations in multiple countries.

Security experts advise internet users to use individual passwords for each online service that they use, while also adopting measures like two-factor authentication where possible.

Online tools like HaveIBeenPwned can also indicate whether a person's email address has been compromised in a major data breach.
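
For readers who want to automate this kind of check for passwords, the related Pwned Passwords service offers a free range-search API built around k-anonymity: you send only the first five characters of the password’s SHA-1 hash and match the returned suffixes locally, so the password itself never leaves your machine. Here is a minimal sketch using Python’s standard library (the sample password is only an example; checking whether an email address appears in a breach is a separate, keyed API):

    # Minimal sketch: check a password against the HaveIBeenPwned
    # "Pwned Passwords" range API using k-anonymity (only the first five
    # characters of the SHA-1 hash are ever sent over the network).
    import hashlib
    from urllib.request import Request, urlopen

    def pwned_count(password: str) -> int:
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        req = Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "pwned-check-example"},
        )
        with urlopen(req, timeout=10) as resp:
            body = resp.read().decode("utf-8")
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    if __name__ == "__main__":
        hits = pwned_count("password123")  # example only; never hard-code real passwords
        print(f"Seen in {hits} known breaches" if hits else "Not found in known breaches")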

 [Source: This article was published in independent.co.uk By Brien Posey - Uploaded by the Association Member: Anthony Cuthbertson]

Categorized in Internet Privacy

It’s safe to say that nearly everyone is familiar with Internet search engines. Most of us use a search engine such as Google, Duck Duck Go, or Bing daily. As helpful as these and other search engines might be, however, they don’t always give users the answers that they need. This is especially true if the user is searching for something specific to your organization. Unless the item that the user is looking for happens to be on your organization’s public-facing web page, a search engine like Google is unlikely to be able to point the user to whatever it is that they happen to be looking for.

The solution to this problem is to deploy an enterprise search engine. An enterprise search engine is similar to the search engines that we are all familiar with, except that it is configured to search within your organization’s resources. While there are third-party options for setting up an enterprise search engine, Microsoft includes a search engine with its productivity cloud suite, Microsoft 365.

Using Microsoft Search in Microsoft 365

Microsoft Search is enabled by default for Microsoft 365 users. Even so, most users probably don’t even realize that they can search across the organization’s online resources.

Before a user can use Microsoft Search, they must be logged into Microsoft 365. Once logged in, the user needs only to open their browser, go to Bing.com, and enter the search query. Upon doing so, Bing will return both public and private search results.

If you look at the image below, you will see Bing’s results page (I have hidden the search query and the search results for privacy reasons). The main thing that I wanted to point out in the figure is that there are a series of tabs just beneath the search bar. These tabs include things like All, Work, Images, Videos, and more. The Work tab, which has a lock icon next to it, is where users need to go if they are looking for non-public information. As you might have noticed in the figure, a Bing search can return a variety of document types, and users can filter the search results by People, Groups, Sites, Files, and Conversations. In case you are wondering, the search results only include documents that the user has access to.

Bing can surface search results that are specific to your organization
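
For organizations that want to run the same kind of search programmatically rather than through Bing, Microsoft Graph exposes a Microsoft Search endpoint. The sketch below is a minimal illustration, not something covered in the article: it assumes you have already obtained an OAuth access token with the appropriate Graph search permissions (token acquisition is not shown), and it queries the /search/query endpoint for files the signed-in user can access.

    # Minimal sketch: query the Microsoft Search API (Microsoft Graph) for
    # driveItem results. ACCESS_TOKEN is a placeholder; acquiring a delegated
    # token with the appropriate Graph search permissions is not shown here.
    import json
    from urllib.request import Request, urlopen

    ACCESS_TOKEN = "<delegated-access-token>"  # placeholder

    def search_files(query_string: str):
        payload = {
            "requests": [
                {
                    "entityTypes": ["driveItem"],
                    "query": {"queryString": query_string},
                    "from": 0,
                    "size": 10,
                }
            ]
        }
        req = Request(
            "https://graph.microsoft.com/v1.0/search/query",
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {ACCESS_TOKEN}",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        with urlopen(req, timeout=30) as resp:
            return json.loads(resp.read().decode("utf-8"))

    print(json.dumps(search_files("quarterly invoice"), indent=2))

As with the Bing experience described above, results returned this way are limited to content the signed-in user has permission to see.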

Search limitations

As you would probably expect, Microsoft gives you some control over the types of results generated by Microsoft Search. I’m going to show you some of the more useful configuration options. Before I do, though, I want to mention what I consider to be Microsoft Search’s most significant limitation.

As previously mentioned, Microsoft Search is built into Microsoft 365. This means that Microsoft Search’s results only include whatever data Microsoft 365 is aware of. For example, Microsoft Search isn’t going to be able to return results related to your company’s billing application unless that application is somehow tied to Microsoft 365.


In my opinion, the most significant limitation associated with using Microsoft Search is that the search engine does not index your file servers. It assumes that most of your file data reside in SharePoint Online. The only way that Microsoft Search can index files stored on-premises is if you have a hybrid SharePoint deployment and the files that you want to index are stored within SharePoint.

Configuring Microsoft Search

In any organization, there are certain things that users will inevitably end up searching for. These questions will differ from one organization to the next, but here are a few examples:

  • How do I request time off?
  • How many vacation days do I get per year?
  • How can I access email from my phone?
  • What is the company’s mailing address?

One of the really nice things about Microsoft Search is that you don’t have to wonder if it will be able to figure out the answers to these types of questions. You can create a list of the questions that users are likely to ask and give Microsoft Search the answers to those questions.

To populate Microsoft Search with questions and answers, begin by logging into Microsoft 365 and go to the Microsoft 365 Admin Center. Next, click on the All Admin Centers option, followed by Microsoft Search. You can see what this looks like in the screenshot below.

Click on All Admin Centers, followed by Microsoft Search

When the Microsoft Search interface appears, click on the Answers tab, and then click on Q&A. Now, click the Add link, shown in the figure below.

Select the Q&A tab and then click the Add link

You can see what the Add Q&A screen looks like below. To create a question and answer, you will need to type the question into the Title field. Keep in mind that this field has a 60-character limit, so you will need to keep the question short. You will also need to type the answer to the question into the Answer Description field. You can also optionally link to a specific URL. When you are done, click the Publish button.

This is what the Add Q&A screen looks like

When creating a Q&A, you have the option of including a link to a web page, but the goal is to include the answer to the question in the Answer Description field. If you would rather not have to type the answers to questions, you can create a bookmark instead. A bookmark tells Microsoft Search to point to a specific URL when a user enters certain keywords. If you look at the figure below, for example, you can see that the keywords Office Admin Portal are linked to the URL for the Office 365 Admin Center.

Bookmarks associate keywords with URLs

To create a bookmark, just go to the Bookmark tab and click on the Add button. This will cause Microsoft 365 to display the Add Bookmark screen, shown in the figure below. As you can see in the figure, you will need to provide a title for the bookmark that you are creating as well as the bookmark URL and one or more keywords. You can also include an optional bookmark description of up to 300 characters.

This is what it looks like when you create a bookmark

Microsoft Search and Microsoft 365: A bounty of possibilities

As you begin populating Microsoft Search, most of your manual additions will likely be bookmarks or questions and answers. Even so, there are other types of data that you can add. For example, you can include floor plans and information about your company’s locations. If your organization uses a lot of acronyms internally, you can add those to the search interface as well.

[Source: This article was published in techgenix.com By Brien Posey - Uploaded by the Association Member: Joshua Simon]

Categorized in Search Engine