Using Modern SEO to Build Brand Authority

Posted by kaiserthesage

The technology behind search engines’ ability to identify and understand web entities is gradually converging with the way real people perceive brands from a traditional marketing perspective.

The emphasis on E-A-T (expertise, authoritativeness, trustworthiness) in Google’s recently updated Search Quality Rating Guidelines shows that search engines are shifting towards brand-related metrics to identify sites/pages that deserve to be more visible in search results.

Online branding, or authority building, is quite similar to the traditional SEO practices that many of us have already become accustomed to.

Building a stronger brand presence online and improving a site’s search visibility both require two major processes: the things you implement on the site and the things you do outside of the site.

This is where several of the more advanced aspects of SEO can blend perfectly with online branding when implemented the right way. In this post, I’ll use some examples from my own experience to show you how.

Pick a niche and excel

Building on your brand’s topical expertise is probably the fastest way to go when you’re looking to build a name for yourself or your business in a very competitive industry.

There are a few reasons why:

  • Proving your field expertise in one or two areas of your industry can be a strong unique selling point (USP) for your brand.
  • It’s easier to expand and delve into the deeper and more competitive parts of your industry once you’ve already established yourself as an expert in your chosen field.
  • Obviously, search engines favour brands known to be experts in their respective fields.

Just to give a brief example, when I started blogging back in 2010, I was all over the place. Then, a few months later, I decided to focus on one specific area of SEO—link building—and wrote dozens of guides on how I do it.

By building my blog’s brand identity into a prime destination for link building tutorials, I found it a lot easier to sell my ideas on the other aspects of inbound marketing to my continuously growing audience (from technical SEO to social media, content marketing, email marketing, and more).

Strengthening your brand starts with the quality of your brand’s content, whether it’s your product/service or the plethora of information available on your website.

You can start by assessing the categories where you’re getting the most traction in terms of natural link acquisitions, social shares, conversions, and/or sales.

Prioritize your content development efforts on the niche where your brand can genuinely compete and will have a better fighting chance to dominate the market. It’s the smartest way to stand out and scale, especially when you’re still in your campaign’s early stages.
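To make that assessment concrete, here’s a minimal sketch (my own illustration, not from any specific tool) that aggregates per-post metrics by category and ranks categories by a simple traction score. The field names and score weights are assumptions; swap in whatever your analytics export actually provides.

```python
# Rank content categories by traction, given per-post metrics.
# Weights in score() are arbitrary placeholders - tune them to your goals.
from collections import defaultdict

def rank_categories(posts):
    """posts: iterable of dicts with 'category', 'links', 'shares', 'conversions'."""
    totals = defaultdict(lambda: {"links": 0, "shares": 0, "conversions": 0})
    for p in posts:
        t = totals[p["category"]]
        for key in ("links", "shares", "conversions"):
            t[key] += p.get(key, 0)

    def score(t):
        # Hypothetical weighting: links and conversions count more than shares.
        return t["links"] * 3 + t["shares"] + t["conversions"] * 5

    return sorted(totals.items(), key=lambda kv: score(kv[1]), reverse=True)
```

Feeding it rows exported from your analytics tools gives you a ranked list of the niches where your content is already gaining traction.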

Optimize for semantic search and the Knowledge Graph

In the past, most webmasters and publishers relied on generic keywords/terms to optimize their website’s content, making it easier for search engines to understand what their pages were about.

But now, while the continuously evolving technologies behind search may seem to make optimization more complicated, they may in fact reward those who pursue high-level, trustworthy marketing efforts with greater visibility in the search results.

These technologies and factors for determining relevance—which include entity recognition and disambiguation (ERD), structured data or schema markup, natural language processing (NLP), phrase-based indexing for co-occurrence and co-citation, concept matching, and a lot more—are all driven by branding campaigns and by how an average human would normally find, talk, or ask about a certain thing.

Easily identifiable brands will surely win in this type of setup.

Where to start? See if Google already knows what your brand is about.

How to optimize your site for the Knowledge Graph and at the same time build it as an authority online

1. Provide the best and the most precise answers to the “who, what, why, and how” queries that people might look for in your space.

Razvan Gavrilas did an extensive study on how Google’s Answer Boxes work. Getting listed in the answer box will not just drive more traffic and conversions to a business, but can also help position a brand on a higher level in its industry.

But of course, getting one of your entries placed in Google’s answer boxes for certain queries will also require other authority signals (like natural links, domain authority, etc.).

Here’s what search crawlers typically look for when evaluating whether a page’s content is appropriate to be displayed in the answer boxes (according to Razvan’s post):

  • If the page selected for the answer contains the question in a very similar (if not exact) form, along with the answer at a short distance from the question (repeating at least some of the words from the question), and
  • If the page selected for the answer belongs to a trustworthy website. Most of the time, if it’s not Wikipedia, it will be a site Google can consider a non-biased third party, as is the case with many “.edu” sites and news organization websites.
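As a rough sketch of the first criterion, here’s a toy checker for whether a page “echoes” a question and leaves room for a nearby answer. The overlap threshold and minimum answer length are invented for illustration; Google’s actual evaluation is far more sophisticated.

```python
# Toy heuristic: does the page repeat (most of) the question's words,
# with enough text right after it to serve as a candidate answer?
import re

def echoes_question(page_text, question, min_overlap=0.6, min_answer_words=5):
    words = re.findall(r"[a-z']+", page_text.lower())
    q_words = [w for w in re.findall(r"[a-z']+", question.lower()) if len(w) > 2]
    if not q_words:
        return False
    for i in range(len(words)):
        # Slide a window roughly the size of the question over the page.
        window = set(words[i:i + len(q_words) + 2])
        overlap = sum(w in window for w in q_words) / len(q_words)
        if overlap >= min_overlap:
            # Require some text (a candidate answer) right after the question.
            return len(words) - (i + len(q_words)) >= min_answer_words
    return False
```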

John Mueller recently mentioned that Knowledge Graph listings should not be branded, which might make you think this approach and effort will be for nothing.

But wait, just think about it—the intent alone of optimizing your content for Google’s Knowledge Graph pushes you to serve better content to your users, which is what Google rewards the most these days. So it’s still the soundest action to take if you really want to build a solid brand.

2. Clearly define your brand’s identity to your audience.

Being remarkable and being able to separate your brand from your competitors is crucial in online marketing (be it through your content or the experience people feel when they’re using your site/service/product).


Optimizing for humans through branding allows you to condition the way people will talk about you. This factor is very important when you’re aiming to get more brand mentions that will really impact your site’s SEO efforts, branding, and conversions.

The more signals search engines receive (even unlinked mentions) verifying that you’re an authority in your field, the more your brand will be trusted and the better your pages will rank in the SERPs.

3. Build a strong authorship portfolio.

Author photos/badges may have been taken down from the search results a few weeks ago, but that doesn’t mean authorship markup no longer has value.

Both Mark Traphagen and Bill Slawski have shared why authorship markup still matters. And clearly, an author’s authority will still be a viable search ranking factor, given that it enables Google to easily identify topical experts and credible documents available around the web.

It will continue to help tie entities (publishers and brands) to their respective industries, which may still accumulate scores over time based on the popularity and reception of the author’s works (AuthorRank).

This approach is a great complement to personal brand building, especially when you’re expanding your content marketing efforts’ reach through guest blogging on industry-specific blogs where you can really absorb more new readers and followers.

There’s certainly more to implement under Knowledge Graph Optimization. Here’s a short list from what AJ Kohn has already shared on his blog earlier this year, all of it still useful to this day:

  • Use entities (aka Nouns) in your writing
  • Get connected and link out to relevant sites
  • Implement Structured Data to increase entity detection
  • Use the sameAs property
  • Optimize your Google+ presence
  • Get exposure on Wikipedia
  • Edit and update your Freebase entry
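To make the structured data and sameAs items above concrete, here’s a minimal sketch that emits Organization markup as JSON-LD. The brand name and profile URLs are placeholders; this is one common way to declare the entity, not the only one.

```python
# Emit a JSON-LD <script> tag declaring an Organization entity,
# with sameAs tying it to its other known profiles on the web.
import json

def organization_jsonld(name, url, same_as):
    data = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,  # e.g., social profiles, Wikipedia page
    }
    return '<script type="application/ld+json">%s</script>' % json.dumps(data)
```

Dropping the returned tag into your pages’ `<head>` helps search engines connect your site to the rest of your brand’s footprint.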

Online branding through scalable link building

The right relationships make link building scalable.

In the past, many link builders believed it was best to have thousands of links from diversified sources, which apparently pushed a lot of early practitioners to resort to tactics focused on manually dropping links across thousands of unique domains (and spamming).

And, unfortunately, guest blogging as a link building tactic has eventually become a part of this craze.

I’ve mentioned this dozens of times before, and I’m going to say it one more time: it’s better to have multiple links from a few highly trusted sources than hundreds of one-off links from several mediocre sites.

Focus on building signals that will strongly indicate relationships, because it’s probably the most powerful off-site signal you can build out there.

When other influential entities in your space vouch for your brand (whether through links, social shares, or even unlinked brand mentions), you become part of the set of sites that search engines are most likely to trust.

It can also shape how people see your brand as an authority, when they notice that you’re trusted by other credible brands in your industry.

These relationships can also open a lot of opportunities for natural link acquisitions and lead generation, knowing that some of the most trusted brands in your space trust you.

Making all of this actionable

1. Identify and make a list of the top domains and publishers in your industry, particularly those that have high search share.

There are plenty of tools you can use to get this data, like SEMRush, Compete.com, and/or Alexa.com.

You can also use Google Search and SEOQuake to make a list of sites that are performing well in search for your industry’s head terms (given that Google is displaying better search results these days, it’s probably one of the best prospecting tools you can use).

I also use other free tools for this type of prospecting, particularly for cleaning up the list (removing duplicate domains and extracting unique hostnames, and filtering out highly authoritative sites that are clearly irrelevant to the task, such as ranking pages from Facebook, Wikipedia, and other popular news sites).
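For illustration, here’s roughly how that cleanup step might look in code: extract unique hostnames from your prospect URLs and drop domains on an (entirely example) blocklist of irrelevant authority sites.

```python
# Deduplicate prospect URLs down to unique hostnames, filtering out
# well-known sites that are irrelevant to outreach. Blocklist is illustrative.
from urllib.parse import urlparse

IRRELEVANT = {"wikipedia.org", "facebook.com", "twitter.com", "nytimes.com"}

def clean_prospects(urls, blocklist=IRRELEVANT):
    hosts, seen = [], set()
    for url in urls:
        host = (urlparse(url).hostname or "").removeprefix("www.")
        root = ".".join(host.split(".")[-2:])  # crude registrable-domain guess
        if host and host not in seen and root not in blocklist:
            seen.add(host)
            hosts.append(host)
    return hosts
```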

2. Try to penetrate at least two high-authority sites from the first 50 websites on your list—and become a regular contributor for them.

Start engaging them by genuinely participating in their existing communities.

The process shouldn’t stop with you contributing content for them on a regular basis; along the way, you can initiate collaborations, such as inviting them to publish content on your site as well.

This can help draw more traffic (and links) from their end, and can exponentially improve the perceived value of your brand as a publisher (based on your relationships with other influential entities in your industry).

These kinds of relationships will make the latter part of your link building campaign less stressful. As soon as you get to build a strong footing with your brand’s existing relationships and content portfolio (in and out of your site), it’ll be a lot easier for you to pitch and get published on other authoritative industry-specific publications (or even in getting interview opportunities).

3. Write the types of content that your target influencers are usually reading.

Stalk your target influencers on social networks, and take note of the topics/ideas (related to your industry) that interest them the most. See what type of content they usually share with their followers.

Knowing these things will give you a ton of ideas on how to approach your content development efforts effectively, and can help you come up with content ideas that are most likely to be read, shared, and linked to.

You can also go the extra mile by finding out which sites they most often link out to or use as references for their own work (use ScreamingFrog).

4. Take advantage of your own existing community (or others’ as well).

Collaborate with the people who are already participating in your brand’s online community (blog comments, social networks, discussions, etc.). Identify those who truly contribute and really add value to the discussions, and see if they run their own websites or work for a company that’s also in your industry.

Leverage these interactions, as they can form long-term relationships that benefit both parties (for instance, inviting them to write for you, writing for their blog, and/or cross-promoting each other’s work/services).

You can apply this approach to other brands’ communities as well: reach out to people who leave really smart input about your industry in other blogs’ comment sections, and ask if they’d be interested in talking or sharing more about that topic and having it published on your website instead.

Building a solid community can easily help automate link building, but more importantly, it can surely help strengthen a brand’s online presence.

Conclusion

SEO can be a tremendous help to your online branding efforts. Likewise, branding can be a tremendous help to your SEO efforts. Alignment and integration of both practices is what keeps winners winning in this game (just look at Moz).

If you liked this post or have any questions, let me know in the comments below, and you can find me on Twitter @jasonacidre.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Unraveling Panda Patterns

Posted by billslawski

This is my first official blog post at Moz.com, and I’m going to be requesting your help and expertise and imagination.

I’m going to be asking you to take over as Panda for a little while to see if you can identify the kinds of things that Google’s Navneet Panda addressed when faced with what looked like an incomplete patent created to identify sites as parked domain pages, content farm pages, and link farm pages. You’re probably better at this now than he was then.

You’re a subject matter expert.

To put things in perspective, I’m going to include some information about what appears to be the very first Panda patent, and some of Google’s effort behind what they were calling the “high-quality site algorithm.”

I’m going to then include some of the patterns they describe in the patent to identify lower-quality pages, and then describe some of the features I personally would suggest to score and rank a higher-quality site of one type.

Google’s Amit Singhal identified a number of questions he might use to assess higher-quality sites, and told us in the blog post where he listed them that it was an incomplete list, because Google didn’t want to make it easy for people to abuse the algorithm.

In my opinion though, any discussion about improving the quality of webpages is one worth having, because it can help improve the quality of the Web for everyone, which Google should be happy to see anyway.

Warning searchers about low-quality content

In “Processing web pages based on content quality,” the original patent filing for Panda, there’s a somewhat mysterious statement that makes it sound as if Google might warn searchers before sending them to a low quality search result, and give them a choice whether or not they might actually click through to such a page.

As it notes, the types of low quality pages the patent was supposed to address included parked domain pages, content farm pages, and link farm pages (yes, link farm pages):

“The processor 260 is configured to receive from a client device (e.g., 110), a request for a web page (e.g., 206). The processor 260 is configured to determine the content quality of the requested web page based on whether the requested web page is a parked web page, a content farm web page, or a link farm web page.

Based on the content quality of the requested web page, the processor is configured to provide for display, a graphical component (e.g., a warning prompt). That is, the processor 260 is configured to provide for display a graphical component (e.g., a warning prompt) if the content quality of the requested web page is at or below a certain threshold.

The graphical component provided for display by the processor 260 includes options to proceed to the requested web page or to proceed to one or more alternate web pages relevant to the request for the web page (e.g., 206). The graphical component may also provide an option to stop proceeding to the requested web page.

The processor 260 is further configured to receive an indication of a selection of an option from the graphical component to proceed to the requested web page, or to proceed to an alternate web page. The processor 260 is further configured to provide for display, based on the received indication, the requested web page or the alternate web page.”

This did not sound like a good idea.

Recently, Google announced in a post on the Google Webmaster Central blog, Promoting modern websites for modern devices in Google search results, that they would start providing warning notices for mobile versions of sites if there were issues on the pages that visitors might go to.

I imagine that, as a site owner, you might be disappointed to see such a warning notice shown to searchers about technology used on your site possibly not working correctly on a specific device. That recent blog post mentions Flash as an example of a technology that might not work correctly on some devices; for example, we know that Apple’s mobile devices and Flash don’t work well together.

That’s not a bad warning in that it provides enough information to act upon and fix to the benefit of a lot of potential visitors. :)

But imagine if you tried to visit your website in 2011, and instead of getting to the site, you received a Google warning that the page you were trying to visit was a content farm page or a link farm page, and it provided alternative pages to visit as well.

That “your website sucks” warning still doesn’t sound like a good idea. One of the inventors listed on the patent is described on LinkedIn as presently working on the Google Play store. The warning for mobile devices might have been something he brought to Google from his work on this Panda patent.

We know that when the Panda update was released, it was targeting specific types of pages that people at places such as The New York Times were complaining about, such as parked domains and content farm sites. A follow-up from the Times after the algorithm update was released puts it into perspective for us.

It wasn’t easy to know that your pages might have been targeted by that particular Google update either, or if your site was a false positive—and many site owners ended up posting in the Google Help forums after a Google search engineer invited them to post there if they believed that they were targeted by the update when they shouldn’t have been.

The wording of that invitation is interesting in light of the original name of the Panda algorithm. (Note that the thread was broken into multiple threads when Google migrated posts to new software, and many appear to have disappeared at some point.)

As we were told in the invite from the Google search engineer:

“According to our metrics, this update improves overall search quality. However, we are interested in hearing feedback from site owners and the community as we continue to refine our algorithms. If you know of a high-quality site that has been negatively affected by this change, please bring it to our attention in this thread.

Note that as this is an algorithmic change we are unable to make manual exceptions, but in cases of high quality content we can pass the examples along to the engineers who will look at them as they work on future iterations and improvements to the algorithm.

So even if you don’t see us responding, know that we’re doing a lot of listening.”

The timing for such in-SERP warnings might have been troublesome. A site that mysteriously stops appearing in search results for queries it used to rank well for might be said to have gone astray of Google’s guidelines. Instead, such a warning might be a little like the purposefully embarrassing scarlet “A” in Nathaniel Hawthorne’s novel The Scarlet Letter.

A page that shows up in search results with a warning to searchers stating that it was a content farm, or a link farm, or a parked domain probably shouldn’t be ranking well to begin with. Having Google continuing to display those results ranking highly, showing both a link and a warning to those pages, and then diverting searchers to alternative pages might have been more than those site owners could handle. Keep in mind that the fates of those businesses are usually tied to such detoured traffic.

My imagination is filled with the filing of lawsuits against Google based upon such tantalizing warnings, rather than site owners filling up a Google Webmaster Help Forum with information about the circumstances involving their sites being impacted by the upgrade.

In retrospect, it is probably a good idea that the warnings hinted at in the original Panda Patent were avoided.

Google seems to think that such warnings are appropriate now when it comes to multiple devices and technologies that may not work well together, like Flash and iPhones.

But there were still issues with how well or how poorly the algorithm described in the patent might work.

In the March 2011 interview with Google’s Head of Search Quality, Amit Singhal, and his team member Matt Cutts, Head of Web Spam at Google, titled TED 2011: The “Panda” That Hates Farms: A Q&A With Google’s Top Search Engineers, we learned of the code name Google was using for the algorithm update: “Panda,” after an engineer with that name came along and provided suggestions on patterns that could be used to identify high- and low-quality pages.

His input seems to have been pretty impactful—enough for Google to have changed the name of the update, from the “High Quality Site Algorithm” to the “Panda” update.

How the High-Quality Site Algorithm became Panda

Danny Sullivan named the update the “Farmer update,” since it supposedly targeted content farm websites. Soon afterwards, the joint interview with Singhal and Cutts identified the Panda codename, and that’s what it’s been called ever since.

Google didn’t completely abandon the name found in the original patent, the “high quality sites algorithm,” as can be seen in the titles of several of its blog posts on the update.

The most interesting of those is the “more guidance” post, in which Amit Singhal lists 23 questions about things Google might look for on a page to determine whether or not it was high-quality. I’ve spent a lot of time since then looking at those questions thinking of features on a page that might convey quality.

The original patent is at:

Processing web pages based on content quality
Inventors: Brandon Bilinski and Stephen Kirkham
Assigned to Google

US Patent 8,775,924

Granted July 8, 2014

Filed: March 9, 2012

Abstract

“Computer-implemented methods of processing web pages based on content quality are provided. In one aspect, a method includes receiving a request for a web page.

The method includes determining the content quality of the requested web page based on whether it is a parked web page, a content farm web page, or a link farm web page. The method includes providing for display, based on the content quality of the requested web page, a graphical component providing options to proceed to the requested web page or to an alternate web page relevant to the request for the web page.

The method includes receiving an indication of a selection of an option from the graphical component to proceed to the requested web page or to an alternate web page. The method further includes providing, based on the received indication, the requested web page or an alternate web page.”

The patent expands on examples of low-quality web pages, including:

  • Parked web pages
  • Content farm web pages
  • Link farm web pages
  • Default pages
  • Pages that do not offer useful content, and/or pages that contain advertisements and little else

An invitation to crowdsource high-quality patterns

This is the section I mentioned above where I am asking for your help. You don’t have to publish your thoughts on how quality might be identified, but I’m going to start with some examples.

Under the patent, a content quality value score is calculated for every page on a website based upon patterns found on known low-quality pages, “such as parked web pages, content farm web pages, and/or link farm web pages.”

For each of the patterns identified on a page, the content quality value of the page might be reduced based upon the presence of that particular pattern—and each pattern might be weighted differently.
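A toy version of that scoring scheme might look like the following, where each detected pattern subtracts its own weight from a base score. The pattern names and weights are made up for illustration; the patent doesn’t publish actual values.

```python
# Toy content-quality scorer: each low-quality pattern found on a page
# reduces its score by that pattern's (hypothetical) weight.
def content_quality_value(page_patterns, weights, base=1.0):
    """page_patterns: set of pattern names detected on the page.
    weights: dict mapping pattern name -> penalty."""
    score = base
    for pattern in page_patterns:
        score -= weights.get(pattern, 0.0)  # heavier patterns cut more
    return max(score, 0.0)  # don't go below zero
```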

Some simple patterns that might be applied to a low-quality web page might be one or more references to:

  • A known advertising network,
  • A web page parking service, and/or
  • A content farm provider

One of these references may be in the form of an IP address that the destination hostname resolves to, a Domain Name Server (“DNS server”) that the destination domain name is pointing to, an “a href” attribute on the destination page, and/or an “img src” attribute on the destination page.

That’s a pretty simple pattern, but a web page resolving to an IP address known to exclusively serve parked web pages provided by a particular Internet domain registrar can be deemed a parked web page, so it can be pretty effective.

A web page with a DNS server known to be associated with web pages that contain little or no content other than advertisements may very well provide little or no content other than advertising. So that one can be effective, too.

Some of the patterns listed in the patent don’t seem quite as useful or informative. For example, one states that a web page containing a common typographical error of a bona fide domain name is likely to be a low-quality web page, or a non-existent one. I’ve seen more than a couple of legitimate sites with common misspellings of good domains, so I’m not sure how helpful that pattern is.

Of course, some textual content is a dead giveaway, the patent tells us, with terms such as “domain is for sale,” “buy this domain,” and/or “this page is parked.”

Likewise, a web page with little or no content is probably (but not always) a low-quality web page.

This is a simple but effective pattern, even if not too imaginative:

… page providing 99% hyperlinks and 1% plain text is more likely to be a low-quality web page than a web page providing 50% hyperlinks and 50% plain text.
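That ratio is easy to compute. Here’s a sketch using only Python’s stdlib HTML parser; where you set the “too many links” threshold is a judgment call the patent doesn’t specify.

```python
# Compute the fraction of a page's visible text that sits inside <a> tags.
from html.parser import HTMLParser

class LinkTextRatio(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = 0
        self.link_chars = 0
        self.text_chars = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        n = len(data.strip())
        if self.in_link:
            self.link_chars += n
        else:
            self.text_chars += n

def link_ratio(html):
    p = LinkTextRatio()
    p.feed(html)
    total = p.link_chars + p.text_chars
    return p.link_chars / total if total else 0.0
```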

Another pattern is one that I often check and address in site audits, and it involves how functional and responsive the pages on a site are.

The determination of whether a web site is fully functional may be based on an HTTP response code, information received from a DNS server (e.g., hostname records), and/or a lack of a response within a certain amount of time. As an example, an HTTP response that is anything other than 200 (e.g., “404 Not Found”) would indicate that a web site is not fully functional.

As another example, a DNS server that does not return authoritative records for a hostname would indicate that the web site is not fully functional. Similarly, a lack of a response within a certain amount of time, from the IP address of the hostname for a web site would indicate that the web site is not fully functional.
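Those checks are straightforward to sketch with the stdlib. The pure classification rule is separated from the network calls so it can be tested on its own; the 5-second timeout is an arbitrary illustration, not a value from the patent.

```python
# Sketch of the "fully functional" checks: HTTP status, DNS resolution,
# and response time. classify_status() holds the pure rule.
import socket
import urllib.error
import urllib.request
from urllib.parse import urlparse

def classify_status(http_status, dns_resolved, timed_out):
    """Anything other than a 200, a DNS failure, or a timeout
    marks the site as not fully functional."""
    return http_status == 200 and dns_resolved and not timed_out

def is_fully_functional(url, timeout=5.0):
    host = urlparse(url).hostname
    try:
        socket.gethostbyname(host)  # DNS check
    except (socket.gaierror, TypeError):
        return classify_status(0, False, False)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify_status(resp.status, True, False)
    except socket.timeout:
        return classify_status(0, True, True)
    except urllib.error.HTTPError as e:      # e.g., 404, 500
        return classify_status(e.code, True, False)
    except urllib.error.URLError:            # refused connection, etc.
        return classify_status(0, True, False)
```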

As for user-data, sometimes it might play a role as well, as the patent tells us:

“A web page may be suggested for review and/or its content quality value may be adapted based on the amount of time spent on that page.

For example, if a user reaches a web page and then leaves immediately, the brief nature of the visit may cause the content quality value of that page to be reviewed and/or reduced. The amount of time spent on a particular web page may be determined through a variety of approaches. For example, web requests for web pages may be used to determine the amount of time spent on a particular web page.”
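Estimating time-on-page from raw web requests might look like this sketch, where the gap between a visitor’s consecutive requests approximates how long they stayed on the earlier page. The log format here is invented for the example.

```python
# Approximate average dwell time per URL from (visitor, timestamp, url) logs:
# the gap to a visitor's next request is the time spent on the earlier page.
from collections import defaultdict

def dwell_times(requests):
    """requests: iterable of (visitor_id, timestamp_seconds, url), any order."""
    by_visitor = defaultdict(list)
    for visitor, ts, url in requests:
        by_visitor[visitor].append((ts, url))
    times = defaultdict(list)
    for visits in by_visitor.values():
        visits.sort()  # order each visitor's requests by time
        for (t1, url), (t2, _) in zip(visits, visits[1:]):
            times[url].append(t2 - t1)
    return {url: sum(v) / len(v) for url, v in times.items()}
```

Note that the last page in each visit gets no estimate, which is one reason bounce-heavy pages are hard to measure this way.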

My example of some patterns for an e-commerce website

There are a lot of things that you might want to include on an ecommerce site that help to indicate that it’s high quality. If you look at the questions that Amit Singhal raised in the last Google Blog post I mentioned above, one of his questions was “Would you be comfortable giving your credit card information to this site?” Patterns that might fit with this question could include:

  • Is there a privacy policy linked to on pages of the site?
  • Is there a “terms of service” page linked to on pages of the site?
  • Is there a “customer service” page or section linked to on pages of the site?
  • Do ordering forms function fully on the site? Do they return 404 pages or 500 server errors?
  • If an order is made, does a thank-you or acknowledgement page show up?
  • Does the site use an https protocol when sending data or personally identifiable data (like a credit card number)?
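Those checklist items could be probed mechanically. Here’s a sketch that scans a page’s HTML for a few of the trust signals above; the phrases and regex patterns are my own illustrative guesses, not anything Google has published.

```python
# Scan page HTML for a few e-commerce trust signals.
import re

TRUST_PATTERNS = {
    "privacy_policy": r"privacy\s+policy",
    "terms_of_service": r"terms\s+of\s+(service|use)",
    "customer_service": r"customer\s+(service|support)",
    "https_forms": r'<form[^>]+action="https://',
}

def trust_signals(html):
    text = html.lower()
    return {name: bool(re.search(pat, text))
            for name, pat in TRUST_PATTERNS.items()}
```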

As I mentioned above, the patent tells us that the effect on a page’s content quality score might differ from one pattern to another.

The questions from Amit Singhal imply a lot of other patterns, but as SEOs who work on, build, and improve a lot of websites, this is an area where we probably have more expertise than Google’s search engineers.

What other questions would you ask if you were tasked with looking at this original Panda patent? What patterns would you suggest looking for when trying to identify high- or low-quality pages? Perhaps if we share with one another the patterns or features Google might look for algorithmically, we can build pages that won’t be interpreted as low-quality sites. I provided a few patterns for an ecommerce site above. What patterns would you suggest?

(Illustrations: Devin Holmes @DevinGoFish)

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Continue Reading →

Unraveling Panda Patterns

Posted by billslawski

This is my first official blog post at Moz.com, and I’m going to be requesting your help and expertise and imagination.

I’m going to be asking you to take over as Panda for a little while to see if you can identify the kinds of things that Google’s Navneet Panda addressed when faced with what looked like an incomplete patent created to identify sites as parked domain pages, content farm pages, and link farm pages. You’re probably better at this now then he was then.

You’re a subject matter expert.

To put things in perspective, I’m going to include some information about what appears to be the very first Panda patent, and some of Google’s effort behind what they were calling the “high-quality site algorithm.”

I’m going to then include some of the patterns they describe in the patent to identify lower-quality pages, and then describe some of the features I personally would suggest to score and rank a higher-quality site of one type.

Google’s Amit Singhal identified a number of questions about higher quality sites that he might use, and told us in the blog post where he listed those that it was an incomplete list because they didn’t want to make it easy for people to abuse their algorithm.

In my opinion though, any discussion about improving the quality of webpages is one worth having, because it can help improve the quality of the Web for everyone, which Google should be happy to see anyway.

Warning searchers about low-quality content

In “Processing web pages based on content quality,” the original patent filing for Panda, there’s a somewhat mysterious statement that makes it sound as if Google might warn searchers before sending them to a low quality search result, and give them a choice whether or not they might actually click through to such a page.

As it notes, the types of low-quality pages the patent was supposed to address included parked domain pages, content farm pages, and link farm pages (yes, link farm pages):

“The processor 260 is configured to receive from a client device (e.g., 110), a request for a web page (e.g., 206). The processor 260 is configured to determine the content quality of the requested web page based on whether the requested web page is a parked web page, a content farm web page, or a link farm web page.

Based on the content quality of the requested web page, the processor is configured to provide for display, a graphical component (e.g., a warning prompt). That is, the processor 260 is configured to provide for display a graphical component (e.g., a warning prompt) if the content quality of the requested web page is at or below a certain threshold.

The graphical component provided for display by the processor 260 includes options to proceed to the requested web page or to proceed to one or more alternate web pages relevant to the request for the web page (e.g., 206). The graphical component may also provide an option to stop proceeding to the requested web page.

The processor 260 is further configured to receive an indication of a selection of an option from the graphical component to proceed to the requested web page, or to proceed to an alternate web page. The processor 260 is further configured to provide for display, based on the received indication, the requested web page or the alternate web page.”

This did not sound like a good idea.

Recently, Google announced in a post on the Google Webmaster Central blog, Promoting modern websites for modern devices in Google search results, that they would start providing warning notices on mobile versions of sites if there were issues on the pages that visitors might go to.

I imagine that, as a site owner, you might be disappointed to see such a warning notice shown to searchers, telling them that technology used on your site might not work correctly on a specific device. That recent blog post mentions Flash as an example of a technology that might not work correctly on some devices; we know, for example, that Apple’s mobile devices and Flash don’t work well together.

That’s not a bad warning in that it provides enough information to act upon and fix to the benefit of a lot of potential visitors. :)

But imagine if you tried to visit your website in 2011, and instead of getting to the site, you received a Google warning that the page you were trying to visit was a content farm page or a link farm page, and it provided alternative pages to visit as well.

That “your website sucks” warning still doesn’t sound like a good idea. One of the inventors listed on the patent is described on LinkedIn as presently working on the Google Play store. The warning for mobile devices might have been something he brought to Google from his work on this Panda patent.

We know that when the Panda update was released, it was targeting specific types of pages that people at places such as The New York Times were complaining about, such as parked domains and content farm sites. A follow-up from the Times after the algorithm update was released puts it into perspective for us.

It wasn’t easy to know whether your pages had been targeted by that particular Google update, or whether your site was a false positive; many site owners ended up posting in the Google Help forums after a Google search engineer invited them to post there if they believed they had been targeted by the update when they shouldn’t have been.

The wording of that invitation is interesting in light of the original name of the Panda algorithm. (Note that the thread was broken into multiple threads when Google migrated posts to new software, and many appear to have disappeared at some point.)

As we were told in the invite from the Google search engineer:

“According to our metrics, this update improves overall search quality. However, we are interested in hearing feedback from site owners and the community as we continue to refine our algorithms. If you know of a high-quality site that has been negatively affected by this change, please bring it to our attention in this thread.

Note that as this is an algorithmic change we are unable to make manual exceptions, but in cases of high quality content we can pass the examples along to the engineers who will look at them as they work on future iterations and improvements to the algorithm.

So even if you don’t see us responding, know that we’re doing a lot of listening.”

The timing for such in-SERP warnings might have been troublesome. A site that mysteriously stops appearing in search results for queries it used to rank well for might be said to have run afoul of Google’s guidelines. Such a warning, though, might have been a little like the purposefully embarrassing scarlet “A” in Nathaniel Hawthorne’s novel The Scarlet Letter.

A page that shows up in search results with a warning to searchers stating that it was a content farm, or a link farm, or a parked domain probably shouldn’t be ranking well to begin with. Having Google continuing to display those results ranking highly, showing both a link and a warning to those pages, and then diverting searchers to alternative pages might have been more than those site owners could handle. Keep in mind that the fates of those businesses are usually tied to such detoured traffic.

My imagination fills with lawsuits filed against Google over such tantalizing warnings, rather than site owners filling up a Google Webmaster Help Forum with information about the circumstances of their sites being impacted by the update.

In retrospect, it is probably a good idea that the warnings hinted at in the original Panda Patent were avoided.

Google seems to think that such warnings are appropriate now when it comes to multiple devices and technologies that may not work well together, like Flash and iPhones.

But there were still issues with how well or how poorly the algorithm described in the patent might work.

In the March 2011 interview with Google’s Head of Search Quality, Amit Singhal, and his team member Matt Cutts, Google’s Head of Web Spam, titled TED 2011: The “Panda” That Hates Farms: A Q&A With Google’s Top Search Engineers, we learned the code name Google was using for the algorithm update: “Panda,” after an engineer of that name came along and provided suggestions on patterns that could be used to identify high- and low-quality pages.

His input seems to have been pretty impactful—enough for Google to have changed the name of the update, from the “High Quality Site Algorithm” to the “Panda” update.

How the High-Quality Site Algorithm became Panda

Danny Sullivan named the update the “Farmer update,” since it supposedly targeted content farm websites. Soon afterwards, the joint interview with Singhal and Cutts revealed the Panda codename, and that’s what it’s been called ever since.

Google didn’t completely abandon the name found in the original patent, the “high quality sites algorithm,” as can be seen in the titles of several Google Blog posts.

The most interesting of those is the “more guidance” post, in which Amit Singhal lists 23 questions about things Google might look for on a page to determine whether or not it was high-quality. I’ve spent a lot of time since then looking at those questions thinking of features on a page that might convey quality.

The original patent is at:

Processing web pages based on content quality
Inventors: Brandon Bilinski and Stephen Kirkham
Assigned to Google

US Patent 8,775,924

Granted July 8, 2014

Filed: March 9, 2012

Abstract

“Computer-implemented methods of processing web pages based on content quality are provided. In one aspect, a method includes receiving a request for a web page.

The method includes determining the content quality of the requested web page based on whether it is a parked web page, a content farm web page, or a link farm web page. The method includes providing for display, based on the content quality of the requested web page, a graphical component providing options to proceed to the requested web page or to an alternate web page relevant to the request for the web page.

The method includes receiving an indication of a selection of an option from the graphical component to proceed to the requested web page or to an alternate web page. The method further includes providing, based on the received indication, the requested web page or an alternate web page.”

The patent expands on examples of low-quality web pages, including:

  • Parked web pages
  • Content farm web pages
  • Link farm web pages
  • Default pages
  • Pages that do not offer useful content, and/or pages that contain advertisements and little else

An invitation to crowdsource high-quality patterns

This is the section I mentioned above where I am asking for your help. You don’t have to publish your thoughts on how quality might be identified, but I’m going to start with some examples.

Under the patent, a content quality value score is calculated for every page on a website based upon patterns found on known low-quality pages, “such as parked web pages, content farm web pages, and/or link farm web pages.”

For each of the patterns identified on a page, the content quality value of the page might be reduced based upon the presence of that particular pattern—and each pattern might be weighted differently.
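
As a rough sketch, that scoring scheme might look like the following. The pattern names and weights here are purely hypothetical; the patent doesn’t publish actual patterns or weights:

```python
# Hypothetical weighted content-quality scoring, in the spirit of the patent:
# each low-quality pattern found on a page reduces its score by that
# pattern's weight. Names and weights are illustrative only.

PATTERN_WEIGHTS = {
    "parked_domain_text": 0.50,    # e.g. "this domain is for sale"
    "known_ad_network_ref": 0.20,  # reference to a known advertising network
    "content_farm_provider": 0.30, # reference to a content farm provider
    "mostly_links": 0.15,          # page is overwhelmingly hyperlinks
}

def content_quality_value(patterns_found):
    """Start from a perfect score and subtract a weight per matched pattern."""
    score = 1.0
    for pattern in patterns_found:
        score -= PATTERN_WEIGHTS.get(pattern, 0.0)
    return max(score, 0.0)  # clamp so the score never goes below zero
```

Under these made-up weights, a page matching both the parked-domain text and the ad network reference would score 0.3, well below a clean page’s 1.0.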

Some simple patterns that might be applied to a low-quality web page might be one or more references to:

  • A known advertising network,
  • A web page parking service, and/or
  • A content farm provider

One of these references may be in the form of an IP address that the destination hostname resolves to, a Domain Name Server (“DNS server”) that the destination domain name is pointing to, an “a href” attribute on the destination page, and/or an “img src” attribute on the destination page.

That’s a pretty simple pattern, but a web page resolving to an IP address known to exclusively serve parked web pages provided by a particular Internet domain registrar can be deemed a parked web page, so it can be pretty effective.

A web page with a DNS server known to be associated with web pages that contain little or no content other than advertisements may very well provide little or no content other than advertising. So that one can be effective, too.
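
A toy version of those two reference patterns might look like this. The IP addresses and nameserver below are made-up placeholders, not real parking services:

```python
# Sketch of the patent's reference-based patterns: a page whose hostname
# resolves to an IP known to serve only parked pages, or whose DNS server
# is associated with ad-only hosting, gets flagged. Lists are hypothetical.

KNOWN_PARKING_IPS = {"203.0.113.10", "203.0.113.11"}  # registrar parking service
KNOWN_AD_ONLY_DNS = {"ns1.adparker.example"}          # ad-heavy hosting provider

def classify_by_references(resolved_ip, dns_server):
    """Classify a page using only its resolved IP and authoritative DNS server."""
    if resolved_ip in KNOWN_PARKING_IPS:
        return "parked"
    if dns_server in KNOWN_AD_ONLY_DNS:
        return "ads_only"
    return "unknown"
```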

Some of the patterns listed in the patent don’t seem quite as useful or informative. For example, one states that a web page containing a common typographical error of a bona fide domain name is likely a low-quality web page, or a non-existent one. I’ve seen more than a couple of legitimate sites at common misspellings of good domains, so I’m not too sure how helpful that pattern is.

Of course, some textual content is a dead giveaway, the patent tells us: terms such as “domain is for sale,” “buy this domain,” and/or “this page is parked.”

Likewise, a web page with little or no content is probably (but not always) a low-quality web page.

This is a simple but effective pattern, even if not too imaginative:

… page providing 99% hyperlinks and 1% plain text is more likely to be a low-quality web page than a web page providing 50% hyperlinks and 50% plain text.
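
That ratio is easy to compute. Here’s one way to sketch it with Python’s standard HTML parser; what threshold counts as “too many links” would be up to you:

```python
from html.parser import HTMLParser

# Rough sketch of the 99%-links pattern: measure what fraction of a page's
# visible text sits inside hyperlinks. A very high ratio suggests a link farm.

class LinkTextCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = 0      # depth of open <a> tags
        self.link_chars = 0   # text characters inside links
        self.total_chars = 0  # all text characters

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        text = data.strip()
        self.total_chars += len(text)
        if self.in_link:
            self.link_chars += len(text)

def hyperlink_text_ratio(html):
    parser = LinkTextCounter()
    parser.feed(html)
    if parser.total_chars == 0:
        return 1.0  # no text at all: treat as the worst case
    return parser.link_chars / parser.total_chars
```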

Another pattern is one that I often check upon and address in site audits, and it involves how functional and responsive the pages of a site are.

The determination of whether a web site is fully functional may be based on an HTTP response code, information received from a DNS server (e.g., hostname records), and/or a lack of a response within a certain amount of time. As an example, an HTTP response that is anything other than 200 (e.g., “404 Not Found”) would indicate that a web site is not fully functional.

As another example, a DNS server that does not return authoritative records for a hostname would indicate that the web site is not fully functional. Similarly, a lack of a response within a certain amount of time, from the IP address of the hostname for a web site would indicate that the web site is not fully functional.
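
Condensed into code, the patent’s three “fully functional” signals amount to a simple decision. In this sketch the inputs are assumed to have been gathered already, e.g. by a crawler and resolver:

```python
# The patent's "fully functional" test, reduced to its three signals:
# a non-200 HTTP status, missing authoritative DNS records, or a timeout
# each mark the site as not fully functional.

def is_fully_functional(http_status, has_authoritative_dns, responded_in_time):
    if http_status != 200:         # e.g. "404 Not Found", 500 server error
        return False
    if not has_authoritative_dns:  # DNS server returned no authoritative records
        return False
    if not responded_in_time:      # no response within the allowed window
        return False
    return True
```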

As for user data, it might sometimes play a role as well, as the patent tells us:

“A web page may be suggested for review and/or its content quality value may be adapted based on the amount of time spent on that page.

For example, if a user reaches a web page and then leaves immediately, the brief nature of the visit may cause the content quality value of that page to be reviewed and/or reduced. The amount of time spent on a particular web page may be determined through a variety of approaches. For example, web requests for web pages may be used to determine the amount of time spent on a particular web page.”
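
One simple way to approximate time-on-page from web requests, as the patent suggests, is to measure the gap between a user’s successive page requests. A sketch; the 5-second threshold is my own assumption, not a figure from the patent:

```python
from statistics import median

# Approximate dwell time per URL from a user's request log, then flag
# pages whose median visit is very short.

def dwell_times(request_log):
    """request_log: list of (timestamp_seconds, url) tuples, sorted by time.

    The dwell time for a page is the gap until the next request; the final
    page in the log has no next request, so it gets no measurement.
    """
    times = {}
    for (t1, url), (t2, _next_url) in zip(request_log, request_log[1:]):
        times.setdefault(url, []).append(t2 - t1)
    return times

def flag_short_visits(request_log, threshold=5.0):
    """Return {url: median_dwell} for pages with suspiciously brief visits."""
    return {url: median(ts)
            for url, ts in dwell_times(request_log).items()
            if median(ts) < threshold}
```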

My example of some patterns for an e-commerce website

There are a lot of things that you might want to include on an e-commerce site to help indicate that it’s high quality. If you look at the questions that Amit Singhal raised in the last Google Blog post I mentioned above, one of them was “Would you be comfortable giving your credit card information to this site?” Patterns that might fit with this question could include:

  • Is there a privacy policy linked to on pages of the site?
  • Is there a “terms of service” page linked to on pages of the site?
  • Is there a “customer service” page or section linked to on pages of the site?
  • Do ordering forms function fully on the site? Do they return 404 pages or 500 server errors?
  • If an order is made, does a thank-you or acknowledgement page show up?
  • Does the site use the HTTPS protocol when sending personal or payment data (like a credit card number)?
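
Those checks could be encoded as patterns over crawl data. In this sketch, `page` is a hypothetical dictionary a crawler might produce; the field names are my own, not anything Google has published:

```python
# The trust checklist above, encoded as named checks over a crawled page.
# Each check is a predicate on a hypothetical crawl record.

TRUST_CHECKS = {
    "privacy_policy":   lambda p: "privacy" in p["linked_pages"],
    "terms_of_service": lambda p: "terms" in p["linked_pages"],
    "customer_service": lambda p: "customer-service" in p["linked_pages"],
    "order_form_works": lambda p: p["order_form_status"] == 200,
    "https_checkout":   lambda p: p["checkout_uses_https"],
}

def failed_trust_checks(page):
    """Return the names of the checks this page fails."""
    return [name for name, check in TRUST_CHECKS.items() if not check(page)]
```

For example, a page linking to privacy and terms pages but lacking a customer service page and HTTPS checkout would fail exactly those two checks.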

As I mentioned above, the patent tells us that the weight a pattern contributes to a page’s content quality score might differ from one pattern to another.

The questions from Amit Singhal imply a lot of other patterns, but as SEOs who work on, build, and improve a lot of websites, this is an area where we probably have more expertise than Google’s search engineers.

What other questions would you ask if you were tasked with looking at this original Panda patent? What patterns would you suggest looking for when trying to identify high- or low-quality pages? Perhaps if we share with one another the patterns or features Google might look for algorithmically, we can build pages that won’t be interpreted by Google as low quality. I provided a few patterns for an e-commerce site above. What patterns would you suggest?

(Illustrations: Devin Holmes @DevinGoFish)

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Continue Reading →

Here’s Your Syllabus: Everything a Marketer Needs for Day 1 of an MBA

Posted by willcritchlow

A few years ago, I wrote a post on my personal blog about MBA courses. I have a great deal of respect for the top-flight MBA courses, based in part on how difficult I found the business-school courses I took during my graduate degree. I’m well aware of the stereotypes prevalent in the startup and online worlds, but I believe there is a lot of benefit to marketers having a strong understanding of how businesses function.

Recently, I’ve been thinking about how to build this into our training and development at Distilled; I think that our consultative approach needs this kind of awareness even more than most.

This post is designed to give you the building blocks needed to grow your capabilities in this area. Think of it as a cross between a recommended reading list and a home study guide.

Personal development: a personal responsibility

I’ve written before about the difference between learning and training, and how I believe that individuals should take a high degree of ownership of their own development. In an area like this, where it’s unlikely to be a core functional responsibility, it’s even more likely that you will need to dedicate your own time and effort to building your capabilities.

Start with financial basics

I may well be biased by my own experiences, but I believe that, by starting with the financial fundamentals, you gain a deeper understanding of everything that comes afterwards. My own financial education started before high school:

  • My dad used to give me simple arithmetic tasks based around the financials of his own business before I was old enough to be allowed to answer the phone (when my voice broke!)
  • At college, I took some informal entrepreneurial courses as well as elected to study a few hardcore mathematical finance subjects during grad school
  • After college, I worked as a “consultant” (really, a developer) for a financial software company and got my first real introduction to P&Ls, general ledgers, balance sheets, and so forth
  • Before starting Distilled, I worked as a management consultant and learnt to build financial models and business cases (though the most memorable lesson of this era is that big businesses just have more zeros in the model – at a certain point it doesn’t matter whether you’re working with $, $k, $m or $b)

So, where should you start your financial education?

I’d begin by learning how to read a balance sheet (which will quickly lead you to a load of ratios) and how to read a P&L (profit and loss statement). From there, you can get to cash flow.

In order to take this all in, you will need to set aside some time to work through a few examples and to dig into the definitions, acronyms, and concepts you haven’t heard before. These are not the kinds of posts you can simply skim.
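
For instance, two of the most common balance-sheet ratios take only a few lines to work through. The figures below are invented purely for practice:

```python
# A toy balance sheet with invented numbers, and two ratios you'll meet
# almost immediately: the current ratio (short-term liquidity) and
# debt-to-equity (leverage).

balance_sheet = {
    "current_assets": 50_000,       # cash, receivables, inventory
    "current_liabilities": 25_000,  # payables, short-term debt
    "total_liabilities": 60_000,
    "shareholders_equity": 40_000,
}

# Can the business cover its short-term obligations?
current_ratio = balance_sheet["current_assets"] / balance_sheet["current_liabilities"]

# How much of the business is financed by debt rather than owners?
debt_to_equity = balance_sheet["total_liabilities"] / balance_sheet["shareholders_equity"]
```

Here the current ratio works out to 2.0 and debt-to-equity to 1.5; working through a handful of real filings this way builds the intuition far faster than reading definitions alone.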

This may also be a good time to revisit some basics.

While working through all of this, you should be aiming to:

  • Become comfortable with the language and terminology
  • Understand the connections between cash, profit and assets
  • Begin seeing the sensitivities in how timing, margins, and business models impact outcomes
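
A tiny worked example of the cash-versus-profit connection, with invented numbers: a profitable month can still drain cash if customers haven’t paid their invoices yet.

```python
# Invented figures for a single month of a hypothetical business.

revenue = 100_000          # invoiced to customers this month
costs = 70_000             # wages and suppliers, paid in cash this month

profit = revenue - costs   # what the P&L reports: 30,000

cash_collected = 40_000    # only some invoices were actually paid this month
cash_flow = cash_collected - costs  # what the bank account sees: -30,000
```

The P&L shows a healthy profit while the bank balance falls, which is exactly the timing sensitivity mentioned above.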

Since all of this is pretty dry, be sure to add in some human interest by reading about business models in ‘the wild’ and applying your new-found knowledge to some real-world examples. Amazon is a great place to start because of its unusual focus on free cash flow over profit (for longer reads, I also recommend Bezos’ shareholder letters and The Everything Store).

Management: structures and methodologies

When you’re trying to get things done, it pays to understand the context of the people you’re seeking to influence. Whether you’re an external consultant or embedded in the organisation, the people you’re dealing with will have their own priorities, incentives, and worldview.

Above all, you need to get close to people in order to understand what truly makes them tick.

You can spend a lot of time digging into dry tomes on organisational design if you wish, but I’ve learned a lot from reading business biographies in order to understand the thinking of senior management at big businesses. Here are some of my favourites:

  • I already mentioned The Everything Store about Amazon in general and Jeff Bezos in particular
  • In the Plex (relevant to our interests: about Google) has all kinds of interesting anecdotes and management challenges
  • I am a massive fan of The Hard Thing About Hard Things by Ben Horowitz that documents a lot of the interpersonal and management challenges he has seen and faced over the years
  • I was recommended The Five Dysfunctions of a Team by a client-turned-friend who has led large engineering teams – its language of “conflict and commit” quickly became part of my personal thinking
  • In a variety of ways, I’ve learned interesting things even from books that I found difficult to read or where I felt as if I wouldn’t like to work for the individuals (including Winning by Jack Welch, Steve Jobs by Walter Isaacson, Who Says Elephants Can’t Dance by Lou Gerstner, and the epic Warren Buffett biography, The Snowball by Alice Schroeder)
Ben Horowitz (image source: developers.blog.box.com)

Jeff Bezos (image source: James Duncan Davidson, Flickr)

The way I work is to highlight sections of a book as I read it (the Kindle is a godsend for this) and then, if I found it interesting enough, to write up a brief book report for my team. This should be somewhere between enough information to persuade them it would be an interesting read and enough to impart its key lessons.

See my write-up of “Only the Paranoid Survive”.

Strategy: the interesting parts

For me, all of this forms the basics of what you need to think about the interesting parts. I’m fully aware (and glad!) that some people become specialists in the details above and enjoy working in them. For me, they serve as tools to understand and to communicate about the way that companies and markets function.

I find that the most interesting learning has elements of storytelling, timeliness, and humanity. During my university studies, I was most excited when I got to hear about theorems developed in the last few years. In corporate strategy terms, it’s important to know your history, but it’s also exciting to realise that we can read the history that’s happening all around us right now.

In rough chronological order, here is some reading material I’ve found interesting recently:

  • Only the Paranoid Survive is the story of how Andy Grove chose to lead Intel through the almost complete disappearance of its multi-billion dollar core business in just a few short years. See my notes
  • The Innovator’s Dilemma by Clayton Christensen – I’ve written about this before here on Moz and over on our own site
  • In writing about The Innovator’s Dilemma, I’ve talked before about the influence Mark Suster has had on my thinking. I find the vast majority of his writing very thought-provoking. I particularly enjoyed his recent presentation on Why it’s Morning in VC which included some great stats about the growth in market opportunity for businesses that think ‘online first’
  • For the most cutting-edge thinking about the evolution of strategy in a connected world, I haven’t come across a better thinker than Benedict Evans. His newsletter is one of a handful that I read religiously

Putting it together

Different people learn in different ways, but I thought I’d close with a few ideas for more structured ways of learning:

  • Work through the basics by recreating some of the analysis linked to above. Build the Excel, learn the terminology, etc.
  • Read a book from the list and write a book report
  • Form a study group to discuss a Harvard Business Review case

Further reading

The more you get into reading about business, the more you’ll realise what a rabbit hole it truly is. I’d love to hear some recommendations in the comments section. I’ve also included a few more resources that didn’t fit into the flow above but that I thought people might like to check out:

  • MBA Mondays from Fred Wilson (I recommend you start at the end and work forwards through time). I’ve liberally pulled individual posts into the writing above, but there’s a wealth of further information in this series
  • Harvard Business Review produces an incredible amount of content, so you’ll have to pick and choose, but there’s something there for everyone – from written content and video, to deep financial analysis to inter-personal management advice
  • I have had The Personal MBA recommended highly to me – though I have to admit that it’s still sitting in my wishlist

When I first pitched this post idea to the Moz editors, they were keen that it contain actual insight itself rather than just links to a bunch of books – something I wholeheartedly support. So, I thought I’d include my notes on Andy Grove’s Only the Paranoid Survive. Here you can see what I meant by taking notes and highlighting sections of a book to discuss with a group. This is a great way of digesting the ideas of a book, especially if they are particularly complex.

Andy Grove, Robert Noyce and Gordon Moore (1978)

Image source: Intel Free Press, Flickr

Only the Paranoid Survive: my notes on a business classic

Documenting his time at Intel, Andy Grove’s book provides a fascinating insight into how he led the company out of the memory business and into the microprocessor business. He details his approach for dealing with what he calls “strategic inflection points” which are those times in the life of a business when its fundamentals are about to change. As he says, this “can mean an opportunity to rise to new heights. But it may just as likely signal the beginning of the end.”

It’s an incredible story of leadership, management, and strategy, and I highly recommend you read it (even though it’s unfortunately not available on the Kindle – a criterion which is fast becoming my top priority in choosing which books I’ll read).

Written in 1996, the book looks pretty dated (in parts) almost twenty years later. But to illustrate the power of the insights from the man that Fortune magazine called “The best manager in the world”, I wanted to kick off with some quotes from the final chapter of the book. These detail Grove’s support of Intel post-retirement – guiding them on the changes he thought the internet would bring to their business. It highlights the value of the rest of the book by proving that he is capable of applying his theories to the future (which is always the hardest part of making predictions).

Highlighting the era in which the book was written, it starts with a section entitled:

What is the Internet Anyway?

Grove immediately lays out his stall:

I felt that the Internet was the biggest change in our environment over the last year.

And then goes on to predict a number of the disruptions that subsequently came to pass – starting with the effect on advertising:

To do that on a big scale, you have to “steal the eye-balls,” so to say, of the consumer audience from where they get those messages today … to displays on the World Wide Web.

Publishing:

We may be witnessing the birth of a new media industry.

And even mobile (though he doesn’t call it that):

Such an Internet appliance could be built around a simpler and less expensive microchip. Clearly, this would be detrimental to our business.

His presentation to a group of senior managers at Intel in the mid-90s clearly met with a mixed reception – and I love how much the quotes could be an indictment of my entire career:

Comments on my presentation range from “This was the best strategic analysis you’ve ever done” to “Why the hell did you waste so much time on the Internet?”

Rather than just pointing out problems, he clearly outlines a set of solutions – starting with embedding the internet at the top level of strategic direction:

Intel operates by following the direction set by three high-level corporate strategic objectives: the first has to do with our micro-processor business; the second with our communications business; the third with our operations and the executions of our plans. We add a fourth objective, encapsulating all the things that are necessary to mobilize our efforts in connection with the Internet.

…and hedging with a deliberate attempt to check his hypotheses to make sure they are correct:

So I think there is one more step for Intel to take to prepare ourselves for the future. And I think we should take it now while our market momentum is stronger than ever. I think we should put together a group to build the best inexpensive Internet appliance that can be built, around an Intel microchip. Let this group try to derail our strategies themselves.

Having set the scene, rather than rehash the story itself, I want to jump to the second half of the book. Here Grove details the general lessons he learned and the approaches he has taught since his retirement as a professor at the Stanford University Graduate School of Business.

So many of the lessons concern the ways that senior managers can make sure they stay abreast of the lessons their teams are learning at the coalface. But learning the lessons are so rarely enough in themselves. Grove details a conversation he had with Intel’s Chairman and CEO at the time – Gordon Moore:

I looked out the window at the Ferris wheel of the Great America amusement park revolving in the distance, then I turned back to Gordon and I asked, “If we got kicked out and the board brought in a new CEO, what do you think he would do?” Gordon answered without hesitation, “He would get us out of memories.” I stared at him, numb, then said, “Why shouldn’t you and I walk out the door, come back and do it ourselves?”

Knowing is not sufficient. It’s clear that you still need to work up the courage to make a change somehow. Early on, Grove dedicates a chapter to the necessary methodology of gathering information from the people he calls “Cassandras” who help funnel knowledge of impending changes to senior management:

The Cassandras in your organization are a consistently helpful element in recognizing strategic inflection points… Cassandras are usually in middle management; often they work in the sales organization.

He then goes on to address two potential objections to the particular kinds of action needed at this stage – by middle and senior management. I particularly like the second part – exhorting ‘armchair quarterbacks’ to get out of their comfy seats:

If you are in senior management, don’t feel you’re being a wimp for taking the time to solicit the views, convictions and passions of the experts. No statues will be carved for corporate leaders who charge off on the wrong side of a complex decision…If you are in middle management, don’t be a wimp. Don’t sit on the sidelines waiting for the senior people to make a decision so that later on you can criticize them over a beer — “My God, how could they be so dumb?”

Continuing the theme that it’s necessary, but not sufficient, to know what’s going on, Grove calls out common behaviour among senior people who (consciously or subconsciously) know what they need to be focusing on, but continually find themselves drawn in other directions. It reminded me of the adage that your calendar never lies:

At such times, senior managers often involve themselves in feverish charitable fundraising, a lot of outside board activities or pet projects…Frankly, as I look back, I have to wonder if it was an accident that I devoted a significant amount of my time in the years preceding our memory episode, years during which the storm clouds were already very evident, to writing a book. And as I write this, I wonder what storm clouds I might be ducking now.

In this theoretical section that comes after many of the personal stories of his own challenges, Grove lays out some mechanisms for coping with and dealing with strategic inflection points once you’ve seen them coming. In particular, he focuses on clarity of communication:

But when the structure of the industry changes, all of these elements change too. The mental map that you have been carrying with you all these years and relied upon in charting your company’s course of action suddenly loses its validity. However you haven’t had a chance to replace it with a new mental map. You haven’t made the explicit substitutions about how things are done now versus how they were done before, or who matters now versus who mattered then…If senior managers and know-how managers share a common view of the industry, the likelihood of their acknowledging changes in the environment and responding in an appropriate fashion will greatly increase. Sharing a common picture of the map of the industry and its dynamics is a key tool in making your organization an adaptive one.

…and clarity of purpose:

Management writers use the word “vision” for this. That’s too lofty for my taste. What you’re trying to do is capture the essence of the company and the focus of its business. You are trying to define what the company will be, yet that can only be done if you also undertake to define what the company will not be.

…and he addresses head-on the obvious counter to some of his simple examples by saying that he believes oversimplification is a risk worth taking in pursuit of extreme focus:

But the danger of oversimplification pales in comparison with the danger of catering to the desire of every manager to be included in the simple description of the refocused business, therefore making that description so lofty and so inclusive as to be meaningless.

Just before we get to the final section on the internet that I started with to make my broader point about the usefulness of Grove’s framework, he talks about some of the personal pitfalls of leading in the way he describes. I found these two passages to have echoes of ‘the hard thing about hard things’ that I referenced above. First, leading when you can’t know if you’re right:

I can’t help but wonder why leaders are so often hesitant to lead. I guess it takes a lot of conviction and trusting your gut to get ahead of your peers, your staff and your employees while they are still squabbling about which path to take, and set an unhesitating, unequivocal course whose rightness or wrongness will not be known for years.

…and second, the loneliness of this course:

When I started on this software study, I had to take the time I spent on it away from other things…This brought with it its own difficulties because people who were accustomed to seeing me periodically no longer saw me as often as they used to. They started asking questions like, “Does this mean you no longer care about what we do?”


I hope you’ve enjoyed this little tour through my ways of learning about business. I think an awful lot of learning comes down to curiosity and, in my experience, business is an endless source of fascination and things about which to be curious. I look forward to hearing your best links and book recommendations in the comments.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Continue Reading →

How Do I Successfully Run SEO Tests On My Website? – Whiteboard Friday

Posted by randfish
By now, most of us have gotten around to doing testing of some sort on our websites, but testing specifically for SEO can be extremely difficult and requires extra vigilance. In today’s Whiteboard Friday, Rand explains three major t…

Continue Reading →

How PornHub Is Bringing its A-Game (SFW)

Posted by malditojavi

This post was originally in YouMoz, and was promoted to the main blog because it provides great value and interest to our community. The author’s views are entirely his or her own and may not reflect the views of Moz, Inc.

Editor’s note: While the images in this post are free of graphic content, there are many suggestive references to potentially objectionable material.

Recently, I’ve been paying attention to how PornHub markets itself. It is one of the biggest pornographic websites, and while I have no idea who is behind their online marketing strategy, hats off to their team, because they’re taking the porn industry’s game to the next level.

Let me walk through some of their latest actions, and you’ll understand why I’m so impressed.

A bit of context: is porn still seen as taboo?

‘The Internet has taken porn mainstream,’ stated Aurora Snow, a retired porn star, when EJ Dickson, an editor at The Daily Dot, asked her about sexting and amateur porn as factors that have helped porn shed its taboo status.

In the interview, whose main topic is her speaking appearance at a Harvard conference, she points out how many A-list names in the porn industry have jumped to commercial and mainstream channels (‘James Deen is in a movie with Lindsay Lohan, Sasha Grey is on Entourage’). It’s totally OK if you don’t know any of these names, but it gives you an idea of how p0rn is now more accepted in our society.

Mobile first: unlimited videos… but only for mobile devices

Global mobile traffic reached almost 800,000 terabytes in 2013 alone, and that figure is estimated to double this year, according to research from Statista/Cisco. If that weren’t enough, smartphone ownership rose from 35% in 2011 to 56% in May 2013 (source).


Global Mobile Video Traffic from 2013 to 2018 - Statista

Even if those figures aren’t specific to PornHub’s business, consider the latest statements from Hulu’s CEO, who points out that despite starting as a desktop product, 50% of their five million subscribers use the service only on mobile devices (smartphones and tablets).

PornHub Unlimited - How PornHub Is Bringing Its A-Game

After many years of selling adult entertainment content through network TV channels, and from there moving to home desktops, the porn business knows that mobile screens are now part of our daily routine. And what did PornHub do? It removed every restriction for its mobile users, dropping the five-videos-per-day limit that still applies in the desktop version. What happened then? Take a look at some of the reactions on Twitter.

PornHub Unlimited Fans - How PornHub Is Bringing Its A-Game

Let’s not forget the peak of 15,000 mentions on Twitter (visible in the Topsy screenshot in the next section). On average, they don’t go beyond 3,000–4,000 mentions per day.

PornHub masters Twitter, and social media is its playground

If there are Twitter accounts for things as weird and eccentric as
Big Ben tweeting the time, or accounts that only tweet once per year, why shouldn’t a porn site have its own bizarre Twitter account?

Would you be more likely to strike up a conversation with an impersonal company account, or would you be more likely to talk if you knew who was behind the screen? The latter, right? Instead of a team account, PornHub on Twitter is ‘PornHub Katie’, named after its community manager. Its sister site YouPorn adopts the same strategy with Jude, YouPorn’s community manager.

A quick look at their timeline shows that they know how to play the Twitter game, and how friendly the conversations they engage in can be. They are not the typical conversations you would have in an open public space, though. I’m not going to include their latest tweets here, but you can nose around their timeline yourself (though not all of their tweets are appropriate for all workplaces).

For deeper insights based on data, and not only on my far-from-unbiased opinion (I’m more than a follower of their tweets; I’m a devotee!), let’s take the top three competitors in the ‘adult’ category as reported by SimilarWeb.

Ranking for 'Adult' Category provided by SimilarWeb - How PornHub Is Bringing Its A-Game

Analysing them through Topsy, we can see that xVideos and xHamster can’t compare with PornHub in terms of Twitter presence. If there’s another site starting to do well there, it’s YouPorn. But surprise: YouPorn belongs to the same network as PornHub.

Topsy Analysis @PornHub @xHamsterCom @xVideoscom - How PornHub Is Bringing Its A-Game

Would you retweet an explicit erotic photo? PornHub knows you’re not likely to. They clearly understand how virality works, and images are the aces up their sleeve. These are some of the latest memes you’ll find if you follow them on Twitter.

(Screenshots: four of PornHub’s Twitter memes)

Aren’t they great? Think you could do better? Well, today’s your lucky day. In late 2013 they announced a search for a creative director to join them for a year. Their goal? To launch an all-audiences national advertising campaign that can be channeled through mainstream media.

PornHub Wants To Launch a Public National Campaign - How PornHub Is Bringing Its A-Game

Nonetheless, it’s not always sunny in Pornland. With an initiative like this, designers (and even non-designers) could accuse them of running a spec-based design contest. Spec what? Nothing other than ‘any kind of creative work (…) submitted (…) by designers to prospective clients before taking steps to secure both their work and equitable fees’, as described on NoSpec.com. In other words, they are using royalty-free submissions created painstakingly by designers around the world. Not cool.

Trend hijacking also has a place in their timeline, whether it’s the latest Cinco de Mayo festivities, Rihanna news, or the Super Bowl. They are well aware of when a porn website might be more ‘needed’ in someone’s life, which is why they offered unlimited videos on… Valentine’s Day.

No Video Limit on St. Valentine's Day - How PornHub Is Bringing Its A-Game

A community of people openly admitting to liking p0rn?

Apple has a huge number of people who spread their love for the products it makes. Buffer is building a community around startup transparency thanks to initiatives like its open metrics dashboard powered by Baremetrics and its open salaries posts. Ryan and the team at Product Hunt really care about the power of a ‘true engaged community’, not ‘just acquiring additional users’.

In the end, it seems ‘easy’ to grow a community if what you do is good enough. But what happens if your product is porn? What at first seems a difficult challenge, growing a porn community, is easier than ever in the 21st century. People are not afraid to show what they like, even if it’s porn. Sexuality is a staple of everyday TV shows, parents take their children’s sex education seriously, and in many cases sex is no longer a topic to avoid.

Would you invite your followers to send you nude pics? Would you send one yourself, knowing it’s going to be shared with 300K porn fans? PornHub jumps right in when it comes to talking to the community and making them participants in PornHub history.

Engaging With The Community - How PornHub Is Bringing Its A-Game

Porn Community - How PornHub Is Bringing Its A-Game

If you’re curious, you can head over to Twitter for some of the most imaginative and sexy responses and replies to these kinds of contests. Totally NSFW, everyone. You were warned.

Mother Nature and porn-viewing insights: the best allies for earning backlinks

How does a porn site earn links from ‘normal’ sites? By ‘normal’, I mean websites that are not erotic or sexually explicit at all.

On one hand, PornHub seems to be taking on the role OkCupid pioneered a while ago with its blog. OkTrends collected original research and insights from the dating site, giving shape to the ‘hundreds of millions of OkCupid user interactions’. PornHub’s version is called ‘PornHub Insights’, and it’s delightful if you love data (and some erotic references).

PornInsights - How PornHub Is Bringing Its A-Game

Extra points: I spotted two of their favorite visualization tools: Infogr.am and Tableau.

And on the other hand, they have partnered up with Mother Nature, promising to plant one tree for every 100 videos viewed in the ‘Big Dick’ category. The result? 2,294. Trees? No, not yet. 2,294 external referring backlinks, according to data from MajesticSEO.

(Screenshot: MajesticSEO backlink data)

I tend to assume good intentions, but do we really believe they care that much about nature, or is it just a ‘gimme some backlinks and promo’ kind of thing? I have my doubts, but looking at the backlink profiles of PornHub and the previously mentioned competitors, it seems Mother Nature is great for link building. By contrast, among the 20 best-performing backlinks of competitors xVideos and xHamster, there is nothing but p0rn content.

(Screenshot: top-20 backlink comparison between PornHub and its competitors)

‘Put this into other words for me, Javier.’ Well, it’s not about the trees; it’s about getting the attention of Cosmopolitan, BetaBeat, Gawker, Huffington Post, Vice, GQ, and a long list of mass-media audiences.

Linkbaiting? PornHub has that well covered, too

Type ‘pornhub + justin bieber’ into your preferred search engine and check it out. The SERPs are certainly quite different from those for similar queries featuring other celebrities, like ‘pornhub + paris hilton’ or ‘pornhub + kim kardashian’ (I’m not going to link them, but you can look them up yourself).


‘We are not interested in hosting any Justin Bieber sex tapes (…) It’s nothing against Selena Gomez; we just don’t approve of all of Bieber’s gross behavior – spitting on fans, driving dangerously and endangering people, and just being a real jerk,’ commented PornHub’s VP when asked what position PornHub would take if it were offered the chance to buy or license a possible leaked video.

A respectful approach… or just another marketing strategy? They may not be interested in such an asset for their video collection, but they don’t hesitate to target the Beliebers, hordes of Justin’s fans who won’t hesitate to go for your jugular if you say an unkind word about their Messiah.

PornHub Wishes HB to Justin Bieber After Calling Him 'Jerk' - How PornHub Is Bringing Its A-Game

By the way, were you impressed by PornHub’s performance against its competitors on Twitter? Prepare to be speechless when you see the
analysis of the 35-million-visits-per-day porn website vs. the Canadian singer’s personal account (54 million Twitter foll… Beliebers).

PornHub knows its personas and goes for them

Great marketers have pointed out the importance of defining the personas you are targeting in your marketing strategy. A
great piece about understanding who is behind the screens, written by Mike, can be found on this very site. Don’t just bookmark it; we all know you’ll never get back to it, so read it now.


‘Researching and developing my personas will take some of my time,’ you say. If it’s done well, it certainly will. But have you thought about what a great resource you’ll end up with? You establish who your ideal user/visitor/reader is, and you create a plan to get their attention. And by ‘attention’, I mean their clicks, the time they spend on your site, their comments, etc. Let’s be precise: their money.

Does Machinima mean anything to you? It didn’t to me until I watched one of their
YouTube videos (kind of NSFW) promoting PornHub. I had typed ‘pornhub’ into Google’s famous video platform to see what other content PornHub was hosting besides their interviews with personalities from their industry.

Machinima’s h1 tag describes its business as the ‘next generation video entertainment network for the gamer lifestyle and beyond’. Its services range from ‘monetization of your channel’ to ‘grow your audience and expand your reach to new platforms’. With most of its partners in the video-game industry, it works with some impressive figures.

(Screenshot: Machinima audience figures)

‘OK, now I know a bit more about these video gamers, but how does this relate to PornHub?’ Glad you’re wondering that too. I must be honest and say that I’ve been quite reluctant about the whole vlogger thing – I just prefer written content, where you can read or scan what’s being said at your own pace – but after seeing that in just two days Machinima got 700,000 views on a video featuring nothing but PornHub’s initiative, I might change my opinion about vlogging.
Machinima Promoting PornHub through Their Vlog - How PornHub Is Bringing Its A-Game

After watching the video, and seeing more about the kind of audience
Machinima is targeting, don’t you find many correlations and similarities with PornHub’s audience?

(Screenshot: Machinima audience profile)

‘Innovation’ is also possible in the porn industry

A quick analysis of the search queries their visitors make, with the weirdest ones shared daily? They’ve got it.

PornMD Daily Analysis of The Best Porn Search Engine on the Internet - How PornHub Is Bringing Its A-Game

Some ‘machine learning’ to create a porn search engine that guesses what kinds of videos you’re likely to enjoy? They have that too – not on PornHub’s main site, but on a little brother.

PornMD - How PornHub Is Bringing Its A-Game

I don’t know if it can be considered ‘innovation’, but as I pointed out before, posts researching and analyzing which sexual genres are the most viewed, what the sexual preferences of certain US regions are, and how football matches affect their traffic figures depending on whether your team is winning or losing are certainly something you don’t expect from a porn site. And yep, they’re on that too, through
PornHub Insights.

In terms of marketing, promotion, and buzz, if all the porn sites out there took their business as seriously as PornHub does, marketing friends, we would have a huge pool of opportunities to learn from. Because, as I read a while ago in
Jordi’s post, ‘porn sites are the best places to learn about conversion’ – and, I would add, about online marketing too.


Continue Reading →

Dear Google, Links from YouMoz Don’t Violate Your Quality Guidelines

Posted by randfish

Recently, Moz contributor Scott Wyden, a photographer in New Jersey, received a warning in his Google Webmaster Tools about some links that violated Google’s Quality Guidelines. Many, many site owners have received warnings like this, and while some are helpful hints, many (like Scott’s) include sites and links that clearly do not violate the guidelines Google’s published.

Here’s a screenshot of Scott’s reconsideration request:

(note that the red text was added by Scott as a reminder to himself)

As founder, board member, and majority shareholder of Moz, which owns Moz.com (of which YouMoz is a part), I’m here to tell Google that Scott’s link from the YouMoz post was absolutely editorial. Our content team reviews every YouMoz submission. We reject the vast majority of them. We publish only those that are of value and interest to our community. And we check every frickin’ link.

Scott’s link, ironically, came from this post about Building Relationships, Not Links. It’s a good post with helpful information, good examples, and a message which I strongly support. I also, absolutely, support Scott’s earning of a link back to his Photography SEO community and to his page listing business books for photographers (this link was recently removed from the post at Scott’s request). Note that “Photography SEO community” isn’t just a descriptive name, it’s also the official brand name of the site Scott built. Scott linked the way I believe content creators should on the web: with descriptive anchor text that helps inform a reader what they’re going to find on that page. In this case, it may overlap with keywords Scott’s targeting for SEO, but I find it ridiculous to hurt usability in the name of tiptoeing around Google’s potential overenforcement. That’s a one-way ticket to a truly inorganic, Google-shaped web.

If Google doesn’t want to count those links, that’s their business (though I’d argue they’re losing out on a helpful link that improves the link graph and the web overall). What’s not OK is Google’s misrepresentation of Moz’s link as “inorganic” and “in violation of our quality guidelines” in their Webmaster Tools.

I really wish YouMoz was an outlier. Sadly, I’ve been seeing more and more of these frustratingly misleading warnings from Google Webmaster Tools.

(via this tweet)

Several months ago, Jen Lopez, Moz’s director of community, had an email conversation with Google’s Head of Webspam, Matt Cutts. Matt granted us permission to publish portions of that discussion, which you can see below:

Jen Lopez: Hey Matt,

I made the mistake of emailing you while you weren’t answering outside emails for 30 days. :D I wanted to bring this up again though because we have a question going on in Q&A right now about the topic. People are worried that they can’t guest post on Moz: http://moz.com/community/q/could-posting-on-youmoz-get-your-penalized-for-guest-blogging because they’ll get penalized. I was curious if you’d like to jump in and respond? Or give your thoughts on the topic?

Thanks!

Matt Cutts: Hey, the short answer is that if a site A links to spammy sites, that can affect site A’s reputation. That shouldn’t be a shock–I think we’ve talked about the hazards of linking to bad neighborhoods for a decade or so.

That said, with the specific instance of Moz.com, for the most part it’s an example of a site that does good due diligence, so on average Moz.com is linking to non-problematic sites. If Moz were to lower its quality standards then that could eventually affect Moz’s reputation.

The factors that make things safer are the commonsense things you’d expect, e.g. adding a nofollow will eliminate the linking issue completely. Short of that, keyword rich anchortext is higher risk than navigational anchortext like a person or site’s name, and so on.”

Jen, in particular, has been a champion of high standards and non-spammy guest publishing, and I’m very appreciative to Matt for the thoughtful reply (which matches our beliefs). Her talk at SMX Sydney—Guest Blogging Isn’t Dead, But Blogging Just for Links Is—and her post—Time for Guest Blogging With a Purpose—help explain Moz’s position on the subject (one I believe Google shares). 

I can promise that our quality standards are only going up (you can read Keri’s post on YouMoz policies to get a sense of how seriously we take our publishing), that Scott’s link in particular was entirely editorial, organic, and intentional, and that we take great steps to ensure that all of our authors and links are carefully vetted.

We’d love it if Google’s webmaster review team used the same care when reviewing and calling out links in Webmaster Tools. It would help make the web (and Google’s search engine) a better place.


Continue Reading →

The Broken Art of Company Blogging (and the Ignored Metric that Could Save Us All)

Posted by evolvingSEO

The perception of success
The following screenshot is from an actual blog post. Based upon what you see here, would you call it successful?

I think it depends on perception.
The optimist might see this…

Continue Reading →

Calculating Estimated ROI for a Specific Site & Body of Keywords

Posted by shannonskinner

One of the biggest challenges for SEO is proving its worth. We all know it’s valuable, but it’s important to convey its value in terms that key stakeholders (up to and including CEOs) understand. To do that, I put together a process to calculate an estimate of ROI for implementing changes to keyword targeting.

In this post, I will walk through that process, so hopefully you can do the same for your clients (or as an in-house SEO to get buy-in), too!

Overview

  1. Gather your data
    1. Keyword Data
    2. Strength of your Preferred URLs
    3. Competition URLs by Keyword
    4. Strength of Competition URLs
  2. Analyze the Data by Keyword
  3. Calculate your potential opportunity

What you need

There are quite a few parts to this recipe, and while the calculation part is pretty easy, gathering the data to throw in the mix is the challenging part. I’ll list each section here, including the components of each, and then we can go through how to retrieve each of them. 

  • Keyword data

    • list of keywords
    • search volumes for each keyword
    • preferred URLs on the site you’re estimating ROI for
    • current rank
    • current ranking URL
  • Strength of your preferred URLs

    • De-duplicated list of preferred URLs
    • Page Authorities for each preferred URL
    • BONUS: External & Internal Links for each URL. You can include any measure you like here, as long as it’s something that can be compared (i.e. a number).
  • Where the competition sits

    • For each keyword, the sites that are ranking 1-10 in search currently
  • Strength of the competition

    • De-duplicated list of competing URLs
    • Page Authorities and Domain Authorities
    • BONUS: External & Internal Links, for each competing URL. Include any measure you’ve included on the Strength of Your Preferred URLs list.


How to get what you need


There has been quite a lot written about keyword research, so I won’t go into too much detail here. For the Keyword data list, the important thing is to get whatever keywords you’d like to assess into a spreadsheet, and include all the information listed above. You’ll have to select the preferred URLs based on what you think the strongest-competing and most appropriate URL would be for each keyword. 


For the
Preferred URLs list, you’ll want to use the data that’s in your keyword data under the preferred URL.

  1. Copy the preferred URL data from your Keyword Data into a new tab. 
  2. Use the Remove Duplicates tool (Data>Data Tools in Excel) to remove any duplicated URLs
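If you’d rather script this step than use Excel’s Remove Duplicates tool, a minimal Python sketch does the same thing (the example.com URLs below are placeholders):

```python
# Order-preserving de-duplication of the preferred-URL column,
# equivalent to Excel's Data > Data Tools > Remove Duplicates step.
def dedupe_preserving_order(urls):
    # dict.fromkeys keeps the first occurrence of each URL in order
    return list(dict.fromkeys(urls))

preferred = [
    "http://example.com/widgets",
    "http://example.com/gadgets",
    "http://example.com/widgets",  # duplicate: same URL preferred for two keywords
]

unique_preferred = dedupe_preserving_order(preferred)
```

The de-duplicated list is what you feed into the metrics lookup in the next step.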

Once you have the list of de-duplicated preferred URLs, you’ll need to pull the data from Open Site Explorer for these URLs. I prefer using the Moz API with SEOTools. You’ll have to install it to use it for Excel, or if you’d like to take a stab at using it in Google Docs, there are some resources available for that. Unfortunately, with the most recent update to Google Spreadsheets, I’ve had some difficulty with this method, so I’ve gone with Excel for now. 

Once you’ve got SEOTools installed, you can make the call “=MOZ_URLMetrics_toFit([enter your cells])”. This should give you a list of URL titles, canonical URLs, External & Internal links, as well as a few other metrics and DA/PA. 
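For context, here is a rough Python sketch of the signed request that SEOTools builds against the Mozscape URL Metrics endpoint. The endpoint path, the signing scheme (base64-encoded HMAC-SHA1 of the access ID and expiry), and the Cols bit values for Page and Domain Authority are as I recall them from Moz’s API docs, so double-check them there before using this; the credentials below are dummies.

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def mozscape_url_metrics_request(target_url, access_id, secret_key, cols):
    """Build a signed request URL for the Mozscape URL Metrics endpoint.

    The signature is a base64-encoded HMAC-SHA1 of "<access_id>\n<expires>".
    """
    expires = int(time.time()) + 300  # signature valid for five minutes
    to_sign = f"{access_id}\n{expires}".encode("utf-8")
    digest = hmac.new(secret_key.encode("utf-8"), to_sign, hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest))
    return (
        "http://lsapi.seomoz.com/linkscape/url-metrics/"
        f"{quote(target_url, safe='')}"
        f"?Cols={cols}&AccessID={access_id}"
        f"&Expires={expires}&Signature={signature}"
    )

# Cols is a bit field; 34359738368 + 68719476736 requests Page Authority
# and Domain Authority (values per the Mozscape docs at the time).
request_url = mozscape_url_metrics_request(
    "http://example.com/widgets", "member-123", "dummy-secret",
    34359738368 + 68719476736)
```

Fetching `request_url` with your real credentials returns a JSON object with the same PA/DA fields that SEOTools drops into your spreadsheet.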


For the
Where the competition sits list, you’ll first need to perform a search for each of your keywords. Obviously, you could do this manually, or if you have exportable data from a keyword ranking tool and you’ve been ranking the keywords you’d like to look at, you could use either of these methods. If you don’t have those, you can use the hacky method that I did–basically, use the ImportXML command in Google Spreadsheets to grab the top ranking URLs for each query. 

I’ve put a sample version of this together, which you can access here. A few caveats: you should be able to run MANY searches in a row–I had about 850 for my data, and they ran fine–but Google will block your IP address if you run too many. I also found that I needed to copy my results out as values into a different spreadsheet once I’d gotten them, because they timed out relatively quickly; you can just put them into the Excel spreadsheet you’re building for the ROI calculations (you’ll need them there anyway!).
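If you later want to parse fetched result pages outside of Google Spreadsheets, a rough Python equivalent of the same scrape might look like this. The `<h3 class="r">` markup is an assumption about Google’s result HTML at the time, and it changes often, so verify it against a live page; the example URLs are made up.

```python
import re

# Google's organic results (at the time) wrapped each title in <h3 class="r">.
RESULT_LINK = re.compile(r'<h3 class="r"><a href="([^"]+)"')

def extract_ranking_urls(serp_html, top_n=10):
    """Pull the top-ranking URLs out of an already-fetched results page."""
    return RESULT_LINK.findall(serp_html)[:top_n]

# Tiny synthetic page standing in for fetched SERP HTML
sample_html = (
    '<h3 class="r"><a href="http://example.com/widgets">Widgets</a></h3>'
    '<h3 class="r"><a href="http://example.org/buy-widgets">Buy widgets</a></h3>'
)
ranking = extract_ranking_urls(sample_html)
```

The same caveat applies as with ImportXML: run too many fetches and Google will block your IP, so pace the requests.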


From this list, pull each URL into a single column and de-duplicate as explained for the preferred URLs list to generate the
Strength of the Competition list. Then run the same SEOTools analysis you performed on the preferred URLs to generate the same metrics for these competing URLs. 


Making your data work for you

Once you’ve got these lists, you can use some VLOOKUP magic to pull in the information you need. I used the
Where the competition sits list as the foundation of my work. 

From there, I pulled in the corresponding preferred URL and its Page Authority, as well as the PAs and DAs for each URL currently ranking 1-10. I then was able to calculate an average PA & DA for each query, and could compare the page I want to rank to this. I estimated the chances that the page I wanted to rank (given that I’d already determined these were relevant pages) could rank with better keyword targeting.
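The VLOOKUP-and-average step above can be sketched in Python as follows. All the PA/DA numbers here are invented purely for illustration, and "competitive" is the simplest possible test (your page’s PA at or above the average PA of the ranking pages); in practice you’d weigh DA and the other metrics you gathered too.

```python
# Toy data: for each keyword, the (PA, DA) of the pages currently ranking,
# plus the PA of your preferred page for that keyword.
competitors = {
    "blue widgets": [(42, 55), (38, 60), (51, 70)],   # normally ten entries
    "cheap widgets": [(25, 30), (28, 41), (22, 35)],
}
preferred_pa = {"blue widgets": 45, "cheap widgets": 18}

def keyword_opportunities(competitors, preferred_pa):
    """Compare your page's PA against the average PA/DA of the ranking pages."""
    rows = []
    for kw, metrics in competitors.items():
        avg_pa = sum(pa for pa, _ in metrics) / len(metrics)
        avg_da = sum(da for _, da in metrics) / len(metrics)
        rows.append({
            "keyword": kw,
            "avg_pa": round(avg_pa, 1),
            "avg_da": round(avg_da, 1),
            # crude competitiveness test: our PA meets the average PA
            "competitive": preferred_pa[kw] >= avg_pa,
        })
    return rows

opportunities = keyword_opportunities(competitors, preferred_pa)
```

Keywords flagged competitive are the ones whose search volumes you’d sum in the conservative estimate described next.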

Here’s where things get interesting. You can be rather conservative and only sum the search volumes of keywords you’re fairly confident your site can rank for, which is my preferred method. That’s because I use this method primarily to determine whether I’m on the right track–whether making these recommendations is really worth the time to get them implemented. So I’m going to move forward assuming I’m counting only the search volumes of terms I think I’m quite competitive for, AND that I’m not yet ranking for on page 1. 


Now, you want to move to your analytics data in order to calculate a few things: 

  • Conversion Rate
  • Average order value
  • Previous year’s revenue (for the section you’re looking at)

I’ve set up my sample data in this spreadsheet that you can refer to or use to make your own calculations. 

Each of the assumptions can be adjusted depending on the actual site data, or using estimates. I’m using very very generic overall CTR estimates, but you can select which you’d like and get as granular as you want! The main point for me is really getting to two numbers that I can stand by as pretty good estimates: 

  • Annual Impact (Revenue $$)
  • Increase in Revenue ($$) from last year

This is because, for higher-up folks, money talks. Obviously, this won’t be something you can promise, but it gives them a metric they understand, so they can really wrap their heads around the value you’re potentially bringing to the table if the changes you’re recommending are made. 
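Putting the estimate together is simple arithmetic. A sketch with invented, round numbers (substitute your own analytics data and preferred CTR model):

```python
# All inputs are illustrative placeholders; plug in your own analytics data.
monthly_search_volume = 10_000   # summed volume of keywords you expect to win
estimated_ctr = 0.20             # generic CTR estimate for the target position
conversion_rate = 0.02           # from your analytics
average_order_value = 50.00      # from your analytics
last_year_revenue = 18_000.00    # for the section you're looking at

monthly_visits = monthly_search_volume * estimated_ctr
monthly_revenue = monthly_visits * conversion_rate * average_order_value
annual_impact = monthly_revenue * 12            # Annual Impact (Revenue $$)
revenue_increase = annual_impact - last_year_revenue  # Increase from last year
```

With these placeholder inputs, the model projects 2,000 visits and $2,000 in revenue per month, i.e. a $24,000 annual impact, which is a $6,000 increase over last year; again, that is an estimate to frame the conversation, not a promise.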

There are some great tools for estimating this kind of stuff on a smaller scale, but for a massive body of keyword data, hopefully you will find this process useful as well. Let me know what you think, and I’d love to see what parts anyone else can streamline or make even more efficient. 


Continue Reading →

How to Prove ROI Potential of Content Campaigns – Whiteboard Friday

Posted by iPullRank
We all know that creating and promoting content can be a ton of work (not to mention expensive). So how do we know whether it’ll be worth it? In today’s Whiteboard Friday, MozCon 2014 speaker Mike King shows you several w…

Continue Reading →
