Google Webmaster Tools Just Got a Lot More Important for Link Discovery and Cleanup

Posted by RobertFisher

This post was originally in YouMoz, and was promoted to the main blog because it provides great value and interest to our community. The author’s views are entirely his or her own and may not reflect the views of Moz, Inc.

What if you owned a paid directory site and every day you received email after email asking that links be removed? As they stacked up in your inbox, whether pleasant or sternly demanding that you cease and desist, would you just want to give up? What would you do to stop the barrage if the requests felt overwhelming? How could you make it all go away, or at least most of it?

First, a bit of background

An important new client came aboard on April 1, 2013, with a lot of work needed. They had been losing rankings for some time and wanted help. With new clients, we want as much baseline data as possible so that we can measure progress, so we do a lot of monitoring. On April 17th, one of our team members noticed something quite interesting: using Ahrefs for link tracking, we saw a big spike in the number of external links coming to our new client's site.

When the client came on board two weeks prior, the site had about 5,500 links coming in, and many of them were low quality. Likely half or more were comment links from sites with no relevance to the client, using the domain as the anchor text. Now, overnight, they were at 6,100 links, and the next day even more. Each day the links kept increasing, and we saw they were coming from a paid directory called Netwerker.com. Within a month to six weeks, they were at over 30,000 new links from that site.

We sent a couple of emails asking that they please stop the linking, and we watched Google Webmaster Tools (GWT) every day like hawks waiting for the first link from Netwerker to show. The emails got no response, but in late May we saw the first links from there show up in GWT and we submitted a domain disavow immediately.

We launched their new site in late June and watched as they climbed in the rankings; that is a great feeling. Because the site was rising rather well, we assumed the disavow tool had worked on Netwerker. Unfortunately, there was a cloud on the horizon: all of the link building that had been done for the client prior to our engagement. October arrived with a Penguin attack (Penguin 2.1, October 4, 2013) and they fell considerably in the SERPs. They disappeared for many of the best terms they had again begun to rank for, falling to page five or deeper for key terms. (NOTE: This was all algorithmic; they had no manual penalty.)

While telling the client that the new drop was related to the October Penguin update (and the large ratio of really bad links), we also looked for anything else that might be causing or affecting the results. We are constantly monitoring and changing things with our clients, and there are times we make a change that doesn't work out and have to roll it back. (We always tell the client if we have caused a negative impact on their rankings; this is one of the most important things we do in building trust over time, and we have never lost a client because we made a mistake.) We went through everything thoroughly and eliminated any other potential causes. At every turn there was a Penguin staring back at us!

When we launched the new site in late June 2013, we had seen them rise back to page one for key terms in a competitive vertical. Now they were missing for the majority of their most important terms. In mid-March of 2014, nearly a year after engagement, they agreed to an aggressive link cleanup and we began immediately. There would be roughly 45,000 to 50,000 links to clean up, but with 30,000 from the one domain already appropriately disavowed, it was a bit less daunting. I believe their reluctance to do the link cleanup was due to really bad SEO advice in the past. Over time they had used several SEO people and firms, and at every turn they were given poor advice. I believe they were misled into thinking that high rankings were easy to get and that there were "tricks" that would fool Google so you could pull it off. So it really isn't a client's fault when they believe things are easy in the world of SEO.

Finally, it begins to be fun

About two weeks in, we saw them start to pop up randomly in the rankings. We were regularly getting responses back from linking sites. Some responses were positive and some were requests for money to remove the links; the majority gave us the famous “no reply.” But, we were making progress and beginning to see a result. Around the first or second week of April their most precious term, geo location + product/service, was ranked number one and their rich snippets were beautiful. It came and went over the next week or two, staying longer each time.

To track links we use MajesticSEO, Ahrefs, Open Site Explorer, and Google Webmaster Tools. As the project progressed, our Director of Content and Media, who was overseeing the project, could not understand why so many links were falling off so quickly. Frankly, not that many sites were agreeing to remove them.

Here is a screenshot of the lost links from Ahrefs.

ahrefs New and Lost Links March 7 to May 7

Here are the lost links in MajesticSEO.

MajesticSEO Lost Links March to May

We were seeing links fall off as if the wording we had used in our emails to the sites was magical. This caused a bit of skepticism on our team's part, so they began to dig deeper. It took little time to realize that the majority of the links falling off were from Netwerker! (Remember, a disavow does not keep links from showing in the link research tools.) Were they suddenly good guys, willing to clear it all up? Had our changed wording caused a change of heart? No: the links from Netwerker still showed in GWT. Webmaster Tools had never shown all of the links from Netwerker, only about 13,000, and it was still showing 13,000. But was that just because Google was slower to show the change? To check, we did a couple of things. First, we visited the links that were "lost" and saw they still resolved to the site, so we dug some more.

Using a bit of magic in the form of a User-Agent Switcher extension and eSolutions' "What's my info?" page (to verify the correct user-agent was being presented), our head of development spoofed the user-agent strings for Ahrefs and MajesticSEO. What he found was that Netwerker was now starting to block MajesticSEO and Ahrefs via a 406 response. We were unable to check Removeem, but the site was not yet blocking OSE. Here are some screenshots of the results we were seeing. Notice in the first screenshot that all is well with Googlebot.


But A Different Story for Ahrefs


And a Different Story for MajesticSEO

We alerted both Ahrefs and MajesticSEO, and neither responded beyond a canned "we will look into it." We thought it important to let those dealing with link removal know to look even more carefully. Now, three months later in August, both maintain that original response.

User-agents and how to run these tests

The user-agent, or user-agent string, is sent to the server along with any request. It allows the server to determine the best response to deliver, based on conditions set up by its developers. In the case of Netwerker's servers, that response appears to be to deny access to certain user-agents.

  1. We used the User-Agent Switcher extension for Chrome.
  2. Determine the user-agent string you would like to check. These can be found on various sites; one set of examples is at http://www.useragentstring.com/, and in most cases the owner of the crawler or browser publishes its string (the Ahrefs bot, for example).
  3. Within the User-Agent Switcher extension, open the options panel and add the new user-agent string.
  4. Browse to the site you would like to check.
  5. Using the User-Agent Switcher, select the agent you would like to view the site as. The page will reload, and you will be viewing it as the new user-agent.
  6. We used eSolutions' "What's my info?" page to verify that the User-Agent Switcher was presenting the correct data to us. (A scripted version of this check follows the list.)
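
If you would rather script this check than click through a browser extension, the same test takes only a few lines of Python using the requests library. This is a minimal sketch; the test URL and the user-agent strings below are placeholders, so substitute the page you care about and the exact strings published by each crawler's owner.

# Minimal sketch: see how one URL responds to different user-agent strings.
# All of the strings below are illustrative placeholders.
import requests

URL = "http://www.example.com/"  # replace with the page you want to test

USER_AGENTS = {
    "Googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "AhrefsBot": "Mozilla/5.0 (compatible; AhrefsBot/5.0; +http://ahrefs.com/robot/)",
    "MJ12bot": "Mozilla/5.0 (compatible; MJ12bot/v1.4.5; http://www.majestic12.co.uk/bot.php?+)",
    "Browser": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0 Safari/537.36",
}

for name, ua in USER_AGENTS.items():
    try:
        response = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
        # 200 means the page is served normally; a 406 (what Netwerker
        # returned) or 403 suggests the server is refusing that user-agent.
        print(f"{name:10s} -> HTTP {response.status_code}")
    except requests.RequestException as exc:
        print(f"{name:10s} -> request failed: {exc}")

A page that returns a 200 for Googlebot but a 406 or 403 for the link crawlers is doing exactly what we caught Netwerker doing.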

A final summary

If you talk with anyone known for link removal (think people like Ryan Kent of Vitopian, an expert in link cleanup), they will tell you to use every link report you can get your hands on to ensure you miss nothing, and they always include Google Webmaster Tools as an important tool. Personally, while we always use GWT, early on I saw it as useful only for checking whether we had missed anything, because it consistently shows fewer links than the other tools and the links it does show usually appear in them as well. My opinion has changed with this revelation.

Because we gather data on clients early on, we had something to refer back to for the link cleanup. Today, if someone comes in and we have no history of their links, we must assume they have links from sites that block the major link discovery tools, and we proceed with a heightened sense of caution. We will never again believe we have cleaned everything; at best, we can believe we have cleaned everything showing in GWT.

If various directories and other sites with a lot of outbound links start blocking link discovery tools because they "just don't want to hear any more removal requests," GWT just became your most important tool for catching the ones doing the blocking. They would not want to block Google or Bing, for obvious reasons.

So, as you go forward and look at links for your own site and/or for clients, I suggest you check GWT to make sure nothing is showing there that fails to show in the well-known link discovery tools.


Announcing the All-New Beginner’s Guide to Link Building

Posted by Trevor-Klein

It is my great pleasure to announce the release of Moz's third guide for marketers, written by the inimitable Paddy Moogan of Distilled:

The Beginner's Guide to Link Building

We could tell you all about how high-quality, authoritative links pointing to your site benefit your standing in the SERPs, but instead we’ll just copy the words straight from the proverbial horse’s mouth:

“Backlinks, even though there’s some noise and certainly a lot of spam, for the most part are still a really, really big win in terms of quality for search results.”
— Matt Cutts, head of the webspam team at Google, 
2/19/14

Link building is one area of SEO that has changed significantly over the last several years; some tactics that were once effective are now easily identifiable and penalized by Google. At the same time, earning links remains vital to success in search marketing: Link authority features showed the strongest correlation with higher rankings in our 2013 ranking factors survey. For that reason, it has never been more important for marketers to truly earn their links, and this guide will have you building effective campaigns in no time.


What you’ll learn


1. What is Link Building, and Why Is It Important?


This is where it all begins. If you’re brand new to link building and aren’t sure whether or not it’s a good tactic to include in your marketing repertoire, give this chapter a look. Even the more seasoned link earners among us could use a refresher from time to time, and here we cover everything from what links mean to search engines to the various ways they can help your business’s bottom line.


2. Types of Links (Both Good and Bad)

Before you dive into building links of your own, it’s important to understand the three main types of links and why you should really only be thinking about two of them. That’s what this short and sweet chapter is all about.


3. How to Start a Link Building Campaign

Okay, enough with the theory; it’s time for the nitty-gritty. This chapter takes a deep dive into every step of a link building campaign, offering examples and templates you can use to build your own foundation. 


4. Link Building Tactics

Whether through ego bait or guest blogging (yes, that's still a viable tactic!), there are several approaches you can take to building a strong link profile. This chapter takes a detailed run through the tactics you're most likely to employ.


5. Link Building Metrics

Now that the links are rolling in, how do you prove to yourself and your clients that the work is paying off? The metrics outlined in this chapter, along with the tools recommended to measure them, offer a number of options for your reports.


6. The Good, the Bad, and the Ugly of Link Building

If we’re preaching to the choir with this chapter, then we’re thrilled, because spammy links can lead to severe penalties. Google has gotten incredibly good at picking out and penalizing spammy link building techniques, and if this chapter isn’t enough to make you put your white hat on, nothing is.


7. Advanced Link Building Tips and Tricks

Mastered the rest of what the guide has to offer? Earning links faster than John Paulson earns cash? Here are a few tips to take your link building to the next level. Caution: You may or may not find yourself throwing fireballs after mastering these techniques.


The PDF

When we released the Beginner’s Guide to Social Media, there was an instant demand for a downloadable PDF version. This time, it’s ready from the get-go (big thanks to David O’Hara!).

Click here to download the PDF.

Thanks

We simply can’t thank Paddy Moogan enough for writing this guide. His expertise and wisdom made the project possible. Thanks as well to Ashley Tate for wrangling the early stages of the project, Cyrus Shepard for his expert review and a few key additions, Derric Wise and David O’Hara for bringing it to life with their art, and Andrew Palmer for seamlessly translating everything onto the web.

Now, go forth and earn those links!


How to Build Your Own Free Amazon Organic Search Rank Tracker

Posted by n8ngrimm

Do you want a free tool that tracks your organic search rankings in Amazon? Yes? You’re in luck.

I am going to show you how to build your own organic search rank tracking tool using Kimono Labs and Excel.

This is a follow-up to my last post about how to rank well in Amazon, which covered the basic inputs to Amazon's ranking algorithm. It received a lot of comments about my rank-tracking prototype in Google Docs; the Moz community is overflowing with smart people who immediately saw the need for a tool to track their progress. As luck would have it, something in Google Sheets broke the day after I published, so I had to replicate the rank tracking tool in Excel using the SEOTools for Excel plugin. The Excel tool is a low-setup way to record your progress, but if you want to track more than a few terms, it is very laborious. I've since built a more (but not completely) automated, scalable way to track rankings, using Kimono Labs to scrape the data and Excel to run the reports.

(Shout out to Benjamin Spiegel for turning me on to Kimono Labs through an excellent Moz post.)

Pros and cons of rank tracking

The death of Google rank tracking has been widely reported, so I feel compelled to review why Amazon rank tracking is both useful and a terrible KPI.

Amazon rank tracking is great because…

  • You get feedback on your content optimization. How else are you going to determine if your content changes actually produce a positive effect?
  • It can provide a possible explanation for increases in listing traffic and sales. Amazon doesn’t provide traffic source data so you’re often left guessing about the source of changes.

Amazon rank tracking is a terrible KPI because…

  • You have no way of assigning a monetary value to a rank. Amazon does not report search query volume, you don't know how well your users convert for each keyword, you don't know the click-through rate at each position, and you don't know what percentage of users use organic search vs. other methods of finding your product.
  • Many factors besides rankings will drive your success on Amazon. Inventory outages, winning the Buy Box, and a good seller rating will impact sales drastically and directly. You can even assign revenue and profit numbers to some of those attributes.

So use rankings as a leading indicator of traffic and sales improvements and to see if your changes are making a difference.

Overview

To build our rank tracking tool, we're going to build the scraper, transform and store the data in Excel, and then build some useful reports with pivot tables and charts.

Build the scraper

Extract structured data from an Amazon search

Kimono Labs has some great documentation on using their tools. If, at any point, you get lost or want to do something slightly different from my scraper, you can find their documentation here. I'm going to show you the fastest way to copy my existing scraper so you can get up and running as quickly as possible.

After you create an account with Kimono Labs and install their bookmarklet or Chrome extension, the first thing you need to build a scraper is a URL to start scraping. I'm using this search in Amazon as my start URL: http://www.amazon.com/s?field-keywords=juicer. It's a basic keyword search for the word "juicer."

Click on the Kimonify bookmarklet, then click on the data model view.

Then click on Advanced

We’re going to make two properties.

To make things faster, you can copy the CSS selectors I use to identify the listing title and the ASIN (Amazon's unique product identifier) from here:

Listing: div > div > div > h3 > a
ASIN: div > div.prod.celwidget
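
If you want to sanity-check what these selectors match before setting up the scraper, a quick script will show you. This is purely an optional aside, not part of the Kimono Labs workflow: it assumes Python with requests and BeautifulSoup installed, and Amazon's markup (and its tolerance for scripted requests) can change at any time.

# Optional sanity check, outside of the Kimono workflow: fetch the same search
# page and see what the two selectors match. Treat the output as a rough guide.
import requests
from bs4 import BeautifulSoup

url = "http://www.amazon.com/s?field-keywords=juicer"
html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

titles = soup.select("div > div > div > h3 > a")    # the "Listing" selector
products = soup.select("div > div.prod.celwidget")  # the "ASIN" selector
print(len(titles), "titles and", len(products), "result blocks found")

# Kimono pairs the two properties row by row, so pair them the same way here;
# as described below, 'name' holds the ASIN and 'id' looks like "result_0".
for rank, (link, prod) in enumerate(zip(titles, products), start=1):
    print(rank, prod.get("name"), link.get_text(strip=True))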

Next we'll select which attributes to scrape from the elements we identified with the selectors. For the Listing property's attributes, we'll select the Text Content and href, then click Apply.

For the ASIN attribute, we’ll select id and name. Deselect the other attributes that are selected by default, then click Apply.

So long as Amazon hasn’t changed the number of results they display by the time you are reading this, the two yellow circles at the top of the toolbar will say 15. That means that for each property defined, Kimono Labs has identified 15 different instances on the page. Does your screen look like this? If so, click Save.

Give your scraper a fancy name, tag it if you want, and decide how often you want it to run. I set mine to run daily. Kimono Labs will store a new version of the data every time it runs so if you don’t record it one day, the older data will still be there. I could have it scrape hourly but then it’s more laborious to go back through the data and find the version I want to save.

Click on the link to view your scraper. To verify that the data is gathering correctly, click on the Preview Results tab and select the CSV endpoint. You should see the title in the Listing.text field, a link to the listing in Listing.href, the ASIN in ASIN.name, and the rank in ASIN.id.

Finally, to make sure that Kimono Labs is gathering and saving data correctly, go to the API Detail tab and switch Always Save to On.

Then go to Pagination/Crawling and make sure crawling is turned on.

Congratulations! You just made a scraper that will record the ranking of every product for the keyword “juicer” every single day!

Which types of searches do you want to monitor?

There are many types of searches in Amazon. You can search for a keyword, a brand, a category, or any combination of those. I'll explain the URL parameters used to generate the searches so you can track whichever rankings are most important to your business. You will use these parameters to construct your list of URLs to crawl in Kimono Labs.

To start with, this URL can be used as a base for all Amazon searches: http://www.amazon.com/s. We will add parameter name-value pairs to the end to construct our search.

Each parameter is listed below with an example value and a description.

  • field-keywords (example: Juicer): Add any keyword that you want to track.
  • field-brandtextbin (example: Breville): Add any brand name. It must exactly match the brand name listed for the product in Amazon.
  • node (example: 284507): Amazon's ID number for a category. You can look through this list of Amazon's top-level category nodes, download the most relevant Browse Tree Guide for each node, or simply navigate to the category and find it in the URL.
  • page (example: 2): If you want to scrape beyond the first page, you'll need to list a new URL for every page you want to scrape.

As an example, here’s the search for the keyword Juicer, with a brand name of Breville, in the Food & Kitchen category, page 2.

http://www.amazon.com/s?field-keywords=juicer&field-brandtextbin=breville&node=284507&page=2.

Here are a few notes that will be helpful (even critical) as you construct your searches; a short script for assembling the URLs follows the list.

  • Place a question mark (?) before your first parameter
  • Separate subsequent parameters with an ampersand (&)
  • You cannot search for a brand by itself; it can only be used in conjunction with a keyword or a node. I don’t know why.
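
Here is a rough sketch of how the URL list could be generated in Python rather than by hand. The parameter combinations are only the examples from above; build whatever combinations matter to you and paste the printed URLs into the "List URLs to Crawl" field.

# Rough sketch for generating the search URLs to paste into Kimono Labs.
# Swap in your own keywords, brands, nodes and the number of pages to track.
from urllib.parse import urlencode

BASE = "http://www.amazon.com/s"

searches = [
    {"field-keywords": "juicer"},
    {"field-keywords": "juicer", "field-brandtextbin": "breville", "node": "284507"},
]

urls = []
for params in searches:
    for page in range(1, 3):  # one URL per results page you want to scrape
        urls.append(BASE + "?" + urlencode({**params, "page": page}))

print("\n".join(urls))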

Once you create every search URL, add them to the “List URLs to Crawl” field in Kimono Labs on the Pagination/Crawling tab.

Transform and store the data in Excel

Now that we’re scraping and storing rankings data for your searches every day, we want to display the data in a useful format. You could talk to a developer to hook into your Kimono Labs API, or you can download the data as a CSV and store it in Excel.

I’ll use this Excel template to transform my data into a more readable format, store the data, and create reports.

Transform

First, download the data from your Kimono Labs endpoint or results preview.

Paste the data into cell A2 of the Excel file. If the data ends up filling only the first column, go to Data >> Text to Columns. Select Delimited, click Next, select Comma, and click Finish. Your data should end up looking like this.

I use the table on the right to transform the data in a few key ways. I’ll explain each.

ASIN: I don't transform this data; I just copy it as is. If it shows a number instead of an alphanumeric string, that's an ISBN, and the ranking item is probably a book, movie, or CD.

Title: Again, I’m not transforming the title, just copying it over.

Keyword: The keyword is included in the Listing.href on the left as part of the URL. I made a really long formula to extract just the keyword and replace plus symbols with spaces.

Date: This uses Excel's TODAY() function, which simply returns the current day's date. If you're adding data from a previous day, replace this with the correct date.

Rank: I remove the "result_" prefix from the beginning of the ASIN.id field on the left and add one, since the rankings start at zero.
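
If you would rather script this transform than maintain the Excel formulas, here is a rough equivalent using pandas. It assumes the CSV downloaded from Kimono Labs has the columns described earlier (Listing.text, Listing.href, ASIN.name, ASIN.id); the file names are placeholders, and the exact query parameter that carries the keyword may differ in your export.

# A scripted version of the transform described above.
from datetime import date
from urllib.parse import parse_qs, urlparse

import pandas as pd

raw = pd.read_csv("kimono_export.csv")  # placeholder file name

def keyword_from_href(href):
    # The listing URL carries the search keyword as a query parameter;
    # parse_qs also converts the plus symbols back into spaces for us.
    query = parse_qs(urlparse(str(href)).query)
    for key in ("keywords", "field-keywords"):
        if key in query:
            return query[key][0]
    return ""

clean = pd.DataFrame({
    "ASIN": raw["ASIN.name"],
    "Title": raw["Listing.text"],
    "Keyword": raw["Listing.href"].apply(keyword_from_href),
    "Date": date.today().isoformat(),
    # ASIN.id looks like "result_0", "result_1", ... and ranks start at zero,
    # so strip the prefix and add one.
    "Rank": raw["ASIN.id"].str.replace("result_", "", regex=False).astype(int) + 1,
})

# Append to the running history (create historical.csv with a header row first).
clean.to_csv("historical.csv", mode="a", header=False, index=False)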

Store historical data

If you continue adding data day after day, you can begin to see changes in rankings. To add a day's data to the history, copy the data from the table on the right (not the headers).

Then go to the Historical sheet and paste values at the bottom of the table. You just want to paste values, not formulas:

The table should automatically expand to include the new data. If not, click on the corner of the table and drag it down to include the new data. Next, click on the Data tab in the ribbon, then click Refresh All; the pivot tables in the Table and Graph sheets will now include the new data.

Build some useful reports with pivot tables and charts

In the Excel template, I added a pivot table and pivot chart that you can use to report on the data. The Historical data sheet has six days of rankings data. You may want to skip this section and just watch Annie Cushing's videos on creating pivot charts and pivot tables. Once you are comfortable with pivot charts and tables, you can look at the data however you want.

Here are a few useful rankings charts and tables I use to look at rankings data. I’ve included the visualization as well as my settings in the screenshot.

All ranked keywords for a product over time

This chart displays all the keyword rankings for one product over time. I use the ASIN to filter the chart instead of the title, because the title for a listing can change over time but the ASIN won’t. This product ranks for both of our keywords and has moved around slightly throughout the six days we’ve tracked (there are no rankings on 7/31 and 8/1 for “masticating juicer” because I was not scraping data for this keyword on those days).

Two competing products for one keyword

This chart compares two products for one keyword. If you are monitoring a key competitor or have multiple products for your brand, this is a useful view. I used the filters to select the keyword “juicer” and the two products.

Rank by day

To quickly pick out which products improved or lost ranking over a time period, I use a table. In the row labels I group each rank by keyword then ASIN. I add Title below ASIN so I can recognize which product is moving up or down.

To the right of the table, I added a formula to subtract the rank on 8/2 from the rank on 8/5 (=G7-D7). To make it more obvious which products improved and which did worse, I added conditional formatting to highlight negative numbers with red and positive numbers with green.
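
The same rank-by-day view can also be produced in code. This sketch assumes the historical data has been saved to a CSV with the columns used in the template (Keyword, ASIN, Title, Date, Rank); the file name is a placeholder.

# The rank-by-day table from a script instead of a pivot table.
import pandas as pd

hist = pd.read_csv("historical.csv", parse_dates=["Date"])

rank_by_day = hist.pivot_table(
    index=["Keyword", "ASIN", "Title"],
    columns="Date",
    values="Rank",
    aggfunc="min",  # if a product appears twice for a keyword, keep its best rank
)

# Change over the tracked period (last day minus first day), mirroring =G7-D7.
rank_by_day["Change"] = rank_by_day.iloc[:, -1] - rank_by_day.iloc[:, 0]
print(rank_by_day)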

Is there another view you’d like me to demonstrate? Ask me in the comments.

Limitations

This system for tracking and reporting rankings is not perfect.

You must manually download the data from Kimono Labs to Excel to run a report. That’s a bit clunky. This process could be automated with some code.
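
For example, a small script run on a schedule could pull the CSV endpoint Kimono Labs exposes for your API (the same endpoint referenced on the Preview Results tab) and save a dated copy locally. This is only a sketch; the endpoint URL below is a placeholder, so use the one shown in your own account.

# Placeholder sketch of that automation: fetch the CSV endpoint and save a
# dated copy. The endpoint URL below is not real.
import datetime

import requests

CSV_ENDPOINT = "https://example.com/your-kimono-csv-endpoint"  # placeholder

response = requests.get(CSV_ENDPOINT, timeout=30)
response.raise_for_status()

filename = "rankings-%s.csv" % datetime.date.today().isoformat()
with open(filename, "w", encoding="utf-8") as f:
    f.write(response.text)
print("Saved", filename)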

Kimono Labs is still in a free beta, so stability is an issue. Scraping, as a general rule, fails fairly often, and I've experienced spotty page loading. They do allow you to scrape and store an impressive amount of data for free, though. If you know of a better free tool, be sure to let everyone know in the comments.

Excel itself is a limitation. If you get beyond 500,000 rows of data it will start to crawl. That may sound like a lot, but if you want to track 5 pages of results for 100 keywords every day, you will generate 8,000 rows of data per day. Excel is not a long-term solution.

My company is working on a rankings tool that will address all of these limitations, but it is a couple months away. If you want an email when it's ready, fill out the form here. For now, I'm living with the limitations of this system and getting some great insight.

Questions?

This post has a really long list of steps so if you have an issue, let me know via email (my first name @dnaresponse.com) or in the comments.


How to Use Facebook for Targeted Content Promotion

Posted by Paddy_Moogan
As much as content and advertising agencies would like you to believe it, content produced by a business doesn't just go viral on its own. There is often something that pushes it really, really hard when it first goes live whic…

Continue Reading →

Link Echoes (a.k.a. Link Ghosts): Why Rankings Remain Even After Links Disappear – Whiteboard Friday

Posted by randfish
One of the more interesting phenomena illustrated by Rand’s IMEC Lab project is that of “link echoes,” sometimes referred to as “link ghosts.” The idea is that if we move a page up in rankings by pointing links to it, and then remov…

Continue Reading →

Is that Mind-Blowing Title Blowing Your Credibility? You Decide

Posted by Isla_McKetta

tantalus

What if I told you I could teach you to write the perfect headline? One that is so irresistible every person who sees it will click on it. You’d sign up immediately and maybe even promise me your firstborn.

But what if I then told you not one single person out of all the millions who will click on that headline will convert? And that you might lose all your credibility in the process. Would all the traffic generated by that “perfect” headline be worth it?

Help us solve a dispute

It isn't really that bad, but with all the emphasis lately on headline science and the curiosity gap, Trevor (your faithful editor) and I (a recovering copywriter) started talking about the importance of headlines and what their role should be with regard to content. I'm for clickability (as long as there is strong content to back the headline) and, if he has to choose, Trevor is for credibility (with an equal emphasis on the quality of the eventual content).

credible vs clickable headlines

What’s the purpose of a headline?

Back in the good ol’ days, headlines were created to sell newspapers. Newsboys stood on street corners shouting the headlines in an attempt to hawk those newspapers. Headlines had to be enough of a tease to get readers interested but they had to be trustworthy enough to get a reader to buy again tomorrow. Competition for eyeballs was less fierce because a town only had so many newspapers, but paper cost money and editors were always happy to get a repeat customer.

Nowadays the competition for eyeballs feels even stiffer because it's hard to get noticed in the vast sea of the internet. It's easy to feel a little desperate. And it seems like the opportunity cost of turning away a customer is much lower than it was before. But aren't we still treating content as a product? Does the quality of that product matter?

The forbidden secrets of clickable headlines

There’s no arguing that headlines are important. In fact, at MozCon this year, Nathalie Nahai suggested an 80:20 ratio of energy spent on headline to copy. That might be taking things a bit far, but a bad (or even just boring) headline will tank your traffic. Here is some expert advice on writing headlines that convert: 

  • Nahai advises that you take advantage of psychological trigger words like, “weird,” “free,” “incredible,” and “secret” to create a sense of urgency in the reader. Can you possibly wait to read “Secret Ways Butter can Save Your Life”?
  • Use question headlines like “Can You Increase Your Sales by 45% in Only 5 Minutes a Day?” that get a reader asking themselves, “I dunno, can I?” and clicking to read more.
  • Key into the curiosity gap with a headline like “What Mother Should Have Told You about Banking. (And How Not Knowing is Costing You Friends.)” Ridiculous claim? Maybe, but this kind of headline gets a reader hooked on narrative and they have to click through to see how the story comes together.
  • And if you’re looking for a formula for the best headlines ever, Nahai proposes the following:
    Number/Trigger word + Adjective + Keyword + Promise = Killer Headline.

Many readers still (consciously or not) consider headlines a promise. So remember, as you fill the headline with hyperbole and only write eleven of the twelve tips you set out to write, there is a reader on the other end hoping butter really is good for them.

The headline danger zone

This is where headline science can get ugly. Because a lot of “perfect” titles simply do not have the quality or depth of content to back them.

Those types of headlines remind me of the Greek myth of Tantalus. For sharing the secrets of the gods with the common folk, Tantalus was condemned to spend eternity surrounded by food and drink that were forever out of his reach. Now, content is hardly the secrets of the gods, but are we tantalizing our customers with teasing headlines that will never satisfy?

buzzfeed headlines

For me, reading headlines on BuzzFeed and Upworthy and their ilk is like talking to the guy at the party with all those super wild anecdotes. He's entertaining, but I don't believe a word he says, soon wish he would shut up, and can't remember his name five seconds later. Maybe I don't believe in clickability as much as I thought…

So I turn to credible news sources for credible headlines.

washington post headlines

I'm having trouble deciding at this point if I'm more bothered by the headline at The Washington Post, the fact that they're covering that topic at all, or that they didn't really go for true clickbait with something like "You Won't Believe the Bizarre Reasons Girls Scream at Boy Band Concerts." But one (or all) of those things makes me very sad.

Are we developing an immunity to clickbait headlines?

Even Upworthy is shifting its headline creation tactics a little. That doesn't mean they are abandoning clickbait; it just means they've seen their audience get tired of the same old tactics, so they're looking for new and better ones to keep you engaged and clicking.

The importance of traffic

I think many of us would sell a little of our soul if it would increase our traffic, and of course those clickbaity curiosity gap headlines are designed to do that (and are mostly working, for now).

But we also want good traffic. The kind of people who are going to engage with our brand and build relationships with us over the long haul, right? Back to what we were discussing in the intro, we want the kind of traffic that’s likely to convert. Don’t we?

As much as I advocate for clickable headlines, the riskier the headline I write, the more closely I compare overall traffic (especially returning visitors) to click-throughs, time on page, and bounce rate to see if I’ve pushed it too far and am alienating our most loyal fans. Because new visitors are awesome, but loyal customers are priceless.

Headline science at Moz

At Moz, we're trying to find the delicate balance between attracting all the customers and attracting the right customers. In my first week here, when Trevor and Cyrus were polling readers on which headline they'd prefer to read, I advocated for a more clickable version. See if you can pick out which is mine…

headline poll

Yep, you guessed it. I suggested “Your Google Algorithm Cheat Sheet: Panda, Penguin, and Hummingbird” because it contained a trigger word and a keyword, plus it was punchy. I actually liked “A Layman’s Explanation of the Panda Algorithm, the Penguin Algorithm, and Hummingbird,” but I was pretty sure no one would click on it.

Last time I checked, that has more traffic than any other post for the month of June. I won’t say that’s all because of the headline—it’s a really strong and useful post—but I think the headline helped a lot.

But that’s just one data point. I’ve also been spicing up the subject lines on the Moz Top 10 newsletter to see what gets the most traffic.

most-read subject lines

And the results here are more mixed. Titles I felt were much more clickbaity, like "Did Google Kill Spam?…" and "Are You Using Robots.txt the Right Way?…", underperformed compared to the straight-up "Moz Top 10."

Meanwhile, the most clickbaity "Groupon Did What?…" and the two about Google selling domains (which was accurate, but suggested that Google was selling its own domains, which worried me a bit) have the most opens overall.

Help us resolve the dispute

As you can tell, I have some unresolved feelings about this whole clickbait versus credibility thing. While Trevor and I have strong opinions, we also have a lot of questions that we hope you can help us with. Blow my mind with your headline logic in the comments by sharing your opinion on any of the following:

  • Do clickbait titles erode trust? If yes, do you ever worry about that affecting your bottom line?
  • Would you sacrifice credibility for clickability? Does it have to be a choice?
  • Is there such thing as a formula for a perfect headline? What standards do you use when writing headlines?
  • Does a clickbait title affect how likely you are to read an article? What about sharing one? Do you ever feel duped by the content? Does that affect your behavior the next time?  
  • How much of your soul would you sell for more traffic?


How To Do a Content Audit – Step-by-Step

Posted by Everett

This is Inflow's process for doing content audits. It may not be the "best" way to do them every time, but we've managed to keep it fairly agile in terms of how you choose to analyze, interpret and make recommendations on the data. The fundamental parts of the process remain about the same across numerous types of websites, no matter what their business goals are: collect all of the content URLs on the site and fetch the data you need about each URL, then analyze the data and provide recommendations for each content URL. Theoretically it's simple. In practice, however, it can be a daunting exercise if you don't have a plan or process in place. By the end of this post we hope you'll have a good start on both.


The many purposes of a content audit

A content audit can help in a variety of ways, and the approach can be customized for any given scenario. I'll write more about potential "scenarios" and how to approach them below. For now, here are some things a content audit can help you accomplish…

  1. Determine the most effective way to escape a Panda penalty
  2. Determine which pages need copywriting / editing
  3. Determine which pages need to be updated and made more current, and prioritize them
  4. Determine which pages should be consolidated due to overlapping topics
  5. Determine which pages should be pruned off the site, and what the approach to pruning should be
  6. Prioritize content based on a variety of metrics (e.g. visits, conversions, PA, Copyscape risk score…)
  7. Find content gap opportunities to drive content ideation and editorial calendars
  8. Determine which pages are ranking for which keywords
  9. Determine which pages “should” be ranking for which keywords
  10. Find the strongest pages on a domain and develop a strategy to leverage them
  11. Uncover content marketing opportunities
  12. Auditing and creating an inventory of content assets when buying/selling a website
  13. Understanding the content assets of a new client (i.e. what you have to work with)
  14. And many more…

A content audit case study

8 Times the Leads

Inflow's technical SEO specialist Rick Ramos performed an earlier version of our content audit last year for Phases Design Studio, who graciously permitted us to share their case study. After taking an inventory of all content URLs on the domain, Rick outlined a plan to noindex/follow many of the older blog posts that were no longer relevant and weren't candidates for a content refresh, and to remove them from the sitemap. The site also had a series of campaign-based landing pages dating back to 2006. These pages typically had a life cycle of a few months, but were never removed from the site or Google's index. Rick recommended that these pages be 301 redirected to a few evergreen landing pages that would be updated whenever a new campaign was launched, a tactic that works particularly well on seasonal pages for eCommerce sites (e.g. 2014 New Years Resolution Deals). Still more pages were candidates to be updated/refreshed, or improved in other ways.

The results

Shortly after the recommendations were implemented, the client called to ask if we knew why they were suddenly seeing eight times the number of leads they were used to seeing month over month.

Analytics traffic graph after a content audit


Why we think it worked

There are several probable reasons why this approach worked for our client. Here are a few of them…

  1. The ratio of useful, relevant, unique content to thin, irrelevant, duplicate content was greatly improved.
  2. The PageRank from dozens of expired campaign landing pages was consolidated into a relatively few evergreen pages (via 301 redirects and consolidation of internal linking signals).
  3. Crawl budget is now being used more efficiently.

This improved the overall customer experience on the site, as well as organic search rankings for important topic areas that were consolidated.

Since then we have refined and improved the process, and have been performing these audits on a variety of sites with great success. The approach works particularly well for Panda recoveries on large-scale content websites, and for prioritizing which eCommerce product copy needs to be rewritten first.

A 50,000-foot overview of our process

Inflow’s content auditing process changes depending on the client’s goals, needs and budget. Generally speaking, however, here is how we approach it…

  1. Gather all available URLs on the site

    • Use Screaming Frog (or another crawl tool), CMS Exports, Google Analytics and Webmaster Tools
  2. Import URLs into a tool that gathers KPIs and other data for each URL

    • Use URL Profiler, a custom in-house tool, or other data-gathering resources

      • Things to gather: Moz metrics, Google Analytics KPIs, GWT data, Majestic SEO metrics, titles, descriptions, word counts, canonical tags…
  3. Analyze the content

    • Choose to keep as-is, improve, remove or consolidate.

      • Write detailed strategies for each.
  4. Perform keyword research

    • Optional: Provide relevancy scores, topic buckets and buying stage/s for each keyword
    • Match keywords to pages that already rank within a keyword matrix
    • Match non-ranking keywords to the best page for guiding on-page changes
  5. Do content gap ideation

    • Use keywords that did not have an appropriate page match to fill in the Content Gap tab.

      • Optional: Incorporate buying cycles into content gap ideation
  6. Write the content strategy

    • Summarize the findings and present a strategy for optimizing existing pages, creating new pages to fill gaps, explain how many pages are being removed, redirected, etc…


Each piece of the process can be customized for the needs of a particular website. 

For example, when auditing a very large content site with lots of duplicate/thin/overlapping content issues, we may skip the keyword research and content gap analysis parts of the process entirely and focus on pruning the site of those types of pages and improving the rest. Alternatively, a site without much content may need to focus on keyword research and content gaps. Other sites may be looking specifically for content assets that they can improve, repeat in new ways or leverage for newer content. One example of a very specific goal would be to identify interlinking opportunities from strong, older pages to promising, newer pages. For now it is sufficient to know that the framework can be changed as needed, in a way that could dramatically affect where you spend your time in the process, or even which steps you may want to skip altogether.

Our documents

There are several major steps in the content auditing process that require various documents. While I’m not providing links to our internal SOP documentation (mainly because it’s still evolving), I will describe each document and provide screenshots and links to examples / templates so you can have a foundation around which to customize one for your own needs.

Content audit scenarios

We keep a list of recommendations for common scenarios to guide our approach to content audits. While every situation is unique in its own ways, we find this helps us get 90% of the way to the appropriate strategy for each client much faster. I discuss this in more detail later, but if you’d like to take a peek click here.

Content audit dashboard spreadsheet

We were originally working within Google Docs, but as we started pulling data in from more sources and performing more VLOOKUPs, the spreadsheet would load so slowly on big sites as to make it nearly impossible to complete an audit. For this reason we have recently moved the entire process over to Excel, though this template we're providing is in Google Docs format. Below are some of the tabs you may want in this spreadsheet…

The “Content Audit” tab

This tab within the dashboard is where most of the work is done. Other tabs pull data from this one via VLOOKUP. Whether the data is fetched by API and compiled by one tool (e.g. URL Profiler) or exported manually from many tools and compiled by VLOOKUP, the end result should be that you have all of the metrics needed for each URL in one place, so you can sort by various metrics to discern patterns, spot opportunities, and make educated decisions about how to handle each piece of content and about the content strategy of the site as a whole.

Content Audit Metrics Screenshot

You can customize the process to include whatever metrics you’d like to use. Here are the ones we’ve ended up with after some experimentation, as well as the source of the data:

  • Action (internal)

    • Leave As-Is
    • Improve
    • Consolidate
    • Remove
  • Strategy (internal)
    • A more detailed version of “action”. Example: Remove and 301 redirect to /another-page/.
  • Page Type (internal via URL patterns or CMS export)
    • This is an optional step for certain situations. Example: Article, Product, Category…
  • Source (original source of the URL, e.g. Google Analytics, Screaming Frog)
  • CopyScape Risk Score (copyscape API)
  • Title Tag (Screaming Frog)
  • Title Length (Screaming Frog)
  • Meta Description (Screaming Frog)
  • Word Count (Screaming Frog)
  • GA Entrances (Google Analytics API)
  • GA Organic Entrances (Google Analytics API)
  • Moz Links (Moz API)
  • Moz Page Authority (Moz API)
  • MozRank (Moz API)
  • Moz External Equity Links (Moz API)
  • Stumbleupon (Social Count API)
  • Facebook Likes (Social Count API)
  • Facebook Shares (Social Count API)
  • Google Plus One (Social Count API)
  • Tweets (Social Count API)
  • Pinterest (Social Count API)

Screenshot of Content Audit Dashboard

Our recommendations typically fall into one of four “Action” categories: “Keep As-Is”, “Remove”, “Improve”, or “Consolidate”. Further details (e.g. remove and 404, or remove and 301? If 301, to where?) are provided in a column called “Strategy”. Some URLs (the important ones) will have highly customized strategies, while others may have been bulk processed, meaning thousands could share the same strategy (e.g. rewriting duplicate product description copy). The “Action” column is limited in choices so we can sort the data effectively (e.g. see all pages marked as “removed”) while the “Strategy” column can be more free-form and customized to the URL (e.g. consolidate /buy-blue-widgets/ content into /buying-blue-widgets/ and 301 redirect the former to the latter to avoid duplicating the same topic).

The “Keyword Research” tab

This tab includes keywords gathered from a variety of sources, including brainstorming for seed keywords, mining Google Webmaster Tools, PPC campaigns, the AdWords Keyword Planner and several other tools. Search Volume and Ad Competition (not shown in this screenshot) are pulled from Google's Keyword Planner. The average ranking position comes from GWT, as does the top ranking page. The relevancy score is something we typically ask the client to provide once we've cleaned out most of the obvious junk keywords.

Keyword Research Screenshot

The “Keyword Matrix” tab

This tab includes URLs for important pages, and those that are ranking for – or are most qualified to rank for – important topics. It essentially matches up keywords with the best possible page to guide our copywriting and on-page optimization efforts.

Sometimes the KWM tab plays an important role in the process, like when the site is relatively new or unoptimized. Most of the time it takes a back-seat to other tabs in terms of strategic importance.

The “Content Gaps” tab

This is where we put content ideas for high-volume, highly relevant keywords for which we could not find an appropriate page. Often it involves keywords that represent stages in the buying cycle or awareness ladder that have been overlooked by the company. Sometimes it plays an important role, such as with new and/or small sites. Most of the time this also takes a back-seat to more important issues, like pruning.

The “Prune” tab

If a URL was marked "Remove" or "Consolidate," it should be on this tab. Whether it is supposed to be removed and 301 redirected, canonicalized elsewhere, consolidated into another page, allowed to stay up with a robots "noindex" meta tag, or removed and allowed to 404/410 (or any number of other "strategies" you might come up with), these are the pages that will no longer exist once your recommendations have been implemented. I find this to be a very useful tab. For example, one could export this tab, send it to a developer (or a company like WP Curve), and have someone get started on most or all of the implementation. Our mantra for low-quality, under-performing content on sites that may have had a Panda-related traffic drop is to improve it or remove it.

“Imported Data” tabs

In addition to the tabs above, we also have data tabs in the spreadsheet that house exported data from the various sources, so we can perform VLOOKUPs based on the URL to populate data in other tabs. These data tabs include:

  • GWT Top Queries
  • GWT Top Pages
  • CopyScape Scores (typically for up to 1,000 URLs)
  • Keyword Data

The more data that can be compiled by a tool like URL Profiler, the fewer data tabs you’ll need and the faster this entire process will go. Before we built the internal tool to automate parts of the process, we also had tabs for GA data, Moz data, and the initial Screaming Frog export.

Vlookup Master!

If you don't know how to do a VLOOKUP, there are plenty of online tutorials for Excel and Google Docs spreadsheets. Here's one I found useful for Excel. Alternatively, you could import all of the data into the tabs and ask someone more spreadsheet-savvy on your team to do the lookups. Our resident spreadsheet guru is Caesar Barba, and he has great hair. Below is an example of a simple VLOOKUP used to bring the "Action" over from the Content Audit tab for a URL in the Keyword Matrix tab…

=VLOOKUP(A2,'Content Audit'!A:C,3,FALSE)

Content Strategy

The Content Audit Dashboard is just what we need internally: A spreadsheet crammed with data that can be sliced and diced in so many useful ways that we can always go back to it for more insight and ideas. Some clients appreciate it as well, but most are going to find the greater benefit in our final content strategy, which includes a high-level overview of our recommendations from the audit.

Content Strategy Screenshot from Inflow

Recommended exports and data sources

There are many options for getting the data you need into one place so you can simultaneously see a broad view of the entire content situation, as well as detailed metrics for each URL. For URL gathering we use Screaming Frog and Google Analytics. For data we use Google Webmaster Tools (GWT), Google Analytics (GA), Social Count (SC), Copyscape (CS), Moz, CMS exports, and a few other data sources as needed.

However, we've been experimenting with using URL Profiler instead of our internal tool to pull all of these data sources together much faster. URL Profiler is a few hundred bucks and is very powerful. It's also somewhat of a pain to set up the first time, so be prepared for several hours of wrangling API keys before getting all of the data you need.

No matter how you end up pulling it all together in the end, doing it yourself in Excel is always an option for the first few times.

A step-by-step example of our process

Below is the step-by-step process for an "average" client – whatever that means. Let's say it is a medium-sized eCommerce client with about 800-900 pages indexed by Google, including category pages, product pages, blog posts and other pages. They don't have an existing penalty that we know of, but could certainly be at risk of being affected by Panda due to some thin, overlapping, duplicate, outdated and irrelevant content on the site.

Step 1: Assess the situation and choose a scenario

Every situation is different, but we have found common similarities based on two primary factors: the size of the site and its content-based penalty risk. Below is a screenshot from our list of recommended strategies for common content auditing scenarios, which can be found here on GoInflow.com.

Inflow Content Audit Scenarios

Each of the colored boxes drops down to reveal the strategy for that scenario in more detail.

Hat tip to Ian Lurie's Marketing Stack for design inspiration.

The site described above would fall into the second box within the purple column (Focus: Content Audit with an eye to Improve and/or Prune, followed by KWM for key pages). Here is the reasoning behind that…

The site is in danger of a penalty (though it does not appear to have one "yet"), so we follow the Panda mantra: Improve it or Remove it. The size of the site determines which of those two (improve or remove) gets the most attention. Smaller sites need less pruning (scalpel), while larger sites need much more (hatchet). Smaller sites often need some keyword research to determine if they are covering all of the topic areas for various stages in the customer's buying cycle, while larger sites typically have the opposite problem: too many pages covering overlapping topic areas with low-quality (thin, duplicate, irrelevant, outdated, poorly written, automated…) content. Such a site would not require the keyword research, and would therefore not get a keyword matrix or content gap analysis, as the focus would be primarily on pruning the site.

Our focus in this example will be to audit the content with an eye to improving and/or removing low-performing pages, followed by keyword research and a keyword matrix for the primary pages, including the home page, categories, blog home and key product pages, as well as certain other topical landing pages.

As it turns out, this hypothetical website has lots of manufacturer-supplied product descriptions. We're going to need to prioritize which ones get rewritten first because the client does not have the cash flow to do them all at once. When budget and time are a concern, we typically shoot for the 80/20 rule: write great content for the top 20% of pages right away, and do the other 80% over the course of 6-12 months as time and budget permit.

Because this site doesn't have an existing penalty, we will recommend that all pages stay indexed. If they already had a penalty, we would recommend they noindex,follow the bottom 80% of pages, gradually releasing them back into the index as they are rewritten. This may not be the way you choose to handle the same situation, which is fine; the point is that you can easily sort the pages by any number of metrics to determine a relative "priority." The bigger the site and the tighter the budget, the more important it is to prioritize what gets worked on first.
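
If the audit dashboard is sitting in a CSV, the 80/20 split itself is easy to script. The sketch below is only an illustration of the idea, not Inflow's actual weighting: it assumes columns named as in the metrics list earlier ("GA Organic Entrances", "Moz Page Authority") and ranks pages by a simple blend of the two.

# Illustration only: rank audited pages by a simple blend of organic entrances
# and page authority, then split the list 80/20 for rewrite scheduling.
import pandas as pd

audit = pd.read_csv("content_audit.csv")  # placeholder export of the dashboard

# Normalize each metric so neither dominates, then sum them into a priority score.
for col in ("GA Organic Entrances", "Moz Page Authority"):
    audit[col + " (norm)"] = audit[col] / audit[col].max()

audit["Priority"] = audit["GA Organic Entrances (norm)"] + audit["Moz Page Authority (norm)"]
audit = audit.sort_values("Priority", ascending=False)

cutoff = int(len(audit) * 0.2)
audit.head(cutoff).to_csv("rewrite_now.csv", index=False)     # top 20%: rewrite first
audit.iloc[cutoff:].to_csv("rewrite_later.csv", index=False)  # other 80%: schedule out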

Causes of Content-Related Penalties

For the purpose of a content audit we are only concerned with content-related penalties (as opposed to links and other off-page issues), which typically fall under three major categories: Quality, Duplication, and Relevancy. These can be further broken down into other issues, which include – but are not limited to:

  • Typical low quality content

    • Poor grammar, written primarily for search engines (includes keyword stuffing), unhelpful, inaccurate…
  • Completely irrelevant content
    • OK in small amounts, but often entire blogs are full of it.
    • A typical example would be a “linkbait” piece circa 2010.
  • Thin / Short content
    • Glossed over the topic, too few words, all image-based content…
  • Curated content with no added value
    • Comprised almost entirely of bits and pieces of content that exists elsewhere.
  • Misleading Optimization
    • Titles or keywords targeting queries that the content doesn't answer or deserve to rank for
    • Generally not providing the information the visitor was expecting to find
  • Duplicate Content
    • Internally duplicated on other pages (e.g. categories, product variants, archives, technical issues…)
    • Externally duplicated (e.g. manufacturer product descriptions, product descriptions duplicated in feeds used for other channels like Amazon, shopping comparison sites and eBay, plagiarized content…)
  • Stub Pages (e.g. “No content is here yet, but if you sign in and leave some user-generated-content then we’ll have content here for the next guy”. By the way, want our newsletter? Click an AD!)
  • Indexable internal search results
  • Too many indexable blog tag or blog category pages
  • And so on…
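
Internal duplication in particular lends itself to a quick programmatic first pass. Here is a rough Python sketch that flags suspiciously similar pairs of pages; the page_text dictionary and the 0.85 threshold are placeholders for illustration, not part of the original process, and pairwise comparison is only practical for a few hundred pages:

    from difflib import SequenceMatcher
    from itertools import combinations

    # Placeholder: map each URL to its extracted body text (e.g. from your crawl).
    page_text = {
        "/widgets/blue/": "Blue widgets are ...",
        "/widgets/blue-large/": "Blue widgets are ...",
        "/about/": "Our company was founded ...",
    }

    THRESHOLD = 0.85  # pairs above this similarity are worth a manual look

    for (url_a, text_a), (url_b, text_b) in combinations(page_text.items(), 2):
        score = SequenceMatcher(None, text_a, text_b).ratio()
        if score >= THRESHOLD:
            print(f"Possible duplicate: {url_a} vs {url_b} ({score:.0%} similar)")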

If you are unsure about the scale of the site’s content problems, feel free to do step 2 before deciding on a scenario…

Step 2: Scan the site

We use Screaming Frog for this step, but you can adapt this process to whatever crawler you want. In the spider’s “Basic” and “Advanced” configuration tabs, notice that “crawl all subdomains” is checked. This is optional, depending on what you’re auditing. We are respecting “meta robots noindex”, “rel = canonical” and robots.txt. Also notice that we are not crawling images, CSS, JS, Flash, external links… This type of thing is what we look at in a Technical SEO Audit, but it would needlessly complicate a “Content” Audit. What we’re looking for here are all of the indexable HTML pages that might lead a visitor to the site from the SERPs, though the crawl may certainly lead to the discovery of technical issues.

Export the complete list of URLs and related data from Screaming Frog into a CSV file.
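
If you want to pre-filter that export before moving on, a few lines of Python can reduce it to just the indexable HTML pages. The column names below (“Address”, “Content”, “Status Code”, “Meta Robots 1”) are based on a typical Screaming Frog “Internal All” export and may differ in your version, so treat this as a sketch:

    import pandas as pd

    crawl = pd.read_csv("internal_all.csv")

    # Keep only indexable HTML pages: HTML content type, 200 status, no noindex.
    is_html = crawl["Content"].str.contains("text/html", na=False)
    is_200 = crawl["Status Code"] == 200
    not_noindexed = ~crawl["Meta Robots 1"].fillna("").str.contains("noindex", case=False)

    pages = crawl[is_html & is_200 & not_noindexed]
    pages[["Address"]].to_csv("urls_for_audit.csv", index=False)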

Step 3: Import the URLs and start the tool

We have our own internal “Content Auditing Tool”, which takes URLs and data from Screaming Frog and Google Analytics, de-dupes them, and pulls in data from Google Webmaster Tools, Moz, Social Count and Copyscape for each URL. The tool is a bit buggy at times, however, so I’ve been experimenting with
URL Profiler, which can essentially accomplish the same goal with fewer steps and less upkeep. We need the “Agency” version, which is about $400 per year, plus tax. That’s not too bad, considering we’d already spent several thousand on our internal tool by the time Gareth Brown released URL Profiler publicly. :-/

Below is a screenshot of what you’ll see after downloading the tool. I’ve highlighted the boxes we currently check, though it depends on the tools/APIs to which you already subscribe and will differ by user. We’ve only just started playing with uClassify for the purpose of semi-automating our topic bucketing of pages, but I don’t have a process to share yet (feel free to comment with advice)…

Right-click on the URL List box and choose “Import From File”, then choose the ScreamingFrog export or any other list of URLs. There are also options to import from the clipboard or XML sitemap. Full documentation for URL Profiler
can be found here. Below are two output screenshots to give you an idea of what you’re going to end up with…

The output changes depending on which boxes you check and what API access you have.

Step 4: Import the tool output into the dashboard

As described in the 50,000 foot overview above, we have a spreadsheet template with multiple tabs, one of which is the “Content Audit” tab.
The tool output gets brought into the Content Audit tab of the dashboard. Our internal tool automatically adds columns for Action, Strategy, Page Type and Source (of the URL). You can also add these to the tab after importing the URL Profiler output. Page Type and URL Source are optional, but Action and Strategy are key elements of the process.

Our hypothetical client requires a Keyword Matrix. However, if your “scenario” does not involve keyword research (i.e. if it is a big site with content penalty risks) you can skip steps 5-7 and move straight to “Step 8 – Time to Analyze and Make Some Decisions”.

Step 5: Import GWT data

Match existing URLs from the content audit to keywords for which they already rank in Google Webmaster Tools

There may be a way to do this with URL Profiler. If so, I haven’t found it yet. Here is what we do to grab the landing page and associated keyword/query data from Google Webmaster Tools, which we then import into two tabs (GWT Top Queries and GWT Top Pages). These tabs are helpful when filling out the Keyword Matrix because they tell you which pages Google is already associating with each ranking keyword. This step can actually be skipped altogether for huge sites with major content problems because the “Focus” is going to be on pruning the site of low quality content, rather than doing any keyword research or content gap analysis.

Instructions for Importing Top Pages from GWT

    • Log into GWT from a Chrome browser
    • Go to Search Traffic —> Search Queries
    • Switch the view to “Top pages” (default is “Top queries”)
    • Change the date range to start as far back as possible (i.e. 3 months)
    • Expand the amount of rows to show to the maximum of 500 rows
      • This will put the s=500 parameter in the URL. Change s=500 to s=10000 or however many rows of data are available
      • See bottom of GWT page (e.g. 1-500 of ####).
    • In the Chrome menu go to View —> Developer —> Javascript Console
    • Copy and paste the following script into the console window and press Enter. (The script is also available as a javascript bookmarklet on Lunametrics.com.)
    • Running the script should expand all of the drop-downs to show the keywords under each “page” URL and then open up a dialog window that will ask you to save a CSV file (more info here and here).
  1. (function(){eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('C=M;k=0;v=e.q(\'1g-1a-18 o-y\');z=16(m(){H(v[k]);k++;f(k>=v.c){15(z);A()}},C);m H(a){a.h(\'D\',\'#\');a.h(\'11\',\'\');a.F()}m A(){d=e.10(\'Z\').4[1].4;2=X B();u=B.W.R.Q(d);7=e.q(\'o-G-O\');p(i=0;i<7.c;i++){d=u.J(7[i]);2.K([d,7[i].4[0].4[0].j])}7=e.q(\'o-G-14\');p(i=0;i<7.c;i++){d=u.J(7[i]);2.K([d,7[i].4[0].4[0].j])}2.N(m(a,b){P a[0]-b[0]});p(i=2.c-1;i>0;i--){r=2[i][0]-2[i-1][0];f(r===1){2[i-1][1]=2[i][1];2[i][0]++}}5="S\\T\\U\\V\\n";9=e.q("o-y-Y");6=0;I:p(i=0;i<9.c;i++){f(2[6][0]===i){E=2[6][1];12{6++;f(6>=2.c){13 I}r=2[6][0]-2[6-1][0]}L(r===1);2[6][0]-=(6)}5+=E+"\\t";l=9[i].4[0].4.c;f(l>0)5+=9[i].4[0].4[0].j+"\\t";17 5+=9[i].4[0].w+"\\t";5+=9[i].4[1].4[0].w+"\\t";5+=9[i].4[3].4[0].w+"\\n";5=5.19(/"|\'/g,\'\')}x="1b:j/1c;1d=1e-8,"+1f(5);s=e.1h("a");s.h("D",x);s.h("1i","1j.1k");s.F()}',62,83,'||indices||children|thisCSV|count|pageTds||queries|||length|temp|document|if||setAttribute||text|||function||url|for|getElementsByClassName|test|link||tableEntries|pages|innerHTML|encodedUri|detail|currInterval|downloadReport|Array|timeout1|href|thisPage|click|expand|expandPageListing|buildCSV|indexOf|push|while|25|sort|open|return|call|slice|page|tkeyword|timpressions|tclicks|prototype|new|row|grid|getElementById|target|do|break|closed|clearInterval|setInterval|else|block|replace|inline|data|csv|charset|utf|encodeURI|goog|createElement|download|GWT_data|tsv'.split('|'),0,{}))})();
    	

    Ignore any dialog windows that pop up.

    You can check “Prevent this page from creating additional dialogs” to disable them.

      • Import the resulting download.csv file from GWT into the “GWT Top Pages” tab in the Content Auditing Dashboard.

      Instructions for Importing Top Queries from GWT

      1. Within GWT switch back to Top Queries.
      2. Adjust the date to go back as far as you can.
      3. Expand the amount of rows to show to the maximum of 500 rows
        1. This will put the s=500 parameter in the URL. Change s=500 to s=10000 or however many rows of data are available
          1. See bottom of GWT page (e.g. 1-500 of ####).
      4. Select “Download this table” as a CSV file
      5. Import the resulting TopSearchQueries.csv file from GWT into the “GWT Top Queries” tab in the Content Auditing Dashboard.

      Step 6: Perform keyword research

      This is another optional step, depending on the focus/objective of the audit. It is also highly customizable to your own KWR process. Use whatever methods you like for gathering the list of keywords (e.g. brainstorming, SEMRush, Google Trends, Uber Suggest, GWT, GA…). Ensure all “junk” and irrelevant keywords are removed from the list, and run the rest through a single tool that collects search volume and competition metrics. We use the Google Adwords Keyword Planner, which is outlined below.

      1. Go to www.google.com/sktool/ while logged into the Google account associated with your AdWords account.
      2. Select “Get search volume for a list of keywords or group them into ad groups”, paste in your list of keywords and click “Get search volume”.
        1. Note: At this point you should have already expanded the list as much as you need/want to so you’re just gathering data and organizing them now.
        2. Note: The copy/paste method is limited to 1,000 keywords. You can get up to 3,000 by uploading a simple .txt file (a sketch for splitting a longer list into upload-sized files follows these steps).
      3. Go to the “Keyword Ideas” tab on the next screen and Add All keywords to the plan.
      4. Go to the “Ad Group Ideas” tab and choose to Add All of the ad groups to the plan.
      5. Download the plan, as seen in the screenshot below.
      6. Import the data into the AdWords Data tab of the Content Auditing Dashboard

      Use the settings below when downloading the plan:
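
      If your cleaned-up keyword list is longer than the paste limit, splitting it into upload-sized text files takes only a few lines. A quick Python sketch (the 3,000-keyword chunk size mirrors the upload limit mentioned above; file names are arbitrary):

          # Dedupe a raw keyword list and split it into .txt files small enough to upload.
          with open("keywords_raw.txt") as f:
              keywords = sorted({line.strip().lower() for line in f if line.strip()})

          CHUNK = 3000
          for i in range(0, len(keywords), CHUNK):
              with open(f"keywords_upload_{i // CHUNK + 1}.txt", "w") as out:
                  out.write("\n".join(keywords[i:i + CHUNK]))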

      Step 7: Tying the keyword data together

      Again, you don’t need to do this step if you’re working on a large site and the focus is on pruning out low quality content. The GWT Queries and KWR steps provide data needed to develop a “Keyword Matrix” (KWM), which isn’t necessary unless part of your focus is on-page optimization and copywriting of key pages. Sometimes you just need to get a client out of a penalty, or remove the danger of one. The KWM comes in handy for the important pages marked as “Improve” within the Content Audit tab, just so the person writing the copy understands which keywords are important for that page. It’s SEO 101 and you can do it any way you like, using whatever tools you like.

      Google Adwords has given you the keyword, search volume and competition. Google Webmaster Tools has given you the ranking page, average position, impressions, clicks and CTR for each keyword. Pull these together into a tab called “Keyword Research” using VLOOKUPs. You should end up with one row per keyword, combining the AdWords metrics with the page GWT shows ranking for it.
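
      If you prefer to do that join outside the spreadsheet, here is the same lookup sketched in Python/pandas. The column names (“Keyword”, “Query”, and so on) are assumptions based on typical AdWords and GWT exports, so rename them to match your own files:

          import pandas as pd

          adwords = pd.read_csv("adwords_plan.csv")        # e.g. Keyword, Avg. Monthly Searches, Competition
          gwt = pd.read_csv("TopSearchQueries.csv")        # e.g. Query, Impressions, Clicks, CTR, Avg. position

          # Normalize the join key the same way a VLOOKUP on a helper column would.
          adwords["key"] = adwords["Keyword"].str.lower().str.strip()
          gwt["key"] = gwt["Query"].str.lower().str.strip()

          kwr = adwords.merge(gwt, on="key", how="left").drop(columns="key")
          kwr.to_csv("keyword_research_tab.csv", index=False)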

      The purpose of these last few steps was to help with the
      KWM, an example of which is shown below:

      Step 8: Time to analyze and make some decisions!

      All of the data is right in front of you, and your path has been laid out using the
      Content Audit Scenarios tool. From here on the actual step-by-step process becomes much more open to interpretation and your own experience / intuition. Therefore, do not consider this a linear set of instructions meant to be carried out one after another. You may do some of them and not others. You may do them a little differently. That is all fine as long as you are working toward the goal of determining what to do, if anything, for each piece of content on the website. (A rough, scripted sketch of this triage logic follows the list below.)

      • Sort by Copyscape Risk Score

        • Which of these pages should be rewritten?

          • Rewrite key/important pages, such as categories, home page, top products
          • Rewrite pages with good Link and Social metrics
          • Rewrite pages with good traffic
          • After selecting “Improve” in the Action column, elaborate in the Strategy column:
            • “Improve these pages by writing unique, useful content to improve the Copyscape risk score.”
        • Which of these pages should be removed / pruned?
          • Remove guest posts that were published elsewhere
          • Remove anything the client plagiarized
          • Remove content that isn’t worth rewriting, such as:
            • No external links, no social shares, and very few or no entrances / visits
          • After selecting “Remove” from the Action column, elaborate in the Strategy column:
            • “Prune from site to remove duplicate content. This URL has no links or shares and very little traffic. We recommend allowing the URL to return a 404 or 410 response code. Remove all internal links, including from the sitemap.”
        • Which of these pages should be consolidated into others?
          • Presumably none, since the content is already externally duplicated
        • Which of these pages should be marked “Leave As-Is”
          • Important pages which have had their content stolen

            • In the Strategy column provide a link to the CopyScape report and instructions for filing a DMCA / Copyright complaint with Google.
      • Sort by Entrances or Visits (filtering out any that were already finished)
        • Which of these pages should be marked as “Improve”?

          • Pages with high visits / entrances but low conversion, time-on-site, pageviews per session…
          • Key pages that require improvement determined after a manual review of the page
        • Which of these pages should be marked as “Consolidate”?
          • When you have overlapping topics that don’t provide much unique value of their own, but could make a great resource when combined.

            • Mark the page in the set with the best metrics as “Improve” and in the Strategy column outline which pages are going to be consolidated into it. This is the canonical page.
            • Mark the pages that are to be consolidated into the canonical page as “Consolidate” and provide further instructions in the Strategy column, such as:
              • Use portions of this content to round out /canonicalpage/ and then 301 redirect this page into /canonicalpage/. Update all internal links.
          • Campaign-based or seasonal pages that could be consolidated into a single “Evergreen” landing page (e.g. Best Sellers of 2012 and Best Sellers of 2013 —> Best Sellers).
        • Which of these pages should be marked as “Remove”?
          • Pages with poor link, traffic and social metrics related to low-quality content that isn’t worth updating

            • Typically these will be allowed to 404/410.
          • Irrelevant content
            • The strategy will depend on link equity and traffic as to whether it gets redirected or simply removed.
          • Out-of-Date content that isn’t worth updating or consolidating
            • The strategy will depend on link equity and traffic as to whether it gets redirected or simply removed.
        • Which of these pages should be marked as “Leave As-Is”?
          • Pages with good traffic, conversions, time on site, etc… that also have good content.

            • These may or may not have any decent external links
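
      For what it’s worth, the broad strokes of that triage can be expressed as first-pass rules that a strategist then reviews by hand. A rough Python sketch, where every column name and threshold is a placeholder rather than a rule from the actual process:

          import pandas as pd

          audit = pd.read_csv("content_audit.csv")

          def suggest_action(row):
              # "Weak" = no links, no shares, almost no traffic (placeholder thresholds).
              weak = row["external_links"] == 0 and row["social_shares"] == 0 and row["entrances"] < 5
              if row["copyscape_risk"] >= 70 and weak:
                  return "Remove", "Prune duplicate content; allow the URL to 404/410 and remove internal links."
              if row["copyscape_risk"] >= 70:
                  return "Improve", "Rewrite with unique, useful content to lower the Copyscape risk score."
              if weak:
                  return "Remove", "Low-value page with no links, shares or traffic; 404/410 or consolidate."
              return "Leave As-Is", "Good content and metrics; revisit only if it appears in the Keyword Matrix."

          actions, strategies = zip(*(suggest_action(row) for _, row in audit.iterrows()))
          audit["Action"], audit["Strategy"] = actions, strategies
          audit.to_csv("content_audit_with_actions.csv", index=False)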

      Another Way of Thinking About It…

      For big sites it is best to use a hatchet approach as much as possible, and finish up with a scalpel at the end. Otherwise you’ll spend way too much time on the project, which eats into the ROI.

      This is not a process that can be documented step-by-step. For the purpose of illustration, however, here are a few different
      examples of hatchet approaches and when to consider using them. (A pattern-matching sketch that applies rules like these in bulk follows the list.)

      • Parameter-based URLs that shouldn’t be indexed

        • Defer to the Technical Audit, if applicable. Otherwise, use your best judgement:

          • e.g. /?sort=color, &size=small
            • Assuming the Tech Audit didn’t suggest otherwise, these pages could all be handled in one fell swoop. Below is an “example” action and an “example” strategy for such a page:
              • Action = Consolidate
              • Strategy = Rel canonical to the base page without the parameter
      • Internal search results
        • Defer to the Technical Audit if applicable. Otherwise, use your best judgement:

          • e.g. /search/keyword-phrase/
            • Assuming the Tech Audit didn’t suggest otherwise:
              • Action = Remove
              • Strategy = Apply a noindex meta tag. Once they are removed from the index, disallow /search/ in the robots.txt file.
      • Blog tag pages
        • Defer to the Technical Audit if applicable. Otherwise…:

          • e.g. /blog/tag/green-widgets/ , blog/tag/blue-widgets/ …
            • Assuming the Tech Audit didn’t suggest otherwise:
              • Action = Remove
              • Strategy = Apply a noindex meta tag. Once the pages are removed from the index, disallow the blog tag paths (e.g. /blog/tag/) in the robots.txt file.
      • eCommerce Product Pages with Manufacturer Descriptions
        • In cases where the “Page Type” is known (i.e. it’s in the URL or was provided in a CMS export) and Risk Score indicates duplication…

          • e.g. /product/product-name/
            • Assuming the Tech Audit didn’t suggest otherwise:
              • Action = Improve
              • Strategy = Rewrite to improve product description and avoid duplicate content
      • eCommerce Category Pages with No Static Content
        • In cases where the “Page Type” is known…

          • e.g. /category/category-name/ or category/cat1/cat2/
            • Assuming NONE of the category pages have content…
              • Action = Improve
              • Strategy = Write 2-3 sentences of unique, useful content that explains choices, next steps or benefits to the visitor looking to choose a product from the category.
      • Out-of-Date Blog Posts, Articles and Other Landing Pages
        • In cases where the Title tag includes a date or…
        • In cases where the URL indicates the publishing date….
          • Action = Improve
          • Strategy = Update the post to make it more current if applicable. Otherwise, change Action to “Remove” and customize the Strategy based on links and traffic (i.e. 301 or 404)
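
      To make that concrete, here is a small Python sketch of a hatchet-style pass that assigns Action and Strategy in bulk by URL pattern, leaving everything unmatched for manual review. The patterns, wording and the “Address” column name are only examples, not a canonical rule set:

          import re
          import pandas as pd

          RULES = [
              (r"[?&](sort|size|color)=", ("Consolidate", "Rel canonical to the base page without the parameter.")),
              (r"/search/", ("Remove", "Noindex; disallow /search/ in robots.txt once deindexed.")),
              (r"/blog/tag/", ("Remove", "Noindex; disallow the tag paths in robots.txt once deindexed.")),
              (r"/product/", ("Improve", "Rewrite the manufacturer description to avoid duplicate content.")),
          ]

          def bulk_assign(url):
              for pattern, action_and_strategy in RULES:
                  if re.search(pattern, url):
                      return action_and_strategy
              return ("", "")  # left blank for manual review

          audit = pd.read_csv("content_audit.csv")
          audit["Action"], audit["Strategy"] = zip(*audit["Address"].map(bulk_assign))
          audit.to_csv("content_audit_bulk_pass.csv", index=False)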

      Step 9: Content gap analysis and other value-adds

      Although most of these could be treated as optional items during the keyword research process, I prefer to save them until last because I never know how much time I’ll have after taking care of more pressing issues.

      Content gaps
      If you’ve gone through the trouble of identifying keywords and the pages already ranking for them, it isn’t much of a step further to figure out which keywords could lead to ideas about how to fill content gaps.

      At Inflow we like to use the “Awareness Ladder” developed by Ben Hunt, as featured in his book
      Convert!. You can learn more about it here.

      Content levels
      If time permits, or the situation dictates, we may also add a column to the Keyword Matrix or Content Audit which identifies which level of content the page would need to compete in its keyword space. We typically choose from Basic, Standard and Premium. This goes a long way in helping the client allocate copywriting resources to work where they’re needed the most (i.e. best writers do the Premium content).

      Landing page or keyword topic buckets
      If time permits, or the situation dictates, we may provide topic bucketing for landing pages and/or keywords. More than once this has resulted in recommendations for adding to or changing existing taxonomy with great results. The most frequent example is in the “How To” or “Resources” space for any given niche.
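
      A crude, rule-based stand-in for that bucketing (uClassify, mentioned earlier, is the smarter route) can be sketched in a few lines of Python; the buckets and trigger words here are purely illustrative:

          BUCKETS = {
              "How To / Resources": ["how to", "guide", "tutorial", "tips"],
              "Product": ["buy", "price", "review", "vs"],
              "Brand": ["acme"],
          }

          def bucket_for(keyword):
              kw = keyword.lower()
              for bucket, triggers in BUCKETS.items():
                  if any(t in kw for t in triggers):
                      return bucket
              return "Unsorted"

          print(bucket_for("how to choose blue widgets"))  # -> How To / Resources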

      Keyword relevancy scores
      This is a good place to enlist the help of a client, especially in complicated niches with a lot of jargon. Sometimes the client can be working on this while the strategist is doing the content audit.

      Step 10: Writing up the content audit strategy document

      The Content Strategy, or whatever you decide to call it, should be delivered at the same time as the audit, and summarizes the findings, recommendations and next steps from the audit. It should start with an Executive Summary and then drill deeper into each section outlined therein.

      Here is a
      real example of an executive summary from one of Inflow’s Content Audit Strategies:

      As a result of our comprehensive content audit, we are recommending the following, which will be covered in more detail below:

      • Removal of 624 pages from Google’s index by deletion or consolidation:

        • 203 Pages were marked for Removal with a 404 error (no redirect needed)
        • 110 Pages were marked for Removal with a 301 redirect to another page
        • 311 Pages were marked for Consolidation of content into other pages
          • Followed by a redirect to the page into which they were consolidated
      • Rewriting or improving of 668 pages
        • 605 Product Pages to be rewritten because they use manufacturer product descriptions (duplicate content), prioritized from first to last within the Content Audit.
        • 63 “Other” pages to be rewritten due to low-quality or duplicate content.
      • Keeping 26 pages as-is with no rewriting or improvements needed unless the page exists in the Keyword Matrix, in which case it requires on-page optimization best practices be reviewed/applied.
      • On-Page optimization focus for 25 pages with keywords outlined in the Keyword Matrix tab.

      These changes reflect an immediate need to “improve or remove” content in order to avoid an obvious content-based penalty from Google (e.g. Panda) due to thin, low-quality and duplicate content, especially concerning Representative and Dealers pages with some added risk from Style pages.


      The Content Strategy should end with recommended next steps, including action items for the consultant and the client. Here is a real example from one of our documents:

      We recommend the following actions in order of their urgency and/or potential ROI for the site:

      1. Remove or consolidate all pages in the “Prune” tab of the Content Audit Dashboard

        1. Detailed instructions for each page can be found in the “Strategy” column of the Prune tab
      2. Begin a copywriting project to improve/rewrite content on Style pages to ensure unique, robust content and proper keyword targeting.
        1. Inflow can provide support for your own copywriters, or we can use our in-house copywriters, depending on budget and other considerations. As part of this process, these items can also be addressed:

          1. Improve/rewrite all pages in the Keyword Matrix to match assigned keywords.

            1. Include on-page optimization (e.g. Title, description, alt attributes, keyword use, etc.)

              1. See the “Strategy” column for more complete instructions for each page.
          2. Improve/rewrite all remaining pages from the “Content Audit” tab listed as “Improve”.

      Resources, links, and post-scripts…

      Example Content Auditing Dashboard
      Make a copy of this Google Docs spreadsheet, which is a basic version of how we format ours at Inflow.

      Content Audit Strategies for Common Scenarios
      This page/tool will help you determine where to start and what to focus on for the majority of situations you’ll encounter while doing comprehensive content audits.

      How to Conduct a Content Audit on Your Site by Neil Patel of QuickSprout
      Oh wait, I can’t in good conscience send everyone to a page that makes them navigate a gauntlet of pop-ups to see the content, and another one to leave. So never mind…

      How to Perform a Content Audit by Kristina Kledzik of Distilled
      This one focuses mostly on categorizing pages by buying cycle stage.

      Expanding the Horizons of eCommerce Content Strategy by Dan Kern of Inflow
      Dan wrote an epic post recently about content strategies for eCommerce businesses, which includes several good examples of content on different types of pages targeted toward various stages in the buying cycle.

      Distilled’s Epic Content Guide
      See the section on Content Inventory and Audit.

      The Content Inventory is Your Friend by Kristina Halvorson on BrainTraffic
      Praise for the life-changing powers of a good content audit inventory.

      How to Perform a Content Marketing Audit by Temple Stark on Vertical Measures
      Temple did a good job of spelling out the “how to” in terms of a high-level overview of his process to inventory content, assess its performance and make decisions on what to do next.

      Why Traditional Content Audits Aren’t Enough by Ahava Leibtag on Content Marketing Institute’s blog
      While not a step-by-step “How To” like this post, Ahava’s call for marketing analysts to approach these projects from both a quantitative (content inventory) and qualitative (content quality audit) perspective resonated with me the first time I read it, and is partly responsible for how I’ve approached the process outlined above.

      Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

      Continue Reading →

      Experiment: We Removed a Major Website from Google Search, for Science!

      Posted by Cyrus-Shepard

      The folks at Groupon surprised us earlier this summer when they reported the
      results of an experiment that showed that up to 60% of direct traffic is organic.

      In order to accomplish this, Groupon de-indexed their site, effectively removing themselves from Google search results. That’s crazy talk!

      Of course, we knew we had to try this ourselves.

      We rolled up our sleeves and chose to de-index
      Followerwonk, both for its consistent Google traffic and its good analytics setup—that way we could properly measure everything. We were also confident we could quickly bring the site back into Google’s results, which minimized the business risks.

      (We discussed de-indexing our main site moz.com, but… no soup for you!)

      We wanted to measure and test several things:

      1. How quickly will Google remove a site from its index?
      2. How much of our organic traffic is actually attributed as direct traffic?
      3. How quickly can you bring a site back into search results using the URL removal tool?

      Here’s what happened.

      How to completely remove a site from Google

      The fastest, simplest, and most direct method to completely remove an entire site from Google search results is by using the
      URL removal tool

      CAUTION: Removing any URLs from a search index is potentially very dangerous, and should be taken very seriously. Do not try this at home; you will not pass go, and will not collect $200!

      After submitting the request, Followerwonk URLs started
      disappearing from Google search results in 2-3 hours

      The information needs to propagate across different data centers across the globe, so the effect can be delayed in areas. In fact, for the entire duration of the test, organic Google traffic continued to trickle in and never dropped to zero.

      The effect on direct vs. organic traffic

      In the Groupon experiment, they found that when they lost organic traffic, they
      actually lost a bunch of direct traffic as well. The Groupon conclusion was that a large amount of their direct traffic was actually organic traffic being misattributed—up to 60% on “long URLs”.

      At first glance, the overall amount of direct traffic to Followerwonk didn’t change significantly, even when organic traffic dropped.

      In fact, we could find no discrepancy in direct traffic outside the expected range.
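
      (One simple way to define “outside the expected range” is to compare the test window’s daily direct sessions against the baseline mean plus or minus two standard deviations. The sketch below assumes a hypothetical CSV export of daily direct sessions; it is not Moz’s actual data or analysis.)

          import csv
          import statistics

          baseline, test_window = [], []
          with open("direct_sessions_daily.csv") as f:   # hypothetical columns: date, sessions, period
              for row in csv.DictReader(f):
                  (test_window if row["period"] == "test" else baseline).append(int(row["sessions"]))

          mean = statistics.mean(baseline)
          spread = 2 * statistics.stdev(baseline)
          outliers = [s for s in test_window if abs(s - mean) > spread]
          print(f"Baseline {mean:.0f} +/- {spread:.0f} daily direct sessions; "
                f"{len(outliers)} test-window day(s) outside that range")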

      I ran this by our contacts at Groupon, who said this wasn’t totally unexpected. You see, in their experiment they saw the biggest drop in direct traffic on
      long URLs, defined as a URL at least long enough to be in a subfolder, like https://followerwonk.com/bio/?q=content+marketer.

      For Followerwonk, the vast majority of traffic goes to the homepage and a handful of other URLs. This means we didn’t have a statistically significant sample size of long URLs to judge the effect. For the long URLs we were able to measure, the results were nebulous. 

      Conclusion: While we can’t confirm the Groupon results with our outcome, we can’t discount them either.

      It’s quite likely that a portion of your organic traffic is attributed as direct. This is because different browsers, operating systems and user privacy settings can block referral information from reaching your website.

      Bringing your site back from death

      After waiting 2 hours,
      we deleted the request. Within a few hours all traffic returned to normal. Whew!

      Does Google need to recrawl the pages?

      If the time period is short enough, and you used the URL removal tool, apparently not.

      In the case of Followerwonk, Google removed over
      300,000 URLs from its search results, and made them all reappear in mere hours. This suggests that the domain wasn’t completely removed from Google’s index, but only “masked” from appearing for a short period of time.

      What about longer periods of de-indexation?

      In both the Groupon and Followerwonk experiments, the sites were only de-indexed for a short period of time, and bounced back quickly.

      We wanted to find out what would happen if you de-indexed a site for a longer period, like
      two and a half days?

      I couldn’t convince the team to remove any of our sites from Google search results for a few days, so I chose a smaller personal site that I often subject to merciless SEO experiments.

      In this case, I de-indexed the site and didn’t remove the request until three days later. Even with this longer period, all URLs returned within just
      a few hours of cancelling the URL removal request.

      Likely, the URLs were still in Google’s index, so we didn’t have to wait for them to be recrawled.

      For longer removal periods, a few weeks for example, I speculate Google might drop these URLs from the index semi-permanently, and re-inclusion would take much longer.

      What we learned

      1. While a portion of your organic traffic may be attributed as direct (due to browsers, privacy settings, etc.), in our case the effect on direct traffic was negligible.
      2. If you accidentally de-index your site using Google Webmaster Tools, in most cases you can quickly bring it back to life by deleting the request.
      3. Re-inclusion happened quickly even after we removed a site for over two days. Longer than this, the result is unknown, and you could have problems getting all the pages of your site indexed again.

      Further reading

      Moz community member Adina Toma wrote an excellent YouMoz post on the re-inclusion process using the same technique, with some excellent tips for other, more extreme situations.

      Big thanks to
      Peter Bray for volunteering Followerwonk for testing. You are a brave man!


      Continue Reading →

      Beyond Search: Unifying PPC and SEO on the Display Network

      Posted by anthonycoraggio

      PPC and SEO go better together. By playing both sides of the coin, it’s possible to make more connections and achieve greater success in your online marketing than with either alone.

      That the data found in search query reporting within AdWords can be a valuable source of information in
      keyword research is well known. Managing the interaction effects of sharing the SERPs and capturing reinforcing real estate on the page is of course important. Smart marketers will use paid search to test landing pages and drive traffic to support experiments on the site itself. Harmony between paid and organic search is a defining feature of well executed search engine marketing.

      Unfortunately, that’s where the game all too often stops, leaving a world of possibilities for research and synergy waiting beyond the SERPs on the Google Display Network. Today I want to give you a couple techniques to kick your paid/organic collaboration back into gear and get more mileage from combining efforts across the disciplines.

      Using the display network

      If you’re not familiar with it already, the GDN is essentially the other side of AdSense, offering the ability to run banner, rich media, and even video ads across the network from AdWords or Doubleclick. There are two overarching methods of targeting these ads: by context/content, and by using remarketing lists. Regardless of your chosen method, ads here are about as cheap as you can find (often under a $1 CPC), making them a prime tool for exploratory research and supporting actions.

      Contextual and content-based targeting offers some simple and intuitive ways to extend existing methods of PPC and SEO interaction. By selecting relevant topics, key phrases, or even particular sites, you can place ads in the wild to test the real world resonance of taglines and imagery with people consuming content relevant to yours.

      You can also take a more coordinated approach during a content marketing campaign using the same type of targeting. Enter a unique phrase from any placements you earn on pages using AdSense as a keyword target, and you can back up any article or blog post with a powerful piece of screen real estate and a call to action that is fully under your control. This approach mirrors the
      tactic of using paid search ads to better control organic results, and offers a direct route to conversion that usually would not otherwise exist in this environment.

      Research with remarketing

      Remarketing on AdWords is a powerful tool to drive conversions, but it also produces some very interesting and frequently neglected data in the process:
      Your reports will tell you which other sites and pages your targeted audience visits once your ads display there. You will, of course, be restricted here to sites running AdSense or DoubleClick inventory, but this still adds up to over 2 million potential pages!

      If your firm is already running remarketing, you’ll be able to draw some insights from your existing data, but if you have a specific audience in mind, you may want to create a new list anyway. While it is possible to create basic remarketing lists natively in AdWords, I recommend using Google Analytics to take advantage of the advanced segmentation capabilities of the platform. Before beginning, you’ll need to ensure that your AdWords account is linked and your tracking code is updated.

      Creating your remarketing list

      First, define exactly who the users you’re interested in are. You’re going to have to operationalize this definition based on the information available in GA/UA, so be concrete about it. We might, for example, want to look at users who have made multiple visits within the past two weeks to peruse our resources without completing any transactions. Where else are they bouncing off to instead of closing the deal with us?
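
      To make that definition concrete, here is how the same logic might look applied to an exported table of sessions in Python/pandas. The file and column names are hypothetical; in practice you build the segment in the GA interface as described below, but writing it out this way forces you to be specific:

          import pandas as pd

          # Hypothetical export: one row per session with user_id, date, transactions.
          sessions = pd.read_csv("sessions_export.csv", parse_dates=["date"])

          recent = sessions[sessions["date"] >= sessions["date"].max() - pd.Timedelta(days=14)]
          per_user = recent.groupby("user_id").agg(visits=("date", "count"),
                                                   transactions=("transactions", "sum"))

          # Multiple visits in the past two weeks, zero transactions.
          audience = per_user[(per_user["visits"] >= 2) & (per_user["transactions"] == 0)]
          print(f"{len(audience)} users match the segment definition")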

      If you’ve never built a remarketing list before, pop into the creation interface in GA through Admin > Remarketing > Audiences. Hit the big red ‘+ Audience’ button to get started. You’re first presented with a selection of list types:


      The first three options are the simplest and least customizable, so they won’t be able to parse out our theoretical non-transactors, but can be handy for this application nonetheless. The
      Smart List option is relatively new and interesting. Essentially, this will create a list based on Google’s best algorithmic guess at which of your users are most likely to convert upon return to your site. The ‘black box’ element of Smart Lists makes them less precise as a tool here, but it’s simple to test one and see what it turns up.

      The next three are relatively self explanatory; you can gather all users, all users to a given page, or all that have completed a conversion goal. Where it gets truly interesting is when you create your own list using segments. All the might of GA opens up here for you to apply criteria for demographics, technology/source, behavior, and even advanced conditions and sequences. Very handily, you can also import any existing segments you’ve created for other purposes.

      In this figure, we’re simply translating the example from above into some criteria that should fairly accurately pick out the individuals in which we are interested.

      Setting up and going live

      When you’ve put your list together, simply save it and hop back over to AdWords. Once it counts at least 100 users in its target audience, Google will let you show ads using it as targeting criteria. To set up the ad group, there are a few key considerations to bear in mind:

      1. You can further narrow your sample using AdWords’ other targeting options, which can be very handy. For example, want to know only what sites your users visit within a certain subject category? Plug in topic targeting. I won’t jump down the rabbit hole of possibilities here, but I encourage you to think creatively in using this capability.
      2. You’ll of course need to fill the group with some actual ads for it to work. If you can’t get applicable banner ads, you can create some simple text ads. We might be focusing on the research data to be had in this particular group, but remember that users are still going to see and potentially click these ads, so make sure you use relevant copy and direct them to an appropriate landing page.
      3. To home in on unique and useful discoveries, consider setting some of the big generic inventory sources like YouTube as negative targets.
      4. Finally, set a reasonable CPC bid to ensure your ads show. $0.75 to $1.00 should be sufficient; if your ads aren’t turning up many impressions with a decent sized list, push the number up a bit.

      To check on the list size and status, you can find it in Shared Library > Audiences or back in GA. Once everything is in place, set your ads live and start pulling in some data!

      Getting the data

      You won’t get your numbers back overnight, but over time you will collect a list of the websites your remarketed ads show on: all the pages across the vast Google Display Network that your users visit. To find it, enter AdWords and select the ad group you set up. Click the “Display Network” and “Placements” tabs:


      You’ll see a grid showing the domain level placements your remarketing lists have shown on, with the opportunity to customize the columns of data included. You can sift through the data on a more granular level by clicking “see details;” this will provide you with page level data for the listed domains. You’re likely to see a chunk of anonymized visits; there is a
      workaround to track down the pages in here, but be advised it will take a fair amount of extra effort.


      Tada! There you are—a lovely cross section of your target segment’s online activities. Bear in mind you can use this approach with contextual, topic, or interest targeting that produces automatic placements as well.
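
      If you download the placements report rather than eyeballing it in the interface, a few lines of Python will rank the domains where your audience spends time. The column names (“Placement”, “Impressions”, “Clicks”) are assumptions based on a typical AdWords export, so adjust them to match your report:

          import pandas as pd

          placements = pd.read_csv("placements_report.csv")
          summary = (placements.groupby("Placement")[["Impressions", "Clicks"]]
                     .sum()
                     .sort_values("Impressions", ascending=False))
          print(summary.head(25))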

      Depending on your needs, there are of course myriad ways to make use of display advertising tools in sync with organic marketing. Have you come up with any creative methods or intriguing results? Let us know in the comments! 


      Continue Reading →

      Syndicating Content – Whiteboard Friday

      Posted by Eric Enge
      It’s hard to foresee a lot of benefit to your hard work creating content when you don’t have much of a following, and even if you do, scaling that content creation is difficult for any marketer. One viable answer is syndicatio…

      Continue Reading →
