Is It Possible to Have Good SEO Simply by Having Great Content – Whiteboard Friday

Posted by randfish

This question, posed by Alex Moravek in our Q&A section, has a somewhat complicated answer. In today’s Whiteboard Friday, Rand discusses how organizations might perform well in search rankings without doing any link building at all, relying instead on the strength of their content to be deemed relevant and important by Google.

For reference, here’s a still of this week’s whiteboard!

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about whether it’s possible to have good SEO simply by focusing on great content, to the exclusion of link building.

This question was posed in the Moz Q&A Forum, which I deeply love, by Alex Moravek — I might not be saying your name right, Alex, and for that I apologize — from SEO Agencias in Madrid. My Spanish is poor, but my love for churros is so strong.

Alex, I think this is a great question. In fact, we get asked this all the time by all sorts of folks, particularly people in the blogging world and people with small and medium businesses who hear about SEO and go, “Okay, I think I can make my website accessible, and yes, I can produce great content, but I just either don’t feel comfortable, don’t have time and energy, don’t understand, or just don’t feel okay with doing link building.” Link acquisition through outreach and a manual process is beyond the scope of what they can fit into their marketing activities.

In fact, it is possible, kind of, sort of. It is possible, but for this strategy to work you desperately need two things. One is content exposure, and two is time. I’ll explain why you need both of these things.

I’m going to dramatically simplify Google’s ranking algorithm. In fact, I’m going to simplify it so much that those of you who are SEO professionals are going to be like, “Oh God, Rand, you’re killing me.” I apologize in advance. Just bear with me a second.

We basically have keywords and on-page stuff, topical relevance, etc. All your topic modeling stuff might go in there. There’s content quality, all the factors that Google and Bing might measure around a content’s quality. There’s domain authority. There’s link-based authority based on the links that point to all the pages on a given domain that tell Google or Bing how important pages on this particular domain are.

There are probably some topical relevance elements in there, too. There’s page level authority. These could be all the algorithms you’ve heard of like PageRank and TrustRank, etc., and all the much more modern ones of those.

I’m not specifically talking about Moz scores here, the Moz scores DA and PA. Those are rough interpretations of these much more sophisticated formulas that the engines have.

There’s user and usage data, which we know the engines are using. They’ve talked about using that. There’s spam analysis.

Super simplistic, I know. There are these six broad categories of ranking elements. If you have just four of them (keywords and on-page factors, content quality, user and usage data, and a clean spam analysis) without any domain authority or any page authority, it’s next to impossible to rank for competitive terms, and very challenging and very unlikely to rank even for stuff in the chunky middle and long tail. You might rank for a few things if they’re very, very long tail. But these things taken together give you a sense of ranking ability.

Here’s what some marketers, some bloggers, some folks who invest in content nearly to the exclusion of links have found. They have had success with this strategy. They’ve basically elected to entirely ignore link building and let links come to them.

Instead of focusing on link building, they’re going to focus on product quality, press and public relations, social media, offline marketing, word of mouth, content strategy, email marketing, and the other channels that can potentially earn them exposure. Advertising could potentially be in here as well.

What they rely on is that people find them through these other channels. They find them through social, through ads, through offline, through blogs, through very long tail search, through their content, maybe their email marketing list, word of mouth, press. All of these things are discovery mechanisms that are not search.

Once people get to the site, then these websites rely on the fact that, because of the experience people have, the quality of their products, of their content, because all of that stuff is so good, they’re going to earn links naturally.

This is a leap. In fact, for many SEOs, this is kind of a crazy leap to make, because there are so many things that you can do that will nudge people in this link earning direction. We’ve talked about a number of those at Moz. Of course, if you visit the link building section of our blog, there are hundreds if not thousands of great strategies around this.

These folks have elected to ignore all that link building stuff, let the links come to them, and these signals, these people who visit via other channels eventually lead to links which lead to DA, PA ranking ability. I don’t think this strategy is for everyone, but it is possible.

I think in the utopia that Larry Page and Sergey Brin from Google imagined when they were building their first search engine this is, in fact, how they hoped that the web would work. They hoped that people wouldn’t be out actively gaming and manipulating the web’s link graph, but rather that all the links would be earned naturally and editorially.

I think that’s a very, very optimistic and almost naive way of thinking about it. Remember, they were college students at the time. Maybe they were eating their granola, and dancing around, and hoping that everyone on the web would link only for editorial reasons. Not to make fun of granola. I love granola, especially, oh man, with those acai berries. Bowls of those things are great.

This is a potential strategy if you are very uncomfortable with link building and you feel like you can optimize this process. You have all of these channels going on.

For SEOs who are thinking, “Rand, I’m never going to ignore link building,” you can still get a tremendous amount out of thinking about how you optimize the return on investment, and especially the exposure, you receive from these channels, and how that might translate naturally into links.

I find looking at websites that accomplish SEO without active link building fascinating, because they have editorially earned those links through very little intentional effort on their own. I think there’s a tremendous amount that we can take away from that process and optimize around this.

Alex, yes, this is possible. Would I recommend it? Only in a very few instances. I think that there’s a ton that SEOs can do to optimize and nudge and create intelligent, non-manipulative ways of earning links that are a little more powerful than just sitting back and waiting, but it is possible.

All right, everyone. Thanks for joining us, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Announcing LocalUp Advanced: Our New Local SEO Conference (and Early Bird Tickets!)

Posted by EricaMcGillivray

That’s right, Moz fans, we’re diving into the local SEO conference space. Join us Saturday, February 7th in Seattle as we team up with Local U to present LocalUp Advanced, an all-day intensive local SEO conference. You’ll learn next-level tactics for everything from review acquisition and content creation to mobile optimization and local ranking factors. You’ll also have opportunities to attend workshops and meet other people who love local SEO just as much as you do.


Don’t miss the early bird deal! The first 25 tickets receive $200 off registration.

Moz or Local U Subscribers:
$699 ($499 early-bird)
General Admission:
$999 ($799 early-bird)

Get your LocalUp Advanced early bird ticket today

Also, to get the best pricing, take a 30-day free trial of Moz Pro or sign up for Local U’s forum.


Who’s speaking at LocalUp Advanced?

Dana DiTomaso

Kick Point

Whether at a conference, on the radio, or in a meeting, Dana DiTomaso likes to impart wisdom to help you turn a lot of marketing BS into real strategies to grow your business. After 10+ years and with a focus on local SMBs, she’s seen (almost) everything. In her spare time, Dana drinks tea and yells at the Hamilton Tiger-Cats.


Darren Shaw

Whitespark

Darren Shaw is the President and Founder of Whitespark, a company that builds software and provides services to help businesses with local search. He’s widely regarded in the local SEO community as an innovator, one whose years of experience working with massive local data sets have given him uncommon insights into the inner workings of the world of citation-building and local search marketing. Darren has been working on the web for over 16 years and loves everything about local SEO.


David Mihm

Moz

David Mihm is one of the world’s leading practitioners of local search engine marketing. He has created and promoted search-friendly websites for clients of all sizes since the early 2000s. David co-founded GetListed.org, which he sold to Moz in November 2012. Since then, he’s served as our Director of Local Search Marketing, imparting his wisdom everywhere!


Jade Wang

Google

If you’ve gone to the Google and Your Business Forum for help (and, of course, you have!), then you know how quickly an answer from Google staffer Jade Wang can clear up even the toughest problems. She has been helping business owners get their information listed on Google since joining the team in 2012. 


Mary Bowling

Local U

Mary Bowling’s been specializing in SEO and local search since 2003. She works as a consultant at Optimized!, is a partner at a small agency called Ignitor Digital, is a partner in Local U, and is also a trainer and writer for Search Engine News. Mary spends her days interacting directly with local business owners and understands holistic local needs.


Mike Blumenthal

Local U

If you’re in Local, then you know Mike Blumenthal, and here is your chance to learn from this pioneer in local SEO, whose years of industry research and documentation have earned him the fond and respectful nickname ‘Professor Maps.’ Mike’s blog has been the go-to spot for local SEOs since the early days of Google Maps. It’s safe to say that there are few people on the planet who know more about this area of marketing than Mike. He’s also the co-founder of GetFiveStars, an innovative review and testimonial software. Additionally, Mike loves biking, x-country skiing, and home cooking.


Dr. Pete Meyers

Moz

Dr. Pete Meyers is the Marketing Scientist for Moz, where he works with the marketing and data science teams on product research and data-driven content. He’s spent the past two years building research tools to monitor Google, including the MozCast project, and he curates the Google Algorithm History.


Rand Fishkin

Moz

Rand Fishkin is the founder of Moz. Traveler, blogger, social media addict, feminist, and husband.


Why should I attend LocalUp Advanced?

Do you have an interest in local SEO, or does it play a part in your day-to-day work? If so, then yes, you should definitely join us on February 7th. We believe LocalUp Advanced will be extremely valuable for:

  • In-house marketers spending 25% or more of their time on local SEO
  • Agencies and consultants serving brick-and-mortar businesses
  • Yellow Pages publishers

In addition to keynote-style talks, we’ll have intensive Q&A sessions with our speakers and workshops for you to get direct, one-to-one advice for your business. And as with all Moz events, there will be breakfast, lunch, two snacks, and an after party (details coming soon!) included in your ticket cost. Plus, LocalUp Advanced will take place at the MozPlex in the heart of downtown Seattle; you’ll get to check out Roger’s home!

Get your LocalUp Advanced early bird ticket today

See you in February!


How Big Was Penguin 3.0?

Posted by Dr-Pete

Sometime in the last week, the first Penguin update in over a year began to roll out (Penguin 2.1 hit around October 4, 2013). After a year, emotions were high, and expectations were higher. So, naturally, people were confused when MozCast showed the following data:

The purple bar is Friday, October 17th, the day Google originally said Penguin 3.0 rolled out. Keep in mind that MozCast is tuned to an average temperature of roughly 70°F. Friday’s temperature was slightly above average (73.6°), but nothing in the last few days indicates a change on the scale of the original Penguin update. For reference, Penguin 1.0 measured a scorching 93°F.

So, what happened? I’m going to attempt to answer that question as honestly as possible. Fair warning – this post is going to dive very deep into the MozCast data. I’m going to start with the broad strokes, and paint the finer details as I go, so that anyone with a casual interest in Penguin can quit when they’ve seen enough of the picture.

What’s in a name?

We think that naming something gives us power over it, but I suspect the enchantment works both ways – the name imbues the update with a certain power. When Google or the community names an algorithm update, we naturally assume that update is a large one. What I’ve seen across many updates, such as the 27 named Panda iterations to date, is that this simply isn’t the case. Panda and Penguin are classifiers, not indicators of scope. Some updates are large, and some are small – updates that share a name share a common ideology and code-base, but they aren’t all equal.

Versioning complicates things even more – if Barry Schwartz or Danny Sullivan names the latest update “3.0”, it’s mostly a reflection of the fact that we’ve waited a year and we all assume this is a major update. That feels reasonable to most of us, but it doesn’t necessarily mean that this is an entirely new version of the algorithm. When a software company creates a new version, it knows exactly what changed. When Google refreshes Panda or Penguin, we can only guess at how the code changed. Collectively, we do our best, but we shouldn’t read too much into the name.

Was this Penguin just small?

Another problem with Penguin 3.0 is that our expectations are incredibly high. We assume that, after waiting more than a year, the latest Penguin update will hit hard and will include both a data refresh and an algorithm update. That’s just an assumption, though. I firmly believe that Penguin 1.0 had a much broader, and possibly much more negative, impact on SERPs than Google believed it would, and I think they’ve genuinely struggled to fix and update the Penguin algorithm effectively.

My beliefs aside, Pierre Far tried to clarify Penguin 3.0’s impact on Oct 21, saying that it affected less than 1% of US/English queries, and that it is a “slow, worldwide rollout”. Interpreting Google’s definition of “percent of queries” is tough, but the original Penguin (1.0) was clocked by Google as impacting 3.1% of US/English queries. Pierre also implied that Penguin 3.0 was a data “refresh”, and possibly not an algorithm change, but, as always, his precise meaning is open to interpretation.

So, it’s possible that the graph above is correct, and either the impact was relatively small, or that impact has been spread out across many days (we’ll discuss that later). Of course, many reputable people and agencies are reporting Penguin hits and recoveries, which raises the question – why doesn’t their data match ours?

Is the data just too noisy?

MozCast has shown me with alarming clarity exactly how messy search results can be, and how dynamic they are even without major algorithm updates. Separating the signal from the noise can be extremely difficult – many SERPs change every day, sometimes multiple times per day.

More and more, we see algorithm updates where a small set of sites are hit hard, but the impact over a larger data set is tough to detect. Consider the following two hypothetical situations:

The data points on the left have an average temperature of 70°, with one data point skyrocketing to 110°. The data points on the right have an average temperature of 80°, and all of them vary between about 75-85°. So, which one is the update? A tool like MozCast looks at the aggregate data, and would say it’s the one on the right. On average, the temperature was hotter. It’s possible, though, that the graph on the left represents a legitimate update that impacted just a few sites, but hit those sites hard.

Your truth is your truth. If you were the red bar on the left, then that change to you is more real than any number I can put on a graph. If the unemployment rate drops from 6% to 5%, the reality for you is still either that you have a job or don’t have a job. Averages are useful for understanding the big picture, but they break down when you try to apply them to any one individual case.

The purpose of a tool like MozCast, in my opinion, is to answer the question “Was it just me?” We’re not trying to tell you if you were hit by an update – we’re trying to help you determine if, when you are hit, you’re the exception or the rule.

Is the slow rollout adding noise?

MozCast is built around a 24-hour cycle – it is designed to detect day-over-day changes. What if an algorithm update rolls out over a couple of days, though, or even a week? Is it possible that a relatively large change could be spread thin enough to be undetectable? Yes, it’s definitely possible, and we believe Google is doing this more often. To be fair, I don’t believe their primary goal is to obfuscate updates – I suspect that gradual rollouts are just safer and allow more time to address problems if and when things go wrong.

While MozCast measures in 24-hour increments, the reality is that there’s nothing about the system limiting it to that time period. We can just as easily look at the rate of change over a multi-day window. First, let’s stretch the MozCast temperature graph from the beginning of this post out to 60 days:

For reference, the average temperature for this time period was 68.5°. Please note that I’ve artificially constrained the temperature axis from 50-100° – this will help with comparisons over the next couple of graphs. Now, let’s measure the “daily” temperature again, but this time we’ll do it over a 48-hour (2-day) period. The red line shows the 48-hour flux:

It’s important to note that 48-hour flux is naturally higher than 24-hour flux – the average of the 48-hour flux for these 60 days is 80.3°. In general, though, you’ll see that the pattern of flux is similar. A longer window tends to create a smoothing effect, but the peaks and valleys are roughly similar for the two lines. So, let’s look at 72-hour (3-day) flux:

The average 72-hour flux is 87.7° over the 60 days. Again, except for some smoothing, there’s not a huge difference in the peaks and valleys – at least nothing that would clearly indicate the past week has been dramatically different from the past 60 days. So, let’s take this all the way and look at a full 7-day flux calculation:

I had to bump the Y-axis up to 120°, and you’ll see that smoothing is in full force – making the window any larger is probably going to risk over-smoothing. While the peaks and valleys start to time-shift a bit here, we’re still not seeing any obvious climb during the presumed Penguin 3.0 timeline.
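
To make the windowing idea concrete, here is a minimal sketch (in Python) of how a day-over-day flux measurement can be stretched to an N-day window. This is illustrative only, not MozCast’s actual code; the data structures and the +10 penalty for a URL dropping out are assumptions based on the description in this post.

```python
# Illustrative sketch -- not MozCast's production code.

def serp_flux(serp_then, serp_now):
    """Total rank movement between two top-10 SERPs (lists of URLs).
    A URL that drops out entirely counts as a maximum move of 10."""
    flux = 0
    for pos, url in enumerate(serp_then, start=1):
        if url in serp_now:
            flux += abs(pos - (serp_now.index(url) + 1))
        else:
            flux += 10
    return flux

def windowed_flux(history, days):
    """Average flux comparing each day's SERP to the SERP `days` earlier.
    `history` maps each keyword to a date-ordered list of top-10 SERPs."""
    scores = []
    for serps in history.values():
        for i in range(days, len(serps)):
            scores.append(serp_flux(serps[i - days], serps[i]))
    return sum(scores) / len(scores) if scores else 0.0
```

Setting `days` to 2, 3, or 7 reproduces the 48-hour, 72-hour, and 7-day views above; as the window grows, the baseline flux naturally rises and the curve smooths, which is exactly what the graphs show.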

Could Penguin 3.0 be spread out over weeks or a month? Theoretically, it’s possible, but I think it’s unlikely given what we know from past Google updates. Practically, this would make anything but a massive update very difficult to detect. Too much can change in 30 days, and that base rate of change, plus whatever smaller updates Google launched, would probably dwarf Penguin.

What if our keywords are wrong?

Is it possible that we’re not seeing Penguin in action because of sampling error? In other words, what if we’re just tracking the wrong keywords? This is a surprisingly tough question to answer, because we don’t know what the population of all searches looks like. We know what the population of Earth looks like – we can’t ask seven billion people to take our survey or participate in our experiment, but we at least know the group that we’re sampling. With queries, only Google has that data.

The original MozCast was publicly launched with a fixed set of 1,000 keywords sampled from Google AdWords data. We felt that a fixed data set would help reduce day-over-day change (unlike using customer keywords, which could be added and deleted), and we tried to select a range of phrases by volume and length. Ultimately, that data set did skew a bit toward commercial terms and tended to contain more head and mid-tail terms than very long-tail terms.

Since then, MozCast has grown to what is essentially 11 weather stations of 1,000 different keywords each, split into two sets for analysis of 1K and 10K keywords. The 10K set is further split in half, with 5K keywords targeted to the US (delocalized) and 5K targeted to 5 cities. While the public temperature still usually comes from the 1K set, we use the 10K set to power the Feature Graph and as a consistency check and analysis tool. So, at any given time, we have multiple samples to compare.

So, how did the 10K data set (actually, 5K delocalized keywords, since local searches tend to have more flux) compare to the 1K data set? Here’s the 60-day graph:

While there are some differences in the two data sets, you can see that they generally move together, share most of the same peaks and valleys, and vary within roughly the same range. Neither set shows clear signs of large-scale flux during the Penguin 3.0 timeline.

Naturally, there are going to be individual SEOs and agencies that are more likely to track clients impacted by Penguin (who are more likely to seek SEO help, presumably). Even self-service SEO tools have a certain degree of self-selection – people with SEO needs and issues are more likely to use them and to select problem keywords for tracking. So, it’s entirely possible that someone else’s data set could show a more pronounced Penguin impact. Are they wrong or are we? I think it’s fair to say that these are just multiple points of view. We do our best to make our sample somewhat random, but it’s still a sample and it is a small and imperfect representation of the entire world of Google.

Did Penguin 3.0 target a niche?

In the sense that every algorithm update targets only a select set of sites, pages, or queries, yes – every update is a “niche” update. The only question we can pose to our data is whether Penguin 3.0 targeted a specific industry category/vertical. The 10K MozCast data set is split evenly into 20 industry categories. Here’s the data from October 17th, the supposed date of the main rollout:

Keep in mind that, split 20 ways, the category data for any given day is a pretty small set. Also, categories naturally stray a bit from the overall average. All of the 20 categories recorded temperatures between 61.7-78.2°. The “Internet & Telecom” category, at the top of the one-day readings, usually runs a bit above average, so it’s tough to say, given the small data set, if this temperature is meaningful. My gut feeling is that we’re not seeing a clear, single-industry focus for the latest Penguin update. That’s not to say that the impact didn’t ultimately hit some industries harder than others.

What if our metrics are wrong?

If the sample is fundamentally flawed, then the way we measure our data may not matter that much, but let’s assume that our sample is at least a reasonable window into Google’s world. Even with a representative sample, there are many, many ways to measure flux, and all of them have pros and cons.

MozCast still operates on a relatively simple metric, which essentially looks at how much the top 10 rankings on any given day change compared to the previous day. This metric is position- and direction-agnostic, which is to say that a move from #1 to #3 is the same as a move from #9 to #7 (they’re both +2). Any keyword that drops off the rankings is a +10 (regardless of position), and any given keyword can score a change from 0-100. This metric, which I call “Delta100”, is transformed by taking the square root, resulting in a metric called “Delta10”. That value is then multiplied by a constant based on an average temperature of 70°. The transformations involve a little more math, but the core metric is pretty simplistic.
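
Read literally, that description suggests something like the following sketch. The scaling constant is invented for illustration; Moz has not published the actual number.

```python
import math

# A literal reading of the described metric -- not Moz's production code.

def delta100(serp_yesterday, serp_today):
    """Sum of absolute rank shifts in the top 10. A URL that drops out
    scores +10, so a single keyword can range from 0 to 100."""
    score = 0
    for pos, url in enumerate(serp_yesterday, start=1):
        if url in serp_today:
            score += abs(pos - (serp_today.index(url) + 1))
        else:
            score += 10
    return score

def temperature(daily_delta100_scores, scale=28.0):
    """Delta10 is the square root of Delta100; a constant then maps the
    long-run average onto ~70 degrees. The 28.0 here is invented."""
    delta10 = sum(math.sqrt(s) for s in daily_delta100_scores) / len(daily_delta100_scores)
    return delta10 * scale
```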

This simplicity may lead people to believe that we haven’t developed more sophisticated approaches. The reality is that we’ve tried many metrics, and they tend to all produce similar temperature patterns over time. So, in the end, we’ve kept it simple.

For the sake of this analysis, though, I’m going to dig into a couple of those other metrics. One metric that we calculate across the 10K keyword set uses a scoring system based on a simple CTR curve. A change from, say #1 to #3 has a much higher impact than a change lower in the top 10, and, similarly, a drop from the top of page one has a higher impact than a drop from the bottom. This metric (which I call “DeltaX”) goes a step farther, though…


If you’re still riding this train and you have any math phobia at all, this may be the time to disembark. We’ll pause to make a brief stop at the station to let you off. Grab your luggage, and we’ll even give you a couple of drink vouchers – no hard feelings.


If you’re still on board, here’s where the ride gets bumpy. So far, all of our metrics are based on taking the average (mean) temperature across the set of SERPs in question (whether 1K or 10K). The problem is that, as familiar as we all are with averages, they generally rely on certain assumptions, including data that is roughly normally distributed.

Core flux, for lack of a better word, is not remotely normally distributed. Our main Delta100 metric falls roughly on an exponential curve. Here’s the 1K data for October 21st:

The 10K data looks smoother, and the DeltaX data is smoother yet, but the shape is the same. A few SERPs/keywords show high flux, they quickly drop into mid-range flux, and then it all levels out. So, how do we take an average of this? Put simply, we cheat. We tested a number of transformations and found that the square root of this value helped create something a bit closer to a normal distribution. That value (Delta10) looks like this:

If you have any idea what a normal distribution is supposed to look like, you’re getting pretty itchy right about now. As I said, it’s a cheat. It’s the best cheat we’ve found without resorting to some really hairy math or entirely redefining the mean based on an exponential function. This cheat is based on an established methodology – Box-Cox transformations – but the outcome is admittedly not ideal. We use it because, all else being equal, it works about as well as other, more complicated solutions. The square root also handily reduces our data to a range of 0-10, which nicely matches a 10-result SERP (let’s not talk about 7-result SERPs… I SAID I DON’T WANT TO TALK ABOUT IT!).

What about the variance? Could we see how the standard deviation changes from day-to-day instead? This gets a little strange, because we’re essentially looking for the variance of the variance. Also, noting the transformed curve above, the standard deviation is pretty unreliable for our methodology – the variance on any given day is very high. Still, let’s look at it, transformed to the same temperature scale as the mean/average (on the 1K data set):

While the variance definitely moves along a different pattern than the mean, it moves within a much smaller range. This pattern doesn’t seem to match the pattern of known updates well. In theory, I think tracking the variance could be interesting. In practice, we need a measure of variance that’s based on an exponential function and not our transformed data. Unfortunately, such a metric is computationally expensive and would be very hard to explain to people.

Do we have to use mean-based statistics at all? When I experimented with different approaches to DeltaX, I tried using a median-based approach. It turns out that the median flux for any given day is occasionally zero, so that didn’t work very well, but there’s no reason – at least in theory – that the median has to be measured at the 50th percentile.

This is where you’re probably thinking “No, that’s *exactly* what the median has to measure – that’s the very definition of the median!” Ok, you got me, but this definition only matters if you’re measuring central tendency. We don’t actually care what the middle value is for any given day. What we want is a metric that will allow us to best distinguish differences across days. So, I experimented with measuring a modified median at the 75th percentile (I call it “M75” – you’ve probably noticed I enjoy codenames) across the more sophisticated DeltaX metric.
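
In code, the M75 is just a percentile read at a nonstandard point. Here’s a sketch using simulated exponential data in place of real DeltaX scores (which aren’t public):

```python
import numpy as np

# Simulated stand-in for 60 days of per-keyword DeltaX flux scores,
# which (as shown above) fall roughly on an exponential curve.
flux_by_day = [np.random.exponential(scale=2.0, size=10_000) for _ in range(60)]

# M75: read each day's flux distribution at the 75th percentile instead
# of the 50th. We aren't measuring central tendency; we want a point on
# the curve that best distinguishes one day from another.
daily_m75 = [np.percentile(day, 75) for day in flux_by_day]

# For comparison, the mean-based series that the simple metric uses.
daily_mean = [day.mean() for day in flux_by_day]
```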

That probably didn’t make a lot of sense. Even in my head, it’s a bit fuzzy. So, let’s look at the full DeltaX data for October 21st:

The larger data set and more sophisticated metric makes for a smoother curve, and a much clearer exponential function. Since you probably can’t see the 1,250th data point from the left, I’ve labelled the M75. This is a fairly arbitrary point, but we’re looking for a place where the curve isn’t too steep or too shallow, as a marker to potentially tell this curve apart from the curves measured on other days.

So, if we take all of the DeltaX-based M75’s from the 10K data set over the last 60 days, what does that look like, and how does it compare to the mean/average of Delta10s for that same time period?

Perhaps now you feel my pain. All of that glorious math and even a few trips to the edge of sanity and back, and my wonderfully complicated metric looks just about the same as the average of the simple metric. Some of the peaks are a bit peakier and some a bit less peakish, but the pattern is very similar. There’s still no clear sign of a Penguin 3.0 spike.

Are you still here?

Dear God, why? I mean, seriously, don’t you people have jobs, or at least a hobby? I hope now you understand the complexity of the task. Nothing in our data suggests that Penguin 3.0 was a major update, but our data is just one window on the world. If you were hit by Penguin 3.0 (or if you received good news and recovered) then nothing I can say matters, and it shouldn’t. MozCast is a reference point to use when you’re trying to figure out whether the whole world felt an earthquake or there was just construction outside your window. 


Eye Tracking in 2014: How Users View and Interact with Today’s Google SERPs

Posted by rMaynes1

In September 2014, Mediative released its latest eye-tracking research, entitled “The Evolution of Google’s Search Engine Results Pages and Their Effects on User Behaviour”.

This large study had participants conduct various searches using Google on a desktop. For example, participants were asked “Imagine you’re moving from Toronto to Vancouver. Use Google to find a moving company in Toronto.” Participants were all presented with the same Google SERP, no matter the search query.

Mediative wanted to know where people look and click on the SERP the most, what role the location of the listing on the SERP plays in winning views and clicks, and how click activity on listings has changed with the introduction of Google features such as the carousel, the knowledge graph, etc.


Mediative discovered that, just as Google’s SERP has evolved over the past decade, so too has the way in which search engine users scan the page before making a click.

Back in 2005, when a similar eye-tracking study was conducted for the first time by Mediative (formerly Enquiro), it was discovered that people searched in a distinctive “triangle” pattern, starting in the top left of the search results page where they expected the first organic listing to be located, and reading across horizontally before moving their eyes down to the second organic listing, and reading horizontally, but not quite as far. This area of concentrated gaze activity became known as Google’s “Golden Triangle”. The study concluded that if a business’s listing was not in the Golden Triangle, its odds of being seen by a searcher were dramatically reduced.

Heat map from 2005 showing the area known as Google’s “Golden Triangle”.

But now, in 2014, the top organic results are no longer always in the top-left corner where searchers expect them to be, so they scan other areas of the SERP, trying to seek out the top organic listing, but being distracted by other elements along the way. The #1 organic listing is shifting further down the page, and while this listing still captures the most click activity (32.8%) regardless of what new elements are presented, the shifting location has opened up the top of the page with more potential areas for businesses to achieve visibility.

Where scanning was once more horizontal, the adoption of mobile devices over the past 9 years has conditioned searchers to now scan more vertically—they are looking for the fastest path to the desired content, and, compared to 9 years ago, they are viewing more search results listings during a single session and spending less time viewing each one.

Searchers on Google now scan far more vertically than several years ago.


One of the biggest changes from SERPs 9 years ago to today is that Google is now trying to keep people on the results page for as long as it can.

An example is the case of the knowledge graph. In Mediative’s study, when searchers were looking for “weather in New Orleans”, the results page that was presented to them showed exactly what they needed to know. Participants were asked to click on the result that they felt best met their needs, even if, in reality, they wouldn’t have clicked through (in order to end that task). When a knowledge graph result exactly met the intent of the searcher, the study found 80% of people looked at that result, and 44% clicked on it. Google provided searchers with a relevant enough answer to keep them on the SERP. The top organic listing captured 36.5% of page clicks—compared to 82% when the knowledge graph did not provide the searcher with the answer they were looking for.

It’s a similar case with the carousel results; when a searcher clicks on a listing, instead of going through to the listing’s website, another SERP is presented specifically about the business, as Google tries to increase paid ad impressions/clicks on the Google search results page.

How can businesses stay on top of these changes and ensure they still get listed?

There are four main things to keep in mind:

1. The basic fundamentals of SEO are as important as ever

Create unique, fresh content that speaks to the needs of your customers, as this will always trump chasing the algorithm. There are also on-page and off-page SEO tactics you can employ to increase your chances of being listed in areas of the SERP other than your website’s organic listing, such as front-loading keywords in page titles and meta descriptions, getting listed on directories and ratings-and-reviews sites, and having social pages. It’s important to note that SEO strategy is no longer a one-size-fits-all approach.

2. Consider using schema mark-up wherever possible

In Mediative’s 2014 Google SERP research, it was discovered that blog posts that had been marked up using schema to show the picture and name of the author got a significant amount of engagement, even when quite far down the first page—these listings garnered an average of 15.5% of total page clicks.

Note:

As of August 2014, Google removed authorship markup entirely. However, the results are still a good example of how schema mark-up can be used to make your business listing stand out more on the SERP, potentially capturing more views and clicks, and therefore more website traffic.

In the study, participants were asked to “Imagine that you’re starting a business and you need to find a company to host your website. Use Google to find information about website hosting companies”. The SERP presented is shown below:

Almost 45% of clicks went to 2 blog posts titled “Five Best Web Hosting Companies” and “10 Best Web Hosting Companies”.

In general, the top clicked posts were those that had titles including phrases such as:

  • “Best…”
  • “Reviews of…”
  • “Top 5…”
  • “How-to…”

According to Google, “On-page markup helps search engines understand the information on webpages and provide richer results…Google doesn’t use markup for ranking purposes at this time—but rich snippets can make your web pages appear more prominently in search results, so you may see an increase in traffic.”

Schema markup is probably the most under-utilized tool for SEO, presenting a huge opportunity for the companies that do utilize this Google-approved tool. Searchmetrics reported that only 0.3% of websites use schema markup, yet over a third of Google’s results contain rich snippets (additional text, images, and links below the individual search results). BruceClay.com reports that rich snippets can increase the CTR of listings by 15-50%, and that websites using schema markup tend to rank higher in search results.

Schema mark-up can be used to add star ratings, number of reviews, pricing (all shown in the listing below) and more to a search results page listing.
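
As a hypothetical illustration of the kind of markup being described, the snippet below generates a schema.org JSON-LD block for a product listing with star ratings, review count, and pricing. All values are invented, and JSON-LD is just one of several ways to express schema.org data (microdata and RDFa work too):

```python
import json

# Invented example values -- swap in your real product data.
snippet = {
    "@context": "http://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "127",
    },
    "offers": {
        "@type": "Offer",
        "price": "24.99",
        "priceCurrency": "USD",
    },
}

# Embed the output in your page inside
# <script type="application/ld+json"> ... </script>
print(json.dumps(snippet, indent=2))
```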


3. Know the intent of your users

Understanding what searchers are trying to discover when they conduct a search can help determine how much effort you should put into appearing in the number one organic listing, which can be an extremely difficult task without unlimited budget and resources—and, even if you do make it to the number one organic listing, traffic is not guaranteed, as discovered in this research. If you’re competing with big-name brands, or ratings and review sites, and THAT is what your customers want, then you are going to struggle to compete.

The importance of your business being the first listing vs. simply on the first page, therefore, is highly dependent on the searcher’s intent, plus the strength of your brand. The key is to always keep user intent top-of-mind, and this can be established by talking to real people, rather than guessing. What are they looking for when they are searching for your site? Structure your content around what people really want and need, list your site on the directories that people actually visit or reference, create videos (if that’s what your audience wants)—know what your actual customers are looking for, and then provide it.

There are going to be situations when a business can’t get to number one on the organic listings. As previously mentioned, the study shows that this is still the key place to be, and the top organic listing captures more clicks than any other single listing. But if your chances of getting to that number one spot are slim, you need to focus on other areas of the SERP, such as positions #4 or higher, which will be easier to rank for—businesses that are positioned lower on the SERP (especially positions 2-4) see more click activity than they did several years ago, making this real estate much more valuable. As Gord Hotchkiss writes, searchers tend to “chunk” information on the SERP and scan each chunk in the same way they used to scan the entire SERP—in a triangle pattern. Getting listed at the top of a “chunk” can therefore be effective for many businesses. This idea of “chunking” and scanning can be seen in the heat map below.

To add to that, Mediative’s research showed that everything located above the top 4 organic listings (so, carousel results, knowledge graph, paid listings, local listings, etc.) combined captured 84% of clicks. If you can’t get your business listing to #1, but can get listed somewhere higher than #4, you have a good chance of being seen and clicked on by searchers. Ultimately, people expect Google to continue to do its job, and respond to search queries with the most relevant results at the top. The study points out that only 1% of participants were willing to click through to Page 2 to see more results. If you’re not listed on page 1 of Google for relevant searches, you may as well not exist online.

4. A combination of SEO and paid search can maximize your visibility in SERP areas that have the biggest impact on both branding and traffic

Even though organic listings are where many businesses are striving to be listed (and where the majority of clicks take place), it’s important not to forget about paid listings as a component of your digital strategy. Click-through rates for top sponsored listings (positions 1 and 2) have changed very little in the past decade. Where the huge change has taken place is in the ability of sponsored ads on the right rail to attract attention and clicks. Activity on this section of the page is almost non-existent. This can be put down to a couple of factors, including searchers’ conditioned behaviour, as mentioned before, to scan more vertically thanks to our increased mobile usage, and the fact that over the years we have learned that those results may not typically be very relevant, or as good as the organic results, so we tend not to even take the time to view them.

Mediative’s research also found that there are branding effects of paid search, even if it is not directly driving traffic. We asked participants to “Imagine you are traveling to New Orleans and are looking for somewhere to meet a friend for dinner in the French Quarter area. Use Google to find a restaurant.” Participants were presented with a SERP showing 2 paid ads—the first was for opentable.com, and the second for the restaurant Remoulade, remoulade.com.

The top sponsored listing, opentable.com, was viewed by 84% of participants, and captured 26% of clicks. The second listing, remoulade.com, only captured 2% of clicks but was looked at by 73% of participants. By being seen by almost 3/4 of participants, the paid listing can increase brand affinity, and therefore purchase (or choice) consideration in other areas! For example, if the searcher comes back and searches again another time, or clicks through to opentable.com and then sees Remoulade listed, the restaurant may benefit from a higher brand affinity from having already been seen in the paid listings. Mediative conducted a Brand Lift study featuring Honda that found the more real estate brands own on the SERP, the higher the CTR, and the higher the brand affinity, brand recognition, purchase consideration, etc. Using paid search for more of a branding play is essentially free brand advertising—while you should be prepared to get the clicks and pay for them, of course, it’s likely that your business listing will be seen by a large number of people without capturing the same number of clicks. Impression data can also be easily tracked with Google paid ads, so you know exactly how many times your ad was shown, and can therefore estimate how many people actually looked at it from a branding point of view.


Rebecca Maynes is a Marketing Communications Strategist with Mediative, and was a major contributor on this study. The full study, including click-through rates for all areas of the SERP, can be downloaded at www.mediative.com/SERP.


More than Keywords: 7 Concepts of Advanced On-Page SEO

Posted by Cyrus-Shepard

“What is this page about?”

As marketers, helping search engines answer that basic question is one of our most important tasks. Search engines can’t read pages like humans can, so we incorporate structure and clues as to what our content means. This helps provide the relevance element of search engine optimization that matches queries to useful results.

Understanding the techniques used to capture this meaning helps to provide better signals as to what our content relates to, and ultimately helps it to rank higher in search results. This post explores a series of on-page techniques that not only build upon one another, but can be combined in sophisticated ways.

While Google doesn’t reveal the exact details of its algorithm, over the years we’ve collected evidence from interviews, research papers, US patent filings, and observations from hundreds of search marketers to be able to explore these processes. Special thanks to Bill Slawski, whose posts on SEO By the Sea led to much of the research for this work.

As you read, keep in mind these are only some of the ways in which Google could determine on-page relevancy, and they aren’t absolute law! Experimenting on your own is always the best policy.

We’ll start with the simple, and move to the more advanced.

1. Keyword Usage

In the beginning, there were keywords. All over the page.

The concept was this: If your page focused on a certain topic, search engines would discover keywords in important areas. These locations included the title tag, headlines, alt attributes of images, and throughout the text. SEOs helped their pages rank by placing keywords in these areas.

Even today, we start with keywords, and it remains the most basic form of on-page optimization.

Most on-page SEO tools still rely on keyword placement to grade pages, and while it remains a good place to start, research shows its influence has fallen.

While it’s important to ensure your page at a bare minimum contains the keywords you want to rank for, it is unlikely that keyword placement by itself will have much of an influence on your page’s ranking potential.

2. TF-IDF

It’s not keyword density, it’s term frequency–inverse document frequency (TF-IDF).

Google researchers recently described TF-IDF as “long used to index web pages”, and variations of TF-IDF appear as a component in several well-known Google patents.

TF-IDF doesn’t simply measure how often a keyword appears; it offers a measurement of importance by comparing how often a keyword appears on a page against expectations gathered from a larger set of documents.

If we compare the phrases “basket” and “basketball player” in Google’s Ngram viewer, we see that “basketball player” is rarer, while “basket” is more common. Based on this frequency, we might conclude that “basketball player” is significant on a page that contains that term, while the threshold for “basket” to be significant remains much higher.

For SEO purposes, when we measure TF-IDF’s correlation with higher rankings, it performs only moderately better than individual keyword usage. In other words, generating a high TF-IDF score by itself generally isn’t enough to expect much of an SEO boost. Instead, we should think of TF-IDF as an important component of other, more advanced on-page concepts.
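
To make the concept concrete, here’s a toy TF-IDF calculation. Real systems use enormous corpora and many weighting variants, so treat this strictly as a sketch of the idea:

```python
import math
from collections import Counter

def tf_idf(term, doc_tokens, corpus):
    """Toy TF-IDF: term frequency in the document, weighted by how rare
    the term is across the corpus."""
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    docs_with_term = sum(1 for doc in corpus if term in doc)
    idf = math.log(len(corpus) / (1 + docs_with_term))
    return tf * idf

corpus = [
    "the basket was full".split(),
    "a basketball player scored".split(),
    "put the fruit in the basket".split(),
]

# "basketball" is rare across this tiny corpus, so it scores higher than
# the common word "basket" does, mirroring the Ngram example above.
print(tf_idf("basketball", corpus[1], corpus))
print(tf_idf("basket", corpus[2], corpus))
```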

3. Synonyms and Close Variants

With over 6 billion searches per day, Google has a wealth of information to determine what searchers actually mean when typing queries into a search box. Google’s own research shows that synonyms actually play a role in up to 70% of searches.

To solve this problem, search engines possess vast corpuses of synonyms and close variants for billions of phrases, which allows them to match content to queries even when searchers use different words than your text. An example is the query dog pics, which can mean the same thing as:

  • Dog Photos
  • Pictures of Dogs
  • Dog Pictures
  • Canine Photos
  • Dog Photographs

On the other hand, the query Dog Motion Picture means something else entirely, and it’s important for search engines to know the difference.

From an SEO point of view, this means creating content using natural language and variations, instead of employing the same strict keywords over and over again.

Using variations of your main topics can also add deeper semantic meaning and help solve the problem of disambiguation, when the same keyword phrase can refer to more than one concept. Plant and factory together might refer to a manufacturing plant, whereas plant and shrub refer to vegetation.

Today, Google’s Hummingbird algorithm also uses co-occurrence to identify synonyms for query replacement.

Under Hummingbird, co-occurrence is used to identify words that may be synonyms of each other in certain contexts, while following certain rules according to which the selection of a certain page in response to a query where such a substitution has taken place has a heightened probability.

Bill Slawski
SEO by the Sea

4. Page Segmentation

Where you place your words on a page is often as important as the words themselves.

Each web page is made up of different parts—headers, footers, sidebars, and more. Search engines have long worked to determine the most important part of a given page. Both Microsoft and Google hold several patents suggesting content in the more relevant sections of HTML carries more weight.

Content located in the main body text likely holds more importance than text placed in sidebars or alternative positions. Repeating text placed in boilerplate locations, or chrome, runs the risk of being discounted even more.

Page segmentation becomes significantly more important as we move toward mobile devices, which often hide portions of the page. Search engines want to serve users the portions of your pages that are visible and important, so text in these areas deserves the most focus.

To take it a step further, HTML5 offers additional semantic elements such as <article>, <aside>, and <nav>, which can clearly define sections of your webpage.
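
As a rough illustration of how a parser might weight text by section, here’s a sketch using the BeautifulSoup library. The weights are invented; search engines don’t publish how heavily they discount boilerplate:

```python
from bs4 import BeautifulSoup  # assumes: pip install beautifulsoup4

html = """
<article><h1>Dog Breeds</h1><p>Labradors are friendly, energetic dogs...</p></article>
<aside>Related links and ads...</aside>
<nav>Home | About | Contact</nav>
<footer>Copyright 2014</footer>
"""

soup = BeautifulSoup(html, "html.parser")

# Invented weights: main content counts fully, chrome is discounted.
WEIGHTS = {"article": 1.0, "aside": 0.3, "nav": 0.1, "footer": 0.1}

for tag, weight in WEIGHTS.items():
    for element in soup.find_all(tag):
        text = element.get_text(" ", strip=True)
        print(f"{tag} (weight {weight}): {text}")
```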

5. Semantic Distance and Term Relationships

When talking about on-page optimization, semantic distance refers to the relationships between different words and phrases in the text. This differs from the physical distance between phrases, and focuses on how terms connect within sentences, paragraphs, and other HTML elements.

How do search engines know that “Labrador” relates to “dog breeds” when the two phrases aren’t in the same sentence?

Search engines solve this problem by measuring the distance between different words and phrases within different HTML elements. The closer together the phrases appear, the more closely related the concepts may be. Phrases located in the same paragraph are closer semantically than phrases separated by several blocks of text.

Additionally, HTML elements may shorten the semantic distance between concepts, pulling them closer together. For example, list items can be considered equally distant to one another, and “the title of a document may be considered to be close to every other term in document”.
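
A toy sketch of the idea, with invented rules: treat terms that share a paragraph as distance zero, and otherwise count how many paragraphs apart they sit.

```python
paragraphs = [
    "Labrador retrievers are a popular family pet.",
    "Many dog breeds suit households with children.",
    "Contact us for more information.",
]

def semantic_distance(term_a, term_b, paragraphs):
    """Paragraphs apart that two terms appear; same paragraph = 0.
    Returns None if either term is missing entirely."""
    loc_a = loc_b = None
    for i, paragraph in enumerate(paragraphs):
        text = paragraph.lower()
        if term_a in text:
            loc_a = i
        if term_b in text:
            loc_b = i
    if loc_a is None or loc_b is None:
        return None
    return abs(loc_a - loc_b)

print(semantic_distance("labrador", "dog breeds", paragraphs))  # -> 1
```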

Now is a good time to mention Schema.org. Schema markup provides a way to semantically structure portions of your text in a manner that explicitly defines relationships between terms.

The great advantage schema offers is that it leaves no guesswork for the search engines. Relationships are clearly defined. The challenge is that it requires webmasters to employ special markup. So far, studies show low adoption. The rest of the concepts listed here can work on any page containing text.

6. Co-occurrence and Phrase-Based Indexing

Up to this point, we’ve discussed individual keywords and the relationships between them. Search engines also employ methods of indexing pages based on complete phrases, and of ranking pages on the relevance of those phrases.

We know this process as phrase-based indexing.

What’s most interesting about this process is not how Google determines the important phrases for a webpage, but how Google can use these phrases to rank a webpage based on how relevant they are.

Using the concept of co-occurrence, search engines know that certain phrases tend to predict other phrases. If your main topic targets “John Oliver,” this phrase often co-occurs with other phrases like “late night comedian,” “Daily Show,” and “HBO.” A page that contains these related terms is more likely to be about “John Oliver” than a page that doesn’t contain related terms.

Add to this incoming links from pages with related, co-occurring phrases, and you’ve given your page powerful contextual signals.
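
A minimal sketch of the co-occurrence idea, using the John Oliver example. The predictor phrases and scoring are invented for illustration:

```python
# Invented predictor lists -- a real engine would mine these from
# phrase co-occurrence statistics across its index.
RELATED = {
    "john oliver": ["late night comedian", "daily show", "hbo"],
}

def topical_support(topic, page_text):
    """Fraction of the topic's known co-occurring phrases found on the
    page: a rough proxy for phrase-based relevance."""
    phrases = RELATED.get(topic, [])
    if not phrases:
        return 0.0
    text = page_text.lower()
    return sum(phrase in text for phrase in phrases) / len(phrases)

page = "John Oliver, the late night comedian and Daily Show alum, now hosts on HBO."
print(topical_support("john oliver", page))  # -> 1.0
```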

7. Entity Salience

Looking to the future, search engines are exploring ways of using relationships between entities, not just keywords, to determine topical relevance.

One technique, published as a Google research paper, describes assigning relevance through entity salience.

Entity salience goes beyond traditional keyword techniques, like TF-IDF, for finding relevant terms in a document by leveraging known relationships between entities. An entity is anything in the document that is distinct and well defined.

The stronger an entity’s relationship to other entities on the page, the more significant that entity becomes.

In the diagram above, an article contains the topics Iron Man, Tony Stark, Pepper Potts, and Science Fiction. The phrase “Marvel Comics” has a strong entity relationship to all of these terms. Even if it only appears once, it’s likely significant in the document.

On the other hand, even though the phrase “Cinerama” appears multiple times (because the film was shown there), this phrase has weaker entity relationships, and likely isn’t as significant.
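
A toy version of the Iron Man example: score each entity by the strength of its relationships to the other entities on the page, regardless of mention count. The relationship weights are invented:

```python
# Invented relationship strengths between entity pairs.
RELATIONS = {
    ("marvel comics", "iron man"): 0.9,
    ("marvel comics", "tony stark"): 0.9,
    ("marvel comics", "pepper potts"): 0.8,
    ("marvel comics", "science fiction"): 0.6,
    ("cinerama", "iron man"): 0.1,
}

def salience(entity, page_entities):
    """Sum of an entity's relationship strengths to the page's other
    entities; mention frequency plays no role here."""
    total = 0.0
    for other in page_entities:
        total += RELATIONS.get((entity, other), 0.0)
        total += RELATIONS.get((other, entity), 0.0)
    return total

page_entities = ["iron man", "tony stark", "pepper potts", "science fiction"]
print(salience("marvel comics", page_entities))  # high despite one mention
print(salience("cinerama", page_entities))       # low despite many mentions
```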

Practical tips for better on-page optimization

As we transition from keyword placement to more advanced practices of topic targeting, it’s actually easy to incorporate these concepts into our content. While most of us don’t have the means available to calculate semantic relationships and entity occurrences, there are a number of simple steps we can take when crafting optimized content:

  1. Keyword research forms your base. Even though individual keywords themselves are no longer enough to form the foundation of your content, everything begins with good keyword research. You want to know what terms you are targeting, the relative competition around those keywords, and the popularity of those terms. Ultimately, your goal is to connect your content with the very keywords people type and speak into the search box.

  2. Research around topics and themes. Resist researching single keywords, and instead move towards exploring your keyword themes. Examine the secondary keywords related to each keyword. When people talk about your topic, what words do they use to describe it? What are the properties of your subject? Use these supporting keyword phrases as cast members to build content around your central theme.
  3. When crafting your content, answer as many questions as you can. Good content answers questions, and semantically relevant content reflects this. A top ranking for any search query means the search engine believes your content answers the question best. As you structure your content around topics and themes, make sure you deserve the top ranking by answering the questions and offering a user experience better than the competition.
  4. Use natural language and variations. During your keyword research process, it’s helpful to identify other common ways searchers refer to your topic, and include these in your content when appropriate. Semantic keyword research is often invaluable to this process.
  5. Place your important content in the most important sections. Avoid footers and sidebars for important content. Don’t try to fool search engines with fancy CSS or JavaScript tricks. Your most important content should go in the places where it is most visible and accessible to readers.
  6. Structure your content appropriately. Headers, paragraphs, lists, and tables all provide structure to content so that search engines understand your topic targeting. A clear webpage contains structure similar to a good university paper. Employ proper introductions, conclusions, topics organized into paragraphs, spelling and grammar, and cite your sources properly.

At the end of the day, we don’t need a supercomputer to make our content better or easier to understand. If we write like humans, for humans, our content goes a long way toward being optimized for search engines. What are your best tips for on-page SEO and topic targeting?


Special thanks to Dawn Shepard, who provided the images for this post.


Convincing Old-School Clients that Things Have Changed

Posted by Kristina Kledzik

There’s a reason we use the terms “white hat” and “black hat” for SEO: it used to be the Wild West. Black hat tactics were so effective, they were almost necessary to market online. Paying a few thousand dollars to an SEO could get you to rank #1 for almost any term (before you let them go and your competitor paid them the same to outrank you). You only got a few thousand dollars in return for that ranking, though, since there weren’t many people shopping online yet.

Fast forward to today: Ranking well on Google is insanely profitable—much more so than it ever was in the early days—and Google’s algorithm has advanced dramatically. But former SEOs and people outside our industry still hold on to that idea that a few thousand dollars of “technical SEO” can make them magically rank #1.

So, how do you convince your old-school clients that things have changed?

The immediate answer

When this comes up in conversation, I have a few trump phrases that usually bring clients around:

  • “Yeah, that used to be a great tactic, but now it puts you at risk for getting a penalty.” (Really, any response that includes the word “penalty” stops clients in their tracks.)
  • “That makes sense, but Matt Cutts said…” / “Good point, but Google’s official blog recommends…”
  • “I / another coworker / another client / a Mozzer has tried that, and it had disastrous results…”

Basically, acknowledge their idea as valid so you don’t insult them, then explain why it won’t work in a way that scares the shit out of them by mentioning real repercussions. Or, you know, just persuade them gently with logic.

If you can’t persuade/scare the shit out of them, tell them you’ll do some research and get back to them. Then do it.

If that doesn’t work…

Okay, so you have answers for on-the-spot questions now. They will work anywhere from moderately well to amazingly well, depending on your delivery and the respect you’ve gained from your client. But the client may ask for more research, or be skeptical of your answer. To be really effective, the right answer has to be coupled with a lot of respect and a logical, well-delivered explanation. 

Many of you are probably thinking, “I establish respect by being right / talking professionally / offering a lot of case studies during the sales process.” That’s the sort of thinking that doesn’t earn respect. You gain respect by consistently being:

1. Respectful, even if your clients are wrong

It’s embarrassing to be wrong. When your client says, “What meta keywords should we put on this page?” and you chuckle and say, “Gosh, meta keywords haven’t been used in so long—I don’t even think Google ever used them,” your client is going to fight you on it, not because they’re particularly invested in the idea of using meta keywords, but because you’ve made them feel wrong.


So when your client is wrong, start by validating their idea. Then, explain the right solution, not necessarily digging into why their solution is wrong:

Client: What meta keywords should we put on this page?

You: Well, I’m going to put together some keywords to target on this page next week, but making them meta keywords won’t make much of a difference. Google doesn’t look at them because they’re so easy to spam (wouldn’t it be nice if it did?). Anyway, when I send you those keywords that we should target, I’ll also include what we need to change on the page in order to target them.

Answering like this will keep your conversations positive and your clients open to your ideas, even if your ideas conflict directly with theirs. 

2. Honest

You’re probably smart enough not to make up client anecdotes or lie about what Matt Cutts has said. Where I usually see dishonesty in consulting is when consultants screw up and their clients call them on it. 

It looks bad to be wrong, especially when someone is paying you to be right. It’s even worse to be caught in a lie or look dishonest. Here’s my mantra: It’s not wrong to make an honest mistake. When clients tell you you’ve done something wrong, consider it a misunderstanding. Briefly explain where you were coming from and why you did what you did, then fix it.

(Note: this obviously doesn’t work if you made a stupid mistake. If you made a stupid mistake, apologize and offer to fix it, free of charge. It’ll lose you some money up front, but it’ll be worth it in the long run.)

3. Direct

This is the best outline for any answer:

  1. Brief answer, in one sentence
  2. Deeper explanation of answer
  3. Information to back it up
  4. Reiteration of brief answer

I can’t tell you how many times I’ve heard another consultant (or myself) not be entirely sure of an answer and ramble on for a couple of minutes before stopping to complete silence from their client. Or know the answer but think it’s too complicated and deliver an answer that only confuses their client more.

By starting with the answer, the client already knows what’s coming, so all other information you give after that will naturally support your answer as you go, rather than possibly leading them down the wrong path. Consider these alternatives:

Standard answer:

Client: How much will this increase our rankings?

You: Competition is always a huge part of the equation, so we’ll have to look into that. It’s easier to rank for, say, “yellow sapphire necklaces” than “blue sapphire necklaces” because there are more blue sapphire necklaces out there. But this is definitely what we should do to increase our rankings.

Direct answer:

Client: How much will this increase our rankings?

You: I don’t know, it’s not something that we can definitively say in SEO, unfortunately. Competition is a huge part of the equation, so we’ll have to look into that. But, regardless, this is the most effective action that we could take to increase our rankings.

The more direct answer admits doubt, but is still much more convincing in the end (though both are vague and obviously top-of-mind examples… just ignore that). 

4. Complimentary and inclusive

It’s called the Benjamin Franklin Effect: “He that has once done you a kindness will be more ready to do you another, than he whom you yourself have obliged.” (Props to Rob Ousbey for telling me about this.)

When your client has done something right, compliment them on how they’ve made your job easier since you don’t have to fix their mistakes. When your client has done something wrong, let them know what they should do to fix it, but help them share in the work to make the change. It’ll make the client feel valued and it’ll take a big part of the workload off of you.

5. Proactive

Good project management is the key to effective consulting. When clients don’t know what you’re working on, they get worried that you’re wasting their money. Make sure that you consistently:

  • Meet; I like to have scheduled meetings once a week
  • Share a 3-6 month project plan, with dates and deliverables outlined
  • Ship those deliverables on time
  • Respond to emails within a day or two, even if the answer is “Great question! I’m prioritizing [other project for the same client right now], can I get back to you in a week or so?”
  • Follow up with open questions; if a client asks you a question in a meeting that you can’t answer, admit you don’t know, say you’ll get back to them after you research it, then actually do that

I think project management is often dropped because it seems so easy that it gets de-prioritized. Don’t believe that: this may be the most important of the five traits I’ve listed.

To sum it up: be honest, selfless, and proactive, and your clients are going to love you.

Even if you’re a terrible SEO (though try your best to be a good one), clients are going to respect consultants who put their clients’ business first, are open and honest about what they’re doing and thinking, and get their work done without being micromanaged.

Now that you’ve earned your client’s respect, they will be open to you changing their mind. You just have to give them a reason to.

Nail it with a great argument

When a client says, “Can we rank for ‘trucks’ by putting the word ‘truck’ as the alt text to each image on this page?” our mind immediately says, “No, why would you think that?” That’s not going to win the argument for you.

The reason we SEOs say “why would you think that?” is that we know the answer. So, teach your client. Start by validating their idea (what did we just learn about clients being wrong?), then explain the right answer, then explain why their answer won’t work:

Client: Can we rank for “trucks” by putting the word “truck” as the alt text to each image on this page?

You: Well, that would certainly get “trucks” on the page more often! To really optimize the page for “trucks,” though, we’ll need to put it in the page title, and a few times in the body of the page. SEO is all about competition, and our competition is doing that. We have to at least match them. Once the page is optimized for “trucks,” though, we’ll still have to work to get more backlinks and mentions around the web to compete with Wikipedia, which ranks #1 right now for “trucks.”

Don’t focus too much on their mistake. The more time you spend on the disagreement, the more frustrated your client will get; the more time you spend on your solution, the more impressed they’ll be with you.

If that doesn’t work, do the research to tell an even better story:

  • Give examples from other clients. Don’t give away too many names, of course, but knowing that you’ve solved this problem or a problem like it in the past makes clients feel much more confident in you.
  • If you’ve never seen this problem before, reach out to your SEO community. One of the best parts of working at Distilled is that when a client off-handedly emails me a question, I can email all Distilled consultants and usually get an answer (or at least an educated guess) within an hour or so. If you work on your own, build a community online, through Moz or another online portal, and ask them.
  • Forecast the effects of your solution. I’ll be the first to admit, I’m not good at this because it can take a long time. But if your client is resistant, it’s definitely worth the trouble. Take clients through how you worked out the forecasting so they can see how much they’ll gain by working with you.
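
For that last bullet, one simple forecasting model multiplies monthly search volume by an expected clickthrough rate at the projected ranking position. A toy sketch; the CTR-by-position figures are hypothetical placeholders, so substitute numbers from your own analytics or a published CTR study:

```python
# Hypothetical clickthrough rates by ranking position.
ctr_by_position = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def extra_monthly_visits(monthly_searches, current_pos, projected_pos):
    """Estimate the visit lift from moving between two ranking positions."""
    ctr = lambda pos: ctr_by_position.get(pos, 0.02)  # long-tail fallback
    return monthly_searches * (ctr(projected_pos) - ctr(current_pos))

# e.g., moving a 1,000-search/month keyword from #5 to #2:
print(extra_monthly_visits(1000, current_pos=5, projected_pos=2))  # -> 100.0
```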

Once you’ve got proof behind your argument, restate your position, add your new arguments, and then follow up with your position and what you recommend your client does now. Make sure that you end in an action so there’s something concrete for them to focus on. 

Practice, practice, practice delivery

You can have the perfect explanation and a great relationship with your client, but if you trip over your own words or confuse your client, you won’t be convincing.

Written reports

Edit the report multiple times. Only include information that directly leads to an action item; don’t include everything they already know, or that just shows you did your homework. That stuff is boring and will encourage your client to skim, which often leads to misinterpretations. Next, have a friend who’s been in SEO for a while and knows about this old-school stuff edit it. It’s hard to know where your descriptions might break down without someone else’s perspective.

Verbal presentations 

Practice your presentation ahead of time: talk through your recommendations with a friend or coworker. Have them interrupt you, because you will definitely be interrupted when you’re talking to your client. Make sure you’re okay with that, and that you can handle a separate conversation and then jump back into the report.

For presentations that are brief and over the phone, make sure you’ve already sent your client something written. If it’s a report, make it clear and to the point (as described above); if it’s not, outline the action items in an email or a spreadsheet so your client has something concrete to look at as you discuss. I’ve also found that clients digest information much better when they’ve already read it.

For big presentations (the ones that need an accompanying PowerPoint), follow the same advice I gave in the written report section: edit to be succinct, and get feedback.

This is pretty much a post on good consulting

I’ve consulted with clients on technical SEO, promotions/outreach, creative, and content strategy projects, and I’ve found that the key to being effective in every one is (a) coming up with a good answer, and (b) everything discussed in this post. Building respect and communicating effectively are the foundation that supports your answers in almost every relationship: consulting, in-house, or even personal. The key to convincing your clients that their black hat, overly white hat, or completely UX-based solutions are wrong is much the same in every case.

So what do you think? What resistance have you come up against in your consulting projects? Share in the comments below!


Scaling Geo-Targeted Local Landing Pages That Really Rank and Convert – Whiteboard Friday

Posted by randfish
One question we see regularly come up is what to do if you’re targeting particular locations/regions with your site content, and you want to rank for local searches, but you don’t actually have a physical presence in those locations…


Open Site Explorer’s New Link Building Opportunities Section (and a Slight Redesign)

Posted by randfish

Why hello there! You’re looking marvelous today, you really are. And, in other good news, Open Site Explorer has a bit of a new look—and an entirely new section called “Link Opportunities” to help make some link prospecting tasks easier and more automated. Come with me and I’ll show you; it’ll be fun :-)

The new look

We know a lot of folks liked the old tab structure but we ran out of space. With this redesign we now have the flexibility to add new features and functionality simply by popping in new sections on the left sidebar menu. It’s a little bit more like Moz Analytics, too, and we figure some cohesion between our products is probably wise.

  • New side navigation with plenty of room to grow and add new features (spam scoring and analysis, for example, will be coming in Q4—but shhh… I didn’t actually ask for permission to talk about that yet. I figure begging forgiveness will work.)
  • Improved filtering that lets you slice and dice your link data more easily.
  • Notice how fast the new OSE is? Oh yeah, that’s the stuff :-)

You can still access the old Open Site Explorer’s design for a few more weeks, but the new features will exist only in the new version.

Introducing the new link opportunities section

Need help finding outreach targets for your link building campaign? We’re introducing three new reports that will help you build a curated list of potential targets. The new reports are available to all Moz Pro subscribers. If you’re a community member, sign up for a Moz Pro Free Trial and you, too, can kick it with the new functionality.


Reclaim links

A filtered view of Top Pages that lets you easily export a ranked list of URLs to fix.


Unlinked mentions

Powered by FreshScape, this report uses Fresh Web Explorer queries to find mentions of a brand or site without links. Ping sources that may have talked about your brand, website, people, or products without giving you a link, and you can often encourage/nudge that link into existence (along with the great SEO benefits it brings).
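
Conceptually, the check is simple: for each page that mentions you, see whether it ever links to your domain. Here’s a rough sketch of that idea (not how the report itself is implemented), using the third-party requests and beautifulsoup4 packages and hypothetical URLs:

```python
import requests
from bs4 import BeautifulSoup

def unlinked_mentions(mention_urls, your_domain):
    """Return the mention pages that never link to your domain."""
    unlinked = []
    for url in mention_urls:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        hrefs = (a.get("href", "") for a in soup.find_all("a"))
        if not any(your_domain in href for href in hrefs):
            unlinked.append(url)
    return unlinked

print(unlinked_mentions(["https://blog.example.com/great-review"], "yoursite.com"))
```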


Link intersect

Find pages that are linking to your competitors but not you. By entering two competitive domains (they don’t have to be directly competitive; anyone you think you should be on lists with, or mentioned by the press alongside, is a good candidate), you can see pages that link to those sites but not yours. Getting creative with your targets here can reveal loads of awesome link opportunities.
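
Under the hood, this is set logic: intersect the pages linking to your competitors, then subtract the pages that already link to you. A toy sketch with hypothetical link data (a real list would come from a link index export):

```python
# Hypothetical linking pages pulled from a link index export.
links_to_competitor_a = {"blog.example/roundup", "news.example/story", "lists.example/top10"}
links_to_competitor_b = {"news.example/story", "lists.example/top10", "forum.example/thread"}
links_to_you = {"lists.example/top10"}

# Pages that link to both competitors but not to you.
opportunities = (links_to_competitor_a & links_to_competitor_b) - links_to_you
print(sorted(opportunities))  # -> ['news.example/story']
```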


This, however, is just the beginning. Be on the lookout for additional insights and opportunities as we improve our link index—we’ve just recently grown the size of FreshScape, which powers Fresh Web Explorer and two of the sections in Link Opportunities, so you should find lots of good stuff in there, though crafting the right queries can be a challenge. If you’re struggling with query formatting or getting creative around potential opportunities, let us know (in the comments or via Q&A) and we can give you some pointers or maybe find some searches that do the trick.

What about the old OSE?

We changed the workflow a bit and want to make sure you’ve got time to adjust. If you’re cranking through monthly reports or audits and want a more familiar OSE experience, you can switch to OSE Classic for a limited time. Just click on the “View in OSE Classic” link in the top right, and we’ll default to the old version.

But keep in mind new features and enhancements, like improved performance and Link Opportunities, will only be available in the new release. We’ll keep OSE Classic active until December 3rd in case you’re feeling nostalgic.

We’d love your feedback

If you’re using the new OSE and find problems, wish we’d change something, or have a particularly awesome experience, we’d love to hear from you in the comments below, in Q&A, or (especially if your issue is urgent/something broken) via our help team.


4 Ways to Build Trust and Humanize Your Brand

Posted by MackenzieFogelson

Hey there Mozzers. This is a collaborative post between me and a good friend of mine, Matthew Sweezey. Sweezey is the head of B2B Marketing Thought Leadership at Salesforce.com, and he knows a whole lot about marketing. We’re excited to share this post with you and look forward to your feedback in the comments below.

In 1999, AdAge released its list of the most influential advertising campaigns of all time. At the top of the list was ‘Think Small,’ a campaign that introduced the Volkswagen Beetle to America. It was given top honors not because of its graphical juxtaposition or its catchy copy, but rather because of its honest approach. It was the first major campaign to go against what the American consumer said they wanted. When Chevy was telling consumers bigger was better, Volkswagen acknowledged their shortcomings and advised consumers to ‘Think Small.’

When a brand is able to make a sincere connection with a consumer, something incredibly powerful happens. Beyond mere fleeting impact, that moment of connection provides a foundation for long-term advocacy, loyalty, and a sustainable bottom line.

The average consumer in today’s market is exposed to more than 5,000 advertising messages per day, wields more computing power in their hand than NASA had to land a man on the moon, and can make a decision about your website in 1/20 of a second. Consumers are overwhelmed with images and sales pitches, and they want a more emotional connection from the companies they support. With this much noise and such fickle consumers, companies that are able to genuinely connect with their customers and community of supporters will have a strategic advantage over those that don’t.

As businesses, we should not look at marketing solely as the ability to sell things, but as the conduit for building relationships. Nourishing this conduit requires all the same steps as any relationship building: For reasons both emotional and practical, you have to build a real connection, listen and take action based on what you hear, prioritize the relationship itself, and deliver on the promises you make.


1. Build an emotional connection

Being authentic and genuine isn’t something that companies can fake. Consumers are smart, and they expect a lot from the brands they choose to support. More than a great product or service, it’s the passion and cause at the core of the company that builds this much deeper emotional connection between the brand and the customer. All of this can be fostered through personal, meaningful, and relevant content.

Always does a pretty incredible job of this in their #LikeaGirl campaign.

At the heart of this video is a powerful topic that attracts the attention of girls (and women) everywhere: the confidence that girls possess dramatically declines during puberty.

Always is on a mission to “champion girls’ confidence.” This is a cause that anyone who watches this video can be inspired to change, but certainly the target is girls and women all over the world.

What’s so captivating about the video is that Always uses the authenticity of story and the transparency of a behind-the-scenes video shoot to build a powerful, emotional association with their brand. It’s an approach that wouldn’t work if Always were taking up this cause just for the numbers on social. Because all of the people in the video are genuinely invested, the message is delivered with conviction not only from the director but also the young adults who are featured in the commercial; they all believe it, too.

What’s great about campaigns like #LikeaGirl, and others like this that Always supports, is that Always isn’t solely focusing their content on promoting their products. They’re supplementing self-promotional efforts with real causes, revealing their sympathy and sensitivity to things that are important to their customers. If they’re this compassionate on the sociological side, and they actually live up to these expectations when their customers are using their products, the relationship will be easily formed, and the deep, emotional connection to their brand will be built.

Making a connection with personality

What if you don’t have a universal cause like girls’ confidence or a large budget to work with? You can build an authentic and emotional connection with your customers and community even when doing simple things like administering a survey.

To improve their video hosting product, Wistia created a “Take a Survey” video, featuring their entire company actually doing the Hustle, to collect useful feedback. Remember, Wistia is only a fun brand because they choose to be. The video hosting industry is neither sexy nor exciting; nevertheless, they used their content to show off their personality and build a connection with their customers.

What Do You Want to Learn from Wistia? | Wistia at Work

Rather than going with the generic (and more expensive) gift card or iPad raffle, they decided to use something that was genuine and aligned with their culture as an incentive to complete the survey.

This was not a stretch for Wistia. Their personality and natural brand authenticity show through in everything they do in their marketing. The sincere enjoyment Wistia takes in creating these types of videos is infectious and builds an instant, emotional connection with their audience. People loved it so much that Wistia earned their highest engagement on a survey ever. They saw this as a chance to make a connection, even though they were asking their audience to take a survey.

For Wistia, and for companies building an emotional connection in general, it’s not about doing things like this just to market; it’s got to be part of your culture. That’s where the authentic and emotional connection really comes from. Wistia’s customers know that they genuinely care about what they think and they’d do anything, even dance The Hustle, to show their commitment to the relationship. This is a bond that will help them maintain a connection with their customers long-term.

Making a connection in a boring niche

But the thing is, if your company is not in an innately creative industry, that doesn’t mean you can’t create an emotional connection with your customers. You just might have to work a little harder to figure out what will initiate that bond.

Emotional connection most certainly comes from authenticity, but it also comes from a shared interest. Both Always and Wistia have a shared interest with their fans. Always sells feminine products, so supporting and championing young women is an easy fit. Wistia is a video hosting company, so educating (or asking for favors) through video is a natural place to make that connection.

In boring industries, for companies who are willing to dig a little deeper for a common interest and also use some personality, there’s just as much opportunity to forge the emotional connection. You can start by capitalizing on 
Ian Lurie’s Random Affinities to identify the possible interests of your customers. All you need is two random ideas that don’t necessarily have any connection except for a shared interest with your audience.

It’s a great way to show your customers that you have a personality and because you’re working to spark that emotional connection, you’ll have a much easier time building that relationship.

You can also forge an emotional connection by getting involved — both on and offline — in things outside your niche: joining, supporting, or sponsoring community events or causes. But again, the effort needs to be genuine, so pick a cause that you truly believe in and would support regardless of the recognition or positive regard you would earn from your customers. The passion that you feel will be both contagious and attractive to your potential audience.


2. Listen and respond with action

It’s one thing to provide the opportunity for your customers and community to give their feedback and voice their desires. It’s entirely another to show them you’ve listened by responding through action. When you truly listen to someone, you gain their trust, and more importantly, their respect.

Seamly is a company that uses surplus fabric to create unique, limited-edition clothing. They take listening to a whole new level by aligning customer feedback with production: they crowdsource the design of the next pieces in their line.

Seamly collects the feedback on their website and then begins producing the apparel.

Additionally, as Seamly addresses challenges that arise during production, they provide their audience with an opportunity to participate in making decisions that ultimately affect them.

Listening at this level will not only make a difference in Seamly’s products (and their sales), but in the relationships they have with their customers. Being human and
showing their customers that they’ve been heard will build a deeper and lasting connection with the company and their brand.

When you make great things, and you connect with your customers on this level, they love you. They write about you. They tell their friends. They do the work for you.

The thing about listening is that it’s not just about interaction. It’s about providing the opportunity for actual human people to participate in your company’s market research. Seamly is focusing on what their customers actually need rather than just following a fashion trend. They’re not designing their clothes based on what a focus group put together with an ad agency in New York. They’re not allowing the fashion world to dictate. They’re creating and detailing garments that their actual customers like.

Seamly’s approach works because they’re listening to their customers and giving them exactly what they need. That makes Seamly real to them, because they’re communicating an understanding of each customer’s individual needs. That’s real. That’s genuine. And that’s exactly what inspires trust and loyalty.

Human responses increase sales

There’s a large online bridal retailer that ships thousands of items every day, and on occasion they make a mistake with an order. In an attempt to humanize their brand and listen to their customers (rather than just doing what they, as a retailer, would prefer), they set up a split test to determine which way of apologizing to their customers would be most effective.

To Group A, they sent a $50 gift card, and to Group B, they personally called to apologize. Once the experiment had run, the retailer followed up with each group of customers to ask whether they would be likely to buy from them again.

Group B, the group that received a personal phone call apology, was twice as likely to buy from them again. Because they listened, this retailer discovered that a personal, human connection — not a gift card, and not an email, but a real live human conversation — was more meaningful to their customers.

This may not be true for every company or every customer, but finding out directly from your customers what they prefer, and then doing exactly that, will show them you’re listening, you care, and you’re worth their money, advocacy, and support.
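
If you run a split test like this yourself, it’s worth checking that the difference isn’t just noise before acting on it. A minimal sketch using the third-party statsmodels package; the counts are invented, since the retailer’s actual sample sizes aren’t public:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: repeat purchases out of customers contacted, per group.
repurchases = [30, 60]   # Group A (gift card), Group B (phone call)
customers = [200, 200]

z_stat, p_value = proportions_ztest(repurchases, customers)
# A small p-value suggests the doubled repurchase rate is unlikely to be chance.
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```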


3. Put the relationship ahead of conversions

In his book Permission Marketing, Seth Godin pioneered the idea of content marketing: create something of value that will motivate people to provide you with an email address. The problem is, we’re competing for attention amid so much noise that earning that conversion has become increasingly difficult. There are over 80 billion business emails sent every week and over 200 million hours spent on YouTube. Consumers have access to a lot of content, and are weary of another brand sending more emails into their already cluttered lives.

Conversions are a by-product of great relationships. Relationships built on empathy, transparency, and honesty are the ones that last and drive a lifetime of conversions.

The key to creating content that will convert is to optimize for the relationship with the consumer, not the conversion.

About a year ago, Mack Web launched a community building guide. It was our first big “product” and although it took us about 8 months to finish it, we were confident that all of our hard work was going to make a big statement about our company and brand. We thought giving it away for free — no email address and no monetary transaction required — would make the biggest statement of all.

It definitely did.

In less than 12 months, Mack Web earned nearly 6,000 guide downloads (not to mention 373 inbound links). We didn’t require anyone to provide an email address in order to download the guide, but since the guide has launched, we’ve experienced all kinds of amazing benefits including increasing our organic email subscriptions by 50%. The most common feedback we received after launching the guide was that the people reading it couldn’t believe they were giving it away for free.

Mack Web probably could have charged for our guide, but we’re confident that giving this level of content away, no strings attached, helped make a lasting connection with our audience in the very early and formative stages of building our brand. At some level, we’ve planted a tiny seed with 6,000 people by providing them something of value for free and, ultimately, earning their trust first.

People know when you’re not being genuine and putting on a front just to get something in return. If you’re not thinking of the customer first and providing them with the things they need, it won’t matter if you have an email address; you won’t build a connection or earn the opportunity of a relationship.

The trick Mack Web learned from giving something away without asking anything in return?

You don’t need any tricks.

Lead with integrity and put your customers ahead of your bottom line.


4. Deliver on your promises

There are undercurrents to every digital interaction you have with your customers. Every promotion, everything you say about your brand, everything you convey about your brand is a promise. Every conversion, every time they choose to buy, download, or subscribe, is an agreement. You promise to provide something of value or to care about certain things or to work toward certain goals; they promise to engage with you as a result.

But here’s the twist: each of these promise-and-deliver interactions is actually a negotiation for further, richer engagement. How you deliver on your promise dictates what happens next: do you build a relationship or do you lose a fan?

There are over 150 million blogs online and 500 million tweets per day. The content choices a person has are endless, so you have to give them a reason to engage with you – to deliver on the value you promised, the value that attracted them to you in the first place. You can’t afford to take consumers for granted and forget that it’s a negotiation, because they certainly won’t. They are constantly bombarded by ads, by links, and by reminders that they have many, many options.

If you fail to remember this, you may be spending money only to drive people away.

In a study Sweezey conducted of over 400 B2B buyers, he found that 71% of consumers have been disappointed with the content they downloaded from a business. Of those 71%, 25% would never read content from the business again due to their disappointment.

This is, quite simply, because they really don’t have to. If you don’t care to fully listen to and empathize with their needs, to provide them the fullest, richest experience, they can easily find the relationship they are seeking elsewhere. Not surprisingly, 49% of consumers who have a bad experience with content said it had a serious effect on their trust of the brand.

A few weeks ago, I did a webinar for Piqora. Almost 900 people signed up for the webinar, and because I was the presenter, Piqora offered to provide me with the email list. I immediately told Piqora that we wouldn’t have any use for it. I’m not overly fond of companies automatically adding me to a list I didn’t sign up for, and I didn’t want to return the annoyance and drive people away from our brand, especially on a first experience.

In fact, depriving our audience of choice goes explicitly against one of Mack Web’s values: human-centricity. If there’s a practice that personally annoys us, using it on others would be a pretty severe break in the promise of our own brand.

Still, it would be a shame to pass up on the opportunity to connect with these 900 people. So instead, Mack Web asked them:

We didn’t get all 899 people who signed up for the webinar, but what we did get was 66 people who, of their own free will, felt confident about connecting with Mack Web this way. That effort made it our highest signup month ever.

The best part for us though, was getting feedback like this:

Because this is how it could have gone:

We’ve all been there. You voluntarily sign up to listen to a webinar, or download a whitepaper you’re really excited about reading. But then, immediately following, in addition to the content you wanted, you also get spammed with all the stuff you never asked for. Immediate relationship infringement.

Giving readers the opportunity to choose demonstrates respect for their needs, which makes you human. Then, all you have to do is continue delivering on the experience your audience expects so that you can maintain their trust over time.

Follow through both on- and offline

Delivering on your promise as a company needs to happen on every channel, not just your website, social, or email marketing. The experience is everything before, during, and after the interaction, and you must meet customer expectations both on and offline.

Kmart did great work humanizing their brand with the ‘Ship my Pants’ campaign of creative TV commercials, but the experience falls flat when it comes to their actual stores.

The majority of reviews of the Chicago, Illinois Kmarts fall below 3 stars; some are even as low as 2 out of 5.

Where Kmart has gone wrong is by treating their marketing and their in-store performance as separate entities. To reconcile the difference between their ads and their stores, Kmart could choose to be transparent about the subpar performance offline. They could share their plans for improvement with their customers. If their customers and community are aware that they are facing issues and trying to tackle them, rather than covering them up with a creative campaign, Kmart may earn their compassion and trust.

Everything you do as a company communicates the experience of your brand, not just your marketing copy or paid ads. 
Every touch point offers an opportunity to develop an honest relationship with the people who are coming in contact with you. Put honesty and authenticity first and you’ll provide an amazing experience with your brand.


The good news and the bad news

The good news is that all of this stuff is pretty simple. We’re all human beings. We all work with other human beings. We all know that we need to treat our customers how we would like to be treated.

The bad news is that simple is not the same thing as easy. Humanizing your brand, building trust, fostering an authentic and lasting connection with your customers is hard work. It doesn’t necessarily scale. And unless you can tap into some genuine, authentic passion of your own, the connection is going to be a whole lot harder to ignite.

The companies that can do this stuff and do it well are the ones that have, at their heart, a purpose deeper than making money. Maybe it’s something everyone can connect to, like Always advocating for girls everywhere. Maybe it’s something closer to home, like providing wearable, unique clothes tailored to your customers’ needs and tastes, as Seamly does. Whatever your goals, your real passion and drive for that meaning beyond money will keep you going, will inspire you to relentlessly improve your products, and will ensure that your brand is memorable and desirable to your customers.

Which is why all this humanizing stuff has to start from the inside. In order for it to be successful, you have to get the whole company on board and genuinely excited about providing the full human experience to your customers. If your marketing doesn’t come from your core, it’s not going to forge a genuine and emotional connection with your customers, and it certainly won’t help you foster the growth of a community.

Flashy ads can help you stand out for a moment. But for the longest-lasting and most loyal customers, you don’t have to outspend or outdo everyone else. You just have to outthink them and do the simple stuff that real humans do.



Announcing the 2014 Local Search Ranking Factors Results

Posted by David-Mihm

Many of you have been tweeting, emailing, asking in conference Q&As, or just generally awaiting this year’s Local Search Ranking Factors survey results. Here they are!

Hard to believe, but this is the seventh year I’ve conducted this survey—local search has come a long way since the early days of the 10-pack way back in 2008! As always, a massive thanks to all of the expert panelists who in many cases gave up a weekend or a date night in order to fill out the survey.

New this year

As the complexity of the local search results has increased, I’ve tried to keep the survey as manageable as possible for the participants, and the presentation of results as actionable as possible for the community. So to that end, I’ve made a couple of tweaks this year.

Combination of desktop and mobile results

Very few participants last year perceived any noticeable difference between ranking criteria on desktop and mobile devices, so this year I simply asked that they rate localized organic results, and pack/carousel results, across both result types.

Results limited to top 50 factors in each category

Again, the goal here was to simplify some of the complexity and help readers focus on the factors that really matter. Let me know in the comments if you think this decision detracts significantly from the results, and I’ll revisit it in 2015.

Factors influenced by Pigeon

If you were at Matt McGee’s Pigeon session at SMX East a couple of weeks ago, you got an early look at these results in my presentation. The big winners were domain authority and proximity to searcher, while the big losers were proximity to centroid and having an address in the city of search. (For those who weren’t at my presentation, the latter assessment may have to do with larger radii of relevant results for geomodified phrases).

My own takeaways

Overall, the algorithmic model that Mike Blumenthal developed (with help from some of the same contributors to this survey) way back in 2008 continues to stand up. Nonetheless, there were a few clear shifts this year that I’ll highlight below:

  • Behavioral signals—especially clickthrough rate from search results—seem to be increasing in importance. Darren Shaw in particular noted Rand’s IMEC Labs research, saying “I think factors like click through rate, driving directions, and “pogo sticking” are valuable quality signals that Google has cranked up the dial on.”
  • Domain authority seems to be on its way up—particularly since the Pigeon rollout here in the U.S. Indeed, even in clear instances of post-Pigeon spam, the poor results seem to relate to Google’s inability to reliably separate “brands” from “spam” in Local. I expect Google to get better at this, and the importance of brand signals to remain high.
  • Initially, I was surprised to see authority and consistency of citations rated so highly for localized organic results. But then I thought to myself, “if Google is increasingly looking for brand signals, then why shouldn’t citations help in the organic algorithm as well?” And while the quantity of structured citations still rated highly for pack and carousel results, consistent citations from quality sources continue to carry the day across both major result types.
  • Proximity to searcher saw one of the biggest moves in this year’s survey. Google is getting better at detecting location at a more granular level—even on the desktop. The user is the new Centroid.
  • For markets where Pigeon has not rolled out yet (i.e. everywhere besides the U.S.), I’d encourage business owners and marketers to start taking as many screenshots of results for their primary keywords as possible. Knowing that Pigeon will eventually roll out in your country, the ability to compare before-and-after results for the same keywords will yield great insight into the direction of the algorithm.

As with every year, though, it’s the comments from the experts and community (that’s you, below!) that I find most interesting to read. So I think at this point I’ll sign off, crack open a GABF Gold-Medal-Winning Breakside IPA from Portland, and watch them roll in!

2014 Local Search Ranking Factors

