By Fahad H

3 ways to improve link equity distribution and capture missed opportunities

There’s a lot of talk about link building in the SEO community, and the process can be time-consuming and tedious. As the web demands higher and higher standards for the quality of content, link building is more difficult than ever.

However, few SEOs discuss how to better utilize what they already have. There seems to be an obsession with constantly building more links without first understanding how that equity currently flows through the website. Yes, more links may help your website rank better, but if you’re only recouping a small portion of the equity, much of that link-building effort is wasted.

For many websites, there is a big opportunity to improve upon the link equity that has already been established. The best part about all of this is that these issues can be addressed internally, as opposed to link building which typically requires third-party involvement. Here are some of my favorite ways to reclaim lost link value.

1. Redirect old URL paths

On client websites, I often see discontinued product pages that haven’t been redirected, or entire iterations of old websites where almost all of the URLs return 404 errors. Leaving these pages broken means too much unused link equity is left on the table.

Finding old URL paths and 301 redirecting them can lead to huge wins in search engine visibility. In one fell swoop, you can reactivate the value of hundreds or even thousands of links that are pointing toward your domain.

So the question becomes, how can you surface these old URLs?

There are a few different methods I use, depending on the resources I have at hand. Occasionally, I’ve had clients who just went through a migration that moved their old website to a staging site. If this is the case, you should be able to configure Screaming Frog to crawl the staging environment (you may need to ignore robots.txt and crawl nofollow links). After the crawl is complete, simply export the data to a spreadsheet and use Find/Replace to swap out the staging domain with the root domain, and you should have a comprehensive list of old URL paths.
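The Find/Replace step above can also be done in code. A minimal sketch, where "staging.example.com" and "example.com" are placeholders for the client's actual staging and live hosts:

```python
# Hypothetical sketch: swap a staging domain for the live root domain
# across a list of crawled URLs (the spreadsheet Find/Replace step,
# done in code). Both domains below are placeholders.
STAGING = "https://staging.example.com"
LIVE = "https://example.com"

def to_live_urls(staging_urls):
    """Map each staging URL to the equivalent path on the live domain."""
    return [u.replace(STAGING, LIVE, 1) for u in staging_urls]

old_paths = to_live_urls([
    "https://staging.example.com/products/widget-a",
    "https://staging.example.com/about-us",
])
print(old_paths)
```

The result is the same comprehensive list of old URL paths, ready to be checked for 404s.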

However, what if you don’t have access to any resources that list old URLs? For these situations, I use a combination of Ahrefs, Google Analytics and Google Search Console (credit to Dan Shure’s article on redirect chains, which helped me refine this process).

First, using Ahrefs, I’ll enter my domain, and then click the “Best Pages By Links” report.

From there, I export the entire report into an Excel file. It’s important that you export all of the URLs Ahrefs gives you, not just the ones it identifies as 404 errors. Ahrefs will only provide the initial status code the URL returns, which can be misleading. Often, I’ll see situations where Ahrefs identifies the status code as a 301, but the URL actually redirects to a 404.
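The misleading-redirect problem can be made concrete with a small sketch. Here the crawl responses are a hypothetical dict of url -> (status, location); in practice a crawler like Screaming Frog supplies the real responses. The point is to keep following 3xx hops until a terminal status appears:

```python
# Hedged sketch: resolve the *final* status code at the end of a
# redirect chain, rather than trusting the first response.
def final_status(url, responses, max_hops=10):
    """Follow 3xx hops until a terminal status, a loop, or the hop limit."""
    seen = set()
    for _ in range(max_hops):
        status, location = responses.get(url, (404, None))
        if status not in (301, 302, 307, 308) or location is None:
            return status          # terminal response
        if url in seen:
            return status          # redirect loop detected
        seen.add(url)
        url = location
    return status                  # hop limit reached

# The misleading case described above: a 301 whose target is a 404.
responses = {
    "https://example.com/old-page": (301, "https://example.com/new-page"),
    # "https://example.com/new-page" is absent, so it resolves to 404
}
print(final_status("https://example.com/old-page", responses))  # 404
```

A URL that Ahrefs reports as a 301 can still end up in the 404 bucket once the chain is resolved, which is why the full export matters.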

Once I have my Excel file, I run the URLs through Screaming Frog using “List Mode” and export the 404 errors it finds into a master Excel document.

Next, I go to Google Analytics and navigate to the “Landing Pages” report. I’ll typically set the date ranges for as far back as the account tracks, but this varies for each situation. I’ll export all of the data it gives me to a spreadsheet and then add the domain name in front of the relative URL path using Excel’s CONCATENATE function.
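The same path-to-URL step can be sketched outside Excel (in Excel, the equivalent would be a formula along the lines of =CONCATENATE("https://example.com", A2)). Here "example.com" is a placeholder for the client's root domain:

```python
# Hedged sketch of the CONCATENATE step: Google Analytics exports
# relative paths ("/landing-page/"), so the root domain is prepended
# to rebuild full URLs. DOMAIN is a placeholder.
DOMAIN = "https://example.com"

def to_full_urls(relative_paths):
    """Prepend the root domain to each relative landing-page path."""
    return [DOMAIN + p for p in relative_paths]

print(to_full_urls(["/landing-page/", "/old-product/"]))
```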

I once again run this list through Screaming Frog and add the 404 errors it finds to the master document.

Finally, I log in to Google Search Console, open up the “Crawl Errors” report, and navigate to the “Not Found” tab. I export these URLs and confirm that they do, in fact, return 404 status codes by using Screaming Frog. I add these 404 pages to the master document.

Search Console Errors

Now there’s one master spreadsheet that contains all of the potential broken URLs in one place. De-dupe this list, run it through Screaming Frog in “List Mode” and export the URLs that return 404 status codes.
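The de-duping step itself is simple. A sketch, with placeholder URLs, that merges the three exported lists into one master list while preserving order:

```python
# Hedged sketch of the de-dupe step: combine the Ahrefs, Analytics and
# Search Console exports into one master list, dropping duplicates
# while keeping first-seen order. The URLs below are placeholders.
def build_master_list(*url_lists):
    seen = set()
    master = []
    for urls in url_lists:
        for url in urls:
            if url not in seen:
                seen.add(url)
                master.append(url)
    return master

ahrefs = ["https://example.com/a", "https://example.com/b"]
analytics = ["https://example.com/b", "https://example.com/c"]
print(build_master_list(ahrefs, analytics))
# ['https://example.com/a', 'https://example.com/b', 'https://example.com/c']
```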

To help prioritize which URLs to redirect first, I connect Screaming Frog to the Ahrefs API, which will allow the crawler to gather the link metrics associated with each page. I sort that list by number of linking root domains and assign priority to the redirections that way.
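The prioritization step amounts to sorting by a link metric. A minimal sketch, where the root-domain counts are a hypothetical dict standing in for the metrics the Ahrefs API returns:

```python
# Hedged sketch of the prioritization step: sort broken URLs by their
# linking-root-domain counts, highest first, so the most valuable
# redirects get handled early. The metrics dict is hypothetical.
def prioritize(broken_urls, root_domains):
    """Order 404 URLs by linking root domains, descending."""
    return sorted(broken_urls, key=lambda u: root_domains.get(u, 0), reverse=True)

broken = ["https://example.com/old-a", "https://example.com/old-b"]
metrics = {"https://example.com/old-a": 4, "https://example.com/old-b": 120}
print(prioritize(broken, metrics))  # highest-linked URL first
```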

After I have the final list of 404 errors, it’s simply a matter of identifying the destination pages on the client website each URL should redirect to. To scale this effort, I often use a combination of MergeWords and the OpenList Chrome extension.

2. Analyze the .htaccess file

When evaluating how your website distributes link equity, it’s important to understand how your global redirects are working as well. This is where the .htaccess file comes into play. In this file, you can see the syntax that instructs your website how to handle redirect rules.

If I see common redirect patterns when reviewing links in a tool like Ahrefs, that’s a good sign those rules are defined in the .htaccess file.
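As an illustration of what such a global rule can look like, here is a minimal .htaccess sketch using Apache's mod_rewrite; the /old-blog/ and /blog/ paths are placeholders, not taken from any real site:

```apacheconf
# Hedged example of a global redirect rule (placeholder paths).
# This 301-redirects every URL under /old-blog/ to the same slug
# under /blog/, preserving whatever follows the prefix.
RewriteEngine On
RewriteRule ^old-blog/(.*)$ /blog/$1 [R=301,L]
```

Spotting a pattern like this in your backlink tool and tracing it back to a rule in .htaccess tells you how whole sections of legacy URLs are passing (or losing) their equity.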
