Rethinking Technical SEO Audits

Technical audits can be one of the most inefficient SEO deliverables shared with clients. They’re inefficient because the time invested in completing them is offset by the low volume of recommendations implemented by the client. If your clients aren’t implementing your recommendations, you’re wasting your time. The majority of clients don’t have enterprise budgets, so time is everything, and there’s none to waste.

We need to rethink our approach. Breaking the technical auditing process into a production flow helps us develop a better understanding of where we can improve.

If we liken technical audits to a simple production flow in manufacturing, we have:

1. Our input: information and insight.
2. Our output: the client implementing our recommendations.
3. You: the SEO, whose job it is to add value to the input and deliver the output.

To truly deliver amazing value for our clients, we need to work smarter. In this post, I’ll be using basic production principles to present technical audits of the future.

Start with the output and work backwards

What are you really trying to achieve with a technical audit? Many SEOs would argue that their job is to deliver an all-encompassing technical audit in a neatly structured document, pass it over to the development team, and that’s it, job done.

Is that good enough?

Here are some of the common complaints I hear in SEO circles:

“The developers didn’t start to implement our recommendations until traffic started to drop”
“They’ve only implemented our low priority recommendations”
“The client has no budget”
“They’ve implemented it, but not the way we recommended”

Most of our complaints can be distilled down into two things:

  1. Recommendations aren’t being implemented. 
  2. Recommendations aren’t being implemented properly.

You don’t necessarily control either of those points, but you do have influence over them. You need to reframe your output: it’s not to recommend solutions, it’s to get those solutions implemented.

What’s your limiting step?

It’s important to understand the limiting step in delivering a technical audit. The limiting step is the most difficult, sensitive, longest or most expensive step in the process, and the one your production flow should be planned around. It will determine the overall shape of your audit.

In my view, the most critical step is the delivery of the audit itself. By this, I mean the production and presentation of a report or presentation to your client. As there’s always a cost associated with your recommendations, it’s critical the client understands and is bought into them. In addition to your client’s time, you’re also likely taking time out of the development team’s schedule to listen to what you have to say.

If it’s critical to get buy-in from key stakeholders to get your recommendations implemented, how can you maximise this step? What do you need to provide your clients to motivate them into action?

Deliver less

Deliver a few recommendations well, rather than a lot of average recommendations. Less is more.

See through your client’s eyes

Be empathetic to different stakeholder needs. A marketing manager needs something entirely different from a developer.

Executives need: 

  • No jargon
  • Impact on business objectives

Marketers need:

  • Resources needed: How much will it cost? How long will it take?
  • Impact on marketing KPIs
  • Evidence: How can you prove this is the right thing to do? 
  • Upskilling: How can you help the client maintain the solution?

Developers need:

  • Technical language
  • The why: Why should they prioritise this issue? 
  • Platform understanding: Is this even possible on the platform? Have others done it? 

C-suite executives require less information, but are ultimately responsible for the direction of the business and what it invests in. They’ll rarely be involved in the technical auditing process, but if additional budget is required for your recommendations, you’ll need to justify the impact on the business.

Marketing managers need more specific information than executives. They generally have an allocated budget or development capacity, which they’re accountable for prioritising accordingly. Your technical audit needs to persuade them to prioritise their resources into your recommendations.

Developers require the most information as they’ll be implementing your recommendations. There’s been a lot written about the conflict between SEOs and developers, but this can be managed by being thorough and sensitive to their priorities. Speak their lingo, adapt to their workflow, educate each other, and you can build a trusting relationship.

Align with business objectives

Aligning your recommendations with your clients’ objectives is crucial if you want to influence them to act. Clients don’t want broad advice or best practice; they want to make an impact on their business.

When you present a solution, ask why and then why again. Why is this the right thing to do? And if the answer to your final why is not to achieve your clients’ objectives, your recommendations are wrong for your client.

As an example, if you were to propose the development of an international sitemap, why is that a good idea?

A: To make it easier for Google to crawl and understand international alternatives of URLs.

Why?

A: To improve search visibility internationally.

Why?

A: To achieve the client’s objective to expand and grow revenue internationally.

Understand platform and resources

Underpin your recommendations with a holistic understanding of your client’s website platform and resources. If your recommendations aren’t tailored to both of these key areas, they’re less likely to be implemented. Gathering this information in a scoping or kick-off session with a client is a necessary input into the production of a technical audit. 

Cultivate high quality inputs

Now that you know your limiting step, how do you make sure the very first step, your input, makes this more effective?

In computer science, ‘garbage in, garbage out’ implies that poor or flawed input data produces poor output. While our outputs in SEO don’t operate on such strict logic, garbage in, garbage out is a useful expression to remind you to feed your system with healthy inputs. Inputs into a technical auditing process are all about information gathering across the business, systems and the website, most of which can either be nailed in the first meeting or automated.

Collecting high quality information at the beginning of an audit makes all subsequent steps easier, or even redundant. Having the right information from the client, and meaningful insight into technical opportunities, straightens your path so you don’t keep running into dead ends. This leads to stronger recommendations in less time.

Business and system inputs

Before you analyse a website, you need to understand the problem the client aims to solve, and their capabilities to solve that problem. Often I see this in reverse: the website is crawled, and issues and broad recommendations are suggested before working ‘semi-collaboratively’ with the client to discard recommendations that are irrelevant or impossible to implement. This negatively impacts your limiting step in two fundamental ways: it shows a lack of business understanding, and it wastes the most critical time you’ve got to get buy-in from your client.

Know the problem, know how the systems work, then get to work. In that order. 

You can generally get a scan of all the key business information you need in the first meeting. A good kick-off meeting should be a friendly interrogation that aims to understand and challenge your client’s objectives. I describe this in more detail in my first post on delivering better SEO strategies.

Second to the problem, knowing how your client wants to work and how their systems work is an essential input into a technical auditing process. Here are some sample questions you ought to know the answer to:

Platform

  • Can you use the CMS or does the client need to make changes? 
  • Where can and can’t you edit on the website? Which areas require development resource?
  • Are your key stakeholders responsible for all areas of the website; what do they control?
  • Are there templated pages; can changes be made at scale? 

Resources

  • Do they have an in-house development team or work with an external partner? 
  • How much resource do they have for technical changes, and what’s planned in?
  • How does their development team work, and where do they work? E.g. do they work in sprints and use Jira for issue management?
  • What’s the sign off process and who needs to be involved? 
  • Can you speak directly with the development team, or do other stakeholders want to be involved? 

Automating insight 

There are now some fantastic crawling tools in the industry which do a lot of the groundwork for you. Many include APIs, schedulers, command line interfaces, Google Search Console and Analytics integrations, and beautiful UIs, which make it easier than ever to translate information into insight.

You don’t want to do a lot of digging for information before you know where to dig.

Preliminary insight from crawling tools, or from your own proprietary tools, is enough to generate an initial hypothesis of problem areas. You can then dig deeper later to prove or disprove it.

Many SEOs use technical checklists as their starting point to validate every possible issue. I’m not against checklists, as they can be a useful reference point, but they tend to treat every problem the same, without context. Since every issue is analysed, often manually, checklists are exhaustive but incredibly time-consuming.

My preference is to collect as much information from crawling tools as possible, and then wrangle the information in Google Sheets or Excel with APIs to fill the gaps. This will give you a solid basis to then intuitively identify key problem areas.

Information to insight

Not all information from tools is useful, but it’s your job to make it so. If you ran a crawl on a website and discovered that 10,000 pages had missing page titles, is that necessarily a bad thing?

Absolute figures by themselves can be misleading. You need more information to make them meaningful.

Relative values

Scale
10,000 out of 1,000,000 pages (1%) have missing page titles.

Performance
10,000 pages with missing page titles make up 20% of all organic traffic.

Opportunity
10,000 pages with missing page titles rank for keywords making up 35% of the search volume for all keywords found in Search Console. 

Stitching together multiple data sources adds comparative clarity. In the above example, we can see that this isn’t an issue at scale but it is impacting pages contributing a good chunk of organic traffic. What’s more, there’s opportunity to capitalise on a vast amount of search volume.

Some SEO tools provide relative insight as standard, particularly on the scale of technical issues. And even using the most basic crawler, you can discover the performance impact of those issues.

For example, at a very basic level you can: 

  1. Export a list of URLs with missing page titles.
  2. Export a list of all organic landing page URLs with acquisition metrics from Google Analytics.
  3. Use Excel or Google Sheets to perform a VLOOKUP to return organic sessions per missing-page-title URL.
  4. Sum the organic sessions and divide by the total volume of organic sessions for the time period.
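The same four steps can also be sketched in code. Below is a minimal pandas version with invented sample numbers; in practice, the two tables would be your crawler export and your Google Analytics landing-page export.

```python
import pandas as pd

# Illustrative sample data - in practice these come from your crawler
# export (step 1) and a Google Analytics landing-page export (step 2).
missing_titles = pd.DataFrame({"url": ["/a", "/b"]})
landing_pages = pd.DataFrame({
    "url": ["/a", "/b", "/c", "/d"],
    "sessions": [500, 1500, 6000, 2000],
})

# Step 3: the VLOOKUP equivalent - left-join organic sessions onto each
# URL with a missing page title.
merged = missing_titles.merge(landing_pages, on="url", how="left")
merged["sessions"] = merged["sessions"].fillna(0)

# Step 4: share of all organic sessions hitting pages with missing titles.
affected_share = merged["sessions"].sum() / landing_pages["sessions"].sum()
print(f"{affected_share:.0%}")  # → 20%
```

The left join keeps every missing-title URL even if it received no organic sessions, which is exactly the behaviour you want when calculating the affected share.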

Value-adding SEOs

What’s your job as an SEO in our technical auditing machine? It’s to add value to the input and deliver the output, of course.

In this refreshed outlook on technical auditing, you need to do three things.

1. Use business information and automated insight to hypothesise problem areas.

You can map out technical problems with very little information. Using logical reasoning, you can then make pragmatic assumptions on where to prioritise.

How do you do this? Issue trees.

Issue trees, as explained in my previous article, are a way of grouping sub-problems that are distinct from each other, with no gaps in logic. Using a framework called MECE (mutually exclusive, collectively exhaustive) you can separate a problem into component parts, leaving no stone unturned.

I’ll use an example to explain this in practice.

An ecommerce client has approached you with the common problem: ‘our organic traffic has been in decline over the past 12 months’. After some brief analysis, you know this is largely a wide-scale issue across their category pages. Average rankings for important keywords have dropped by 5 positions in the last 12 months.

This is a very simple issue tree. I’ll explain each of the component parts, so you can see how I’ve tried to break the problem down.

You can’t analyse technical problems in isolation; you need to consider them in a wider context. In the above example, I could have focused on technical problems only, but it wouldn’t be MECE – you wouldn’t be exhausting all possible reasons for a drop in rankings. Your reasoning cannot have gaps, or you won’t get to the root cause of the problem.

The first and second branches of my issue tree use opposites: internal and external. If your rankings have dropped in search results, it’s either because Google has rewarded a competing page or devalued your own website. This is by no means perfect, but it’s a good starting point to make sure there’s no overlap in reasoning. It’s straightforward enough to evaluate whether a competitor has improved on merit or as an indirect consequence of a negative change on your own website.

I again use opposites on my final branches. I explore whether the drop is a result of a direct change to category pages or because of a change to other pages on the website that could influence the rankings of category pages (direct vs indirect). In practice, I would extend these branches out even further. For example, you could hypothesise that category rankings dropped because templates were edited on category pages or that meta data had changed.

2. Validate or nullify your hypotheses with data

When you’ve got your inputs right, it makes it a lot easier to falsify whole branches. In my hypothetical example, let’s imagine the client told you they had replatformed in the last 12 months, and you knew with some certainty that competitor landing pages only improved in search results where your rankings had declined.

Knowing that information, you can bet on a position and exclude branches to narrow your path. All of the paths in grey have been excluded from further analysis and those in green can be validated with data.

From building an issue tree, you can quickly prioritise areas to design your analysis around. This is far more efficient than analysing everything as with traditional technical audits. 
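The shape of this process can be sketched in code, too. The following is a minimal Python illustration, not a tool the article prescribes: node labels paraphrase the example above, and marking a branch as excluded removes all of its hypotheses from further analysis.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IssueNode:
    """One hypothesis in an issue tree; children split it into MECE parts."""
    hypothesis: str
    children: List["IssueNode"] = field(default_factory=list)
    excluded: bool = False  # branch falsified by an input - skip analysis

    def open_hypotheses(self):
        """Yield leaf hypotheses that still need validating with data."""
        if self.excluded:
            return
        if not self.children:
            yield self.hypothesis
            return
        for child in self.children:
            yield from child.open_hypotheses()

tree = IssueNode("Category rankings dropped 5 positions", [
    IssueNode("External: competitors improved on merit"),
    IssueNode("Internal: our site was devalued", [
        IssueNode("Direct: category pages changed"),
        IssueNode("Indirect: other pages changed"),
    ]),
])

# Inputs tell us competitors only gained where we declined, so the
# external branch can be excluded before any deep analysis.
tree.children[0].excluded = True
print(list(tree.open_hypotheses()))
# → ['Direct: category pages changed', 'Indirect: other pages changed']
```

Walking the tree after excluding a branch leaves only the hypotheses worth designing analysis around - the code equivalent of greying out paths on the diagram.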

Imagine your inputs already provided the following information:

  1. Faceted navigation on category pages was causing duplicate content.
  2. A quick search in the Wayback Machine showed there were changes to category templates in the last 6 months, including a change to faceted navigation.
  3. The Coverage report in Google Search Console highlighted an increase in URLs indexed but not submitted in the sitemap.

What more do you need to prove that your assumption is true?

You know there’s been a change to category templates since replatforming but you still don’t know whether it has impacted rankings. For that, you need to analyse:

  • Whether duplicate URLs have replaced canonical URLs for target search queries. 
  • Whether duplicate URLs are causing crawl inefficiencies.
  • Whether duplicate URLs have resulted in category pages having lower PageRank (estimate). 

You can then frame your data analysis around those points to prove or disprove your hypothesis. Structuring your data deep dives around a hypothesis is like taking a torch down a rabbit hole: you’re still heading into the unknown, but it’s a whole lot clearer.
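The first of those checks can be sketched as a simple script. All queries, URLs and the facet-stripping rule below are invented for illustration; a real check would use your Search Console export and your site’s actual canonicalisation logic.

```python
# Hypothetical data: the top ranking URL per target query (e.g. from a
# Search Console export), and the canonical URL each query should rank with.
ranking_urls = {
    "red dresses": "/dresses/red?sort=price",   # faceted duplicate ranking
    "blue dresses": "/dresses/blue",
}
target_canonicals = {
    "red dresses": "/dresses/red",
    "blue dresses": "/dresses/blue",
}

def canonical(url: str) -> str:
    """Naively strip facet parameters - a stand-in for real canonical logic."""
    return url.split("?")[0]

# Queries where a duplicate URL has replaced the canonical category page.
replaced = [
    query for query, url in ranking_urls.items()
    if url != target_canonicals[query]
    and canonical(url) == target_canonicals[query]
]
print(replaced)  # → ['red dresses']
```

Even a rough check like this turns a vague worry (“duplicates might be ranking”) into a concrete list of affected queries to investigate.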

3. Tell the story behind your data

Facts tell, but stories sell. Building a narrative around your data will inspire your clients to act, rather than just absorb information.

Information is only remembered when it’s delivered in the right way. You need to tailor your presentation for your audience and be empathetic to their needs. Ultimately, it’s your client that determines the format of delivery. Don’t deliver something in a report because it’s how you’ve always done it. If you’ve taken the time to understand your stakeholders, take the time to deliver it in a way that motivates them. 

To avoid the common SEO cliché of ‘it depends’, here are some general things I’d recommend when delivering technical audits.

Use their language 
Using client acronyms, buzzwords, and any product-specific language in your presentation goes a long way. Not only do clients appreciate it because it shows an understanding of their business, but they’re also more comfortable sharing your audit further up the hierarchy.

Divide your delivery
Sometimes it makes sense to deliver the same information in a different way. Marketing managers don’t need to know the intricacies of your technical solutions, but developers do. Consider presenting top-level findings in a slide deck to marketing managers and use project/task management systems to detail issues and solutions with developers.

Complement your narrative with visuals
One of my favourite slides, shown below, is a perfect example of a diagram complementing a headline.

Source: https://strategyu.co/mckinsey-structured-problem-solving-secrets/

Data visualisation and diagrams are memorable. But without context, they’re just noise.

Impactful headlines pair with visuals like wine and cheese. They help to build a picture in your client’s mind and get your point across.

Focus on writing
The purpose of a technical SEO audit is to persuade your client to implement your recommendations. Yet so many SEOs neglect persuasive writing.

I was recently reading about how Amazon has developed a writing culture. Rather than delivering presentations for new ideas, Jeff Bezos makes his team write 2 to 6 page memos. Amazon also dedicates time to training new hires to become better writers. They induct executives into the key principles highlighted in the image below:

Source: https://learnings.substack.com/p/creating-a-writing-culture

There are two lessons to be learned from Amazon. Instil your own writing principles to deliver more persuasive audits and encourage writing generally. Getting your ideas down on paper will help to structure the delivery of your audits and simplify your message.

We need to rethink technical SEO audits. My aim with this article is to inspire you to think differently. Think differently about the inputs you need. Think differently about what your role is. And think differently about what the outputs of your audits are. You need to take the technical out of technical SEO audits and consider them in a wider context. Learn from principles discerned outside of the practice of SEO and deliver amazing value for your clients.

Please let me know your thoughts in the comments below. And if you want to hear more from me, you can subscribe to my newsletter at theweeklyseo.com where I curate my favourite SEO articles, and a short piece of insight every week. 
