Maybe you’re looking for ways to improve your paid search account, or maybe you’re looking at your agency’s newest prospect with fresh eyes. It could be that performance seems to be off lately, or maybe you’re just looking for some new testing ideas.
Whatever your reasons, auditing even the most well-kept account is sure to stir up some opportunities for improvement. Not sure where to start? Read on for post one of a two-part series detailing the steps of a comprehensive paid search audit.
First, get the background
Get the scoop on the account. If it's your own, you'll already have the background; if you're auditing on someone else's behalf, be sure to get the full historical picture before you start.
Some questions to ask:
- Who is the target audience?
- How is performance measured, and what are the goals?
- How do they track success back to the bottom line?
- Are there secondary goals?
- Are there other channels running, and if so, how do the channels fit together?
The more information you can get, the better. Even a seemingly minor factor could impact your recommendations. For example, you don't want to recommend increasing the exposure of call extensions during times when customer service agents aren't available.
Conversion tracking, types, values and priorities
This should come out as part of the account background, but just in case it doesn’t, be sure that you understand what types of conversions are being tracked, how they are being tracked and what the priorities are. Then, make sure that each conversion is tracking as it should be.
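If the account tracks many conversion actions, a quick script can help verify that each one is still firing. Here's a minimal sketch in pandas, assuming a daily conversion report exported to CSV; the file name and column names are illustrative, not a real export schema:

```python
import pandas as pd

# Hypothetical daily export of conversions by conversion action.
# File and column names are assumptions; match them to your own report.
df = pd.read_csv("conversions_by_day.csv", parse_dates=["date"])

cutoff = df["date"].max() - pd.Timedelta(days=30)
recent = df[df["date"] > cutoff].groupby("conversion_name")["conversions"].sum()
prior = df[df["date"] <= cutoff].groupby("conversion_name")["conversions"].sum()

# An action that used to record conversions but has gone silent for the
# last 30 days is a candidate for a broken tag, not just a slow month.
health = pd.DataFrame({"prior": prior, "recent_30d": recent}).fillna(0)
print(health[(health["prior"] > 0) & (health["recent_30d"] == 0)])
```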
Not all conversions are created equal, so you’ll want to be sure that, if there are multiple conversion types being tracked, your recommendations aren’t based purely on the total sum of conversions. I can’t stress this enough, because different conversion types can make seemingly straightforward optimization suggestions more complex.
For example, maybe mobile devices have a higher CPA, so your instinct might be to pull back mobile bids. But wait: mobile could be driving more calls, while a higher volume of desktop conversions might be higher-funnel micro-conversions. In that case, the higher CPA on mobile might be worth it after all.
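To make that concrete, here's a hedged sketch of the breakdown that keeps you from acting on blended numbers. It assumes two hypothetical CSV exports (the file and column names are mine, not a fixed report format): spend by device, and conversions segmented by device and conversion type:

```python
import pandas as pd

# Hypothetical exports; all file and column names are illustrative.
spend = pd.read_csv("spend_by_device.csv")       # device, cost
conv = pd.read_csv("conv_by_device_type.csv")    # device, conversion_type, conversions

# Blended CPA per device: the number that tempts you to cut mobile bids.
blended = spend.set_index("device")["cost"] / conv.groupby("device")["conversions"].sum()
print(blended.round(2))

# The conversion mix per device tells the rest of the story. If mobile's
# volume is mostly calls while desktop's is higher-funnel micro-conversions,
# a higher mobile CPA may be worth paying.
mix = conv.pivot_table(index="device", columns="conversion_type",
                       values="conversions", aggfunc="sum", fill_value=0)
print(mix)
```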
Performance trends over time
Performance trends are important, too. If you're looking to resolve a glaring issue, start there. If nothing stands out and you're simply reviewing, it also works to circle back and analyze performance trends at the very end of your process. Absent a glaring issue, though, my preference is typically to look at trends as I work through the data, section by section.
As I go through each of the sections — ads, campaigns, ad groups, keywords, time of day, location, device and so forth — I look at what is working now and what isn’t working. That’s a given. I also look at what previously worked but isn’t working anymore. This can give insight into trouble areas that need attention. Viewing the Change History is usually a good way to follow up on these things.
Campaign structure
As you review the account, it’s good to take inventory of the campaign structure. These are some questions that are worth answering as you review campaigns:
- How many keywords are in each ad group? Are they all relevant to each other and to the ads that are within that ad group? (A quick way to count keywords per ad group is sketched after this list.)
- Are the ad groups within each campaign relevant to each other? Does it make sense for them to be grouped together?
- Are there any performance outliers, good or bad, that are altering the average of the campaign? Would it benefit the account to place them into a separate campaign where budget and settings could be controlled?
- Are any keywords or ad groups budget-capped because the campaign's average performance is poor, even though those keywords or ad groups themselves are performing well?
- Are campaigns organized in a logical way? Is there a discernible basis or structure by which the campaigns are segmented?
- Did anything come up in part of the audit that would indicate a different structure might perform better (e.g., localization, match type breakout)?
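For the first question in the list, keyword counts per ad group are easy to pull from a keyword export. A minimal sketch, assuming a CSV with `campaign`, `ad_group` and `keyword` columns (illustrative names):

```python
import pandas as pd

# Hypothetical keyword export; column names are assumptions.
kw = pd.read_csv("keywords.csv")  # campaign, ad_group, keyword

counts = kw.groupby(["campaign", "ad_group"])["keyword"].count().rename("keywords")

# Oversized ad groups are a common sign that unrelated themes have been
# lumped together and the ads can't be relevant to every keyword.
print(counts.sort_values(ascending=False).head(20))
print(counts[counts > 20])  # the threshold is a judgment call, not a rule
```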
Device performance analysis
Analyzing device performance can uncover low-hanging fruit. I like to export device performance and segment it by conversion type. I typically pivot the data by campaign, with device and conversion type nested within each campaign.
Although I do compare device performance across campaigns, I don’t base modifiers on that comparison. Device modifiers should be used only when you’re comparing devices within the same campaign. If the overall CPA for the campaign is too high in comparison to other campaigns, then there’s most likely a greater issue with conversion rate or conversion volume that needs attention.
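As a sketch of that within-campaign comparison, the snippet below pivots a hypothetical campaign/device export and sizes a mobile modifier off the gap between mobile's CPA and desktop's CPA inside the same campaign. The column names, device labels and modifier heuristic are all assumptions to adapt, not a prescribed formula:

```python
import pandas as pd

# Hypothetical export; column names and device labels are illustrative.
df = pd.read_csv("device_by_campaign.csv")  # campaign, device, cost, conversions

pivot = df.pivot_table(index="campaign", columns="device",
                       values=["cost", "conversions"], aggfunc="sum")
cpa = pivot["cost"] / pivot["conversions"]

# One common heuristic: treat the campaign's desktop CPA as the target and
# size the mobile modifier as (target / actual) - 1, within each campaign.
cpa["mobile_modifier"] = cpa["Desktop"] / cpa["Mobile"] - 1
print(cpa.round(2))
```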
Geographic performance analysis
Geography is one of my favorite topics. First, I look at the location settings within a campaign. If there are strict geographic constraints, then it almost always makes sense to ensure that you are only targeting people within that area and excluding people who are searching for it from other regions.
There can easily be issues with this setting. For example, say you are targeting Venice, Florida, but your settings allow people searching for your location to see the ad even when they aren't within your targeted area. Suddenly your ad is showing for semi-ambiguous searches meant for Venice, Italy; the CTR is horrible, and the clicks you do get are wasted spend. There are exceptions for which it makes sense to allow users outside your geographic area to search for your location, such as tourism, but usually I don't allow it. I treat the corresponding exclusion setting the same way: I almost always prefer to exclude anyone searching from, or about, my excluded locations.
In addition to the settings, I like to pull location reports to pivot the data and look for outliers. Bid modifiers can be used, in most cases, to amplify positive results and pull back on poorly performing geographies.
Sometimes there's such an outlier, though, that it makes sense to separate it into its own campaign, either to give it more budget or to keep it from driving up the CPA for the other geos while also hogging the budget.
Most often, if a geography is performing really poorly, it makes sense to exclude it, but there are times when there's enough volume that it's worth trying to make it work. Separating it out brings up the original campaign's averages and opens up budget for better-performing locations.
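A quick way to surface those outliers is to compare each location's CPA to its own campaign's average. Here's a minimal sketch against a hypothetical location-report export (column names are mine; the thresholds are arbitrary starting points):

```python
import pandas as pd

# Hypothetical location report export; column names are assumptions.
geo = pd.read_csv("locations.csv")  # campaign, location, cost, conversions

g = geo.groupby(["campaign", "location"])[["cost", "conversions"]].sum()
g = g[g["conversions"] > 0]  # zero-conversion geos need a spend threshold instead
g["cpa"] = g["cost"] / g["conversions"]

# Compare each location's CPA to its own campaign's average. Big gaps in
# either direction are the candidates for modifiers, exclusion or a breakout.
g["vs_campaign"] = g["cpa"] / g.groupby("campaign")["cpa"].transform("mean")
print(g[(g["vs_campaign"] > 1.5) | (g["vs_campaign"] < 0.67)].sort_values("vs_campaign"))
```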
Time-of-day & day-of-week performance analysis
Analyzing time-of-day and day-of-week performance can result in several different optimizations, the simplest of which is bid modifiers. Alternate options include cutting out certain time frames, expanding into additional time frames or adjusting budgets.
Note that adjusting budgets will really only be impactful in certain circumstances. For example, if you aren’t maxing out your full budget, then opening it up at certain times won’t make a difference. Pulling back budgets could still have an effect, though, if pulled back far enough.
Be sure to add in segments (extra layers of data) when analyzing time-of-day and day-of-week performance. I like to look at both hour-of-day and day-of-week in one report, for instance. In the new UI, you can edit the columns and rows to add the extra layers.
In the old UI, using the dimensions reports, you can do this by adding segments after you’ve requested to download the data.
Note: I also like to look at hour-of-day and time-of-day with conversion name as an extra layer, if multiple conversions exist. It is possible to add this as a segment in the time reports within the old UI. In the new UI, you’ll need to look at conversion reports and add the timing element as an extra column. It still functions the same way; it’s just a different path to the data.
When analyzing this data, I look for outliers. What's performing exceptionally well? How are our positions during that time? Would increasing positions gain more exposure and potentially more leads or sales? If positions are already at 1.0 or close to it, and we aren't losing impression share by rank, then increasing bids won't be impactful. But if budgets are capped, shifting budget to make more room for top-performing time frames could be.
Conversely, what isn’t performing well? Is there room to decrease positions? Should the time frame be excluded altogether?
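To make the outlier hunt concrete, here's a hedged sketch that builds the hour-by-day CPA grid described above from a hypothetical export segmented by day of week and hour of day (file and column names are illustrative):

```python
import pandas as pd

# Hypothetical export segmented by day of week and hour of day.
df = pd.read_csv("time_performance.csv")  # day_of_week, hour, cost, conversions

heat = df.pivot_table(index="hour", columns="day_of_week",
                      values=["cost", "conversions"], aggfunc="sum")
cpa = (heat["cost"] / heat["conversions"]).round(2)

# Reading the grid makes outliers obvious: cheap, high-volume windows are
# candidates for bid or budget increases; expensive dead zones are candidates
# for negative modifiers or exclusion.
days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
print(cpa.reindex(columns=[d for d in days if d in cpa.columns]))
```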
Reviewing ad tests
Are there ad tests running? If so, are there any clear winners? If not, is there something prohibiting the ad tests from having a clear winner, such as too many ads running, or not enough? Are the ads relevant to the keywords in their respective ad groups? I also like to look at previous winning variants that were paused or deleted to see if there’s any potential to resurrect ads, themes or CTAs.
Running and analyzing ad tests warrants a whole post of its own. Brad Geddes wrote the book on ad testing (no, really), so instead of reinventing the wheel, I’ll just strongly advise you to check out the ad testing guide that he put together.
Reviewing ad extensions
As with ad tests, I like to review ad extensions. First, I check to ensure that the account is leveraging all appropriate and applicable extension types. Then I review to ensure that the extensions are:
- relevant (extensions are sometimes treated like a set-it-and-forget-it element, which means you sometimes wind up with outdated promos or messaging that isn't timely).
- compelling.
- performing well.
Check back next week for the second post with more tips on auditing your paid search account.