Guest izzismith
Posted September 8, 2020

Posted by izzismith

Google Search Console is by far the most-used tool in the SEO’s toolkit. Not only does it give us the closest view we can get of Googlebot’s behavior and perception of our domain properties (in terms of indexability, site usability, and more), but it also lets us assess the search KPIs we work so rigorously to improve. GSC is free, secure, easy to implement, and home to the purest form of your search performance KPI data.

Sounds perfect, right? However, the lack of capability for analyzing those KPIs at larger scales means we can often miss crucial points that indicate our pages’ true performance. Being limited to 1,000 rows of data per request and restricted filtering makes data refinement and growth discovery tedious (or close to impossible). SEOs love Google Search Console — it has the perfect data — but sadly, it’s not the perfect tool for interpreting that data.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture1-335577.jpg

FYI: there’s an API

In order to get as much out of GSC as possible, one option is to use the API, which increases the request size to 25,000 rows per pull. The wonderful Aleyda Solis built an actionable Google Data Studio report using the API that’s very easy to set up and configure to your needs.

You can also use something out of the box. In this post, the examples use Ryte Search Success because it makes it much easier, faster, and more efficient to work with that kind of data at scale. We use Search Success for multiple projects on a daily basis, whether we’re assisting a client with a specific topic or carrying out optimizations for our own domains. So, naturally, we come across many patterns that give a strong indication of what’s taking place on the SERPs.
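As a rough illustration of working with the API at scale, the paging needed to pull more than one 25K chunk of rows can be sketched as follows. This is a minimal sketch assuming a hypothetical fetch(start_row, row_limit) callable that wraps the real Search Analytics request (the API accepts startRow and rowLimit in the request body); fake_fetch below is a stand-in so the sketch runs without credentials.

```python
# Sketch: paging through Search Analytics rows in fixed-size chunks.
# fetch(start_row, row_limit) is assumed to wrap the real GSC API call
# (e.g. searchanalytics().query() with 'startRow' and 'rowLimit' set).

def fetch_all_rows(fetch, row_limit=25000):
    """Collect all rows by advancing startRow until a short page returns."""
    rows, start = [], 0
    while True:
        page = fetch(start, row_limit)
        rows.extend(page)
        if len(page) < row_limit:   # last page reached
            break
        start += row_limit
    return rows

# Hypothetical stand-in for a real API call, returning 60 fake rows.
def fake_fetch(start_row, row_limit):
    data = [{"query": f"kw-{i}"} for i in range(60)]
    return data[start_row:start_row + row_limit]

all_rows = fetch_all_rows(fake_fetch, row_limit=25)
print(len(all_rows))  # 60
```

With the real client library you would pass a closure around your authenticated service object as fetch; the pagination logic stays the same.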
However you use GSC search performance data, you can turn it into a masterpiece that ensures you get the most out of your search performance metrics! To help you get started, I’ll demonstrate some advanced and, frankly, exciting patterns that I’ve come across often while analyzing search performance data.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture2-702013.jpg

So, without further ado, let’s get to it.

Core Updates got you down?

When we analyze core updates, it always looks the same. Below you can see one of the clearest examples of a core update. On May 6, 2020, there is a dramatic fall in impressions and clicks, but what is really important to focus on is the steep drop in the number of ranking keywords.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture3-157597.jpg

The number of ranking keywords is an important KPI, because it helps you determine whether a site is steadily increasing its reach and content relevancy. Additionally, you can relate it to search volumes and trends over time.

Within this project, we found hundreds of cases that look exactly like the examples below: lucrative terms were climbing up pages two and three (as Google established their ranking relevance) before finally making it up to the top 10 to be tested.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture4-140299.jpg

There is a corresponding uplift in impressions, yet the click-through rate for this important keyword remained at a measly 0.2%. Out of 125K searches, the page only received 273 clicks. That’s clearly not enough for this domain to stay in the top 10, so during the Core Update rollout, Google demoted these significant underperformers.

The next example is very similar, except the keyword holds out longer on page one because it has fewer impressions. Google likely aims for statistically significant results, so the fewer impressions a keyword gets, the longer the tests need to run.
As you can see, 41 clicks out of 69K impressions shows that virtually no searcher was clicking through to the site via this commercial keyword, and thus it fell back to pages two and three.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture5-118034.jpg

This is a typical Core Update pattern that we’ve witnessed hundreds of times. It shows us that Google is clearly looking for these patterns too, in order to find what might be irrelevant for their users, and what can kiss goodbye to page one after an update.

Aim to pass those “Top 10 Tests” with flying colors

We can never know for sure when Google will roll out a Core Update, nor can we ever be fully confident about what results in a demotion. However, we should always try to rapidly detect these telltale signs and react before a Core Update has even been thought of. Make sure you have a process in place for discovering subpar CTRs, and leverage tactics like snippet copy testing and Rich Results or Featured Snippet generation, which aim to exceed Google’s CTR expectations and secure your top 10 positions.

Of course, we also witness these classic “Top 10 Tests” outside of Google’s Core Updates! This next example is from our own beloved en.ryte.com subdomain, which aims to drive leads to our services and is home to our vast online marketing wiki and magazine, so it naturally earns traffic for many informational-intent queries.

Here is the ranking performance for the keyword “bing”, a typical navigational query with tons of impressions (that’s quite a few Google users searching for Bing!). We can see the top 10 tests clearly where the light blue spikes show a corresponding uplift in impressions.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture6-148803.jpg

While that looks like a juicy amount of impressions to lure over to our site, in reality nobody was clicking through to us, because searchers want to navigate to bing.com and not to our informational wiki article.
This is a clear case of split searcher intent, where Google may surface documents for varying intents to cater to searchers outside of its main assumption. Of course, the CTR of 0% proves that this page has no value for these searchers, and we were demoted.

Interestingly enough, this position loss cost us a heck of a lot of impressions. This caused a huge drop in “visibility” and therefore made it look like we had been hit dramatically by the January Core Update. Upon closer inspection, we found that we had just lost this and similar navigational queries like “gmail”, which made the overall KPI drop seem worse than it was. Given the lack of impact this has on our engaged clicks, these are dropped rankings we certainly won’t lose sleep over.

Aiming to rank high for these high-search-volume terms with an intent you’re unable to cater to is only useful for optimizing for “visibility indexes”. Ask yourself if it’s worth your precious time to focus on these, because you’re not going to bring valuable clicks to your pages with them.

Don’t waste time chasing high volume queries that won’t benefit your business goals

In my SEO career, I’ve sometimes gone down the wrong path of spending time optimizing for juicy-looking keywords with oodles of search volume. More often than not, these rankings yielded little value in terms of traffic quality, simply because I wasn’t assessing the searcher intent properly. These days, before investing my time, I try to better interpret which of those terms will bring my business value. Will the keyword bring me any clicks? Will those clickers remain on my website to achieve something significant (i.e. is there a relevant goal in mind?), or am I chasing these rankings for the sake of a vanity metric? Always evaluate what impact a high ranking will bring your business, and adjust your strategies accordingly.
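To make the vanity-versus-value question concrete, here is a minimal, illustrative heuristic (not a GSC or Ryte feature) that buckets a query by its CTR and by how often clickers complete a relevant goal. The classify_query helper and its thresholds are assumptions for the sketch.

```python
# Illustrative heuristic: separate "vanity" queries, which earn impressions
# but few clicks or goals, from queries that bring real business value.
# Thresholds are assumptions and should be tuned to your own data.

def classify_query(impressions, clicks, goal_completions,
                   min_ctr=0.01, min_goal_rate=0.02):
    ctr = clicks / impressions if impressions else 0.0
    goal_rate = goal_completions / clicks if clicks else 0.0
    if ctr < min_ctr:
        return "vanity"          # ranks and gets seen, but rarely clicked
    if goal_rate < min_goal_rate:
        return "low-value"       # clicked, but visitors achieve nothing
    return "valuable"

print(classify_query(125000, 273, 0))    # vanity (0.2% CTR)
print(classify_query(30000, 5600, 400))  # valuable
```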
The next example is for the term “SERP”, which is highly informational and likely only searched to learn what the acronym stands for. For such a query, we wouldn’t expect an overwhelming number of clicks, yet we attempted to use better snippet copy to turn answer intent into research intent, and therefore drive more visits.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture7-110757.jpg

However, it didn’t exactly work out. We got pre-qualified on page two, then tested on page one (you can see the corresponding uplift in impressions below), but we failed to meet expectations with a poor CTR of 0.1%, and were dropped back down.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture8-127574.jpg

Again, we weren’t sobbing into our fine Bavarian beers about the loss. There are plenty more worthwhile, traffic-driving topics out there that deserve our attention.

Always be on the lookout for those CTR underperformers

Something we were glad to act on was the “meta keywords” wiki article. Before we have a moment of silence for the fact that “meta keywords” is still heavily searched for, notice how we dramatically jumped up from page four to page one at the very left side of the chart. We were unaware of this keyword’s movement, so its plain snippet was seldom clicked and we fell back down.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture9-182398.jpg

After some months, the page one ranking resurfaced, and this time we took action after coming across it in our CTR Underperformer Report. The snippet was rewritten to target the searcher’s intent, and the page was enhanced in parallel to give a better direct answer to the main focus questions. Not only did this have a positive impact on our CTR, but we even gained the Featured Snippet. It’s super important to identify these top 10 tests in time, so that you can still act and remain prominent in the top 10.
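A simple version of this kind of CTR-underperformer check can be approximated from raw GSC exports. The sketch below flags top-10 rows whose CTR falls well under a per-position baseline; the EXPECTED_CTR numbers are illustrative assumptions, since real baselines vary by vertical and intent and should be derived from your own data.

```python
# Sketch: flag queries ranking in the top 10 whose CTR falls well below an
# assumed per-position baseline. Baseline values are illustrative only.

EXPECTED_CTR = {1: 0.25, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.02}

def flag_top10_underperformers(rows, min_impressions=500):
    """Return queries in the top 10 with a CTR far below the baseline."""
    flagged = []
    for r in rows:
        pos = round(r["position"])
        if pos > 10 or r["impressions"] < min_impressions:
            continue
        ctr = r["clicks"] / r["impressions"]
        if ctr < 0.5 * EXPECTED_CTR[pos]:   # well under the baseline
            flagged.append(r["query"])
    return flagged

rows = [
    {"query": "bing", "position": 7.2, "impressions": 69000, "clicks": 41},
    {"query": "serp", "position": 9.8, "impressions": 12000, "clicks": 300},
]
print(flag_top10_underperformers(rows))  # ['bing']
```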
http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture10-242150.jpg

We identified this and many other undernourished queries using the CTR Underperformer Report. It maps out the CTRs of all queries and reports where we would have expected a higher number of clicks given each keyword’s intent, impressions, and position (much like Google’s models likely aim to do, too). We use this report extensively to identify cases where we deserve more traffic, and to ensure we stay in the top 10 or get pushed up even higher.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture11a-89812.jpg

Quantify the importance of Featured Snippets

Speaking of Featured Snippets, the diagram below demonstrates what it can look like when you’re lucky enough to hold the placement vs. when you don’t. The keyword “reset iphone” from a client’s tech blog had a CTR of 20% with the Featured Snippet, while without it the CTR was a sad 3%. Winning a relevant Featured Snippet can be game-changing due to the major impact it can have on your incoming traffic.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture11-137986.jpg

Featured Snippets can sometimes have a bad reputation, due to the risk that they could drive a lower CTR than a standard result, especially when triggered for queries with higher informational intent. Remember, though, that Featured Snippets display your brand more prominently and can be a great sign of trust to the average searcher. Even if users were satisfied on the SERP, the Featured Snippet can therefore provide worthwhile secondary benefits such as better brand awareness and potentially higher conversions via that trust factor.

Want to find some quick Featured Snippet opportunities for which you only need to repurpose existing content? Filter your GSC queries using question and comparison modifiers to find Featured-Snippet-worthy keywords you can go out and steal quickly.
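That filtering step can be sketched with a simple regular expression over exported queries; the modifier list here is illustrative, not exhaustive.

```python
import re

# Sketch: filter exported GSC queries for question and comparison modifiers
# to surface Featured-Snippet candidates. Extend the list for your market.

MODIFIERS = re.compile(
    r"\b(what|why|how|when|where|who|which|vs|versus|best|difference)\b",
    re.IGNORECASE,
)

def snippet_candidates(queries):
    return [q for q in queries if MODIFIERS.search(q)]

queries = ["what is a serp", "bing", "seo vs sea", "meta keywords"]
print(snippet_candidates(queries))  # ['what is a serp', 'seo vs sea']
```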
http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture12-63700.jpg

You’re top 10 material — now what?

Another one of our keywords, “Web Architecture”, is a great example of why it’s so crucial to keep discovering new topics as well as underperforming content. We found this specific term was struggling a while ago during ongoing topic research, and set out to apply enhancements to push its ranking up to the top 10. You can see the telltale signs of Google figuring out the purpose, quality, and relevance of this freshly renewed document as it climbs up to page one.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture13-146577.jpg

We fared well in each of our tests. For example, at positions 8-10, we managed a 5.7% CTR, which is good for such a spot. After passing that test, we got moved up to positions 4-7, where we struck a successful 13% CTR. A couple of weeks later we reached an average position of 3.2 with a tasty CTR of 18.7%, and after some time we even bagged the Featured Snippet. This took just three months from identifying the opportunity to climbing the ranks and winning the Featured Snippet.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture14-267933.jpg

Of course, it’s not just about CTR, it’s about the long click: Google’s main metric indicating that a site provides the best possible result for its search users. How many long clicks are there in comparison to medium and short clicks, and how often are you the last click, demonstrating that search intent was successfully fulfilled? We checked in Google Analytics, and out of 30K impressions, people spent an average of five minutes on this page, so it’s a great example of a positive long click.

Optimize answers, not just pages

It’s not about pages, it’s about individual pieces of information and their corresponding answers that set out to satisfy queries. In the next diagram, you can actually see Google adjusting the keywords that specific pages are ranking for.
This URL ranks for a whopping 1,548 keywords, but pulling out a couple of the significant ones for a detailed individual analysis helps us track Google’s decision-making much better.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture15-128611.jpg

When comparing these two keywords, you can see that Google promoted the stronger performer on page one, and then pushed the weaker one down. The strong difference in CTR was caused by the snippet only really being geared towards a portion of its ranking keywords, which led to Google adjusting the rankings. It’s not always about a snippet being bad, but about other snippets being better, and whether the query might deserve a better piece of information in place of the snippet.

Remember, website quality and technical SEO are still critical

One thing we always like to stress is that you shouldn’t judge your data too quickly, because there could be underlying technical errors getting you down (such as botched migrations, mixed ranking signals, blocked assets, and so on). The case below illustrates perfectly why it’s so much better to analyze this data with a tool like Ryte, because with GSC you will see only a small portion of what’s taking place, and from a very top-level view. You want to be able to compare the individual pages ranking for your keyword to reveal what’s actually at the root of the problem.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture16-113225.jpg

You’re probably quite shocked by this dramatic drop, because before the dip this was a high-performing keyword with a great CTR and a long reign in position one. This keyword was in position one with a CTR of 90%, but then the domain added a noindex directive to the page (facepalm). So, Google replaced that number one ranking URL with their subdomain homepage, which was already ranking number two.
However, the subdomain homepage wasn’t the ideal destination for the query, as searchers couldn’t find the correct information right away.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture17-75226.jpg

But it got even worse, because they then decided to 301 redirect that subdomain homepage to the top-level domain homepage, so Google was forced to rank a generic page that clearly didn’t have the correct information to satisfy that specific query. As you can see, they then fell completely from that top position, as the page was irrelevant and Google couldn’t retrieve the correct page for the job.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture18-91219.jpg

Something similar happened in this next example. The result in position one for a very juicy term with a fantastic CTR suddenly returned a 404, so Google started to rank a different page from the same domain instead, one associated with a slightly similar but inexact topic. This again wasn’t the correct fit for the query, so the overall performance declined.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture19-99525.jpg

This is why it’s so important to look not just at the overall data, but to dig deeper — especially if there are multiple pages ranking for a keyword — so that you can see exactly what’s happening.

Got spam?

The final point is not exactly a pattern to consider, but more a wise lesson to wrap up everything I’ve explored in this post. At scale, Google is testing pages in the top 10 results in order to find the best placement based on that performance. With this in mind, why can’t we ask people to go to the SERPs, click on our results, and reap the tasty benefits of that improved position? Or better yet, why don’t we automate this continually for all of our top-10-tested queries? Of course, this approach is heavily spammy, against guidelines, and something Google can easily safeguard against.
You don’t have to test this either, because Marcus (being the inquisitive SEO he is!) already did. One of his own domains on job advertisements ranks for the focus keyword “job adverts”, and as you can imagine, this is a highly competitive term that requires a lot of effort to score. It was ranking at position 6.6 with a decent CTR, but he wanted to optimize it even further and climb those SERPs to position one. He artificially cranked up his CTR using clever methods that ended up earning a “very credible” 36% CTR in position nine. Soon after, in position 10, he had a CTR of 56.6%, at which point Google caught wind of the spammy manipulation and punted him down the SERPs. Lesson learned.

http://d1avok0lzls2w.cloudfront.net/uploads/blog/picture20-215561.jpg

Of course, this was an experiment to understand at which point Google would detect spammy behavior. I wouldn’t encourage carrying out such tactics for personal gain, because it’s in the best interests of your website’s health and status to focus on the quality of your clicks. Even if such a test worked well and rankings improved, over time your visitors may not resonate with your content, and Google might recall that the lower position was in place for a reason. It’s an ongoing cycle. I encourage you to reach your results organically. Leverage the power of snippet optimization in parallel with ongoing domain and content improvements to increase not only the quantity and quality of your clicks, but the very experiences on your website that make an impact on your long-term SEO and business growth.

Conclusion

To summarize, don’t forget that GSC search performance data gives you the best insight into your website’s true performance. Rank trackers are ideal for competitor research and SERP snapshots, but their position data is only one absolute ranking from one set of variables like location and device. Use your own GSC data for intrinsic pattern analyses, diagnostics, and growth discovery.
But with great data comes great responsibility. Make sure you’re finding and understanding the patterns you need to be aware of, such as struggling top 10 tests, underperforming snippets, technical faults, and anything else that deprives you of the success you work so hard to achieve.

Ready for more?

You'll uncover even more SEO goodness from Izzi and our other MozCon speakers in the MozCon 2020 video bundle. At this year's special low price of $129, this is invaluable content you can access again and again throughout the year to inspire and ignite your SEO strategy:

- 21 full-length videos from some of the brightest minds in digital marketing
- Instant downloads and streaming to your computer, tablet, or mobile device
- Downloadable slide decks for presentations

Get my MozCon 2020 video bundle

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!