
How to Rank SEO Content in the Era of Generative AI by Bernard Huang of Clearscope

Bernard Huang


Join our weekly live webinars with the marketing industry’s best and brightest. It’s 100% free. Sign up and attend our next webinar.

Bernard (co-founder of Clearscope) joined the webinar to share how to rank SEO content in the era of AI.

Bernard covered:

  • Evolution of Google’s main content algo updates

  • What is “Helpful Content”

  • What is “Information Gain” and SERP similarity

  • How AI and Large Language Models (LLMs) work

  • The role of Manual Search Evaluators

  • What does “Experience” look like?


Watch the full webinar

Check out Bernard's slide deck here.

Listen to the webinar as audio only on your favorite podcast platform:

About Bernard Huang:

Bernard is the co-founder of Clearscope, the leading SEO optimization and monitoring software for high-quality content teams. Before Clearscope, Bernard started an SEO consulting agency, was a growth advisor in residence at 500 Startups, and led growth at a YC startup called 42Floors.

Read the transcript

Bernard: Thanks so much for the wonderful intro, Travis. It's been a long time since I've been on a Clearscope webinar, so I'm excited. There's a lot to unpack and share as it relates to all the developments going on in AI search, generative experiences, and GPT. You know, this has been the flavor of the year in terms of AI and its implications on search engine optimization.

I think it's very fascinating to see the industry evolve, and there's a variety of things that I think we need to be careful of as we head into this AI-assisted world that we find ourselves in. So, as Travis mentioned, I wanna make this as casual and back-and-forth as possible. So if you have questions or thoughts, feel free to drop them in the Q&A section.

That'd probably be preferable, but if the chat works better for you, just drop it wherever, and we'll get around to it as they come up. So, how do you rank SEO content in the era of AI, and what do you need to be paying attention to? As Travis mentioned, I am one of the co-founders of Clearscope. We help thousands of high-growth, name-brand companies with their search strategies.

Today we're gonna be talking about a variety of different things. Most importantly, what Google's been up to, and this whole idea of the helpful content algorithm update, which we're actually about to see a version two of released sometime this year. Then we're gonna link that to a very popular idea that's been coming up over the last year, around information gain, and then we're gonna link that to what artificial intelligence and large language models are doing when they're thinking about content creation.

We're gonna touch on the role of manual search evaluators, because I think they don't get talked about enough in terms of how they're actually influencing Google's algorithm. And then: what does the extra E, or experience, look like in the E-E-A-T that Google is prescribing, which us content creators should think more about?

So what has Google been up to? Long story short, Google's mission has always been to make the world's information on the internet universally accessible and useful. That has not changed since the start of Google, and it's truer now than ever in today's day and age, with the algorithm that we're all playing against.

You can see here just a quick view of all of the content-focused Google algorithm updates over the past decade or so, and you can very much see that, starting in 2021, you're getting a lot more content-specific updates. Perhaps there are also things more around off-page or technical, but Google just kind of bundles those up as the core updates, you know, the March, May, whenever-they-come-out ones.

They're bucketing those, and then calling out more specifically the reviews update and the helpful content update. And so I think that's interesting to just pay attention to from an "okay, where is Google spending a lot of its efforts in terms of improving and enhancing the algorithm that we are trying to rank on?" perspective.

So what is helpful content? I think there's a variety of ways that other SEO influencers and other people have taken a look at this, but I want to take a look at it in more granular detail. I think I saw Kevin Indig, or maybe Eli Schwartz, kind of go through this press release bit and answer each one of the questions as it relates to how we should then change how we approach SEO.

So I figured that would be a good take on what we should be doing to react to Google's helpful content update, especially because there's gonna be a new one coming out sometime this year. So this is the press release, the formal thing that comes out on Google's Webmaster Central, and you can see here, right, we can go through each one of the questions that they put in that article.

Starting with number one: do you have an existing or intended audience for your business or site that would find the content useful if they came directly to you? What I'm interpreting that to mean is that direct and branded search traffic matters. Now, if you've been in SEO for a while, I think this notion has always been floating around, that direct and branded search traffic matters.

For a brand, that means Googling Clearscope and then clicking on the Clearscope homepage would be the way to signal to Google that people care about your brand and your brand has legitimacy. Looking at this, right, I think we can then conclude, okay, direct and branded search traffic matters, and that's also why you see, typically speaking, winners in search continue to win.

People know and love and care about Wirecutter or NerdWallet or Healthline, and they're specifically looking for those. And the presence of those results in certain types of query classes will then make the user want to go back to Google and perhaps type "best credit cards NerdWallet."

Thus making it, you know, a navigational search against the brand that they actually care about. Number two: does your content clearly demonstrate firsthand experience or expertise and a depth of knowledge? For example, expertise that comes from actually having used a product or service, or visiting a place.

Really, you know, this is then just saying, okay, the extra E that Google has put into E-A-T matters. And it matters more specifically, I think, through the lens of manual search evaluators. That's where the term E-A-T originally came from, when the PDF of search quality criteria was leaked or, you know, put on the internet for everybody to see.

So we'll talk about this more in terms of the role that manual search evaluators play and how that's different from the algorithms that we've all been playing against over the last 10 years. This one, right: does your site have a primary purpose or focus? The way that I interpret this is that, yes, topical authority, or related subtopic rankings, does in fact influence your content's ability to rank.

If you're not familiar with topical authority, it is just this concept that says: should you write a piece of content on, say, bone broth health benefits (I guess the screenshot started with wine health benefits) and have that do well, then chances are you can probably write well on the related subtopics of drink health benefits. So everything tends to do better if you have a focus or authority on specific topics.

The next one: after reading your content, will someone leave feeling they've learned enough about a topic to help achieve their goal? I think this refers to knowledge graph entity coverage. Does your content actually cover the subject matter comprehensively? This is why Clearscope and content optimization tools work and exist.

We give you insights into all of the related entities that somebody is likely to care about, and by arming you with that knowledge, you can create comprehensive, high-quality content that has a higher likelihood of giving the user what they want. This one is also interesting: will someone reading your content leave feeling like they've had a satisfying experience? This commonly refers to the idea that search engine results pages should conclude search journeys, right?

I've been talking about this for as long as I can remember now, but if we performed a search for bone broth, then we're seeing rank one from "how to make bone broth" or "bone broth recipe" graduate onto the front page, and rank one from "bone broth health benefits" graduate onto the front page for bone broth.

Because essentially, right, these are the intents that users care about, and Google realizes that when they're sending users with these types of intents, very often users are not coming back to Google to perform a subsequent search or a subsequent action. The last one, I think, is pretty funny: are you keeping in mind our guidance for core updates and for product reviews?

And I think that's just Google saying, pay more attention to what we're saying, and, you know, you'll be blessed with more search results. Okay. So all of that's to say: how does this relate to the actual impact of what we've seen from the first iteration, or version, of the helpful content update that was released at the end of last year?

And you can see here that, should you have studied the impact, for the most part you would've probably walked away and said, you know what, I didn't think anything really happened. And for the most part, nothing really did happen, like you can see here. The big bang never happened, except for a variety of different classes of website categories, like ringtones, coding, and lyrics pages.

Grammar pages were also in there, and this was very interesting for a variety of different reasons. Before we dive in and unpack what's going on here, we must introduce a new concept, or maybe you've heard of it, called information gain. All right, so what is information gain?

Information gain is fairly simple-ish, and the way that I've come to describe it is: concepts and entities on the fringe of Google's knowledge graph, given the topic that is being discussed. If we take SEO as an example, you can imagine, right, one distance away from SEO as a topic, you would have quality content, technical SEO, and, you know, maybe backlinks, whatever, right?

And then one distance away from quality content, you'll have Clearscope, topic clusters, topical authority; from technical SEO, you'll have internal linking, link outreach, domain authority. And let's just pretend for a second that this emerging subtopic that's in red is artificial intelligence and large language models, right?

So you can imagine SEO as a topic, and for the most part, topics will change or evolve over time. As new, interesting developments happen to any given topic, the topic is going to be introduced to new relevant entities that are emerging as close associations to the subtopics and the main topic.

So back to this example: information gain within search engine optimization would likely be around artificial intelligence. Within artificial intelligence, we would be talking about large language models like GPT. We would be talking about, you know, AI inaccuracies and hallucinations.

We'd also be talking about productivity increases, commoditized content, and all of that stuff. And so, basically, information gain in relation to SEO, since we're all here, would look like that, and we would need to recognize that that's where this particular topic could be heading and introduce some more interesting entities there.
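To make that concrete, here's a rough sketch of scoring candidate entities by how far they sit from a seed topic, with an off-the-shelf embedding model standing in for Google's knowledge graph. The model choice, entity list, and band thresholds are illustrative assumptions, not anything Google publishes:

```python
# Rough illustration of "fringe" entities: related enough to be relevant,
# distant enough to add something new. Thresholds are invented for the demo.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # generic embedding model

seed = "search engine optimization"
candidates = [
    "keyword research",       # established, close to the core topic
    "backlinks",              # established
    "large language models",  # emerging, on the fringe
    "Oppenheimer",            # irrelevant, too far from the topic
]

seed_vec = model.encode([seed])
cand_vecs = model.encode(candidates)
sims = cosine_similarity(seed_vec, cand_vecs)[0]

for entity, sim in sorted(zip(candidates, sims), key=lambda p: -p[1]):
    band = "core" if sim > 0.5 else ("fringe" if sim > 0.2 else "off-topic")
    print(f"{entity:25s} similarity={sim:.2f} -> {band}")
```

The interesting candidates for information gain live in that middle band: close enough to the topic to be relevant, far enough out that the current SERP probably hasn't covered them.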

The reason why this is important is because certain topics will have a high amount of agreement. I'm calling that SERP similarity, where we all more or less agree that X is X or Y is Y. With search engine optimization, back to that example, we would all say, yeah, to do good SEO, you need keyword research, backlinks, technical SEO, quality content.

And here's where you want to think about information gain: how can you bring something new and interesting to the table around a particular topic that you're writing about, but in a relevant way? Google doesn't want you to, say, start talking about Barbie the movie or Oppenheimer as it relates to SEO, just for the sake of adding new entities to that particular topic.

Because Barbie and Oppenheimer are gonna be way too far away from SEO. As it relates to what's going on now, of course, GPT and LLMs are gonna be a lot closer. So you beat high-similarity SERPs by talking about the fringes of where you see the topic heading, and thereby contributing meaningful and interesting entities to Google's knowledge graph that then have a chance of feeding what's currently working.

The highest-confidence similarity breeds a featured snippet. And I also think that, in its current form, Search Generative Experience, or SGE, is simply just a better featured snippet, and these will probably go together unless Google makes some pretty significant changes to how SGE is supposed to work.
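One way to picture SERP similarity as a number: take the term or entity sets of the top-ranking pages and measure how much they overlap. This is a toy version; real tools use entity extraction and embeddings, and the page data here is hypothetical:

```python
# Toy "SERP similarity": mean pairwise Jaccard overlap of the term sets
# extracted from top-ranking pages. High overlap = a SERP that "agrees."
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

# Hypothetical term sets from three top-ranking articles on SEO.
serp_pages = {
    "page_a": {"keyword research", "backlinks", "technical seo", "quality content"},
    "page_b": {"keyword research", "backlinks", "technical seo", "site speed"},
    "page_c": {"keyword research", "backlinks", "quality content", "llms"},
}

scores = [jaccard(serp_pages[x], serp_pages[y])
          for x, y in combinations(serp_pages, 2)]
print(f"mean pairwise similarity: {sum(scores) / len(scores):.2f}")
```

When that score is high, another near-duplicate article has nothing to offer; the opening is new, relevant entities, which is the information gain play.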

Now, you can see here, back to the impact of the helpful content update, I have this nice tweet from Lily Ray. She says, hey, here are five sites that appear to be affected by the helpful content update. I don't know why it's all lyric and grammar websites, but what I've noticed is that the content is virtually the same.

And then there's this other potential insight on ads, affiliate links, worse user experience. So on the surface, I think a lot of SEO influencers kind of walked away from the impact of the helpful content update saying, oh, well, this is nothing more than Panda, in that Panda checked for duplicative content that was copy-pasted in all kinds of different places and tried to find the canonical or original piece of content to attribute that copy-pasted content to.

So on the surface, it looked very similar, in that the websites that were the most affected by the helpful content update were virtually the same. Now, I have a different hypothesis as to what is going on with the helpful content update, why there is going to be a version two and then a version three and then a version four, and why you need to pay a lot of attention to what's going on here.

So you can imagine, right, let's go with the lyric website example. Probably a lot of people lately are searching for "Barbie Girl," because the Barbie movie has just come out, and rank one, rank two, and rank three all more or less have the same content. But perhaps more importantly, there are no new entities being surfaced by a lot of the lyric websites.

Now, that's not entirely true, because you have UGC-heavy websites like genius.com, who have a lot of interesting comments and assessments of what certain lyrics are supposed to mean or whatnot. But you can imagine that, for the most part, a lot of lyric websites did not make an effort to actually have any interesting entities and concepts.

And for the most part, that made a lot of sense, right? Lyrics are lyrics, you know, grammar mistakes are grammar mistakes, and a lot of these things just simply looked like one another. Again, I think the more important part of all of this is that none of them added to the knowledge graph of relevant entities that Google is looking for as it relates to helpful content.

So you can imagine this is one of the lyric websites, and they basically shot up and then got completely crushed by the helpful content update. So then, right, how does this actually relate to AI content? Well, you can see here the other synopsis of what Google's press release on helpful content had to offer, except from the negative perspective, right: Is your content primarily made to attract people from search engines rather than made for humans? Are you producing lots of content on different topics? Are you using extensive automation? Are you mainly summarizing what others have to say without adding much value?

You can imagine what that creates: content that is not inherently adding to the knowledge graph of what Google already understands about the topic being written about. Barbie Girl lyrics look like Barbie Girl lyrics, and there are no new, interesting developments of where that particular content has to go.

And you can see here, "you are using extensive automation to produce content on many topics," right? That's kind of like a, ooh, maybe you don't wanna use AI. Except you can see here, Google says, okay, well, you can use AI; we reward high-quality content, however it is produced. My opinion on this is that it is extraordinarily difficult for Google to actually know what was purely AI-generated, what was AI- and human-generated together, and all of that.

So they're just gonna say, whatever, we're not gonna try too hard to suss that out. Instead, we're gonna look at whether or not your content is helpful, in the sense of adding to the entities and knowledge graph that we expect the topic is likely to be heading towards.

Alright, so let's look at how AI and large language models work, again from a basic, high-level overview perspective. How do AI and large language models work? Well, they get trained on a bunch of data, and currently, I think, the way that a lot of AI and LLMs are being trained is actually through scraping a lot of Google-specific content for the given topics that we want to train the models on.

That's not entirely true, right? There are specialized LLMs coming up that are solely being trained on, say, research papers in X or market reports in Y. But a lot of the more generic, high-level ones are scraping Google search results and content for the given topics, and you can see where they're constructing their own internal knowledge graphs of related topics and entities.

Now, current AI models are then using the scraped content and the understanding of entity relationships, and what they're producing are outputs that are more or less like a sophisticated autocomplete, which you know and are aware of from Google. So you can see here this particular tool that I've been using; this is, I think, a GPT-2 text analyzer.

Obviously, GPT-4 is very much more advanced than GPT-2, but you can imagine this is more or less how it comes together, right? I typed "the first president of the United States" and covered over "States," and you can see here that "States" had about an 80% probability of following "the United."

And that makes sense. "Nations" has about a 9% chance of following "United," and that makes sense. "Arab" had... that one didn't make as much sense to me at first. Oh, United Arab Emirates. Yeah. And then you have "Kingdom," and then you have "Airlines," and, right, so that makes sense. These are the most likely words that follow words.

And then you can imagine there are gonna be more likely sentences that follow sentences. So this is how it looks, right? AI models will have different probabilities of outputs based on how the embeddings or training data were fed into the language model, and with some amount of confidence they're gonna say, you know, these are the more likely outputs, and then they'll generate what they think is the most likely output in a structured, confident way. Because language models, generally speaking, out of the box want to reduce inaccuracies. Unless you specifically specify otherwise (in language-model speak, this would be temperature): if you specify a very volatile temperature, then yeah, they might actually try something that's closer to, like, a 70% probability output.

I don't know the exact nuances, but that's why temperature and variance come into play. If you wanted something that you know to be exactly accurate and exactly true, then you would reduce the variance associated with the model to get the highest-likelihood output. Okay. So you give it "the quick brown fox jumps over," and then the language model says, okay, it's very likely to be "the lazy dog."

For those of you who are unaware, that's the classic pangram-type sentence, the one that includes an instance of, like, every letter of the alphabet. So that then brings us to the point of what large language model content creation is good for and what it is bad for.

And, of course, what do you need to do in this AI-and-human world that we are now coexisting in? So LLM content creation has been amazing for short-form content. As you can see here as an example, if we have five sentences where the likelihood of accuracy is 99% each, then over a paragraph of five sentences, you would have an output that is about 95% correct.

So small inconsistencies in short writing produce small discrepancies. And this is also why you saw GPT and LLMs shine in relation to generating ad copy: when generating small amounts of text, this is great. People loved it. You saw great results, and people went, holy crap, this thing is great.

Where you start to see LLM content creation falter is in a variety of different places, and this is just the surface-level analysis. On the left-hand side: if the language model does not know much about the topic, it is drawing from a smaller pool of data, and chances are it is going to be less accurate.

And if we reduce the accuracy of the language model sentence by sentence by, like, 10%, so it's at 90%, you can imagine that over the course of a paragraph, you're only gonna get something that's about 60% accurate. Well, obviously, that's not good, and that's why people say language models have problems with hallucinations and accuracy. Because, right, you go down enough in this generation-type model, and you end up with weird things here and there that kind of don't make any sense.

On the right-hand side, you have the other reason why language models are difficult, and that's really long-form content. If we take the example from number one, where we got a 95% correct paragraph, and multiply that over five paragraphs, now we have an article that's only about 77% correct.
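The arithmetic behind those numbers is just compounding, using the talk's own illustrative per-sentence accuracies:

```python
# Per-sentence (or per-paragraph) accuracy compounds multiplicatively.
def doc_accuracy(per_unit: float, units: int) -> float:
    return per_unit ** units

print(f"{doc_accuracy(0.99, 5):.0%}")  # ~95%: five 99%-accurate sentences
print(f"{doc_accuracy(0.90, 5):.0%}")  # ~59%: thin training data hurts fast
print(f"{doc_accuracy(0.95, 5):.0%}")  # ~77%: five 95%-accurate paragraphs
```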

So you can imagine, right, language models currently struggle with producing long-form content, because the longer the content gets, the higher the likelihood of inaccuracies, hallucinations, and all of that stuff. So let's go back to LLM content creation in the context of search engine optimization. I've taken this knowledge graph idea and split it into two distinct sections.

On the left, you have what the language model is still unsure of in relation to search engine optimization, right? If we still subscribe to the idea that, with zero plugins, GPT is trained on data through September 2021 or something like that, then yeah, the language model is not gonna know that SEO is actually very concerned with language models and GPT and AI, because it wouldn't know that.

And even if it did, there wouldn't be that much training data informing it that language models and SEO and AI all go together. Now, on the right, you have things where the confidence has been established, right? But the problem is that if the confidence has already been established, you can imagine that Google has also already established that confidence.

So robotic content, as much as it's cool and interesting and actually surprisingly good, is not bringing anything new to the knowledge graph that Google is expecting. You can see where I'm going with this, linking together the helpful content update and what's going on with the lyric websites and all of that stuff.

So again, right, the problem with using commoditized AI content generation is that you're gonna run into the same reason why certain lyric websites got completely wiped from Google: you're not bringing any new entities to the table. And if Google's just gonna look at your content in relation to what the current top-ranking content is doing, and you've produced content using AI, well, it's not gonna add anything new, and chances are it's not gonna do well in search. So that means that, should you want to use AI content meaningfully within your content workflows, you want human-directed language model content creation, right?

Because simply prompting the artificial intelligence for content on a particular topic will lead to commodity content that is high confidence, which looks good on the surface, but low information gain, which means it's actually not bringing any new entities to the table. If you want to go down this path, what we've been recommending to our customers is to really think about it as a human-assisted, back-and-forth content creation process with the AI model, right?

So you want to tell the model specifically that certain entities or emerging concepts are things that it should care about and should write about. And if you don't do that, and you stick to a more generic "write me a piece of content on topic X," you're gonna get a lot of commoditized, non-information-gain content out.
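As a sketch of the difference, here's a generic prompt next to a human-directed one that injects the fringe entities you want covered. The entity list and wording are hypothetical; the point is that the human supplies the information gain:

```python
# Generic prompting yields commodity output; directed prompting forces
# the entities (and the human's experience) into the draft.
topic = "how to rank SEO content"

generic_prompt = f"Write an article about {topic}."

fringe_entities = [
    "information gain",
    "large language models and commodity content",
    "E-E-A-T and first-hand experience",
]
directed_prompt = (
    f"Write an article about {topic}.\n"
    "Ground it in these emerging concepts, with a concrete take on each:\n"
    + "\n".join(f"- {e}" for e in fringe_entities)
    + "\nWhere you lack first-hand detail, leave a [WRITER: add experience] "
      "marker instead of inventing one."
)

print(directed_prompt)
```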

So that's my take on AI content: if you want to use it, you actually have to go back and forth. It's more of an assisted model where you're still providing interesting insights to the model, to then make sure it's covering those in a relevant and interesting manner. That brings us to the final piece of this algorithm problem that we're running into: manual search evaluators.

So what do manual search evaluators do? They essentially review pages that they get fed on some screen, and they're determining whether or not the page looks high quality and whether or not, should a searcher land on the page, their needs would be met. The way I picture it, it's probably like a Tinder-like experience, where there are, I don't know how many, but I think thousands of manual search evaluators who are probably given prompts like website A or website B, and they're like, all right, yeah, swipe right, that one looks good; swipe left, that one doesn't look good.

So manual search evaluators are training Google's algorithm on the subjective or qualitative criteria that, again, the normal algorithm cannot, at least currently, comprehend, right? It just knows, okay, there's this amount of backlinks pointing at the page, page load speeds are this many milliseconds, blah, blah, blah.

But when you add the manual search evaluators to the equation, you get this subjective learning that the algorithm starts to take on. And, of course, that's going to change as Google updates their search evaluator guidelines. E-A-T becoming E-E-A-T is a huge leap forward in saying that manual search evaluators are now going to be looking for different subjective criteria that, again, are gonna change over time.

You can see here, just as an example: you remember hero images? I mean, some of you probably still use a good amount of them, but my guess here is that, from some period up until around 2018, manual search evaluators were training Google's algorithm to say that the presence of a hero image is good.

So you saw a lot of people adopt hero images as best practice. What we then saw happen is, you know, the inclusion of statistics and citations and stuff, right? Those were good. And chances are, manual search evaluators were trained to say that content results that did not provide those citations or statistics likely are not looking that good.

Now, you can imagine what we're looking at from a search world 2023 and beyond. Obviously, putting experience within that bucket is what a lot of manual search evaluators are going to be trained to look for. But along with that, you can imagine the idea of authorship has come up quite a bit.

I think that's actually a manual search evaluator concern, where they're gonna take a look at some of these pages, like what you see happening on sites like Healthline, and say, yeah, this person looks like a subject matter expert. So the presence of these little author bubbles, these bylines, and these other pages that supplement why Amanda or Amy is qualified to write about this, is good.

And then you're gonna see a lot of people adopt that, if a lot of people haven't already. So manual search evaluators actually provide, in my opinion, the X factor in Google's algorithm: let's train the algorithm on the qualitative, subjective things that the algorithm's not gonna know on its own.

And the only way it's gonna know is when humans come in and say, yeah, that one looks good. And then Google's model is gonna kind of nebulously know that, okay, when it looks like this, or when there are these little bubbles and author things, chances are that's gonna be pretty good.

Alright, well, I saw that a question came up, so I figured I'd answer that on the spot, and then we can dive into what experience looks like from a manual search evaluator perspective. Let's see here. All right, so Vera asks: are there any best practices that a writer can follow to increase the chance that their copy will land in front of a manual search evaluator?

No, I don't think so. I think manual search evaluation just happens on the backend. Chances are, if your content is ranking on page one, page two, or page three, I bet it's going through the queue of these manual search evaluators swiping right or swiping left. So the best way I can answer that question is: they're not gonna care if your content is not even in the top three pages of Google.

If it is, then that's where Google wants to be really refining the overall experience and engagement associated with the content results on page one, page two, and page three. Cool. And then, what does experience look like? I've talked about this at length, so I'm just gonna give a very short bit to make it all come together.

Your traditional topic cluster is very templated, very cookie-cutter, and, I'm not gonna say uninformative, but just very flat in terms of the who, what, when, where, why. And we'll throw in the "best" because that's where we make money. So that's a traditional topic cluster. But the emerging experience viewpoints around things are more like: are we sure this is good?

Do we know this is good? I tried it, and here's what happened. You know, I trust so-and-so, and they say it's good. These are actually the more subtle nuances of what people are expecting from search today. Sure, 10 years ago, you probably wanted to know how to do SEO or what SEO is.

But now, you don't necessarily care as much about those. I mean, sure, you do, but maybe you care more about what somebody you trust and respect has to say about SEO, and what their perspective or experience on the subject is. So as Google has increased its computing power, it has been able to rely more on user engagement signals, which is, again: when presented with a search engine results page, are users satisfied? And they're gauging that, I think, by looking at how many subsequent actions or searches are performed when a legitimate user gets a search engine results page for the topic they searched.

So I present to you at least some ideas on how you can think about experience in a way that, while manual search evaluators are likely looking for it, is, more importantly, what we as humans want to click on and engage with. So right here, right, if we Google "start a blog," there are things that people wish they knew before starting a blog.

And so if you're gonna consider doing something, like how to start a blog, how to train for a marathon, or challenging things like adopting a keto diet, then, right, what people wanna know is: things to consider before training for a marathon, how to train for the marathon.

Common mistakes while training for a marathon, and perhaps reasons not to train for a marathon, right? These are all within the consideration points of what somebody would care about, should they want to do something. So I kind of wrote it out here, but "how to do X" is best for the consideration viewpoint, and you can create any variety of these different article types, and they will obviously be interesting and useful to the user at the different stages in the journey they're thinking about.

Also notice, right, "I wish I knew these common injury mistakes for your first triathlon," presumably based on my own experience. So these things, right, they come out not reading as formulaic as the old SEO guard used to. The old guard would just say, like, "how to start a blog [2023 updated guide]," you know, that's a very classical SEO thing.

And now we're seeing the emergence of these more subjective, not-so-SEO-looking things. And I think the reasoning behind that is, again, Google's insistence on experience, and manual search evaluators training the algorithm to look at things a little bit more qualitatively, and then you start to see this sort of thing happen.

Just another idea or perspective would be experience, right? And that's essentially what Google is saying. I just pulled these from some of my older slides, from when I was saying that experience was gonna be a very core perspective that people cared about. So, right: the topic, a review of the topic, and then, "I actually tried it and here's what happened."

I think that the product reviews update, the reviews update, you know, these are all trying to focus on the fact that a lot of people in the affiliate world, for better or worse, have not actually tried the product. And so, can you breed more trustworthiness in the different consumer packaged goods or reviews or travel content by actually showing the user: hey, I tried it, I'm legit.

And so you kind of see these types of content pieces working. So, search perspective: contrarian, right? Just running against the normal grain. And this is generally good for trendy new topics and questionable, bold claims, like we have here: "learn to code" and "please don't learn to code."

So early on during the AI hype train, which I think is actually dying down a bit, we had this piece of content that Travis published called something like "Should you use AI? Why you should and shouldn't." And it did spectacularly, because, right, we were saying here are the pros and cons, here are the things to consider.

And right, we weren't, I guess, at that point taking a completely contrarian stance, but we were arming people with things that they should know about in relation to this. So right now, I do feel like a piece of content that's something along the lines of "AI for SEO is dead" or "AI for SEO won't work" is likely to do well, if I'm able to go into it and back it up by saying that, yeah, a commodity prompt is gonna generate commodity data out of the large language model.

In that very basic sense, yeah, there might be a short window where that type of content is likely to perform, just because it's newer, and there are a lot of gaps within queries where the commodity data that comes out actually is just better than what Google's understanding of that topic looks like. But we're gonna see that hole plugged, I would say, very quickly in the next year or two. And then, you know, the entity and information gain idea is gonna really be the more important thing to consider.

So, just some frameworks for you to think through to add experience viewpoints into your pieces of content, right? You have stages of consideration, you have contrarian viewpoints, you have expert viewpoints, experience viewpoints, discussion viewpoints, current events, and, you know, predicting the future.

And all of these have to do with your own subject matter expertise and experience on the topic that you are writing about or getting content written for. So, just to sum it all up: I've been saying this for a while, but the Brian Dean skyscraper approach, where you cram as much stuff as possible into one piece of content, call it the ultimate guide to X, and then have it rank for everything, is just not the model that I see working in a compelling way moving forward.

And most of this has to do with the fact that you currently only get one title tag and one above-the-fold experience, and Google is very interested in user engagement signals. So when they send people to an article or piece of content, they want to know: did the user find what they needed? Now, you could say, well, Bernard, if they're going to my skyscraper piece of content, wouldn't they find what they needed?

And chances are, you know, they probably could. But a lot of topic clusters in a skyscraper-type model are really just too long and unscannable, and so a lot of people are gonna look at that and be like, oh, all right, well, I'm not gonna read that. Maybe they do a Control or Command-F, find what they're actually looking for, and then leave.

But you can imagine that's gonna look way different than if, when somebody's looking for the benefits of bone broth in this example, and we can ask, well, does it have benefits, you have an article with a distinct above-the-fold experience that is answering exactly the intent that the user cares about within the topic you're trying to rank for.

So, all in all, ranch-style SEO refers to the fact that people are going to want more experienced viewpoints on shorter pieces of content that better match the intent they care about for the topic. And the reason why this matters is, again, twofold. Number one is that manual search evaluators are swiping right and saying, yeah, these pieces of content look good.

And, you know, I don't know that this is actually happening, but you can imagine, for certain types like these topic clusters or skyscraper-type pages, maybe they're looking at those, swiping left, and being like, well, sure, it covers everything, but it just doesn't feel right or read right.

And secondly, the reason why I think this is important is because Google has a lot more computing power than it has ever had. So it's a lot easier for Google to actually rely on user engagement in a machine learning world. And you can imagine, if your title tag and your above-the-fold experience are just a better match for what the user is looking for, well, Google's gonna know that, they're gonna weight that, and that's just going to do better.

I'm just gonna add this one last note, which is about video. I think video is really an experience and legitimacy booster. You know, Google owns YouTube, and many searches start on YouTube; featured videos and video carousels appear in the results.

But really, video content is, A, a lot more expensive to produce, and B, therefore not as commoditized as written content is becoming with the advent of AI content. And so video content builds trust better, and the consumption of it is more normalized, in that if you're watching an instructional YouTube video at work, you're not being ridiculed for watching YouTube at work anymore.

I do think five years ago, if you were watching YouTube at work, then maybe you got reprimanded; maybe YouTube was even blocked by the company domain or firewall. Cool. So I'll leave it at that. Any questions, thoughts, or comments on what's going on with AI and the considerations and pitfalls to think through, should you adopt it in your content process?

Travis: Yeah, I have a couple of questions. I think the first one is from Guy, who says: how do you get past the marketing buzzwords with small business owners and C-suite companies alike, who are asking for AI and expect it to write entire articles and get rid of content teams? I'm assuming we're talking about AI. What would be the key points to give them? They have short attention spans.

Bernard: Yeah, let me think about that. Let's see, small business owners and C-suite companies alike. I think the synopsis, and I'm trying to dumb it down as much as possible, is that commodity prompting for language models means commodity output, and commodity output is not good enough to rank on Google search, because it's not adding anything of value to the topic that's being written about.

That's how I would dumb it down, Guy. Let me know if that works for you. If it doesn't, then, you know, I'm still trying to figure out the right way to pitch the pitfalls of just using stock AI to write stock content as well.

Travis: Yep. And Vera asked: is long-form content dead, in your opinion? Pillar articles ranging between 2,000 and 5,000 words, are they no longer relevant?

Bernard: Yes, absolutely. That's the short answer; I'm trying to pull up the slide here. No, here. Yes. Yeah, so if we take a look at this,

you'll see: "Are you writing to a particular word count because you've heard that Google has a preferred word count?" And the answer is, no, we don't. It's like, yeah, as long as it's helpful and answers the question the user cares about, then it doesn't matter how long it is. That said, I do think that, generally speaking, you wanna be at or above a thousand words, but

you know, obviously the classic SEO answer here is "it depends." So yeah, anyways, particular word counts don't really matter; it really just depends. Focus on answering the user's intent or question as quickly as possible, doing your best to prevent subsequent searches around the subtopic or query that they're looking for.

And then, you know, however long it takes you to do that, do that. And then, you know, my other just high-level SS e o had is like, yeah, but a thousand like words, I think, still is a, like distinct cutoff, probably just from a, you know, like if I were a user and I landed on your article, and it just looked, I have like a paragraph, then I'm probably like not gonna take it as seriously.

Travis: Good point. And then Kathleen asked: they've adopted ranch-style SEO for a car insurance cluster, and they're running into some cannibalization issues, because it's impossible not to use overlapping keywords. What do you recommend here?

Bernard: Yeah, so I think we have to first ask ourselves, is cannibalization even bad?

And, well, I know I like saying some potentially controversial things here. But first of all, lemme see if I can explain it to y'all. Let's pull up this good old text editor. So basically, cannibalization happens in the sense that, let's say you have 1,000 impressions for a keyword, and then you have page A, which is targeting the keyword, and you have page B, which is also targeting the keyword, like this.

So the reason why cannibalization got a bad rep is because, if you had two separate pages targeting the same keyword, and let's say you're on the front page of Google for at least one of them, Google is confused, because there are two of them.

So what Google has to do is split its sampling, in terms of user engagement, between two sets of pages. So you can imagine, right, what that means is that there are only 500 impressions, in this particular example, being sent to each page. So you can imagine the process by which Google arrives at a statistically significant and confident result that one page is better than another page is split by two.

You're dividing the two pages by the keyword that somebody is searching for. This really rears its head in an ugly way when there are way fewer impressions and, guess what, you're not on the front page. So you can imagine, let's say there are 10 monthly searches. Well, guess what?

You're splitting Google's testing resources for both of your pages between two sets of pages, and five impressions each means that you're doubling the amount of time Google needs to arrive at confidence that page A is, in fact, better than page B. So that's where keyword cannibalization gets, in my opinion, the bad rap.
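A back-of-the-envelope version of that argument, with made-up numbers for the engagement sample Google might need:

```python
# Splitting one keyword's impressions across two pages halves the sample
# each page gets, roughly doubling the time to a confident read on either.
def days_to_confidence(target_samples: int, monthly_impressions: int, pages: int) -> float:
    per_page_daily = monthly_impressions / 30 / pages
    return target_samples / per_page_daily

TARGET = 300  # hypothetical engagement samples needed for confidence

for impressions in (1000, 10):
    one = days_to_confidence(TARGET, impressions, pages=1)
    two = days_to_confidence(TARGET, impressions, pages=2)
    print(f"{impressions:>5} impressions/mo: 1 page ~{one:,.0f} days, 2 pages ~{two:,.0f} days")
```

At 1,000 monthly impressions the split is tolerable; at 10, the testing time balloons, which is exactly where cannibalization earns its bad rap.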

Now, you can imagine, let's say you did have a thousand impressions for the keyword, and now it's split like this. Well, this isn't as bad, because you're basically still getting a good amount of testing done in the Google algorithm with a high-impression keyword; you're just splitting the resources between two pages. And then here's what we've also been seeing. I don't know if I can reduce this.

Let's see, "how to get rid of pimples." You're actually seeing, right, like this: "how to get rid of pimples: four natural ways, as fast as possible," and then "how to get rid of acne: 14 remedies for pimples." So this one targets a separate intent, right, natural and fast, and this one targets another intent, home remedies.

And really, like this, right, this isn't so bad. You actually have more real estate, because you've covered two intents that the user cares about for this particular topic, and you get an indented search result. So anyways, that was a long-winded way to answer your question, Kathleen, but I don't think it's bad.

I think that it increases Google's testing time to verify that your website is going to be good. But if you're already good, then what you could do is qualify for more screen real estate in the SERP.

Travis: Awesome. And then Amanda asked: do you have any thoughts on using AI images in content, and does it impact SEO in any way?

Bernard: Great question. My understanding of it is, I don't see why it would be negative or positive. It's just kind of like, if your content is better because of it, then use it. But if it's not, then don't. I'll just pull this up; I like to use this a lot as an example.

Take Healthline: I don't think this piece of content has any images, right? And I mean, it is specifically related to something visual like that. All right, that's okay, there's a video now, but yeah. So I think the truth of the matter is, if it helps to enhance the overall page experience, sure.

But we're actually seeing a general decline of impact with image use, because, in my opinion, it's back to the manual search evaluators: they're not actually caring that a page has imagery anymore. That's my hypothesis, so take it or leave it. Yeah.

Travis: I agree with that.

And then Amanda also asked: if long-form content is dead, how will Clearscope's word count estimate be impacted? Should it be ignored?

Bernard: It should be ignored. I've always been recommending, all right, so this is my stance on it for those of you who are Clearscope users. Essentially, we give you this text editor experience where you can put in your piece of content on, say, how AI content works.

You'll also notice that we give you a typical word count, which is the median of what we see ranking in the top results. My opinion is that longer content tended to be better because, in the past, if you just wrote more content, chances are you would do a better job hitting all of the known entities that you should probably be hitting, in an accidental kind of way.

So longer content just happened to cover things in a more comprehensive way. But with the advent of content optimization tools and Clearscope, you can actually do a good job covering all of the different entities that the user is likely to care about for a particular topic without writing as many words.

So I actually generally recommend going like 20 to 30% below the typical word count, with the A++, the content grade, being the most important thing, because, right, it's back to this whole bit about information gain: you're covering what Google expects and also adding to the knowledge graph.

And then I'm just gonna point this out, because everybody's here: you can actually see there's a way to add custom terms in Clearscope. And when you add custom terms, we will actually bring up a list of suggested terms for your particular topic's content report. These are basically entities that we've detected that are not quite important enough to throw into the main term list on the right, but are still actually quite interesting.

So we don't want to inundate the writers and say you gotta cover all, like, 150 terms; we like to consolidate it and make it simpler, more approachable. But adding these is sort of like scoring the plus-plus in Clearscope, because, right, GPT-3 came up on the "how AI content works" report.

I guess that makes a lot of sense. But you can see here, these are gonna be the fringes of where we see the knowledge graph evolving for the topic, and I think that this would be the most important thing to do and think about.

Travis: Awesome, perfect. Yeah, that's all the questions we have for today.

So thanks again, Bernard. Everyone, we'll send the recording tomorrow. If you have any more questions, feel free to reach out to us on Twitter or via email. Bernard, do you have any last words before we give everyone their time back?

Bernard: Thanks for tuning in. We've got a lot of exciting stuff for you in the Clearscope world surrounding continuous monitoring of your content quality.

Especially as, you know, topics change over time. So definitely stay tuned for more updates and upgrades to the platform in relation to Content Inventory. If you have not had a chance to play around with Content Inventory, definitely reach out and we'll get you set up. We're giving you insights into internal linking within your Content Inventory, insights into content decay, and then in the future, we're gonna give you keyword and content idea recommendations based on top emerging topics.

And lots and lots of good stuff to come.

Travis: Awesome. Yeah, if you want access, just send us an email at support@clearscope.io. We'll be happy to add it to your account.


Written by
Bernard Huang
Co-founder of Clearscope

Join thousands of marketers who receive our weekly emails.

We share content marketing best practices and SEO strategies from the brightest minds in marketing. You’ll also be the first to learn about and join our next weekly webinar with the industry’s best.

Join today