
How AI Can Make Content Better (And Much, Much Worse) by Ryan Law of Animalz

Bernard Huang

Webinar recorded on

Join our weekly live webinars with the marketing industry’s best and brightest. It’s 100% free. Sign up and attend our next webinar.

Ryan Law, VP of Content at Animalz, gave a splendid presentation on "How AI Can Make Content Better (And Much, Much Worse)".

Did you know that Ryan actually used AI/GPT-3 to generate the webinar title and concluding remarks?

You'll learn more about how AI content generation works, whether AI generated content is here to stay, and other topics including:

  • AI & the history of writing technologies

  • GPT-3 explained

  • Six ways to use GPT-3 for content marketing

  • Our experiment in writing long-form content with GPT-3

  • Analysing the strengths & weaknesses of GPT-3

  • The era of infinite content

  • The ranking potential of AI-generated content

  • Three ways humans can compete with AI

Watch the full webinar

About Ryan Law:

Ryan Law is the VP of Content at Animalz, where he's worked with companies including Google, GoDaddy, Clearbit, and Algolia. He has ten years of experience as a writer, content strategist, marketing director and agency founder.

Follow Ryan Law on Twitter: https://twitter.com/thinking_slow

About Animalz:

Animalz is a content marketing agency that delivers high-quality content to enterprise companies, startups, and VC firms.

Follow Animalz on Twitter: https://twitter.com/AnimalzCo

Read the transcript

Bernard:

I would say that it's a perfect time to get started. We have here Ryan Law. Ryan is the VP of Content at Animalz, where he's worked with companies including Google, GoDaddy, Clearbit, and Algolia. He has 10 years of experience as a writer, content strategist, marketing director, and agency founder. Animalz, if you haven't heard of it, is one of the top content marketing agencies out there. They deliver high-quality content to enterprise companies, startups, and VC firms. Could not be more excited to have Ryan here to talk about AI and content. Stage is all yours, Ryan.

Ryan:

Yeah. Thank you for having me, Bernard. Yeah, you basically missed 10 minutes of us digging into this stuff just for fun. I think we're both really excited to talk about this topic. I'm going to go ahead and share my screen and get started. Cool. Getting my timer there. As we've alluded to, the thing I want to talk a whole bunch about today is the role AI is currently playing and potentially will play in content marketing. As a quick little demo of that, I thought it would be fun to actually have the webinar title written by AI. So this is brought to you by GPT-3 directly, and we'll talk a little bit more about that later.

Ryan:

In terms of my background, I am not a developer. I'm not a researcher. I'm somebody that's never really given that much thought to AI beyond the science fiction use case of it. It's interesting, maybe we'll become bugs crushed under the heel of our sentient robot overlords, but until that point, I'd not really given much thought to it, until AI did the, I don't know, very rude thing of forcing itself into my area of expertise, my area of passion and enjoyment; writing.

Ryan:

And the reason I spend so much time writing, as Bernard said, is because I work at a company called Animalz. We are basically knocking on 130 or so people that spend all of our time thinking about good writing, good content marketing. And when you've got that many people thinking about writing, spending that many hours cranking out blog posts, we're very attuned to any new tools and technologies that appear that can potentially make that process a little bit easier, a little bit more enjoyable, a little bit more productive.

Ryan:

And with that in mind, we were very, very highly attuned to a sudden deluge of AI writing technologies that appeared in the last year or so. AI has actually been involved in content marketing for quite a long time. Take something like our beloved Clearscope. There is AI involved in that, in the same way there are AIs involved in other parts of the content marketing workflow, in terms of search optimization and different, discrete parts of what we do. But these tools were set apart because they are muscling in on the thing that we do, the territory that we've claimed for ourselves as human beings. These tools are here to actually help write the content for us; they're all AI writing assistants.

Ryan:

And you've probably heard of some of these. Jarvis used to be called Conversion AI, I think. Copy AI, that's probably the one that I use the most. Headlime, Copysmith, Rytr, a whole bunch of these tools are around. And the reason so many of these appeared when they did is because they're all effectively building on the same underlying technology, a natural language model called GPT-3, which you've probably heard of as well. Now, when this came out, everyone was abuzz with it. My Twitter feed lit up and I had no idea what it was, what it meant, how we were meant to interact with it or what it would do for us.

Ryan:

Thankfully, I was able to enlist the help of Andrew, our wonderful Head of R&D, former neuroscientist, somebody that is generally much smarter than me. And he basically explained that with GPT-3, everything you need to know about it is actually inherently there within the acronym itself. So in reverse order, T stands for transformer. This is slightly beyond my comprehension, but this basically is the algorithm the model uses. It is a specialist in understanding the role words play in natural language. And the way it does that is because it is P, pre-trained.

Ryan:

Machine learning generally needs access to a big old data set of information for it to be able to analyze and find associations and relationships within. That would be a hard thing for us to do. We don't have billions of pages of content to share with these models, but thankfully, the folks at OpenAI pre-trained it for us. So GPT-3 has effectively read well over three billion pages of text. That's all of the English-language Wikipedia. That is the entirety of the Common Crawl dataset, which is an index of the most commonly visited websites in the world. It's even read all of Google Books as well. So there's a staggering amount of writing that has gone into this language model.

Ryan:

And G stands for generative. The whole point of GPT-3 is to generate text. It basically has an understanding of the rules of language that it's cobbled together through the training that's happened and is using that to predict the words that are likely to follow on from whatever words you've given it. And you can probably guess what the 3 means. This is the third iteration of GPT. There is GPT, GPT-2 and GPT-3. And I think that's worth mentioning because there have been orders-of-magnitude advancements between these different models. The difference between GPT-2 and GPT-3 has been staggering. And therefore, GPT-4 is likely to bring even more change to bear. So everything we talk about today is likely to apply twofold, tenfold in the future as well.

Ryan:

When we use GPT-3 and want it to actually do useful stuff for us, what is actually happening? I thought I'd put together a quick visualization of how we're interacting with GPT-3, just so you can get a sense of it. And the example I'm using here is what I actually did in order to come up with the title for this webinar. Basically, every time we interact with GPT-3, we are priming it with a given piece of input. It's basically us providing a fragment of information that allows it to contextualize all the data it has access to and use that to focus in on the right kinds of output to generate.

Ryan:

In this instance, I was using Copy AI and I asked for a title for a webinar about content marketing and artificial intelligence. Now, one of the cool things about using Copy AI, as opposed to accessing GPT-3 directly through the API, is that these tools generally have more guardrails added to them. As we found out when we used the API on its own, this technology is like a fire hydrant. You can get some absolutely mad stuff back from it. It is just this boundless, amazing wellspring of creativity. But by saying we'd like things of a certain length, in a certain format, maybe even a certain tone, we can actually bound that output to things which are a lot more useful for us.
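
For readers who want to see what that priming step looks like outside a tool like Copy AI, here is a minimal sketch against OpenAI's legacy Python client and Completions endpoint (the GPT-3-era interface). The prompt wording and parameter values are illustrative, not the exact ones used for this webinar's title.

```python
import openai  # legacy (pre-1.0) OpenAI Python client

openai.api_key = "YOUR_API_KEY"

# The "input prompt" is just a fragment of text that primes the model.
prompt = "Write a title for a webinar about content marketing and artificial intelligence:\n"

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 model
    prompt=prompt,
    max_tokens=20,      # guardrail: keep the output roughly title-length
    temperature=0.8,    # higher values give more of that "fire hydrant" creativity
    n=3,                # ask for several candidates to pick from
    stop=["\n"],        # stop at the end of the first line
)

for choice in response.choices:
    print(choice.text.strip())
```

Tools like Copy AI essentially wrap a call like this in templates and sensible defaults, so you never have to touch the raw parameters yourself.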

Ryan:

When I did that, I fed this input prompt into Copy AI. These are some of the actual outputs it spat back. The first one is very demonstrative of what you'd expect a webinar to sound like: Content Marketing and AI, A Biz Growth Strategy. That's interesting in the sense that it has identified that content marketing and AI probably is a business growth strategy. It could do something good for your business. It's even clocked onto the shorthand biz, instead of business. I would not be surprised if a human had written that title.

Ryan:

Output number two, even more interesting from my perspective, because it's interesting, actually. Learning from Machines to Create Better Content. It's got the same relevance there. That is fundamentally what we're talking about today, but there's an element of intrigue associated with it. Who are these machines, and what do we have to learn from them? And output three is a really great example of going off the rails a little bit, creating something that is not particularly useful, but you can see how it arrived at this point. A webinar is a type of presentation. Artificial intelligence correlates to brain. There's a semantic relationship there. I probably wouldn't use this as my webinar title and I'm hopefully not giving people a bunch of brain freeze in the process. It doesn't always get it right.

Ryan:

But the cool thing is even in its current form, interfacing with it through these tools, there is actually a lot that GPT-3 is already very, very good at. I've picked out six or so content-related use cases that I think you can go away and probably use this technology for right now. Probably the archetypal use case is product descriptions. If you imagine an eCommerce site with a thousand different SKUs of inventory, it's somebody's job to go away and write interesting, vivid, engaging descriptions for every single one of those products. And that sucks for somebody to have to sit down and do.

Ryan:

This is one of the features within Copy AI. I actually asked it to generate a product description for me, and the only input I gave it were these four words: a personalized leather satchel. And this is literally the first output it generated for me, and it's pretty darn good. Opens by describing something that's stylish, practical, and luxurious, all things I would want from a leather satchel. I'm not a big fan of it repeating the word stylish twice, but can't have everything. I can edit that out easily enough.
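
The same pattern works for product descriptions. Below is a hypothetical helper that frames a few words of input as a description prompt; the prompt template is a guess at how a tool might phrase it, not Copy AI's actual wording.

```python
import openai  # legacy (pre-1.0) OpenAI Python client

openai.api_key = "YOUR_API_KEY"

def generate_product_description(product: str) -> str:
    """Ask GPT-3 for a short product description from a few words of input."""
    prompt = (
        "Write an engaging e-commerce product description.\n"
        f"Product: {product}\n"
        "Description:"
    )
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=150,   # enough for a short paragraph plus a feature list
        temperature=0.7,
    )
    return response.choices[0].text.strip()

print(generate_product_description("a personalized leather satchel"))
```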

Ryan:

It's then even gone as far as trying to brand our satchel for us. It's called it the Elliot Satchel. It's obviously realized that this type of writing commonly has brands in it. So it has actually gone out and created a brand for us to use if we wanted to do that. And it's gone on to list a whole bunch of features, which all fit within the context of a leather satchel. Full-height compartment that can fit a laptop, sturdy leather shoulder strap, removable zip bag. These may not actually be the features of our leather satchel. We've not provided enough context for it, but they could well be. This could be my job done as a copywriter.

Ryan:

And then even a nice, inspiring, concluding sentiment about Italian leather, handcrafted by skilled artisans, made in that lovely, fancy place, England, where I hail from. This is actually not bad for something that took literally seconds to generate. And in that same vein, also very good for things like metadata. I absolutely hate writing meta descriptions for my blog posts. One of the least enjoyable parts of my job. This is something that GPT-3 can help you with in a big way. Describe what you've written about, even share a snippet of what you've written, and it can create usable summaries that will function as good metadata.

Ryan:

Probably the most fun thing we did with this: we got access to GPT-3 when it was still in beta. We were accessing it directly through the API. Thanks to Andrew, again. Couldn't have done that myself. And one of the things we did was we fed it a bunch of tweets from Slack's Twitter account. And we said, "Go away and give us more of these, please. We want to see what you can do." And I've mocked up the output to look like tweets, but all of the copy, even the URLs here, were generated by GPT-3. And again, it's pretty good actually.

Ryan:

It uses emojis in a contextual, interesting, relevant way. It opens sentences and it closes sentences with them. It's included topics that are relevant to what Slack talks about in terms of releasing an emoji pack or work/life balance and remote working, and those kind of things. And it's even gone as far as adding URLs into situations where you would expect URLs. And one interesting aside here actually, it generated these URLs effectively randomly, but they linked out to actual websites. So we had to redact these before we used them, because if you just published this verbatim, you'd be linking out to who knows what on the internet.
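
What we did with the Slack tweets is essentially few-shot prompting: show the model a handful of examples and let it continue the pattern. Here is a minimal sketch of that idea; the example tweets below are placeholders, not Slack's actual copy.

```python
import openai  # legacy (pre-1.0) OpenAI Python client

openai.api_key = "YOUR_API_KEY"

# Placeholder examples standing in for real tweets pulled from the brand's account.
example_tweets = [
    "New in Slack: custom emoji packs for your whole workspace 🎉",
    "Remote work tip: set a reminder to step away from your screen 🌿",
    "Channels keep conversations organized so nothing gets lost 📌",
]

# Few-shot prompt: list the examples, then leave a dangling "Tweet:" for the model to complete.
prompt = "".join(f"Tweet: {t}\n" for t in example_tweets) + "Tweet:"

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=60,
    temperature=0.9,
    n=5,
    stop=["\n"],   # each completion is a single tweet-length line
)

for choice in response.choices:
    print(choice.text.strip())
```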

Ryan:

Another thing you can do with that is build out website copy as well. If you describe a product or a service, it can very easily generate ideas for you to use as titles, value propositions, even hero text as well. And probably my favorite use cases, the things I'd actually advocate for using it with: titles and headers. When I think about the most successful blog posts I've ever written, most of them are the ones that had good titles. And I know that to come up with that one good title, I often had to come up with 20 or 30 different variations of it. But one of the quirks of the human mind is that as soon as you come up with one idea for a blog post title, it's very hard to then put that aside and come up with 19 other totally different variations.

Ryan:

And what normally happens when I ask people to do this exercise is you end up with 20 variations of the same theme. GPT-3 does not have that problem. You can give it an input prompt in terms of a title, and it can spit back infinite numbers of titles that are very creative, very different, and can serve as really good inputs to your own brainstorming process and help you out of the mental rut you've created for yourself. And that can even be used to come up with topics to write about as well. We are big fans of a company called CB Insights. They do a really great job at mixing data analysis with big, heady journalistic topics.

Ryan:

We fed it five titles actually from the CB Insights blog. And we said, "Give us some more ideas, give us some stuff we can write about." And almost without exception, the topics it came up with are things that would have fitted on the blog. "300 companies using machine learning to better understand and serve customers, 50 startups and projects redefining retail." There's no guarantee that there's actually a story within these. Maybe we wouldn't want to write them, but we could. They fit, and they serve as a really useful prompt for us to latch onto and think maybe we should go out and do some reading and research here.

Ryan:

And so far, we have talked about content-adjacent processes, little discrete parts of our workflow that we can expedite with AI, but what about the big, meaty thing? What about the big payoff, the big mother lode? Can GPT-3 write whole blog posts? Can we retire? Can we hang up our pens and let AI do our job for us? As a team of, again, 130 people that write blog posts all day, we really wanted to know the answer to this. So one of the very first things we did when we got access to the API was try and write an entire blog post using just GPT-3. And the way we did that was bootstrapping it.

Ryan:

We basically gave it a very simple input prompt, for example, a title: GPT-3 and the future of content marketing. Very meta. And whatever output it generated for us, we fed back in as the input prompt for the next go-round of generation, with some very, very minor curation of the output along the way. And we did that until we had a fully fledged article, some 1500 or 2000 words, generated in an absolutely minuscule fraction of the time it would've taken me to write it, for example. And armed with that, we did what we always do with every blog post and we submitted it to our editing team to see what they thought.
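
To make that bootstrapping loop concrete, here is a rough sketch of the process being described: generate a chunk, append it to the draft, and feed the tail of the draft back in as the next prompt. The function name, word-count target, and parameters are illustrative; this is not the actual script Animalz used.

```python
import openai  # legacy (pre-1.0) OpenAI Python client

openai.api_key = "YOUR_API_KEY"

def bootstrap_article(title: str, target_words: int = 1500) -> str:
    """Grow a draft by repeatedly feeding the latest text back in as the prompt."""
    draft = f"{title}\n\n"
    while len(draft.split()) < target_words:
        response = openai.Completion.create(
            engine="davinci",
            prompt=draft[-2000:],   # prime the next generation with the tail of the draft
            max_tokens=250,
            temperature=0.7,
        )
        # In practice a human lightly curates each chunk before it goes back in.
        draft += response.choices[0].text
    return draft

print(bootstrap_article("GPT-3 and the future of content marketing"))
```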

Ryan:

Gail, our wonderful Director of Quality, did not like it. She did not like it one bit. This is a screenshot of the actual doc. We tagged her in and asked her to edit. All of that yellow and red and all those comments are things she was calling out as being pretty darn bad. A few of the comments: there were lots of instances where GPT-3 would just meander away from the topic at hand and say something that didn't really fit in with what we were trying to communicate. There were some sentences that literally fried Gail's ... well, I'd say figuratively, fried Gail's brain. Stuff like, "Our future will not be dominated by machines." Cryptic. I'm not entirely sure what to make of that sentence.

Ryan:

And probably my favorite bit of feedback Gail left, "The reasoning is dizzying and illogical. A paragraph will begin by stating that X is true and end stating that X is false. My brain hurts." So she generally had a very rough time trying to edit this and try and get it into shape. And it was not even slightly comparable to anything any human writer we've ever worked with would've created. And there were a few common themes. We went back and we looked at the feedback, areas where it was not particularly strong.

Ryan:

One of the things that sets human writing apart from, well, any other forms of writing is our ability to construct narrative. We can take individual ideas and we can group them together in a cohesive, coherent story. And that is something that is very, very hard for GPT-3 to do. It is excellent, as we've seen, at generating little fragments, self-contained pieces of information, but as soon as you ask it to do that at any great length, the relationships between those, the associations, get very blurry and fuzzy and messy. And the end result is something that is quite hard to read, and also not particularly persuasive because there's no narrative binding it all together.

Ryan:

There's also no insight. There's nothing new in here. GPT-3 obviously has a vast dataset at its disposal, but it is a fixed dataset. It can't go out and do the things that we do, like interview people or conduct original research, or find any new information that nobody else has access to. So everything in here largely can be traced back to things that already exist, have already been written about before. And probably most alarming of all, the lack of evidence it gave for things. There are lots of pretty big, bold claims in here. Very few of them have anything to back it up. It's just words. It's just hyperbole.

Ryan:

There were some instances where it did try to back this up by using things like mathematical examples to talk us through the logic of it, and it would get the maths wrong because what it's doing is it is generating text and saying, "Hey, we expect there to be a mathematical formula here." That's the sort of thing people would use in this context. And it's evaluating it from the perspective of semantic accuracy ... do these words and symbols make sense in this order ... and not looking at it from a mathematical perspective.

Ryan:

And there are even some cases where there would be amazing, really interesting, pithy quotes attributed to real people and real brands. And as soon as we went looking for those, you would realize that they were completely made up. Again, GPT-3 is saying, "We expect there to be a quote here. This is the kind of thing this content has," but it's not going out and finding real quotes. It's creating quotes that fit that archetype. The obvious implication of that then is if you publish this without any human intervention, you are putting words in people's mouths. You are saying things that people have never said, and this is before we even touch on the big, meaty topic of AI, which is bias.

Ryan:

The hopeful end state of humans writing is that we are able to use our critical faculties to evaluate what we've written and make sure there are no unintended consequences to it. There is nothing harmful that's happening as a result of the words we've put on paper. That doesn't always happen, but we are theoretically capable of doing that. That is much harder for something like GPT-3 to do. And although it has access to all of the world's best writing, the works of Shakespeare, for example, in its dataset, it also has access to much of the world's worst writing in that vast dataset as well. So without any human intervention, much of the bias that exists in the web in its current format can make it through into this AI generated content.

Ryan:

This is from a piece of research that came out quite early on in the wake of GPT-3, just looking at different types of bias and what it generated. And this particular example was the descriptive words associated with each gender, male and female, in this case. There was quite a staggering gulf in the words used. It was pretty terrible. Male descriptive words were things like fantastic, protect, jolly, stable, personable, and the most common words associated with women were things like bubbly or naughty or pregnant. There's obviously a huge gulf there. That is not okay. And without anybody to filter through this information and make sure this isn't working into the end product, GPT-3 and AI models like it can perpetuate harmful bias as well.

Ryan:

Reflecting on all of this, what is GPT-3 good at, and what is it bad at? Andrew, again, he's the master of the two-by-two framework. He reflected on our experiment with it, and he came up with this beautiful way of summarizing what this stuff is good at. He basically characterized GPT-3 as thriving in the short, creative quadrant. Imagine two axes: long content up here and short content down here, factual content like news, for example, and creative content like poetry over here. GPT-3 is very good at creative and short, as we've seen. It can create paragraphs that are compelling, sentences that flow and are really interesting, and it is often free from much of the creative restraint that we humans impose upon ourselves.

Ryan:

But as we've also seen, it really struggles to weave that together into a long-form, cohesive narrative. And it often creates things that have the appearance of fact, but are not factually accurate. I would characterize it, then, as great for lots of little parts of content marketing, but not necessarily good at writing whole blog posts, at least not in its current format. But that is not to say that people are not already trying to do exactly that. This is a bit of research from a wonderful SEO I follow on Twitter, a guy called Jake. He has been digging into a few blogs that he suspects are scaling their content operations on the back of AI-generated content.

Ryan:

Now, the obvious caveat here is that there is no guarantee that this is AI-generated content. As we'll touch on later, there are no necessarily obvious hallmarks of that, but it is definitely possible, even likely, that this is AI-generated content. This one example was particularly compelling. In the space of a few months, this site had racked up 550,000 indexed pages, an average of almost 800 words per page, and rankings for 4.3 million keywords. Jake was quick to point out that these were not big money terms, no big head terms. These were very small, long-tail terms, but still, in aggregate, those rankings were generating 1.3 million page views per month. And they were doing that to monetize through display advertising. So really interesting use case.

Ryan:

You may think, all right, this is a slightly different use case to what we do as content marketers. This isn't going to be a replacement for what we do. You can get away with slightly worse stuff monetizing through display ads, whatever. And I would disagree with that actually for a few reasons. The big thing I hear whenever I bring this topic up is that, "Ryan, you've just spent so long explaining why AI content isn't actually very good. It's just not going to be good enough to rank. It won't perform the way we expect it to." And I would say, yeah, AI content is bad, but so is most human content.

Ryan:

If you spend any time looking at these search results for almost any query, the same problems that plague AI generated content, plague human written content as well; lack of narrative structure, poor evidence and citation, lack of original insight and information. The content, the words we write in a page, are very important for ranking, but there are also other ranking factors that go into the total puzzle of what determines whether something ranks or not. If bad human content already ranks very well, I think there's a good argument that bad AI content will also rank in some capacity.

Ryan:

The next refutation is the idea that, well, you can't just publish a blog post and have good results. There needs to be some guiding strategy, some direction behind it. And there's this idea of cutting the Gordian knot. If somebody gives you a really big, tangled knot, you can either spend ages trying to untangle it, or you can cut straight through it. And AI lets you cut through this Gordian knot in a pretty impressive way. Take keyword research. You can either sit down and spend a lot of time researching the best keywords for your niche and your industry and prioritizing which ones you want to write, or if you've got AI, you can create content for all of them. Why bother filtering a thousand keywords down to 50 if you can generate a thousand articles at the click of a button? It's a slightly hyperbolic example, but I think this is totally possible. Companies will have a go at doing this in some format.

Ryan:

The next line of defense is obviously plagiarism. Google is very hot on plagiarized content. There have been many updates over the years that have tried to penalize plagiarized content. Surely this type of content is going to fall foul of that as well. And I've got bad news for you on that front too. GPT-3 is not spinning content in the way we normally think of spinning content. It's not taking words and then rephrasing them in a way that is similar and could be detected. It is actually creating new writing inspired by them. By almost any measure, GPT-3 content is original. It's a series of words that have probably not appeared in that sequence anywhere else on the internet. It is not spun. It is going to be very hard to detect with conventional plagiarism detectors.

Ryan:

And the final argument is that Google is going to adapt. Anything that fundamentally changes the nature of the search results and the way people interact with them, Google has an interest in adapting to and making sure nothing gets too out of whack. And I think, yeah, probably, eventually. It's going to be hard to do that for all the reasons we mentioned. There are no obvious hallmarks of AI-generated content. Fundamentally, it's going to take time, and that is probably time that we do not have. Say it takes a couple of years for Google to spin up an update to actually deal with this. That is time where your competitors could be building an unbreachable moat of backlinks that you cannot compete with, which could make life very, very hard for you in the future.

Ryan:

As one way of thinking about this, I've thought about it through the lens of the era of infinite content. If we think about the things that have determined how much content exists in the world, there have been four radical technologies that have augmented this. Obviously, writing was a hugely pivotal technology in the first place. We went from transmitting information through oral histories, person to person, to being able to document it and share it through time and across distance. The printing press took that to another level as well. For the first time, we didn't have to write everything down by hand. We could scale the production of content and allow it to disseminate to vast corners of the world in a fraction of the time.

Ryan:

And the internet took things to another level. Suddenly, anybody could write anything and share it with anybody else an infinite number of times through the medium of the internet. And AI writing, I think, is the next part of this trajectory. For the very first time, we've actually decoupled effort from output. You no longer have to sit down and spend time to write content. You can press buttons and get something like GPT-3 to do it for you. Just to use a quote from a blog post I wrote about this, "We're basically entering an era where the marginal cost of writing content has nose-dived. We've gone from multiple skilled person-hours to potentially minutes in a freemium SaaS product," changing the game totally.

Ryan:

And if I put on my most dystopian hat and think about the implications this could have, there are three things I'm spending a lot of time thinking about. Any time any technology offers a competitive advantage, there is a company and several companies that will take advantage of it. In the near future, I think we are going to see more and more companies take advantage of this to try and pump out as much content as they possibly can. I think they will be in the minority, but the fact that they can create so much content so easily could still be problematic for everyone else, all the good actors in the system. That could mean that every article is going to end up being competed with by hundreds, maybe even thousands, of other articles.

Ryan:

There's even a world in which stumbling upon a search result that your competitors haven't found, maybe that could go away, because if you can publish a thousand articles a day, you can saturate every possible topic you'd ever want to write about. And I think the logical consequence of this is a change in the ranking signals that Google uses to evaluate search results. When every single article is a remixed version of the same information, the logical consequence of that is having to devalue the contents of a blog post. This could potentially worsen the existing problem we have, which is off-page ranking factors, like backlinks and domain authority, playing a disproportionate role. They could become even more important because we can't trust written content. And that could worsen many of the problems we currently have: big companies that had a head start have built a bunch of backlinks that smaller companies can no longer compete with.

Ryan:

Though, in case you're despairing, it's okay, there's stuff we can do about this. There are plenty of things we can do now. And even if this dystopian scenario is not realized, this is all stuff that is really good to wrap your head around and get started with today. I'm quickly going to cover three things: wrapping your head around the topic of information gain, which I think is going to be very important; investing more in primary research; and having a little think about diversifying your content beyond just SEO.

Ryan:

We already have a bit of a problem with what we at Animalz call copycat content. Most of the ways people write blog posts, they have a topic they know nothing about, and they do their research by reading the existing articles on that topic. They pick and choose parts of that information, and they incorporate it into their own blog post. And the problem with that is you end up with a whole bunch of search results that have the same information, just slightly remixed. They're all slight variations of each other. And AI could potentially dramatically worsen that because, as I said, everything it generates is pulling from a fixed data source. It can't go out and get new information.

Ryan:

But this is a problem that Google is already thinking about. And there's a patent that came out around the idea of information gain that sheds light on how it's thinking about this. This is basically Google's way of trying to measure the new information a given article provides over the existing articles on that topic. And that is super important because it's basically an acknowledgement that your blog post lives in an ecosystem of other blog posts. And the way Google is potentially going to use this, according to the patent, is that somebody who reads article X may then be recommended a different set of articles afterwards than somebody who reads article Y, because each of those articles has different information and the articles they would find useful afterwards may well be different.
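
As a toy illustration of the information gain idea (and nothing more than that; this has no relationship to whatever Google's patented implementation actually does), you can think of it as scoring a candidate article by how much of it is not already covered by the articles the reader has seen:

```python
# Toy sketch: score an article by the share of its terms the reader hasn't already seen.
# Purely illustrative; not Google's actual information gain calculation.

def terms(text: str) -> set:
    return {w.lower().strip(".,!?") for w in text.split() if len(w) > 3}

def information_gain(candidate: str, already_read: list) -> float:
    seen = set()
    for article in already_read:
        seen |= terms(article)
    candidate_terms = terms(candidate)
    if not candidate_terms:
        return 0.0
    return len(candidate_terms - seen) / len(candidate_terms)

already_read = ["GPT-3 is a large pretrained language model that generates text."]
article_x = "GPT-3 generates text from a large pretrained language model."
article_y = "Our survey of 500 marketers shows how teams budget for original research."

print(information_gain(article_x, already_read))  # low: mostly repeats what was read
print(information_gain(article_y, already_read))  # high: mostly new information
```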

Ryan:

So this is something that is already working its way in some capacity into search rankings. This is really important for us to wrap our heads around because not only is this potentially going to be rewarded by Google, this is something that is always rewarded by people, because if you read something, there is always more information you want that you haven't had access to and your content can provide that. And the way it can provide that is I think through primary research. This is effectively the ultimate form of information gain. Things like survey data or expert quotes, SME interviews, finding insights from your own product usage data, analyzing existing data sets, even addressing parts of a topic that nobody else has ever talked about before, these are all mechanisms through which you can add brand new information to the discussion.

Ryan:

And that is something that is hard for AI content to replicate. This is something we try and do at Animalz a lot. We advocate for this all the time. It's not enough to just rehash the same information, especially as the search results are getting more competitive. You have to bring something new to the table. And going out and doing the slightly hard but amazing thing of getting primary research is a defensible way of doing that. Another form of information gain, another way of doing this, is actually thought leadership content. We spent a very long time throughout this presentation talking about SEO content, because I think it's here that AI is going to encroach first.

Ryan:

A lot of tactical, traditional SEO formats are actually pretty darn easy for AI to replicate. Simple how-to posts, what-is posts, articles where the subject matter is very well understood and there is very little differentiation between the existing articles on that topic. But things like opinion, experience and analysis are much, much, much harder for AI to do anything with. We think about this as thought leadership. This is basically a way of generating awareness and building trust by sharing your earned secrets. These are basically the experiences and perspectives that only you and your company have by virtue of the job you do, the people you know, the experiences you've had.

Ryan:

And this thought leadership can take a whole bunch of different forms. Counter narrative opinion, you can argue against the status quo, why you think it's wrong based on your experience. You can share your own personal perspective. GPT-3 can make that stuff up, but it has never gone out and done something the way that you have. Data storytelling, from a dataset, using your comprehension and understanding, what does that mean? What is the big picture that we should care about within that data? Same with industry analysis, you know your industry better than anybody else. What is happening that is interesting and worth paying attention to? Now, failing all of that, who in your network, who do you know, has these opinions and ideas? Who can you elevate and use in your content marketing?

Ryan:

I wanted to hand over the finale of this presentation to GPT-3. I wanted to give it a chance to wrap things up in a suitably pithy manner for us. So I tasked Copy AI. I said, "Please write an inspiring concluding sentence for a webinar about content marketing and AI." And this is what it came up with. "The truth is that we don't know what the future may hold for AI and machine learning algorithms." That's true. Very true. "However, one thing is certain: in a content world where a tsunami of information threatens to drown the audience in meaningless facts and jargon, a creative strategy that uses an intelligent reasoning machine is a blessing in disguise." And apart from being actually fairly interesting and not badly written, I like this. I think this is actually pretty true.

Ryan:

And that leads me to the thing I want to leave lodged in your brain as my final message on this topic. There is a vast difference between AI-assisted writing and AI-generated writing. AI-assisted writing is I think the best use case for something like GPT-3. Humans are still doing the heavy lifting. We are still creating cohesive narratives, we're still vetting things for accuracy, but we are using it in areas where we are weak; brainstorming, coming up with titles, new angles, new pitches, even automating some of the parts of the process that don't require much skill.

Ryan:

I think the problems emerge when we rely solely on AI-generated writing. When we say, "Hey, go away, write a big, long form thing," and we don't bother vetting it for accuracy or interest or narrative structure. So if you want to go and get started with GPT-3 today, and I think you probably should, this is the way to do it. Use it to augment your workflow, use it to become a little bit smarter and hopefully a little bit more creative in the process. That's what I'm trying to do with it as well. Cool, thank you for sticking with me for 40 odd minutes of ranting.

Bernard:

Wow, Ryan, that was a beautiful and a gorgeous presentation that I think encapsulates a lot of ideas surrounding natural language generation in a very articulate manner. And I love how meta it was to have GPT-3 create examples for you that, in a lot of places, made sense for the short, more pointed arguments, and then obviously pointed out the flaws where it struggled with the longer-form stuff. Yeah, I would say I echo a lot of the same sentiments as somebody who has created a tool which you could say encourages copycat content to a certain degree. Philosophically speaking, we want Clearscope to be more of a, "Okay, if you're writing about this topic, you should cover all of your bases and then add some additional information gain, bring subject matter expertise to the table."

Bernard:

But everybody will use a tool in whichever way he or she sees fit. And a lot of that is now this era of infinite content. If a system can be gamed, it oftentimes will be gamed. So it's just such a poetic way of putting together I think a lot of the thoughts that people have about GPT-3. And I think that tactical SEO practitioners who veer more toward the gray-hat or black-hat side of things, you can totally expect them to be spamming the internet even harder than it currently is. So lots of great points. We do have a couple of questions so far to kick off the Q&A session. If other questions come in, please send them through. I can't think of a better person than Ryan to be answering all of these things surrounding AI and natural language generation. First question comes from Enrique. He asks, "Could AI writing be used to refresh old blog content? And what if you feed it an outline?"

Ryan:

We've actually been thinking about this. Andrew, our Head of R&D, has been thinking about potential tools and processes we can build out. And one of the areas that requires a level of skill, but is not necessarily as involved as writing a blog post from scratch, is refreshing, because quite often, there are systematic things you want to do to a blog post to improve its performance. Theoretically, yeah, I think you could. If you come up with a set of rules for refreshing content, things you generally want to do each time, then there's no reason you couldn't feed those as parameters into a model like this.

Ryan:

Obviously, the same caveats apply with everything I've said, in the sense that you are still going to need somebody to review that and edit that and make sure it fits into this cohesive whole. But I think maybe that's a good way of bridging the gap from individual paragraphs of information up to full blog posts, is that middle state of content refreshing. Yeah, that's a great point.

Bernard:

Totally. Totally. Yeah. We have another question then from Rick. He asks, "Does Google's authority and trustworthiness update already signal a move against AI content?"

Ryan:

I'm going to defer to you on this, Bernard. I feel like I don't spend that much time versing myself in the minutiae of every update. So I wouldn't want to mislead people here.

Bernard:

Yeah. My thoughts on this are actually very close to this idea that Ryan proposed around thought leadership, specifically augmenting your content with new information that previously didn't exist out there. I think that concept is actually a very interesting concept, and what we see is this idea of introducing new search perspectives that the user is going to care about. What we're seeing then, in the context of this prediction that Ryan has about the future of search, is that Google is getting very smart about what we call search engine results page optimization for a given topic.

Bernard:

You could imagine if somebody were to search for something surrounding CBD, like benefits associated with CBD, all of a sudden, Google is trying to figure out what the additional searches are that the user is likely to perform within that particular topic and defend against the user having to perform an additional search or click on many non-relevant additional results. You're seeing this search engine results page essentially graduate from 10 results specifically saying, "Six benefits, eight benefits, 12 benefits," to these different fragments of perspectives that the user is going to care about. This is where the "I tried this, and here's what I thought", the "We interviewed seven leading experts within the CBD or marijuana space and here's what they have to say," start to enter into the fold.

Bernard:

So I think [inaudible 00:45:24] is different fundamentally, I think [inaudible 00:45:28] is more specifically related to this idea where people who are performing searches for certain topics, primarily health related things, your life, or finance related things, your money, if they were presented with information that was potentially flawed or wrong, then that could dramatically influence somebody's livelihood or health. So Google wants to make sure that if you are creating content, making claims about certain things being beneficial, like supplements to your health, Google's saying, "Look, we don't want to present the user with not credible information."

Bernard:

I think they're two separate concepts, fundamentally speaking. And what we're saying, I think, is that GPT-3 will skew Google search results toward a more diverse set of perspectives that the user cares about. Whereas the expertise, authority and trustworthiness update was more about making sure that, should content make claims about things, those claims are more likely to be factually correct and come from credible sources. All right, we have a comment from Sonya. "Awesome presentation." She missed the first few minutes. "How do we access GPT-3?"

Ryan:

Yeah, good question. The way I've been using it recently is through one of these tools I mentioned earlier. I've been using Copy AI as an access point. It's very fun to play with. I recommend it. There are lots of others, very accessible. The way we started with it was actually accessing the API directly. That was less good for me because I'm not very technically competent. So I had to rely on other people to help me set that up. If you want a really low friction way of getting started with that, check out one of those tools.

Ryan:

A lot of them are slightly different flavors of GPT-3, in the sense that they are geared towards different use cases and they have different guardrails to allow you to get what you want out of it. So maybe take a look. Some of the functionality in Copy AI, stuff like generating titles and product descriptions: I saw they even added, referencing that earlier comment, an outline-to-blog-post generator, which I've not tried yet but definitely want to get stuck into.

Bernard:

Yeah, yeah. I would say I'm not sure if they're out of beta or whatever. I just checked the website and it still looks like you have to join the wait list. I got access fairly early as well. OpenAI has explicitly said that using GPT-3 to spin content for search engine optimization is disallowed. I believe the way that tools get around that is that GPT-3 will cap the amount of text that you're able to generate at a certain amount of ... they call it tokens, which represent word fragments. So the way that a lot of tools get around that parameter is that they just simply have GPT-3 create 300 to 500 words. I don't know what the exact hard limit is. And then it stops.

Bernard:

And then literally, all you need to do is click another button and have it generate an additional 300 to 500 words. And that is an acceptable use case of GPT-3 as deemed by the OpenAI team. So you can imagine tools then can just say, "Okay, does this look good?" And then as long as you hit a button, it's like, "Yes, continue." Then basically, you'll be able to use GPT-3. In terms of pricing, there are different models associated with GPT-3. Their most powerful one is probably the one called Davinci, and it's not cheap. Let's just put it that way. It's not expensive in the sense that paying Animalz or consulting with them is not cheap, but it's a lot cheaper than that. But actually, to create a nice piece of content, it could run you $5 to $10 or something like that. All right, one last question so far from Victor. He says, "When you experimented with AI content, what was your most significant turn-off?"
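
In code terms, the "click to continue" flow Bernard is describing looks roughly like the sketch below: cap each request at a few hundred tokens, show the chunk, and only generate more when the user confirms. This is a hypothetical illustration of the pattern, not how any specific tool actually implements it.

```python
import openai  # legacy (pre-1.0) OpenAI Python client

openai.api_key = "YOUR_API_KEY"

def continue_writing(draft: str, chunk_tokens: int = 400) -> str:
    """Generate one capped chunk of text that continues the existing draft."""
    response = openai.Completion.create(
        engine="davinci",
        prompt=draft[-2000:],     # prime with the tail of the draft so far
        max_tokens=chunk_tokens,  # per-request cap on output length
        temperature=0.7,
    )
    return response.choices[0].text

draft = "GPT-3 and the future of content marketing\n\n"
while True:
    chunk = continue_writing(draft)
    print(chunk)
    draft += chunk
    if input("Generate another chunk? (y/n) ").lower() != "y":  # the "yes, continue" button
        break
```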

Ryan:

Probably the thing I mentioned actually about it creating quotes from people that have never been made before. I think because at Animalz, obviously we spend a lot of time validating everything we write, ensuring it's accurate, ensuring it's cited properly. And there are few things more abhorrent to us as a team of writers and editors than something which is wrong or is a lie, or actually puts words in other people's mouths. And that is actually potentially a very, very damaging thing. I think Bernard was talking earlier about how we're entering a world where it's very hard to trust the legitimacy of media because of deep fakes in almost any media discipline, like video or speech or text as well.

Ryan:

When you can't trust the provenance of a piece of writing, I think that dramatically undermines the trust of any brand that publishes it. So yeah, the ability to create quotes and attribute it to people that have never said that was a huge, huge turnoff. And just the other thing is narrative structure matters in content. You quite often see search articles which are mini snippets of information that match different intents. You can jump between them, but as soon as you try and read the whole thing together, it's not interesting. It's not cohesive. As somebody that spent my whole life writing and trying to improve written content, that made me very sad.

Bernard:

Yeah. The citation or just making stuff up is problematic for sure. I would be very turned off by that. But I think in terms of having it spark ideas and to help with brainstorming, whether that's headers, outlines, title tags, I think that makes a lot of ... or concluding sentences or paragraphs, it's just like, "Okay I'm stuck. I'm hitting some writer's block. Okay, well, give me some ideas." That makes a ton of sense I think for the current state of GPT-3. I think maybe GPT-4 just changes the ball game completely, or I can imagine there's probably other academics or commercial people out there working on some kind of GPT-3 competitor that uses probably a different model or something like that. All right, maybe last question, depending on if y'all have anything else. "How do you guys envision the future, five to 10 years, of AI-assisted or generated writing, and how would it impact content professionals in general?"

Ryan:

Well, maybe thinking on a slightly longer time horizon, like what you were just talking about there, Bernard, I actually can see a world where AI does a very good job with all forms of written content. I think the level of generation we've seen with GPT-3 was just mind blowing to me when I came across it. I was writing a novel at the time and I fed it some lines from that to see what it spat back, and it was really good. The idea that AI could do that in a fraction of a second blew my mind. I can actually see the development from that. I see no reason why it couldn't do as good a job as lots of human writers with a bigger dataset, more processing power behind it.

Ryan:

The thing is a lot of people are scared of that. And I totally understand that. This is the thing we know best. This is our livelihood. But for one, these changes always happen gradually, and human beings, we're adaptive. We find ways to fit around these new technologies to make them work for us. And secondly, I think the reason we spent so much time learning about this is because I think it's good to run at the things that scare you sometimes. If this could work for you, if you can learn a new skill or develop a new way of operating that other people aren't doing, that is a competitive advantage for you in whatever AI augmented future exists. So the more scared you are of it, maybe the stronger you should consider actually getting stuck in and having a play with it and thinking about how could I use this in my life somehow?

Bernard:

Yeah, yeah. I would totally agree with Ryan here. I think we're seeing the mass adoption of a lot of natural language generation tools these days, whether it be Copy.ai or Jarvis, because they're doing a really good job in certain use cases of creating copy. I do feel though that there is this societal shift, we'll say, that I think has been happening, which is that people are becoming a lot less trusting of content in general. And as a way to continue to get content that they deem to be high signal or to be trustworthy, what they're instead doing is they're turning towards communities and influencers and saying, "Look, Tim Ferriss, or Gary Vaynerchuk, or Joe Rogan, what you are saying, I simply white-list."

Bernard:

And I think maybe a way of thinking about this paradigm is: when, in the early days, it was just the printing press or traditional media, before the internet, everything was default white-list, and then you would maybe black-list the boy who cried wolf or certain outlets that you just said, "I don't trust them." I think with the era of the internet and now infinite content, we're moving towards a society where a lot of people default to black-listing everything, and I will choose to white-list the Superpath community or Ryan Law on Twitter or these kinds of things.

Bernard:

This then, in my opinion, is actually a great nod towards, again, this idea of just general thought leadership. If you're creating unique, interesting content with no specific aim of ranking in search engines, I feel like you are actually breaking through the noise, in the sense that people will resonate with the authenticity, perhaps share it with their audience or within the communities that a lot of people are opting into. And as a result, I also see search evolving towards that. It's still extraordinarily difficult for AI to form an opinion, to make a compelling narrative or case by pointing out evidence that is fresh and unique.

Bernard:

So I feel like people are gravitating towards that, and the user engagement signal, ultimately, from Google itself is very difficult to game. You can spin a bunch of content, but at the end of the day, if a user lands on your page, looks at it, says, "Okay, this doesn't make any sense," and leaves after 20 to 30 seconds and clicks on another lower-ranking result that seems a lot more thought out and actually is, you can expect that that content will basically come to light and do a lot better from a search perspective, and also just from an audience and community engagement perspective.

Bernard:

So I guess all of that's to say: because content is becoming a commodity, it's then very polarizing in terms of what the outcomes are going to be. And the companies that basically take a stance to say, "Look, I'm going to invest in quality content," at the end of the day, I think will still stand out. Whereas, yes, a lot of people will compete for the bottom end of the content market, but it's quality over quantity I think.

Ryan:

I desperately want to write a blog post called The Default Black-list now. That's a great hook right there. I love it.

Bernard:

There's so many things that you were saying in the presentation. You have such good metaphors and analogies. I'm just like, "Ugh, I want to sound like Ryan." Well, we are at time. Again, Ryan, that was truly a well thought through presentation from just a really high level, and count me a big fan. I'm going to link everybody who talks about GPT-3 to this recording and the slides.

Ryan:

Oh, thank you so much for having me and kudos to Andrew and the whole Animalz team for making me smart enough to actually string some thoughts together on this topic.

Bernard:

Totally. Totally. All right, well, take care and I'll follow up with you shortly.

Ryan:

Awesome. Thanks for that. Thanks, everyone.


Written by
Bernard Huang
Co-founder of Clearscope
