SEO Predictions: Will Google Penalize AI Content Soon?
Generative AI content, produced with large language models, is a very contentious subject right now. On the one hand, you have marketers, business executives, MBAs, growth hackers, scammers, fraudsters, and many more who are excited to use generative AI for everything they can. On the other hand, you have people who recognize that it doesn't deliver anything original, people whose work was stolen to train the AIs, and people whose jobs have disappeared because of the AIs: translators, low-level content writers, data analysts, and more.
There are a lot of different fights going on in the AI space right now, from copyright disputes and legal challenges to the impact these systems have on the climate. It's a difficult space that is also evolving quickly.
One of the biggest open questions right now is what Google thinks of all of it. I'm not an insider, and I don't have any special connections to give you a glimpse behind the curtain, but I've been in the content marketing space for a while now, and I have some guesses about what the future holds. How they shake out remains to be seen, but if you want to dig this post up in a year and see how I did, feel free.
30-Second Summary
You have two important choices when using AI for content: you can use it for grunt work or for creative work. If you use AI to write your main content, Google might penalize you because AI text tends to be low quality and unoriginal. You'll get better results if you use AI for basic tasks like writing meta descriptions or product details while keeping the creative writing human. You need content that gives real value to readers and comes from trustworthy sources, especially for topics about health, money, or safety. AI can help with outlines and ideas, but you should always add human insight and fact-checking.
Google's Stance on AI in General
First, let's talk about Google and its position on AI.
It's safe to say that Google has thrown their support behind AI as a concept, though perhaps not as deeply as Microsoft has. Google's integration of AI summaries in their search results – to often disastrous effect – shows that they certainly support AI in some capacity.
Gemini's integration into their productivity suite is another example.
At the same time, you can bet that their webspam team probably has their faces permanently bonded to their palms every time their AI teams open their mouths.
There's a point I like to make about Google that I've heard from several different insiders over the years, which is that Google, such as it is, isn't really a monolithic company. The various teams involved are often at odds with one another, potentially working on competing projects or even complementary projects with no communication.
It's why Google Drive and Google Photos barely talk to each other despite both managing image content in a Google account, and why there were several different Google gaming initiatives that mostly just stepped on each other's toes. I get the impression that the heavily AI-focused teams and the teams that have to deal with their fallout are not on speaking terms.
To set aside the obvious fact that Google supports AI enough to build their own, I'm focusing pretty much entirely on the webspam, search, and associated teams. These are the people who have to make the search index, search results, and all the rest of it useful to the humans who drive Google's services. Yes, I know, they haven't been doing a great job these last few years.
If you look specifically at Google's spam policies, they don't say you can't use AI. They say they penalize content that uses AI, or any other kind of automation, when the primary purpose is to manipulate search rankings.
There's a fine line between trying to grow a business through search marketing and trying to manipulate search results to grow a business. Often, it's as much a matter of perspective as it is anything tangible. As it stands, it means Google has their work cut out for them.
Will Google Penalize AI Content in the Future?
I don't think it's controversial to say the answer is yes.
Here's why: they already do. Just not in the way you might think.
Google doesn't actually have any problems with using generative AI to create content for the internet. And, despite how down I am on AI in general, I can also recognize that it has some uses. If you have a spreadsheet of product details and you need product descriptions for 5,000 items written up, having an AI do it isn't a bad idea. It's drudgery that needs to be done, but no one could call it high art.
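As a sketch of what that kind of grunt work might look like in practice – assuming the official OpenAI Python client and a hypothetical products.csv with name and details columns (the file, columns, and prompt are my own illustration, not a recommended pipeline) – something like this would churn through the list and save drafts for a human to review:

```python
import csv

from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical input: product details exported from a spreadsheet as CSV.
with open("products.csv", newline="") as f:
    products = list(csv.DictReader(f))

for product in products:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Write a two-sentence product description for "
                f"{product['name']}. Details: {product['details']}"
            ),
        }],
    )
    # Store the result as a draft - don't publish it without a human pass.
    product["description_draft"] = response.choices[0].message.content

with open("descriptions_draft.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(products[0].keys()))
    writer.writeheader()
    writer.writerows(products)
```

That's 5,000 descriptions of drudgery handled; the human's job shifts to spot-checking the drafts.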
What Google penalizes is content that is:
- Poor quality.
- Derivative.
- Spammy.
- Copied from another source.
- Used for exploitative purposes, like ad-saturated spam sites.
It doesn't really matter if that content is created by people or by machines. Generative AI just made it infinitely faster and easier to spin up a private blog network (PBN) or a spam site than it used to be, and made those sites marginally harder to detect.
I'm not just making all of this up, either. Case studies over the last year or so have been tracking and bearing out these conjectures. Nathan Gotch posted one back in March with some compelling evidence that Google penalized sites built entirely on AI content, and that when the AI content was replaced with human-written content, they recovered.
One of the things a lot of people don't realize is that LLMs are not capable of creating anything unique, almost by definition. They remix words according to mathematical formulas and instructions given to them on the back end. Even the newest ChatGPT model claims to have "reasoning," but that reasoning is just a series of instructions that are easily reverse-engineered. It's functionally no different from the guided workflows used in something like Jasper.
You can tell that OpenAI doesn't want anyone to know this if they can help it, too: they've been sending threatening emails to anyone who tries to probe that reasoning, or who even uses phrases adjacent to trying, even when those phrases are explicitly used to avoid it.
All they're doing is putting a layer of abstraction between prompt and output to make it look more like it's thinking, to excite the gullible people who think LLMs have literally anything in common with general intelligence. Since some of those people have a lot of money, it's a good business move to trick them; that's all.
Incidentally, this is another strong reminder that nothing you put into a ChatGPT prompt is confidential. I've seen some pretty distressing indications of businesses just plugging in confidential business data, including data with actual information controls on it, and assuming that it's secure.
It really, really isn't.
E-E-A-T, YMYL, and AI
Another important element of this situation is the push Google has been making for the last half-decade or so to enhance trust in search results, especially for more important topics.
If you search for sports team jerseys and unknowingly buy one from a knock-off vendor, it's annoying. If you get a sub-par jersey with a misspelled team name, or even if you're outright scammed, you've lost nothing but a bit of cash, some time, and a little of whatever remaining trust you had in humanity.
If you search for medical symptoms and get advice from some random site telling you to take a Tylenol and sleep off that chest pain, you could die.
Google recognizes that not all topics and not all queries are equal. So, they've put certain categories of topics under the general YMYL heading. That stands for Your Money or Your Life, the old-timey bandit demand. Anything that affects your money (bank information, social security, insurance, legal matters, investments, etc.) and anything that affects your life (medical information, foreign travel advisories, food recalls, nutrition, etc.) fall under this heading.
The more damage that can be done to your life, directly or through your finances, the higher the standard is as far as Google is concerned. That's why a random marketing blog, a magnet fishing review blog, an informational blog about birds, and so on can just kind of be whatever, but a blog about medications or one on financial investing needs to have fact-checking and an authoritative name attached to really succeed.
E-E-A-T is related to this. It's their framework for evaluating the quality of a source of information via four factors: the experience, expertise, authoritativeness, and trustworthiness of a site, an author, or a source.
Who would you rather take medical advice from: a blog that cites the NCBI or a blog that cites Reddit?
What does all of this have to do with AI?
Generative AI has no concept of fact. There is no real mechanism within the LLMs to tag "facts" as true or false, at least not consistently. That's why Google's AI search results can tell you to put glue on pizza or eat rocks because they contain minerals, and it's why you can get ChatGPT to tell you the opposite of what it just told you by asking it to roleplay as a cat. Generative AI puts words in an order that, mathematically, resembles a significant portion of the other content it has ingested and trained on. It looks real because it's made to look real; it's not factual because it wasn't made to be factual.
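To make that concrete, here's a deliberately toy sketch in Python: a made-up bigram table standing in for a real model, which would work over tokens with billions of learned parameters instead. The mechanism to notice is that nothing in the loop checks whether the output is true, only whether it's statistically likely:

```python
import random

# Toy "language model": given the last word, what's statistically likely
# to come next? These probabilities are invented purely for illustration.
next_word_probs = {
    "cheese": [("slides", 0.5), ("melts", 0.3), ("contains", 0.2)],
    "slides": [("off", 0.9), ("down", 0.1)],
    "off": [("the", 0.6), ("easily", 0.4)],
    "the": [("pizza", 1.0)],
}

def generate(word: str, max_words: int = 5) -> str:
    output = [word]
    for _ in range(max_words):
        choices = next_word_probs.get(output[-1])
        if not choices:
            break
        words, weights = zip(*choices)
        # Sample the next word in proportion to its probability - the same
        # basic move, scaled up enormously, that an LLM makes per token.
        output.append(random.choices(words, weights=weights)[0])
    return " ".join(output)

print(generate("cheese"))  # e.g. "cheese slides off the pizza"
```

Plausibility drives every step; accuracy never enters into it.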
This essentially means that for non-YMYL content, AI-generated content isn't as big a deal. It still falls short on originality and other metrics, but at least it's less obvious about it than plain spun content. For high-E-E-A-T topics, on the other hand, AI is a disaster. It's not that the AI is wrong; if it were wrong all the time, you could write it off entirely, and it wouldn't be a problem. The problem is that it can't be trusted. Without independent verification and validation of the things it says, there's no way to know whether any given statement is factually accurate or a hallucination.
Even when AI cites its sources, like Gemini or Bing's AI, you still have to go through and evaluate whether those sources are legitimate, parodies, flippant remarks, or simply lies. It's not like the internet isn't full of misinformation or anything, after all.
Once you're at the point where you need to fact-check every statement the AI makes, it doesn't even really make sense to use the AI. Why not just have a human do the thinking, the writing, and the source evaluation all at once?
How to Use Generative AI and Not Get Penalized
If Google doesn't penalize AI usage out of hand, how can you use it in a way that won't trigger the penalties they do issue?
Well, it all comes back to the same things it always has. You need content that is original, useful, valuable, actionable, trustworthy, and all those other adjectives. You need content that understands who your users are and what their intent is with their searches. You need content that provides real, actionable information based on what they want to find. You need fact-checking and validation. You need a trustworthy name attached to it, someone whose brand is built around accuracy and whose reputation is at stake if it goes to pot.
Using generative AI is most useful, I find, for things like creating outlines, developing basic ideas, and brainstorming. It can help you think about a topic in different ways – even if those ways are nonsense – and can give you ideas you might forget or overlook. But it's up to you to actually write them with a human touch, human logical flow, and a consistent voice and tone.
Google penalizes AI content because the people who use AI to generate content usually do so to pump out hundreds of "blog posts" per day, scale to the size of a mega-site in a week, and attempt to dominate the search results for an industry in a month.
Fortunately for all of us, that's very easy to notice and very easy to penalize. Unfortunately, as long as it keeps being marginally profitable for those sites to exist for a month or two before they're slapped back down, it's going to keep happening.
My honest recommendation is largely just to avoid posting anything on your domain that is created by AI and not filtered by humans. AI in the background for mechanical tasks, things like filling out image alt text, generating metadata summaries, creating product descriptions, and the like? Those are fine. Those free up your time for the more detailed, human thinking and creativity. That's the freedom AI should be bringing to the table.
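For instance, here's a minimal sketch of one of those background tasks – drafting meta descriptions – again assuming the official OpenAI Python client. The 155-character budget is a common SEO rule of thumb rather than a Google mandate, and the output is explicitly a draft for a human to approve:

```python
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MAX_LENGTH = 155  # rule-of-thumb budget before search results truncate

def draft_meta_description(title: str, summary: str) -> str:
    """Draft a meta description for one page - a draft, not final copy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Write a meta description under {MAX_LENGTH} characters "
                f"for a page titled '{title}'. Page summary: {summary}"
            ),
        }],
    )
    draft = response.choices[0].message.content.strip()
    # Enforce the length budget mechanically; the model may ignore it.
    if len(draft) > MAX_LENGTH:
        draft = draft[: MAX_LENGTH - 1].rsplit(" ", 1)[0] + "…"
    return draft  # queue for human review before it goes live
```

The model does the typing; a person still signs off on what ships.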
Unfortunately, it seems like the goal of current generative AI is an attempt to free humans from creative and fulfilling tasks so they can spend more time doing drudge work to earn money for shareholders.
Hopefully, one day, that will change.