
8 Signs An Article Has Been Written Using AI

Look, for anyone who does want to use AI to write complete articles, that’s your prerogative.

There’s also no harm in using AI to help structure your article and perform some background research. 

However, some people want to avoid AI-written copy completely. And if you’re one of them, this article has your back.

How to tell if an article is AI-generated

If you regularly have blogs, landing pages, emails or any other type of online content sent to you, it can be hard to tell if someone has sneakily used AI. It’s getting that good.

Want to sniff out AI content like a hound picking up on a dodgy scent? Yeah, us too.

So, to help you lovely, human-copy-loving folk, we’ve compiled a list of red flags.

Here’s how to determine whether content has been written by AI.

1. AI detection tools

There are a ton of AI detection tools that can help you work out whether an article has been written by AI. However, I’ve used a few that I know for a fact have got it wrong. For example, I’ve had a writer produce their own copy, put it into a tool, and have it flagged as AI when it wasn’t.

So, a word of warning: if you’re going to use these tools to make accusations, you need to make sure you’re using a tool you can trust.

Also, this AI detection tool article outlines a really interesting study. They ran some tests, and some AI writing tools, like Jasper and Frase, produced copy that the detectors classed as human.

HOWEVER, none of the AI content passed the plagiarism test. Why? Because it’s all scraped from across the internet, ergo, nothing is original. If this doesn’t put you off using pure AI content, I don’t know what will, because Google has always taken duplicate content very seriously and penalised those who take copy from other sources.

So, yes, you can use the tools to pick up on AI content. But some of them can’t even detect copy produced by the smarter large language models. You can run your copy through them all you want, but you may not get a definitive answer. Our advice? Learn to spot the red flags yourself. Which are…

2. Odd phrasing

‘The cow jumped over the moon and the cucumber ran away with the spoon.’

Nah, that doesn’t sound right, does it?

Look, if you’ve read a lot of AI content, you’ll start to notice some very strange phrases that just get thrown in. Phrases where you think, hmmmm, there’s something off with that. And if you get that feeling, it’s definitely a red flag. AI writing often feels oddly random.

3. Weird-sounding grammar 

We’ve used plenty of AI-powered spelling and grammar tools over the years. And one MAJOR thing we’ve learned is to take them with a pinch of salt. Don’t let them change everything without considering whether the changes actually make sense.

Sometimes, you need to ignore the odd grammar rule to give your writing some pizzazz. AI copy follows the rules so closely that it ends up sounding robotic. It may be correct by the dictionary, but the sentences just don’t flow or sound good to the reader.

4. No mentions of personal experience 

The one thing we have over AI (at the moment, anyway) is that it can’t pluck personal experiences out of thin air. That beach you visited in Cornwall with the pretty pebbles and soothing seas, well, AI didn’t go there, so it doesn’t know what being there FEELS like. 

When you read AI-generated content, you’ll notice it doesn’t include any personal anecdotes. It may borrow some from things it has found on the web, but they rarely fit the copy all that well, and they aren’t real.

5. Lacking in emotion

Just like personal experiences, AI struggles to nail emotion. It may describe different emotions accurately, but it reads as though it’s been lifted from a dictionary definition. When we experience emotions as humans, we feel something deep within our soul. And a good writer will be able to convey that beautifully in their prose.

6. Just reads a bit flat

If there’s something you can’t quite put your finger on when you’re reading an article, it could be AI-generated. Often, the copy just reads a bit flat. A bit lifeless. You might finish it and think, “meh, that was rather bland.”

7. Factual errors

Many AI writing tools are trained on data that’s a few years old, so they won’t be up to date with current events. Also, from the testing we’ve done, we’ve realised AI tools like ChatGPT will actually MAKE STUFF UP.

If it doesn’t have the correct answer, it will still try to answer your query by filling the gaps with whatever it can generate. This results in incorrect or even entirely made-up information. It’s wrong on so many levels, and if you use AI copy without checking it properly, you could end up spreading false information. These tools will also often cite completely made-up sources when they can’t find anything that fits. Not cool, AI, not cool.

8. Predictable format/layout

Large language models pull info from sources all over the web, and there are standard layouts they tend to fall back on. They’ll take popular article structures and churn them out for you. If you read a few articles by a writer and they seem a bit rigid or unimaginative with their layout, this could be a red flag.

We hope you’ve found this article helpful. We know a thing or two about creating and editing brilliant copy, and we scan tons of articles every week, so we’ve got pretty good at spotting the telltale signs. Need content edited and polished? Get in touch with The Editing Wolves now.