As AI-generated content gets ‘better’, in the sense of feeling and looking more realistic, does that mean we will more readily accept it? Or will we more readily dismiss it, along with all content that might appear to be generated by AI?

What happens when we start to suspect that everything is AI generated?
And are we already at that point?
I’m increasingly sensing that even quality YouTube commentaries (usually TV or movie criticism) sound, to my ear, at least partially written by AI.
I’m a journalist, not a linguistician or whatever they’re called. But I can recognise a sentence-structure pattern when I see one, and I think I can increasingly see such patterns in AI-generated writing. Take the emphatic contrast, or contrastive emphasis, structures, for example:
“It’s not about X, it’s about Y”
“Not only X but also Y”
“Instead of [x], [y]”
“This wasn’t [x]; it was [y]”
“This wasn’t just [x], it was [y]”
On the surface this looks and sounds good. We’re clarifying what it isn’t (or isn’t merely) and asserting clearly what it is. But when you realise it crops up in a lot of stuff you know is AI-generated, you start seeing it everywhere. And by then you can’t unhear or unread it, nor get away from the sneaking suspicion you’re having AI smoke blown up your hind quarters.
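These frames are formulaic enough, in fact, that you could flag them mechanically. Here’s a toy sketch in Python (the patterns are my own rough guesses, illustrative only, and emphatically not a reliable AI detector) that marks sentences built on the contrastive structures above:

```python
import re

# Rough regular expressions for the contrastive frames listed above.
# These are my own guesses, illustrative only; straight apostrophes assumed.
CONTRAST_PATTERNS = [
    r"\bit'?s not about\b.{1,60}?\bit'?s about\b",   # "It's not about X, it's about Y"
    r"\bnot only\b.{1,60}?\bbut also\b",             # "Not only X but also Y"
    r"\bthis (?:wasn'?t|isn'?t)(?: just)?\b.{1,60}?\b(?:it (?:was|is)|;)",  # "This wasn't (just) X; it was Y"
]

def flag_contrastive_sentences(text: str) -> list[str]:
    """Return the sentences in `text` that match any contrastive frame."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [
        s for s in sentences
        if any(re.search(p, s, re.IGNORECASE) for p in CONTRAST_PATTERNS)
    ]

sample = (
    "It's not about the plot, it's about the atmosphere. "
    "The lighting in episode three was gorgeous. "
    "This wasn't just a finale; it was a thesis statement."
)
for hit in flag_contrastive_sentences(sample):
    print(hit)
```

Run it and it flags the first and third sentences. A human writer trips these patterns too, of course, which is rather the point: they’re a tell, not proof.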
So what’s wrong with this? Surely if an AI is that good, it’s probably helpful content, right? Worth taking seriously?
Well, no and no. And while I’m not sure it’s an exact match, I would argue that this takes us into the world of source credibility bias, where we tend to judge the quality of information based on who the source is, rather than on the information itself. This is usually perceived negatively since it overlaps with better-known biases: confirmation bias, where we give more weight to sources that match our own beliefs; affinity bias, where we prefer information from sources perceived as part of our own tribe; and the halo effect, where we take an exaggeratedly positive view of a source (or, in its mirror image the horn effect, an exaggeratedly negative one), making them more (or less) credible than they deserve.
But here we’re talking about something slightly different: we are judging the information based on our estimation of whether the source is wholly or partly AI-generated. And what is most disconcerting is that this phenomenon has already become a central feature of our lives.
Bias Bias (it’s a thing)
Let’s walk through this.
We all have a mix of the above biases, but that mix changes as we change. As an aspiring (and failed) academic, I once tended to look down on journalists, thinking they only reported the first draft of history, seeing only a part of the elephant, and that it was up to historians like me to put it all in context and give it meaning that would endure. An affinity bias, I suppose.
When I became a journalist, I saw it the other way around. When I interviewed academics, I found much of their analysis wanting, lacking the touch that could only come from actually being there, witnessing change. I felt they either underestimated change because they didn’t see that things could change in an instant (the old Hemingway line about how bankruptcy happens, which applies equally to changes in power: gradually, then suddenly) or they exaggerated change because they saw something that wasn’t really there.
Now that I’m a bit of both (wannabe historian and semi-retired journalist), I can see that expertise comes in lots of different flavours, and that judging an expert by their age, looks, qualifications and the number of followers they have is not the way to go.
As a Spurs fan, for example, I rely on a broad swathe of people to tell me what’s really going on, and as far as I can work out none of them has a real job and none has been a footballer beyond the Sunday kickabout, and they all seem to work out of their parents’ spare rooms. I don’t care. They explain things well and they make sense — often more than the high-paid pundits on TV.
Sausages and dodgy merch
Oddly, being able to see them in their bedrooms, with dodgy lighting and dodgy merch in the background, makes them more credible to me than the suits in TV studios. I would argue this is a sort of transparency bias: I tend to believe them more because I can see how the sausage is made.
The same goes for politics, for movies and TV, for quirky subjects journos and academics wouldn’t touch with bargepoles. I choose these people because over time they prove that they think deeply, work within their experience and knowledge, and explain themselves well. In shorthand, I find them credible because they’re lived-in, human.
And that last bit is the problem.
If I get the faintest whiff that what they’re talking about has been generated by AI, I’m outta there. To me any AI involvement in the thought-to-content process taints the result. I don’t mind a bit of AI research, as long as it’s been checked. What I do mind is something that might have been constructed by AI, or partly by AI. To me that is unacceptable.
Why?
I’m not sure. I’ve been trying to figure that out. I think it has something to do with the Weltanschauung — world view — of the creator. I need to know that the ideas I’m hearing are coming from something that is not synthetic. Sure, we can run ideas past AI, I suppose — I got help from it up there because I couldn’t recall the origin of the bankruptcy quote (I’m embarrassed to say I had no idea it was from Ernest Hemingway). To me that’s more or less OK, because I checked it elsewhere, though it’s still a mark against me because I’m trying to show I’m better-read than I really am. You would be right in thinking less of me because I didn’t have the reference in my head, and was surprised it was Hemingway’s.
What I can’t accept is that an analysis, say, of Three Days of the Condor, or a Pluribus episode, is partially or wholly composed by AI. To me that’s like saying: this commentary is derived at least in part from previous content by a machine, excluding any personality, any conscious or unconscious reflection upon the past by the author, on their experience, on what they might have dreamed last night, on the state of their heart, mind, stomach.
It’s not who we are, it’s how we got here
This is all synthetic, in short. And we humans are not synthetic. We’re a bubbling pool of neurons and soul, heart and head, scars and serenity, our every thought and feeling rooted in our experience, whether it’s through books, TV, love affairs or trauma. We’re all a big mess inside, and that’s what makes every sentence we write so interesting. And so unique. Whoever we are.
(The Psychology of Robots and Artificial Intelligence, a 2025 paper, pointed to research arguing that people generally perceive AI-generated artwork to be of lower quality because it lacks the emotional expression and uniqueness that connect them to the artist’s mind, rendering the art inauthentic and incapable of reflecting true experience. I’d argue that’s exactly as true of any AI-generated content that is not already formulaic; the stock market report, the football scores and the weather, by contrast, have largely been automated for at least the past 20 years already.)
And so yes, even if a sentence, a phrase, a single insight has been generated by AI, I would hit the purist button and say: if any part of this is synthetic, then the whole is violated and invalid. Because we now cannot tell what is real and what isn’t. We can’t tell whether the thought process that went into the piece took a synthetic turn somewhere, and so we have to discard it all.
That might sound a bit extreme. And I’m probably being a hypocrite here. Perhaps we should be laying down some rules: it’s OK to check your ideas with the AI; it’s OK to start with an AI as long as you pick up early enough and take over; it’s OK to have AI correct and improve your grammar, because we do that all the time with Clippy and co, right?
No, I don’t think it is. I hate formulaic sentences, and I really hate it when I feel an email or message sent to me is pro-forma. Even more, bizarrely, if it’s written in a faux-friendly style. I’d reach for my gun, if I had one.
Coke: the Real Thing, except the ads
And no, it’s not simply a question of adding a little “AI helped in this” note. All that tells the audience is that they have reason to suspect the whole thing, unless indicated otherwise. If the content is not authentically you, then why am I bothering to invest time in watching, reading or listening to it? Are these your ideas, or the AI’s? Where did this idea start, and who constructed the argument? We take a dim view of such admissions because we commit our time and attention not just to the content but to the person or people behind it. If some of this content is actually artificial, it undermines that implicit contract. Why should we invest in someone whose worldview is at least in part derived from a machine?
As larger organisations seek to cut costs, they’re inevitably going to turn to AI. Look at Coca-Cola’s trainwreck of a Christmas ad: Coca-Cola AI Holiday Ad Glitches Highlight Generative AI Shortcomings.
Already there is a cottage industry of AI debunkers, people who study the content closely and can highlight where the AI glitches are. (This one, by Dino Burbidge, is excellent: The truck is different in every shot of Coca-Cola’s AI Christmas ad. Surely that matters? | The Drum) In this case, Coca-Cola thought they’d get in front of it by saying the ad was generated by AI. But they still got a hiding, and so they should. (They might have read the academic paper The transparency dilemma: How AI disclosure erodes trust before they embarked on their quest.)
Those who might argue that most video is already CGI have a point, but it misses the mark. CGI is the backdrop, the framing, but the actor, the human, is real. (I’m not talking about cartoons and stuff here, of course.) We expect the actor to act, to bring their best to the scene, even if all they see is green screen and someone in a green suit suspending them as if in flight. We want Robert Downey Jr to put in a performance, and we don’t expect his face to be digital even if the rest of him is. But when Coca-Cola populates its AI ad with fake people and fake expressions, where everything you see is fake but trying to appear real, then the spell, the suspension of disbelief, is broken, because, simply, we know.
Technically speaking we’re now in the AI equivalent of robots’ uncanny valley, where, as robots become more human-like, our affinity abruptly collapses into revulsion. Research suggests that something similar happens with AI-generated text. AI content might not trigger the gag reflex as much as robots do, but we don’t like being fooled, and it’s this, I think, that will be the source of the largest pushback against the use of AI in anything remotely creative: not purely in artistic terms but in any kind of content that draws on, or pretends to draw on, the creator’s experience, knowledge, qualifications, training, personality and background.
I’m not saying AI is a bubble, but I’d be willing to put money on labels like “100% Human Generated, No AI” appearing everywhere on content, products and services, and on companies that seem to rely on AI for external-facing material being punished quite severely in the marketplace.
If I don’t post before then, have an AI-free Christmas and a slop-less New Year.
(No sentences or ideas in this piece were generated by AI. AI was used in searching for some references and sources.)