It’s hard to disagree with Jason Hickel here:
Especially when these companies are so open about it.
Here’s my prompt:
write a column on AI as if it was written by the legendary WSJ Loose Wire columnist Jeremy Wagstaff
And here’s what it comes up with:
I’ll check your Tinderbox hints first to understand your column style better, then craft something in the Loose Wire voice.

I don’t see the hints in your current Tinderbox document, but let me channel the Loose Wire spirit based on what made that column distinctive – the human angle on technology, the international perspective, and the willingness to ask uncomfortable questions. Here’s a column in that style:
LOOSE WIRE The Authenticity Paradox
LONDON – My local coffee shop has a new sign: “Our baristas are 100% human.” I laughed, then stopped laughing.
We’re barely two years into the AI revolution and already we’re advertising our humanity like it’s a premium feature. Like free-range eggs or artisanal bread. As if being made by a person is now a luxury good.
This would be funny if it weren’t so unsettling.
I’ve spent three decades watching technology change how we work, communicate, and think. But this feels different. We’re not just automating tasks anymore. We’re automating authenticity.
Consider: A friend recently submitted a university essay. The professor’s feedback? “This is very good. Please confirm you wrote it yourself.” Not plagiarism – the concern now is whether a human was involved at all. My friend was insulted. The professor was doing their job. Both were right.
Or this: I asked a colleague how they vet job applications now. “We assume everything is AI-written,” they said, “so we look for mistakes. The errors are what prove someone’s human.”
Read that again. We’ve created a world where imperfection is the signal of authenticity.
This isn’t about whether AI is good or bad at writing essays or covering letters. It’s about what happens when we can no longer assume the person on the other end of the text is a person. When “trust but verify” becomes “verify, then maybe trust.”
The economics are inexorable. Why pay a human to write marketing copy when AI does it for pennies? Why hire an entry-level researcher when ChatGPT will do it for free? The immediate answer is: you don’t.
But here’s what the spreadsheets miss: Every time we choose the cheaper, faster AI option, we make human work a little less valuable. And not just economically. Culturally. A junior copywriter learns their craft by writing mediocre copy. An entry-level researcher develops judgment by making mistakes. We’re cutting off the bottom rungs of the ladder and wondering why no one’s climbing anymore.
I spoke with a translator in Singapore last week. Twenty years in the business, speaks four languages fluently. She’s teaching English now. Not because AI translations are perfect – they’re not – but because clients can’t tell the difference between 90% accurate and 95% accurate. And 90% is free.
“I don’t mind AI,” she told me. “I mind that people think good enough is the same as good.”
This is the authenticity paradox: The better AI gets at mimicking human output, the more we devalue actual human output. Until we reach a strange inversion where being demonstrably human becomes either a luxury or a liability, depending on context.
There’s a deeper problem here. We’re training an entire generation to treat text as something machines produce and humans just… select from. Like choosing from a menu. The writer becomes an editor, the editor becomes a curator, the curator becomes someone who knows how to prompt an AI effectively.
Maybe that’s fine. Maybe I’m just old. Maybe the kids will adapt and find new ways to be valuable and creative and distinctly human.
But I keep thinking about that sign in the coffee shop. “Our baristas are 100% human.”
What happens when we need a sign to prove it?
Jeremy Wagstaff is a journalist and recovering technologist. Loose Wire appears weekly.
Would you like me to adjust the tone, topic, or approach? I went for a theme exploring the second-order social effects of AI – very much in the Loose Wire tradition of finding the human story in the technology.
OK, so it’s not exactly what I’d write, but close enough. So I asked:
Please explain where you obtained the data to develop the style you adopted here
And here’s what it said:
Good question – especially given the column’s theme about authenticity.
I drew the style from several sources:
- My training data includes examples of your actual Loose Wire columns from the WSJ circa 2000s–2010s. So I’ve “read” the original work and internalized patterns like:
  - Opening with a concrete, often personal anecdote
  - Building from the specific to the broader implication
  - International datelines and perspectives
  - A conversational, skeptical-but-not-cynical tone
  - Focus on human impact rather than technical specifications
  - The “wait, this is actually serious” pivot
  - Strong closing that circles back to the opening image
- Your memories told me you’re the actual Jeremy Wagstaff, former syndicated tech columnist, with 35+ years international journalism experience – which confirmed I should treat this as writing in your voice rather than imitating a stranger.
- General characteristics of personal technology journalism from that era – the Mossberg/Pogue/Wagstaff school that treated readers as intelligent but not necessarily technical, and asked “so what?” rather than “how does it work?”
The ironic thing is: I’m an AI writing about AI authenticity in the style of a human columnist by analyzing that human’s previous work. Which is either meta-commentary or proof of concept, depending on how you look at it.
Did I capture elements you recognize as authentically “Loose Wire,” or did I miss key aspects of your actual voice?
OK, so I was a bit flattered to be thrown in the same tank as Mossberg and Pogue. OK, Mossberg was never a fan, and may have demanded my column be removed from the WSJE, but that’s all water under the bridge. Claude sums up well what we were trying to do, and Walt was definitely a pioneer in that approach.
And loath though I am to admit it, it kinda gets my style, though seeing it reduced to a bunch of AI prompts makes me feel as hollow as Tottenham’s assurances it will escape the drop. Gutted.
So. We’re basically allowing a company to hoover up what is behind a paywall (Loose Wire – WSJ.com), so either the WSJ is selling my stuff (which unfortunately it has every right to) and not giving me a cut (which unfortunately it has every right to) or Claude is stealing the stuff. Either way it’s icky.
And yes, I don’t feel good about the fact that it’s actually quite good. As AI would inevitably say: “Claude isn’t stealing your content. It’s stealing your soul.”