This is the first of a series of pieces based on Mary Meeker’s recent deck about AI.
It can be bewildering and discombobulating to try to absorb the rapid rise of AI, and I can understand why many of us choose to ignore it, dismiss it as horribly overhyped, or throw up our hands in despair. All of these reactions have some justification. But I think it’s worth taking a step back and trying to place what is happening in some context. Doing so might help in accommodating what is happening, even to draw some benefit and comfort from it. A 340-slide deck may not sound like a good way to do this, but I’ll try to condense it. I hope it will be worth it.
The deck is from Mary Meeker, a veteran of Silicon Valley and someone who, over the years, has got a lot of things right. She’s also at an age, not unlike myself, to have witnessed the miracle of Internet-based technologies, and so can see a bit more clearly than those who grew up with the Internet (essentially anyone younger than 45). A slide from a Mary Meeker deck, therefore, is usually worth 100 slides from most other folk, so her recent dump on AI is worth the time. (With some caveats, which I’ll leave to a later post.)
One of the peculiarities about AI is that while it threatens the livelihood of millions, it’s also one of the fastest adopted technologies in history, if not the fastest. It took 33 years for the internet to reach 90% of users outside North America; it has taken ChatGPT three. It took Netflix more than 10 years to reach 100 million users; it took ChatGPT less than three months.
But this is not where the real change will come from. Let’s face it, ChatGPT is easy to adopt because it’s quasi-human. We interact with it in the same way we communicate with our friends. We haven’t adopted generative AI as a technology so much as allowed its anthropomorphic version to insert itself into our lives. This is unsurprising: we have known since the 1970s that we humans tend to accommodate anything, live or dead, into our lives if it hits certain (but not all) anthropomorphic notes. A cute animal (behaviour, features), a favourite teddy-bear (inertia that we take to be stoicism and loyalty), Alexa (voice, responsiveness). A sign our adoption is complete is that we then yell at it when it doesn’t do what we want.
So our embrace of the technology is not really where change is coming from. The change is in how fast company CEOs are adopting AI — and want to be seen to be adopting AI. In late 2023 the proportion of S&P 500 firms mentioning AI during quarterly earnings calls was about 10%, a number that had risen slowly from zero since 2015. By 2025 that proportion had risen to 50%. (Slide 68)
And this is not just CEOs barking whatever buzzwords their media and IR teams are throwing them. In a survey of CMOs by Morgan Stanley in December 2024, two thirds said their companies were running initial tests and/or exploring the use of generative AI for marketing activities. (Slide 70)
Agent AI
So where are those tests taking them? The chatbot image of generative AI is not what is really getting CEOs excited. What is getting them excited is the next wave of AI: agents. Agents form a “new class of AI… less assistant, more service provider”, in Meeker’s words. Where chatbots operate “in a reactive, limited frame”, agents
are intelligent long-running processes that can reason, act, and complete multi-step tasks on a user’s behalf. They don’t just answer questions – they execute: booking meetings, submitting reports, logging into tools, or orchestrating workflows across platforms, often using natural language as their command layer.
Meeker compares this shift to that of the early 2000s, which
saw static websites give way to dynamic web applications – where tools like Gmail and Google Maps transformed the internet from a collection of pages into a set of utilities – AI agents are turning conversational interfaces into functional infrastructure.
The key thing to understand here is that an agent is not “responding” so much as “accomplishing”. They don’t need much guidance — indeed, they may quickly need no guidance at all, instead autonomously executing and, in Meeker’s words, reshaping “how users interact with digital systems, from customer support and onboarding to research, scheduling, and internal operations.” This is where the bulk of the enterprise appetite is going: not just experimenting but “investing in frameworks and building ecosystems around autonomous execution. What was once a messaging interface is becoming an action layer.” (All quotes are from Slide 89)
Strip away the glitter here and it’s this: an agent is essentially a human in disguise (or multiple humans). Once briefed, it works independently, executing, learning, improving and extending. And companies are investing in the infrastructure to support this autonomous activity. You don’t have to be paranoid to see how agents, barely acknowledged a year ago, are now the focus of significant investment — investment that would likely otherwise have gone to human-led processes.
Meeker cites a handful of examples: Salesforce’s Agentforce not only handles customer support but resolves cases, qualifies leads and tracks orders. Anthropic and OpenAI have agents that can control a user’s computer screen directly to handle tasks like pulling data and making online purchases. (Slide 91). Where AI was a research feature, it has since 2023 become a CapEx line item. It has become, in the words of Microsoft President Brad Smith, a “general-purpose technology” like electricity — “the next stage of industrialisation.”
Mary Meeker again:
The world’s biggest tech companies are spending tens of billions annually – not just to gather data, but to learn from it, reason with it and monetise it in real time. It’s still about data – but now, the advantage goes to those who can train on it fastest, personalise it deepest, and deploy it widest. (Slide 95)
This is where size helps. Training a model costs more than $100 million. Anthropic’s CEO has said that these costs could rise to $10 billion — per model. Inference, while falling in unit cost, will likely “represent the overwhelming majority of future AI cost,” in the words of Amazon CEO Andy Jassy, because training is a periodic cost — done from time to time per model — while inference costs will be constant, incurred on every query. In Meeker’s words:
The economics of AI are evolving quickly – but for now, they remain driven by heavy capital intensity, large-scale infrastructure, and a race to serve exponentially expanding usage.
Meeker doesn’t walk us all the way down the path, and I’ll go into more detail in the third of this series of pieces — but it’s clear that worker productivity is at the top of most corporate agendas for embracing AI. She quotes a Morgan Stanley survey from November 2024 (Slide 330) where workers are top of mind: the largest area of AI adoption was employee productivity; the second largest, savings on labour costs.
Meeker avoids reaching her own conclusions on this. Instead she gives over a whole slide (Slide 336) to NVIDIA’s Jensen Huang, who paints a picture in the rosiest of terms. Yes, he says, jobs will be lost. But only to those who don’t take the opportunity. In fact, he argues, there’s a shortage of labour and anyone who takes advantage of AI will benefit. Here are the two bookends to the slide’s overall quote which probably encapsulate his thinking best, and illustrate the fist inside the velvet glove:
It is unquestionable, you’re not going to lose a job – your job – to an AI, but you’re going to lose your job to somebody who uses AI… I would recommend 100% of everybody you know take advantage of AI and don’t be that person who ignores this technology.
Meeker offers no annotation to this slide on the subject of workers. In the next couple of pieces I’ll try my hand at it.
(No AI was used in the writing or illustrating of this post. AI was used in research but its results have been checked manually. This is another in a series of pieces exploring the frontiers between human and AI work. Here’s another.)
The Defence of Rorke’s Drift, 1879, Alphonse de Neuville
AI is here to stay, and it’s moving fast. We are a little like defenders of Khe Sanh, the Alamo, Rorke’s Drift, take your pick. The defensible perimeter is shrinking as AI turns out to be getting better and better at doing what we thought was an unassailable human skill.
I would like to try to alter that thinking, if I can, to present AI as just another kind of challenge, one we’ve seen countless times before. We need to think of AI as a sort of horizontal enabler, kicking down the walls between professions and disciplines. Historians have traditionally been sniffy about people not trained as historians (John Costello, among others, was tarred with the brush of being ‘amateur’, and not in a good way). We journalists are notorious for going from 0 to 80, from knowing nothing about a subject to parading as experts after a few hours’ research.
AI is definitely replacing creative jobs, but many of those jobs in themselves relied on an expertise that was ported in. Duolingo, for example, has apparently fired most of its creative and content contractors in favour of AI. (I agree that the product will suffer as a result, but we need to keep in mind that this job itself didn’t exist until 2021.)
We tend to assume that our skills, training, education and experience are moats, but they’re not, really. The moats are usually artificially constructed — see how long we’ve had to live with the internal contradictions of quantum orthodoxy because of academic inertia and bias. We create degrees and other bits of paper as currency that limits serious challenges to conventional wisdom to a select few, and are then surprised when the challengers come from outside.
The lesson here, I believe, is that we shouldn’t be so precious about what we do that we can’t acknowledge that machines might be able to do it as well as us. Brian Eno, inventor of ambient music and a great mind, was quite happy to create an app that generated the very genre of music he invented, and still produces. He recognised that creativity could, under the right circumstances, be generated automatically by tweaking some variables.
Yes, we should rightly worry about the way AI is being used to make large sections of the creative (I use the term loosely, to include bullshit jobs and PR content creation) workforce redundant. But we should worry less about the fact and more about our standards. We have, as I have argued previously, decided to accept inferior quality — just good enough not to have users running their fingernails down the blackboard in frustration — rather than demand a higher standard. This has less to do with AI, I believe, than with the way AI is used. While AI content is usually atrocious, it depends what you give it to work with. This is a function, in other words, of what is being fed into the bio harvester, rather than (only) the shortcomings of the bio harvester itself.
What we should really worry about is this: how do we encourage creative types to embrace AI and to leverage it to improve the quality of what they do and to create remarkable new things? We should be thinking: how do we keep ahead of AI? The answer: we use AI to do better what we already do, and to do what (for now) AI cannot do on its own.
This means having a different attitude to work, to learning, to doing business, to thinking. The closest parallel I can think of right now is how online freelance workers have been working for the last 15-20 years. I wrote a story about this back in 2012, when I visited a town in the Philippines to see how a librarian had transformed her neighbourhood by switching to offering her librarian skills online, and then, having won her clients’ trust, upskilled herself to do more complex work for them, and hired neighbours to help her. She and others I spoke to were constantly reinventing themselves, something that I think a lot of online workers do as an obvious part of their business. For them there are no moats, only bridges to new skills and better-paid work.
My x.com feed is full of people selling the idea that AI can churn out money-making work while you sit back and relax. Fine. This might work for a while, a sort of arbitraging the transitions from human to AI. But this is not (in the long term) what AI should be used for. Yes, it can definitely help us be more efficient — help us scale the walls of knowledge and competence that we did not view as either necessary or desirable. I, for example, ask AI to help me figure out bits of Google Sheets that I can’t get my head around, or how to automate the sending of transcribed voice notes to my database. It’s not always right — actually it’s rarely right — but I know its quirks and can work around them.
But this is small potatoes. I need to rethink what I want to do, what I want to be paid for, and how high I want to fly. I’m almost certainly going to put some people out of work on the way (I don’t have as much work for my virtual assistant as I used to, I’m ashamed to admit), but I’m sure I’m not alone in noticing that the demand for well-ideated and executed commercially sponsored content has dried up in the past year. I’m guessing a lot of that stuff is now done in-house with a bit of AI. McKinsey, BCG etc now all use some form of AI to prepare reports based on their own prior content and research.
In short, join the march to new pastures. But what new pastures?
The first thing is to acknowledge that whatever trajectory you had in mind for your professional life may no longer exist. This is not an easy thing to accept, but it’s better to accept it now than to wait and hope. Whatever AI does badly today, or not at all, it will keep getting better until it is all most people need. There are no moats, only ditches to die in.
I don’t know, of course, exactly how this will play out. We’ll need to get better at co-existing with AI, learning the language we use to communicate with it (what is rather pompously called prompt engineering).
But that’s just the interface. The skills we develop through that interface could be anything. Think of the skills acquisition explosion that YouTube supports, such as channels that focus on manual crafts (Crafts people, for example, has 14.2 million subscribers).
But we need to skate to where the puck will be, not where it is.
Indeed, the irony of AI may be that it helps our world pivot from digital to real, as politics and climate crisis push us to abandon our addiction to rampant consumerism and regain a respect for tools and products that last, and that we can fix ourselves.
More important, though, will be the new work that we can build on the back of AI. If I knew what those jobs were I probably wouldn’t be sitting here telling you lot; I’d be building them myself. But for the sake of argument let’s ponder how we might figure it out: we first need to identify a need that exists, or will exist (remember the puck), and that cannot itself be met by AI, at least for the foreseeable future.
The most obvious one is the mess AI leaves us. I have never come across something done by AI that can’t be improved upon, or corrected. And we know that AI does not explain itself well, so even if AI can find the tumour a specialist couldn’t, we really need to know how it found it, even if it means trying to understand something that it is unable to explain itself.
But there are bigger issues, bigger problems, bigger needs that we need to face. It is not as if there aren’t things that we need to do; we just never seem to have the money or the political will to do them. We are suffering an epidemic of mental health issues, and while I understand how AI might be helpful in easing some of that pain (particularly the pain of loneliness), far better would be a process that expanded the cohorts of people with enough counselling skills to be able to connect to sufferers and help them, professionally or as part of their daily work. Instead of potholes being something reported and fixed by some remote arm of government, the process of collating data could be crowdsourced (I remember a World Bank pilot project in the Philippines in the early 2010s) and the work assigned to a nearby volunteer, suitably skilled via AI, to fill in the potholes.
Story of a pothole in Cebu City – training materials created for MICS (from TRANSPORT CROWD SOURCE ICT DEMONSTRATION COMPONENT 2 FINAL REPORT)
AI could and does help us acquire new skills that are not ‘careers’ in themselves, but can contribute to a blend of physical, mental and creative work that sustains our future selves.
Another way of looking for opportunities is this: what work would have been too expensive or too time consuming to be considered a business, and can AI help? Garfield AI, a UK company, offers an automated way for anyone to recover debts of up to £10,000 through the small claims court. This is potentially a £20 billion market — the amount of unpaid debts that go uncollected annually — and it’s probably one that is not manageable by most legal companies, and too cumbersome for small creditors. Here’s a piece by the FT on the company: AI law firm offering £2 legal letters wins ‘landmark’ approval.
This doesn’t mean that we can’t also be doing ‘mental’ creative work — using AI, say, to work out who is responsible for the mess that is the British water industry by creating an army of AI-enabled monitors and investigators, identifying polluters and making them accountable. This could be initiated by one individual with the smarts to figure out the legal challenges and to find ways to incentivise as many people as possible to gain the skills necessary to contribute effectively.
In other words: we need to stop thinking about which jobs AI is taking, and start thinking about which jobs that don’t yet exist we can now do with AI.
Finally, a word on creativity. Creativity is, in the words of Eurythmic Dave Stewart, “making a connection between two things that normally don’t go together, the joining of seemingly unconnected dots.” For him and Annie Lennox, that meant trying to find a connecting point between their talents — synths for him, voice for her — which, as we know, eventually paid off. Steve Jobs talked about creativity being “just connecting things.” It’s not as if AI can’t do this, but it can’t, according to researchers, compete with humans’ best ideas, their best connections between disparate things.
I understood this a little better talking to an old friend who is a famous writer, but has been suffering from Long COVID since the pandemic. It prevents him, he says, from keeping in his head the necessary elements for a novel, but he can just about manage short stories. He still writes, and writes a lot, but that extra genius/muscle of creativity that has propelled him into the ranks of Britain’s best writers is currently not working properly. To me that explains a lot about what is really going on in human creativity, at least at the highest level. It’s frustrating for him, of course, but to me it’s an insight that should inspire us to make the most of our extraordinary minds, and to acknowledge that at their best our minds are more than a match for any kind of AI.
I’m changing my mind about generative AI. I don’t think hallucination is the show-stopper anymore. I think we are.
Perplexity.ai, which I’ve been using as my main AI research assistant, has this month launched a Deep Research tool, essentially an upgraded version of itself, which doesn’t just do a cursory web search and build its response on that. (It’s obviously presenting itself as a competitor to OpenAI’s tool of the same name, launched a couple of weeks earlier and available to pro users in some jurisdictions; OpenAI’s is faster than Perplexity’s, and the two are comparable.) Deep Research prepares itself — and shows you how it is doing so — by breaking down the prompt, continually reassessing itself and customising searches until it spits out a response, which is presented as a paper of sorts, with a beginning, middle and end.
I have to admit I was impressed. Perplexity, of course, is not the only one to have reached this level, but to roll it out to existing users so seamlessly is impressive. Unfortunately it’s too good. Whereas previously I would run a prompt through Perplexity as an initial foray into a topic, now I more or less feel there’s nothing much to add. The result: a motivation-sapping sensation in which the question “what’s the point of doing any more on this?” is hard to answer.
Still from The Hill (1965)
The case in point was a thought I had while watching Sidney Lumet’s 1965 film The Hill. The themes, the tone, the conclusion felt like more than an echo of another film released only a few months earlier, Fail Safe, and I figured it was relevant enough, with all that’s going on in the world, to compare and contrast the two and see what lessons Lumet might have been trying to convey that could be applied to us, a quarter century into the millennium: the dangers of rigid protocol and process, especially in military environments; the toxic power of sociopaths manipulating those systems; the futility of trying to do the right thing in the face of those forces; and a general pessimism about whether humans can build systems that still retain their humanity.
The Hill may be about a prison camp in North Africa to hold and rehabilitate criminals within the British armed forces, and Fail Safe may be about the U.S. and the Soviet Union locked in a nightmare scenario of mutually assured destruction, but both were really about power, and whether the systems we build to project and protect that power are fit for purpose. Both films are extraordinary white-knuckle rides, and both end with little hope offered.
Fonda and Hagman in Fail-Safe (1964)
I ran the idea through Perplexity, just to see if there were any other aspects of this I should be aware of: perhaps the conceit had already been spotted and dissected elsewhere. Perhaps I was just following in other people’s footsteps. My prompt was this:
Write a report exploring the movies The Hill and Fail Safe by Sidney Lumet, comparing and contrasting the themes and other elements
I didn’t try to pre-empt things or tip the scales by giving Perplexity a glimpse of my own thoughts, and I gave equal weight to comparison and contrast. I also tried not to restrict discussion to themes by adding ‘other elements’ — suggesting camera work, acting, location, lighting etc, could equally be explored.
And here’s the crunch. Perplexity’s answer was so good I didn’t really feel there was much more to say. Why not just send you all the Perplexity response and head off to the beach?
Perplexity.ai’s response
Here’s how it started off:
Sidney Lumet’s The Hill (1965) and Fail Safe (1964) stand as stark examinations of institutional power, systemic failure, and the human cost of rigid hierarchies. Though distinct in setting—one a WWII military prison in North Africa, the other a Cold War nuclear crisis—both films interrogate the moral decay inherent in systems that prioritize control over humanity. Through unflinching realism, claustrophobic direction, and morally complex characters, Lumet crafts two parallel narratives that reflect the anxieties of their eras while offering timeless critiques of authority. This report explores the thematic and stylistic intersections and divergences between these works, revealing Lumet’s mastery in exposing the vulnerabilities of systems designed to suppress individual agency.
I’m not a professional film reviewer, nor am I an academic, but there’s not much there I would disagree with — and I would readily admit my own thoughts on the topic are pretty prosaic compared to this. Perplexity goes on under clear headings to dissect the topic, before concluding thus:
The Hill and Fail Safe exemplify Lumet’s ability to dissect power structures through intimate, high-stakes narratives. While The Hill exposes the brutality of colonial and military hierarchies, Fail Safe interrogates the existential threats of technological overreach. Both films, however, converge on a central truth: systems designed to protect often enslave, and humanity’s greatest vulnerabilities lie in the very institutions meant to uphold order. In an era of resurgent authoritarianism and AI-driven warfare, Lumet’s films remain urgent reminders of the costs of unchecked power.
Perplexity here is not just parroting its sources; it’s pulling them together into a paper that responds to my prompt at a level that’s breathtaking. It is thinking, in the sense that it is not merely summarising from collated sources. There’s little to quibble about here. The effect is jarring, I’ll admit. We’re not in Kansas anymore.
Or are we?
Perplexity prides itself on showing its thought processes, and that in itself is impressive. You’re watching a computer work, you’re a teacher checking your student’s construction of an argument.
Perplexity.ai’s train of thought
But we’re still hamstrung, in a way that we’ve been hamstrung before by generative AI, by a lingering opacity. I can now see how Perplexity decided to go about the task, and I can see, up to a point, how its thinking evolved based on the sources it found, but I’m still not sure of the provenance of each element. Perplexity provides links in the text, sometimes after each sentence, but on closer inspection it’s not always clear how, and even whether, the source cited is relevant.
Indeed, when I asked Perplexity a follow-up question about which of the sources it cited made any direct comparison of the two films, I realised that we still had at least one foot in Kansas. How much of its answer was its own work, I was asking, essentially.
It replied that only one of the 31 sources cited made any link between the two films, and that link, it said, was fleeting. In fact, the link it provided made no mention of The Hill, and when challenged Perplexity apologised and corrected its answer. And it said:
The most relevant comparisons between the two films actually come from source1, which mentions both films in Sidney Lumet’s filmography. However, it doesn’t provide a direct comparison. The closest it comes to comparing them is in separate sections about each film…
It concluded its answer thus:
The comparative analysis in my previous response was largely synthesized from separate information about each film, rather than from a source directly comparing the two. I apologize for any confusion this may have caused.
I’m not going to quibble. In this and countless other responses Perplexity is hitting them out of the park. Checking its work is easier than in a lot of similar tools, but it’s still problematic. But in terms of delivering a well-constructed argument that not only synthesises sources but actually creates an argument from scratch, providing enough detail to know that it’s not just conjuring fake stuff up (hallucinating), then we’re already on a new level.
Matthau holds forth in Fail-Safe (1964)
But.
Perplexity and its ilk should be congratulated for what they’ve done, and I’m sure things will keep improving quickly.
But for us creative/knowledge professionals what does this mean? What does this mean for journalists?
We’ve hopefully gotten past the low-hanging fruit, mostly knee-jerk responses that it’s going to put everyone out of work. I don’t deny that this is happening already. Why hire people to write a mealy-mouthed press release or marketing blurb when AI does it for you — without all the fingers that insist on adding unwieldy, yawn-inducing buzzwords? Why hire people like me to write sponsored content, ‘white papers’, or other nonsense? Nobody reads those things anyway, so why not have a machine to do it (as, no doubt, is already happening on the side supposedly reading these things)?
In that sense Perplexity and its ilk are simply moving the already-moved goalposts. Seen clearly, we’re moving at a pace to a world where ‘just good enough’ — where we accept a level of mediocrity because it is more efficient, and because that’s what is, under current conditions, considered acceptable — is now just about good enough to roll out at a level most industries and professions would be willing to sign off on. We’re not just talking creatives/media, here: we’re talking law, government, manufacturing — any sector which relies, at some point in its procedures, on the written word. (Think instruction manuals, documentation, briefs, summaries etc.) One AI will create, and another will check. The human involvement will be as minimal as a modern production line. We can’t escape this fate.
Connery in The Hill (1965)
So for journalists and other creatives — where we’re being paid to think up new ways of saying things, connecting things, reinventing stuff, moving stuff from one medium to another — the obvious answer is that it allows us to test story ideas, theories, half-baked thoughts. But that in itself is somewhat half-baked. It’s the ‘faster horse’ fallacy.
So let me lay down a few Wagstaff Laws, none of which come from AI, I promise, though they might perhaps have begun as other people’s ideas.
First off, we need to recognise that the better these things are, the harder it will be to find errors. We have to assume there to be errors, and to assume those errors can be big or small — an incorrectly cited source (breeding claims of plagiarism, IP theft etc.) Perplexity has shown that its footnoting, though impressive, is still flawed, and I’m guessing might never be perfect. The old Perplexity provided half a dozen sources for a response; Deep Research cites more than 30.
Finding those errors isn’t going to be fun, and might end up taking more time than actually researching and writing the piece from scratch. Yes, AI could do the checking, but then you’ll need to check that as well. And how can I be sure its search, though apparently thorough, didn’t miss some stuff? I take some pride in my ability to search stuff online. I have, after all, been doing it for nearly 35 years. How do I know that Perplexity covers all the bases I would have?
And that’s just the small stuff. What if your AI comes up with some great idea that it doesn’t know it cribbed from one of its sources? We’re now into dangerous territory, and not just for academics. If journalists rely on AI to develop story ideas, it has to be assumed that the AI is going to find out if someone else has had a similar idea. But is it going to tell you?
Related to that, my second law: the more we rely on AI for ideas generation, the weaker we will become. We will depend on it, not because we couldn’t come up with the ideas ourselves, but because we don’t need to. So we’ll effectively outsource that part of our cognitive effort, and we’ll wake up one day entirely dependent on something beyond our control. This is fine if it’s something like knowing how to put up a gutter pipe, or even knowing how to drive. But it’s not fine when it’s a skill that is unique — my idea for a story, unless it’s people in a burning building, or the new GDP figures, is never going to be identical to someone else’s, because of all the experience, successes, failures, peccadilloes, biases, enthusiasms, hobbies etc I have allowed in over the years. If I let go of some of that, not only will I gradually get worse at it, but I won’t be absorbing and integrating new experiences, skills, peccadilloes etc. I will no longer be the journalist I was. The point is: we don’t really understand what makes us us, so outsourcing a core piece of what makes me me will lead to a different, and I would have to assume lesser, me.
The generals in Fail-Safe (1964)
For example: I’ve gone through all the sources for the The Hill/Fail Safe thing but I’m still not comfortable. I still don’t know where the sources end and the ideation starts. And now I don’t know where Perplexity’s ideation ends and mine begins. That dissipation of motivation to write the piece? It comes from the blurring of these lines. My creative engine depends on believing that this is my idea, my story, my execution, my name at the top. What happens when a story is mostly the work of AI?

So those are the pretty obvious laws. The third is less so: the more we use tools like this, the more we’ll have to reinvent both how we use them and, in the process, ourselves. This in some ways is quite exciting. I’ve not come across many people using Perplexity, but I’m sure there are lots. Eventually, just as with Google, we’ll all be using it, or something like it. So there’s little point in journalists working on pieces that, to an acceptable level of mediocrity, can be produced by AI. So we’ll have to change. Perhaps our jobs will too, if they survive.
To do that we’ll need to figure out ways to retain our soul — by which I mean, the indelible proof that what we produce is ‘us’, even if it’s a markets wrap. I suspect it will be by learning the art of prompts well enough to squeeze out of AI what only it can do, or what only it can do well. What I mean is this: I can ask Perplexity to help me think out a piece juxtaposing two films, something I could have, and should have, done myself. But if I asked it something I couldn’t possibly do myself, at least in the time available, wouldn’t that make me a better journalist/writer/thinker?
For example: Take the scripts of the following directors (Kubrick, Lumet, and 20 others) and gauge from the language used which of them would have been banned or deemed unacceptable under current laws. Perhaps that's too easy. I was listening to an interview with Peter Turchin, an academic who applies mathematical rigour to history ('cliodynamics'). When the interviewer, Nate Hagens, asked him what he'd say if he got five minutes with Donald Trump, he talked about funding the research necessary to explore what lessons from his work could be applied to the present state of the world. The research, he said, would take five years. Nate, quite reasonably, asked: are we still going to be here in five years?
Peter Turchin (left) and Nate Hagens
To me it was an opportunity missed for Turchin. While I think we probably do have five years, it's true to say time is something we don't have a lot of, whether we're talking personally, politically or planet-wise. Can AI help us with that? Yes. I would like a decent AI to look at the things that Turchin is looking at — essentially, periods of elite rule and wealth-pumping, and which lead to catastrophe and which don't — and do it all in five days rather than five years. If Turchin and his team are good, and I'm sure they are, they should be able to rise to the occasion, and at least get us a 'good enough' working conclusion to take to Trump and help him make more informed decisions. There are some things where getting stuff done quickly is more important than getting things done perfectly, to paraphrase Voltaire, or Aristotle, or whoever it was.
As journalists we need to rise to the challenge of this. We need to realise that our core skill is coming up with better ideas than the other guy, and executing those ideas. We need to mesh that with another skill: ‘better prompts’, which in turn should lead us to ‘better stories’ which may, in the end, look nothing like what we now think of as stories.
There is another option, which may in itself not be a bad strategy. AI will shape our world more than any other technology, because it will shape us: our minds, our bodies, and our dependence on it. It's not like a car or a kettle; we can live without either. But AI has already made it impossible for us to switch it off. And so it follows that the most creative act is one of independence — to essentially live without AI. To not use it as a crutch, whether for finding the cheapest flight or assessing CT scans. For a journalist this would mean writing as a sort of clandestine, subversive activity. It would be like keeping one eye closed when turning on a light to retain some night vision. You would reject all technologies that make you dependent on something you can't fix yourself (so a manual typewriter might be OK). You would truly be an independent thinker, and I wouldn't be surprised if you ended up smarter than the people around you.
This is a radical solution, of course. I'm convinced that we must remain masters of our technology — a red line I suspect we've already crossed in many cases — and I believe that entails refusing to accept the false efficiency god of mediocrity. As artists like Mark Haddon argued in a joint letter published today urging the UK government to rethink plans to weaken the copyright laws governing artwork and writing for the sake of improving AI, we should not fool ourselves about what is at stake if we allow mediocrity to win. The creativity that every human is born with — and assiduously develops until it's schooled out of them — would wither and die in the face of AI-generated content. We somehow have to keep the door to mediocrity closed as long as possible while seeking new definitions of what our human minds can do, to discover and unleash those capabilities we know are hidden somewhere within.
AI should be the spur that drives an assault on that bastion of locked-in creativity. Our future happiness depends on it.
I assumed that by now we’d be using something better than email. (Remember Google Wave? No, I don’t either.) But we aren’t. So it’s probably time for me to make a confession: I’m still struggling to find the perfect email app.
After a recent spate of embarrassing moments when important emails just passed me by, I vowed to come up with a system that didn’t let that happen again.
For years I’ve been using Apple’s stock email program (‘Mail’) and souping it up with rules, extensions and smart mailboxes. It looked good. I had at least 40 different smart mailboxes, a dozen or so rules, and everything seemed to be under control.
It was an illusion. By automatically directing things into smart mailboxes I cleaned up my inbox, but to the point where I missed messages. Sometimes it wasn't even clear why I had missed one. Was it a bad rule I had set up? It drove me nuts. An email from an accountant. Missed. An email from a teacher. Missed. An email from the guy who does, well, I'm not supposed to tell you what he does. Missed.
So I went back through the email apps I had played with in the past. There must be something better out there by now, I thought. We've been playing around with email apps for 35 years (very happy to see Pegasus, my first email love, still going strong); surely we've got something solid and dependable to sink our teeth into?
Not really, is the answer.
I'm a Gmail user, but that's only one of four email accounts I need regular access to, so a Gmail-only client (like the discontinued Mailplane, or Mimestream) wouldn't do. Airmail is OK, but it looks and feels a bit dated these days. The same with Thunderbird. It's great to see Mailbird with a macOS version, noting the work Andrea Loubier did to get the app going, but it didn't seem to add much to what Mail and others already do for free. Postbox is another dead man walking, having been acquired by eM Client in October. (eM Client is more of a productivity tool, and while it's competitively priced, it fell outside my zone of candidates.)
The field ranges from all-singing, all-dancing apps at one end that promise to do everything except clean the coffee machine, to ones that strip everything away and keep a tight focus (by hiding everything from the user, like a zen Marie Kondo apartment where everything has been stuffed in a cupboard). I needed something halfway between the two. In the end I settled on Spark (both the iOS and macOS versions are available on Setapp; Canary is another email app on Setapp worth a look). There were, and are, things I don't like about it, but I've tried to focus on what I need the email app to prioritise.
In short, I need it to make sure I know when an important email has arrived. Spark is pretty good at that, allowing the user to mark the sender(s) of a message as priority senders. That means anything from them will be shunted to the top of your list.
Then you’re able to assign an email thread as a priority, which colours it in a subtle orange. (A little Harry Potter scar appears alongside the thread to help you identify it.)
Everything is shunted into one of three categories: Personal, Newsletters and Notifications (a sort of halfway house between the two). It actually works pretty well, although I'd love to have more control, with tagging (which works with Gmail but not with other email services) or folders and filters.
And there’s also some AI guff that I really could do without. I never want anyone who receives an email from me to think it might have begun life as an AI prompt. All this kind of stuff is going to seem embarrassingly lame in a year or so and I wish development dollars weren’t spent on this when there are better things to work on.
Security, for example. Spark's owner Readdle has been criticised for storing email on its servers, something it argues is necessary for some of the features. I don't like it, though, and I wish there were a way to avoid it. It's one reason you can't access Protonmail with Spark, even through the former's mail bridge (which works with apps like Apple Mail).
I'm pretty happy with the outcome, and while I don't have any use for the team-based functions, where you can presumably work on drafts with others, there's clearly ambition to make the app better and smarter. Which is good: the app does crash, and sometimes the little x's that you click on to close a mini-window don't respond. And I'm not happy they only offer annual billing. It's $60 a year, though there is a free version which I think offers most of the features that make it work, so try that first if you're interested.
Spark’s inbox
Given I’ve been writing columns about email for nearly 25 years, I’m a bit shocked I’m a) still writing about it and b) have had to admit I screwed up with lost emails enough to take action.
So I’m probably not the person to offer advice on the topic. I’d rather hear from you. But in the meantime, here are some lessons I’ve learned:
be a good emailer: acknowledge when you receive an email if you don’t intend to respond in detail immediately. You may save a lot of misunderstanding;
build your email world around the people you care enough about to not leave them hanging. Tags, folders etc are fine but if you’re missing emails, or forgetting to reply to those that deserve a reply, you’re doing something wrong.
think about email as the start of things. Spark has some pretty good extensions to help you connect with other apps that do a better job of handling tasks, calendars etc (my favourites: Agenda for contractual and project-level discussions, NotePlan for calendar+ and personal tasks, TickTick for assigning tasks to and managing others.)
don’t, whatever you do, use AI to write your emails unless the person you’re sending them to deserves it.
belt and braces might be a good way to go: I’ve kept my Apple Mail app going, and use that for digging around and organising the many newsletters I subscribe to.
Thanks for reading and let me know what I’m missing — either in terms of apps, or of the greener pastures of Windows.
Drones are changing the way wars are fought, but they’re also changing the way we experience those wars.
Television brought the brutality of war into the comfort of the living room. Vietnam was lost in the living rooms of America — not on the battlefields of Vietnam. — Marshall McLuhan (1975)
Hamas leader Yahya Sinwar is sitting injured in what was once a red armchair in a bombed-out house in Gaza, his face covered with a scarf, when an Israeli drone tracks him down. As it hovers a few feet away he throws a stick at it, apparently knowing his fate. Tank shells and missiles rain down on the building, killing him.
A Ukrainian drone weaves between blast-shredded trees in the Kreminna forest until it finds a target: a single Russian soldier hiding in a bunker beneath a tree. He stares at the drone a few feet away and tries to fire his weapon, but it’s too late. The video turns to static, signifying the drone has exploded. (Source: GWAR69)
There’s something deeply unsettling when a war doesn’t just come to your living room, but when it makes you, the viewer, the angel of death. You are the payload, looking into the eyes of those about to be destroyed.
We are used to these videos now, either cheering along with them, or, more likely, skipping past them. The juxtaposition of technology and the banal horrors of the battlefield is nothing new, but staring into the low-res eyes of the condemned takes a bit of getting used to.
The war in Ukraine has helped gamify warfare for the civilian as much as for the drone operator, feeding us recordings of "first-person view" (FPV) drone attacks on vehicles, civilians, buildings and trenches — clips usually accompanied by martial music and cheering from the operators.
In some ways it's the logical conclusion of the consumerisation of IT. First it was the office, where we brought our iPads and demanded they work with our office computers. Now it's the battlefield, where consumer drones are pimped into service, giving us a first-person view of combatants as they are hunted down in buildings, in bunkers, in fields, in vehicles. The medium, in Marshall McLuhan's words, is the message: such videos are intuitive to us because they are no different to a first-person shooter game. The only quirk is that when the drone finds its target, the feed is lost and our imagination fills in the blanks.
Fog
It's hard to overstate the impact of drone innovation on the way wars are being fought — and will be fought in the future — and to recognise that this is being driven not by the military-industrial complex but by small, quasi-entrepreneurial teams using existing drones, 3D printers working in plastic and wood, cheap Raspberry Pi computers and the like. Yes, it's still a war of heavy artillery, jet fighters and tanks, but drones have forced their way into nearly every facet of the battlefield, to the point where commanders cannot afford to operate without them. South Korea has begun replacing mortar teams with drone teams, and former Google chief Eric Schmidt has said the US should replace its tanks with drones.
Perhaps the greatest impact is the way drones have not just lifted some of the fog of war, but made both intelligence and targeting highly bespoke: a soldier on the battlefield can no longer count on camouflage, the protection of steel or concrete, or even speed, to evade the enemy.
A Tet enforcement drone in Oblivion (2013)
We’ve grown used to the trope in popular culture: the all-seeing eye, despatched to track down a protagonist, in Oblivion, His Dark Materials, War of the Worlds (2005), Dune etc. But few writers envisaged the reach and scale of what is being reported from Ukraine (and of course, we have to factor in that most sources have a bias, even those translating reports from Russian Telegram channels. Everyone has a dog in this fight.) Nevertheless, there’s little doubt that in the nearly three years since the Russian invasion of Ukraine beyond those parts annexed in 2014, no other weapon has evolved so quickly.
Drones patrol the sky hunting for prey. Any soldier is a potential target for a kamikaze drone. A Russian soldier walking to a nearby village for spare parts is killed on his way back to his mortar battery. His platoon commander and a fellow crewman are spotted; the latter is killed, the former loses a leg. (Source: WarTranslated)
Soldiers have grown used to surrendering to drones — what is called “non-contact surrender”, according to David Hambling, writing in Forbes. In one image shared online, what appears to be a Russian soldier attempts to surrender to a drone, holding up a piece of equipment as a bargaining chip. The equipment is likely a relatively basic Russian-made drone jammer.
Russian soldiers have reported that they have been threatened with attack from their own drones if they do not comply with orders to advance. “If you don’t want to go, they throw a grenade from above to speed you up,” the soldier, Lt Oleg Guivik, is quoted as saying by military researcher Chris O, who translated text and a video found on a Russian Telegram channel. (It’s not possible to verify the source material.)
Substitute robot for drone and you can see how far we have come, and where all this is going.
Indeed the tail has begun to wag the dog. Social media teams embedded in units quickly edit and circulate FPV videos. Their popularity has allegedly inspired Russian commanders to co-opt the drones for filming propaganda for their superiors rather than for military purposes, according to one Telegram channel.
Battle-ground
The result is that drones factor into many battlefield decisions. Artillery weapons fired from less than 20 km behind the front line, says Trent Telenko, a specialist in military logistics, can't relocate fast enough to avoid drone counter-fire. Evacuation of the wounded and civilians is vastly complicated by the threat of drones, according to a piece by Kris Parker in Byline Times, even when the evacuation is done by civilian volunteers.
Putin has taken the trend seriously: last year he said Russia aimed to increase its supply of drones tenfold this year, to 1.4 million. Ukrainian President Volodymyr Zelensky has said Ukraine can produce 4 million drones a year, according to Parker. The numbers are probably meaningless, for several reasons. Many drones are either bought by the units themselves, or donated. Some never appear: Russian front-line troops have complained that commanders take donated drones and sell them at local markets.
Barbecue
What is clear is that the evolution of drone warfare will continue to be rapid. It is a war of attrition, in which each side tries to parry the other's technological thrust.
Russian soldiers have done their best to protect themselves and their vehicles from drones, with limited effect. They have constructed wire cages atop their tanks — dubbed 'cope cages' by some — though their effectiveness is unclear; the Russians themselves prefer to call them 'barbecues', and the larger, turtle-like shell a 'king barbecue', suggesting they don't have much faith in them. Indeed Ukrainian munitions producer Shock Wave Dynamics has recently produced a 3D-printed bomb that punches debris in a specific direction, nullifying any cage or shell, according to Stuart Rumble of BFBS Forces News.
Jamming the signal between drone and operator is still the most effective defence. Beyond jamming, both sides have been experimenting with net launchers, which are fired to 'capture' the target drone and bring it down. FPV drones have been successful in intercepting long-range fixed-wing drones like the Russian Gerbera, usually by crashing into them kamikaze-style. Other approaches include trying to nudge the target off course.
In response to jammers, the Russians started using fibre-optic drones, which use a spooling line of angel-hair-thin cable to control the drone and feed back high-definition video until it reaches its target. While the cable reduces some of the drone's manoeuvrability, it renders useless the jammers that rely on intercepting wireless signals between drone and operator.
Autonomy
It is inevitable that drones both large and small will become more autonomous: free of the need to be directed by a remote operator, able to find the target themselves, and therefore invulnerable to jamming of communications or GPS signals. In other words, a seeing drone able to make its own decisions.
Max Hunder of Reuters wrote in July that Ukrainians are already working on this, looking to create a targeting system that is cheap and easy to build, using Raspberry Pi computers and a simple program. Indeed, this is where the next breakthroughs are likely to come. The cost of making drones is falling, but thousands of them are expended each week. The goal will have to be to make drones that can not only evade defences but deliver a punch worth the effort.
So expect to see more like Kyiv-based Swarmer, a startup which is using AI to automate the launching and flight of fleets of drones, according to Hunder. Possibly helping such initiatives is the fact that they are less dependent on government budgets and more on private money, especially Silicon Valley's. Swarmer just closed a round that included not just a defence tech company (R-G.AI) but also a Web3 player, an LA-based VC focused on Ukrainian startups, and the aerospace-focused VC Radius Capital Ventures.
AI is already being used to direct some of the country’s long-range drone strikes inside Russia, Hunder wrote, quoting an unnamed official as saying that such attacks sometimes involve a swarm of about 20 drones, in part to divide up tasks and to distract and confuse defences.
What is clear is that the lines between a military drone and a consumer drone are blurring. At the beginning of the war drones could drop little more than a hand grenade. Now Ukraine's "Baba Yaga" drones can carry a 40 kg bomb, according to Daniel R, an imaging physicist, citing photos from the Ukrainian X feed @watarobi. (Much of the bomb and its release mechanism appears to have been 3D-printed.)
Ukraine's thermite "Dragon" drones appeared in late August, one dropping molten metal as it sped along a tree-line, setting trees and undergrowth ablaze in a display frighteningly reminiscent of napalm raining down on Indochina. Thermite is a combination of oxidised iron and aluminium that burns at about 4,440° F (2,450° C), according to Howard Altman of The War Zone, effectively making it a sort of modern-day flamethrower, dropped or sprayed. The drones are now being used on tanks and bunkers, and are small enough to navigate through narrow slits that were previously considered impenetrable.
So what does all this mean?
I’d offer a few thoughts.
Assassins
From a tech point of view, we're seeing drones, 3D printing and a startup mentality (and VC funding) all find their natural fit: dual-use. AI is on its way to doing something similar: ex-Google CEO Schmidt has been working on AI-driven military drones with a startup he founded, White Stork. As I mentioned above, he has called on the US to ditch its tanks, and in a speech in Saudi Arabia last month predicted that artillery would also disappear. He called drone warfare "the future of conflict", driven by the rapid decline in the "cost of autonomy".
He's got stuff to sell, so of course he would say that. But there's no denying a revolution has taken place without us really noticing. We have entered the era of robot warfare, but not as we envisaged it. We expected humanoid droids of some kind; instead we are getting consumer quadcopters strapped together with duct tape, carrying bombs on hardpoints 3D-printed from plastic or wood, granted intelligence by an off-the-shelf computer the size of a small stack of credit cards. Cost: about $150, compared to military drones costing between $60,000 and $25 million.
Think of them as small, fast, roving assassins, for now. The next step is swarms of robot drones each with a slightly different task, operating, as Schmidt predicts, autonomously.
All of this will be, is being, captured on video, and beamed onto your phone, should you wish to see it.