Breaking the wall: Drawing the right lessons from Blade Runner(s)

By | September 3, 2024

This is the second in a series of pieces I am writing on dystopian movies — broadly defined — and what they tell us, or could tell us, about our own condition, and what prescriptions they might offer for a way forward. In this piece I offer a different interpretation of the two Blade Runner movies and the three commissioned shorts, arguing that they can and should offer us timely advice on two of the most pressing problems of our age.

Zhora (Joanna Cassidy) tries to escape Deckard in the first Blade Runner. (Copyright Warner Bros., Ridley Scott, 1982. Used under fair use for the purpose of criticism, review, or illustration for instruction.)

This is a review of the Blade Runner movies. But it’s really about where we are today. Although I think Blade Runner 2049, Denis Villeneuve’s 2017 sequel to Ridley Scott’s 1982 original, is deeply flawed, I believe that if we take the two movies together we can learn important lessons about our bipolar world, and where we should fit technology, in particular AI, into it.

It’s a lesson I haven’t seen others draw. And it’s based on a rather subjective view of the two movies which you might not agree with. So strap in.

(I’m assuming you’ve seen both movies, so if you haven’t, I would recommend watching them first. I’ll still be here when you get back.)

The first Blade Runner was unleashed on an unsuspecting, and somewhat unreceptive, world in June 1982 (and September in the UK). Largely ignored in the U.S. at the time, it gradually became a cult classic, casting a shadow over anyone who dared consider a sequel. Eventually Denis Villeneuve had a go, releasing Blade Runner 2049 in October 2017. Once again the film underwhelmed at the box office (though critics mostly loved it). I’ll come out and say it: I still don’t like the second one, and am not sure I ever will. And that throws me in the trenches with those who, with very detailed and cogent arguments, also don’t like it, hurling epithets at those in the other trenches who, with equally detailed and cogent arguments, love it.

I’m not (necessarily) here to persuade you to join me in the anti-trench. But my thesis, that the two movies carry a key and poorly understood message for us at this particular moment in our real-world narrative, depends to some extent on me arguing my corner. I’ll try not to be bombastic about it. And of course, there’ll be plot spoilers in there.

Love stories

Both films are love stories, where the protagonist learns how to love. Deckard learns that it doesn’t matter whether Rachael is a replicant, or whether her memories aren’t real. She is, and that, he learns on the roof-top watching Roy Batty die amid tears and rain, is enough for him. Similarly in 2049 the protagonist K learns that while Joi, the female hologram love doll, is not real, his feelings are, his interactions with her are, and that therefore he has agency. He is capable of love, he has a soul of sorts, and he can change things. His decision to help Deckard find his daughter is that change.

Some of this interpretation, I know, is contentious, so let me briefly substantiate. In Blade Runner, when Deckard and Rachael are at the piano in his apartment, she shares her confusion about whether the memories she has are her own or not, whether her ability to play piano is from her memory or not. Deckard tells her that it doesn’t matter. “You play beautifully,” he says. She can play, therefore she is. (I’m definitely not the first to point out that Deckard is Philip K. Dick’s nod to Descartes, whose philosophy populates both the book and the film.)

Rachael (Sean Young) and Deckard (Harrison Ford) at the piano in the 1982 Blade Runner. (Copyright Warner Bros., Ridley Scott, 1982. Used under fair use for the purpose of criticism, review, or illustration for instruction.) 

There’s a parallel moment in 2049, and similar uncertainty about how to read K’s reaction, when the skyscraper-high Joi hologram, commercialism in all its brash nakedness, singles him out as he walks home. When she uses the name she had given him — Joe — as a generic come-on — “You look like a good Joe” — K goes through another life-changing moment. Was their love just a clever bit of commercial programming?

The original script makes clear his thought process:

The NAME goes through K like an arrow. Joe? Jo? His mind fills with doubt and hope and doubt again. Was it all part of her program? Was she ever real?

No answers from “Joi.” Only a knowing wink and her mannequin smile as she looks back out on the city. Selling herself to the world.

CLOSE ON K. His eyes close. As if saying goodbye. To her. To everything he learned from her to dream and hope for.

K is letting go of his memory of her as a lover, but not of the memories, the lessons she shared. For him it doesn’t matter that she was a hologram, or even an off-the-shelf virtual love doll. He learned from her. To dream. To hope. Enough for him to say goodbye, rather than, as some have interpreted the scene, discarding the whole experience as trash. Like any love affair.

A constructed world

So what does all this have to do with AI?

Well, a lot of the discussion about the first Blade Runner has been about whether replicants are human, and what the differences are between the two. We are persuaded to conclude that replicants are in a way more human than human (beyond the trite motto of the company that manufactures them), because they don’t carry our baggage, because they want to live, and because, having a termination date, they know that time is sacred. Most of us humans are guilty of often forgetting we have a termination date too.

But this is not, in my view, the whole picture. Ridley Scott thought deeply about the movie he wanted to make, as did the author of the original script, Hampton Fancher (who also wrote the story and initial draft for the sequel), and so we should be looking deeper for richer treasures. A film well made, after all, is a constructed world that reflects our world back at us with fresh eyes.

Similarly, no-one will accuse Villeneuve of directing superficial films. Arrival (2016) is an extraordinary journey through the concept of time, and how, were time not linear, we might still decide to live our lives as we do. Sicario (2015) takes the idea of protagonist and subverts it, leaving us questioning what we believe and how we see the wall between us and the way the world really is. And don’t get me started on the Dune movies (2021, 2023).

In short, closer attention in both films is rewarded, though we shouldn’t expect, or want, the same message from both. In the first Blade Runner, Ridley Scott understood that the message of the film was quite a tender one — as he describes it, the hunter falls in love with the hunted — but the movie needed to explain why and how that happens.

The untermenschen

It happens because humans have screwed up, and replicants are the answer to their problem. They’ve screwed up the planet, they don’t have enough people to go do the empire building off-world, and so they have created untermenschen, an underclass to do the work. It’s not the first time humans have done this, and it won’t be the last. The only difference here is that the untermenschen are artificially created humanoids.

The problem the movie presents is that the answer to the problem has itself become a problem: these replicants have rebelled and started to infiltrate Earth. The protagonist, Harrison Ford’s Deckard, is the fall-guy, the gumshoe, the Philip Marlowe who has to go do the dirty work of removing this problem. At this point in the story (2019, 37 years into the future when the first film was made) the human populace is not aware of this infiltration, and only with a sophisticated device, the Voight-Kampff test, can blade runners identify them. Even that is not infallible: In a deleted scene, one of Deckard’s colleagues, Holden, complains:

maybe it doesn’t work on these ones, Deck… These replicants aren’t just a buncha muscle miners anymore, they’re no goddamn different than you or me… It’s all over, it’s a wipe-out, they’re almost us, Deck, they’re a disease…

By the time of the second movie, released in 2017 and set 32 years hence, a lot has happened (this is all explored in three short prequels commissioned by Villeneuve, without which a lot of 2049 is barely intelligible): three years after Deckard’s travails, gangs of humans hunt down and kill the latest generation of replicants, who have no artificial lifespan, in turn prompting a replicant terrorist attack in which an EMP causes a planet-wide blackout and erases databases across the world. Out of the ashes emerges another inventor-terrible, who solved world hunger and is now creating another generation of replicants. Earlier generations of replicants are still being hunted, but now their replacements are an acknowledged part of the scenery and machinery. The blade runner, replicant or not, is the middleman, policing the no-man’s land between replicant and human.

In other words, a lot has changed, but a lot hasn’t. By the time of the second movie replicants are more advanced and easier to identify (they have a serial number under their right eyeball) and live among humans, but are still treated as a subspecies. (Indeed 2049 opens with a blade runner ‘retiring’ a rogue replicant in a rehashed version of how Ridley Scott had proposed the first movie begin, right down to the sketches.)

The canvas wall

It’s this broad canvas — two movies, three shorts — upon which the love story/detective story plays out. But of course the canvas is in many ways the story. The canvas is a world deeply divided. We see K spat on by fellow officers, abused by his neighbours, his front door sporting the welcome-home graffiti “fuck off skinner”. The only other replicants he encounters are those he’s been told to kill; prostitutes, known as doxies; or the super-replicant Luv, who works as a henchwoman for the inventor-terrible, Niander Wallace.

So we’re still stuck in a hierarchical world, where one species looks down on the other. But now they’re living cheek-by-jowl. The only thing keeping them apart is the certainty that one side can reproduce itself, and the other can’t. And as K’s boss Joshi puts it: “The world’s built on a wall that separates kind. Tell either side there’s no wall and you’ve bought a war, or a slaughter.”

This tension is best understood with the prequels; it does not permeate Villeneuve’s world sufficiently to convey the menace/promise upon which the movie is built, in my view. But it’s vital to the storyline because K is later forced to make a decision, just as Deckard was a generation before: whose ‘kind’ do I belong to? In other words: do I accept this definition of my world as a civil war? Do I fight, and if so, what for?

In the final scenes of Blade Runner, after Deckard watches his saviour Roy Batty die, he is confronted immediately with a choice. His police shadow, Gaff, descends in a police spinner. “You’ve done a man’s job, sir,” he shouts. Gaff, the human, is taunting Deckard that he might not be. He throws a gun at Deckard, hoping he’ll pick it up. He doesn’t. “It’s too bad she won’t live,” Gaff says. “But then again, who does?”

Gaff (Edward James Olmos) throws a gun at Deckard: “Too bad she won’t live”. (Copyright Warner Bros., Ridley Scott, 1982. Used under fair use for the purpose of criticism, review, or illustration for instruction.)

For Deckard it’s now clear what he has to do. An earlier version (23/2/81) of the script was clearer. Gaff exhibited some sadness — that Deckard had someone to love, and that love doesn’t last forever. “I wouldn’t wait too long,” he says. “I wouldn’t fool around. I’d get my little panocha and get the hell outta here.” And when Deckard’s car “bullets through the woods” as he and Rachael escape the city and outrun their pursuers, a voiceover spells out Deckard’s choice:

I knew it on the roof that night. We were brothers, Roy Batty and I! Combat models of the highest order. We had fought in wars not yet dreamed of… in vast nightmares still unnamed. We were the new people… Roy and me and Rachael! We were made for this world. It was ours!

Deckard has made his choice, chosen his side. He escapes with his lover, carrying with him the spiritual guidance of his saviour, Roy Batty. This bit was dropped from the shooting script, although the studio later imposed a clunky voice-over which, crucially, didn’t resurrect any of this talk of the world belonging to ‘us’. In every cut of the film that emerged, Ridley Scott chose to make the story end like a love story, with the two lovers disappearing into the night. In Ridley Scott’s world Deckard had not indicated any decision to throw his lot in with the replicants.

And this is where things get confusing (spoilers ahead). The sequel chooses to build itself around the idea that Rachael and Deckard’s love story becomes the origin story of a second replicant uprising — the war that Joshi fears. Their child is proof that an earlier generation of replicant is capable of procreation. If the Deckard in 2049 was the same Deckard who believed “we were the new people”, then the replicants had, in his child, the rallying cry for the “wars not yet dreamed of” that would make this world “ours”.

It’s no clearer in 2049 whether this cause is what Deckard has committed himself to. Unfortunately it’s this part of the story, the narrative that sustains both films, that wobbles and, I would argue, collapses in the second half of 2049. There are a number of plot holes, but the key problem is not so much a plot hole as a poor solution to a major problem in the narrative. And it’s this:

All the various actors in the drama want the same thing, even if it’s not always overtly expressed: to find the replicant love child. Joshi and the police because they want to stop a war. Luv and her über-industrialist boss because he wants to reverse engineer it to build self-replicating replicants. K because his boss wants him to, but increasingly because he believes he is the love child. Deckard because he’s the father. An underground replicant army because they see an opportunity to have him or her lead an uprising.

But in 2049 this ‘race’ is half-hearted, poorly developed and incoherent. The pivot of the film is when K finds out that he is not Rachael and Deckard’s child, upending the conclusion he — and therefore we — had been coming to for much of the preceding two hours and five minutes. It is a big moment, though I’m not alone in feeling that it’s not as earth-moving as the film would like us to think it was. The problem is that this key moment overlaps with another key moment: the replicant army leader Freysa instructing K that he must kill Deckard, because he may reveal the location of his child to Luv and hence to the über-industrialist.

Denis Villeneuve directs Harrison Ford and Ryan Gosling in Blade Runner 2049 (Photo courtesy of Warner Bros. Pictures)

This is where the plot falls apart. And here’s why.

Somewhere before or during filming a bigger problem was fixed, leaving this rather awkward, almost hackneyed scene of combined key plot points, where suddenly a host of new characters appear, for no obvious reason and presumably at great risk to themselves.

The problem was this. In the original ‘shooting script’ — written by Fancher, the same man who wrote the first key drafts of the first movie — Freysa dismisses K’s fears that Deckard’s abduction places him in danger. “Don’t pay no mind on that. He always wanted to die for his own. Never had the luck. Officer did him a favor.” In short: Deckard identified with the replicants and wanted to die heroically for them.

She then goes on (she is written as speaking a sort of pidgin English, something that was thankfully discarded): “Deckard only want his baby stay safe. And she will. I wish I could find her… I show her unto the world. And she lead an army!” In short: Freysa doesn’t know where the replicant love child is. If she did, she would present her to the world as the leader of the replicant army — apparently with or without her say-so.

Of course a lot of this is overshadowed by the new information that floors K — namely that if the love child is female, then it’s not him. But this leaves all sorts of problems that Villeneuve needed to fix, not least of which is that it makes Deckard’s fate dramatically and narratively irrelevant: he wanted to die for something, so there we go. It gives K very little to do other than accept his underwhelming fate as a normal replicant. It leaves Freysa and her army with nothing more to do except wonder where the child is — and, incidentally, not to ask K whether he had any thoughts on that, given he’d been working on this for a while and nearly died for it. And it leaves us, the audience, wondering why we had just sat through all this K-might-be-a-replicant-love-child only to find he’s, er, not.

An unfixable hole

It’s obvious why Villeneuve decided that this problem had to be fixed: it leaves K desolate and with no obvious path forward. The original script leaves only a hint that K would try to rescue Deckard (‘K can’t live with that’ when he hears Deckard would be happy dying for a cause), but there’s no real dramatic tension, no wrestling with a decision about whether to rescue Deckard, kill him, do something else, or do nothing. It’s a lousy pre-climax scene. And worse, it leaves only a thin strand of motivation for Freysa to go to the effort of following K, saving him and revealing to him the existence of the army and its location. If she wants him to join and help them, it’s unclear why.

So something had to be done. The result is a patch-up job not worthy of such a great film-maker, and it raises questions about how this significant problem apparently sat there all the way to the shooting script. (I would argue there are numerous other major problems that lead to this big problem, but that’s for another time.)

So the final version is this forced conflict, where K has to decide what he’s going to do with Deckard — kill him to stop him divulging the whereabouts of Freysa, or save him (and lead him to the person he now realises is his daughter). You don’t need to be a rocket scientist to realise it wasn’t really an either-or choice: he could save Deckard before he is forced to divulge the information, and take him to meet his daughter if he wanted to. Which (plot spoiler) he does. Presumably, given Deckard’s stated fear that his daughter might, if identified as the replicant love child, be captured and “taken apart, dissected,” he wasn’t about to then help her lead the replicant army.

It’s probably as good an ending as Villeneuve could manage with the material. And it could be argued that the ambiguity of motivation behind K’s final act of courage and selflessness — the question that Deckard asks of him, “Why? Who am I to you?” — leaves the film as open-ended as the first film. But of course, that’s not really true. Deckard’s motive was to escape with Rachael from his human-centred world, to find something and someone better. Implicit was the idea that he had changed sides. With K there is no such open-ended future: (spoiler alert) he expires on the steps in the snow.

The only mystery then is the question he doesn’t answer: why K went to such lengths to make the reunion happen. And the possible answer, or answers, are interesting: did he realise his love for Joi was ultimately doomed and pointless, and that it was better to find love in the sacrifices you make for others? The answers touch on a whole host of things which reflect the complexity of what makes us human.

We wrestled, along with Deckard, with similar questions when Roy Batty saves Deckard at the end of Blade Runner. This time, however, there’s no one to keep K company as he dies. Or is there? In the “shooting script” K, lying awkwardly on the steps, hears Joi’s voice asking “Would you read to me?”

Just as she said when we first met her. K smiles at this ghost of memory.

Of course.

A thready whisper of his baseline. Their old favorite. “And blood-black nothingness began to… spin… a system of cells interlinked…”

If that had been left in, it would have taken us full circle, showing us that the real love story in this film was the one between a replicant and a mass-produced hologram. In some ways that might have helped support what I believe was the thread that Scott, Villeneuve, Fancher and the other writers wanted to be their key takeaway: that there is no wall — between replicants and humans, between replicants and holograms. There are memories, there are experiences that populate those memories, which when shared can connect every living thing.

A replicant like Roy Batty can learn to love life in the abstract, and Deckard as an embodiment of that. Rachael the replicant can learn to love and trust her own feelings. K, a replicant, can love a hologram, a supposedly lower form of AI, which in turn can learn to love, and sacrifice her/itself for love.

And the arc concludes with K himself choosing to die for love — in this case the love between a father who isn’t his, and a daughter who may not know she had one.

Joi (Ana de Armas) tells K (Ryan Gosling) he looks like “a good Joe” (Copyright Warner Bros., Denis Villeneuve, 2017. Used under fair use for the purpose of criticism, review, or illustration for instruction.)

AI

So what does this have to do with AI?

Well, it’s simple enough. The villain of the piece is of course the über-industrialist, who kills replicants for fun, and exploits the feelings and loyalties of those he commands. He wants to find a replicant child to reverse engineer so he can expand his empire and colonise the galaxy. His megalomania sounds a lot like that of some of the techbro titans who bestraddle the world — and particularly those who talk of AI being both the biggest threat and the biggest opportunity humanity has faced.

Some of that hyperbole — or appetite for that hyperbole — has died down a little of late, but that doesn’t mean these ideas and ambitions are not still being pursued. All of this is taking place without any serious consideration by the rest of us, and I would argue that science fiction, dystopian fiction, in writing and film, is as good a way as any of exploring potential outcomes. Both Blade Runners conjure up a world which should be the beginning of a useful conversation.

And we shouldn’t think this is all some way off. One reviewer of the first Blade Runner said she found it all a bit far-fetched, a world too noisy, dense, technofied and neon-tinted, until she walked out afterwards into Leicester Square. (I had that exact same experience, and when I later lived in Bangkok, Singapore, Hong Kong and Jakarta I felt I was living in Ridley Scott’s dense, compressed, retrofitted, gridlocked world.)

But more seriously, think about the ‘replicants’ around us. Those same cities — and many others like them — are home to wall after wall after wall, keeping one kind from another. Countries like Singapore limit visas to specific workers of specific genders, and they are required to live apart from the rest of the population, often shipped around like cattle from work site to work site. Social media is full of scorn for these people who go into debt in the hope of helping their family, a silent underclass. Beyond them — in Syria, Myanmar, Sudan — are vast populations of the dispossessed, stateless, homeless. And beyond them is the animal kingdom, where we have anointed ourselves as monarch ruling over all other species. Underclasses are everywhere if we choose to look.

The genius of the first movie is that Ridley Scott gave us a compelling kaleidoscope of images, some real, some imagined, a world where our existing beliefs find themselves mutating. There is the dense streetscape of words, people, animals, street hardware (their detailed designs include a parking meter), much of which passes in a blur on a first viewing, while in Batty’s final speech we are asked to conjure up images of massive galactic battlefields and structures which are all the better for not being visualised for us. In this disorienting but all-absorbing world he asks us to question whether the difference between man and machine really exists; feelings and emotions are rendered fluid. As Deckard says in an unused voiceover:

Replicants weren’t supposed to have feelings, neither were Blade Runners. What the hell was happening to me?

We are not good at knocking down walls. We need films like these to help us look back at ourselves and think more deeply about what those walls are and whether they should exist. From refugee to robot, we think we know where we stand in the abstract, what our values are. But it’s only when we are confronted with the reality that we realise we are not so well prepared. The Blade Runner sequence gives us a glimpse of that.

And while real replicants are not yet in the shops, we are already used to disembodied voices like Alexa, or GPT chats. But we haven’t even started to understand what we want from these early AIs: we expect these tools to be anthropomorphic because we are hard-wired to interact with everything — man, animal, machine — in that way. But we are far from really understanding what that means. When Claude prefixes answers with “Great question!”, “That’s an interesting question about the Atari brand appearing in the Blade Runner films”, “You’ve raised an excellent point that highlights a subtle but important detail in the original Blade Runner film” or “You’re quite observant to notice this discrepancy” (all genuine responses), I feel patronised and irritated. It may be a small thing for now, but to me this is going to be the hardest part, or one of the hardest parts, to reconcile as our AIs move from generic tools to an increasingly personal, bespoke, bilateral form of computer interaction.

For me the problem is this: we are already in this dangerous world where we interact with machines without any notion of what constitutes civilised behaviour. We curse and roll our eyes at Alexa’s stupidity, but then laugh at her attempts to make friendly chit-chat. Similarly with ChatGPT we are so over the novelty of it, even though it is a hugely powerful tool, and one that I’ve been using almost as Deckard and K interacted with their machines. But we have no baseline, no manual of appropriate etiquette. We have already established our dominance over machines, and so even when those machines cross the uncanny valley, a relatively small step from here, we will likely continue to treat them as machines.

The Blade Runner stories may be driven by love, but they are really ethical journeys, preparing us for that moment when a human creates something that approximates sentient life. Key to that discussion is what, and who, led us to that moment. What that sentient life is depends on which human, or humans, creates it, and this, I suspect, is the root cause of our unease. Neither über-industrialist Tyrell nor Wallace is portrayed as a pleasant, civic-minded or moral individual, which perhaps tells us all we need to know about who, out here in the real world, we should be keeping an eye on.

Sources

Too many to list here, but the main ones I drew on are these. Apologies for any omissions.

  • Blade Runner 2049, story by Hampton Fancher, screenplay by Hampton Fancher and Michael Green, ‘Final Shooting Script’, no date
  • Blade Runner, screenplay by Hampton Fancher and David Peoples, February 23, 1981
  • Blade Runner: The Inside Story, Don Shay, July 1982
  • Do Androids Dream of Electric Sheep?, Philip K. Dick, 1968
  • Are Blade Runner’s Replicants “Human”? Descartes and Locke Have Some Thoughts, Lorraine Boissoneault, November 2017
  • Deckard/Descartes, Kasper Ibsen Beck, Google Groups, 1999
  • A Vision of Blindness: Blade Runner and Moral Redemption, David Macarthur, University of Sydney (2017)
  • Several interviews with Hampton Fancher: Sloan Science and Film (2017), Forbes (2016), Unclean Arts (2015)
  • Blade Runner Sketchbook, 1982
  • Philosophy and Blade Runner, Timothy Shanahan, 2014
  • The Illustrated Blade Runner, 1982

The Civil War in Our Heads

By | September 3, 2024

I finally braced myself to see Alex Garland’s Civil War earlier this month, unable to watch in more than 10-minute chunks, and so found myself flipping between fictional scenes of American carnage and real-world assaults on the Holiday Inn in Rotherham. I learned an unpleasant lesson.

Burning bin at Holiday Inn Express in Rotherham, Aug 5 2024, screengrab from Sky News

I finally braced myself to see Alex Garland’s Civil War earlier this month, unable to watch in more than 10-minute chunks, and so found myself flipping between fictional scenes of American carnage and real-world assaults on the Holiday Inn in Rotherham. I found both compelling but hard to watch.

I realised that what I saw in Civil War wasn’t what most other people seemed to see. It follows four (and for a while six) journalists through an America in the midst of war between at least two groups, culminating in the journalists following one faction into the White House in what appear to be the final moments of the war. The reasons for the conflict and for the state-level alliances are never made clear or explored.

Although superficially it’s about journalists covering a war, it’s really about the war itself. Reviewers have tended to bemoan the lack of hack-like things — most notably any discussion among the four journalists about what caused the war — but that is exactly how journalists would behave by this point in a story: they might discuss how the war might end, not how it began. Garland rightly focuses on what the war means for the journalists themselves, who of course obsess about scoops, money shots and that critical knack of being in the right place at the right time.

Kirsten Dunst and Cailee Spaeny in Civil War (2024)

What I see in the movie is this: From the opening scene, with the flag-carrying suicide bomber running into a crowd of people waiting for a water truck, Garland focuses on the dehumanised nature of the conflict. He focuses on the conventions of war, and essentially demonstrates methodically that in this conflict there are none. This is as murderous, bloody, savage and inhuman as any other war — in Serbia, in Rwanda, in Somalia, in El Salvador, in Myanmar, in Ukraine, in Syria, in Gaza, you name it.

This is Americans killing each other in the most brutal ways on American soil, not really caring who they are, even whether they’re on the same side. There is no code of honour, no Geneva Convention, no taking of prisoners, no distinguishing between civilian and combatant. The first shots we witness are from khaki-clad fighters peppering an opponent trying to hide behind a pillar. When the group the journalists are following prevail, they shoot dead an incapacitated soldier and execute their prisoners with an anti-aircraft gun.

The idea of sides is constantly derided by participants. One sniper mocks Wagner Moura’s character when he asks who they’re shooting at. “Someone’s trying to kill us. We’re trying to kill them.” When Jesse Plemons’ character, the most bloodthirsty and indiscriminate of all those the journalists encounter, asks where each of them comes from, the answers don’t seem to matter. What type of American are you, he asks, inscrutable behind his red sunglasses, without ever seeming to know what type he is looking for.

Garland is saying: we are no better, Americans are no better, than anyone else. We pretend we are, but when the last vestiges of democratic rule have gone — and many have already gone — this is how we will treat each other, how we already are treating each other.

Jesse Plemons and Cailee Spaeny in Civil War (2024)

From the opening suicide bomb to the (spoiler alert) Saddam-like, Gaddafiesque murder of the president lying on the floor of the Oval Office, Garland is saying: this is where we’re at. It doesn’t matter how we got here; we’re already here. He uses journalists as the medium because as individuals they are observers to the tragedy, hungry to see the worst of it, hungry for the rush; they’re useful idiots in conveying his message.

We journalists might be occasionally tender to one another, and allow our feelings to sometimes protrude, but then we, like the stories, move on. Kirsten Dunst’s Lee, the war photographer we follow throughout the movie, dies on the carpet protecting her protégée, but neither her colleague/friend nor the protégée stops to check for a pulse. They move on, drunk on the high of being in the right place at the right time. The character arcs of Jessie and Lee — the ‘narrative’ of the story — predictably cross over as one becomes desensitised to war enough to photograph it, while the other goes in the opposite direction. They are us.

Garland is saying: journalists are shits, have to be shits to do their job, but so, under the right conditions, are we all.

This is not a tale about humanity or the triumph of the spirit, or a heartfelt paean to the lost era of journalism. There is no humanity here, Garland is saying, so why should I waste time pretending that there’s someone to care about, a dog or a kid to save, the kind of salve to our conscience that usually works in such movies — “the whole world froze over/blew up/was destroyed by aliens but Timmy the Dog was safe!” Garland is saying: this is the reductio ad absurdum of all our efforts to put allegiance to party, cause and power over country. There is no magic brake that somehow stops America — or any country — from falling into civil war.

I know I’m sailing against the wind here. I’ve seen reviews like this one, which go the other way: “[Cailee Spaeny’s character Jessie]’s decision to keep shooting through Lee’s sacrificial death becomes Civil War’s final insistence that there is a unique nobility to this profession. They care about the truth; it’s why Jessie captures the president’s extrajudicial killing.” I am happy to debate the point, but I just don’t buy that. It’s true that the profession has its better moments, but the idea that somehow taking a photo of the president’s pleading last words and ignominious death was somehow noble, or a resolution of sorts, somehow making worthwhile the deaths of half of the journalists who made the trip, is to misunderstand, I believe, what Garland was attempting to say.

It was instead the manner of the president’s death (and all the other combatant and non-combatant deaths), and the fact that the soldiers know they’re being photographed doing it, but don’t care, and even enjoy it, that was Garland’s point. The end of one American dictatorship looked like any Third World uprising or coup. That the journalists felt no danger, no fear that the soldiers might murder them to conceal their assassination, is the message. Take any other decent journalist flick — The Killing Fields, The Year of Living Dangerously, Under Fire, Salvador, I could go on — and the narrative is built around the idea that journalists stumble upon a war crime and risk their lives to get the story out. Nothing like that happens here: it is Abu Ghraib-level selfies, taken in the confidence that there will be no repercussions.

Posing with the dead president, Civil War (2024)

Garland hasn’t made a masterpiece, though time may prove me wrong on that. I don’t think that was particularly his intent. This is polemic more than poetry. I don’t think he was interested in, or wanted us to be interested in, the characters, beyond making them substantial enough to add heft to the film’s credibility (and the performances are excellent). As a journalist I have few quibbles with how the profession is portrayed, though I would have expected to see more calls from editors, more filing of stories, more recording of expenses. Garland captured the heady mix of gut-dropping fear and hysterical relief as the journalists seek out, and then try to extricate themselves from, fiefdoms controlled by unpredictable men and women with guns.

He presented the battle scenes as journalist reportage to show us how these things — fantasies, fears, nightmares, our daily lives — may play out when we go down this road, without us getting caught up too much in the outcome. Garland shows us how quickly the process brutalises us, squaring the circle around our own throats. Dunst’s character talks about how she thought that her career covering foreign wars would send a warning home: “Don’t do this. But here we are.”

This is Garland’s message. Journalism plays a useful role here as a vehicle — once America (the West) thought so highly of itself that it paid individuals to risk their lives covering dangerous power plays in dangerous parts of the world, so we could wring our hands and remind ourselves that at least our values were intact. No more. Garland is saying those journalists serve no purpose anymore, because we are just a blockaded door in the Capitol away from the exact same power play. We’ve become the dangerous part of the world, and we don’t yet see it.

Building Bridges: The PC’s (Important) Forgotten Origin Story

By | September 3, 2024
Sir Clive Sinclair mosaic, made with original keys from Sinclair computers, Charis Tsevis, 2011 (Flickr)

Technology-wise, we’re presently in what might be called an interregnum.

There is no clear outcome for AI, especially generative AI. We can’t tell whether it’s a saviour, a destroyer, or a damp squib. More importantly, generative AI — and a lot of other stagnant technologies — doesn’t offer a bridge between where we currently are and where we need to go.

But what might that bridge look like?

This is best explained by looking at an earlier paradigm shift: when personal computing began the journey from a DIY hobby for (mostly) 14-year-old males to being a household tool or entertainment console. In blunt terms, the computer went from being a Meccano kit on a workbench to an item of furniture in the living room. It was the moment computers started to be useful and affordable.

Here’s a great visualisation which captures this sudden growth in the early 1980s (Source: Home computer sales, 1980–93 | Retro Deam/YouTube)

(The video doesn’t include IBM-compatible PCs, which of course eventually stole most of the market for a few decades. But you get the idea.)

For the masses

Exploring why and how this shift happened might help us understand what we should be looking for right now: what engine of change should we be keeping an eye out for, or, if we were of entrepreneurial bent, cooking up in a lab somewhere?

Jack Tramiel, a Holocaust survivor who founded Commodore in Toronto, was the first to recognise that the computer was as much a consumer device as a business one. The trick was to make it visually appealing, fun, and cheap. His Commodore 64, unveiled in January 1982, was the first real mass-market computer, with sound, graphics and software — all for less than $600. The computer “for the masses, not the classes” took North America by storm. At the time an Apple computer, though far better, would cost at least twice that. The Commodore 64 sold upwards of 12 million units.

Tramiel had understood that computers were not confined to the workplace or science. But a British entrepreneur also understood that, and may, on balance, have contributed more to building the bridge that connected ‘computing’ with ‘appliance’. Clive Sinclair (later Sir Clive) had sold circuit boards and kits profitably to a market of do-it-yourself enthusiasts since the early 1960s, but he was sure there was a much bigger market if he could make the products accessible and cheap enough.

Personal Computer World, April 1980

Sinclair wanted to sell them below £100 — about $200 at the time. He knew the market well — enthusiasts wanted to get their hands on one to tinker with, to write code on, to learn how the machines worked, but they were often kids, and didn’t have a lot of cash.

Crucially for bridge building, he focused on marketing and distribution.

According to Bill Nichols, who ran his PR at the time, Sinclair understood that the market itself needed to be carefully nurtured in the crucial early days. Clive “always insisted that every product first went to market mail order under the total control of the company and the marketing team.” This direct connection, via a cohort of outsourced customer service staff, allowed the company to hear and respond to feedback directly from early customers. “Then and only then when the awareness and initial demand was created — typically after six months or so — did it go retail,” he told me.

WH Smith ad for the ZX81, date unknown. (FunkYellowMonkey, imgur)

“Retail” here didn’t mean hobby and computer shops but high street heavy hitters like stationery and newspaper chain WH Smith and pharmacy and healthcare giant Boots. Now that the demand had been nurtured enough to be self-supporting, and Sinclair’s marketing and pricing had taken hold, retailers could be confident there was enough demand across a broad customer base to make it worth their while to display the machines prominently.

Tomb Raider and GTA

For kids and parents they were now easy to find, impossible to resist, and fun to use. The ZX Spectrum, the third incarnation of the Sinclair computer, launched in April 1982, sold over 5 million units. Sinclair was suddenly a serious competitor to Commodore: the Commodore sales team felt them to be enough of a threat to post a photo of Sinclair’s marketing chief Nichols on their dart board.

In one year the computer had gone from an obscure piece of machinery to a consumer device. It also played an important role in creating what we’d now call the ecosystem to support the transition. It was still a little unclear what computers might do for us. Games were the most obvious thing, and the ZX series is credited with spawning a generation of ‘bedroom coders’, kids who would develop games and sell them via hobby magazines. The British video game industry now employs over 20,000 people, and major franchises like Grand Theft Auto and Tomb Raider can trace their roots to companies founded in this early wave. By 1984 the UK had more computers per household (13%) than the U.S. (8.2%) — with the U.S. only catching up in the 2000s.

Sinclair made mistakes. A crucial one was trying to build a market among professionals with the next model, the QL. The device was as half-baked as the rationale — professionals who might need a spreadsheet would already have an IBM PC — and its failure led to him selling the ZX business to a consumer electronics company, Amstrad. Amstrad’s owner, Alan Sugar, had a better instinct for what people needed a computer for: his line of PCW computers was marketed as “word processors”. He sold 8 million of them. (I was no techie but I bought one, and it kickstarted my journalism career. My dad wrote a book with it.)

Me and my Amstrad, 1986

Sir Clive and Jack Tramiel are mostly forgotten figures now, but it’s no exaggeration to say this: they probably contributed as much as the likes of Bill Gates, Steve Wozniak and Steve Jobs to the silicon landscape we inhabit today. Both built bridges from the narrow enclaves of ‘enterprise’ and ‘hobby’ computing to something called ‘personal computing’, without which we wouldn’t be writing, distributing and reading this on computing devices. In 1980 some 750,000 devices that could be called personal computers were sold. In 1990 the figure was 20 million. (Source)

That this period is largely forgotten is to our cost. Sure, Gates was instrumental but he was late to the game — Windows 95 was arguably Microsoft’s first consumer-facing product. Jobs understood the market better, but Apple’s first Macintosh, launched in 1984, cost $2,500, ten times the price of a ZX computer. Both Microsoft and Apple scaled the divide between office and home device, but it was the Commodore, the ZX, and a handful of other much cheaper devices that built the bridge between them.

Sliding the rules

So what do we learn here? What lessons can we apply to the place we’re in today?

Well, first off, we have lousy memories. That the Sinclairs of this world, and what they did, are rarely mentioned shows just how little we understand about how computing got to be the ubiquitous, ambient thing it is today.

Secondly, there’s only so much we can learn through the lens of Silicon Valley’s favourite business consultant. Superficially at least, Sinclair was an early entrepreneur in the Clayton Christensen mould: catering to underserved segments, building products that were affordable, simplified and user-friendly, and making incremental improvements to gain market share. But he was so much more.

Sinclair was obsessed by several things, one of them fatal. He understood two key concepts: price and size. Semiconductor manufacturers would discard chips that didn’t meet their specifications, but Sinclair realised that many of those rejects would work fine if he designed a product to more lenient specs. “Good enough,” in Clayton Christensen’s words. For the rest of his life he would follow a similar pattern: dream up a product, scout out the technology to see whether it could be built, and then pare back the product (and the size and quality of components) to fit a specific price point and size.

Sinclair’s obsession with the miniature emerged less from notions of disruptive innovation and more, his sister Fiona believes, from the “very confused childhood” they shared, which led her to therapy and him to seeking to impose order on the world. “Everything he makes, everything he designs is to do with order — making things smaller, finer, neater,” she said of him.

The resulting products all leveraged technology to build bridges from niche markets to mass ones. His calculators were small and stylish enough to be desirable in a way calculators hadn’t been before, but cheap enough for everyone. It’s hard to overstate the impact this had. Schoolkids like me were still being taught how to use a slide rule to make calculations in the 1970s, but when the first kid brought a Sinclair Oxford 100 into class (£13) we knew those log table books were doomed.

Vintage Aristo Darmstadt Slide Rule, Joe Haupt (Flickr)

But he had another infatuation: the new. The Sinclair QL, which effectively killed his computer business, arose out of his reluctance to build on a good thing, throwing the earlier ZX model out and trying something completely new, for a market that was already being catered to and which didn’t care overly about price. Launched in 1984, the QL was discontinued within two years after selling around 150,000 units, and Sinclair was forced to sell.

Sinclair understood the bridge, but in this case misread it. The bridge here was a bridge backwards, returning to a market that already existed, where users weren’t overly sensitive to price, but were sensitive about usability.

A calculator in your stocking

To me the key lesson to be drawn is this: Sometimes there needs to be an intermediary technology, or technologies, that can be pulled together in a new (and cheaper) way to create a new device, possibly even one that doesn’t have a name. There may be no demand for such a device, but that demand can, with good marketing and distribution, be created. By doing so you create a new market beyond the old one.

This might sound easy and obvious, but it’s not. Sinclair was already well-versed in this approach before he applied it to computers. He had built tiny transistor radios, taking the radio out of the living room and into the pocket or bedroom; home-made amplifiers to take hi-fi away from the deep-pocketed connoisseur; calculators out of science labs and accounting departments; digital display watches light years ahead of your smart watch; and (spectacularly, in terms of grand failures) electric vehicles out of the auto industry and milk float sector.

Not all of these, ok not many of these, were successful, but they helped Sinclair develop a good understanding, not so much of invading existing markets à la Christensen, but of creating new ones. No one realistically thought that everyone wanted a calculator until Sinclair suggested it would make a great Christmas present. No one thought a computer would be much fun until he got them into stationers, toy shops and pharmacies. He built a bridge to a place no-one thought existed.

The common view of Sinclair: Daily Telegraph obituary, 18 Sept 2021

We are in something similar now. We have been sitting on a plateau of new, technology-driven consumer products for nigh on a decade. Interesting technology — materials, blockchain, AI, AR, VR — hasn’t created any real mass market, and that, I believe, is in part due to a lack of imagination and understanding of how bridge-building to new markets works.

I don’t claim to know exactly where those new places are. It could be that some part of AI makes it possible for us to develop a taste for always-new music, say: so instead of us seeking out the familiar when it comes to aural entertainment, we demand that AI create something new for us. (I’ve mentioned before how intriguing I find the likes of udio.com. Here’s a rough stab at “a song in the style of early Yellow Magic Orchestra”, with abject apologies to the souls of Yukihiro Takahashi and Ryuichi Sakamoto.)

This is probably too obvious and too narrow an assessment of the market’s potential. Sinclair’s advantage was that he was a nerd first, but a consumer a close second. He dreamed of things he’d like, he understood the technologies, their availability or lack of it, and he cared deeply about form factor. He brought disparate software, materials, circuitry and functionality together to make something that people either never thought they needed, or never imagined they could afford.

Others took his ideas and made them better, cheaper, more reliable: Casio’s calculators (and calculator watches); Amstrad’s computers. Even his C5 electric trike, which I’ll explore more deeply elsewhere, became the opening salvo in a decades-long struggle that brought us EV scooters and, eventually, the Tesla.

It takes an uncommon mind to see these markets and build the bridges between them. We would not be here if it weren’t for folks like Sinclair, who felt people would like these technologies if they were cheap enough and fun enough, and who understood, at least a little, where we might go with them if we had them.

Now it’s time for someone to ask the same questions and build us some bridges from where we are — computing devices that have been good enough for a decade, software that is mostly a retread of last year’s, and an AI that is undoubtedly impressive, but also depressingly flawed and ultimately dissatisfying.

Over to you.

Anticipating the wave train of AI

By | July 3, 2024

We’ve been poor about trying to predict the real, lasting impact of generative AI.

It’s not through lack of trying: responses have ranged from rethinking the way our economies run and how we think about our lives, to treating it as an existential risk, to treating AI as a foundational, or general purpose, technology that will change everything.

Soldiers gathered around a transistor radio to listen to a broadcast. 1966.
Oliver Noonan/Associated Press

I’m not above a bit of grand predicting, and I’ll make some predictions here, but it took me a while to realise why all these efforts sat awkwardly with me. We’re used to tech players having a very clear and reasoned notion about the likely impact of their technologies, even if they’re wrong: we want information to be free; we want to unlock the value in owned property — cars, houses; we want to empower everyone on the planet by connecting them to the internet.

All fine, but AI, or in particular generative AI, seems to lack one of these clarifying bumper stickers. It’s vague, amorphous, grand but somehow silly, as if we’ve given the pub mic to the only person in the room who has never done stand-up.

It’s partly, I suppose, because of the nature of AI. We have found it useful for stuff: identifying cats, driving cars, taking better pictures, sorting search results. GAI, meanwhile, is a different beast. In a way we’re still struggling for a use case that justifies the vast expense of building and running large language models.

But it’s something else. It’s because Silicon Valley has, for much of the past 20 years or so, been built on the idea of disruptive innovation — that technology will always find a way to do something differently, and in doing so dislodge an incumbent technology.

And not only have we not really worked out what that incumbent technology is when it comes to GAI, we have been too uncritical in our belief in the concept of disruptive innovation. We need, I believe, to overhaul the concept in order for it to be more useful at this point in our technological progress.

Let me show what I mean by talking about the humble transistor radio.

Transistor radios, 2015, by Roadsidepictures

This cute little fella, smaller than most modern smartphones, is often wheeled out as a great example of disruptive innovation, where a newcomer to the scene spots an opportunity to undercut expensive vacuum-tube radios with cheap and cheerful devices that were “good enough” for consumers. Here’s how the father of the concept of disruptive innovation, Clayton Christensen, told the story in his “The Innovator’s Dilemma”:

In the early 1950s, Akio Morita, the chairman of Sony, took up residence in an inexpensive New York City hotel in order to negotiate a license to AT&T’s patented transistor technology, which its scientists had invented in 1947. Morita found AT&T to be a less-than-willing negotiator and had to visit the company repeatedly badgering AT&T to grant the license. Finally AT&T relented. After the meeting ended in which the licensing documents were signed, an AT&T official asked Morita what Sony planned to do with the license. “We will build small radios,” Morita replied. “Why would anyone care about smaller radios?” the official queried. “We’ll see,” was Morita’s answer.

This isn’t accurate, on several counts. Bell Labs, co-owned by AT&T and Western Electric, had been showing off its transistors’ radio capabilities as early as 1947, though use cases and bugs were still being fixed. It’s unthinkable that AT&T had not already thought of the transistor radio — their problem had been finding consumer manufacturers to partner with.

Christensen continues:

Several months later Sony introduced to the U.S. market the first portable transistor radio. According to the dominant metrics of radio performance in the mainstream market, these early transistor radios were really bad, offering far lower fidelity and much more static than the vacuum tube-based tabletop radios that were the dominant design of the time. But rather than work in his labs until his transistor radios were performance-competitive in the major market (which is what most of the leading electronics companies did with transistor technology), Morita instead found a market that valued the attributes of the technology as it existed at the time — the portable personal radio. Not surprisingly, none of the leading makers of tabletop radios became a leading producer of portable radios, and all were subsequently driven from the radio market.

The little gadget that could

In fact, Sony weren’t actually the first to introduce a small transistor radio, nor were they the ones who first recognised the transistor’s commercial potential. Already by 1950 the idea of the transistor radio was being described as the “find of the century” for commercial usage; three years later the first prototype transistor radios were appearing in the wild, and one or two journalists were already writing about them. In January 1953 freelance science writer Leonard Engel wrote a piece headlined Curtain Lifts on Little Gadget Likely to Revolutionise Radio:

Star Weekly (Toronto, Ontario, Canada) · Sat, 10 Jan 1953 · Page 10

The delay wasn’t only due to indifference. Commercialisation was in part held back due to technical constraints — engineers had yet to master how to mould germanium, a key material, into the special forms required for transistors — but also in part due to the fact that most transistors were “going to the military for secret devices.” (Transistor technology originated in Allied research into radar during the war.)

And while Sony’s Morita may have been quick to recognise the opportunity, the first to market was Texas Instruments, whose transistors powered the Regency TR-1, launched in October 1954. They weren’t cheap: adverts at the time showed them retailing around $100 — roughly what an iPhone 15 Pro Max would cost today.

The Indianapolis News (Indianapolis, Indiana) · Mon, 18 Oct 1954 · Page 31

And while cheaper versions eventually brought the price down to $5 or less, Sony’s initial offerings such as the TR-63 cost more or less the same as the TR-1. (See The Transistor Radio in Nuts & Volts Magazine.)

This reality doesn’t fit the myth Christensen heard, and it doesn’t fit the disruptive dogma still powering Silicon Valley.

‘Destroying the social life of mankind’

But it’s not just the chronology of those first few years that is skewed. The focus on disrupting market incumbents leaves out so much.

The introduction of the transistor radio, and its rapid spread across the globe, had a profound effect on mass communication, including the building of a mass infrastructure to support the radio’s reach and influence. In 1950 there were 46 countries without any radio broadcasting transmitter at all. By 1960 that number had fallen to 14. In 1950 more than half of the 159 countries surveyed had fewer than 10 radio receivers per 1,000 people. By 1960 that number had halved. The number of radio receivers in the world doubled between 1950 and 1960 — about half of them in the U.S. (Statistics on Radio and Television, 1950–1960, UNESCO, 1963). The numbers rose even faster as the Beatles gained popularity: sales in the U.S. almost doubled, to 10 million transistor radios, between 1962 and 1963. (Source: Transistor radios: The technology that ignited Beatlemania — CBS News)

UNESCO had realised early on that radio could be a powerful tool for education — particularly given illiteracy, which was as high as 85% in some countries. (UNESCO Technical Needs Commission, Recommendations of the Sub-Commission on Radio, Aug 1948). War, too, had already convinced many countries of radio’s awesome political power: onetime BBC Governor Sir Ian Fraser had called it “an agency of the mind, which, potentially at least, can ennoble or utterly destroy the social life of mankind” (Broadcasting in the UK and US in the 1950s, edited by Jamie Medhurst, Siân Nicholas and Tom O’Malley, 2016). Indeed, both British and German governments had pushed for cheap radio sets to ensure their propaganda could spread as widely and quickly as possible during the war. (Source: Television and radio in the Second World War | National Science and Media Museum).

Radio stations were a required target for any self-respecting coup plotter. Edward Luttwak, in his Coup d’Etat: A Practical Handbook (1968), prescribed seizing the key radio station and establishing a monopoly on information by disabling any others, using “cooperative technicians” where possible. He ascribed part of the failure of Greek King Constantine II’s counter-coup in late 1967 to the fact that the government radio station, Radio Larissa, reached only a fraction of the population because of its weak transmitter and unusual wavelength.

A Wall of Tinny Sound

Transistor radios, in other words, changed the way people got information, what they listened to and their habits. The TR-1 was released in October 1954 into a fast-fermenting musical world: Bill Haley released his Shake, Rattle and Roll the same month. By 1963, the now untethered average American teen listened to the radio for slightly more than three hours per day, according to Steve Greenberg’s piece on the Beatles. Bruce Springsteen believed the power and importance of his transistor radio could not be overstated: “I lived with it, tucked it in my schoolbag during the day, and tucked it beneath my pillow all hours of the night.” Radio became a “medium of mobile listening”, in the words of academics Tim Wall and Nick Webber (PDF), and music producers and song-writers adapted accordingly.

Phil Spector, for example, built his “wall of sound”, filling the recording studio with mini-orchestras to create a big fat sound that was then fed through special “echo chambers” (see Lance LeSalle’s answer to the question What was Phil Spector’s ‘wall of sound’? on Quora). Spector’s innovations feed pop music to this day.

News spread faster and more immediately: initial reports of JFK’s assassination in November 1963 were largely heard on transistor radios, in fields, on buses and on the street. James Sperber described how he heard that JFK had been shot from a friend who had smuggled a radio into their school in San Diego. Meanwhile the teachers, summoned one by one to learn the news themselves from the principal, decided not to burden their pupils with it, unaware that many already knew.

The power of the transistor had been proven: from then on the race was to find more uses and scale the technology. Televisions, hearing aids and computers soon followed. Famously, IBM president Thomas Watson Jr. bought 100 Regency radios and ordered that the company’s computers immediately shift to using only transistors. “If that little outfit down in Texas can make these radios work for that kind of money, they can make transistors that will make our computers work, too,” he told executives. (Source: Crystal Fire: The Invention of the Transistor and the Birth of the Information Age)

The Soviet Union’s secret weapon

All this to say: most of the disruption caused by transistor radios had very little to do with dislodging the old expensive valve radio manufacturers. It had to do with seismic shifts in behaviours: of consumption, of movement, of accessibility, of attention. It’s not hard, once you’ve absorbed the above tale, to draw a direct line from the transistor radio to the smartphone — via the Walkman, the CD Walkman and the iPod — and to see a certain inevitability. Miniaturisation and mobility became the bywords of consumption: so much so that the Soviet Union earned much-needed foreign currency by building the world’s smallest radio, the Micro, in the 1960s. It became all the rage in Europe and the U.S., especially after Nikita Khrushchev gave one to Queen Elizabeth II (See: Soviet Postcards — “Micro” miniature radio, USSR, 1965–69).

Soviet Amsa Micro radio, 1968 (Micro Transistor Radios, defunct website)

We tend to think of disruption in terms of what it does to incumbents — companies, industries, workers etc. Really we should be looking beyond that: Much is missed if we assume the first wave of a tsunami is the last. The first is often not the most powerful: what follows is a ‘wave train’, the timings between them uneven, including edge waves, bores and seiches. We lack the imagination to predict and the tools to measure these serial disruptions that follow an innovation.

It is, I agree, not easy to think through the potential long-term impacts of a technology, especially one like AI. But it would help if we at least informed ourselves about the technology, its capabilities and its limits. I am an avid user of whatever tools I can lay my hands on, both commercial services and open source offerings. But we are too vulnerable to binaries, critical rejection or unquestioning embrace, and the transistor radio story shows that either response is both predictable and unhelpful to understanding the wave train that follows.

Take, for example, hallucination. Loyal readers will know I’m not impressed by GAI’s ability to be honest. Indeed I would argue, 18 months in, that we’re no closer to solving the problem than we were before. I won’t bore you with the details but every tool I’ve used so far has failed basic robustness tests. If we choose to use these tools to create supposedly factual content, we will accelerate our decline into content mediocrity where every press release, every article, every email, every communication that might conceivably have been done with the help of AI will be dismissed as such.

Spikes and spin

Pretending this is not an issue is impossible, but the trick now seems to be to say that it’s just a blip. Despite its efforts to put a positive spin on the results, McKinsey’s recent survey on AI was forced to acknowledge that

Respondents to the latest survey are more likely than they were last year to say their organizations consider inaccuracy and IP infringement to be relevant to their use of gen AI, and about half continue to view cybersecurity as a risk.

That’s putting it mildly. The percentage of respondents who believed that inaccuracy was a risk rose from 56% to 63% between 2023 and 2024, making it their biggest concern of the 10 or so on offer — the next being IP infringement and then cybersecurity, all three deemed relevant by more than half of respondents. And sure, respondents are looking more closely at mitigating the problem, but those doing so number only around half of those who consider it a relevant risk (32% last year, 38% this year).

I’m sorry, but to me that essentially says a good chunk of those asked believe the technology is dangerous. And these people are apparently near the coalface. The 1,363 respondents are described as “representing the full range of regions, industries, company sizes, functional specialties, and tenures.” Nearly three quarters of them said their organizations “had adopted AI in at least one business function” and two thirds said their organizations “were regularly using gen AI in at least one function.” So when people that close to the technology say, in those numbers, that GAI is making stuff up, that should be the headline.

And yet the title of the paper is: The state of AI in early 2024: Gen AI adoption spikes and starts to generate value. The race to adopt AI, and the race to sell AI-related services and consulting (which is of course what McKinsey is doing here), will unfortunately drown out the annoying voices in the boardroom asking: what are we doing here, exactly?

Take the IP thing. There are some extraordinary, creative GAI tools out there — for making images and videos, music (try Udio), sound effects and text-to-speech (try ElevenLabs) — but once again it’s inevitable that the companies behind these tools are either bought out by the bigger players or bankrupted by intellectual property litigation. Those that survive will either leverage their own IP or retreat to a much more cautious approach, producing bland stuff that doesn’t fool anyone. (Just as it’s now relatively easy to spot visual content created by AI, some people, such as Rick Beato’s son, can already tell AI-produced pop from the human-made kind.)

So what are we left with? I’m told that LLM tools that use RAG, Retrieval-Augmented Generation, which confines what a model draws on to a defined corpus of documents, are proving useful in some verticals, such as helping call centre support staff better help customers. That’s great, but I’m not holding my breath: my experience with RAG has not been a positive one. So I guess we’re left with writing faster, better emails, product descriptions and the like, none of which sounds very exciting to me, nor particularly disruptive.
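For readers who haven’t seen it up close, the pattern is simple enough to sketch. What follows is a minimal, illustrative sketch of the RAG idea in Python, not any vendor’s implementation: the crude keyword-overlap retriever stands in for the embedding-based vector search real systems use, and call_llm is a hypothetical placeholder for whatever completion API you happen to have.

```python
# Toy sketch of Retrieval-Augmented Generation: fetch the passages from a fixed
# corpus that look most relevant to the question, and have the model answer
# from those passages only.
from collections import Counter

def score(query: str, passage: str) -> int:
    """Crude relevance score: how often the query's words appear in the passage."""
    query_words = set(query.lower().split())
    passage_words = Counter(passage.lower().split())
    return sum(passage_words[w] for w in query_words)

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Return the k passages that best match the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def answer(query: str, corpus: list[str]) -> str:
    """Build a prompt that confines the model to the retrieved context."""
    context = "\n\n".join(retrieve(query, corpus))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)  # hypothetical placeholder for your model's API
```

The appeal is that the model answers from a vetted pile of documents rather than from whatever it absorbed during training, which is exactly why it shows up first in narrow verticals like call centre support. It does not, of course, guarantee the model won’t still embellish.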

And, indeed, that is the problem. Companies hate to spend money on customer support, and so any money thrown at GAI will be in the hope of saving more money later. Use cases like this will be the low-hanging fruit, because they’re relatively easy to sell: replacing one underpaid group of human drones with non-human ones.

Underwhelming. GAI may well have to rethink itself to avoid another AI winter or survive a race to the bottom, where every product and service has an AI feature that ends up being as little used as Siri or Clippy.

But if we’ve learned anything from the transistor radio, it’s that we have learned very little from transistor radios.

‘A revolutionary device’

Take for example what happened both before and after its launch. While Christensen’s founding myth of the cheap transistor radio was off, he was right that there was widespread scepticism about the transistor’s potential, with both academia and engineers only gradually being won over:

After indulging themselves for a year or two in the resentful skepticism with which academia generally greets revolutionary new concepts, physicists and electronics engineers gradually warmed to the expansive new possibilities offered by semiconductor electronics. By 1953, The Engineering Index, an annual compendium of scholarly monographs in technical fields, listed more than 500 papers on transistors and related semiconductor devices. A year later there were twice as many. (Source: The Chip: How Two Americans Invented the Microchip and Launched a Revolution — T.R. Reid)

But the transistor radio’s success quickly threw up another problem: scaling. Even as pocket radios were flying off the shelves in the late 1950s, “the tyranny of numbers began to emerge. Enthusiasm gave way to frustration, even desperation,” according to Reid. The first transistor radios had 2–7 transistors; the problem was finding an economic and reliable way to cluster the thousands, even tens of thousands, necessary for the devices that stood to benefit most from the transistor revolution. A naval aircraft carrier had 350,000 electronic components; a single computer like Control Data’s CDC 1604 contained 25,000 transistors, according to Reid.

Solving that problem led to the integrated semiconductor circuits of our era, and the rest, as they say, is history. This story arc, however, is left out of the paeans to disruptive innovation, despite being one we should be familiar with. A great idea comes along, and it takes a few visionaries (or good business minds) to see how it might be used. That in turn sparks a movement whose expansion is checked only by some limitation of the technology — usually related to scale. Solving those problems leads to even greater expansion, and then we are into that wave train of change.

This describes, arguably, the evolution of AI, and, within AI, the evolution of particular approaches, such as neural networks and, more recently, generative AI. At any point someone predicting the long-term impact of the transistor radio might have looked foolish. Journalist Leonard Engel had been writing about transistors as far back as 1947, but he was a rare voice writing excitedly about their potential. Even he was careful about when this revolution would start. In January 1953 he wrote:

A radio small enough to fit into the palm will be available in a few years. Your portable TV set will be the size of a small typewriter. And for the hard-of-hearing there will be a hearing aid the size of a matchbox which will run for a year on a single set of batteries. What’s even more important, air travel will be safer because better radio aids can be found. These are but a few of the revolutionary devices now possible thanks to a recent invention — a tiny device the size of a kernel of corn called the transistor.

He relegated to the bottom of his piece the real visionary:

Dr. E(lmer) W(illiam) Engstrom, head of the RCA laboratories, says that in less than five years transistors have come as far as vacuum tubes did in 20. He predicts that in another few years this amazing little gadget will be used not only in home radios and TV sets, but in all sorts of revolutionary devices that can scarcely be imagined today.

That last sentence stands the test of time, and it offers a helpful blueprint for thinking about AI. Not the self-serving (and misdirecting) noise about existential threats, not the banal attempts to make customer service, legal research or content creation simpler and cheaper, but what new things we might do: how society, health, war and creativity might change, and how AI may shape our lives in the same way that little transistor radio did.

We need to talk about our AI fetish

By | July 3, 2024

Artificial intelligence puts us in a bind that in some ways is quite new. It’s the first serious challenge to the ideas underpinning the modern state: governance, social and mental health, a balance between capitalism and protecting the individual, the extent of cooperation, collaboration and commerce with other states.

How can we address and wrestle with an amorphous technology that has not defined itself, even as it runs rampant through ever more facets of our lives? I don’t have the answer to that, but I do know what we shouldn’t do.

The streets

But in other ways we have been here before, making the same mistakes. Only this time it might not be reversible.

Back in the 1920s, the idea of a street was not fixed. People “regarded the city street as a public space, open to anyone who did not endanger or obstruct other users”, in the words of Peter Norton, author of a paper called ‘Street Rivals’ that later became a book, ‘Fighting Traffic.’ Already, however, who took precedence was becoming a loaded — and increasingly bloody — issue. ‘Joy riders’ took on ‘jay walkers’, and judges would usually side with pedestrians in lawsuits. Motorist associations and the car industry lobbied hard to remove pedestrians from streets and for the construction of more vehicle-only thoroughfares. The biggest and fastest technology won — for a century.

Bangkok traffic, photo by Joan Campderrós-i-Canas

Only in recent years has there been any concentrated effort to reverse this, with the rise of ‘complete streets’, bicycle and pedestrian infrastructure, woonerfs and traffic calming. Technology is involved: electric micro-mobility provides more options for how people move about without involving cars; improved VR and AR helps designers visualise what these spaces would look like to the user; and modular, prefabricated street design elements, together with thinking such as ‘tactical urbanism’, allow local communities to modify and adapt their landscape in short-term increments.

We are getting there slowly. We are reversing a fetish for the car, with its fast, independent mobility, ending the dominance of a technology over the needs and desires of those who inhabit the landscape. This is not easy to do, and it’s taken changes to our planet’s climate, a pandemic, and the deaths of tens of millions of people in traffic accidents (3.6 million in the U.S. since 1899). If we had better understood the implications of the first automobile technology, perhaps we could have made better decisions.

Let’s not make the same mistake with AI.

Not easy with AI

We have failed to make the right choices because we let the market decide. And by market here, we mean those standing to make money from it as much as consumers. We’re driven along, like armies on train schedules.

Admittedly, it’s not easy to assess the implications of a complex technology like AI if you’re not an expert in it, so we tend to listen to the experts. But listening to the experts should tell you all you need to know about the enormity of the commitment we’re making, and how they see the future of AI. And how they’re most definitely not the people we should be listening to.

First off, the size and impact of AI has already created huge distortions in the world, redirecting massive resources in a twin battle of commercial and nationalist competition.

  • Nvidia is now the third largest company in the world entirely because its specialised chips account for more than 70 percent of AI chip sales.
  • The U.S. has just announced it will provide rival chip maker Intel with $20 billion in grants and loans to boost the country’s position in AI.
  • Memory-maker Micron has mostly run out of high-bandwidth memory (HBM) stocks because of the chips’ use in AI — one customer paid $600 million up-front to lock in supply, according to a story by Stack.
  • Data centres are rapidly converting themselves into ‘AI data centres’, according to a ‘State of the Data Center’ report by the industry’s professional association AFCOM.
  • Back in January the International Energy Agency forecast that data centres may more than double their electrical consumption by 2026. (Source: Sandra MacGregor, Data Center Knowledge)
  • AI is sucking up all the payroll: tech workers who don’t have AI skills are finding fewer roles and lower salaries — or their jobs disappearing entirely to automation and AI. (Source: Belle Lin at WSJ)
  • China may be behind in physical assets but it is moving fast on expertise, generating almost half the world’s top AI researchers (Source: New York Times).

This is not just a blip. Listen to Sam Altman, OpenAI CEO, who sees a future where demand for AI-driven apps is limited only by the amount of computing available at a price the consumer is willing to pay. “Compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world, and I think we should be investing heavily to make a lot more compute.”

In other words, the scarcest resource is computing to power AI. Meaning that the rise in demand for energy, chips, memory and talent is just the beginning. “There’s a lot of parts of that that are hard. Energy is the hardest part, building data centers is also hard, the supply chain is hard, and then of course, fabricating enough chips is hard. But this seems to be where things are going. We’re going to want an amount of compute that’s just hard to reason about right now.” (Source: Sam Altman on Lex Fridman’s podcast)

Sam Altman, 2022. Photo courtesy of Village Global

Altman is probably the most influential thinker on AI right now, and he has huge skin in the game. His company, OpenAI, is currently duking it out with rivals like Anthropic, maker of Claude. So while it’s great that he does interviews, and that he is thinking about all this, listening to only him and his ilk is like talking to motor car manufacturers a century ago. Of course they’ll be talking about the need for more factories, more components — and more roads. For them, the future was their technology. They framed problems with the technology in their terms, and painted themselves as both creator and saviour. AI is no different.

So what are the dangers?

Well, I’ve gone into this in past posts, so I won’t rehash them here. The main point is that we simply don’t know enough to make sensible decisions on how to approach AI.

Consider the following:

  • We still don’t really know how and why AI models work, and we’re increasingly outsourcing the process of improving AI to the AI itself. Take, for example, Evolutionary Model Merge, an increasingly popular technique to combine multiple existing models to create a new one, adding further layers of complexity and opacity to an already opaque and complex system. “The central insight here,” writes Harry Law in his Learning from Examples newsletter, “is that AI could do a better job of determining which models to merge than humans, especially when it comes to merging across multiple generations of models.” (A toy sketch of the idea appears after this list.)
  • We haven’t even agreed on what AI is. There is as yet no science of AI. We don’t agree on definitions. Because of the interdisciplinary nature of AI it can be approached from different angles, with different frameworks, assumptions and objectives. This would be exciting and refreshing were it not simultaneously impacting us all in fast moving ways.
  • We don’t really know who owns what. We can only be sure of one thing: the big fellas dominate. As Martin Peers wrote in The Information’s Briefing (sub required): “It is an uncomfortable truth of the technology industry currently that we really have little clue what lies behind some of the world’s most important partnerships.” Wary of regulators, Big Tech is avoiding acquisitions in favour of partnerships that “involve access to a lot of the things one gets in acquisitions.” Look at Microsoft’s multi-billion-dollar alliance with OpenAI and, more recently, with Inflection AI.
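To make Law’s point a little more concrete, here is a toy sketch in Python of the general shape of evolutionary model merging, not the published technique: two models are treated as dictionaries of layer weights, the per-layer mixing coefficients form a ‘genome’, and an evolutionary loop, rather than a human, searches for the blend that scores best. The fitness function is a hypothetical stand-in for whatever benchmark you trust.

```python
# Toy sketch of evolutionary model merging: evolve per-layer blend coefficients
# rather than having a human choose how to combine two parent models.
import random

def merge(model_a: dict, model_b: dict, alphas: dict) -> dict:
    """Blend two models layer by layer: alpha * A + (1 - alpha) * B."""
    return {
        layer: [alphas[layer] * a + (1 - alphas[layer]) * b
                for a, b in zip(model_a[layer], model_b[layer])]
        for layer in model_a
    }

def evolve_merge(model_a, model_b, fitness, generations=50, pop_size=20):
    """Search for the per-layer blend that maximises a benchmark score."""
    layers = list(model_a.keys())
    population = [{l: random.random() for l in layers} for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best-scoring half, then mutate copies of them to refill.
        population.sort(key=lambda g: fitness(merge(model_a, model_b, g)),
                        reverse=True)
        parents = population[: pop_size // 2]
        children = [
            {l: min(1.0, max(0.0, v + random.gauss(0, 0.1)))
             for l, v in random.choice(parents).items()}
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=lambda g: fitness(merge(model_a, model_b, g)))
```

The opacity Law describes falls straight out of this: the coefficients that emerge are simply whatever scored well, chosen by no one and fully explained by nothing.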

Not only does this concentrate resources in the hands of the few, it also fuels a sense of the technology’s inevitability, pushed by the dominant companies that have sunk the most into it, driven by dreams of the profitability that may come out of it. In short, we’re in danger of imposing a technological lock-in, or path dependence, which not only limits the range of technologies on offer but, more seriously, means the most influential players in AI are those who need some serious returns on their investment, colouring their advice and thought leadership.

We’re throwing everything we’ve got at AI. It’s ultimately a bet: If we throw enough at AI now, it will solve the problems that arise — social, environmental, political, economic — from throwing everything we’ve got at AI.

We’ve been here before

The silly thing about all this is that we’ve been here before. Worrying about where AI may take us is nothing new. AI’s ‘founders’ — folk like John von Neumann and I.J. Good — knew where things were going: science fiction writer and professor Vernor Vinge, who died earlier this year, coined the term singularity, but he was not the first to understand that there will come a point where the intelligence that humans build into machines will outstrip human intelligence and, suddenly and rapidly, leave us in the dust.

Why, then, have we not better prepared ourselves for this moment? Vinge first wrote of this more than 40 years ago. In 1993 he even gave an idea of when it might happen: between 2005 and 2030. Von Neumann, considered one of the fathers of AI, saw the possibility in the 1950s, according to the Polish-born mathematician Stanislaw Ulam, who quoted him as saying:

The ever-accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

The problem can be explained quite simply: we fetishise technological progress, as if it is itself synonymous with human progress. And so we choose the rosiest part of a technology and ask users: would you not want this? What’s not to like?

The problem is that this is the starting point, the entry drug, the bait and switch, for something far less benevolent.

Economist David McWilliams calls it the dehumanisation of curiosity — as each new social technology wave washes over us, it demands less and less of our cognitive resources until eventually all agency is lost. At first Google search required us to define what it was that we wanted; Facebook et al required us to define who and what we wanted to share our day with; and Twitter required us to be pithy, thoughtful, incisive, to debate. TikTok just required us to scroll. In the end it turned out the whole social media thing was not about us creating and sharing wisdom and intelligent content, but about the platforms outsourcing the expensive bit — creating entertainment — to those willing to sell themselves and their lives, hawking crap or doing pratfalls.

AI has not reached that point. Yet. We’re in an early-Google summer where we still have to think about what we want our technology to do for us. The prompt sits there awaiting us, cursor blinking, as it does in ChatGPT or Claude. But this is just a phase. Generative AI will soon anticipate what we want, or at least a bastardised version of what we want. It will deliver a lowest-common-denominator version which, because it no longer requires us to spell out what we want (and so to see, in text, how much of our time we are dedicating to it), will quietly strip away our ability to compute, to think, along with our ability, and desire, to do the complex things for which we might be paid a salary or stock options.

AI is, ultimately, just another algorithm.

We need to talk

We need to think hard not just about AI, but about what kind of world we want to live in. This needn’t be an airy-fairy discussion, but it has to be one based on human principles, and on a willingness of those we have elected to office to make unpopular decisions. When back in 1945 UK Prime Minister Clement Attlee and his health minister Aneurin Bevan sought to build a national health service free to all, they faced significant entrenched opposition — much of it from sources that would later change their mind. The Conservative Party under Winston Churchill voted against it 21 times, with Churchill calling it “a first step to turn Britain into a National Socialist (Nazi) economy.” (A former chairman of the British Medical Association used similar language.) Doctors voted against it 10:1. Charities, churches and local authorities fought it.

Aneurin Bevan, Minister of Health, on the first day of the National Health Service, 5 July 1948 at Park Hospital, Davyhulme, near Manchester. Photo courtesy University of Liverpool

But Bevan won out because, as a young miner in the slums of south Wales, he had helped develop the NHS in microcosm: a ‘mutual aid society’ that by 1933 was supplying the medical needs of 95% of the local population in return for a subscription of pennies per week. Bevan had seen the future, understood its power, and haggled, cajoled and mobilised until his vision was reality. The result: one of the most popular British institutions, and a clear advantage over countries without a similar system. In 1948 infant mortality in the UK and US was more or less the same — around 33 deaths per 1,000 live births. By 2018 the number had fallen to 3.8 in the UK, but only to 5.7 for the United States. That’s 1.3 million Brits who survived being born, according to my (probably incorrect) maths. (Sources: West End at War; How Labour built the NHS – LSE)

AI is not so simple. Those debating it tend to be those who don’t understand it at all, and those who understand it too well. The former fish for sound-bites while the latter — those who build it — often claim that it is only they who can map our future. What’s missing is a discussion about what we want our technology to do for us. This is not a discussion about AI; it’s a discussion about where we want our world to go. This seems obvious, but nearly always the discussion doesn’t happen — partly because of our technology fetish, but also because entrenched interests will not be honest about what might happen. We’ve never had a proper debate about the pernicious effects of Western-built social media, but our politicians are happy to wave angry fingers at China over TikTok.

Magic minerals and miracle cures

We have a terrible track record of learning enough about a new technology to make an informed decision about what is best for us, instead allowing our system to be bent by powerful lobbies and self-interest. Cars are not the only example. The building industry had known of the health risks associated with asbestos since the 1920s, but it kept them secret and fought regulation. Production of asbestos — once called the ‘magic mineral’ — kept rising, peaking only in the late 1970s. The industry employed now-familiar tactics to muzzle dissent: “organizing front and public relations companies, influencing the regulatory process, discrediting critics, and manufacturing an alternative science (or history).” (Source: Defending the Indefensible, 2008) As many as 250,000 people still die globally of asbestos-related diseases each year — 25 years after it was banned in the UK.

Operator Clémence Gagnon watches a machine carding asbestos fibre, Johns Manville factory, Asbestos, Que., 1944 Photo courtesy of Library and Archives Canada

Another mass killer, thalidomide, was promoted and protected in a similar manner. Its manufacturer, Chemie Grünenthal, “definitely knew about the association of the drug with polyneuritis (damage to the peripheral nervous system)” even before it brought the drug to market. The company ignored or downplayed reports of problems, hired a private detective to discredit critical doctors, and only apologised in 2012 for producing the drug and for remaining silent about the defects. (Source: The Thalidomide Catastrophe, 2018)

I could go on: leaded gasoline, CFCs, plastics, Agent Orange. There are nearly always powerful incentives for companies, governments or armies to keep using a technology even after they become aware of its side-effects. In the case of tetraethyl lead in cars, the U.S. Public Health Service warned of its menace to public health in 1923, but that didn’t stop Big Oil from setting up a corporation the following year to produce and market leaded gasoline, launching a PR campaign to promote its supposed safety and discredit its critics, funding research that downplayed the health risks, and lobbying against attempts to regulate. It was banned entirely only in 1996.

An AI Buildup

We are now at a key inflection point. We should be closely scrutinising the industry to promote competition, but we should also be looking at harms, and potential harms — to mental health, to employment, to the environment, to privacy — and at whether AI represents an existential threat. Instead, Big Tech and Big Government are focusing on hoarding resources and funding an ‘AI buildup.’ In other words, just at the time when we should be discussing how to direct and where to circumscribe AI, we are locked in an arms race where the goal is to achieve some kind of AI advantage, or supremacy.

AI is not a distant concept. It is fundamentally changing our lives at a clip we’ve never experienced. To allow those developing AI to lead the debate about its future is an error we may not get a chance to correct.