A way out of subscription hell

By | October 17, 2024

If your revenue model relies on misleading, impoverishing and gaslighting your customers, then you probably should rethink your model.

That would seem to be a reasonable statement to make, and one most businesses might agree with. But the reality is that companies are falling over themselves to charge people for things they didn’t ask for and to keep doing it.

In the UK alone, consumers in 2023 spent £688 million (US$912 million) on subscriptions they didn’t use, didn’t want, and didn’t know they had. That’s nearly double the figure for the previous year. These include people who didn’t realise a subscription had auto-renewed, who signed up for a trial but forgot to cancel before the paid period kicked in, or who thought they were making a one-off purchase. (I’m only using UK figures because they were the most credible ones I could find. It’s highly likely the picture is the same elsewhere.)

Google searches for ‘cancel subscription’, 2004–2024

Welcome to the subscription economy, which should more rightly be called the subscription trap economy. For the past decade it has been the business model du jour. And it may be on the way out.

California has led other US states in implementing stricter auto-renewal laws, the FTC is cracking down on “illegal dark patterns” that trick customers into subscriptions, and in the UK rules that limit predatory practices will likely come into force by 2026. The FTC has taken Adobe to court, alleging it had “trapped customers into year-long subscriptions through hidden early termination fees and numerous cancellation hurdles.”

Enshittification and the doctrine of Subscriptionism

I’m pleased they’re doing all this (and the EU, India and Japan have also legislated in this direction). But I’m not holding my breath. Subscriptions have been such a boon for companies that they’re not likely to ditch them, or the dark arts around them that make all this so profitable.

While I’m tempted to throw all this into the “enshittification” dumpster, it’s not quite the same. Enshittification means gradually eroding the quality of a free or near-free product in order to prod users to pay, or to make the service more profitable in other ways (for advertisers, for example). Subscriptionism, for want of a better term, is what happens after the enshittification has been successful. The customer throws up their hands, signs up for whatever gives them less enshittification, and then the company, Hotel California-like, ensures that:

  • customers never leave, because it’s too painful to figure out how to unsubscribe;
  • they don’t quite realise how much they’re paying (or they never realise they are paying at all);
  • the paid service itself gradually enshittifies, pushing the user up to the next tier, which is merely a more expensive version of the original tier they were paying for (think Meta, Netflix, Max (ex HBO Max), Amazon Prime, X, etc.);
  • rinse and repeat.

So no, I don’t think that companies will really make it easy for you to unsubscribe. It’s simple economics. Acquiring a new customer is hard, which is why it’s so easy to sign up. Churn, therefore, is the enemy. Companies will spend a lot of money on trying to keep you aboard. The FTC found that more than three quarters of subscription services used at least one ‘dark pattern’ (think misleading wording or buttons, etc), while EmailTooltester found that Amazon Prime was the worst offender, averaging 7.3 dark patterns per cancellation.

I can confirm this last one, and it provides a good illustration of the dark arts. When I got through to the right page and clicked on Cancel Prime, I was greeted by a confusing screen. It took me a while to figure out whether or not I had actually cancelled. It’s like a game of Where’s Waldo/Wally? I finally found out by looking at the bottom right corner:

Amazon Prime cancellation page

So no, I haven’t. I don’t think so. Someone somewhere was paid big bucks to discombobulate me into thinking I had already cancelled.

There is a correlation between such dark patterns and the problem Amazon faces. Subscription, according to RetailX, an analyser of retail data, accounted for 7.7% of Amazon’s total revenue in 2023, more than twice the 3.15% share reported in 2014:

However, the subscription share of Amazon’s revenue has plateaued in recent years. While it doubled between 2014 and 2018, when it made up 6.36%, it has risen by just under 1.5 percentage points in the five years since.

So while Amazon’s subscription revenue is increasing, it’s nothing compared to Amazon Web Services, online stores and retail third-party seller services. If subscriptions were so damned wonderful, why would a company work so hard to not let you leave the shop?

The Dark Art of Notificationism

This is not a one-off. Researching this topic I came across Rocket Money, which is supposed to help me figure out all my subscriptions and cancel those I don’t want. Or something like that: it only seems to work in the U.S., so I gave up. But by then I had an account, and getting out wasn’t going to be so easy. I tried to unsubscribe from emails but was greeted by a page where radio buttons had been pre-selected for 20 different kinds of notification, each of which I had to click to remove myself. When I tried to delete my account I found the option hidden at the bottom of a Profile menu, which required me to click on an unlabelled gear icon and then scroll down beyond the visible pane to find the delete-account option. Even then it still wasn’t clear that I had actually removed my account: trying to log in to check required me to set up two-factor authentication, via an app or SMS, first. So I left it there, and I will probably be getting Rocket Money notifications for the rest of my days.

The foxes are indeed in charge of the hen house. Rocket Money seems keener on adding me to its subscription world than on helping me unsubscribe from the others. Indeed, it’s interesting to watch the businesses that tout themselves as kings of this domain. Recurly, for example, presents itself as the “leading subscription management platform” and says it wants to help its customers “create a frictionless and personalised customer experience.” Amusingly, it lists six companies as its clients, five of which have themselves been criticised for cancellation issues.

The company is upbeat in its 2024 Subscription Trends, Predictions, and Statistics:

Over the past year, Recurly has seen a 15.7% increase in active subscribers, and when comparing this to the figures from 2020, the growth surges to an impressive 105.1%.

But a closer look reveals a somewhat different picture.

  • For the past four years the monthly rate at which new subscribers are signing up has been falling, from 5.3% in 2020 to 3.7% last year.
  • Trials, the most common form of getting people to sign up, are not converting to paid subscriptions: in 2020, the trial-to-paid conversion rate was about 60%; in 2023 it had fallen to 50%.

And I wonder how many of those people actually realised they were signing up, since in most cases companies require some form of payment mechanism to start those trials. The Paramount+ subreddit (a Recurly client) is littered with painful stories of folk who didn’t realise they hadn’t cancelled their trial. One guy lost $250. In its white paper Recurly recommends a trial period of no more than seven days before payment kicks in. Once again: if the product is so good, why do you feel the need to pressure the user into a financial armlock so early on?

I don’t have an easy answer to all this. Everyone seems to be in on it: try to cancel a New York Times subscription (you have to call during US office hours, wherever you signed up). I’m still trying to find out why I’m still paying for a newsletter that moved from Substack to its own platform without any apparent paper trail. While I managed to get one of the payments stopped by my neo-bank, I was told they couldn’t block any future transactions from the provider. (PayPal is much better at this, allowing you to cancel a recurring payment without any fuss, so I’m going to shift everything across to it. I would advise the same.)

My bigger worry is this: as everything becomes subscription-based, two things are going to happen:

We’re going to lose access to things we care about

Most of the time, subscriptions don’t leave you with anything. The most civilised version is that practised by the likes of Tinderbox, one of my favourite Mac apps, which charges an annual fee entitling you to updates during that year, updates that are yours to keep if you decide to stop (or pause). But most subscriptions leave you with nothing. Those carefully curated playlists on Spotify? All those emails on Gmail? Gone, unless you’ve downloaded all your emails first or, in Spotify’s case, resorted to illegal downloaders.

We’re going to find it harder to assign a value to things

For several years I have used Setapp, a subscription-based app store for Mac and iOS with a wonderful array of apps. But at what point have I overpaid? I’m not sure. Depending on how you calculate it, I’ve either saved $1,920 or lost $898.
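To make that ambiguity concrete, here’s a minimal sketch of the two ways you might run the numbers, using entirely hypothetical figures rather than my own: one method credits you the retail price of every bundled app you’ve ever tried, the other counts only the apps you would actually have bought.

```python
# Two ways to value an app-bundle subscription.
# All figures below are hypothetical, for illustration only.

years = 3
annual_fee = 100                 # hypothetical bundle cost per year
fees_paid = years * annual_fee   # total paid so far

# Method 1 (generous): credit yourself the retail price of every
# bundled app you have ever tried.
retail_value_of_apps_tried = 1500
saved = retail_value_of_apps_tried - fees_paid

# Method 2 (strict): count only the apps you would actually have
# bought at retail had there been no bundle.
retail_value_of_apps_needed = 80
lost = fees_paid - retail_value_of_apps_needed

print(f"Fees paid over {years} years: ${fees_paid}")
print(f"Method 1 says I 'saved' ${saved}")
print(f"Method 2 says I 'lost' ${lost}")
```

Same purchase history, wildly different verdicts, which is rather the point.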

The good guys are going to have to find a better way

I love subscribing to people whose work I love. From an Australian creator of amazing dioramas to a mad British scientist in rural France, I offer meagre sustenance via Patreon. Until a recent cull I was a paid subscriber to more than a dozen Substack newsletters. And this is the thing: while these methods are a great way to support independent content, they don’t scale. When we as consumers add up how much we’re spending on subscriptions, we realise it’s a lot more than we thought, and we have to take action. Sadly that tends to mean keeping the $10 monthly streaming service while cutting two $5-a-month Substack newsletters. It’s the little guy who loses, unfortunately. They’re doing great work, but so are a lot of others, none of whom can really scale beyond the one-man-and-a-dog model to compete with the behemoths.

Modeling porn: A Boulder Creek diorama

So we’ll see changes. We have to. Substack writers will have to band together and form partnerships, offering discounts or freebies to subscribers. Bundling will become the norm, both for big players and small. The New York Times bought The Athletic and offers a financially attractive bundle. It wouldn’t surprise me if they folded dozens of Substack writers into a similar arrangement.

And that’s the thing. We may well find our way back to a more centralised model not unlike the one we thought we were disrupting. Newspapers consolidated pamphlets, advertisements and other printed matter because they could enjoy economies of scale; I see it as inevitable that something similar happens here.

Cable content networks bundled dozens of channels together to offer a take-it-or-leave-it proposition to consumers back in the 80s and 90s. As studios and platforms vie for your attention it seems inevitable they’ll further consolidate beyond the platform subscriptions that Amazon Prime, Netflix and Apple already offer.

This would all take us back to something we thought we’d got away from. I have enjoyed the transformation of entertainment that Netflix has ushered in. I even like Spotify’s ability to help me find songs from my childhood I really should have grown out of. But at what cost?

There is another way. I spend a lot more time digging around for video and audio that isn’t on these platforms: on Bandcamp, for example, a lot of musicians allow you to pay what you want for their work. And YouTube, far from becoming a cesspit of clickbait and rabbit holes, can actually provide some extraordinarily erudite and enlightening content, often asking only for a donation on Patreon.

1941 New Theatre Oxford Pantomime Poster

Subscriptions in themselves are not bad. They enabled a flourishing of music, theatre and art, for example, from the 19th century. Subscription allows creators, producers, providers to plan ahead, to be sure that the interest in their creation will last (at least) the year. It also allows a closer relationship between creator and subscriber. Substack and Patreon have helped revive this engagement.

The problem, then, is that the subscription model has bifurcated: one branch where the subscriber feels closer, is closer, to the producer, and one where the opposite is true. In the latter there is instead a faux closeness, driven entirely by AI and algorithms, which steers whatever excuse for a conversation takes place towards upselling or, if the subscriber cannot be deterred from cancelling, towards a far cheaper price, a pause, or some other sop to keep them. In other words, the only time a subscriber may be offered something not already available to them is when they threaten to leave.

Heather Garlick, the owner of SipSin, an alcohol-free bar in Liverpool, told BBC Radio Four this week that the bar had to close when it became apparent that drinkers of non-alcoholic beverages don’t tend to drink more than two, and instead prefer to chat. Not the regular drink-more-need-more alcohol crowd. The interviewer asked her what she planned to do next. She wasn’t sure, she said, but was thinking of a subscription/membership-type model.

SipSin (@sipsinliverpool) on Instagram

To me this captures the purpose of subscription models well. The idea is to create something that can endure, by ensuring that users and provider are committed to some sort of financial arrangement over the longer term. The publican is assured of income; the patrons are assured of somewhere they can go and find others of a like mind. (The book subscription market is doing something similar to the point where it’s worth $11.7 billion, with subscribers getting a curated box of books each month.)

Subscription is about demonstrating confidence that what you get in the future is going to be as good as what you’re getting now.

It is not a method of hoodwinking and impoverishing your customer. This should go without saying, but apparently needs to be said. In their response to a call for input to the UK government’s Digital Regulation Cooperation Forum Work plan last year, the UK charity Citizens Advice found that

consumers in vulnerable circumstances and from marginalised communities are at the sharp end of these practices and suffer particularly bad outcomes. We found that 26% of people have signed up to a subscription accidentally, but this rises to 46% of people with a mental disability or mental health problem, and 45% of people on Universal Credit.

Let’s not drag these practices any lower than they already are. Let’s instead try to reinstil some of the magic of paying for something you’re really excited about, confident that someone somewhere is using your money wisely and with the aim of keeping you hooked — with no dark patterns in sight.

Breaking the wall: Drawing the right lessons from Blade Runner(s)

By | September 3, 2024

This is the second in a series of pieces I am writing on dystopian movies — broadly defined — and what they tell us, or could tell us, about our own condition and what prescriptions they might offer for a way forward. In this piece I offer a different interpretation of the two Blade Runner movies and the three commissioned shorts, arguing that they can and should offer us timely advice on two of the most pressing problems of our age.

Zhora (Joanna Cassidy) tries to escape Deckard in the first Blade Runner. (Copyright Warner Bros., Ridley Scott, 1982. Used under fair use for the purpose of criticism, review, or illustration for instruction.)

This is a review of the Blade Runner movies. But it’s really about where we are today. Although I think Blade Runner 2049, Denis Villeneuve’s 2017 sequel to Ridley Scott’s 1982 original, is deeply flawed, I believe that if we take the two movies together we can learn important lessons about our bipolar world, and where we should fit technology, in particular AI, into it.

It’s a lesson I haven’t seen others draw. And it’s based on a rather subjective view of the two movies which you might not agree with. So strap in.

(I’m assuming you’ve seen both movies, so if you haven’t, I would recommend watching them first. I’ll still be here when you get back.)

The first Blade Runner was unleashed on an unsuspecting, and somewhat unreceptive, world in June 1982 (and September in the UK). Largely ignored in the U.S. at the time, it gradually became a cult classic, casting a shadow over anyone who dared consider a sequel. Eventually Denis Villeneuve had a go, releasing Blade Runner 2049 in October 2017. Once again it underwhelmed at the box office (though critics mostly loved it). I’ll come out and say it: I still don’t like the second one, and am not sure I ever will. And that throws me into the trenches with those who, with very detailed and cogent arguments, also don’t like it, hurling epithets at those in the other trenches who, with equally detailed and cogent arguments, love it.

I’m not (necessarily) here to persuade you to join me in the anti-trench. But my thesis, that the two movies carry a key and poorly understood message for this particular moment in our real-world narrative, depends to some extent on me arguing my corner. I’ll try not to be bombastic about it. And of course, there’ll be plot spoilers in there.

Love stories

Both films are love stories, in which the protagonist learns how to love. Deckard learns that it doesn’t matter whether Rachael is a replicant, or whether her memories aren’t real. She is, and that, he learns on the rooftop watching Roy Batty die amid tears and rain, is enough for him. Similarly in 2049 the protagonist K learns that while Joi, the female hologram love doll, is not real, his feelings are, his interactions with her are, and that therefore he has agency. He is capable of love, he has a soul of sorts, and he can change things. His decision to help Deckard find his daughter is that change.

Some of this interpretation, I know, is contentious, so let me briefly substantiate. In Blade Runner, when Deckard and Rachael are at the piano in his apartment, she shares her confusion about whether the memories she has are her own or not, whether her ability to play piano is from her memory or not. Deckard tells her that it doesn’t matter. “You play beautifully,” he says. She can play, therefore she is. (I’m definitely not the first to point out that Deckard is Philip K. Dick’s nod to Descartes, whose philosophy populates both the book and the film.)

Rachael (Sean Young) and Deckard (Harrison Ford) at the piano in the 1982 Blade Runner. (Copyright Warner Bros., Ridley Scott, 1982. Used under fair use for the purpose of criticism, review, or illustration for instruction.) 

In 2049 there’s similar ambiguity in K’s reaction when the skyscraper-high Joi hologram, commercialism in all its brash nakedness, singles him out as he walks home. When she uses the name she had given him — Joe — as a generic come-on — “You look like a good Joe” — K goes through another life-changing moment. Was their love just a clever bit of commercial programming?

The original script makes clear his thought process:

The NAME goes through K like an arrow. Joe? Jo? His mind fills with doubt and hope and doubt again. Was it all part of her program? Was she ever real?

No answers from “Joi.” Only a knowing wink and her mannequin smile as she looks back out on the city. Selling herself to the world.

CLOSE ON K. His eyes close. As if saying goodbye. To her. To everything he learned from her to dream and hope for.

K is letting go of his memory of her as a lover, but not of the memories, the lessons she shared. For him it doesn’t matter that she was a hologram, or even an off-the-shelf virtual love doll. He learned from her. To dream. To hope. Enough for him to say goodbye, rather than, as some have interpreted the scene, consign the whole experience to the trash. Like any love affair.

A constructed world

So what does all this have to do with AI?

Well, a lot of the discussion about the first Blade Runner has been about whether replicants are human, and what the differences are between the two. We are persuaded to conclude that replicants are in a way more human than human (beyond the trite motto of the company that manufactures them), because they don’t carry our baggage, they want to live, and because they have a termination date they know that time is sacred. Most of us humans are guilty of often forgetting we have a termination date too.

But this is not, in my view, the whole picture. Ridley Scott thought deeply about the movie he wanted to make, as did the author of the original script, Hampton Fancher (who also wrote the story and initial draft for the sequel), and so we should be looking deeper for richer treasures. A film well made, after all, is a constructed world that reflects our world back at us with fresh eyes.

Similarly, no-one will accuse Villeneuve of directing superficial films. Arrival (2016) is an extraordinary journey through the concept of time, and how, were time not linear, we might still decide to live our lives as we do. Sicario (2015) takes the idea of the protagonist and subverts it, leaving us questioning what we believe and how we see the wall between us and the way the world really is. And don’t get me started on the Dune movies (2021, 2024).

In short, closer attention in both films is rewarded, though we shouldn’t expect, or want, the same message from both. In the first Blade Runner, Ridley understood that the message of the film was quite a tender one — as he describes it, the hunter falls in love with the hunted — but the movie needs to explain why and how that happens.

The untermenschen

It happens because humans have screwed up, and replicants are the answer to their problem. They’ve screwed up the planet, they don’t have enough people to go do the empire building off-world, and so they created an untermensch class, an underclass to do the work. It’s not the first time humans have done this, and it won’t be the last. The only difference here is that the untermenschen are artificially created humanoids.

The problem the movie presents is that the answer to the problem has itself become a problem: these replicants have rebelled and started to infiltrate Earth. The protagonist, Harrison Ford’s Deckard, is the fall-guy, the gumshoe, the Philip Marlowe who has to go do the dirty work of removing this problem. At this point in the story (2019, 37 years into the future when the first film was made) the human populace is not aware of this infiltration, and only with a sophisticated device, the Voight-Kampff test, can blade runners identify them. Even that is not infallible: In a deleted scene, one of Deckard’s colleagues, Holden, complains:

maybe it doesn’t work on these ones, Deck… These replicants aren’t just a buncha muscle miners anymore, they’re no goddamn different than you or me… It’s all over, it’s a wipe-out, they’re almost us, Deck, they’re a disease…

By the time of the second movie, released in 2017 and set 32 years hence, a lot has happened (this is all explored in three short prequels commissioned by Villeneuve, without which a lot of 2049 is barely intelligible): three years after Deckard’s travails, gangs of humans hunt down and kill the latest generation of replicants, who have no artificial lifespan, in turn prompting a replicant terrorist attack in which an EMP causes a planet-wide blackout and erases the world’s databases. Out of the ashes emerges another inventor-terrible, who has solved world hunger and is now creating another generation of replicants. Earlier generations of replicants are still being hunted, but now their replacements are an acknowledged part of the scenery and machinery. The blade runner, replicant or not, is the middleman, policing the no-man’s land between replicant and human.

In other words, a lot has changed, but a lot hasn’t. By the time of the second movie replicants are more advanced and easier to identify (they have a serial number under their right eyeball) and live among humans, but are still treated as a subspecies. (Indeed 2049 opens with a blade runner ‘retiring’ a rogue replicant in a rehashed version of how Ridley Scott had proposed the first movie begin, right down to the sketches.)

The canvas wall

It’s this broad canvas — two movies, three shorts — upon which the love story/detective story plays out. But of course the canvas is in many ways the story. The canvas is a world deeply divided. We see K being spat on by fellow officers, abused by his neighbours, his front door sporting the welcome-home graffiti “fuck off skinner”. The only other replicants he encounters are those he’s been told to kill, prostitutes known as doxies, or the super-replicant Luv, who works as a henchwoman for the inventor-terrible, Niander Wallace.

So we’re still stuck in a hierarchical world, where one species looks down on the other. But now they’re living cheek-by-jowl. The only thing keeping them apart is the certainty that one side can reproduce itself, and the other can’t. And as K’s boss Joshi puts it: “The world’s built on a wall that separates kind. Tell either side there’s no wall and you’ve bought a war, or a slaughter.”

This tension is best understood with the prequels; it does not permeate Villeneuve’s world sufficiently to convey the menace/promise upon which the movie is built, in my view. But it’s vital to the storyline because K is later forced to make a decision, just as Deckard was a generation before: whose ‘kind’ do I belong to? In other words: do I accept this definition of my world as a civil war? Do I fight, and if so, what for?

In the final scenes of Blade Runner, after Deckard watches his saviour Roy Batty die, he is confronted immediately with a choice. His police shadow, Gaff, descends in a police spinner. “You’ve done a man’s job, sir,” he shouts. Gaff, the human, is taunting Deckard that he might not be. He throws a gun at Deckard, hoping he’ll pick it up. He doesn’t. “It’s too bad she won’t live,” Gaff says. “But then again, who does?”

Gaff (Edward James Olmos) throws a gun at Deckard: “Too bad she won’t live”. (Copyright Warner Bros., Ridley Scott, 1982. Used under fair use for the purpose of criticism, review, or illustration for instruction.)

For Deckard it’s now clear what he has to do. An earlier version (23/2/81) of the script was clearer. Gaff exhibited some sadness — that Deckard had someone to love, and that love doesn’t last forever. “I wouldn’t wait too long,” he says. “I wouldn’t fool around. I’d get my little panocha and get the hell outta here.” And when Deckard’s car “bullets through the woods” as he and Rachael escape the city and outrun their pursuers, a voiceover spells out Deckard’s choice:

I knew it on the roof that night. We were brothers, Roy Batty and I! Combat models of the highest order. We had fought in wars not yet dreamed of… in vast nightmares still unnamed. We were the new people… Roy and me and Rachael! We were made for this world. It was ours!

Deckard has made his choice, chosen his side. He escapes with his lover, and the spiritual guide of his saviour Roy Batty. This bit was dropped from the shooting script, although the studio later imposed a clunky voice-over which, crucially, didn’t resurrect any of this talk of the world belonging to ‘us’. In every emerging cut of the film Ridley Scott chose to make the story end like a love story, with the two lovers disappearing into the night. In Ridley Scott’s world Deckard had not indicated any decision to throw his lot in with the replicants.

And this is where things get confusing (spoilers ahead). The sequel chooses to build itself around the idea that Rachael and Deckard’s love story becomes the origin story of a second replicant uprising — the war that Joshi fears. Their child is proof that an earlier generation of replicant is capable of procreation. If the Deckard in 2049 was the same Deckard who believed “we were the new people”, then the replicants had in his child the rallying cry for the “wars not yet dreamed of” that would make this world “ours”.

It’s no clearer in 2049 whether this cause is what Deckard has committed himself to. Unfortunately it’s this part of the story, the narrative that sustains both films, that wobbles and, I would argue, collapses in the second half of 2049. There are a number of plot holes, but the key problem is not so much a plot hole as a poor solution to a major problem in the narrative. And it’s this:

All the various actors in the drama want the same thing, even if it’s not always overtly expressed: to find the replicant love child. Joshi and the police because they want to stop a war. Luv and her über-industrialist boss because he wants to reverse engineer it to build self-replicating replicants. K because his boss wants him to, but increasingly because he believes he is the love child. Deckard because he’s the father. An underground replicant army because they see an opportunity to have him or her lead an uprising.

But in 2049 this ‘race’ is half-hearted, poorly developed and incoherent. The pivot of the film is when K finds out that he is not Rachael and Deckard’s child, the growing conclusion he — and therefore we — had been coming to for much of the preceding two hours and five minutes. It is a big moment, though I’m not alone in feeling that it’s not as earth-moving as the film would like us to think it was. The problem is that this key moment overlaps with another key moment: the replicant army leader Freysa instructing K that he must kill Deckard, because he may reveal the location of his child to Luv and hence to the über-industrialist.

Denis Villeneuve directs Harrison Ford and Ryan Gosling in Blade Runner 2049 (Photo courtesy of Warner Bros. Pictures)

This is where the plot falls apart. And here’s why.

Somewhere before or during filming a bigger problem was fixed, leaving this rather awkward, almost hackneyed scene of combined key plot points, where suddenly a host of new characters appear, for no obvious reason and presumably at great risk to themselves.

The problem was this. In the original ‘shooting script’ — written by Fancher, the same man who wrote the first key drafts of the first movie — Freysa dismisses K’s fears that Deckard’s abduction places him in danger. “Don’t pay no mind on that. He always wanted to die for his own. Never had the luck. Officer did him a favor.” In short: Deckard identified with the replicants and wanted to die heroically for them.

She then goes on (she is written as speaking a sort of pidgin English, something that was thankfully discarded): “Deckard only want his baby stay safe. And she will. I wish I could find her… I show her unto the world. And she lead an army!” In short: Freysa doesn’t know where the replicant love child is. If she did, she would present her to the world as the leader of the replicant army — apparently with or without her say-so.

Of course a lot of this is overshadowed by the new information that floors K — namely that if the love child is female, then it’s not him. But this leaves all sorts of problems that Villeneuve needed to fix, not least of which is that it makes Deckard’s fate dramatically and narratively irrelevant. He wanted to die for something, so there we go. It leaves K with very little to do other than accept his underwhelming fate as a normal replicant. It leaves Freysa and her army with nothing more to do except wonder where the child is — without, incidentally, asking K whether he has any thoughts on that, given he’d been working on this for a while and nearly died for it. And it leaves us, the audience, wondering why we had just sat through all this K-might-be-the-replicant-love-child only to find he’s, er, not.

An unfixable hole

It’s obvious why Villeneuve decided that this problem had to be fixed: it leaves K desolate and with no obvious path forward. The original script leaves only a hint that K would try to rescue Deckard (‘K can’t live with that’ when he hears Deckard would be happy dying for a cause), but there’s no real dramatic tension, no wrestling with a decision about whether he rescues Deckard, kills him, does something else, or nothing. It’s a lousy pre-climax scene. And worse, it leaves only a thin strand of motivation for Freysa to go to the effort of following K, saving him and revealing to him the existence of the army and their location. If she wants him to join and help them, it’s unclear why.

So something has to be done. It’s a patch-up job and not worthy of such a great film-maker, and it raises questions about how this significant problem apparently sat there all the way to the shooting script. (I would argue there are numerous other major problems that lead to this big problem, but that’s for another time.)

So the final version is this forced conflict, where K has to decide what he’s going to do with Deckard — kill him to stop him divulging Freysa’s whereabouts, or save him (and lead him to the person K now realises is Deckard’s daughter). You don’t need to be a rocket scientist to realise it wasn’t really an either-or choice: he could save Deckard before he was forced to divulge the information and then take him to meet his daughter if he wanted to. Which (plot spoiler) he does. Presumably, given Deckard’s stated fear that his daughter might, if identified as the replicant love child, be captured and “taken apart, dissected,” he wasn’t about to then help her lead the replicant army.

It’s probably as good an ending as Villeneuve could manage with the material. And it could be argued that the ambiguity of motivation behind K’s final act of courage and selflessness — the question that Deckard asks of him, “Why? Who am I to you?” — leaves the film as open-ended as the first one. But of course, that’s not really true. Deckard’s motive was to escape with Rachael from his human-centred world, to find something and someone better. Implicit was the idea that he had changed sides. With K there is no such open-ended future, as (spoiler alert) he expires on the steps in the snow.

The only mystery then is the question he doesn’t answer: why K went to such lengths to make the reunion happen. And the possible answer, or answers, are interesting: did he realise his love for Joi was ultimately doomed and pointless, that it was better to find love in the sacrifices you make for others? A whole host of things which reflect the complexity of what makes us human.

We wrestled, along with Deckard, with similar questions when Roy Batty saves Deckard at the end of Blade Runner. This time, however, there’s no one to keep K company as he dies. Or is there? In the “shooting script” K, lying awkwardly on the steps, hears Joi’s voice asking “Would you read to me?”

Just as she said when we first met her. K smiles at this ghost of memory.

Of course.

A thready whisper of his baseline. Their old favorite. “And blood-black nothingness began to… spin… a system of cells interlinked…”

If that had been left in, it would have taken us full circle, showing us that the real love story in this film was the one between a replicant and a mass-produced hologram. In some ways that might have helped support what I believe was the thread that Scott, Villeneuve, Fancher and the other writers wanted as their key takeaway: that there is no wall — between replicants and humans, between replicants and holograms. There are memories, there are experiences that populate those memories, which when shared can connect every living thing.

A replicant like Roy Batty can learn to love life in the abstract, and Deckard as an embodiment of that. Rachael the replicant can learn to love and trust her own feelings. K, a replicant, can love a hologram, a supposedly lower form of AI, which in turn can learn to love, and sacrifice her/itself for love.

And the arc concludes with K himself choosing to die for love — in this case the love between a father who isn’t his, and a daughter who may not know she had one.

Joi (Ana de Armas) tells K (Ryan Gosling) he looks like “a good Joe” (Copyright Warner Bros., Denis Villeneuve, 2017. Used under fair use for the purpose of criticism, review, or illustration for instruction.)

AI

So what does this have to do with AI?

Well, it’s simple enough. The villain of the piece is of course the über-industrialist, who kills replicants for fun, and exploits the feelings and loyalties of those he commands. He wants to find the replicant child to reverse engineer it, expand his empire and colonise the galaxy. His megalomania sounds a lot like that of some of the techbro titans who bestraddle the world — and particularly those who talk of AI being both the biggest threat and the biggest opportunity humanity has faced.

Some of that hyperbole — or appetite for that hyperbole — has died down a little of late, but that doesn’t mean these ideas, ambitions are not still being pursued. All of that is taking place without any serious consideration by the rest of us, and I would argue that science fiction, dystopian fiction, in writing and film, is as good a way as any of exploring potential outcomes. Both Blade Runners conjure up a world which should be the beginning of a useful conversation.

And we shouldn’t think this is all some way off. One reviewer of the first Blade Runner said she found it all a bit far-fetched, a world too noisy, dense, technofied and neon-tinted, until she walked out afterwards into Leicester Square. (I had that exact same experience, and when I later lived in Bangkok, Singapore, Hong Kong and Jakarta I felt I was living in Ridley Scott’s dense, compressed, retrofitted, gridlocked world.)

But more seriously, think about the ‘replicants’ around us. Those same cities — and many others like them — are home to wall after wall after wall, keeping one kind from another. Countries like Singapore limit visas to specific workers of specific genders, and they are required to live apart from the rest of the population, often shipped around like cattle from work site to work site. Social media is full of scorn for these people who go into debt in the hope of helping their family, a silent underclass. Beyond them — in Syria, Myanmar, Sudan — are vast populations of the dispossessed, stateless, homeless. And beyond them is the animal kingdom, where we have anointed ourselves as monarch ruling over all other species. Underclasses are everywhere if we choose to look.

The genius of the first movie is that Ridley Scott gave us a compelling kaleidoscope of images, some real, some imagined, a world where our existing beliefs find themselves mutating. There is the dense streetscape of words, people, animals, street hardware (the detailed designs include a parking meter), much of which passes in a blur on a first viewing, while in Batty’s final speech we are asked to conjure up images of massive galactic battlefields and structures which are all the better for not being visualised for us. In this disorienting but all-absorbing world he asks us to question whether the difference between man and machine really exists; feelings, emotions are rendered fluid. As Deckard says in an unused voiceover:

Replicants weren’t supposed to have feelings, neither were Blade Runners. What the hell was happening to me?

We are not good at knocking down walls. We need films like these to help us look back at ourselves and think more deeply about what those walls are and whether they should exist. From refugee to robot, we think we know where we stand in the abstract, what our values are. But it’s only when we are confronted with the reality that we realise we are not so well prepared. The Blade Runner sequence gives us a glimpse of that.

And while real replicants are not yet in the shops, we are already used to disembodied voices like Alexa, or GPT chats. But we haven’t even started to understand what we want from these early AIs: we expect these tools to be anthropomorphic because we are hard-wired to interact with everything — man, animal, machine — in that way. But we are far from really understanding what that means. When Claude or ChatGPT prefixes answers with — Great question!/That’s an interesting question about the Atari brand appearing in the Blade Runner films/You’ve raised an excellent point that highlights a subtle but important detail in the original Blade Runner film/You’re quite observant to notice this discrepancy (all genuine responses) — I feel patronised and irritated. It may be a small thing for now, but to me this is going to be the hardest part, or one of the hardest parts, to reconcile as our AIs move from being generic to an increasingly personal, bespoke, bilateral form of computer interaction.

For me the problem is this: we are already in this dangerous world where we interact with machines, without any notion of what constitutes civilised behaviour. We curse and roll our eyes at Alexa’s stupidity, but then laugh at her attempts to make friendly chit-chat. Similarly with ChatGPT we are so over the novelty of it, even though it is a hugely powerful tool, and one that I’ve been using almost as Deckard and K interacted with their machines. But we have no baseline, no manual of the appropriate etiquette. We have already established our dominance over machines, and so it’s a relatively small step for those machines to cross the uncanny valley, where we will continue to treat them as machines.

The Blade Runner stories may be driven by love, but they are really ethical journeys, preparing us for the moment when a human creates something that approximates sentient life. Key to that discussion is what, and who, led us to that moment. What that sentient life is depends on which human, or humans, creates it, and this, I suspect, is the root cause of our unease. Neither über-industrialist Tyrell nor Wallace is portrayed as a pleasant, civic-minded or moral individual, which perhaps tells us all we need to know about who, out here in the real world, we should be keeping an eye on.

Sources

Too many to list here, but the main ones I drew on are these. Apologies for any omissions.

  • Blade Runner 2049, story by Hampton Fancher, screenplay by Hampton Fancher and Michael Green, ‘Final Shooting Script’, no date
  • Blade Runner, screenplay by Hampton Fancher and David Peoples, February 23, 1981
  • Blade Runner: The Inside Story, Don Shay, July 1982
  • Do Androids Dream of Electric Sheep, Philip K. Dick, 1968
  • Are Blade Runner’s Replicants “Human”? Descartes and Locke Have Some Thoughts, Lorraine Boissoneault, November 2017
  • Deckard/Descartes, Kasper Ibsen Beck, Google Groups, 1999
  • A Vision of Blindness: Blade Runner and Moral Redemption, David Macarthur, University of Sydney (2017)
  • Several interviews with Hampton Fancher: Sloan Science and Film (2017), Forbes (2016), Unclean Arts (2015)
  • Blade Runner Sketchbook, 1982
  • Philosophy and Blade Runner, Timothy Shanahan, 2014
  • The Illustrated Blade Runner, 1982

The Civil War in Our Heads

By | September 3, 2024

I finally braced myself to see Alex Garland’s Civil War earlier this month, unable to watch in more than 10-minute chunks, and so found myself flipping between fictional scenes of American carnage and real-world assaults on the Holiday Inn in Rotherham. I learned an unpleasant lesson.

Burning bin at Holiday Inn Express in Rotherham, Aug 5 2024, screengrab from Sky News

I finally braced myself to see Alex Garland’s Civil War earlier this month, unable to watch in more than 10-minute chunks, and so found myself flipping between fictional scenes of American carnage and real-world assaults on the Holiday Inn in Rotherham. I found both compelling but hard to watch.

I realised that what I saw in Civil War wasn’t what most other people seemed to see. It follows four (and for a while six) journalists through an America in the midst of war between at least two groups, culminating in the journalists following one faction into the White House in what appear to be the final moments of the war. The reasons for the conflict and for the state-level alliances are never made clear or explored.

Although superficially it’s about journalists covering a war, it’s really about the war itself. Reviewers have tended to bemoan the absence of hack-like things, most notably discussion among the four journalists about what caused the war. But that is exactly how journalists would behave by this point in a story: they might discuss how the war might end, little more. Garland rightly focuses on what the war means for the journalists themselves, who of course obsess about scoops, money shots and that critical knack of being in the right place at the right time.

Kirsten Dunst and Cailee Spaeny in Civil War (2024)

What I see in the movie is this: From the opening scene, with the flag-carrying suicide bomber running into a crowd of people waiting for a water truck, Garland focuses on the dehumanised nature of the conflict. He focuses on the conventions of war, and essentially demonstrates methodically that in this conflict there are none. This is as murderous, bloody, savage and inhuman as any other war — in Serbia, in Rwanda, in Somalia, in El Salvador, in Myanmar, in Ukraine, in Syria, in Gaza, you name it.

This is Americans killing each other in the most brutal ways on American soil, not really caring who they are, even whether they’re on the same side. There is no code of honour, no Geneva Convention, no taking of prisoners, no distinguishing between civilian and combatant. The first shots we witness are from khaki-clad fighters peppering an opponent trying to hide behind a pillar. When the group the journalists are following prevail, they shoot dead an incapacitated soldier and execute their prisoners with an anti-aircraft gun.

The idea of sides is constantly derided by the participants. One sniper mocks Wagner Moura’s character when he asks who they’re shooting at. “Someone’s trying to kill us. We’re trying to kill them.” When Jesse Plemons’ character, the most bloodthirsty and indiscriminate of all those the journalists encounter, asks where each of them comes from, the answers don’t seem to matter. What type of American are you, he asks, inscrutable behind his red sunglasses, without ever seeming to know what type he is looking for.

Garland is saying: we are no better, Americans are no better, than anyone else. We pretend we are, but when the last vestiges of democratic rule have gone — and many have already gone — this is how we will treat each other, how we already are treating each other.

Jesse Plemons and Cailee Spaeny in Civil War (2024)

From the opening suicide bomb to the (spoiler alert) Saddam-like, Gaddafiesque murder of the president lying on the floor of the Oval Office, Garland is saying: this is where we’re at. It doesn’t matter how we got here; we’re already here. He used journalists as the medium because as individuals they are observers of the tragedy, hungry to see the worst of it, hungry for the rush; they’re useful idiots in portraying his message.

We journalists might be occasionally tender to one another, and allow our feelings to protrude sometimes, but then we, like the stories, move on. Kirsten Dunst’s Lee Miller, the war photographer we follow throughout the movie, dies on the carpet protecting her protégée, but neither her colleague/friend nor the protégée stops to check for a pulse. They move on, drunk on the high of being in the right place at the right time. The character arcs of Jessie and Lee — the ‘narrative’ of the story — predictably cross over as one becomes desensitised to war enough to photograph it, while the other goes in the opposite direction. They are us.

Garland is saying: journalists are shits, have to be shits to do their job, but so, under the right conditions, are we all.

This is not a tale about humanity or the triumph of the spirit, or a heartfelt paean to the lost era of journalism. There is no humanity here, Garland is saying, so why should I waste time pretending that there’s someone to care about, a dog to save, or a kid, the sort of thing that usually works as a salve to our conscience in such movies — “the whole world froze over/blew up/was destroyed by aliens but Timmy the Dog was safe!” Garland is saying: this is the reductio ad absurdum of all our efforts to put allegiance to party, cause and power over country. There is no magic brake that somehow stops America — or any country — from falling into civil war.

I know I’m sailing against the wind here. I’ve seen reviews like this one, which go the other way: “(Cailee Spaeny’s character Jesse)’s decision to keep shooting through Lee’s sacrificial death becomes Civil War’s final insistence that there is a unique nobility to this profession. They care about the truth; it’s why Jessie captures the president’s extrajudicial killing.” I am happy to debate the point, but I just don’t buy that. It’s true that the profession has its better moments, but the idea that somehow taking a photo of the president’s pleading last words and ignominious death was somehow noble, or a resolution of sorts, somehow making worthwhile the deaths of half of the journalists who made the trip, is to misunderstand, I believe, what Garland was attempting to say.

It was instead the manner of the president’s death (and all the other combatant and non-combatant deaths), and the fact that the soldiers know they’re being photographed doing it, but don’t care, and even enjoy it, that was Garland’s point. This American dictatorship ended like any Third World uprising or coup. That the journalists felt no danger, no fear that the soldiers might murder them to conceal their assassination, is the message. Take any other decent journalist flick — The Killing Fields, The Year of Living Dangerously, Under Fire, Salvador, I could go on — and the narrative is built around the idea that journalists stumble upon a war crime and risk their lives to get the story out. Nothing like that happens here: it is Abu Ghraib-level selfies, confident there will be no repercussions.

Posing with the dead president, Civil War (2024)

Garland hasn’t made a masterpiece, though time may prove me wrong on that. I don’t think that was particularly his intent. This is polemic more than poetry. I don’t think he was interested in, or wanted us to be interested in, the characters, beyond making them substantial enough to add heft to the film’s credibility (and the performances are excellent). As a journalist I have few quibbles with how the profession is portrayed, though I would have expected to see more calls from editors, more filing of stories, more recording of expenses. Garland captured the heady mix of gut-dropping fear and hysterical relief as the journalists seek out, and then try to extricate themselves from, fiefdoms controlled by unpredictable men and women with guns.

He presented the battle scenes as journalistic reportage to show us how these things — fantasies, fears, nightmares, our daily lives — may play out when we go down this road, without us getting caught up too much in the outcome. Garland shows us how quickly the process brutalises us, squaring the circle around our own throats. Dunst’s character talks about how she thought that her career covering foreign wars would send a warning home: “Don’t do this. But here we are.”

This is Garland’s message. Journalism plays a useful role here as a vehicle — once America (the West) thought so highly of itself that it paid individuals to risk their lives covering dangerous power plays in dangerous parts of the world, so we could wring our hands and remind ourselves that at least our values were intact. No more. Garland is saying those journalists serve no purpose anymore, because we are just a blockaded door in the Capitol away from the exact same power play. We’ve become the dangerous part of the world, and we don’t yet see it.

Building Bridges: The PC’s (Important) Forgotten Origin Story

By | September 3, 2024
Sir Clive Sinclair mosaic, made with original keys from Sinclair computers, Charis Tsevis, 2011 (Flickr)

Technology-wise, we’re presently in what might be called an interregnum.

There is no clear outcome for AI, especially generative AI. We can’t tell whether it’s a saviour, a destroyer, or a damp squib. More importantly, generative AI — and a lot of other stagnant technologies — doesn’t offer a bridge between where we currently are, and where we need to go.

But what might that bridge look like?

This is best explained by looking at an earlier paradigm shift: when personal computing began the journey from a DIY hobby for (mostly) 14-year-old males to being a household tool or entertainment console. In blunt terms, the computer went from being a Meccano kit on a workbench to an item of furniture in the living room: the moment computers started to be useful and affordable.

Here’s a great visualisation which captures this sudden growth in the early 1980s (Source: Home computer sales, 1980–93 | Retro Deam/YouTube)

(The video doesn’t include IBM-compatible PCs, which of course eventually stole most of the market for a few decades. But you get the idea.)

For the masses

Exploring why and how this shift happened might help us understand what we should be looking for right now: what engine of change should we be keeping an eye out for, or, if we were of entrepreneurial bent, cooking up in a lab somewhere?

A Polish-born Holocaust survivor, Jack Tramiel, was the first to recognise that the computer was as much a consumer device as a business one. The trick was to make it visually appealing, fun, and cheap. His Commodore 64, unveiled in January 1982, was the first real mass-market computer, with sound, graphics and software — all for less than $600. The computer “for the masses, not the classes” took North America by storm. At the time an Apple computer, though far better, would cost at least twice that. The Commodore 64 sold upwards of 12 million units.

Tramiel had understood that computers were not confined to the workplace or science. But a British entrepreneur also understood that, and may, on balance, have contributed more to building the bridge that connected ‘computing’ with ‘appliance’. Clive Sinclair (later Sir Clive) had sold circuit boards and kits profitably to a market of do-it-yourself enthusiasts since the early 1960s, but he was sure there was a much bigger market if he could make the products accessible and cheap enough.

Personal Computer World, April 1980

Sinclair wanted to sell them below £100 — about $200 at the time. He knew the market well — enthusiasts wanted to get their hands on one to tinker with, to write code on, to learn how the machines worked, but they were often kids, and didn’t have a lot of cash.

Crucially for bridge building, he focused on marketing and distribution.

According to Bill Nichols, who ran his PR at the time, Sinclair understood that the market itself needed to be carefully nurtured in the crucial early days. Clive “always insisted that every product first went to market mail order under the total control of the company and the marketing team.” This direct connection, via a cohort of outsourced customer service staff, allowed the company to hear and respond to feedback directly from early customers. “Then and only then when the awareness and initial demand was created — typically after six months or so — did it go retail,” he told me.

WH Smith ad for the ZX81, date unknown. (FunkYellowMonkey, imgur)

“Retail” here didn’t mean hobby and computer shops but high-street heavy hitters like the stationery and newspaper chain WH Smith and the pharmacy and healthcare giant Boots. Now that the demand had been nurtured enough to be self-supporting, and Sinclair’s marketing and pricing had taken hold, retailers could be confident there was enough demand across a broad customer base to make it worth their while to display the machines prominently.

Tomb Raider and GTA

For kids and parents they were now easy to find, impossible to resist, and fun to use. The ZX Spectrum, the third incarnation of the Sinclair computer, launched in April 1982 and sold over 5 million units. Sinclair was suddenly a serious competitor to Commodore: the Commodore sales team felt threatened enough to post a photo of Sinclair’s marketing chief Nichols on their dart board.

In one year the computer had gone from an obscure piece of machinery to a consumer device. It also played an important role in creating what we’d now call the ecosystem to support the transition. It was still a little unclear what computers might do for us. Games were the most obvious thing, and the ZX series is credited with spawning a generation of ‘bedroom coders’, kids who would develop games and sell them via hobby magazines. The British video game industry now employs over 20,000 people, and major franchises like Grand Theft Auto and Tomb Raider can trace their roots to companies founded in this early wave. By 1984 the UK had more computers per household (13%) than the U.S. (8.2%) — with the U.S. only catching up in the 2000s.

Sinclair made mistakes. A crucial one was trying to build a market among professionals with his next model, the QL. The device was as half-baked as the rationale — professionals who might need a spreadsheet would already have an IBM PC — and its failure led to him selling the computer business to a consumer electronics company, Amstrad. Amstrad’s owner, Alan Sugar, had a better instinct for what people needed a computer for: his line of PCW computers was marketed as “word processors”. He sold 8 million of them. (I was no techie but I bought one, and it kickstarted my journalism career. My dad wrote a book with it.)

Me and my Amstrad, 1986

Sir Clive and Jack Tramiel are mostly forgotten figures now, but it’s no exaggeration to say that each probably contributed as much as the likes of Bill Gates, Steve Wozniak and Steve Jobs to the silicon landscape we inhabit today. Both built bridges from the narrow enclaves of ‘enterprise’ and ‘hobby’ computing to something called ‘personal computing’, without which we wouldn’t be writing, distributing and reading this on computing devices. In 1980 some 750,000 devices that could be called personal computers were sold. In 1990 there were 20 million. (Source)

That this period is largely forgotten is to our cost. Sure, Gates was instrumental, but he was late to the game — Windows 95 was arguably Microsoft’s first consumer-facing product. Jobs understood the market better, but Apple’s first Macintosh, launched in 1984, cost $2,500, ten times the price of a ZX computer. Both Microsoft and Apple eventually scaled the divide between office and home device, but it was the Commodore, the ZX and a handful of other much cheaper machines that built the bridge between them.

Sliding the rules

So what do we learn here? What lessons can we apply to the place we’re in today?

Well, first off, we have lousy memories. That the Sinclairs of this world, and what they did, are rarely mentioned shows just how little we understand about how computing got to be the ubiquitous, ambient thing it is today.

Secondly, there’s only so much we can learn through the lens of Silicon Valley’s favourite business consultant. Superficially at least, Sinclair was an early entrepreneur in the Clayton Christensen mould: catering to underserved segments, building products that were affordable, simplified and user-friendly, and making incremental improvements to gain market share. But he was so much more.

Sinclair was obsessed by several things, one of them fatal. He understood two key concepts: price and size. Semiconductor manufacturers would discard chips that didn’t meet their specifications, but many of those rejects would work fine if he designed a product to more lenient specs (“good enough,” in Clayton Christensen’s words). For the rest of his life he would follow a similar pattern: dream up a product, scout out the technology to see whether it could be built, and then pare back the product (and the size and quality of its components) to fit a specific price point and size.

Sinclair’s obsession with the miniature emerged less from notions of disruptive innovation and more, his sister Fiona believes, from the “very confused childhood” they shared, which led her to therapy and him to seeking to impose order on the world. “Everything he makes, everything he designs is to do with order — making things smaller, finer, neater,” she said of him.

The resulting products all leveraged technology to build bridges from niche market to mass market. His calculators were small and stylish enough to be desirable in a way calculators hadn’t been before, but cheap enough for everyone. It’s hard to overstate the impact this had. Schoolkids like me were still being taught how to use a slide rule to make calculations in the 1970s, but when the first kid brought a Sinclair Oxford 100 (£13) into class we knew those log table books were doomed.

Vintage Aristo Darmstadt Slide Rule, Joe Haupt (Flickr)

But he had another infatuation: the new. The Sinclair QL, which effectively killed his computer business, arose out of his reluctance to build on a good thing, throwing the earlier ZX model out and trying something completely new, for a market that was already being catered to and which didn’t care overly about price. Launched in 1984, the QL was discontinued within two years after selling around 150,000 units, and Sinclair was forced to sell.

Sinclair understood the bridge, but in this case he misread it. The bridge here led backwards, returning to a market that already existed, one where users weren’t overly sensitive to price but were sensitive about usability.

A calculator in your stocking

To me the key lesson to be drawn is this: Sometimes there needs to be an intermediary technology, or technologies, that can be pulled together in a new (and cheaper) way to create a new device, possibly even one that doesn’t have a name. There may be no demand for such a device, but that demand can, with good marketing and distribution, be created. By doing so you create a new market beyond the old one.

This might sound easy and obvious, but it’s not. Sinclair was already well-versed in this approach before he applied it to computers. He had built tiny transistor radios, taking the radio out of the living room and into the pocket or bedroom; home-made amplifiers to take hi-fi away from the deep-pocketed connoisseur; calculators out of science labs and accounting departments; digital display watches decades before your smartwatch; and (spectacularly, in terms of grand failures) electric vehicles out of the auto industry and milk-float sector.

Not all of these (OK, not many of these) were successful, but they helped Sinclair develop a good understanding, not so much of invading existing markets à la Christensen, but of creating new ones. No one realistically thought that everyone wanted a calculator until Sinclair suggested it would make a great Christmas present. No one thought a computer would be much fun until he got them into stationers, toy shops and pharmacies. He built a bridge to a place no-one thought existed.

The common view of Sinclair: Daily Telegraph obituary, 18 Sept 2021

We are in a similar position now. We have been sitting on a plateau of new, technology-driven consumer products for nigh on a decade. Interesting technology — materials, blockchain, AI, AR, VR — hasn’t created any real mass market, and that, I believe, is partly due to a lack of imagination and understanding of how bridge-building to new markets works.

I don’t claim to know exactly where those new places are. It could be that some part of AI makes it possible for us to develop a taste for always-new music, say: so instead of us seeking out the familiar when it comes to aural entertainment, we demand AI creates something new for us. (I’ve mentioned before how intriguing I find the likes of udio.com. Here’s a rough stab at “a song in the style of early Yellow Magic Orchestra”, with abject apologies to the souls of Yukihiro Takahashi and Ryuichi Sakamoto.)

This is probably too obvious and too narrow an assessment of the market’s potential. Sinclair’s advantage was that he was a nerd first, but a consumer a close second. He dreamed of things he’d like, he understood the technologies, their availability or lack of it, and he cared deeply about form factor. He brought disparate software, materials, circuitry and functionality together to make something that people either never thought they needed, or never imagined they could afford.

Others took his ideas and made them better, cheaper, more reliable: Casio’s calculators (and calculator watches); Amstrad’s computers; even his C5 electric trike, which I’ll explore more deeply elsewhere, became the opening salvo in a decades-long struggle that brought us e-scooters and, eventually, the Tesla.

It takes an uncommon mind to see these markets and build the bridges between them. We would not be here if it weren’t for folks like Sinclair, who felt people would like these technologies if they were cheap enough and fun enough, and who understood, at least a little, where we might go with them once we had them.

Now it’s time for someone to ask the same questions and build us some bridges from where we are — computing devices that have been good enough for a decade, software that is mostly a retread of last year’s, and an AI that is undoubtedly impressive, but also depressingly flawed and ultimately dissatisfying.

Over to you.

Anticipating the wave train of AI

By | July 3, 2024

We’ve been poor at predicting the real, lasting impact of generative AI.

It’s not for lack of trying: the attempts have ranged from rethinking the way our economies run and how we think about our lives, to treating it as an existential risk, to treating AI as a foundational, or general purpose, technology that will change everything.

Soldiers gathered around a transistor radio to listen to a broadcast. 1966.
Oliver Noonan/Associated Press

I’m not above a bit of grand predicting, and I’ll make some here, but it took me a while to realise why all these efforts sat awkwardly for me. We’re used to tech players having a very clear and reasoned notion about the likely impact of their technologies, even if they’re wrong: we want information to be free; we want to unlock the value in owned property — cars, houses; we want to empower everyone on the planet by connecting them to the internet.

All fine, but AI, or in particular generative AI, seems to lack one of these clarifying bumper stickers. It’s vague, amorphous, grand but somehow silly, as if we’ve given the pub mic to the only person in the room who has never done stand-up.

It’s partly, I suppose, because of the nature of AI. We have found it useful for stuff: identifying cats, driving cars, taking better pictures, sorting search results. GAI, meanwhile, is a different beast. In a way we’re still struggling for a use case that justifies the vast expense of building and running large language models.

But it’s something else. It’s because Silicon Valley has, for much of the past 20 years or so, been built on the idea of disruptive innovation — that technology will always find a way to do something differently, which will somehow find a way to dislodge an incumbent technology.

And not only have we not really worked out what that incumbent technology is when it comes to GAI, we have also been too uncritical in our belief in the concept of disruptive innovation. We need, I believe, to overhaul the concept in order for it to be more useful at this point in our technological progress.

Let me show what I mean by talking about the humble transistor radio.

Transistor radios, 2015, by Roadsidepictures

This cute little fella, smaller than most modern smartphones, is often wheeled out as a great example of disruptive innovation, where a newcomer to the scene spots an opportunity to undercut expensive vacuum-tube radios with cheap and cheerful devices that were “good enough” for consumers. Here’s how the father of disruptive innovation theory, Clayton Christensen, told the story in “The Innovator’s Dilemma”:

In the early 1950s, Akio Morita, the chairman of Sony, took up residence in an inexpensive New York City hotel in order to negotiate a license to AT&T’s patented transistor technology, which its scientists had invented in 1947. Morita found AT&T to be a less-than-willing negotiator and had to visit the company repeatedly badgering AT&T to grant the license. Finally AT&T relented. After the meeting ended in which the licensing documents were signed, an AT&T official asked Morita what Sony planned to do with the license. “We will build small radios,” Morita replied. “Why would anyone care about smaller radios?” the official queried. “We’ll see,” was Morita’s answer.

This isn’t accurate, on several counts. Bell Labs, co-owned by AT&T and Western Electric, had been showing off its transistors’ radio capabilities as early as 1947, though use cases were still being worked out and bugs ironed out. It’s unthinkable that AT&T had not already thought of the transistor radio — its problem had been finding consumer manufacturers to partner with.

Christensen continues:

Several months later Sony introduced to the U.S. market the first portable transistor radio. According to the dominant metrics of radio performance in the mainstream market, these early transistor radios were really bad, offering far lower fidelity and much more static than the vacuum tube-based tabletop radios that were the dominant design of the time. But rather than work in his labs until his transistor radios were performance-competitive in the major market (which is what most of the leading electronics companies did with transistor technology), Morita instead found a market that valued the attributes of the technology as it existed at the time — the portable personal radio. Not surprisingly, none of the leading makers of tabletop radios became a leading producer of portable radios, and all were subsequently driven from the radio market.

The little gadget that could

In fact, Sony weren’t actually the first to introduce a small transistor radio, nor were they the first to recognise the transistor’s commercial potential. Already by 1950 the idea of the transistor radio was being described as the “find of the century” for commercial use; three years later the first prototype transistor radios were appearing in the wild, and one or two journalists were already writing about them. In January 1953 freelance science writer Leonard Engel wrote a piece headlined Curtain Lifts on Little Gadget Likely to Revolutionise Radio:

Star Weekly (Toronto, Ontario, Canada) · Sat, 10 Jan 1953 · Page 10

The delay wasn’t only due to indifference. Commercialisation was held back partly by technical constraints — engineers had yet to master how to mould germanium, a key material, into the special forms required for transistors — and partly by the fact that most transistors were “going to the military for secret devices.” (Transistor technology originated in Allied research into radar during the war.)

And while Sony’s Morita may have been quick to recognise the opportunity, the first to market was Texas Instruments, which partnered with Regency to launch the TR-1 in October 1954. They weren’t cheap: adverts at the time showed them retailing at around $100 — roughly the price of an iPhone 15 Pro Max in today’s money.

The Indianapolis News (Indianapolis, Indiana) · Mon, 18 Oct 1954 · Page 31

And while cheaper versions eventually brought the price down to $5 or less, Sony’s initial offerings such as the TR-63 cost more or less the same as the TR-1. (See The Transistor Radio in Nuts & Volts Magazine.)

This reality doesn’t fit the myth Christensen heard, and it doesn’t fit the disruptive dogma still powering Silicon Valley.

‘Destroying the social life of mankind’

But it’s not just the chronology of those first few years that is skewed. The focus on disrupting market incumbents misses out so much.

The introduction of the transistor radio, and its rapid spread across the globe, had a profound effect on mass communication, including the building of a mass infrastructure to support the radio’s reach and influence. In 1950 there were 46 countries without a single radio broadcasting transmitter. By 1960 that number had fallen to 14. In 1950 more than half of the 159 countries surveyed had fewer than 10 radio receivers per 1,000 people; by 1960 the number of such countries had halved. The number of radio receivers in the world doubled between 1950 and 1960 — about half of them in the U.S. (Statistics on Radio and Television, 1950–1960, UNESCO, 1963). The numbers rose even faster as the Beatles gained popularity: annual sales of transistor radios in the U.S. almost doubled, to 10 million, between 1962 and 1963. (Source: Transistor radios: The technology that ignited Beatlemania — CBS News)

UNESCO had realised early on that radio could be a powerful tool for education — particularly given illiteracy, which was as high as 85% in some countries. (UNESCO Technical Needs Commission, Recommendations of the Sub-Commission on Radio, Aug 1948). War, too, had already convinced many countries of radio’s awesome political power: onetime BBC Governor Sir Ian Fraser had called it “an agency of the mind, which, potentially at least, can ennoble or utterly destroy the social life of mankind” (Broadcasting in the UK and US in the 1950s, edited by Jamie Medhurst, Siân Nicholas and Tom O’Malley, 2016). Indeed, both British and German governments had pushed for cheap radio sets to ensure their propaganda could spread as widely and quickly as possible during the war. (Source: Television and radio in the Second World War | National Science and Media Museum)

Radio stations were a required target for any self-respecting coup plotter. Edward Luttwak, in his Coup d’Etat: A Practical Handbook (1968), prescribed seizing the key radio station and establishing a monopoly on information by disabling any others, using “cooperative technicians” where possible. He ascribed part of the failure of Greek King Constantine II’s counter-coup in late 1967 to the fact that the government radio station, Radio Larissa, reached only a fraction of the population because of its weak transmitter and unusual wavelength.

A Wall of Tinny Sound

Transistor radios, in other words, changed the way people got information, what they listened to, and their habits. The TR-1 was released in October 1954 into a fast-fermenting musical world: Bill Haley released his Shake, Rattle and Roll the same year. By 1963, the now untethered average American teen listened to the radio for slightly more than three hours per day, according to Steve Greenberg’s piece on the Beatles. Bruce Springsteen believed the power and importance of his transistor radio could not be overstated: “I lived with it, tucked it in my schoolbag during the day, and tucked it beneath my pillow all hours of the night.” Radio became a “medium of mobile listening”, in the words of academics Tim Wall and Nick Webber (PDF), and music producers and songwriters adapted accordingly.

Phil Spector, for example, built his “wall of sound”, filling the recording studio with mini-orchestras to create a big, fat sound that was then fed through special “echo chambers” (see Lance LeSalle’s answer to the question What was Phil Spector’s ‘wall of sound’? on Quora). Spector’s innovations have fed pop music ever since.

News spread faster and more immediately: initial reports of JFK’s assassination in November 1963 were largely heard on transistor radios, in fields, on buses and in the street. James Sperber described how he heard from a friend, who had smuggled a radio into their school in San Diego, that JFK had been shot. Meanwhile the teachers, summoned one by one to learn the news from the principal, decided not to burden their pupils with it, unaware that many already knew.

The power of the transistor had been proven: from then on the race was to find more uses and to scale the technology. Televisions, hearing aids and computers soon followed. Famously, IBM president Thomas J. Watson Jr. bought 100 Regency radios and ordered that the company’s computers immediately shift to using only transistors. “If that little outfit down in Texas can make these radios work for that kind of money, they can make transistors that will make our computers work, too,” he told executives. (Source: Crystal Fire: The Invention of the Transistor and the Birth of the Information Age)

The Soviet Union’s secret weapon

All this to say: the most disruption caused by the transistor radio had very little to do with dislodging the old, expensive valve radio manufacturers. It had to do with seismic shifts in behaviour: in consumption, movement, accessibility and attention. It’s not hard, once you’ve absorbed the above tale, to draw a direct line from the transistor radio to the smartphone — via the Walkman, the CD Walkman and the iPod — and to see a certain inevitability. Miniaturisation and mobility became the bywords of consumption: so much so that the Soviet Union earned much-needed foreign currency by building the world’s smallest radio, the Micro, in the 1960s. It became all the rage in Europe and the U.S., especially after Nikita Khrushchev gave one to Queen Elizabeth II (See: Soviet Postcards — “Micro” miniature radio, USSR, 1965–69).

Soviet Amsa Micro radio, 1968 (Micro Transistor Radios, defunct website)

We tend to think of disruption in terms of what it does to incumbents — companies, industries, workers and so on. Really we should be looking beyond that: much is missed if we assume the first wave of a tsunami is the last. The first is often not the most powerful: what follows is a ‘wave train’, the timings between waves uneven, including edge waves, bores and seiches. We lack the imagination to predict, and the tools to measure, these serial disruptions that follow an innovation.

It is, I agree, not easy to think through the potential long-term impacts of a technology, especially one like AI. But it might help for us to at least inform ourselves about the technology, its capabilities and its limits. I am an avid user of whatever tools I can lay my hands on, both commercial services and open source offerings. But we are too vulnerable to binaries, critical rejection or unquestioning embrace, a split the transistor radio story shows is both predictable and unhelpful to understanding the wave train.

Take, for example, hallucination. Loyal readers will know I’m not impressed by GAI’s ability to be honest. Indeed I would argue, 18 months in, that we’re no closer to solving the problem than we were before. I won’t bore you with the details but every tool I’ve used so far has failed basic robustness tests. If we choose to use these tools to create supposedly factual content, we will accelerate our decline into content mediocrity where every press release, every article, every email, every communication that might conceivably have been done with the help of AI will be dismissed as such.

Spikes and spin

Pretending this is not an issue is impossible, but the trick now seems to be to say that it’s just a blip. Despite its efforts to put a positive spin on the results, McKinsey’s recent survey on AI was forced to acknowledge that

Respondents to the latest survey are more likely than they were last year to say their organizations consider inaccuracy and IP infringement to be relevant to their use of gen AI, and about half continue to view cybersecurity as a risk.

That’s putting it mildly. The percentage of respondents who believed that inaccuracy was a risk rose from 56% to 63% between 2023 and 2024, making it their biggest concern out of the 10 or so on offer — the next being IP infringement and then cybersecurity, all three deemed relevant by more than half of respondents. And sure, respondents are looking more closely at mitigating the problem, but the number doing so is only around half of those who consider it relevant (32% last year, 38% this year).

I’m sorry, but that to me essentially says a good chunk of those asked believe the technology is dangerous. And these people are apparently near the coalface. The 1,363 respondents are described as “representing the full range of regions, industries, company sizes, functional specialties, and tenures.” Nearly three quarters of them said their organizations “had adopted AI in at least one business function” and two thirds said their organizations “were regularly using gen AI in at least one function.” So for them to say in those numbers that GAI is making stuff up is the headline.

And yet the title of the paper is: The state of AI in early 2024: Gen AI adoption spikes and starts to generate value. The race to adopt AI, the race to sell AI-related services and consulting (which of course is what McKinsey is doing here) will unfortunately override the annoying voices in the boardroom saying, what are we doing here exactly?

Take the IP thing. There are some extraordinary, creative GAI tools out there for making images and videos, music (try Udio), sound effects and text-to-voice (try ElevenLabs), but once again it’s inevitable that the companies behind these tools are either bought out by the bigger players or bankrupted by intellectual property litigation. Those that survive will either leverage their own IP or retreat to a much more cautious approach, producing bland stuff that doesn’t fool anyone. (Just as it’s now relatively easy to spot visual content created by AI, some people are already able to distinguish AI-produced pop from the real thing, Rick Beato’s son among them.)

So what are we left with? I’m told that LLM tools that use RAG, or Retrieval-Augmented Generation, which confines the corpus an AI draws on, are proving useful in some verticals, such as helping call-centre support staff better help customers. That’s great, but I’m not holding my breath: my experience with RAG has not been a positive one. So I guess we’re left with writing faster, better emails, product descriptions and the like, none of which sounds very exciting to me, nor particularly disruptive.
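(For the uninitiated, the mechanics of RAG are simple enough to sketch: retrieve the handful of passages from an approved corpus that look most relevant to a question, then hand them to the model as context and tell it to stay within them. The snippet below is a minimal, illustrative sketch of that loop rather than any vendor’s implementation; the toy corpus, the word-overlap scoring and the function names are all placeholders standing in for real embeddings and a real LLM call.)

```python
# Minimal, illustrative RAG (Retrieval-Augmented Generation) loop.
# Everything here is a placeholder: real systems use vector embeddings
# instead of word overlap, and send the final prompt to an actual LLM.
from collections import Counter

# The confined corpus: the only material the model is allowed to draw on.
CORPUS = [
    "Refunds are issued within 14 days of a cancellation request.",
    "Premium subscribers can pause their plan for up to three months.",
    "Support is available by chat between 09:00 and 17:00 GMT.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: shared-word count (stand-in for embedding similarity)."""
    return sum((Counter(query.lower().split()) & Counter(doc.lower().split())).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most relevant to the query."""
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the prompt a production system would send to its language model."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    print(build_prompt("How long do refunds take?"))
```

The design point is simply that the model’s answer is anchored to whatever the retrieval step returns, which is why the quality of that step matters so much.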

And, indeed, that is the problem. Companies hate to spend money on customer support, and so any money-throwing at GAI will be in the hope of saving more money later. Use cases like this will be the low-hanging fruit, because it’s relatively easy to sell, replacing one underpaid group of human drones with non-human ones.

Underwhelming. GAI may well have to rethink itself to avoid another AI winter or survive a race to the bottom, where every product and service has an AI feature that ends up being as little used as Siri or Clippy.

But if we’ve learned anything from the transistor radio, it’s that we have learned very little from transistor radios.

‘A revolutionary device’

Take for example what happened both before and after its launch. While Christensen’s founding myth of the cheap transistor radio was off, he was right that there was widespread scepticism about the transistor’s potential, with both academia and engineers only gradually being won over:

After indulging themselves for a year or two in the resentful skepticism with which academia generally greets revolutionary new concepts, physicists and electronics engineers gradually warmed to the expansive new possibilities offered by semiconductor electronics. By 1953, The Engineering Index, an annual compendium of scholarly monographs in technical fields, listed more than 500 papers on transistors and related semiconductor devices. A year later there were twice as many. (Source: The Chip: How Two Americans Invented the Microchip and Launched a Revolution — T.R. Reid)

But the transistor radio’s success quickly threw up another problem: scaling. Even as pocket radios were jumping off the shelves in the late 1950s, “the tyranny of numbers began to emerge. Enthusiasm gave way to frustration, even desperation,” according to Reid. The first transistor radios had between two and seven transistors; the problem was finding an economic and reliable way to cluster the thousands, even tens of thousands, necessary for the devices that stood to benefit most from the transistor revolution. A naval aircraft carrier had 350,000 electronic components; a single computer like the Control Data CD 1604 contained 25,000 transistors, according to Reid.

Solving that problem led to the integrated semiconductor circuits of our era, and the rest, as they say, is history. This story arc, however, is left out of the paeans to disruptive innovation, despite it being one we should be familiar with. A great idea comes along and takes a few visionaries (or good business minds) to see how it might be used. That in turn sparks a movement whose expansion is checked only by some limitations of the technology — usually related to scale. Solving those problems leads to even greater expansion, and we are into that wave train of change.

This arguably describes the evolution of AI, and even, within AI, the evolution of particular approaches such as neural networks and, more recently, generative AI. At any point someone predicting the long-term impact of the transistor radio might have looked foolish. Journalist Leonard Engel had been writing about transistors as far back as 1947, but he was a rare voice writing excitedly about their potential. Even he was careful about when this revolution would start. In January 1953 he wrote:

A radio small enough to fit into the palm will be available in a few years. Your portable TV set will be the size of a small typewriter. And for the hard-of-hearing there will be a hearing aid the size of a matchbox which will run for a year on a single set of batteries. What’s even more important, air travel will be safer because better radio aids can be found. These are but a few of the revolutionary devices now possible thanks to a recent invention — a tiny device the size of a kernel of corn called the transistor.

He relegated to the bottom of his piece the real visionary:

Dr. E(lmer) W(illiam) Engstrom, head of the RCA laboratories, says that in less than five years transistors have come as far as vacuum tubes did in 20. He predicts that in another few years this amazing little gadget will be used not only in home radios and TV sets, but in all sorts of revolutionary devices that can scarcely be imagined today.

That last sentence stands the test of time, and it offers a helpful blueprint for thinking about AI. Not the self-serving (and misdirecting) noise about existential threats, not the banal attempts to make customer service, legal research or content creation simpler and cheaper, but what new things we might do: how society, health, war and creativity might change, and how AI may impact our lives in the same way that little transistor radio did.