WTF happened to our music?

By | October 17, 2024

Accusations that some big-name musical acts have been miming their vocals raise some tricky questions.

Popular music is in danger of becoming a parody of itself. Where once we dreamed of a fairer, more equitable landscape, big money is ever more concentrated in the hands of a shrinking group of artists and record companies.

The problem: at least one of those big names has been accused of miming their vocals, raising some uncomfortable questions. Not the least of which: are we being suckered into paying insane amounts for a one-off, premium creative experience which turns out to be anything but?

For sure, it’s not the only problem in the music industry. But it does illustrate the absurd extremes to which the industry has gone to preserve its hegemony.

Let’s take a closer look.

First off, the industry has defied the predictions — made by David Bowie, among others — that a long tail would evolve which benefitted home-spun talent, broke the monopoly of the major record labels, and led to a new ‘artistic class’.

But, after exhausting all other options, the record companies, and streaming companies like Spotify, have been clever. Twenty years ago the industry was on the ropes. Napster looked to have crowbarred open recorded music, ripping the digital songs on a CD into MP3 files small enough to be freely distributed, freely shared and freely downloaded. Who would ever pay for recorded music again?

It took a while, but a way was found. The question was asked: why do people want to own music? The answer was: they don’t, if they can get access to that music whenever they want. Enter streaming. So then it was just a question of cost. Well, two questions. How much would users pay, and how little would artists settle for?

The answer to the second question was easier: it’s not how much the artists would settle for, but the record companies. And by the time the question needed an answer, that was really only a discussion among three players as the others had been gobbled up: Sony, UMG and Warner. Which made agreement easy. The three have exerted constant pressure on the streamers to pay more royalties to the big artists and less to the long tail.

This brings in decent money. But the real money is elsewhere: performance. The music industry has shifted back to what it was before the introduction of recorded music.

This is how big players make their money. Live music accounted for more than half of overall music industry revenue in 2023, rising 26% year on year (more than recovering from the pandemic.) But most of this money is driven by big names — Taylor Swift, Beyoncé, Coldplay, Elton John. Taylor Swift grossed more than $1 billion from her Eras Tour. Eagles made $70 million this year from tours.

But it still entails a cost — especially on the artists. Playing sets every night can be wearing. So within the ‘live’ sphere there have been innovations.

One of them is not to tour. It’s called a residency, and U2 grossed $230 million from theirs at the Las Vegas Sphere. Eagles have just started theirs.

Another is not to sing. A British musician called Fil has been running recordings of live performances through software that shows the exact pitch of each note, and he’s demonstrated that a number of artists, including Eagles, are actually miming some, if not all, of their vocal performances. (He’s not the only one: Kiss, Red Hot Chili Peppers, Dua Lipa, Britney Spears and others have all been accused of miming instruments and/or vocals.)
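The core of that method is simple: extract the fundamental pitch of each sung note and measure how far it sits from the equal-tempered scale (and from earlier performances of the same song). Here is a minimal sketch of the idea in plain Python, using a basic autocorrelation pitch estimator on a synthetic tone; Fil’s actual tooling isn’t public, so none of this claims to be his implementation.

```python
import math

SR = 44100  # sample rate in Hz

def sine(freq, seconds=0.2, sr=SR):
    """A pure test tone, standing in for an isolated vocal note."""
    return [math.sin(2 * math.pi * freq * n / sr) for n in range(int(sr * seconds))]

def estimate_pitch(samples, sr=SR, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency by picking the autocorrelation
    peak over lags corresponding to a plausible vocal range."""
    best_lag, best_score = None, float("-inf")
    for lag in range(int(sr / fmax), int(sr / fmin) + 1):
        score = sum(samples[i] * samples[i + lag]
                    for i in range(len(samples) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sr / best_lag

def cents_off(freq, ref=440.0):
    """Deviation in cents from the nearest equal-tempered note (A440 tuning)."""
    semitones = 12 * math.log2(freq / ref)
    return (semitones - round(semitones)) * 100
```

On a clean 440 Hz tone this lands within a few cents of A4. A genuinely live vocal drifts note to note; a mimed one tracks the studio recording almost exactly, which is the tell this kind of analysis looks for.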

If you’ve coughed up several hundred dollars to watch and hear them play, you might be a little upset. You expect a few elements to be pre-recorded, especially if the artist is also doing some complicated dance moves, or flying through the air, but as far as I can see, most of the Eagles frontline were standing in a row (like Pretty Maids, you might argue). No trapezes or twerks in sight.

Eagles pic

Of course, Abba have presented what many think is the next step — holographic avatars — but I think we’re still some ways off everyone doing that. But it does open up possibilities for posthumous performances. Or at least to further blur the lines between what is a real performance and what isn’t, addressing the issue of whether Eagles can continue to make money ‘performing’ after they have all gone to that Hotel California in the sky.

What Fil unearthed, and continues to unearth, is troubling. He tells me that no one has gotten back to him to complain, or to sue him, and his methodology is transparent. It suggests that some artists see performing live as a pure money-making event, and the definition of ‘live’ as fungible. (It also explains why such concerts usually ban audience members from shooting video or recording audio on their phones; it was these clandestine recordings that allowed Fil to compare ‘live’ with a recording of a previous concert.) Fil, it should be said, does not see this as the norm across the industry — his YouTube channel includes a lot of genuinely live performances — and he told me: “I think within the professional community miming is not looked upon too fondly.” (I noticed some of Fil’s analysis has been removed because of a copyright claim.)

Vicious cycle

In short, the industry has caught itself in a vicious cycle it may not be aware of. If the big money moves from audio to performance, however loosely you define it, then you now have a bottleneck in the supply chain, namely the artists themselves. This doesn’t matter too much for someone like Taylor Swift, though it has made her a target for crazies, but it does for those who aren’t able to hit those high notes anymore. It’s in some ways understandable: Eagles last had a hit single in (checks notes) 1980, with I Can’t Tell You Why, and the median age of the current line-up is 76. (But this shouldn’t be an excuse: Fil points out that other artists like Roger Daltrey, who’s older than any surviving Eagle, can still belt out The Who’s old hits pitch perfect, in the original key.)

In short, live takes its toll, and if that’s the only way to make money fans may be increasingly unwilling to shell out if they find out they’re not really watching what they think they’re watching. It might not deter many music fans, but it’s early days: we still don’t know how many artists do this. Fil has only so many hours in a day, but I feel we haven’t heard the end of this. A collapse in confidence could lead to a collapse in demand for this ultra-profitable revenue stream. A ticket for Eagles could cost between $300 and, well, the limit. A lower-level suite for Las Vegas on December 6 would cost $35,139.82. (And don’t get me started on TicketMaster’s dynamic pricing.)

It’s a long way from where we thought the music industry was going at the turn of the century. Where we thought the internet and digital would more equitably distribute the value of an industry, it has done the opposite. Where did we go wrong? I suspect it has something to do with the same errors that led us to think that social media, in its original meaning of a Web 2.0 that flattened the barriers to entry of creating and distributing content, would spread the wealth around.

As things get easier to produce, an abundance of choice triggers Barry Schwartz’s ‘paradox of choice’. While many of us love the endless array of music we can listen to, and love supporting individual artists on Bandcamp, a lot of us get confused and overwhelmed and gravitate towards the most prominent. In this way record companies become more important, because they are able to promote their artists across the full spectrum of media. The number of smaller artists grows, but the funnel between them and a decent income, let alone a big one, feels ever tighter.

Long tail

Meanwhile, the long tail stretches, ever longer and thinner, across the floor. Bandcamp, the most prominent marketplace for self-produced music, has more than a quarter of a million ‘active stores’ — meaning acts selling their music (in both digital and analog varieties, along with merch). That’s a three-fold growth over 2022.

Bandcamp is great for artists, but its vibrant community is more of an anachronism than a view of the future. Followers are often invited to name their own price for whatever they buy, and are invited to ‘listening parties’ with their favourite artist. Some artists offer subscriptions, where fans get all the albums released in a particular timeframe.

The company has changed hands twice in two years, and only half of its staff have survived the moves.

End of Scarcity

The other effect of more democratic means of production and distribution is this: the output is no longer scarce. Napster may no longer be with us, but it taught us that a CD or digital album wasn’t as valuable as we thought, and its rise coincided with the rapid decline of CDs. We stopped thinking of music as a possession and more as a service. When you buy a radio you don’t expect to own the sound coming out of it. Why should a smartphone be any different? Enter streaming services, where you pay for everything but, apparently, don’t care that you own nothing.

So where is this all going?

But I think we’ve seen in the music industry the bifurcation I mentioned in the last piece — where communities of artists survive through a closer connection to their audience, while at the other end we see a widening of that gap (you’ve got to be pretty detached from your paying audience to mime your way through your performance, hoping a ban on cellphone recording will keep your secret safe).

If Bandcamp, or something like it, survives, then I think there’s hope that independent artists (including my own humble other self) still have a chance to make a modest income. That’s by no means assured. The options for playing live to promote your act are shrinking: artists are playing half the shows they were in 1994, according to the Music Venue Trust, because doing so is too expensive.

AI

AI is the next wave, and it’s clear that the likes of Sony think it might make them richer. I’ve tried it out and I have to admit it comes up with some quite listenable stuff (here’s Udio’s effort at an early Yellow Magic Orchestra sound.) But it’s the musical equivalent of the awful AI art we’re drowning in. We will quickly learn to differentiate between AI and real music and woe betide anyone who tries to sell the former masquerading as the latter.

But done well it might create its own musical niche, and we can’t afford to be precious about it. Music has been artificially generated since the old piano rolls of the late 19th century. Algorithmic music dates back to the 1960s, and even I was using a primitive sequencer in 1982. Bands like Depeche Mode, Echo and the Bunnymen and Orchestral Manoeuvres in the Dark used drum machines (or tape recordings of drum machines.) Nowadays many songs are ‘composed’ of pre-fab loops woven together. Artificially created music is something we’ve long embraced.

It’s the songwriting that we probably hope will still remain a human-centric endeavour, the application of art. But even there we are bound to be disappointed. Lejaren Hiller and Leonard Isaacson composed a suite for string quartet using the ILLIAC I computer in 1957. Sony’s Computer Science Laboratory in Paris created an algorithm in 2002 that could resume a composition after a live musician stopped playing. The likelihood is that it will lead to a ‘just good enough’ approach to using music in ads, TV, airports, lifts, jingles, phone trees, in the process putting a generation of jobbing music creators out of work.

Money for God’s Sake

The bigger lesson? Art is created by artists, but while we claim we love art and want to support it, the people financing the production and distribution of it are more concerned with Mammon.

The music industry is probably a canary in the coalmine. It was the first major industry to be ‘disintermediated’ by digital/internet, and things looked bleak. But it tackled its problems and is now more profitable than ever, while the actual number of people who are able to make a living from it has shrunk. (These two are not unconnected: the big three have successfully pressed Spotify to pay more to the bigger artists and less to the others.)

So we have to assume that these trends will continue, and that they will apply to other industries too. Books have followed a similar pattern; podcasts are currently doing the same thing to radio/news/audio. Global podcast ad spending is likely to hit $4 billion this year, with more than 400 million podcasts. But the top 500 account for 44% of podcast ad spend. While podcasts aren’t as concentrated as music and the written word, it’s heading that way. I would imagine newsletters following a similar trajectory, as I mentioned in the last piece.

So where does this leave us? Music is always going to find a market, and the great thing that digital + internet has given us is the power of discovery. It’s just that not as many of us as we thought might be able to make a living from it: only 4% of the 200 million ‘creators’ worldwide make more than $100,000 a year. A YouTuber with 20,000 views per day earns a little over the U.S. poverty line. That means that 97.5% of YouTubers don’t make enough to reach that line.
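As a rough sanity check on that 20,000-views-a-day claim, here is the back-of-envelope arithmetic. The RPM figure is my own assumption (creator ad RPMs vary wildly by niche), not something from this piece; the poverty line is the 2023 US HHS guideline for a single person.

```python
# Back-of-envelope check of the "20,000 views a day" claim.
# Assumed figures (not from the article): an RPM of roughly $2 per
# 1,000 views, and the 2023 US HHS poverty guideline for one person.
VIEWS_PER_DAY = 20_000
RPM_USD = 2.0               # assumed ad revenue per 1,000 views
POVERTY_LINE_USD = 14_580   # 2023 guideline, single person, 48 states

annual_revenue = VIEWS_PER_DAY * 365 / 1_000 * RPM_USD
print(f"${annual_revenue:,.0f} a year; above the line: {annual_revenue > POVERTY_LINE_USD}")
```

Under those assumptions it comes out at $14,600 a year, i.e. only just above the line, which squares with the claim.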

AI will only make that worse. Record companies have made clear they take AI seriously — both by exploring its potential and by coming after companies that might challenge them. I don’t imagine AI will follow a predictable path: none of the prior disruptions in music this century did. But I don’t see anything in there to suggest that original, heartfelt, authentic music will come out on top.

My suggestion? Dig around Bandcamp and look for some new stuff to listen to and support. My belief is that this is what scares music industry execs — a broad church of music lovers with catholic tastes. Rob Stringer, CEO of Sony Music Entertainment, acknowledged to Bloomberg recently that the power of the algorithm in streaming, pushing similar songs to users to keep them listening to the same kind of stuff, had been hugely profitable. But he saw the downside:

I listened to everything because the BBC had a government mandate to play you every type of music. So I was kind of open to that experience. Whereas I think the disadvantage with automated taste is that you end up, as you said, being funneled the same music.

I’m willing to give him the benefit of the doubt here, where he argues that this is about art, not money (he was an early Clash fan, so I have to). But to me good algorithms, ones that tried to dig deeper into what you liked rather than just serving up as close a match as possible, could unravel the streaming model quite quickly. You would quickly rub up against the limitations of Spotify and seek out more esoteric fare on the likes of Bandcamp.
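One standard way an algorithm could “dig deeper” rather than chase the closest match is maximal marginal relevance (MMR) re-ranking, which trades a little similarity for novelty. This is a toy sketch with made-up scores, not anything Spotify actually does.

```python
def mmr_rank(candidates, relevance, similarity, lam=0.5, k=3):
    """Greedy maximal-marginal-relevance re-ranking: at lam=1 this is a
    pure best-match list; lower lam penalises picks that resemble what
    has already been chosen, nudging the list toward more esoteric fare."""
    picked, pool = [], list(candidates)
    while pool and len(picked) < k:
        def mmr_score(item):
            redundancy = max((similarity[item][p] for p in picked), default=0.0)
            return lam * relevance[item] - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        picked.append(best)
        pool.remove(best)
    return picked

# Toy catalogue: two near-identical hits and one left-field track.
relevance = {"hit A": 0.90, "hit B": 0.85, "deep cut": 0.50}
similarity = {
    "hit A":    {"hit A": 1.0,  "hit B": 0.95, "deep cut": 0.10},
    "hit B":    {"hit A": 0.95, "hit B": 1.0,  "deep cut": 0.10},
    "deep cut": {"hit A": 0.10, "hit B": 0.10, "deep cut": 1.0},
}
print(mmr_rank(relevance, relevance, similarity, lam=0.5, k=2))
```

With lam=0.5 this picks the top hit and then the deep cut, where a pure-similarity ranker would have served up the two near-identical hits.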

Hurry though. Bandcamp itself feels like a commons living on borrowed time.

a way out of subscription hell

By | October 17, 2024

If your revenue model relies on misleading, impoverishing and gaslighting your customers, then you probably should rethink your model.

That would seem to be a reasonable statement to make, and one most businesses might agree with. But the reality is that companies are falling over themselves to charge people for things they didn’t ask for and to keep doing it.

In the UK alone, consumers in 2023 spent £688 million (US$912 million) on subscriptions they didn’t use, didn’t want, and didn’t know they had — nearly double the previous year’s figure. These include people who didn’t realise a subscription had auto-renewed, who signed up for a trial but forgot to cancel before the paid bit kicked in, or who thought they were making a one-off purchase. (I’m only using UK figures because they were the most credible ones I could find. It’s highly likely it’s the same elsewhere.)

Google searches for ‘cancel subscription’, 2004–2024

Welcome to the subscription economy, which should more rightly be called the subscription trap economy. For the past decade it has been the business model du jour. And, possibly, it may be on the way out.

California has led other US states in implementing stricter auto-renewal laws; the FTC is cracking down on “illegal dark patterns” that trick customers into subscriptions; and in the UK, rules that limit predatory practices will likely come into force by 2026. The FTC has taken Adobe to court, alleging it had “trapped customers into year-long subscriptions through hidden early termination fees and numerous cancellation hurdles.”

Enshitification and the doctrine of Subscriptionism

I’m pleased they’re doing all this (and the EU, India and Japan also have legislated in this direction). But I’m not holding my breath. Subscriptions have been such a boon for companies they’re not likely to ditch them, or the dark arts around them that help make all this so profitable.

While I’m tempted to throw all this into the “enshitification” dumpster, it’s not quite the same. Enshitification means gradually eroding the quality of a free or near-free product in order to prod users to pay, or to make the service more profitable in other ways (for advertisers, for example). Subscriptionism, for want of a better term, is what happens after the enshitification has been successful. The customer throws up their hands, signs up for whatever gives them less enshitification, and then the company ensures, Hotel California-like,

  • customers never leave because it’s too painful to figure out how to unsubscribe;
  • they don’t quite realise how much they’re paying (or they never realise they are paying);
  • the paid service itself gradually enshitifies, pushing the user up to the next tier, which is merely a more expensive version of the original tier they were paying for (think Meta, Netflix, Max (ex HBO Max), Amazon Prime, X etc);
  • rinse and repeat.

So no, I don’t think that companies will really make it easy for you to unsubscribe. It’s simple economics. Acquiring a new customer is hard, which is why it’s so easy to sign up. Churn, therefore, is the enemy. Companies will spend a lot of money on trying to keep you aboard. The FTC found that more than three quarters of subscription services used at least one ‘dark pattern’ (think misleading wording or buttons, etc), while EmailTooltester found that Amazon Prime was the worst offender, averaging 7.3 dark patterns per cancellation.

I can confirm this last one, and it provides a good illustration of the dark arts. When I got through to the right page and clicked on Cancel Prime, I was greeted by a confusing page. It took me a while to figure out whether or not I had actually cancelled. It’s like a game of Where’s Waldo/Wally? I finally found out by looking at the bottom right corner:

Amazon Prime cancellation page

So no, I haven’t. I don’t think so. Someone somewhere was paid big bucks to discombobulate me enough to think I had already canceled.

There is a correlation between such dark patterns and the problems Amazon is facing. Subscription, according to RetailX, an analyser of retail data, accounted for 7.7% of Amazon’s total revenue in 2023, more than twice the 3.15% share reported in 2014:

However, the subscription share of Amazon’s revenue has plateaued in recent years. While it doubled between 2014 and 2018, when it made up 6.36%, it has risen by just under 0.5 percentage points in the following five years.

So while Amazon’s subscription revenue is increasing it’s nothing compared to Amazon Web Services, online stores and retail third-party seller services. If subscriptions were so damned wonderful, why would a company work so hard to not let you leave the shop?

The Dark Art of Notificationism

This is not a one-off. Researching this topic I came across Rocket Money, which is supposed to help me figure out all my subscriptions and cancel those I don’t want. Or something like that: it only seems to work in the U.S., so I gave up. But by then I had an account, and getting out wasn’t going to be so easy. I tried to unsubscribe from emails but was greeted by a page where radio buttons had been pre-selected for 20 different kinds of notification, and which required me to click on every single one to remove myself. When I tried to delete my account I found the option hidden at the bottom of a Profile menu, which required me to click on an unlabelled gear icon and then scroll down beyond the visible pane to find the delete account option. Even then it still wasn’t clear that I had actually removed my account: trying to log in to check required me to set up two-factor authentication, via an app or SMS, first. So I left it there, and I will probably be getting Rocket Money notifications for the rest of my days.

The foxes are indeed in charge of the hen house. Rocket Money seems keener on adding me to their subscription world rather than help me unsubscribe from the others. Indeed, it’s interesting to watch the businesses who tout themselves as kings of this domain. Recurly, for example, presents itself as the “leading subscription management platform” and says it wants to help its customers “create a frictionless and personalised customer experience.” Amusingly, it lists as its clients six companies, five of which have themselves been criticised for cancellation issues.

The company is upbeat in its 2024 Subscription Trends, Predictions, and Statistics:

Over the past year, Recurly has seen a 15.7% increase in active subscribers, and when comparing this to the figures from 2020, the growth surges to an impressive 105.1%.

But a closer look reveals a somewhat different picture.

  • For the past four years the monthly rate at which new subscribers are signing up has been falling, from 5.3% in 2020 to 3.7% last year.
  • Trials, the most common form of getting people to sign up, are not converting to paid subscriptions: in 2020, the trial-to-paid conversion rate was about 60%; in 2023 it had fallen to 50%.

And I wonder how many of those people actually realised they were signing up, since in most cases companies require some form of payment mechanism to get those trials. The Paramount+ subreddit (a Recurly client) is littered with painful stories of folk who didn’t realise they hadn’t canceled their trial. One guy lost $250. In its white paper Recurly recommends a trial period of no more than 7 days before payment kicks in. Once again: if the product was so good, why do you feel the need to pressure the user into a financial armlock so early on?

I don’t have an easy answer to all this. Everyone seems to be in on it: try to cancel a New York Times subscription (call during US office hours, wherever you signed up). I’m still trying to find out why I’m still paying for a newsletter which moved from Substack to their own platform, without any apparent paper trail. While I managed to get one of the payments stopped by my neo-bank, I was told they couldn’t block any future transactions from the provider. (Paypal is much better at this, allowing you to cancel a recurring payment without any fuss and I’m going to shift everything across to that. I would advise the same.)

My bigger worry is this: as everything becomes subscription-based, two things are going to happen:

We’re going to lose access to things we care about

Most of the time, subscriptions don’t leave you with anything. The most civilised version is that practised by the likes of Tinderbox, one of my favourite Mac apps, which charges an annual fee entitling you to updates during that year, which are yours to keep if you decide to stop (or pause). But that’s the exception. Those carefully curated playlists on Spotify? All those emails on Gmail? Gone, unless you’ve downloaded all your emails first, and, in the case of Spotify, unless you use illegal downloaders.

We’re going to find it harder to assign a value to things

I have used Setapp for several years, a subscription-based app store for Macs and iOS, with a wonderful array of apps. But at what point have I overpaid? I’m not sure. Depending on how you calculate it I’ve saved $1,920 or lost $898.

The good guys are going to have to find a better way

I love subscribing to people whose work I love. From an Australian creator of amazing dioramas to a mad British scientist in rural France, I offer meagre sustenance via Patreon. Until a recent cull I was a paid subscriber to more than a dozen Substack newsletters. And this is the thing: while these methods are a great way to support independent content, they don’t scale. When we as consumers add up how much we’re spending on subscriptions we realise it’s a lot more than we thought, and we have to take action. Sadly that means opting for a $10 monthly streaming service, while cutting two $5 per month Substack newsletters. It’s the little guy who loses, unfortunately. They’re doing great work, but so are a lot of others, none of whom can really scale beyond the one-man-and-a-dog model to compete with the behemoths.

Modeling porn: A Boulder Creek diorama

So we’ll see changes. We have to. Substack writers will have to band together and form partnerships, offering discounts or freebies to subscribers. Bundling will become the norm, both for big players and small. The New York Times bought The Athletic and offers a financially attractive bundle. It wouldn’t surprise me if they folded dozens of Substack writers into a similar arrangement.

And that’s the thing. We may well find our way back into a more centralised model not unlike the one we thought we were disrupting. Newspapers consolidated pamphlets, advertisements and other printed matter because they could enjoy economies of scale; I see it as inevitable that something similar happens.

Cable content networks bundled dozens of channels together to offer a take-it-or-leave-it proposition to consumers back in the 80s and 90s. As studios and platforms vie for your attention it seems inevitable they’ll further consolidate beyond the platform subscriptions that Amazon Prime, Netflix and Apple already offer.

This would all take us back to something we thought we’d got away from. I have enjoyed the transformation of entertainment that Netflix has ushered in. I even like Spotify’s ability to help me find songs from my childhood I really should have grown out of. But at what cost?

There is another way. I spend a lot more time digging around for video and audio that isn’t on these platforms: on Bandcamp, for example, a lot of musicians allow you to pay what you want for their work. And YouTube, far from being a cesspit of clickbait and rabbit holes, can actually provide some extraordinarily erudite and enlightening content, often asking only for a donation on Patreon.

1941 New Theatre Oxford Pantomime Poster

Subscriptions in themselves are not bad. They enabled a flourishing of music, theatre and art, for example, from the 19th century. Subscription allows creators, producers, providers to plan ahead, to be sure that the interest in their creation will last (at least) the year. It also allows a closer relationship between creator and subscriber. Substack and Patreon have helped revive this engagement.

The problem, then, is that the Subscription Model has bifurcated: one branch where the subscriber feels closer, is closer, to the producer, and another where the opposite is true. In that model there is instead a faux closeness, driven entirely by AI and algorithms, one that is always trying to steer whatever excuse for a conversation takes place towards upselling, or, if the subscriber cannot be deterred from cancelling, towards a far cheaper price, a pause, or some other sop to keep them. In other words, the only time a subscriber may be offered something not otherwise available to them is when they threaten to leave.

The owner of an alcohol-free bar in Liverpool, SipSin, told BBC Radio Four this week that the bar had to close when it became apparent that drinkers of non-alcoholic beverages don’t tend to drink more than two and instead prefer to chat. Not the regular drink-more-need-more alcohol crowd. The interviewer asked her, Heather Garlick, what she planned to do next. She wasn’t sure, she said, but was thinking of a subscription/membership type model.

To me this captures the purpose of subscription models well. The idea is to create something that can endure, by ensuring that users and provider are committed to some sort of financial arrangement over the longer term. The publican is assured of income; the patrons are assured of somewhere they can go and find others of a like mind. (The book subscription market is doing something similar to the point where it’s worth $11.7 billion, with subscribers getting a curated box of books each month.)

Subscription is about demonstrating confidence that what you get in the future is going to be as good as what you’re getting now.

It is not a method of hoodwinking and impoverishing your customer. This should go without saying, but apparently needs to be said. In their response to a call for input to the UK government’s Digital Regulation Cooperation Forum Work plan last year, the UK charity Citizens Advice found that

consumers in vulnerable circumstances and from marginalised communities are at the sharp end of these practices and suffer particularly bad outcomes. We found that 26% of people have signed up to a subscription accidentally, but this rises to 46% of people with a mental disability or mental health problem, and 45% of people on Universal Credit.

Let’s not drag these practices any lower than they already are. Let’s instead try to reinstill some of the magic of paying for something you’re really excited about, confident that someone somewhere is using your money wisely and with the aim of keeping you hooked — with no dark patterns in sight.

Breaking the wall: Drawing the right lessons from Blade Runner(s)

By | September 3, 2024

This is the second in a series of pieces I am writing on dystopian movies — broadly defined — and what they tell us, or could tell us, about our own condition, and what prescriptions they might offer for a way forward. In this piece I offer a different interpretation of the two Blade Runner movies and the three commissioned shorts, arguing that they can and should offer us timely advice on two of the most pressing problems of our age.

Zhora (Joanna Cassidy) tries to escape Deckard in the first Blade Runner. (Copyright Warner Bros., Ridley Scott, 1982. Used under fair use for the purpose of criticism, review, or illustration for instruction.)

This is a review of the Blade Runner movies. But it’s really about where we are today. Although I think Blade Runner 2049, Denis Villeneuve’s 2017 sequel to Ridley Scott’s 1982 original, is deeply flawed, I believe that if we take the two movies together we can learn important lessons about our bipolar world, and where we should fit technology, in particular AI, into it.

It’s a lesson I haven’t seen others draw. And it’s based on a rather subjective view of the two movies which you might not agree with. So strap in.

(I’m assuming you’ve seen both movies, so if you haven’t, I would recommend watching them first. I’ll still be here when you get back.)

The first Blade Runner was unleashed on an unsuspecting, and somewhat unreceptive, world in June 1982 (and in September in the UK). Largely ignored in the U.S. at the time, it gradually became a cult classic, casting a shadow over anyone who dared consider a sequel. Eventually Denis Villeneuve had a go, releasing Blade Runner 2049 in October 2017; once again it underwhelmed at the box office (though critics mostly loved it). I’ll come out and say it: I still don’t like the second one, and am not sure I ever will. And that throws me in the trenches with those who, with very detailed and cogent arguments, also don’t like it, hurling epithets at those in the other trenches who, with equally detailed and cogent arguments, love it.

I’m not (necessarily) here to persuade you to join me in the anti-trench. But my thesis, that the two movies carry a key, poorly understood message for us at this particular moment in our real-world narrative, depends to some extent on me arguing my corner. I’ll try not to be bombastic about it. And of course, there’ll be plot spoilers in there.

Love stories

Both films are love stories, where the protagonist learns how to love. Deckard learns that it doesn’t matter whether Rachael is a replicant, or whether her memories aren’t real. She is, and that, he learns on the roof-top watching Roy Batty die amid tears and rain, is enough for him. Similarly in 2049 the protagonist K learns that while Joi, the female hologram love doll, is not real, his feelings are, his interactions with her are, and that therefore he has agency. He is capable of love, he has a soul of sorts, and he can change things. His decision to help Deckard find his daughter is that change.

Some of this interpretation, I know, is contentious, so let me briefly substantiate. In Blade Runner, when Deckard and Rachael are at the piano in his apartment, she shares her confusion about whether the memories she has are her own or not, whether her ability to play piano is from her memory or not. Deckard tells her that it doesn’t matter. “You play beautifully,” he says. She can play, therefore she is. (I’m definitely not the first to point out that Deckard is Philip K. Dick’s nod to Descartes, whose philosophy populates both the book and the film.)

Rachael (Sean Young) and Deckard (Harrison Ford) at the piano in the 1982 Blade Runner. (Copyright Warner Bros., Ridley Scott, 1982. Used under fair use for the purpose of criticism, review, or illustration for instruction.) 

In 2049 there’s also uncertainty about K’s reaction to seeing the skyscraper-high Joi hologram, commercialism in all its brash nakedness, single him out as he walks home. When she uses the name she had given him — Joe — as a generic come-on — “You look like a good Joe” — K goes through another life-changing moment. Was their love just a clever bit of commercial programming?

The original script makes clear his thought process:

The NAME goes through K like an arrow. Joe? Jo? His mind fills with doubt and hope and doubt again. Was it all part of her program? Was she ever real?

No answers from “Joi.” Only a knowing wink and her mannequin smile as she looks back out on the city. Selling herself to the world.

CLOSE ON K. His eyes close. As if saying goodbye. To her. To everything he learned from her to dream and hope for.

K is letting go of his memory of her as a lover, but not of the memories, the lessons she shared. For him it doesn’t matter that she was a hologram, or even an off-the-shelf virtual love doll. He learned from her. To dream. To hope. Enough for him to say goodbye, rather than, as some have interpreted the scene, discard the whole experience as trash. Like any love affair.

A constructed world

So what does all this have to do with AI?

Well, a lot of the discussion about the first Blade Runner has been about whether replicants are human, and what the differences are between the two. We are persuaded to conclude that replicants are in a way more human than human (beyond the trite motto of the company that manufactures them), because they don’t carry our baggage, they want to live, and because they have a termination date they know that time is sacred. Most of us humans are guilty of often forgetting we have a termination date too.

But this is not, in my view, the whole picture. Ridley Scott thought deeply about the movie he wanted to make, as did the author of the original script, Hampton Fancher (who also wrote the story and initial draft for the sequel), and so we should be looking deeper for richer treasures. A film well made, after all, is a constructed world that reflects our world back at us with fresh eyes.

Similarly, no-one will accuse Villeneuve of directing superficial films. Arrival (2016) is an extraordinary journey through the concept of time, and how, were time not linear, we might still decide to live our lives as we do. Sicario (2015) takes the idea of the protagonist and subverts it, leaving us questioning what we believe and how we see the wall between us and the way the world really is. And don’t get me started on the Dune movies (2021, 2024).

In short, closer attention in both films is rewarded, though we shouldn’t expect, or want, the same message from both. In the first Blade Runner, Ridley understood that the message of the film was quite a tender one — as he describes it, the hunter falls in love with the hunted — but the movie needs to explain why and how that happens.

The untermenschen

It happens because humans have screwed up, and replicants are the answer to their problem. They’ve screwed up the planet, they don’t have enough people to go do the empire building off-world, and so they created an untermenschen, an underclass to do the work. It’s not the first time humans have done this, and it won’t be the last. The only difference here is that the untermenschen are artificially created humanoids.

The problem the movie presents is that the answer to the problem has itself become a problem: these replicants have rebelled and started to infiltrate Earth. The protagonist, Harrison Ford’s Deckard, is the fall-guy, the gumshoe, the Philip Marlowe who has to go do the dirty work of removing this problem. At this point in the story (2019, 37 years into the future when the first film was made) the human populace is not aware of this infiltration, and only with a sophisticated device, the Voight-Kampff test, can blade runners identify them. Even that is not infallible: In a deleted scene, one of Deckard’s colleagues, Holden, complains:

maybe it doesn’t work on these ones, Deck… These replicants aren’t just a buncha muscle miners anymore, they’re no goddamn different than you or me… It’s all over, it’s a wipe-out, they’re almost us, Deck, they’re a disease…

By the time of the second movie, released in 2017 and set 32 years hence, a lot has happened (this is all explored in three short prequels commissioned by Villeneuve, and without which a lot of 2049 is barely intelligible): three years after Deckard’s travails, gangs of humans hunt down and kill the latest generation of replicants, who have no artificial lifespan, in turn prompting a replicant terrorist attack, where an EMP causes a planet-wide blackout and erases its databases. Out of the ashes emerges another inventor-terrible, who solved world hunger and is now creating another generation of replicants. Earlier generations of replicants are still being hunted, but now their replacements are an acknowledged part of the scenery and machinery. The blade runner, replicant or not, is the middleman, policing the no-man’s land between replicant and human.

In other words, a lot has changed, but a lot hasn’t. By the time of the second movie replicants are more advanced and easier to identify (they have a serial number under their right eyeball) and live among humans, but are still treated as a subspecies. (Indeed 2049 opens with a blade runner ‘retiring’ a rogue replicant in a rehashed version of how Ridley Scott had proposed the first movie begin, right down to the sketches.)

The canvas wall

It’s this broad canvas — two movies, three shorts — upon which the love story/detective story plays out. But of course the canvas is in many ways the story. The canvas is a world deeply divided. We see K being spat on by fellow officers and abused by his neighbours, his front door sporting the welcome-home graffiti “fuck off skinner”. The only other replicants he encounters are those he’s been told to kill, prostitutes known as doxies, or the super-replicant Luv, who works as a henchwoman for the inventor-terrible, Niander Wallace.

So we’re still stuck in a hierarchical world, where one species looks down on the other. But now they’re living cheek-by-jowl. The only thing keeping them apart is the certainty that one side can reproduce itself, and the other can’t. And as K’s boss Joshi puts it: “The world’s built on a wall that separates kind. Tell either side there’s no wall and you’ve bought a war, or a slaughter.”

This tension is best understood with the prequels; it does not permeate Villeneuve’s world sufficiently to convey the menace/promise upon which the movie is built, in my view. But it’s vital to the storyline because K is later forced to make a decision, just as Deckard was a generation before: whose ‘kind’ do I belong to? In other words: do I accept this definition of my world as a civil war? Do I fight, and if so, what for?

In the final scenes of Blade Runner, after Deckard watches his saviour Roy Batty die, he is confronted immediately with a choice. His police shadow, Gaff, descends in a police spinner. “You’ve done a man’s job, sir,” he shouts. Gaff, the human, is taunting Deckard that he might not be one. He throws a gun at Deckard, hoping he’ll pick it up. He doesn’t. “It’s too bad she won’t live,” Gaff says. “But then again, who does?”

Gaff (Edward James Olmos) throws a gun at Deckard: “Too bad she won’t live”. (Copyright Warner Bros., Ridley Scott, 1982. Used under fair use for the purpose of criticism, review, or illustration for instruction.)

For Deckard it’s now clear what he has to do. An earlier version (23/2/81) of the script was clearer. Gaff exhibited some sadness — that Deckard had someone to love, and that love doesn’t last forever. “I wouldn’t wait too long,” he says. “I wouldn’t fool around. I’d get my little panocha and get the hell outta here.” And when Deckard’s car “bullets through the woods” as he and Rachael escape the city and outrun their pursuers, a voiceover spells out Deckard’s choice:

I knew it on the roof that night. We were brothers, Roy Batty and I! Combat models of the highest order. We had fought in wars not yet dreamed of… in vast nightmares still unnamed. We were the new people… Roy and me and Rachael! We were made for this world. It was ours!

Deckard has made his choice, chosen his side. He escapes with his lover, and with the spiritual guidance of his saviour Roy Batty. This bit was dropped from the shooting script, although the studio later imposed a clunky voice-over, one which, crucially, didn’t resurrect any of this talk of the world belonging to ‘us’. In all of the emerging cuts of the film Ridley Scott chose to make the story end like a love story, with the two lovers disappearing into the night. In Ridley Scott’s world Deckard had not indicated any decision to throw his lot in with the replicants.

And this is where things get confusing (spoilers ahead). The sequel chooses to build itself around the idea that Rachael and Deckard’s love story becomes the origin story of a second replicant uprising — the war that Joshi fears. Their child is proof that an earlier generation of replicant is capable of procreation. If the Deckard in 2049 was the same Deckard who believed “we were the new people” then they had in his child the rallying cry for the “wars not yet dreamed of” that would make this world “ours”.

It’s no clearer in 2049 whether this cause is what Deckard has committed himself to. Unfortunately this part of the story, the narrative that sustains both films, wobbles and, I would argue, collapses in the second half of 2049. There are a number of plot holes, but the key problem is not so much a plot hole as a poor solution to a major problem in the narrative. And it’s this:

All the various actors in the drama want the same thing, even if it’s not always overtly expressed: to find the replicant love child. Joshi and the police because they want to stop a war. Luv and her über-industrialist boss because he wants to reverse engineer it to build self-replicating replicants. K because his boss wants him to, but increasingly because he believes he is the love child. Deckard because he’s the father. An underground replicant army because they see an opportunity to have him or her lead an uprising.

But in 2049 this ‘race’ is half-hearted, poorly developed and incoherent. The pivot of the film is when K finds out that he is not Rachael and Deckard’s child, the growing conclusion he — and therefore we — had been coming to for much of the preceding two hours and five minutes. It is a big moment, though I’m not alone in feeling that it’s not as earth-moving as the film would like us to think it was. The problem is that this key moment overlaps with another key moment: the replicant army leader Freysa instructing K that he must kill Deckard, because he may reveal the location of his child to Luv and hence to the über-industrialist.

Denis Villeneuve directs Harrison Ford and Ryan Gosling in Blade Runner 2049 (Photo courtesy of Warner Bros. Pictures)

This is where the plot falls apart. And here’s why.

Somewhere before or during filming a bigger problem was fixed, leaving this rather awkward, almost hackneyed scene of combined key plot points, where suddenly a host of new characters appear, for no obvious reason and presumably at great risk to themselves.

The problem was this. In the original ‘shooting script’ — written by Fancher, the same man who wrote the first key drafts of the first movie — Freysa dismisses K’s fears that Deckard’s abduction places him in danger. “Don’t pay no mind on that. He always wanted to die for his own. Never had the luck. Officer did him a favor.” In short: Deckard identified with the replicants and wanted to die heroically for them.

She then goes on (she is written as speaking a sort of pidgin English, something that was thankfully discarded): “Deckard only want his baby stay safe. And she will. I wish I could find her… I show her unto the world. And she lead an army!” In short: Freysa doesn’t know where the replicant love child is. If she did, she would present her to the world as the leader of the replicant army — apparently with or without her say-so.

Of course a lot of this is overshadowed by the new information that floors K — namely that if the love child is female, then it’s not him. But this leaves all sorts of problems that Villeneuve needed to fix, not least of which was that it made Deckard’s fate dramatically and narratively irrelevant: he wanted to die for something, so there we go. It gave very little for K to do other than accept his underwhelming fate as a normal replicant. It leaves Freysa and her army with nothing more to do except wonder where the child is — and, incidentally, not to ask K whether he had any thoughts on that, given he’d been working on it for a while and nearly died for it. And it leaves us, the audience, wondering why we had just sat through all this K-might-be-a-replicant-love-child business only to find he’s, er, not.

An unfixable hole

It’s obvious why Villeneuve decided this problem had to be fixed: it leaves K desolate and with no obvious path forward. The original script leaves only a hint that K would try to rescue Deckard (‘K can’t live with that’ when he hears Deckard would be happy dying for a cause), but there’s no real dramatic tension, no wrestling with a decision about whether to rescue Deckard, kill him, do something else, or nothing. It’s a lousy pre-climax scene. And worse, it leaves only a thin strand of Freysa’s motivation to go to the effort of following K, saving him and revealing to him the existence of the army and their location. If she wants him to join and help them, it’s unclear why.

So something has to be done. It’s a patch-up job and not worthy of such a great film-maker, and it raises questions about how this significant problem apparently sat there all the way to the shooting script. (I would argue there are numerous other major problems that lead to this big problem, but that’s for another time.)

So the final version is this forced conflict, where K has to decide what to do with Deckard: kill him to stop him divulging the whereabouts of Freysa, or save him (and lead him to the person he now realises is his daughter). You don’t need to be a rocket scientist to realise it wasn’t really an either-or choice: he could save Deckard before he was forced to divulge the information, and take him to meet his daughter if he wanted to. Which (plot spoiler) he does. Presumably, given Deckard’s stated fear that his daughter might, if identified as the replicant love child, be captured and “taken apart, dissected,” he wasn’t about to then help her lead the replicant army.

It’s probably as good an ending as Villeneuve can manage with the material. And it could be argued that the ambiguity of motivation behind K’s final act of courage and selflessness — the question that Deckard asks of him, “Why? Who am I to you?” — leaves the film as open-ended as the first film. But of course, that’s not really true. Deckard’s motive was to escape with Rachael from his human-centred world, to find something and someone better. Implicit was the idea that he had changed sides. With K, there is no such open-ended future for him, as (spoiler alert) he expires on the steps in the snow.

The only mystery, then, is the question he doesn’t answer: why K went to such lengths to make the reunion happen. And the possible answer, or answers, are interesting: did he realise his love for Joi was ultimately doomed and pointless, and that it was better to find love in the sacrifices you make for others? Or any of a whole host of things which reflect the complexity of what makes us human.

We wrestled, along with Deckard, with similar questions when Roy Batty saves Deckard at the end of Blade Runner. This time, however, there’s no one to keep K company as he dies. Or is there? In the “shooting script” K, lying awkwardly on the steps, hears Joi’s voice asking “Would you read to me?”

Just as she said when we first met her. K smiles at this ghost of memory.

Of course.

A thready whisper of his baseline. Their old favorite. “And blood-black nothingness began to… spin… a system of cells interlinked…”

If that had been left in, it would have taken us full circle, showing us that the real love story in this film was the one between a replicant and a mass-produced hologram. In some ways that might have helped support what I believe was the thread, that Scott, Villeneuve, Fancher and the other writers wanted to be their key takeaway: that there is no wall — between replicants and humans, between replicants and holograms. There are memories, there are experiences that populate those memories, which when shared can connect every living thing.

A replicant like Roy Batty can learn to love life in the abstract, and Deckard as an embodiment of that. Rachael the replicant can learn to love and trust her own feelings. K, a replicant, can love a hologram, a supposedly lower form of AI, which in turn can learn to love, and sacrifice her/itself for love.

And the arc concludes with K choosing to himself die for love — in this case between a father who isn’t his, and a daughter who may not know she had one.

Joi (Ana de Armas) tells K (Ryan Gosling) he looks like “a good Joe” (Copyright Warner Bros., Denis Villeneuve, 2017. Used under fair use for the purpose of criticism, review, or illustration for instruction.)

AI

So what does this have to do with AI?

Well, it’s simple enough. The villain of the piece is of course the über-industrialist, who kills replicants for fun, and exploits the feelings and loyalties of those he commands. He wants to find the replicant child and reverse engineer it to expand his empire and colonise the galaxy. His megalomania sounds a lot like that of some of the techbro titans who bestraddle the world — and particularly those who talk of AI being both the biggest threat and the biggest opportunity humanity has faced.

Some of that hyperbole — or the appetite for it — has died down a little of late, but that doesn’t mean these ideas and ambitions are not still being pursued. All of this is taking place without any serious consideration by the rest of us, and I would argue that science fiction, and dystopian fiction in particular, in writing and film, is as good a way as any of exploring potential outcomes. Both Blade Runners conjure up a world which should be the beginning of a useful conversation.

And we shouldn’t think this is all some way off. One reviewer of the first Blade Runner said she found it all a bit far-fetched, a world too noisy, dense, technofied and neon-tinted, until she walked out afterwards into Leicester Square. (I had that exact same experience, and when later I lived in Bangkok, Singapore, Hong Kong and Jakarta I felt I was living in Ridley Scott’s dense, compressed, retrofitted, gridlocked world.)

But more seriously, think about the ‘replicants’ around us. Those same cities — and many others like them — are home to wall after wall after wall, keeping one kind from another. Countries like Singapore limit visas to specific workers of specific genders, and they are required to live apart from the rest of the population, often shipped around like cattle from work site to work site. Social media is full of scorn for these people who go into debt in the hope of helping their family, a silent underclass. Beyond them — in Syria, Myanmar, Sudan — are vast populations of the dispossessed, stateless, homeless. And beyond them is the animal kingdom, where we have anointed ourselves as monarch ruling over all other species. Underclasses are everywhere if we choose to look.

The genius of the first movie is that Ridley Scott gave us a compelling kaleidoscope of images, some real, some imagined, a world where our existing beliefs find themselves mutating. There is the dense streetscape of words, people, animals, street hardware (the detailed designs include a parking meter), much of which passes in a blur on a first viewing, while in Batty’s final speech we are asked to conjure up images of massive galactic battlefields and structures which are all the better for not being visualised for us. In this disorienting but all-absorbing world he asks us to question whether the difference between man and machine really exists; feelings and emotions are rendered fluid. As Deckard says in an unused voiceover:

Replicants weren’t supposed to have feelings, neither were Blade Runners. What the hell was happening to me?

We are not good at knocking down walls. We need films like these to help us look back at ourselves and think more deeply about what those walls are and whether they should exist. From refugee to robot, we think we know where we stand in the abstract, what our values are. But it’s only when we are confronted with the reality we realise we are not so well prepared. The Blade Runner sequence gives us a glimpse of that.

And while real replicants are not yet in the shops, we are already used to disembodied voices like Alexa, or GPT chats. But we haven’t even started to understand what we want from these early AIs. We expect these tools to be anthropomorphic because we are hard-wired to interact with everything — man, animal, machine — in that way. But we are far from really understanding what that means. When Claude or ChatGPT prefixes answers with “Great question!”, “That’s an interesting question about the Atari brand appearing in the Blade Runner films”, “You’ve raised an excellent point that highlights a subtle but important detail in the original Blade Runner film” or “You’re quite observant to notice this discrepancy” (all genuine responses), I feel patronised and irritated. It may be a small thing for now, but to me this is going to be the hardest part, or one of the hardest parts, to reconcile as our AIs move from a generic to an increasingly personal, bespoke, bilateral form of computer interaction.

For me the problem is this: we are already in this dangerous world where we interact with machines, without any notion of what constitutes civilised behaviour. We curse and roll our eyes at Alexa’s stupidity, but then laugh at her attempts to make friendly chit-chat. Similarly with ChatGPT: we are so over the novelty of it, even though it is a hugely powerful tool, one that I’ve been using almost as Deckard and K interacted with their machines. But we have no baseline, no manual of the appropriate etiquette. We have already established our dominance over machines, and so it’s a relatively small step for those machines to cross the uncanny valley while we continue to treat them as machines.

The Blade Runner stories may be driven by love, but they are really ethical journeys, preparing us for the moment when a human creates something that approximates sentient life. Key to that discussion is what, and who, led us to that moment. What that sentient life is will depend on which human, or humans, creates it, and this, I suspect, is the root cause of our unease. Neither über-industrialist Tyrell nor Wallace is portrayed as a pleasant, civic-minded or moral individual, which perhaps tells us all we need to know about who, out here in the real world, we should be keeping an eye on.

Sources

Too many to list here, but the main ones I drew on are these. Apologies for any omissions.

  • Blade Runner 2049, story by Hampton Fancher, screenplay by Hampton Fancher and Michael Green, ‘Final Shooting Script’, no date
  • Blade Runner, screenplay by Hampton Fancher and David Peoples, February 23, 1981
  • Blade Runner: The Inside Story, Don Shay, July 1982
  • Do Androids Dream of Electric Sheep?, Philip K. Dick, 1968
  • Are Blade Runner’s Replicants “Human”? Descartes and Locke Have Some Thoughts, Lorraine Boissoneault, November 2017
  • Deckard/Descartes, Kasper Ibsen Beck, Google Groups, 1999
  • A Vision of Blindness: Blade Runner and Moral Redemption, David Macarthur, University of Sydney (2017)
  • Several interviews with Hampton Fancher: Sloan Science and Film (2017), Forbes (2016), Unclean Arts (2015)
  • Blade Runner Sketchbook, 1982
  • Philosophy and Blade Runner, Timothy Shanahan, 2014
  • The Illustrated Blade Runner, 1982

The Civil War in Our Heads

By | September 3, 2024

I finally braced myself to see Alex Garland’s Civil War earlier this month, unable to watch in more than 10 minute chunks, and so found myself flipping between fictional scenes of American carnage and real-world assaults on the Holiday Inn in Rotherham. I learned an unpleasant lesson.

Burning bin at Holiday Inn Express in Rotherham, Aug 5 2024, screengrab from Sky News

I finally braced myself to see Alex Garland’s Civil War earlier this month, unable to watch in more than 10 minute chunks, and so found myself flipping between fictional scenes of American carnage and real-world assaults on the Holiday Inn in Rotherham. I found both compelling but hard to watch.

I realised that what I saw in Civil War wasn’t what most other people seemed to see. It follows four (and for a while six) journalists through an America in the midst of a war between at least two groups, culminating in the journalists following one faction into the White House in what appear to be the final moments of the war. The reasons for the conflict and for the state-level alliances are never made clear or explored.

Although superficially it’s about journalists covering a war, it’s really about the war itself. Reviewers have tended to bemoan the lack of hack-like things, most notably any discussion among the four journalists about what caused the war. But that omission rings true: by this point in a story journalists might discuss how the war could end, not how it began. Garland rightly focuses on what the war means for the journalists themselves, who of course obsess about scoops, money shots and that critical knack of being in the right place at the right time.

Kirsten Dunst and Cailee Spaeny in Civil War (2024)

What I see in the movie is this: From the opening scene, with the flag-carrying suicide bomber running into a crowd of people waiting for a water truck, Garland focuses on the dehumanised nature of the conflict. He focuses on the conventions of war, and essentially demonstrates methodically that in this conflict there are none. This is as murderous, bloody, savage and inhuman as any other war — in Serbia, in Rwanda, in Somalia, in El Salvador, in Myanmar, in Ukraine, in Syria, in Gaza, you name it.

This is Americans killing each other in the most brutal ways on American soil, not really caring who they are, even whether they’re on the same side. There is no code of honour, no Geneva Convention, no taking of prisoners, no distinguishing between civilian and combatant. The first shots we witness are from khaki-clad fighters peppering an opponent trying to hide behind a pillar. When the group the journalists are following prevail, they shoot dead an incapacitated soldier and execute their prisoners with an anti-aircraft gun.

The idea of sides is constantly derided by participants. One sniper mocks Wagner Moura’s character when he asks who they’re shooting at: “Someone’s trying to kill us. We’re trying to kill them.” When Jesse Plemons’ character, the most bloodthirsty and indiscriminate of all those the journalists encounter, asks where each of them comes from, the answers don’t seem to matter. What type of American are you, he asks, inscrutable behind his red sunglasses, without ever seeming to know what type he is looking for.

Garland is saying: we are no better; Americans are no better than anyone else. We pretend we are, but when the last vestiges of democratic rule have gone — and many have already gone — this is how we will treat each other, how we are already treating each other.

Jesse Plemons and Cailee Spaeny in Civil War (2024)

From the opening suicide bomb to the (spoiler alert) Saddam-like, Gaddafiesque murder of the president lying on the floor of the Oval Office, Garland is saying: this is where we’re at. It doesn’t matter how we got here; we’re already here. He used journalists as the medium because, as individuals, they were observers of the tragedy, hungry to see the worst of it, hungry for the rush; they’re useful idiots in conveying his message.

We journalists might be occasionally tender to one another, and allow our feelings to protrude sometimes, but then we, like the stories, move on. Kirsten Dunst’s Lee Miller, the war photographer we follow throughout the movie, dies on the carpet protecting her protégée, but neither her colleague/friend nor the protégée stops to check for a pulse. They move on, drunk on the high of being in the right place at the right time. The character arcs of Jessie and Lee — the ‘narrative’ of the story — predictably cross over, as one becomes desensitised to war enough to photograph it while the other goes in the opposite direction. They are us.

Garland is saying: journalists are shits, have to be shits to do their job, but so, under the right conditions, are we all.

This is not a tale about humanity or the triumph of the spirit, or a heartfelt paean to the lost era of journalism. There is no humanity here, Garland is saying, so why should I waste time pretending that there’s someone to care about, a dog or a kid to save, the salve to our conscience that usually works in such movies: “the whole world froze over/blew up/was destroyed by aliens but Timmy the Dog was safe!” Garland is saying: this is the reductio ad absurdum of all our efforts to put allegiance to party, cause and power over country. There is no magic brake that somehow stops America — or any country — from falling into civil war.

I know I’m sailing against the wind here. I’ve seen reviews like this one, which go the other way: “(Cailee Spaeny’s character Jesse)’s decision to keep shooting through Lee’s sacrificial death becomes Civil War’s final insistence that there is a unique nobility to this profession. They care about the truth; it’s why Jessie captures the president’s extrajudicial killing.” I am happy to debate the point, but I just don’t buy that. It’s true that the profession has its better moments, but the idea that somehow taking a photo of the president’s pleading last words and ignominious death was somehow noble, or a resolution of sorts, somehow making worthwhile the deaths of half of the journalists who made the trip, is to misunderstand, I believe, what Garland was attempting to say.

It was instead the manner of the president’s death (and all the other combatant and non-combatant deaths), and the fact that the soldiers know they’re being photographed doing it, but don’t care, and even enjoy it, that was Garland’s point. One American dictatorship ended like any Third World uprising or coup. That the journalists felt no danger, no fear that the soldiers might murder them to conceal their assassination, is the message. Take any other decent journalist flick — The Killing Fields, The Year of Living Dangerously, Under Fire, Salvador, I could go on — and the narrative is built around the idea that journalists stumble upon a war crime and risk their lives to get the story out. Nothing like that happens here: it is Abu Ghraib-level selfies, confident there will be no repercussions.

Posing with the dead president, Civil War (2024)

Garland hasn’t made a masterpiece, though time may prove me wrong on that. I don’t think that was particularly his intent. This is polemic more than poetry. I don’t think he was interested in, or wanted us to be interested in, the characters, beyond making them substantial enough to add heft to the film’s credibility (and the performances are excellent). As a journalist I have few quibbles with how the profession is portrayed, though I would have expected to see more calls from editors, filing stories, recording expenses. Garland captured the heady mix of gut-dropping fear and hysterical relief as the journalists seek out, and then try to extricate themselves from, fiefdoms controlled by unpredictable men and women with guns.

He presented the battle scenes as journalist reportage to show us how these things — fantasies, fears, nightmares, our daily lives — may play out when we go down this road, without us getting caught up too much in the outcome. Garland shows us how quickly the process brutalizes us, tightening the noose around our own throats. Dunst’s character talks about how she thought that her career covering foreign wars would send a warning home: “Don’t do this. But here we are.”

This is Garland’s message. Journalism plays a useful role here as a vehicle — once America (the West) thought so highly of itself that it paid individuals to risk their lives covering dangerous power plays in dangerous parts of the world, so we could wring our hands and remind ourselves at least our values were intact. No more. Garland is saying those journalists serve no purpose anymore, because we are just a blockaded door in the Capitol away from the exact same power play. We’ve become the dangerous part of the world, and we don’t yet see it.

Building Bridges: The PC’s (Important) Forgotten Origin Story

By | September 3, 2024
Sir Clive Sinclair mosaic, made with original keys from Sinclair computers, Charis Tsevis, 2011 (Flickr)

Technology-wise, we’re presently in what might be called an interregnum.

There is no clear outcome for AI, especially generative AI. We can’t tell whether it’s a saviour, a destroyer, or a damp squib. More importantly, generative AI — and a lot of other stagnant technologies — doesn’t offer a bridge between where we currently are, and where we need to go.

But what might that bridge look like?

This is best explained by looking at an earlier paradigm shift: when personal computing began the journey from a DIY hobby for (mostly) 14-year-old males to being a household tool or entertainment console. In blunt terms, the computer went from being a Meccano kit on a workbench to an item of furniture in the living room: the moment computers started to be useful and affordable.

Here’s a great visualisation which captures this sudden growth in the early 1980s (Source: Home computer sales, 1980–93 | Retro Deam/YouTube)

(The video doesn’t include IBM-compatible PCs, which of course eventually stole most of the market for a few decades. But you get the idea.)

For the masses

Exploring why and how this shift happened might help us understand what we should be looking for right now: what engine of change should we be keeping an eye out for, or, if we were of entrepreneurial bent, cooking up in a lab somewhere?

Jack Tramiel, a Holocaust survivor who founded Commodore in Canada, was the first to recognise that computing was as much a consumer device as a business one. The trick was to make it visually appealing, fun, and cheap. His Commodore 64, unveiled in January 1982, was the first real mass-market computer, with sound, graphics and software — all for less than $600. The computer “for the masses, not the classes” took North America by storm. At the time an Apple computer, though far better, would cost at least twice that. The Commodore 64 sold upwards of 12 million units.

Tramiel had understood computers were not confined to the workplace or science. But a British entrepreneur also understood that, and may, on balance, have contributed more to building the bridge that connected ‘computing’ with ‘appliance’. Clive Sinclair (later Sir Clive) had sold circuit boards and kits profitably to a market of do-it-yourself enthusiasts since the early 1960s, but he was sure there was a much bigger market if he could make the products accessible and cheap enough.

Personal Computer World, April 1980

Sinclair wanted to sell them below £100 — about $200 at the time. He knew the market well — enthusiasts wanted to get their hands on one to tinker with, to write code on, to learn how the machines worked, but they were often kids, and didn’t have a lot of cash.

Crucially for bridge building, he focused on marketing and distribution.

According to Bill Nichols, who ran his PR at the time, Sinclair understood that the market itself needed to be carefully nurtured in the crucial early days. Clive “always insisted that every product first went to market mail order under the total control of the company and the marketing team.” This direct connection, via a cohort of outsourced customer service staff, allowed the company to hear and respond to feedback directly from early customers. “Then and only then when the awareness and initial demand was created — typically after six months or so — did it go retail,” he told me.

WH Smith ad for the ZX81, date unknown. (FunkYellowMonkey, imgur)

“Retail” here didn’t mean hobby and computer shops but high street heavy hitters like stationery and newspaper chain WH Smith and pharmacy and healthcare giant Boots. Now that the demand had been nurtured enough to be self-supporting, and Sinclair’s marketing and pricing had taken hold, retailers could be confident there was enough demand across a broad customer base to make it worth their while to display the machines prominently.

Tomb Raider and GTA

For kids and parents they were now easy to find, impossible to resist, and fun to use. The ZX Spectrum, the third incarnation of the Sinclair computer, launched in April 1982 and sold over 5 million units. Sinclair was suddenly a serious competitor to Commodore: the Commodore sales team felt them to be enough of a threat to post a photo of Sinclair’s marketing chief Nichols on their dart board.

In one year the computer had gone from an obscure piece of machinery to a consumer device. It also played an important role in creating what we’d now call the ecosystem to support the transition. It was still a little unclear what computers might do for us. Games were the most obvious answer, and the ZX series is credited with spawning a generation of ‘bedroom coders’, kids who would develop games and sell them via hobby magazines. The British video game industry now employs over 20,000 people, and major franchises like Grand Theft Auto and Tomb Raider can trace their roots to companies founded in this early wave. By 1984 the UK had more computers per household (13%) than the U.S. (8.2%) — with the U.S. only catching up in the 2000s.

Sinclair made mistakes. A crucial one was trying to build a market among professionals with his next model, the QL. The device was as half-baked as the rationale — professionals who might need a spreadsheet would already have an IBM PC — and its failure led to him selling the computer business to a consumer electronics company, Amstrad. Amstrad’s owner, Alan Sugar, had a better instinct for what people needed a computer for: his line of PCW computers were marketed as “word processors”. He sold 8 million of them. (I was no techie but I bought one, and it kickstarted my journalism career. My dad wrote a book with it.)

Me and my Amstrad, 1986

Sir Clive and Jack Tramiel are mostly forgotten figures now, but it’s no exaggeration to say this: they probably contributed as much as the likes of Bill Gates, Steve Wozniak and Steve Jobs to the silicon landscape we inhabit today. Both built bridges from the narrow enclaves of ‘enterprise’ and ‘hobby’ computing to something called ‘personal computing’, without which we wouldn’t be writing, distributing and reading this on computing devices. In 1980 some 750,000 devices that could be called personal computers were sold. In 1990 the figure was 20 million. (Source)

That this period is largely forgotten is to our cost. Sure, Gates was instrumental but he was late to the game — Windows 95 was arguably its first consumer-facing product. Jobs understood the market better, but Apple’s first Macintosh, launched in 1984, cost $2,500, ten times the price of a ZX computer. Both Microsoft and Apple scaled the divide between office and home device, but it was the Commodore, ZX, and a handful of other much cheaper devices that built the bridge between them.

Sliding the rules

So what do we learn here? What lessons can we apply to the place we’re in today?

Well, first off, we have lousy memories. That the Sinclairs of this world, and what they did, are rarely mentioned shows just how little we understand about how computing got to be the ubiquitous, ambient thing it is today.

Secondly, there’s only so much we can learn through the lens of Silicon Valley’s favourite business consultant. Superficially at least, Sinclair was an early entrepreneur in the Clayton Christensen mould: catering to underserved segments, building products that were affordable, simplified and user-friendly, and making incremental improvements to gain market share. But he was so much more.

Sinclair was obsessed by several things, one of them fatal. He understood two key concepts: price and size. Semiconductor manufacturers would discard chips that didn’t meet their specifications, but many of those rejects would work fine if he designed a product to more lenient specs: “good enough,” in Clayton Christensen’s words. For the rest of his life he would follow a similar pattern: dream up a product, scout out the technology to see whether it could be built, and then pare back the product (and the size and quality of components) to fit a specific price point and size.

Sinclair’s obsession with the miniature emerged less from notions of disruptive innovation and more, his sister Fiona believes, from the “very confused childhood” they shared, which led her to therapy and him to seeking to impose order on the world. “Everything he makes, everything he designs is to do with order — making things smaller, finer, neater,” she said of him.

The resulting products all leveraged technology to build bridges from niche market to mass. His calculators were small and stylish enough to be desirable in a way calculators hadn’t been before, but cheap enough for everyone. It’s hard to overstate the impact this had. Schoolkids like me were still being taught how to use a slide rule to make calculations in the 1970s, but when the first kid brought a Sinclair Oxford 100 into class (£13) we knew those log table books were doomed.

Vintage Aristo Darmstadt Slide Rule, Joe Haupt (Flickr)

But he had another infatuation: the new. The Sinclair QL, which effectively killed his computer business, arose out of his reluctance to build on a good thing, throwing the earlier ZX model out and trying something completely new, for a market that was already being catered to and which didn’t care overly about price. Launched in 1984, the QL was discontinued within two years after selling around 150,000 units, and Sinclair was forced to sell.

Sinclair understood about the bridge, but in this case misread it. The bridge here was a bridge backwards, returning to a market that already existed, and where users weren’t overly sensitive to price, but were sensitive about usability.

A calculator in your stocking

To me the key lesson to be drawn is this: Sometimes there needs to be an intermediary technology, or technologies, that can be pulled together in a new (and cheaper) way to create a new device, possibly even one that doesn’t have a name. There may be no demand for such a device, but that demand can, with good marketing and distribution, be created. By doing so you create a new market beyond the old one.

This might sound easy and obvious, but it’s not. Sinclair was already well-versed in this approach before he applied it to computers. He had built tiny transistor radios, taking the radio out of the living room and into the pocket or bedroom; home-made amplifiers to take hifi away from the deep-pocketed connoisseur; calculators out of science labs and accounting departments; digital display watches light years ahead of your smart watch; and (spectacularly, in terms of grand failures) electric vehicles out of the auto industry and milk float sector.

Not all of these (OK, not many of these) were successful, but the experience helped Sinclair develop a good understanding, not so much of invading existing markets a la Christensen, but of creating new ones. No one realistically thought that everyone wanted a calculator until Sinclair suggested it would make a great Christmas present. No one thought a computer would be much fun until he got them into stationers, toy shops and pharmacies. He built a bridge to a place no one thought existed.

The common view of Sinclair: Daily Telegraph obituary, 18 Sept 2021

We are in something similar now. We have been sitting on a plateau of new, technology-driven consumer products for nigh on a decade now. Interesting technology — materials, blockchain, AI, AR, VR — hasn’t created any real mass market, and that, I believe, is in part due to a lack of imagination and understanding of how bridge-building to new markets works.

I don’t claim to know exactly where those new places are. It could be that some part of AI makes it possible for us to develop a taste for always-new music, say: so instead of us seeking out the familiar when it comes to aural entertainment, we demand AI creates something new for us. (I’ve mentioned before how intriguing I find the likes of udio.com. Here’s a rough stab at “a song in the style of early Yellow Magic Orchestra” with abject apologies to the souls of Yukihiro Takahashi and Ryuichi Sakamoto.)

This is probably too obvious and too narrow an assessment of the market’s potential. Sinclair’s advantage was that he was a nerd first, but a consumer a close second. He dreamed of things he’d like, he understood the technologies, their availability or lack of it, and he cared deeply about form factor. He brought disparate software, materials, circuitry and functionality together to make something that people either never thought they needed, or never imagined they could afford.

Others took his ideas and made them better, cheaper, more reliable: Casio’s calculators (and calculator watches); Amstrad’s computers; even his C5 electric trike, which I’ll explore more deeply elsewhere, became the opening salvo in a decades-long struggle that brought us EV scooters, even the Tesla.

It takes an uncommon mind to see these markets and build the bridges between them. We would not be here if it weren’t for folks like Sinclair, who felt people would like these technologies if they were cheap enough and fun enough, and who understood, at least a little, where we might go with them if we had them.

Now it’s time for someone to ask the same questions and build us some bridges from where we are — computing devices that have been good enough for a decade, software that is mostly a retread of last year’s, and an AI that is undoubtedly impressive, but also depressingly flawed and ultimately dissatisfying.

Over to you.