AI and the Shrinking Perimeter

By | September 9, 2025

(No AI was used in the writing or illustrating of this post. AI was used in research, but its results have been checked manually. This is another in a series of pieces exploring the frontiers between human and AI work. Here’s another.)

The Defence of Rorke's Drift, 1879, Alphonse de Neuville

AI is here to stay, and it’s moving fast. We are a little like the defenders of Khe Sanh, the Alamo, Rorke’s Drift, take your pick. The defensible perimeter is shrinking as AI turns out to be getting better and better at skills we thought were unassailably human.

I would like to try to alter that thinking, if I can, and present AI as just another kind of challenge, one we’ve seen countless times before. We need to think of AI as a sort of horizontal enabler, kicking down the walls between professions and disciplines. Historians have traditionally been sniffy about people not trained as historians (John Costello, among others, was tarred with the brush of being ‘amateur’, and not in a good way). We journalists are notorious for going from 0 to 80, from knowing nothing about a subject to parading as experts after a few hours’ research.

AI is definitely replacing creative jobs, but many of those jobs in themselves relied on an expertise that was ported in. Duolingo, for example, has apparently fired most of its creative and content contractors in favour of AI. (I agree that the product will suffer as a result, but we need to keep in mind that this job itself didn’t exist until 2021.)

We tend to assume that our skills, training, education and experience are moats, but they’re not, really. The moats are usually artificially constructed — see how long we’ve had to live with the internal contradictions of quantum orthodoxy because of academic inertia and bias. We create degrees and other bits of paper as currency that limits serious challenges to conventional wisdom to a selective few, and are then surprised when the challengers come from outside.

The lesson here, I believe, is that we shouldn’t be so precious about what we do that we can’t acknowledge that machines might be able to do it as well as us. Brian Eno, inventor of ambient music and a great mind, was quite happy to create an app that generated the very genre of music he invented, and still produces. He recognised that creativity could, under the right circumstances, be generated automatically by tweaking some variables.

Yes, we should rightly worry about the way AI is being used to make large sections of the creative (I use the term loosely, to include bullshit jobs and PR content creation) workforce redundant. But we should worry less about the fact and more about our standards. We have, as I have argued previously, decided to accept inferior quality — just good enough not to have users running their fingernails down the blackboard in frustration — rather than demand a higher standard. This has less to do with AI, I believe, than with the way AI is used. While AI content is usually atrocious, it depends what you give it to work with. This is a function, in other words, of what is being fed into the bio harvester, rather than (only) the shortcomings of the bio harvester itself.

What we should really worry about is this: how do we encourage creative types to embrace AI and to leverage it to improve the quality of what they do and to create remarkable new things? We should be thinking: how do we keep ahead of AI? The answer: we use AI to do better what we already do, and to do things that (for now) AI cannot do on its own.

This means having a different attitude to work, to learning, to doing business, to thinking. The closest parallel I can think of right now is how online freelance workers have been working for the last 15-20 years. I wrote a story about this back in 2012, when I visited a town in the Philippines to see how a librarian had transformed her neighbourhood by switching to offering her librarian skills online, and then, having won her clients’ trust, upskilled herself to do more complex work for them, and hired neighbours to help her. She and others I spoke to were constantly reinventing themselves, something that I think a lot of online workers do as an obvious part of their business. For them there are no moats, only bridges to new skills and better-paid work.

My x.com feed is full of people selling the idea that AI can churn out money-making work while you sit back and relax. Fine. This might work for a while, a sort of arbitrage of the transition from human to AI. But this is not (in the long term) what AI should be used for. Yes, it can definitely help us be more efficient — help us scale walls of knowledge and competence that we once viewed as neither necessary nor desirable to climb. I, for example, ask AI to help me figure out bits of Google Sheets that I can’t get my head around, or how to automate the sending of transcribed voice notes to my database. It’s not always right — actually it’s rarely right — but I know its quirks and can work around them.
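To give a flavour of the sort of small automation I mean — and this is a hedged sketch, not my actual setup: the folder of .txt transcripts, the table name and the use of a local SQLite database are all illustrative assumptions — here is roughly what moving transcribed voice notes into a database might look like in Python:

```python
import sqlite3
from datetime import datetime
from pathlib import Path


def import_transcripts(notes_dir: str, db_path: str) -> int:
    """Append any .txt voice-note transcripts in notes_dir to a SQLite table.

    Returns the number of notes imported. Already-imported files
    (tracked by filename) are skipped, so the script is safe to re-run.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS notes (
               filename    TEXT PRIMARY KEY,
               imported_at TEXT,
               body        TEXT)"""
    )
    imported = 0
    for txt in sorted(Path(notes_dir).glob("*.txt")):
        cur = conn.execute(
            "INSERT OR IGNORE INTO notes VALUES (?, ?, ?)",
            (txt.name,
             datetime.now().isoformat(timespec="seconds"),
             txt.read_text(encoding="utf-8")),
        )
        imported += cur.rowcount  # 0 if this file was already imported
    conn.commit()
    conn.close()
    return imported
```

The point is less the code than the shape of the task: small, checkable and safe to re-run — exactly the kind of thing AI can draft and a human can verify.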

But this is small potatoes. I need to rethink what I want to do, what I want to be paid for, and how high I want to fly. I’m almost certainly going to put some people out of work on the way (I don’t have as much work for my virtual assistant as I used to, I’m ashamed to admit), but I’m sure I’m not alone in noticing that the demand for well-ideated and executed commercially sponsored content has dried up in the past year. I’m guessing a lot of that stuff is now done in-house with a bit of AI. McKinsey, BCG etc now all use some form of AI to prepare reports based on their own prior content and research.

In short, join the march to new pastures. But what new pastures?

The first thing is to acknowledge that whatever trajectory you had in mind for your professional life may no longer exist. This is not an easy thing to accept, but it’s better to accept it now than to wait and hope. Whatever AI does badly today, or not at all, it will soon do well enough for most people. There are no moats, only ditches to die in.

I don’t know, of course, exactly how this will play out. We’ll need to get better at co-existing with AI, learning the language we use to communicate with these models (what is rather pompously called prompt engineering).

But that’s just the interface. The skills we develop through that interface could be anything. Think of the skills-acquisition explosion that YouTube supports, such as channels that focus on manual crafts (Crafts people, for example, has 14.2 million subscribers).

But we need to skate to where the puck will be, not where it is.

Indeed, the irony of AI may be that it helps our world pivot from digital to real, as politics and climate crisis push us to abandon our addiction to rampant consumerism and regain a respect for tools and products that last, and that we can fix ourselves.

More important, though, will be the new work that we can build on the back of AI. If I knew what those jobs were I probably wouldn’t be sitting here telling you lot; I’d be building them myself. But for the sake of argument let’s ponder how we might figure it out: we first need to identify a need that exists, or will exist (remember the puck), and that itself cannot be met by AI, at least for the foreseeable future.

The most obvious one is the mess AI leaves us. I have never come across something done by AI that can’t be improved upon, or corrected. And we know that AI does not explain itself well, so even if AI can find the tumour a specialist couldn’t, we really need to know how it found it, even if it means trying to understand something that it is unable to explain itself.

But there are bigger issues, bigger problems, bigger needs that we need to face. It is not as if there aren’t things that we need to do; we just never seem to have the money or the political will to do them. We are suffering an epidemic of mental health issues, and while I understand how AI might be helpful in easing some of that pain (particularly the pain of loneliness), far better would be a process that expanded the cohorts of people with enough counselling skills to connect to sufferers and help them, professionally or as part of their daily work. Instead of potholes being something reported and fixed by some remote arm of government, the collating of data could be crowdsourced (I remember a World Bank pilot project in the Philippines in the early 2010s) and the work assigned to a nearby volunteer whom AI has helped train to fill potholes.

Story of a pothole in Cebu City – training materials created for MICS (from TRANSPORT CROWD SOURCE ICT DEMONSTRATION COMPONENT 2 FINAL REPORT)

AI could and does help us acquire new skills that are not ‘careers’ in themselves, but can contribute to a blend of physical, mental and creative work that sustains our future selves.

Another way of looking for opportunities is this: what work would have been too expensive or too time-consuming to be considered a business, and can AI help? Garfield AI, a UK company, offers an automated way for anyone to recover debts of up to £10,000 through the small claims court. This is potentially a £20 billion market — the amount of unpaid debts that go uncollected annually — and it’s probably one that is not manageable by most legal firms, and too cumbersome for small creditors. Here’s a piece by the FT on the company: AI law firm offering £2 legal letters wins ‘landmark’ approval.

This doesn’t mean that we can’t also be doing ‘mental’ creative work — using AI, say, to work out who is responsible for the mess that is the British water industry by creating an army of AI-enabled monitors and investigators, identifying polluters and making them accountable. This could be initiated by one individual with the smarts to figure out the legal challenges and to find ways to incentivise as many people as possible to gain the skills necessary to contribute effectively.

In other words: we need to stop thinking about which jobs AI is taking, and start thinking about which jobs that don’t yet exist we can now do with AI.

Finally, a word on creativity. Creativity is, in the words of the Eurythmics’ Dave Stewart, “making a connection between two things that normally don’t go together, the joining of seemingly unconnected dots.” That meant, for him and Annie Lennox, trying to find a connecting point between their talents — synths for him, voice for her — which as we know eventually paid off. Steve Jobs talked about creativity being “just connecting things.” It’s not as if AI can’t do this, but according to researchers it can’t compete with humans’ best ideas, their best connections between disparate things.

I understood this a little better talking to an old friend who is a famous writer, but has been suffering from Long COVID since the pandemic. It prevents him, he says, from keeping in his head the necessary elements for a novel, but he can just about manage short stories. He still writes, and writes a lot, but that extra genius/muscle of creativity that has propelled him into the ranks of Britain’s best writers is currently not working properly. To me that explains a lot about what is really going on in human creativity, at least at the highest level. It’s frustrating for him, of course, but to me it’s an insight that should inspire us to make the most of our extraordinary minds, and to acknowledge that at their best our minds are beyond the reach of any kind of AI.

Creativity, our last frontier, needs shifting

By | February 24, 2025

I’m changing my mind about generative AI. I don’t think hallucination is the show-stopper anymore. I think we are.

Perplexity.ai, which I’ve been using as my main AI research assistant, this month launched a Deep Research tool, essentially an upgraded version of itself, which doesn’t just do a cursory web search and build its response on that. (It’s obviously presenting itself as a competitor to OpenAI’s tool of the same name, launched a couple of weeks earlier and available to pro users in some jurisdictions; OpenAI’s is faster than Perplexity’s, and comparable.) Deep Research prepares itself — and shows you how it is doing so — by breaking down the prompt and continually reassessing itself and customising searches until it spits out a response, which is presented as a paper of sorts, with a beginning, middle and end.

I have to admit I was impressed. Perplexity, of course, is not the only one to have reached this level, but to roll it out to existing users so seamlessly is impressive. Unfortunately it’s too good. Whereas previously I would run a prompt through Perplexity as an initial foray into a topic, now I more or less feel there’s nothing much to add. The result: a motivation-sapping sensation where the question — what’s the point of doing any more on this? — is hard to answer.

Still from The Hill (1965)

The case in point was a thought I had while watching Sidney Lumet’s 1965 film The Hill. The themes, the tone, the conclusion felt like more than an echo of another of his films, released only a few months earlier: Fail Safe. I figured it was relevant enough, with all that’s going on in the world, to compare and contrast the two and see what lessons Lumet might have been trying to convey that could be applied to us, a quarter century into the millennium: the dangers of rigid protocol and process, especially in military environments; the toxic power of sociopaths manipulating those systems; the futility of trying to do the right thing in the face of those forces; and a general pessimism about whether humans can build systems that still retain their humanity.

The Hill may be about a prison camp in North Africa to hold and rehabilitate criminals within the British armed forces, and Fail Safe may be about the U.S. and the Soviet Union locked in a nightmare scenario of mutually assured destruction, but both were really about power, and whether the systems we build to project and protect that power are fit for purpose. Both films are extraordinary white-knuckle rides, and both end with little hope offered.

Fonda and Hagman in Fail-Safe (1964)

I ran the question through Perplexity, just to see if there were any other aspects of this I should be aware of: perhaps the conceit had already been spotted and dissected elsewhere. Perhaps I was just following in other people’s footsteps. My prompt was this:

Write a report exploring the movies The Hill and Fail Safe by Sidney Lumet, comparing and contrasting the themes and other elements

I didn’t try to pre-empt things or tip the scales by giving Perplexity a glimpse of my own thoughts, and I gave equal weight to comparison and contrast. I also tried not to restrict discussion to themes by adding ‘other elements’ — suggesting camera work, acting, location, lighting etc, could equally be explored.

And here’s the crunch. Perplexity’s answer was so good I didn’t really feel there was much more to say. Why not just send you all the Perplexity response and head off to the beach?

Perplexity.ai's response

Here’s how it started off:

Sidney Lumet’s The Hill (1965) and Fail Safe (1964) stand as stark examinations of institutional power, systemic failure, and the human cost of rigid hierarchies. Though distinct in setting—one a WWII military prison in North Africa, the other a Cold War nuclear crisis—both films interrogate the moral decay inherent in systems that prioritize control over humanity. Through unflinching realism, claustrophobic direction, and morally complex characters, Lumet crafts two parallel narratives that reflect the anxieties of their eras while offering timeless critiques of authority. This report explores the thematic and stylistic intersections and divergences between these works, revealing Lumet’s mastery in exposing the vulnerabilities of systems designed to suppress individual agency.

I’m not a professional film reviewer, nor am I an academic, but there’s not much there I would disagree with — and I would readily admit my own thoughts on the topic are pretty prosaic compared to this. Perplexity goes on under clear headings to dissect the topic, before concluding thus:

The Hill and Fail Safe exemplify Lumet’s ability to dissect power structures through intimate, high-stakes narratives. While The Hill exposes the brutality of colonial and military hierarchies, Fail Safe interrogates the existential threats of technological overreach. Both films, however, converge on a central truth: systems designed to protect often enslave, and humanity’s greatest vulnerabilities lie in the very institutions meant to uphold order. In an era of resurgent authoritarianism and AI-driven warfare, Lumet’s films remain urgent reminders of the costs of unchecked power.

Here’s the full prompt and response.

Perplexity here is not just parroting its sources; it’s pulling them together into a paper that responds to my prompt at a level that’s breathtaking. It is thinking, in the sense that it is not merely summarising from collated sources. There’s little to quibble about here. The effect is jarring, I’ll admit. We’re not in Kansas anymore.

Or are we?

Perplexity prides itself on showing its thought processes, and that in itself is impressive. You’re watching a computer work, you’re a teacher checking your student’s construction of an argument.

Perplexity.ai's train of thought

But we’re still hamstrung, in a way that we’ve been hamstrung before in generative AI, by a lingering opacity. I can now see how Perplexity decided to go about the task, and I can see, up to a point, how its thinking evolved based on the sources it found, but I’m still not sure of the provenance of each element. Perplexity provides links in the text, sometimes after each sentence, but on closer inspection it’s not always clear how, or even whether, the source cited is relevant.

Indeed, when I asked Perplexity a follow-up question about which of the sources it cited made any direct comparison of the two films, I realised that we had at least one foot still in Kansas. How much of its answer was its own work, I was essentially asking.

It replied that only one of the 31 sources cited made any link between the two films, and that, it said, was fleeting. In fact, the link it provided made no mention of The Hill, and when challenged Perplexity apologised and corrected its answer. It said:

The most relevant comparisons between the two films actually come from source1, which mentions both films in Sidney Lumet’s filmography. However, it doesn’t provide a direct comparison. The closest it comes to comparing them is in separate sections about each film…

It concluded its answer thus:

The comparative analysis in my previous response was largely synthesized from separate information about each film, rather than from a source directly comparing the two. I apologize for any confusion this may have caused.

I’m not going to quibble. In this and countless other responses Perplexity is hitting them out of the park. Checking its work is easier than in a lot of similar tools, but it’s still problematic. Still, in terms of delivering a well-constructed argument that not only synthesises sources but actually builds an argument from scratch, providing enough detail to know that it’s not just conjuring up fake stuff (hallucinating), we’re already on a new level.

Matthau holds forth in Fail-Safe (1964)

But.

Perplexity and its ilk should be congratulated for what they’ve done, and I’m sure things will improve quickly from here.

But for us creative/knowledge professionals what does this mean? What does this mean for journalists?

We’ve hopefully got past the low-hanging fruit: the mostly knee-jerk responses that it’s going to put everyone out of work. I don’t deny that this is happening already. Why hire people to write a mealy-mouthed press release or marketing blurb when AI does it for you — without all the fingers that insist on adding unwieldy, yawn-inducing buzzwords? Why hire people like me to write sponsored content, ‘white papers’, or other nonsense? Nobody reads those things anyway, so why not have a machine do it (as, no doubt, is already happening on the side supposedly reading these things)?

In that sense Perplexity and its ilk are simply moving the already-moved goalposts. Seen clearly, we’re moving apace to a world where ‘just good enough’ — where we accept a level of mediocrity because it is more efficient, and because that’s what is, under current conditions, considered acceptable — is now just about good enough to roll out at a level most industries and professions would be willing to sign off on. We’re not just talking creatives/media here: we’re talking law, government, manufacturing — any sector which relies, at some point in its procedures, on the written word. (Think instruction manuals, documentation, briefs, summaries etc.) One AI will create, and another will check. The human involvement will be as minimal as on a modern production line. We can’t escape this fate.

Connery in The Hill (1965)

So for journalists and other creatives — where we’re being paid to think up new ways of saying things, connecting things, reinventing stuff, moving stuff from one medium to another — the obvious answer is that it allows us to test story ideas, theories, half-baked thoughts. But that in itself is somewhat half-baked. It’s the ‘faster horse’ fallacy.

So let me lay down a few Wagstaff Laws, none of which come from AI, I promise, though some might perhaps have begun life while I was reading other people’s ideas.

First off, we need to recognise that the better these things are, the harder it will be to find errors. We have to assume there are errors, and to assume those errors can be big or small — an incorrectly cited source (breeding claims of plagiarism, IP theft etc.) among them. Perplexity has shown that its footnoting, though impressive, is still flawed, and I’m guessing it might never be perfect. The old Perplexity provided half a dozen sources for a response; Deep Research cites more than 30.

Finding those errors isn’t going to be fun, and might end up taking more time than actually researching and writing the piece from scratch. Yes, AI could do the checking, but then you’ll need to check that as well. And how can I be sure its search, though apparently thorough, didn’t miss some stuff? I take some pride in my ability to search stuff online. I have, after all, been doing it for nearly 35 years. How do I know that Perplexity covers all the bases I would have?

And that’s just the small stuff. What if your AI comes up with some great idea that it doesn’t know it cribbed from one of its sources? We’re now into dangerous territory, and not just for academics. If journalists rely on AI to develop story ideas, it has to be assumed that the AI is going to find out if someone else has had a similar idea. But is it going to tell you?

Related to that, my second law: The more we rely on AI for ideas generation, the weaker we will become. We will depend on it, not because we couldn’t come up with it ourselves, but because we don’t need to come up with it ourselves. So we’ll effectively outsource that part of our cognitive effort, and we’ll wake up one day entirely dependent on something beyond our control. This is fine if it’s something like knowing how to put up a gutter pipe, or even knowing how to drive. But it’s not fine when it’s a skill that is unique — my idea for a story, unless it’s people in a burning building, or the new GDP figures, is never going to be identical to someone else’s, because of all the experience, successes, failures, peccadilloes, biases, enthusiasms, hobbies etc I have allowed in over the years. If I let go of some of that, not only will I gradually get worse at it, but I won’t be absorbing and integrating new experiences, skills, peccadilloes etc. I will no longer be the journalist I was. The point is: we don’t really understand what makes us us, so outsourcing a core piece of what makes me me will lead to a different, and I would have to assume lesser, me.

The generals in Fail-Safe (1964)

For example: I’ve gone through all the sources for the The Hill/Fail Safe piece but I’m still not comfortable. I still don’t know where the sources end and the ideation starts. And now I don’t know where Perplexity’s ideation ends and mine begins. That dissipation of motivation to write the piece? It comes from the blurring of these lines. My creative engine depends on believing that this is my idea, my story, my execution, my name at the top. What happens when a story is mostly the work of AI?

So those are the pretty obvious laws. The third is less so: The more we use tools like this, the more we’ll have to reinvent both how we use them and, in the process, ourselves. This in some ways is quite exciting. I’ve not come across many people using Perplexity, but I’m sure there are lots. Eventually, just as with Google, we’ll all be using it, or something like it. So there’s little point in journalists working on pieces that, to an acceptable level of mediocrity, can be produced by AI. So we’ll have to change. Perhaps our jobs will too, if they survive.

To do that we’ll need to figure out ways to retain our soul — by which I mean, the indelible proof that what we produce is ‘us’, even if it’s a markets wrap. I suspect it will be by learning the art of prompts well enough to squeeze out of AI what only it can do, or what only it can do well. What I mean is this: I can ask Perplexity to help me think out a piece juxtaposing two films, something I could have, and should have, done myself. But if I asked it something I couldn’t possibly do myself, at least in the time available, wouldn’t that make me a better journalist/writer/thinker?

For example: Take the scripts of the following directors (Kubrick, Lumet, and 20 others) and gauge from the language used which of them would have been banned or deemed unacceptable according to current laws. Perhaps that’s too easy. I was listening to an interview with Peter Turchin, an academic who applies mathematical rigour to history (‘cliodynamics’). When the interviewer, Nate Hagens, asked him what he’d say if he got five minutes with Donald Trump, he talked about funding to do the research necessary to explore what lessons could be applied from his work to the present state of the world. The research, he said, would take five years. Nate, quite reasonably, asked: are we still going to be here in five years?

Peter Turchin (left) and Nate Hagens

To me it was an opportunity missed for Turchin. While I think we probably do have five years, it’s true to say time is something we don’t have a lot of, whether we’re talking personally, politically or planet-wise. Can AI help us with that? Yes. I would like a decent AI to look at the things that Turchin is looking at — essentially periods of elite rule and wealth-pumping, and which lead to catastrophe and which don’t — and do it all in five days rather than five years. If Turchin and his team are good, and I’m sure they are, they should be able to rise to the occasion, and at least get us a ‘good enough’ working conclusion to be able to take to Trump and help him make more informed decisions. There are some things where getting stuff done quickly is more important than getting things done perfectly, to paraphrase Voltaire, or Aristotle, or whoever it was.

As journalists we need to rise to the challenge of this. We need to realise that our core skill is coming up with better ideas than the other guy, and executing those ideas. We need to mesh that with another skill: ‘better prompts’, which in turn should lead us to ‘better stories’ which may, in the end, look nothing like what we now think of as stories.

There is another option, which may in itself not be a bad strategy. AI will shape our world more than any other technology, because it will shape us, our minds, our bodies, and our dependence on it. It’s not like a car or a kettle; we can live without both. But AI has already made it impossible for us to switch it off. And so it follows that the most creative act is one of independence — to essentially live without AI. To not use it as a crutch, whether for finding the cheapest flight or assessing CT scans. For a journalist this would mean writing as a sort of clandestine, subversive activity. It would be like keeping one eye closed when turning on a light, to retain some night vision. You would reject all technologies that make you dependent on something you can’t fix yourself (so a manual typewriter might be OK). You would truly be an independent thinker, and I wouldn’t be surprised if you ended up being smarter than the people around you.

This is a radical solution, of course. I’m convinced that we must remain masters of our technology — a red line I suspect we’ve already crossed in many cases — but I believe that entails not allowing ourselves to accept the false efficiency god of mediocrity. As artists like Mark Haddon argued in a joint letter published today urging the UK government to rethink plans to weaken copyright laws governing artwork and writing for the sake of improving AI, we should not fool ourselves about what is at stake if we allow mediocrity to win. The creativity that every human is born with — and assiduously develops until it’s schooled out of them — would wither and die in the face of AI-generated content. We somehow have to keep the door to mediocrity closed as long as possible while seeking new definitions of what our human minds can do, to discover and unleash those capabilities we know are hidden somewhere within.

AI should be the spur that drives an assault on that bastion of locked-in creativity. Our future happiness depends on it.

Dealing with email embarrassment

By | December 20, 2024
Photo by Ivan Aleksic on Unsplash

I assumed that by now we’d be using something better than email. (Remember Google Wave? No, I don’t either.) But we aren’t. So it’s probably time for me to make a confession: I’m still struggling to find the perfect email app.

After a recent spate of embarrassing moments when important emails just passed me by, I vowed to come up with a system that didn’t let that happen again.

For years I’ve been using Apple’s stock email program (‘Mail’) and souping it up with rules, extensions and smart mailboxes. It looked good. I had at least 40 different smart mailboxes, a dozen or so rules, and everything seemed to be under control.

It was an illusion. By automatically directing things into smart mailboxes I cleaned up my inbox, but to the point where I missed messages. Sometimes it’s not even clear why I missed one. Was it a bad rule I set up? It drove me nuts. An email from an accountant. Missed. An email from a teacher. Missed. An email from the guy who does, well, I’m not supposed to tell you what he does. Missed.

So I went back through the email apps I had played with in the past. There must be something better out there by now, I reasoned. We’ve been playing around with email apps for 35 years (I was very happy to see Pegasus, my first email love, still going strong); surely we’ve got something solid and dependable to sink our teeth into?

Not really, is the answer.

I’m a Gmail user, but that’s only one of four email accounts I need regular access to, so it couldn’t just be a Gmail client (like the discontinued Mailplane, or Mimestream). Airmail is OK, but it looks and feels a bit dated these days. The same goes for Thunderbird. It’s great to see Mailbird with a macOS version, noting the work Andrea Loubier did to get the app going, but it didn’t seem to add much to what Mail and others already do for free. Postbox is another dead man walking, having been acquired by eM Client in October. (eM Client is more of a productivity tool, and while it’s competitively priced, it fell outside my zone of candidates.)

Photo by Lucia Sorrentino on Unsplash

The field ranges from all-singing, all-dancing apps at one end that promise to do everything except clean the coffee machine, to ones that strip everything away and keep a tight focus (by hiding everything from the user, like a zen Marie Kondo apartment where everything has been stuffed in a cupboard). I needed something halfway between the two. In the end I settled on Spark (both the iOS and macOS versions are available on Setapp; Canary is another email app on Setapp worth a look). There were, and are, things I don't like about it, but I've tried to focus on what I need the email app to prioritise.

In short I need it to make sure I know that an important email has arrived. Spark is pretty good at that, allowing the user to mark sender(s) of a message as a priority sender. That means that anything from them will be shunted to the top of your list.

Then you’re able to assign an email thread as a priority, which colours it in a subtle orange. (A little Harry Potter scar appears alongside the thread to help you identify it.)

Everything is shunted into one of three categories: Personal, Newsletters and Notifications (a sort of halfway house between the two). It actually works pretty well, although I'd love to have more control with tagging (it works with Gmail, but not with other email services) or folders and filters.

And there’s also some AI guff that I really could do without. I never want anyone who receives an email from me to think it might have begun life as an AI prompt. All this kind of stuff is going to seem embarrassingly lame in a year or so and I wish development dollars weren’t spent on this when there are better things to work on.

Security, for example. Spark's owners Readdle have been criticised for storing email on their servers, something they argue is necessary for some of the features. I don't like it, though, and I wish there were a way to avoid it. It's one reason you can't access Protonmail with Spark, even through the former's mail bridge (which works with apps like Apple Mail).

I’m pretty happy with the outcome, and while I don’t have any use of the team-based functions, where you can presumably work on drafts with others, there’s clearly ambition to make the app better and smarter. Which is good: the app does crash, and sometimes the little x’s that you click on to close a mini-window don’t respond. And I’m not happy they only offer an annual billing. It’s $60 a year though there is a free version which I think offers most of the features that make it work. So try that first if you’re interested.)

Spark’s inbox

Given I’ve been writing columns about email for nearly 25 years, I’m a bit shocked I’m a) still writing about it and b) have had to admit I screwed up with lost emails enough to take action.

So I’m probably not the person to offer advice on the topic. I’d rather hear from you. But in the meantime, here are some lessons I’ve learned:

  • be a good emailer: acknowledge when you receive an email if you don't intend to respond in detail immediately. You may save a lot of misunderstanding;
  • build your email world around the people you care enough about not to leave hanging. Tags, folders etc are fine, but if you're missing emails, or forgetting to reply to those that deserve a reply, you're doing something wrong;
  • think about email as the start of things. Spark has some pretty good extensions to help you connect with other apps that do a better job of handling tasks, calendars etc (my favourites: Agenda for contractual and project-level discussions, NotePlan for calendar+ and personal tasks, TickTick for assigning tasks to and managing others);
  • don't, whatever you do, use AI to write your emails unless the person you're sending them to deserves it;
  • belt and braces might be a good way to go: I've kept my Apple Mail app going, and use that for digging around and organising the many newsletters I subscribe to.

Thanks for reading and let me know what I’m missing — either in terms of apps, or of the greener pastures of Windows.

Next time: Revisiting RSS

The First First-Person War

By | November 14, 2024

Drones are changing the way wars are fought, but they’re also changing the way we experience those wars.

Television brought the brutality of war into the comfort of the living room. Vietnam was lost in the living rooms of America — not on the battlefields of Vietnam. — Marshall McLuhan (1975)

Hamas leader Yahya Sinwar is sitting injured in what was once a red armchair in a bombed-out house in Gaza, his face covered with a scarf, when an Israeli drone tracks him down. As it hovers a few feet from him he throws a stick at it, apparently knowing his fate. Tank shells and missiles rain down on the building, killing him.

A Ukrainian drone weaves between blast-shredded trees in the Kreminna forest until it finds a target: a single Russian soldier hiding in a bunker beneath a tree. He stares at the drone a few feet away and tries to fire his weapon, but it’s too late. The video turns to static, signifying the drone has exploded. (Source: GWAR69)

There’s something deeply unsettling when a war doesn’t just come to your living room, but when it makes you, the viewer, the angel of death. You are the payload, looking into the eyes of those about to be destroyed.

We are used to these videos now, either cheering along with them, or, more likely, skipping past them. The juxtaposition of technology and the banal horrors of the battlefield is nothing new, but staring into the low-res eyes of the condemned takes a bit of getting used to.

The war in Ukraine has helped gamify warfare for the civilian as for the drone operator, feeding us recordings of “first-person view” (FPV) drone attacks on vehicles, civilians, buildings and trenches — clips usually accompanied by martial music and cheering from the operators.

In some ways it's the logical conclusion of the consumerisation of IT. First it was the office, where we brought our iPads and demanded they work with our office computers. Now it's the battlefield, where consumer drones are pimped into service, giving us an FPV of combatants as they are hunted down in buildings, in bunkers, in fields, in vehicles. The medium, in Marshall McLuhan's words, is the message: such videos are intuitive to us because they are no different to a first-person shooter game. The only quirk is that when the drone finds its target, the feed is lost and our imagination fills in the blanks.

Fog

It’s hard to overstate the impact of drone innovation on the way wars are being fought — and will be fought in the future — and to recognise that this is being driven not by the military industrial complex but by small, quasi-entrepreneurial teams using existing drones, 3D-plastic and -wood printers, cheap Raspberry Pi computers etc. Yes, it’s still a war of heavy artillery, jet fighters and tanks, but drones have forced their way into nearly every facet of the battlefield, to the point where commanders cannot afford to operate without them. South Korea has begun replacing mortar teams with drone teams, and former Google chief Eric Schmidt has said the US should replace its tanks with drones.

Perhaps the greatest change is the way drones have not just lifted some of the fog of war, but made both intelligence and targeting highly bespoke: a soldier on the battlefield can no longer count on camouflage, the protection of steel or concrete, or even speed, to evade the enemy.

A Tet enforcement drone in Oblivion (2013)

We’ve grown used to the trope in popular culture: the all-seeing eye, despatched to track down a protagonist, in OblivionHis Dark MaterialsWar of the Worlds (2005), Dune etc. But few writers envisaged the reach and scale of what is being reported from Ukraine (and of course, we have to factor in that most sources have a bias, even those translating reports from Russian Telegram channels. Everyone has a dog in this fight.) Nevertheless, there’s little doubt that in the nearly three years since the Russian invasion of Ukraine beyond those parts annexed in 2014, no other weapon has evolved so quickly.

Drones patrol the sky hunting for prey. Any soldier is a potential target for a kamikaze drone. A Russian soldier walking to a nearby village for spare parts is killed on his way back to his mortar battery. His platoon commander and a fellow crewman are spotted; the latter is killed, the former loses a leg. (Source: WarTranslated)

Soldiers have grown used to surrendering to drones — what is called “non-contact surrender”, according to David Hambling, writing in Forbes. In one image shared online, what appears to be a Russian soldier attempts to surrender to a drone, holding up a piece of equipment as a bargaining chip. The equipment is likely a relatively basic Russian-made drone jammer.

Russian soldiers have reported that they have been threatened with attack from their own drones if they do not comply with orders to advance. “If you don’t want to go, they throw a grenade from above to speed you up,” the soldier, Lt Oleg Guivik, is quoted as saying by military researcher Chris O, who translated text and a video found on a Russian Telegram channel. (It’s not possible to verify the source material.)

Substitute robot for drone and you can see how far we have come, and where all this is going.

Indeed the tail has begun to wag the dog. Social media teams embedded in units quickly edit and circulate FPV videos. Their popularity has allegedly inspired Russian commanders to co-opt the drones for filming propaganda for their superiors rather than for military purposes, according to one Telegram channel.

Battle-ground

The result is that drones factor in to many battlefield decisions. Artillery weapons fired less than 20 km behind the front line, says Trent Telenko, a specialist in military logistics, can’t relocate fast enough to avoid drone counter-fire. Evacuation of the wounded and civilians is vastly complicated by the threat of drones, according to a piece by Kris Parker in Byline Times, even when the evacuation is done by civilian volunteers.

Putin has certainly taken the trend seriously: last year he said Russia aimed to increase its supply of drones by a factor of 10 this year, to 1.4 million. Ukrainian President Volodymyr Zelensky has said Ukraine can produce 4 million drones a year, according to Parker. The numbers are probably meaningless, for several reasons. Many drones are either bought by the units themselves, or donated. Some never appear: Russian front-line troops have complained that commanders take donated drones and sell them at local markets.

Barbecue

What is clear is that the evolution of drone warfare will continue to be rapid. It is a war of attrition, where each tries to parry the other’s technological thrust.

Russian soldiers have done their best to protect themselves and their vehicles from drones, with limited effect. They have constructed wire cages atop their tanks, called 'cope cages' by some, though their effectiveness is unclear: the Russians prefer to call them 'barbecues', and their larger, turtle-like shell a 'king barbecue', suggesting even they don't have much faith in them. Indeed Ukrainian munition producer Shock Wave Dynamics has recently produced a 3D-printed bomb that punches debris in a specific direction, nullifying any cage or shell, according to Stuart Rumble of BFBS Forces News.

A modified tank (Perun, Youtube channel)

Others simply hope to outrun the drones on motorbikes.

Jamming the signal between drone and operator is still the most effective defence. Beyond jamming, both sides have been experimenting with net launchers, which are released to 'capture' the target drone and bring it down. FPV drones have been successful in intercepting long-range fixed-wing drones like the Russian Gerbera, usually by crashing into them kamikaze-style. Other approaches include trying to nudge them off course.

In response to jammers, the Russians started using fibre-optic drones, with a spooling line of angel-hair-thin cable to control the drone and feed back high-definition video until it reaches its target. While the fibre-optic cable reduces some of its manoeuvrability, it renders useless drone jammers, which rely on intercepting wireless signals between drone and operator.

Autonomy

It is inevitable that drones both large and small will become more autonomous, free of the need to be directed by a remote operator, able to find the target itself, and therefore invulnerable to jamming of communications or GPS signals. In other words, a seeing drone able to make its own decisions.

Max Hunder of Reuters wrote in July that Ukrainians are already working on this, looking to create a targeting system that is cheap and easy to build, using Raspberry Pi computers and a simple program. Indeed, this is where the next breakthroughs are likely to come. The cost of making drones is falling, but thousands of them are expended each week. The goal will have to be to make drones that can not only evade defences but deliver a punch worth the effort.

So expect to see more like Kyiv-based Swarmer, a startup which is using AI to automate the launching and flight of fleets of drones, according to Hunder. Possibly helping such initiatives is that they are less dependent on government budgets and more on private money, especially Silicon Valley's. Swarmer just finished a round that included not just a defence tech company (R-G.AI) but also a Web3 player, an LA-based VC focused on Ukrainian startups, and aerospace-focused VC Radius Capital Ventures.

AI is already being used to direct some of the country’s long-range drone strikes inside Russia, Hunder wrote, quoting an unnamed official as saying that such attacks sometimes involve a swarm of about 20 drones, in part to divide up tasks and to distract and confuse defences.

What is clear is that the lines between a military drone and a consumer drone are blurring. At the beginning of the war drones could drop little more than a hand grenade. Now Ukraine’s “Baba Yaga” drones can carry a 40 kg bomb, according to Daniel R, an imaging physicist, citing photos from Ukrainian X feed @watarobi. (Much of the bomb and release appear to have been 3D-printed.)

A Dragon drone in action: (Daily Telegraph)

Ukraine’s thermite “Dragon” drones appeared in late August, one dropping molten metal as it sped along a tree-line, setting trees and undergrowth ablaze in a display frighteningly reminiscent of napalm raining down on Indochina. Thermite is a combination of oxidised iron and aluminium that burns at about 4,440° F (2,448.89° C), according to Howard Altman of The War Zone, effectively making it a sort of modern-day flamethrower, dropped or sprayed. They are now being used on tanks and bunkers, small enough to navigate their way through narrow slits that were previously considered impenetrable.

So what does all this mean?

I’d offer a few thoughts.

Assassins

From a tech point of view, we're seeing drones, 3D printing and a startup mentality (and VC funding) all find their natural fit: dual-use. AI is on the way to doing something similar: ex-Google CEO Schmidt has been working on AI-driven military drones with a startup he founded, White Stork. As I mentioned above, he has called on the US to ditch its tanks, and in a speech in Saudi Arabia last month predicted that artillery would also disappear. He called drone warfare "the future of conflict", driven by the rapid decline in the "cost of autonomy."

He’s got stuff to sell, so of course he would say that. But there’s no denying a revolution has taken place without us really noticing. We have entered the era of robot warfare, but not as we envisaged it. We expected humanoid droids of some kind, but instead we are getting consumer quadcopters strapped together with duct tape, bombs and their hardpoints 3D-printed from plastic or wood, granted intelligence by an off-the-shelf computer the size of a small stack of credit cards. Cost: about $150, compared to a military drone costing between $60,000 and $25 million.

Think of them as small, fast, roving assassins, for now. The next step is swarms of robot drones each with a slightly different task, operating, as Schmidt predicts, autonomously.

All of this will be, is being, captured on video, and beamed onto your phone, should you wish to see it.

WTF happened to our music?

By | October 17, 2024

Accusations that some big name musical acts have been miming their vocals raises some tricky questions.

Popular music is in danger of becoming a parody of itself. Where once we dreamed of a fairer, more equitable landscape, big money is ever more concentrated in the hands of a shrinking group of artists and record companies.

The problem: at least one of those big names has been accused of miming their vocals, raising some uncomfortable questions. Not the least of which: are we being suckered into paying insane amounts for a one-off, premium artistic experience which turns out to be anything but?

For sure, it’s not the only problem in the music industry. But it does illustrate the absurd extremes the industry has gone to to preserve its hegemony.

Let’s take a closer look.

First off, the industry has defied the predictions — made by David Bowie, among others — that a long tail would evolve which benefitted home-spun talent, broke the monopoly of the major record labels, and led to a new 'artistic class'.

But, after exhausting all other options, the record companies, and streaming companies like Spotify, have been clever. Twenty years ago the industry was on the ropes. Napster looked to have crowbarred open recorded music, ripping the digital song on a CD into an MP3 file small enough to be freely distributed, freely shared and freely downloaded. Who would ever pay for recorded music again?

It took a while, but a way was found. The question was asked: why do people want to own music? The answer was: they don’t, if they can get access to that music whenever they want. Enter streaming. So then it was just a question of cost. Well, two questions. How much would users pay, and how little would artists settle for?

The answer to the second question was easier: it’s not how much the artists would settle for, but the record companies. And by the time the question needed an answer, that was really only a discussion among three players as the others had been gobbled up: Sony, UMG and Warner. Which made agreement easy. The three have exerted constant pressure on the streamers to pay more royalties to the big artists and less to the long tail.

This brings in decent money. But the real money is elsewhere: performance. The music industry has shifted back to what it was before the introduction of recorded music.

This is how big players make their money. Live music accounted for more than half of overall music industry revenue in 2023, rising 26% year on year (more than recovering from the pandemic.) But most of this money is driven by big names — Taylor Swift, Beyoncé, Coldplay, Elton John. Taylor Swift grossed more than $1 billion from her Eras Tour. Eagles made $70 million this year from tours.

But it still entails a cost — especially on the artists. Playing sets every night can be wearing. So within the ‘live’ sphere there have been innovations.

One of them is not to tour. It’s called a residency, and U2 grossed $230 million from theirs at the Las Vegas Sphere. Eagles have just started theirs.

Another is not to sing. A British musician called Fil has been running some recordings of live performances through software that shows the exact pitch of each note, and he's demonstrated that a number of artists, including Eagles, are actually miming some, if not all, of their vocal performances. (He's not the only one: Kiss, Red Hot Chili Peppers, Dua Lipa, Britney Spears, etc have all been accused of miming instruments and/or vocals.)
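For the curious: the core of what pitch-analysis software does — estimating the fundamental frequency of a note — can be sketched with a naive autocorrelation. This is a minimal illustration of the principle only, not Fil's actual method or tooling:

```python
import math

SR = 44100   # sample rate in Hz
N = 2048     # analysis window length in samples

def estimate_pitch(samples, sr):
    """Crude autocorrelation pitch estimator: find the lag at which the
    signal best matches a shifted copy of itself, then convert that lag
    to a frequency. Searches roughly the 80-1000 Hz vocal range."""
    best_lag, best_corr = None, 0.0
    for lag in range(sr // 1000, sr // 80):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sr / best_lag

# A synthetic 440 Hz tone (concert A) stands in for a sung note.
tone = [math.sin(2 * math.pi * 440 * t / SR) for t in range(N)]
print(f"{estimate_pitch(tone, SR):.0f} Hz")  # close to 440
```

Real tools use far more robust estimators, and have to cope with noisy concert audio, but the principle is the same: a mimed vocal will match the pitch track of a previous recording note for note, down to identical wobbles.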

If you’ve coughed up several hundred dollars to watch and hear them play, you might be a little upset. You expect a few elements to be pre-recorded, especially if the artist is also doing some complicated dance moves, or flying through the air, but as far as I can see, most of the Eagles frontline were standing in a row (like Pretty Maids, you might argue). No trapezes or twerks in sight.

Eagles pic

Of course, Abba have presented what many think is the next step — holographic avatars — but I think we’re still some ways off everyone doing that. But it does open up possibilities for posthumous performances. Or at least to further blur the lines between what is a real performance and what isn’t, addressing the issue of whether Eagles can continue to make money ‘performing’ after they have all gone to that Hotel California in the sky.

What Fil unearthed, and continues to unearth, is troubling. He tells me that no one has gotten back to him to complain, or sue him, and his methodology is transparent. It suggests that some artists see performing live as both a pure money-making event, and the definition of ‘live’ to be fungible. (It also explains why such concerts usually ban audience members from shooting video or recording audio on their phones; it was these clandestine recordings that allowed Fil to compare ‘live’ with a recording of a previous concert.) Fil, it should be said, does not see this as the norm across the industry — his YouTube channel includes a lot of genuinely live performances and told me “I think within the professional community miming is not looked upon too fondly.” (I noticed some of Fil’s analysis has been removed because of a copyright claim.)

Vicious cycle

In short, the industry has caught itself in a vicious cycle it may not be aware of. If the big money moves from audio to performance, however loosely you define it, then you now have a bottleneck in the supply chain, namely the artists themselves. This doesn’t matter too much for someone like Taylor Swift, though it has made her a target for crazies, but it does for those who aren’t able to hit those high notes anymore. It’s in some ways understandable: Eagles last had a hit single in (checks notes) 1980, with I Can’t Tell You Why, and the median age of the current line-up is 76. (But this shouldn’t be an excuse: Fil points out that other artists like Roger Daltrey, who’s older than any surviving Eagle, can still belt out The Who’s old hits pitch perfect, in the original key.)

In short, live takes its toll, and if that’s the only way to make money fans may be increasingly unwilling to shell out if they find out they’re not really watching what they think they’re watching. It might not deter many music fans, but it’s early days: we still don’t know how many artists do this. Fil has only so many hours in a day, but I feel we haven’t heard the end of this. A collapse in confidence could lead to a collapse in demand for this ultra-profitable revenue stream. A ticket for Eagles could cost between $300 and, well, the limit. A lower-level suite for Las Vegas on December 6 would cost $35,139.82. (And don’t get me started on TicketMaster’s dynamic pricing.)

It’s a long way from where we thought the music industry was going at the turn of the century. Where we thought the internet and digital would more equitably distribute the value of an industry, it has done the opposite. Where did we go wrong? I suspect it has something to do with the same errors that led us to think that social media, in its original meaning of a Web 2.0 that flattened the barriers to entry of creating and distributing content, would spread the wealth around.

As things get easier to produce, an abundance of choice triggers Barry Schwartz's 'paradox of choice'. While many of us love the endless array of music we can listen to, and love supporting individual artists on Bandcamp, a lot of us get confused and overwhelmed and gravitate towards the most prominent. In this way record companies become more important, because they are able to promote their artists across the full spectrum of media. The number of smaller artists grows, but the funnel between them and a decent income, let alone a big one, feels ever tighter.

Long tail

Meanwhile, the long tail stretches, ever longer and thinner, across the floor. Bandcamp, the most prominent marketplace for self-produced music, has more than a quarter of a million 'active stores' — meaning acts selling their music (in both digital and analog varieties, along with merch). That's a three-fold growth over 2022.

Bandcamp is great for artists, but its vibrant community is more of an anachronism than a view of the future. Followers are often invited to name their own price for whatever they buy, and are invited to ‘listening parties’ with their favourite artist. Some artists offer subscriptions, where fans get all the albums released in a particular timeframe.

The company has changed hands twice in two years, and only half of its staff have survived the moves.

End of Scarcity

The other effect of more democratic means of production and distribution is this: the output is no longer scarce. Napster may no longer be with us, but it taught us that a CD or digital album wasn't as valuable as we thought it was, and it coincided with the rapid decline of CDs. We stopped thinking of music as a possession and more as a service. When you buy a radio you don't expect to own the sound coming out of it. Why should a smartphone be any different? Enter streaming services, where you pay for everything but, apparently, don't care that you own nothing.

So where is this all going?

But I think we’ve seen in the music industry the bifurcation I mentioned in the last piece — where communities of artists survive through a closer connection to their audience, while at the other end we see a widening of that gap (you’ve got to be pretty detached from your [latex]%[/latex]-per head audience to mime your way through your performance, hoping a ban on cellphone recording would keep your secret safe.)

If Bandcamp, or something like it, survives, then I think there's hope that independent artists (including my own humble other self) still have a chance to make a modest income. That's by no means assured. The options for playing live to promote your act are shrinking: artists are playing half the shows they were in 1994, according to the Music Venue Trust, because doing so is too expensive.

AI

AI is the next wave, and it’s clear that the likes of Sony think it might make them richer. I’ve tried it out and I have to admit it comes up with some quite listenable stuff (here’s Udio’s effort at an early Yellow Magic Orchestra sound.) But it’s the musical equivalent of the awful AI art we’re drowning in. We will quickly learn to differentiate between AI and real music and woe betide anyone who tries to sell the former masquerading as the latter.

But done well it might create its own musical niche, and we can’t afford to be precious about it. Music has been artificially generated since the old piano rolls of the late 19th century. Algorithmic music dates back to the 1960s, and even I was using a primitive sequencer in 1982. Bands like Depeche Mode, Echo and the Bunnymen and Orchestral Manoeuvres in the Dark used drum machines (or tape recordings of drum machines.) Nowadays many songs are ‘composed’ of pre-fab loops woven together. Artificially created music is something we’ve long embraced.

It’s the songwriting that we probably hope will still remain a human-centric endeavour, the application of art. But even there we are bound to be disappointed. Lejaren Hiller and Leonard Isaacson composed a suite for string quartet using the ILLIAC I computer in 1957. Sony’s Computer Science Laboratory in Paris created an algorithm in 2002 that could resume a composition after a live musician stopped playing. The likelihood is that it will lead to a ‘just good enough’ approach to using music in ads, TV, airports, lifts, jingles, phone trees, in the process putting a generation of jobbing music creators out of work.

Money for God’s Sake

The bigger lesson? Art is created by artists, but while we claim we love art and want to support it, the people financing the production and distribution of it are more concerned with Mammon.

The music industry is probably a canary in the coalmine. It was the first major industry to be ‘disintermediated’ by digital/internet, and things looked bleak. But it tackled its problems and is now more profitable than ever, while the actual number of people who are able to make a living from it has shrunk. (These two are not unconnected: the big three have successfully pressed Spotify to pay more to the bigger artists and less to the others.)

So we have to assume that these trends will continue, and that they will apply to other industries too. Books have followed a similar pattern; podcasts are currently doing the same thing to radio/news/audio. Global podcast ad spending is likely to hit $4 billion this year, with more than 400 million podcasts. But the top 500 account for 44% of podcast ad spend. While podcasts aren’t as concentrated as music and the written word, it’s heading that way. I would imagine newsletters following a similar trajectory, as I mentioned in the last piece.

So where does this leave us? Music is always going to find a market, and the great thing that digital + internet has given us is the power of discovery. It's just that not so many of us as we thought might be able to make a living from it: only 4% of the 200 million 'creators' worldwide make more than $100,000 a year. A YouTuber with 20,000 views per day earns a little over the U.S. poverty line. That means that 97.5% of YouTubers don't make enough to reach that line.
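That claim is easy to sanity-check with back-of-envelope arithmetic (the RPM figure below is an assumption for illustration; actual YouTube payouts vary widely by audience and niche):

```python
# Back-of-envelope check on the YouTube earnings claim above.
# Assumptions, purely for illustration: an RPM (revenue per 1,000
# views) of about $2, and the 2023 US HHS poverty guideline for a
# single person, roughly $14,580.

VIEWS_PER_DAY = 20_000
RPM_USD = 2.0               # assumed creator revenue per 1,000 views
POVERTY_LINE_USD = 14_580   # 2023 guideline, single person

annual_views = VIEWS_PER_DAY * 365
annual_earnings_usd = annual_views / 1_000 * RPM_USD

print(f"{annual_views:,} views/year -> ${annual_earnings_usd:,.0f}")
# prints: 7,300,000 views/year -> $14,600
```

At those assumptions, 20,000 views a day lands almost exactly on the poverty line — which is why the "little over" framing is plausible.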

AI will only make that worse. Record companies have made clear they take AI seriously — both by exploring its potential and coming after companies that might challenge them. I don’t imagine AI will follow a predictable path: none of the prior disruptions in music this century have been. But I don’t see anything in there to suggest that original, heartfelt, authentic music will come out on top.

My suggestion? Dig around Bandcamp and look for some new stuff to listen to and support. My belief is that this is what scares music industry execs — a broad church of music lovers with catholic tastes. Rob Stringer, CEO of Sony Music Entertainment, acknowledged to Bloomberg recently that the power of the algorithm in streaming, pushing similar songs to users to keep them listening to the same kind of stuff, had been hugely profitable. But he saw the downside:

I listened to everything because the BBC had a government mandate to play you every type of music. So I was kind of open to that experience. Whereas I think the disadvantage with automated taste is that you end up, as you said, being funneled the same music.

I’m willing to give him the benefit of the doubt here, where he argues that this is about art, not money (he was an early Clash fan, so I have to.) But to me good algorithms, which tried to dig deeper into what you liked rather than just as close a match as possible, could unravel the streaming model quite quickly. You would quickly rub up against the limitations of Spotify and seek out more esoteric fare on the likes of Bandcamp.

Hurry though. Bandcamp itself feels like a commons living on borrowed time.