I lost my brother the other day. Less than a month after losing my aunt, it is starting to feel like carelessness. And to be clear, neither of these deaths was down to Covid-19, at least not directly, and this column isn’t about Covid-19, at least not directly. But as I spent some virtual time alone with my brother’s closed casket, I couldn’t help feeling that this was a future that might seem dystopian, but was instead rather comforting.
My brother was my only sibling, so there’s only me now from that family unit. And he lived on the polar opposite side of the world, in DC, and so was hard to reach at the best of times from my perch in Singapore. But Covid-19 had somehow rendered that distance meaningless; he could just as easily have been across the causeway in Malaysia, and quarantines would have made it almost impossible to be grave-side.
For me it was deja-vu backing up down the freeway.
When my aunt died in a British nursing home a few weeks ago, I could do nothing but send a few photos of her to a small mailing list of people unconnected to each other except through my aunt. (She wasn’t a real aunt, but as a childhood friend of my mother I had known her better and longer than most of my other relatives, and she had outlived most of those.) Most on the mailing list were not geeks, it’s safe to say, so those photos that did come back were usually upside down or sideways, which could have appeared disrespectful, but which would have amused my aunt. Her funeral was of course a small one, remote and ring-fenced by Covid-19, and I never got to hear what she had meant to others, or even see a photo of where she was buried. I had to sit alone with my grief, and read a few scraps from others on the ad hoc mailing list about how the day went.
My brother’s death was quite different. My tech-savvy nephew had arranged for me to spend some time alone near my brother’s casket via FaceTime, which was oddly peaceful. And then, with the help of his friends, he hosted a memorial on Zoom. As the introductory music played over a tasteful drawing, I watched as a list of people signing on flashed up on the screen — by the time it started more than 200 people were aboard. I made it through my own eulogy and then watched dozens of others talk about my brother. Most of the people were those I had never met, but it was moving to hear them talk about him, revealing angles of my brother that were new to me, and refreshing, like uncovering hidden doors in a familiar house.
The tools aren’t perfect of course. I forgot to turn my video off after I had talked, and so I dread to think what I was doing for the rest of the gathering. And there were the inevitable glitches. But most important for me was that people were there, wherever they were from. Distance was no obstacle, and neither was familiarity. You never know who will turn up at a funeral: that’s what makes them so fascinating. The former lovers, the unacknowledged offspring, estranged spouses and feuding relatives, all may turn up. The wake is a more selective affair. But here in a virtual room were dozens of people who had heard about my brother’s death, and who could join in to share the memory of someone without fear of somehow not being close enough. It was egalitarian in the way that wakes usually aren’t: no one had to fuss over who to invite, or whether there were enough chairs or canapés to go around. Instead it became a festival, where anyone could pay tribute. No one was left out because of distance, either physical or familial, and that seemed to me to be something rather beautiful.
I don’t know what will happen when Covid-19 is tamed. I’m sure a lot more things will go back to normal than people predict. But if we take one lesson from the pandemic I feel it should be this: whether it’s a funeral, wedding, wake, birthday party, bluechip annual general meeting or parish council conclave, offer a virtual version too. Don’t let physical distance decide who has a seat and a voice. And don’t just put a camera in a corner of the room and live stream it. Give it some thought, as my nephew did for my brother. It made a world of difference.
I’ve been wondering about the countries we haven’t heard from yet on Covid numbers, or which have just flown under the radar. What I call the dogs that haven’t barked. In short, who’s on the other side of the curve, and who isn’t?
– have already beaten COVID-19 (where the curve is back to where it started)
– are nearly there (where it’s halfway down the slope)
– need to take action (where it’s still near the peak)
That’s interesting enough, but I wanted to get a sense of what that means. First off, how many people is that — how many people are in countries that have beaten COVID-19 (actually it’s a little early to say they have; I’m using Bar-Yam’s terminology), and how many are living in countries which still need to take action?
So I did a bit of downloading and spreadsheeting. This is what we have:
So according to that data, more than half of the populations of the countries measured haven’t really started to tackle Covid-19. Gulp. There are some big countries in there:
Now, you could quibble about which countries are at which stage, and some countries have clearly done more than others, but the curve doesn’t really lie. At least, it’s as good a yardstick as any.
So my next question was: how many people are we leaving out? Turns out quite a lot. The countries covered had a population of 5.9 billion, which leaves about 1.9 billion people unaccounted for. The website says that the “set of countries is certainly not an exhaustive list, but we do highlight the countries which we find to be interesting or important in some way.” It doesn’t say why it found them important and interesting. Possibly the data was clearer for those included. Hopefully they’ll add more countries once the data is clearer.
So what does the pie chart of people look like if we account for the missing two billion? We get this:
The population in countries that haven’t started yet goes down to 42%, which is still a big chunk of humanity, while the ‘good guys’ account for about 23%. A quarter of the world’s population doesn’t appear at all.
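The arithmetic behind that second chart is simple enough to sketch. This is only an illustration: the covered/uncovered split (5.9 billion vs 1.9 billion) comes from the text above, while the per-category totals are assumed round numbers chosen to be consistent with the percentages quoted.

```python
# Sketch of the population-share arithmetic. The 5.9bn covered / 1.9bn
# uncovered split is from the text; category totals are illustrative
# assumptions consistent with the quoted percentages.
WORLD = 5.9e9 + 1.9e9  # covered countries plus the unaccounted-for

populations = {
    "need to take action": 3.3e9,    # assumed: ~42% of the world
    "beaten / nearly there": 1.8e9,  # assumed: ~23% of the world
    "not in the dataset": 1.9e9,     # from the text: ~a quarter
}
# whatever remains of the covered countries sits somewhere in between
populations["in between"] = WORLD - sum(populations.values())

for label, pop in populations.items():
    print(f"{label}: {pop / WORLD:.0%}")
```

Run as-is, this reproduces the rough shares discussed above: about 42% still needing to act, about 23% through or nearly through, and about a quarter of humanity simply absent from the data.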
So what happens if we put all this onto a map? Where are all these people?
It makes for grim viewing. Several things jump out:
The countries doing well are flattered by the presence of China, along with Vietnam, Thailand and South Korea. In population terms the largest country outside APAC to get a star is Greece, with 10.4 million people. It does not look good.
The gray areas are a big concern. Very little data from Africa and Central Asia was ‘interesting or important’ enough to include in the study thus far. As we’ve read in the media, Africa is unprepared for Covid-19: two countries account for nearly half of all tests carried out there so far, according to data collected by Reuters.
The other gray area is Central Asia. A piece in the Atlantic Council’s New Atlanticist this week said that the countries of the region were each ploughing their own furrow, with varied, sometimes suspicious, results: “Tajikistan and Turkmenistan had been clinging to a fantastical claim of zero cases for weeks, despite the recent spike in mysterious deaths attributed to pneumonia. Tajikistan finally reported its first fifteen confirmed cases on April 30, after weeks of speculations and warning from international experts.”
I don’t know what all this says, exactly, but it sheds a little light on the dogs that haven’t barked. Covid-19 is a pandemic, which means it infects people, not countries. Yes, we can seal borders, and keep people from traveling, but at the end of the day the disease will only be conquered when every country has flattened every curve — or, unlikely as that is, never let it rise in the first place. None of us is going to be able to travel as freely as we once did, or see supply chains and shops creak back into life again, until all the people in these countries are through the same tunnel we’re currently in. That could mean a lot more pain, heartbreak, and death.
So thinking of this by population size, and looking more deeply at the countries that either haven’t seen the curve appear, or aren’t reporting it for one reason or another, is one way of knowing how long we’re all going to be fighting this war.
When is innovation just another stab at the past, and when is it revolutionary? When it becomes a bit of a Twitter storm in a teacup, is possibly when.
Here’s an interesting case study in the offing: You might need to get your head around some unfamiliar terms, like bi-directional linking, breadcrumb navigation and transclusion. Or not.
My thoughts were nudged by a post at Amplenote, an online note-taking and note-sharing app that is worth a look. Like a lot of these players, they’ve been forced into offering features by the new kid on the block, namely Roam Research, which by taking the genre by the scruff of its neck has turned itself into something that knowledge workers are getting excited about.
The post was written by Bill Harding, founder of Alloy, the company behind Amplenote. They’re old school, by internet standards and their own words, not relying on VC funding, freemium models and the like. You can tell by the product that it’s neat, robust and reliable.
That’s mostly what people interested in this kind of tool are looking for. I’ve been a note taker and outliner since the early days, and I’ve tried pretty much every one that’s out there. Folks know that I feel that despite some great stuff, computer software has let us down when it comes to making software that understands us, rather than the other way around. (Why Won’t Computers Do What We Want Them To?)
But it’s taken Covid-19 to make me realise this is changing. And apps like Amplenote, or ones more familiar to you like Evernote, are getting caught up in it. Roam Research has given us a glimpse of how note-taking — the simple act of reading, hearing, thinking or seeing something, and storing that somewhere — is ripe for disruption, and it’s going for it. It’s early days, but lockdown may be the jolt that propels this genre into the mainstream, something no other note-taking app has managed to do. A day ago Conor White-Sullivan posted to Roam’s Reddit group that 10,000 new users had signed up to the app over the weekend, causing upheaval for his servers and users and forcing him to suggest that those ‘super-concerned’ use a rival app “for a week or so”.
He must know that suggestion really isn’t an option for most of his disciples. Roam is attracting a lot of interest — and beyond the usual numbers of people who dabble in this kind of thing. Why? Well, I think there are a number of reasons:
Roam works right out of the box. It’s all online, the (initial) interface is very simple, even prepopulating the page with blank but dated entries, prompting you to just start writing.
It gets as complex as you want it to get. You could just use it as a journal, if you wanted, but that would be a waste. It goes deep, with features being added with little fanfare, including sliders, nodemaps, and stuff I haven’t gotten around to figuring out.
But the key to all the excitement, I believe, is a formula that I think is the basic currency of software success: you get more out of it than you put in. Put simply, this means, for example, that if you link one page to another page, the second page updates itself so you can see that link. This is what they mean by bi-directional links, or backlinks. It might seem the most trivial and useless piece of data, but if you don’t know which pages link to the page you’re looking at, you simply won’t know what information you already have that you’ve decided is connected to it. Your effort in linking to that page now automatically creates extra value without you having to do the extra work.
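The mechanics of that are surprisingly small, which is perhaps why the excitement baffles some onlookers. Here’s a minimal sketch of the idea — two indexes, updated together — with names of my own invention, not Roam’s actual data model:

```python
# A minimal sketch of bi-directional links: every one-way link a user
# creates also updates a reverse index, so the target page 'knows' who
# points at it. Page names here are illustrative.
from collections import defaultdict

links = defaultdict(set)      # page -> pages it links to
backlinks = defaultdict(set)  # page -> pages that link to it

def add_link(source, target):
    """Record a one-way link; the backlink comes for free."""
    links[source].add(target)
    backlinks[target].add(source)  # the extra value, no extra work

add_link("Project X", "Meeting 2020-05-04")
add_link("Reading notes", "Meeting 2020-05-04")

# Visiting 'Meeting 2020-05-04', you can now see everything pointing at it:
print(sorted(backlinks["Meeting 2020-05-04"]))  # ['Project X', 'Reading notes']
```

The point is not the ten lines of code but the asymmetry: the user does one deliberate act of linking, and the system quietly doubles its usefulness.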
If you’ve read this far you’ll probably get it. But if not, let me just go into more detail. Computers, and the software that runs them, are useless tools if they don’t allow us to do more with the stuff we tell them than we are capable of on our own. You enter numbers into a spreadsheet because you know the software can do more with that data than you can. You let your Apple Watch collect data about your body because you know it’s going to do more useful things with it than you could with pen and paper. But for the most part knowledge workers have not had anything similar for textual data. Yes, AI can help find patterns in large bundles of it, but the same applied to our knowledge — the stuff we decide is worth keeping from our reading, talks, viewing and thinking in a computer — has not been so useful. Mostly, it’s just about being able to retrieve stuff more easily, so we don’t need to remember it, or remember where we put it.
But this is 2020. And we’re still there?
And this is where we come back to Bill Harding. And his screed. His point is a fair one: that all this talk of bidirectional linking is deja vu for many of us, from when, in the post-dotcom-bubble burst of the early 2000s, Wikipedia’s surprising success prompted interest in the underlying technology (here’s a piece I wrote in 2004 (sic) for the Journal about the disruption Wikipedia caused).
To drink in the enthusiasm we’ve witnessed in some corners of Twitter, bidirectional linking will evolve what’s possible…for those who dedicate themselves to the pursuit of learning this enigmatic craft. As fate & capitalism would have it, an elite cadre has popped up to help enthusiasts learn how to benefit from bidirectional linking. By all accounts, those who successfully assimilate the ideas of these programs transform their lives for the better. It seems reasonable, and I believe these gentlemen are doing great work to which they are wholly devoted.
But how different are today’s opportunities than what came before? To technologists of a certain age, the groundswell for a better connected network of ideas hearkens back to the halcyon days of 2005, when wikis were The Next Big Thing. Back then, companies like Wetpaint raised $40m to help organize the world’s knowledge. “A lot of venture money is flowing into wiki products” said Techcrunch in 2006. Having ubiquitous, bidirectional linking with surrounding context info was creating transformative opportunities in companies where people knew how to build them.
But with a couple major exceptions, wikis fizzled out, never catching on for personal use. Having set up my share of wikis during the early 2000s, I can attest that it was “worse than WordPress”-level bad for Twiki and Mediawiki. Developers might struggle through it to better capture their personal ideas, but the benefits of bidirectional linking were largely relegated to business knowledge software.
These new startups reviving and refreshing the ideas from the wiki craze is a great outcome for productivity enthusiasts. Especially with the expert guidance of smart people like Tiago and Nat, there’s never been a better time to help the humble wiki live up to its nearly-forgotten first round of hype.
If you sniff a little snarkiness in phrases like ‘elite cadre’, you might be right. After trying to lure Roam users away with an import tool and a pricing comparison, Amplenote’s Twitter account said, less than an hour before I wrote this, that it had been blocked by Roam’s:
It’s not really a battle of equals: Amplenote has 24 followers, Roam 16,000. And this is the thing. What we’re really seeing, I believe, is a new player coming in and bringing the necessary tweaks and rethink to an existing technology — in this case, personal databases — taking the best bits from its predecessors, adding a few new ones, and stealing the show to create a following and a buzz. This is not to say Roam isn’t impressive: it is, and I have not found any other app to match it.
And that’s why Amplenote and others (Bear, TiddlyWiki, WorkFlowy, TheBrain, Dynalist and the rest) are all struggling to add similar features, thinking about adding them, or defending why they don’t have them. This is good, and classic Christensen disruption. It might not be Roam that ends up winning this, but it has shaken up the market.
But what market? Is there really one for this kind of thing? Back in the early 2000s I would have said yes, because the kinds of people interested in this kind of thing were the same kind of people who bothered to read a tech column in The Wall Street Journal. The internet was a means of connectivity, and its potential was seen in those terms — could I store my stuff in more than one place, was the common question. So it wasn’t surprising that, as Bill recounts, everyone thought that wikis (the ‘readwrite web’) were the way to go. But others saw it differently, and all the smart money ended up going to using the internet to create more passive experiences — user generated, yes, but simpler, shorter, and where possible multimedia. It was all about eyeballs, and so content as knowledge slipped into the background as Twitter (status), Facebook (sharing stuff) and Google (search) came to the fore.
I’m not saying that’s necessarily changed. But Covid-19 has helped crystallise something that was already happening, namely that smart people are exploring how to leverage their knowledge and knowledge of software to solve the unsolved problems of the past, or to reconsider tools that had been largely forgotten. Knowledge work was once an obscure term; it is now on its way to describing pretty much all of us who sit at a computer, and it’s this realisation that has made people like Conor, I imagine, see there’s a market for tools that really address the problem of deriving more value from the cost of user input.
Bill’s error is a generational one: what was once ‘business knowledge’ is now something else entirely. Watch this video of Andy Matuschak, a software engineer who works at the Khan Academy, to see what this looks like (he’s using Bear, by the way, not Roam). It’s strangely captivating viewing, ASMR for the Knowledge Generation:
The other mistake is to think of this as ‘productivity’. This is not about that. This is not just a better task manager. I believe we’ve moved on from that — or at least recognised its limits. Now the thinking is, as Tiago Forte, one of Bill’s ‘elite cadre’, has mentioned, about acquiring and processing knowledge in a way that our brain retains it. You can almost hear Andy’s brain whirring as he processes what he’s reading before he expresses it.
So where will all this go? I’m not sure. Roam is talking about charging $15 a month, which is why people like Bill still think they have a chance to grab some of this market. To me it’s a rising tide and I’m pleased to see there are boats paying attention. After 20 years of focus on either ‘Getting Things Done’ or on the cuteness/elegance of interface, we’re entering a much larger ocean, which has the potential to bring these cutters, sloops and yawls into the slipstream of the incumbent tankers. Whether they go under or catch the current is anyone’s guess. Evernote has, largely, failed to find a larger audience (for lots of reasons) but the timing might now be right, especially as knowledge workers find themselves with plenty of time, isolation, an internet connection, and an urge to learn.
Addendum: Conor points out in a tweeted response to this piece that he has “been working on this problem for the better part of a decade. Strong(ly) agree the changes in landscape are great for us, but we wandered the desert for years before this.”
Computers and the software that runs them have long denied us the basic right of dictating to them — not letters and grocery lists, but what they should actually do for us — most importantly in the first step of thinking: the art of taking notes.
In the mid 80s I was studying history in London, and one of the first consumer PCs came out: the Amstrad. I was immediately intrigued, though I’m no techie. I remember going into Dixon’s one rainy winter afternoon on Tottenham Court Road and explaining my problem to the salesman. It was simple, I thought: I am a collector of events, and I want a computer which will do exactly what I currently do, but store it so I don’t have to carry around this pile of paper. It was simple, I told him. And I explained how I took my history notes, involving two or three basic steps. He looked at me blankly and tried to change the subject. “It comes with a printer and three spare disks.” I bought it anyway. But oh, how naive I was.
Because the reality is that 35 years on — 35 years! — there is still no way to do this. No app allows you to draw lines on a page and then add pieces to it wherever you want. I should know, I’ve tried hundreds of them (and if anyone does read this, I will get responses like ‘Have you tried OneNote?’ or ‘Aeon Timeline allows you to do just that.’ Yes, and no it doesn’t.) No app, in short, is smart enough to just ask you what you have in mind and then evolve into that, to help you shape the app in the way you want.
This is the fundamental failure of computers, and computer software. As a technology it’s failed to really find a place in our lives that we’re comfortable with, and that’s because it has demanded too much change in our behaviours. We are mostly compliant: back in the late 2000s executives at telcos were worried 3G was for nought, because people didn’t show any interest in using their phones for anything more than calls and SMS. It took Steve Jobs to change that, by building a consumer device we craved to hold. The rest came naturally, because of a great UI, but no one is claiming that the smartphone adapted to us; we adapted to it. That’s not to say it’s not useful, it’s just not useful in a way that we might have envisaged, if we ever sat down to think about it.
Indeed, the Apple revolution, which I would date from about 2008, cannot be detached from the broader mobile data revolution, which we’re just emerging from. This was a revolution in interfaces, but it wasn’t a revolution in terms of computing. We have become more productive, in narrow terms — we are online a lot more, we send more messages, we might even finish projects quicker — but no one is claiming that our computers mould themselves to our thinking. It’s apt that movies like Her try to explore what that might mean — that our computers learn our thinking and adapt themselves to it.
So back to me and my history problem. There are of course answers to it, but they all require us to understand the mind of the person or people who developed them. And I’m not ungrateful to these apps; they have long been welcome bedfellows. From TheBrain to Roam, MyInfo to Tinderbox, TiddlyWiki to DEVONthink, they have all rewarded the hours — days, weeks, even — I have invested in trying to understand them. But therein lies the problem. The only reward one can get is if one adapts one’s own mind to the creator’s vision, and, however amazing that vision is, this in itself is an admission of failure. I don’t want to have to conform everything to someone else’s vision, I have one myself, but there’s no software on this earth in 35 years of looking that I can wrestle into submission to my simple vision.
This is not to say the apps in question are a failure. I love them dearly and still use many of them. I have used my pulpits to promote them, and have gotten to know some of the developers behind them. These people are geniuses, without exception, and it’s not their fault their tools cannot be more than interpretations of that genius. We just lack the tools to tell our computers what to do from scratch.
‘Take an A4 sheet of paper, turn it horizontally so it’s in landscape, and then draw three perpendicular lines equidistant apart. Allow the user to write anywhere between the lines, and interpret a three-line dash as the end of each nugget. Interpret the digits at the beginning of each nugget as a date, which can be as vague as a decade and as specific as a minute. Order each nugget chronologically, whichever line it sits between, relative to each other, with gaps between according to the dates. etc etc’.
I still don’t see why I can’t have that software. I don’t see why I couldn’t have it in 1985. I probably could get a developer to whip something up, but then that’s already demonstrated the failure I’m talking about. I want the computer to do it for me, and not being able to, to have to rely on someone else’s coding skills, or even my own, means it’s not doing that.
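The irony is that the mechanical core of that dictated spec — parse the nuggets, read the leading digits as a date, sort chronologically — is trivial. What’s missing is the layer that lets me say it rather than code it. A sketch, assuming nuggets separated by a three-dash line and nothing more:

```python
# Sketch of the chronological-ordering step from the dictated spec above.
# Assumes nuggets are separated by '---' and begin with a run of digits
# (a decade like '1930' or a full date, however vague). The drawing and
# layout are the parts no off-the-shelf app supplies; this is the easy bit.
import re

def parse_nuggets(text):
    nuggets = []
    for chunk in text.split("---"):
        chunk = chunk.strip()
        if not chunk:
            continue
        m = re.match(r"(\d+)", chunk)
        # nuggets without a leading date sort to the end
        key = int(m.group(1)) if m else float("inf")
        nuggets.append((key, chunk))
    return [chunk for _, chunk in sorted(nuggets, key=lambda p: p[0])]

notes = """1936 Spanish Civil War begins
---
1929 Wall Street Crash
---
1933 Hitler appointed Chancellor"""

for nugget in parse_nuggets(notes):
    print(nugget)
```

Twenty lines, and yet the gap between this and “take an A4 sheet of paper…” spoken to a blank screen is the 35-year gap the column is complaining about.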
This feeds into a broader point. Tiago Forte, a young productivity guru, wrote an interesting thread about the serial failure of hypertext, which was a precursor (and loser) to the simpler Web, and the lessons we can draw from it, in this case for Roam. The simple truth: taking notes is a niche area because it’s not taken seriously at any stage of the education process (my history chronology capture was shown to me by the late and excellent Ralph B. Smith, who understood the power of note taking; I can still remember him demonstrating the technique in our first class. It has stuck with me ever since.) Note-taking is the essence of understanding, retaining, collating, connecting and propounding. And yet it’s mostly done in dull notebooks, or monochrome apps, none of which really mould themselves to what we write, take pictures of, record or otherwise store. (And no, Clippy doesn’t count.)
Tiago may well be right: the trajectory of knowledge information management apps (and there you have it; already segmented into what sounds like the most boring cocktail party ever) is that they just aren’t sexy enough to break out of a niche. Evernote came closest, but it got dragged down in part by its dependence on a vocal core of users who pushed it one way, and by its desperate need to justify its valuation. Truth is, people don’t value collecting information, in part because it’s so easy to recall: even with my 60GB DEVONthink databases, I more often than not Google something because I know I can find the document more quickly that way than in my offline library.
But this doesn’t explain the pre-Google world. Why did we let software go in the wrong direction by not demanding it submit to our will, not the other way around? Well, the truth is probably that computers were basic things, oversized calculators and typewriters for the most part. Sure it helped us write snazzier-looking letters, but heaven forbid us doodling on them, or moving the address around beyond the margins.
We’re still hidebound by our computers, so much so that we don’t realise it. I am rebuilding my life around the new tools, like Roam, and old ones like Tinderbox — a wonderful piece of exotica that is massive for those of us who like to poke around in a piece of software, but which basically means poking around in the head of its developers — and I get a lot out of them. But I am keenly aware that I would rather be just telling a blank computer screen to “take an A4 sheet of paper…”
This is part two of a series on the lessons we should, and shouldn’t be drawing from the Coronavirus Pandemic. Part One is here.
A crisis defines us. Perhaps more precisely, a crisis highlights what we have lost, and what defines us is how quickly we can regain it (or not).
First there’s the humanity. One of the redeeming features about Hong Kong’s dense life — and a restorer of faith in human nature — was that once you got into the hills and trails passing strangers would greet each other as if they were on a jaunt in Kew Gardens. It’s a strange world now, when we venture out alone and nervously pass another lone pedestrian at a safe distance, and bark when they get too close. And you know we are in the territory beyond compassion when a family loses one of its own to the coronavirus, and yet doesn’t want to say that is the reason for fear of stigma. Society, even at its smallest component unit, can break down quickly if social norms don’t catch up with the crisis in its midst. Now that family (friends of a friend) can’t properly mourn, or warn others in the tightly packed neighbourhood where they live of a lethal infection, and those around can’t find a way to offer support because they can’t speak the killer’s name out loud. Empathy withers, rumours prosper and dog ignores dog.
But there are other trends afoot. When there are practical things to do, air rushes in to fill the gaps. Our obsession with just-in-time supply chains has exposed its Achilles heel — a lack of the things we need the most. But others have stepped up. A group calling itself OSCMS-Mods (Open Source Covid-19 Medical Supplies) has emerged on Facebook “to evaluate, design, validate, and source the fabrication of open source emergency medical supplies around the world.” It has quickly proved its mettle: a document listing needs ranges from hand sanitizer to laryngoscopes, complete with glossary and warnings about safety and liability.
The Facebook group is dynamic. At the time of writing it has nearly 48,000 members. In the past hour or so I’ve seen posts by an electrical engineering instructor at the University of Wisconsin-Madison mulling whether to assign his students an infrared thermometer as a final project. Others chip in, saying they’re working on something similar, offering advice (“don’t forget to program in an emissivity factor”) or help (“I’m pretty fast with 3D modeling and a 3D printer. If you guys need some help with the casing, hit me up!”). In another post a resident of Vancouver Island reports, after driving 500 km, that there is no hand sanitiser or isopropyl alcohol on the island. Some offer alternatives (not all of them wise) but someone in Hong Kong offers a simpler solution: shipping some of the excess from Hong Kong, where emergency orders arrived too late and the price has fallen below its suppliers’ cost. Medical workers from Southampton, New York, post a photo of themselves in full gear to ask for help alleviating a shortage of surgical caps and masks. The jury is still out on this one, but there are some good suggestions, and hopefully the gap will be plugged.
Such ad hoc approaches are reassuring. For one thing, people want to help, and the platforms are there to make that a reality — Facebook, with its groups, and a growing mastery of the technology: 3D printing, materials, Arduinos and Raspberry Pis. Of course, these initiatives are only going to be meaningful if they are consistent, and find a way of ensuring that requests for help are not just met by comments and virtue-signalling responses, but concrete action. Time will tell. Technology can often be a hammer looking for a nail.
I think another side to this that may outlast these emergency responses is that the technology will find a way through to real usage, rather than a pure business model. In other words, that tools emerge not because people want to make money, but because they can be useful, and there’s nothing like a jolt to the system for us to realise we need different, or better tools and to define those needs better. A piece in TechCrunch relates how Jahanzeb Sherwani, who developed a popular screensharing app called ScreenHero which he sold to Slack, pushed out a follow-up app called Screen ahead of time — and made it free — to help teams stuck at home share their screens. Given how quickly we’ve grown sick of the ‘heads in a box’ conferencing view (where everyone to me looks like they’re baffled seniors, looking around for their false teeth) this tool works well by going the other way, realising that we’re online to work on something, not look up each other’s noses.
This reminds me much more of the 2000-2005 era, when collaboration and thoughtfulness tended to be the norms, a post-boom pause which led to the development (or propagation) of the tools that became Web 2.0, and which in turn provided the (largely unacknowledged) foundations of the social media era. I’m thinking RSS, XML, podcasts, wikis, tagging, web-based apps, microformats, the Cluetrain Manifesto, simple beautiful interfaces. If you think we could have got to Twitter, Facebook etc without the work of largely unrewarded pioneers of that age, you’re mistaken. But it was born out of a particular era, a transition from the web’s beginnings to the mobile, silo-ed era we know today. It was a vastly underestimated period, where an explosion of ideas and connectedness led to an explosion of tools to make the most of that. Nearly all those tools were open source, nearly all became bedrock of these later, frankly more selfish, times. But the spirit, it seems, lives on, and I am hoping that what I’m seeing in initiatives like OSCMS and Sherwani’s generosity is something akin to that: a realisation that technology is the handservant, not of viral growth and big payoffs, but of building connections between us – in times of calamity (personal or global) and beyond, by providing tools for free because they might be useful and might lead to something great.
I don’t think for a moment that these initiatives will of themselves be enough; it’s clear that nearly every public medical facility and service was woefully underfunded and hence underprepared. No battery of 3D printers is going to be able to fill that void. But hopefully the level of interest and involvement — call it civic, call it individualist — in trying to address that gap contributes to a broader discussion about what our baseline should be for supplying, funding, equipping and populating such services in the future. And just as Wikipedia arose, not out of commissioning existing experts to write up entries (an effort that failed abysmally) but out of letting anyone — no need to flash credentials — contribute and allowing the water level to rise by itself, so may we find that out of this Quarantined Era emerges a new sense of how individuals might contribute, and what mechanisms and tools need to be developed or honed to make that happen.
A piece by Politico confirmed my bias that there is a tendency among those watching the crisis unfold towards confirmation bias — nearly all the experts asked to contribute their thoughts on how the world will be changed said, in effect, what you’d expect them to say: the author of a book called “The Way Out: How to Overcome Toxic Polarization” said that there would be a, er, decline in polarisation. The author of a book called “The Death of Expertise” said there would be a, um, return to seriousness and respect for expertise. The author of a book about how social infrastructure can help fight inequality said the virus would “force us to reconsider who we are and what we value” and “make substantial new investments in public goods — for health.”
I’m not mocking these writers, or the article itself. It’s natural enough to see in the virus the seeds of the change one is hoping for or has already predicted would happen. Such predictions rarely stand the test of time. We saw the same phenomenon after 9/11, the last great external shock to the West’s system. People talked then about leaving New York, about embracing a different, simpler life. They bought canoes, bulletproof vests, ammunition, parachutes. Analysts predicted a quite different future for us all. N. R. Kleinfield wrote a decade on in the New York Times:
Paul Simon said he didn’t know if he could ever complete another album. A woman wrote on a remembrance site that she regretted that she had had children, that she had brought their innocence into a world no longer fathomable to her.
But there has been a chasm between expectations and reality. The prophecy of more attacks on the United States has not been the case, not yet at least. Bumbling attempts got close — involving underwear and a shoe and a 1993 Nissan Pathfinder — but the actuality has been that terrorist acts on American soil in the succeeding years have been, as always, largely homegrown.
So many things were expected to be different that have not been. Time passes, and passes some more. Exigencies of living hammer away impatiently. People — most of them, at least — began to become themselves. New York, which by its nature accommodates so much, was willing to absorb 9/11 and keep moving.
That day for many of us is as fresh as if it were yesterday, but the way we thought it would change us has grown stale. Yes, we have the security theatre of airport checks — though they, too, might change emphasis once the viral dust has settled — but for most of us our lives didn’t change substantially. (Paul Simon has released six albums, six compilations and one boxed set since 9/11.)
It’s understandable we feel that momentous events have momentous, long-term impacts on our lives, but the reality is that the changes wrought are both less and more than what we anticipate, even by the boffins among us.
Probably the best way to gauge the impact of Covid-19 is to look at the impact of its predecessor. Not SARS or MERS, although they highlighted how those countries with an institutional and collective memory of a recent epidemic are best equipped mentally and logistically for a new one; but the Spanish Flu of 1918-20, which affected much the same territory as Covid has — namely the world.
Firstly there are significant similarities between the two in the way they played out. As we have seen in Europe, Australia and the U.S., there’s a reluctance on the part of government to impose unpopular measures — most obviously to get people out of pubs, off beaches and indoors. The same was true in France in 1918, where local officials were reluctant to enforce measures such as closing theatres, cinemas, churches and markets “for fear of annoying the public.” Japan happily banned mass gatherings in its Korean colony, but didn’t even consider trying the same thing back home.
People are people. Officials don’t want to do unpopular things (except when they do not actually face the voter; Japan was by then a democracy of sorts). And while during the pandemic itself people behaved much as we’re behaving — most of us with “collective resilience”, as Laura Spinney puts it in her excellent Pale Rider — that group identity eventually splinters, and “bad” behaviour emerges. She points to the 1919 Rio carnival, intended to mark the end of the crisis even while the flu was still claiming lives, where the partying took a dark twist: one historian, Sueann Caulfield, found that in the period after the epidemic there was a surge in reported rapes in the city, temporarily outnumbering other types of crime. The point — beyond the horror of the crimes themselves — is that people behave in strange ways, and crises both fundamentally change behaviour and amplify existing traits. There is no simple outcome.
So predicting is a dangerous game, or it would be if we were ever held to the predictions we make. And it is, of course, far too soon to even know how this crisis will unfold, how long it will take and how many of us it will take with it. So it’s probably unfair to ask others to predict the lasting impacts, at least at this point, and unfair to mock them for their confirmation bias. I would love a more civil society that takes electing its leaders seriously enough to realise they aren’t electing someone to entertain them as much as operate the levers of government. I would love to believe that the selflessness we’ve seen come out of the crisis thus far would linger after peace returns, that we will properly honour those in and serving the medical professions — from the cleaner to the surgeon. That we will realise it can’t go on like this, that we have to take better care of the planet, not move so selfishly through it and past each other, that Gaia is a complex being that weaves everything into her web, even unseen droplets that can pass between us, which we can use to kill each other if we do not take the utmost care.
But that would probably be asking too much. We have to assume that the crisis brings out both the best in us and the worst in us, and we need to stop virtue-signalling about helping old folk with their groceries or checking in on neighbours and just do it, sotto voce, both during the quarantine and after it. If you need a reason why, it’s because collective resilience is as selfish as looking after yourself alone; during crises we tend to perceive ourselves not as individuals but as members of a group, and hence (so the psychological theory goes) helping others in the group is a form of selfishness. Do it, but don’t pat yourself on the back and post something to Facebook about it. If you were really serious about it you would have been doing it long ago, and keep doing it long after.
So my predictions? I’ll jump off that cliff in a later post, but for now, it seems likely that we will both underestimate and overestimate the length and impact of this crisis. Those of us who think we’re well prepared for this will find that it hits us in other ways. Those of us fearful for the future will probably find fresh reservoirs of strength. The only thing I can predict with any certainty is that it will start to get boring quickly, and while people are dying, others will be defying curfews and sabotaging efforts to stamp out the virus. At the same time, I believe there will be more quiet heroics that will go untold, more quiet domestic solidarity among families that once fought, and the rise of business ideas amidst the lockdown that will make millions for those who nurse them to life. I’ll hang my hat on those predictions, alongside my mask and hand sanitizer gel.
The subscription model (‘subscription economy’ was a term apparently coined at least four years ago) is becoming de rigueur in many zones. App Annie’s recent State of Mobile report found that in-app subscriptions accounted for 96% of spend in the top non-gaming apps. As an overall proportion of spend they rose from 18% in 2016 to 28% in 2019 (games, of course, still dominate.) It concluded in a recent post: “Clearly companies across industries need to not only be thinking about their mobile strategy, but also their subscription strategy, if they want to succeed in 2020.”
But is this a wise move?
The attention economy, as folk call it, depends on competing for a limited resource — our attention. But it will always be trumped by a resource that determines what can be done with that attention — money. If we have no job, then our attention tends to be focused elsewhere. If we have a job but not much money, or are afraid of losing that job, then our attention to other non-job issues is probably limited.
The other thing the attention economy relies on increasingly is the subscription model. Recurring fees are much more appealing to a company than a one-time cost, which is why everyone is heading that way. But the subscription model has an Achilles heel: most services that used it in the old days did so because of the way they were produced and delivered — electricity, water, telephone, gas, newspapers, cable. And most involved some lock-in: an annual or quarterly contract, say, which hid the overhead costs of connecting, delivering and disconnecting in the subscription. But to disrupt these entrenched subscription services, OTT upstarts like Netflix, which didn’t have those costs, made it really easy to subscribe — and unsubscribe.
And here’s the rub. When subscription becomes a discretionary spend — something you can shed like a skin when the rain comes — you find the weakness of the subscription model. This is why old-guard subscription players like the New York Times have transferred their approach to digital, knowing it’s better to alienate a few users by making unsubscribing disproportionately harder than subscribing, absorbing the hit of a few angry folk like me in order to keep the bulk of subscribers who can’t be bothered to jump through the hoops.
So when the Coronavirus Recession hits you, what are you going to shed? Discretionary spend is the first to go, and that usually means monthly outgoings that just don’t seem as important as they were when you were coasting. Indeed, a lot of subscription-economy players, like Statista and others, only offer an annual subscription, although they price it per month to make it sound like less. It’s cheaper, and more predictable, for them to charge per year.
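Worth pausing on the arithmetic of that framing. A minimal sketch (all figures hypothetical) of how an annual plan advertised per month compares with a true month-to-month plan when you only need the service for part of the year:

```python
# Hypothetical figures: an annual plan advertised as a monthly price.
annual_price = 468.00                    # billed once, up front
advertised_monthly = annual_price / 12   # what the pricing page shows

print(f"Advertised: ${advertised_monthly:.2f}/month")  # $39.00/month
print(f"Billed:     ${annual_price:.2f}/year")

# A genuine month-to-month plan lets you stop paying in a downturn.
months_used = 4
monthly_plan_cost = advertised_monthly * months_used
print(f"Month-to-month for {months_used} months: ${monthly_plan_cost:.2f}")
print(f"Annual plan regardless of usage: ${annual_price:.2f}")
```

The per-month figure sounds modest, but the subscriber carries the full year’s commitment; the vendor, not the customer, gets the predictability.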
I’m not convinced that software is a good candidate for subscription models. I understand their appeal, and I am as frustrated as the developers are by how the mobile app store has reduced the amount people are willing to pay for good software.
When Fantastical, a calendar on steroids for macOS and iOS from Flexibits, went from a one-time fee to a subscription model it split the community — especially those on iOS who suddenly had to pay 10 times what they were paying before. John Gruber argued that $40 a year for a professional calendar app across all of Apple’s platforms was a decent deal, noting that those who don’t want to upgrade can still use the old version, and he’s probably right. But I haven’t upgraded, and have instead shifted over to another calendar app, BusyCal, which is included in Setapp, another subscription service which bundles together multiple apps for $10 a month. In part that was because of the annoyance of finding certain features still listed as menu items in Fantastical but blocked by popups:
Not the kind of productive experience I am looking for. Hobbling or crippling, as it’s sometimes called, is never a pretty look. You either have the functionality or you hide it.
A better route is to be flexible. Of course, there’s an upside to monthly subscriptions that are really easy to start and stop — when the sun shines, you can easily resubscribe. Indeed, the smartest subscription model in my book is the freemium one — where you can easily move between subscription levels depending on usage and how empty your pockets are. I recently cancelled my paid Calendly subscription, downgrading to the free plan, and was told by a helpful customer service person that “you can certainly choose the monthly plan on your billing page and pay for only the months you need it for! That might work better for you.”
I would recommend that any company moving to the subscription model do this. Or pursue the bundling model. Not to lock people in — where one subscription depends on another — but to make what might have been discretionary spend necessary spend through a compelling use case. Setapp is that model (though sometimes I baulk and wonder if I’m paying over the odds). A lot of the apps I use on Setapp are ones I would not otherwise have found — and I’m an inveterate hunter of new apps. By making the marginal cost of using them zero, Setapp lets them worm their way into my workflow. It helps this along by taking an interesting route: its appstore-like mothership is so baked into macOS that searching via Spotlight or Alfred will include in the results apps that haven’t been installed but are part of Setapp. So if I’m looking for a photo editor, a screenshot taker or a calendar app on my Mac, the results will include Setapp apps I haven’t installed.
This shoehorns productivity into the subscription model. It helps make Setapp more useful by introducing me to new apps it has in its portfolio — thus making all the apps in Setapp more recession-proof, because the more Setapp apps I use, the less likely I am to cancel the subscription overall. (Yes, those apps I don’t install or use won’t get a cut, or will get a smaller one, but the rising tide will help keep all the boats afloat. Or, to tweak the analogy: all the apps in the Setapp boat, amid the buffeting recessional sea, rely on the size of the boat to keep them afloat. Only if the boat sinks will they sink.)
Bundling makes a lot of sense across disparate fields — I’ve been advising media clients to seek out bundling options with other subscription companies which might previously have been regarded as competitors. Bundling should not be the cable TV model of putting the good stuff and the crap together and forcing subscribers to pay for both, but an attempt to anticipate — if your customer data is good enough you shouldn’t have to guess — what else of value is in your customer’s discretionary bucket, and to move both yours and theirs into the necessary one. A tech news site coupling with a tech research service, say.
In the meantime, expect a lot of subscription-based approaches to suffer in the recession. I expect by the end of it the subscription model won’t be so appealing, or will require more creative thought processes to evolve. The key is in not treating the consumer as either stupid (that we don’t realise $5 a month adds up over a year) or lazy (that we won’t do what is necessary to cancel a subscription if we have to), but to take the freemium model seriously: make it really easy to reduce our payment when we need to, and really easy to go back when we’re feeling flush again. Just don’t cripple the quality of the service you have committed to deliver, even if it’s free, by ads beseeching us to pony up or by drawing arbitrary and punitive lines which make the free version more irritating than alluring.
Then just wait out the storm, as we all are, and hopefully you’ll remain useful enough in the free version to stay on our radar when the sun returns.
Another Apple product I’m unlikely to purchase — a smartwatch. I don’t need more screens to look at frankly, but I doff my smartcap to the company for the way they’ve usurped an industry that already existed and then doubled it. This approach has some parallels to the AirPod strategy, which I looked at before: take a market that exists, wait until the technology works, have a couple of shots at it, dominate it and then expand it. Here are the latest numbers, courtesy of Strategy Analytics:
In short, Apple has not only grown its shipments by more than a third, it’s eaten a sizeable portion of the Swiss watch industry’s cheese lunch. As SA’s Steven Waltzer puts it: “Traditional Swiss watch makers, like Swatch and Tissot, are losing the smartwatch wars. Apple Watch is delivering a better product through deeper retail channels and appealing to younger consumers who increasingly want digital wristwear. The window for Swiss watch brands to make an impact in smartwatches is closing. Time may be running out for Swatch, Tissot, TAG Heuer, and others.” The full report can be purchased here.
So let’s put this in a slightly broader perspective. This is a tipping point in the evolution of the watch and a hammer blow to the Swiss watch industry. While the figures don’t quite tally with Strategy Analytics’, those from the Federation of the Swiss Watch Industry show just how effectively Apple has not only created a market for itself but also usurped another’s. For years the Swiss watch industry had been relatively settled, only to see Apple — and knee-jerk competitors like Huawei and Samsung, who have also carved a market for themselves on Apple’s coat-tails — gradually erode its business. Last year shows just how far it has gone:
This is classic Apple in many ways. There were lots of ‘this is make or break for Apple’ type stories in the first year, and the overblown predictions of 2015 and 2016 had to be revised. Indeed, while overall shipments of smartwatches rose in 2016 (from 20.8 million to 21.1 million, according to Strategy Analytics), Apple’s shipments actually shrank, while others’ rose. But these were teething problems: sensors needed to be more accurate, sales channels with telcos needed to be tweaked. By 2017 Apple had fixed most of this, and the trajectory is clear. Probably more importantly, consumers realised that if you were going to put a smartwatch on your wrist, it had to be a classy one. There was no ‘good enough’ syndrome for that bit of prime real estate. And, like the AirPods, the device needs to have a seamless relationship with the parent device.
Lessons learned? I once again wasn’t convinced about the smartwatch. I haven’t bought one, and don’t intend to. But I get it; Apple is currently making much from the stories of how these devices may have saved lives. This isn’t the reason people buy these things, but it’s a good argument to win over the spouse, or conscience, and it does point to how, eventually, medtech and consumer devices will merge beyond the hobbyist and fitness fanatic. And it’s not hard to see how soon enough the earpiece and the wrist will become The Device, and we can ditch the smartphone altogether.
Having recently (finally) bought a pair of big chunky Bluetooth headphones, thinking they were so commonplace I wouldn’t get any weird looks, I now realise that once again I’m at the wrong end of a trend curve. People are staring at me — and not for my rugged visage. I’m the oddity: everyone else is sporting wireless earphones, the Apple AirPods variety (although I suspect quite a few of them are the cheap knockoffs which are indistinguishable in look and a tenth the price.)
Reality bites: what once looked a bit weird — massive headphones — looks weird again, and what looked even weirder — wireless earphones with little sticks dangling out of them — looks cool, and increasingly normal.
Indeed, it’s not only the fastest-growing but also the largest category, having leapfrogged the other two in the space of a year.
That’s particularly interesting because the original AirPods were launched three years ago. It’s taken that long for them to conquer the market, and this is a product that costs anywhere between $140 and $250. Yes, I know people spend silly money on headphones, but that’s a lot of dough for something so small you’re likely to lose it down the back of the couch or running to catch the bus. But it has become, in quite short order, a massive market when you consider how many smartphones there are. In terms of units, it’s a quarter the size of the smartphone market (see below) which, according to IDC, was about 360 million units in Q3 2019. And that market is virtually static, while the ‘smart personal audio devices’ market has nearly tripled.
This is all of Apple’s doing. They created the wireless earphone market singlehandedly. They were slow on headphones, and they never went for the wireless earpieces connected by cord, and their ordinary earphones have never really, in my view, stacked up, but it seems with the second version of the AirPod, and the AirPod Pro, they’ve taken the market they created and dominated it:
You could argue that since they only work with Apple devices the data is skewed, but you could also look at it the other way: the Samsungs, Huaweis and Xiaomis of this world have not risen to the challenge for the Android market, and are lagging woefully. Given that Samsung shipped 78 million devices in Q3, while Huawei shipped 67 million against Apple’s 47 million (IDC numbers again), it’s clear just how much of a market opportunity they’ve missed. Canalys’ numbers, meanwhile, suggest that Apple shipped 18.5 million AirPods that quarter, meaning an AirPod was sold alongside, or nearby, roughly 40% of iPhones. That’s impressive stuff.
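That 40% figure is just the ratio of the two shipment estimates quoted above; a quick sanity check of the arithmetic:

```python
# Attach-rate arithmetic from the Canalys and IDC Q3 2019 estimates
# cited in the text (shipments, not confirmed paired sales).
airpods_shipped = 18.5e6   # Canalys estimate
iphones_shipped = 47e6     # IDC estimate

attach_rate = airpods_shipped / iphones_shipped
print(f"AirPods shipped per iPhone shipped: {attach_rate:.0%}")  # 39%
```

Strictly it comes out a shade under 40%, and shipments aren’t sales, but the order of magnitude is the point.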
While Canalys focus on the ‘smartness’ of these devices — the control they allow, the possibility of sensors capturing health data and serving as payment devices — I think that’s not the point. The likes of Jabra have been trying to sell wireless earphones for swimmers, runners etc for years, and it’s remained a niche market. Apple have instead done what they do best — mastering the technology to make the experience of listening to stuff easy, seamless and, at least now, so cool it’s become de rigueur. The problem was always a simple one: wires. They got rid of the wires, and they made devices that sound good, fit snugly and well (at least with the Pros) and connect relatively painlessly.
That was the problem to solve, and hence the market unleashed.
I recently did two things I hadn’t done before. One was to cancel my membership at a co-working space. The other was to meet, face to face, my virtual assistant of seven years. I belatedly realised the two events were connected: the freelance world, once a parallel universe hidden from view, is fast switching places with the real one, and governments, companies and families should take note.
It’s tempting nowadays to think that technology is redefining work, and not in a good way. AI and robotics are stealing away work from top to bottom, from lawyers to assembly lines. Gig platforms like Uber and Deliveroo are slicing up jobs into ever smaller chunks, making robots of us before the jobs are actually handed over to robots. And technology outsources what can be outsourced.
But I realise this is just one side of things. Who are all these people to whom this work is outsourced? By 2020, the number of self-employed in the U.S. will triple, to 42 million people. Freelancers are the fastest growing labour group in the European Union. Behind these statistics is a story, not just of harried drivers and delivery guys, but of knowledge workers who have chosen their own lifestyle, who have defied the disintermediation of the so-called platform economy. They offer a counter-narrative to the usual technology story of innovative disruption.
Take co-working spaces. On the one hand such spaces have proliferated. I recall looking for a co-working space in Singapore back in 2009 and finding only one, on the campus of one of the universities; when I turned up one morning I found the curtains closed, bodies all over the floor and a distinct odour of unwashed students. Now every other floor in the tower blocks of the business district is a co-working space, though the business looks nothing like it was originally imagined to be. Just don’t expect to find many freelancers there.
Co-working sounded like a freelancer’s dream — a place for those working alone and from home to find space to work, to mix, to find work, to find comradeship. It may have started out like that, but no longer: a survey of 99designs freelancers, for example, found that only 4 percent of them used a co-working space.
I asked Patrick Llewellyn, CEO of 99designs, why this was. One reason, he said, was that most of the designers on his platform are primary caregivers, looking after either their kids or a family member, and so tend to keep less formal hours. As co-working spaces have become substitute offices, they keep office hours, which don’t suit freelancers, most of whom want to get away from the 9-to-5 grind.
I also realised there was little about them that appealed to me; I abandoned mine when I admitted I didn’t enjoy going there. I had returned to working for myself a year or so ago, and had long admired co-working spaces as a vibrant, tasteful, colourful alternative to the dour, dusty and downbeat newsroom I worked in. But co-working spaces were too self-conscious, too brimming with hipness, to be genuinely convivial. And expensive.
So freelancers choose their own path, and it doesn’t fall easily into any fancy new disruptive model.
And then there’s the other thing: my virtual assistant. She’s real, but based in a Philippines town far from the madding crowd. I had always imagined that one day I’d make the pilgrimage there to meet her, since when she started working for me, she didn’t have a passport. But by now she, husband and two kids in tow, was the peripatetic one, carving time for me in her hectic tour of Singapore.
This is the other thing that struck me about what Patrick told me. When I asked about how his freelancers find social fulfilment if they’re working from home, he said that’s the point. By staying home, often looking after family, they’re able to retain those physical connections that those working in an office tend to lose. And being able to support themselves gives them a sense of contribution as well as a creative outlet, which in turns give them confidence.
When Patrick recently went to Novi Sad, the second largest city in Serbia and one of 99designs’ biggest markets, he attended a meet-up of freelancers who clearly knew each other and felt a kinship and warmth you’d be hard pressed to find in a co-working space. Amarit Charoenphan, cofounder of Thailand’s first and largest co-working space Hubba, told me that in the rush to grab market share and protect themselves from competition, many co-working players had lost the human touch, of fostering a community among their members. He sees the future in algorithms, co-working 3.0, where spaces draw on technology to address the emotional benefits of being together.
Freelancers might argue they already have that, using apps to connect to friends and colleagues, while staying or moving to the places they love. My virtual assistant continues to work from her seaside home, bouncing her two-year-old daughter on her knee on conference calls with her main client, a friend of mine based in Texas. She worries about brownouts and the occasional typhoon, but with internet connectivity improving, she’s rarely offline for long.
She’s part of a massive, gradual shift in knowledge work, from the big city to smaller towns and villages. This shows up in the data: less than a quarter of 99designs freelancers live in urban hubs of more than a million people — just as many live in towns or villages of fewer than 20,000. This is true more or less across the board: in the U.S. and Indonesia as few as 14% live in a metropolis. Data from Upwork, a general freelancing site, shows that for a lot of specialised work even those based in remote towns in the developing world can command decent USD rates.
For sure, freelancing isn’t for everyone, and it’s not always easy to get your first client. And platforms that break down basic tasks like delivery and driving will always be a race to the bottom. But for those with skills, or those motivated to acquire them, the freelance economy has grown in the past decade to be a vast continent in the landscape of the future of work, mostly unnoticed by governments and immune to Silicon Valley’s eviscerations. Which reminds me; I have to go, my virtual assistant is reminding me we’re due a virtual brainstorming session.