Bring on winter: We’re out of good ideas

By | November 29, 2022

[Chart] Internal rate of return (IRR mean) on VC investments by year, 1980-2008 (Source: The returns of venture capital investments)

We seem to have reached a point where the existing guard has run out of ideas, and the internet community has run out of patience. I am probably wrong, but I would like to believe that the next few years will see a significant shift in the composition and direction of ‘the technology vanguard’, for want of a better term. I believe this could unleash some useful — really useful, not fake useful — innovation to drive the next generation of the web.

This is why. The most significant period in the past 25 years in tech innovation came during the dot.com winter of the early noughties. (I’ve written a bit about this before, and this blog dates back to that era.) 

The half-decade after the dot.com bubble burst was one of frenetic, largely unfunded, activity that paved the way for what we now call Web 2.0 (the term wasn’t widely used until 2005). But crucially, none of this innovation was conceived in commercial terms. There was a general feeling that the bubble of the mid- to late 1990s had very little to do with utility. Back then web companies found themselves flush with investment simply by adopting an internet-sounding name.

When the bubble burst, attention turned to making the web useful to individuals. Some key technologies, for want of a better word, were developed during this time. Blogging became a thing. It’s hard now for us to understand how significant this was. A blog — a web-based log — was radical in that it didn’t require any knowledge of HTML. It emphasised visually muted but appealing design — and allowed a user with no HTML or graphics knowledge to create something pleasant to the eye. And it also allowed readers to attach their comments and thoughts to the page just by typing in a box. At the time, when a website was considered static, authoritative and designed and populated by a team, this was a huge step. One of the first blogging platforms was Blogger, launched in 1999 by Pyra Labs as an offshoot of its project management software; Pyra was co-founded by, among others, Ev Williams, who later co-founded Twitter. Pyra had no funding and no business model: users were asked for donations.

These innovations — simplicity, writability, free — were very much the tone of the times. Others solved other problems. If lots of people were writing entries to their blogs, how could users keep up, short of visiting each blog and checking whether there were updates? Several individuals built a protocol which would create a ‘feed’ of blog posts, allowing users to ‘subscribe’ to those feeds using a piece of software called a reader. This was called RSS, standing for Really Simple Syndication or Rich Site Summary, depending on which flavour you went for. Once again, this was all done by individuals in their spare time, probably the most notable being Dave Winer and the late and much missed Aaron Swartz. Now content was pulled on request into one place, making the web suddenly more productive and configurable. When some blogs forked into audio affairs, RSS provided as easy and compelling a form of distribution for what became known as podcasts as it had for blogs. RSS demonstrated the advantage of one application — the podcast app, or the blogging app — being able to integrate with other software.
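
To make the mechanism concrete, here is a minimal sketch, with an invented feed and URLs, of what an RSS 2.0 feed looks like and how a reader pulls the post titles and links out of it, using only Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 feed: one <channel> describing the blog, one <item> per post.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>A weblog about bee-keeping</description>
    <item>
      <title>First post</title>
      <link>https://example.com/first-post</link>
      <pubDate>Mon, 05 May 2003 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

# The 'reader' side: pull out the channel title and each item's title and link.
channel = ET.fromstring(FEED).find("channel")
print("Subscribed to:", channel.findtext("title"))
for item in channel.findall("item"):
    print("-", item.findtext("title"), item.findtext("link"))
```

The point is how little is going on: a publisher exposes one small XML file, and any reader anywhere can poll it and assemble the updates, with no account or permission required.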

Others tackled the quality of information itself. If everyone could access the internet, why shouldn’t they also be able to access the shared wisdom of everyone on it? The idea of a webpage that could be edited by anyone without even registering seemed absurd from a top-down view point, but appeared even more absurd when the goal was to build a website that aimed to be an encyclopaedia. Wikipedia, in the end, turned out to work extremely well, and still does, because of, rather than in spite of, those absurdities. Once again, the technology was developed, not for monetary gain, but because someone wanted to make something useful. 

I could go on. Del.icio.us was built in 2003, a simple social bookmark-sharing service that allowed users to add tags to their entries. There was no rule about what tags you could use and what you couldn’t, nor about how many tags you added. This itself was a huge leap — del.icio.us was the first web service that took this approach, and while it may seem silly now, back then it was a major departure from the ‘rule-based’ world of hierarchical labelling and categorisation. 

If all these innovations seem underwhelming it’s because they form the bedrock of our digital world now. Facebook, Twitter and nearly every social media platform owes its design, functionality and distribution to these early 2000s technologies. The key difference is that the innovations of the first half of the 2000s were rarely funded by VCs. Indeed at that time VC was in the doldrums (see chart). Most of the tools were written on the fly, open-sourced, and discussed in pragmatic terms with only a nod to any ideological belief (usually one built around ease of use and lack of paywall) and none to the idea of any big pay-day.

I have to say as a journalist this was a really interesting time, and I believe it was central to bringing millions of people online. Blogging, far from a nerdy affair, caught the imagination of many, providing an easy way to log and share one’s interests, whether it be bee-keeping or hiking up volcanoes. Suddenly tech became useful, helpful, simple, embracing, accessible. The most interesting stuff was built by those who escaped the bust with enough money not to care, or none at all. Both groups spent a lot of time asking basic questions of the net and tech more broadly.

I think we’re in a similar situation now with the web, whatever you want to call it — not necessarily because the money may dry up, but because the whole thing has run out of steam. What are we doing now, exactly? What can we get excited about online? It seems to me we’re in serious trouble if we’re relying on Mark Zuckerberg to come up with a new idea and pivot Facebook in that direction. Success is as likely as Google’s forlorn attempts to reinvent itself as something other than an ad platform. These are both advertising platforms trying to find compelling reasons for users to use their services. (81% of Alphabet’s revenue in Q4 FY 2021 was from ads. So far in 2022 97.6% of Meta’s revenue has come from ads.) 

The chances are slim that a successful company which invented an industry can use its money to invent a new one. Apple, I guess, is the only real example of that, and even then each new industry they create or permeate depends hugely on the success of their existing ones. There’s little point in selling services and apps if you’re not also selling the hardware they’re being distributed on.

So where might the new ideas come from? For now we’re still too interested in the technologies, not the use and users of them. All the technologies of the early noughties — blogging, RSS, tagging, wikis — arose out of frustrations with what was on offer. None had a business model or an exit strategy in mind. At present I don’t see anything really similar happening. None of us seems to be asking the question: what are we frustrated with now that could be solved by better technology? Or perhaps more specifically — how could existing technologies be built upon or rethought to make them more useful to as many users as possible?

These are not necessarily simple questions. Partly the net is the victim of its own success. When I was writing about technology in the noughties, my main concern was to demystify technology, to make it accessible and less frightening. Now almost the opposite is required: the net has morphed from a fairly egalitarian, self-policed environment to one that is heavily controlled and directed towards extracting as much from the user as possible, be it directly or indirectly. That monetisation is largely built on the shoulders of the pioneers of the early 2000s. And so it’s unsurprising that the only real innovation Big Tech is interested in now is building extra, new business models and industries atop its own dominance.

So, instead of looking for how technology can be tweaked for greater individual utility and satisfaction, we’re just looking at what technologies can be harnessed to replicate the conjuring trick that Google, Facebook and others managed before: to convert a function (search, school yearbooks) into something that can be monetised. So we see lots of interfaces for the same service — Google Glass, Meta’s VR Metaverse. This is the old-fashioned hammer looking for a nail.

So is there anything else? A few folk would point to blockchain as a valid and successful technology (through Bitcoin) which really was built to solve a problem we have — transferring value without having to submit to an intermediary. But that is both a blessing and a curse: both the ICO era and the more recent DeFi era have shown that when there is a financial use case for a technology it will likely be diverted into opportunities for plunder. Those who understand it better than most will develop products that are essentially grifts, making money at the expense of those who understand them less. That’s not to say there aren’t promising uses of blockchain, and of the DeFi infrastructure built atop it, but they need to be developed by people not all looking for a big and quick payday, rug pull or otherwise. This crypto winter may provide some breathing space for them to do so. (Please see my declaration of interest at the bottom of this post.)

And so here’s the rub: Silicon Valley is in the way here, just as its absence during 2000-2004 really helped some good ideas thrive and take flight, even if many of them ultimately ended up being bought by Big Tech and lost in a cupboard somewhere. We learned back then that a good idea doesn’t need a lot of funding; it needs a handful of smart people uninterested in an exit. I do see a few of these people, including in DeFi. At some point it may be healthier and more productive for them to lower their sights — to stop thinking that they need to build a new financial system. A new financing system, perhaps, but one that just solves some of the problems ordinary users face, without necessarily changing the world. RSS didn’t save the world, but there’s a lot we wouldn’t have now if it hadn’t existed.

Declaration of interests: I consult for a PR agency, YAP Global, which focuses on crypto, web3 and DeFi clients, and I have in the past consulted directly or indirectly for Facebook, Google, and some other tech companies on issues related to this subject. I hold some crypto assets. Thanks to Gina Chua for ideas. 

Coming to terms with terms (Digitisation)

By | June 19, 2022

There’s lots of grey when it comes to three terms that as a journalist I used rarely because they were such turn-offs to readers and editors alike. But companies like them and they’re useful, up to a point, to help us understand this process we’re going through.

The terms are digitisation, digitalisation and digital transformation.

Yes, they’re horrible.

One reason they’re horrible is they aren’t exciting. No way would an editor of mine have OKed a story with those words anywhere in it.

Another reason is they’re very similar, in both sound, and definition. Nobody seems to agree on what they are, which is usually a good sign you’re veering into marketing/consultant-speak. When a term is not one that people use in public confident that everyone in the room understands it and agrees with everyone else what it means, you

a) shouldn’t use it and

b) should assume it’s dreamt up by some fella to sell more widgets (or consulting time).

As a consultant I’m offended by that, so I’m going to take a stab at definitions. It’s not that the concept is hard; it’s that the terms, I feel, aren’t particularly helpful.

So here’s my stab at defining them, in the hope that they actually demonstrate something useful, which is presumably why we have them. (And yes, arguably we should ditch them.)

The world is still largely analog

The natural world is analog. In the words of Peter Kinget, Department Chair of Electrical Engineering at Columbia University:

The world we live in is analog. We are analog. Any inputs we can perceive are analog. For example, sounds are analog signals; they are continuous time and continuous value. Our ears listen to analog signals and we speak with analog signals. Images, pictures, and video are all analog at the source and our eyes are analog sensors. Measuring our heartbeat, tracking our activity, all requires processing analog sensor information.

For most of us this is an ongoing process. We are forever asking our devices to convert the analog world to digital. But let’s keep it simple: suppose we have a cupboard full of old photographs. Or slides. Or negatives. They are, obviously, analog. We can still look at them, hang them on the wall, put them in an album, print off the negatives, show the slides on a projector, but all of those are analog processes. The data has not been changed.

So we are still keeping both the data — the photos, slides, negatives — and the process analog.

You may be quite happy with that arrangement (I am — I can never throw out analog photos; they seem to be quite a durable medium) but the pressure is on to digitise. So we go about converting those photos to digital by scanning them. This is an analog-to-digital process.

This all might seem rather basic, and it is. But it gets more complicated when we talk about more complicated digitisation. When a library digitises its books, it is obviously using a similar process. But when it digitises its catalog, and not its books, things get more complicated, as we shall see. How useful are the terms when an enterprise only digitises half of the process?


And there’s another problem. As I mentioned, we’re living in an analog world. And so a lot of our supposedly digital tools are actually largely performing the same task as the scanner in our photos example. But in real time, all the time. Take our cellphone, for example. More of the chips in there are actually ‘analog chips’ than digital ones. An analog chip might regulate the power supplied to other chips, handle wideband radio signals, or read sensors. Or it may combine with digital chips to convert analog to digital (a temperature sensor, say) or from digital to analog — for making sound, for example.

According to Cricket Semiconductor, there are more than twice as many analog chips as digital ones in an iPhone. (Cricket itself might no longer be with us, and the iPhone model in question is a very old one, so this proportion might be out of date.)

These chips are forever converting real world data into digital data: where you are, how you’re holding the device, where you’re touching the screen, what you’re watching, listening to, taking photos of, as well as handling some of the actual communication between your device and the outside world. Digitisation, in other words, is not a single step but a continuous process. This bit, I believe, is why we run into problems with the next two terms.
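
As a rough illustration of what those converter chips spend their time doing (a toy sketch in Python, not a description of any particular chip), here is analog-to-digital conversion in miniature: sample a continuous signal at regular intervals and quantise each sample to an n-bit number.

```python
import math

def sample_and_quantise(signal, duration_s=0.01, sample_rate_hz=8000, bits=8):
    """Sample a continuous-time signal and quantise each sample to an n-bit integer code."""
    levels = 2 ** bits
    samples = []
    for i in range(int(duration_s * sample_rate_hz)):
        t = i / sample_rate_hz
        x = signal(t)                                  # continuous value, nominally in [-1.0, 1.0]
        x = max(-1.0, min(1.0, x))                     # clip to the converter's input range
        samples.append(round((x + 1.0) / 2.0 * (levels - 1)))  # map to 0..levels-1
    return samples

# A 440 Hz tone standing in for a real-world analog signal (a voice, say).
tone = lambda t: math.sin(2 * math.pi * 440 * t)
print(sample_and_quantise(tone)[:10])
```

Everything after this step is just numbers; what you do with the numbers is where the next two terms come in.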

If it’s digitised, it can be digitalised

Yes, an ugly aphorism, but the idea is a simple one: unless you’ve gone through the digitisation stage outlined above, you can’t start to reap the benefits of it. That reaping of benefits is what we call digitalisation. (Not everyone uses the term that way, but let’s leave them out of it for the moment.)

Going back to the pile of photos. You’ve scanned them into the computer and they’re all now bits — noughts and ones. And you can look at them on your computer, or phone, or whatever you used to scan them. But they’re not digitalised, as it were. Once again, this involves both the data and the process.

  • First you would be renaming the photo files to something useful — usually a date, perhaps with some idea of who is in the photo.
  • Then you might be adding some metadata — data about the data (in this case a photo).
    • You might do this manually — adding details to the file itself (i.e. not the filename, but the fields that accompany the JPEG format, or whatever format you’ve chosen to store the file in). These could include location (geolocational data, usually in the form of coordinates), type of camera, date the photo was taken, subject matter. Anything you like.
    • Some of this process might be automated — for example, dumping the photos in Apple Photos, and letting it scan the photos for faces, and then grouping those files together when it recognises your Aunt Maude is in them. (In more complex examples, the digital images can be explored using something called computer vision, which is essentially training a computer to see a digital image and work out what it contains — whether it’s a dog, or a traffic light etc.)

Now this is, in my view, part of digitalisation, not digitisation, although you can see how this might be argued either way. To me you’re now already into the process of adding value to digital data by adding metadata to the photos, which is to me the key element of digitalisation. We’re adding data to the data so it can use, and be used by, other data and processes (what we call applications). We can now search for photos of Aunt Maude and find her without having to remember when we last saw her, and so which box of photos or albums to hunt through, or, if the photos were digitised but still lacking metadata, trawling through hundreds of thumbnails until we spotted her glistening red beehive.
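
To make the photo example concrete, here is a minimal, hypothetical sketch of that ‘adding value’ step: descriptive metadata for each scanned photo is written to a JSON sidecar file and then searched. (Real tools would write EXIF or IPTC fields inside the image file itself; the filenames and fields below are invented.)

```python
import json
from pathlib import Path

FOLDER = Path("scans")          # hypothetical folder of scanned photos
FOLDER.mkdir(exist_ok=True)

def add_metadata(photo_name, **fields):
    """Write a JSON 'sidecar' file next to a scanned photo, holding data about the data."""
    sidecar = (FOLDER / photo_name).with_suffix(".json")
    sidecar.write_text(json.dumps({"file": photo_name, **fields}, indent=2))

def find(**wanted):
    """Return photos whose sidecar metadata matches every requested field."""
    hits = []
    for sidecar in FOLDER.glob("*.json"):
        data = json.loads(sidecar.read_text())
        if all(data.get(key) == value for key, value in wanted.items()):
            hits.append(data["file"])
    return hits

add_metadata("1987-08-brighton.jpg", people=["Aunt Maude"], place="Brighton", year=1987)
add_metadata("1992-03-volcano.jpg", people=[], place="Etna", year=1992)
print(find(place="Brighton"))   # -> ['1987-08-brighton.jpg']
```

The scan was digitisation; this small layer of structure on top is what lets other applications start doing something useful with it.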

Going back to the iPhone, this process of digitalisation is tightly woven into the process of digitisation. When our phone is busy converting real world, analog, information and signals into digits, that is just a conversion process. When that process is finished (which of course it never is, but I’m referring to individual sessions of conversion) then the digitalisation — the digital dividend — kicks in. For the iPhone that is seamless and largely expected — after all, that’s the point of the device, a pocket full of real-world tools and applications — but the digitisation is still a process that has to happen. It’s just so quick and seamless we don’t realise that it’s two processes: digitisation and digitalisation. The capturing of real world data and converting it to digits, and then adding value to those digits by turning them into usable data. (The computer vision process mentioned above could also be compressed — photos and video are shot and analysed in almost real time, because they may well need to be. An automated or connected car needs to know whether it’s about to hit a dog in the road.)

Digitalisation is a multi-step process

Now digitalisation doesn’t stop there. When data is digital it can now start talking to other digital data. Other applications can understand that data, combine it with other data, and create new data, and thereby add value. In our photos example, the photos — or usually the underlying metadata — can be connected with other applications, such as search engines, or databases, or virtual reality games.

In the case of the phone, all that real world data about heat, position, moisture, sound etc can be used by dozens of applications on your phone. Without that real world data the phone is surprisingly dumb. (And even wifi and GPS signals require some amount of analog to digital conversion.)


Now some would argue this is also ‘digital transformation’ because, when it comes to business, processes are being transformed by the digitisation dividend. By converting analog to digital and using that data, it’s argued, digitalisation is synonymous with digital transformation. I don’t buy that: it strikes me as lazy shorthand that doesn’t look properly at the stages involved:

  • Digitisation has converted atoms to bits;
  • Digitalisation has converted those bits to data that can be interpreted and used by the rest of the digital world (within the device, the house, the company, the world).

And yes, just as digitisation was also both data and process, part of that is also the process of making use of these digital assets. But it’s not ‘transformative’, at least in the sense I understand it.

Take the library: they digitised the catalog. Great.

But no biggie. People still have to go find the books on the shelves; they are just able to confirm a book’s existence more readily — and in theory remotely.

Then the librarians converted the entire library to digits, scanning every book.

Better; now I can read the book on my iPad, in theory, and I don’t need to go to the library. Good. But. I would argue that’s digitalisation more than digital transformation. They may have transformed their own procedures, but not yet undergone digital transformation.

Let’s see why.

Digital transformation is, or should be, when processes and businesses are transformed

So let’s start with the library this time. It’s not going to take long before people realise that you don’t actually need a physical library (at least for storing books).

Or librarians.

Or even digital books. Why not just let people search the text and metadata of books digitally and put together whatever collection of reading, or notes, or insight they want?

Why not convert the librarians into curators, who develop systems to connect disparate subjects and disciplines together, training algorithms to think better than we humans about the links between subjects? Or to mine data from readers to better understand and recommend more books to them, or figure out how to encourage people to read more?

Whole new services could emerge from what we used to think of as a staid environment wedded to slumber and the worship of dead trees. (And, yes, we could use the libraries for something else: poetry, education, talks, a post-prandial nap, advice.)


This is what I think is meant by digital transformation. It’s a long drawn-out process that we’re only beginning to touch the edges of. It embraces things like automation, Internet of Things, AI, biomimicry (because it’s about converting the real world into something we can use, and better understand, and biomimicry is exactly that).

Digital transformation is taking data that can now be connected to any other kind of (digital) data and building new ideas, business models, industries, disciplines and so on that weren’t available or apparent to us before, so it makes sense that the real value is going to lie in places we haven’t dreamed of yet.

That’s the distinction I make between digitalisation and digital transformation. Digitalisation is the process of adding value to digitised data, improving business processes, making them more efficient. Digital transformation is the process of transforming how that data is used in innovative ways that change industries entirely.

In short:

  • You can’t digitalise any process until the data it spews out has been digitised.
  • You can’t transform a process until you’ve digitalised it — applying digital technologies to the data you’ve digitised.
  • When you transform a process you change it fundamentally, recognising and realising the opportunities digitalisation could unleash.

So, a final example to clarify what I think are the differences.

Let’s take a heat sensor (thermometer) attached to a machine.

The sensor readout itself could be digital, but if you’re writing down the readings in a book the record becomes analog. It needs to be digitised — entered into a tablet, and then into a spreadsheet, say. Or the data could be drawn straight from the sensor itself. That is, arguably, digitisation: it’s still part of the process of converting analog data into digital. Unless you’ve got to the point where all your key data are digits, you’re not digitised.

Once the data is there, you can digitalise both it and the processes. The first step is converting it to a form that is intelligible to the rest of your processes. The data has now been digitalised. And then, the next step of digitalisation is to digitalise the process — where the sensor is read by a computer and an automated warning light goes off to signal when there’s a problem.
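
A minimal sketch of those two digitalisation steps, with an invented sensor and threshold: each reading is captured as structured data other processes can use, and an automated check replaces a human watching a dial.

```python
import csv
import random
from datetime import datetime

WARN_ABOVE_C = 85.0            # invented threshold for this machine

def read_sensor():
    """Stand-in for a reading drawn straight from the heat sensor (the digitisation step)."""
    return 60.0 + random.random() * 40.0   # degrees Celsius

def log_and_check(path="temperature_log.csv", samples=10):
    """Digitalisation: each reading becomes structured, timestamped data,
    and an automated check raises the warning instead of a human."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            reading = read_sensor()
            writer.writerow([datetime.now().isoformat(), f"{reading:.1f}"])
            if reading > WARN_ABOVE_C:
                print(f"WARNING: {reading:.1f} C exceeds {WARN_ABOVE_C} C")

log_and_check()
```

Nothing about the business has changed yet; that, as argued below, is where transformation starts.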

Digital transformation occurs when this process is overhauled so that the business itself is transformed. It might start with transforming the process — robots replacing workers, say — but that is just a step in a much longer process in which you don’t just replace one sort of tool with another, but actually change the way the widget is made, or sold, or change the widget itself. In the case of the sensor, that would mean automating not just the monitoring and warning, but also the repair and replacement work, or using AI to learn how to improve the lifetime of the machinery, or to work out the optimal process for replacement in conjunction with other data about other machines, prices, time of day and so on. The next step would be redesigning the machine itself based on the lessons drawn from the digitalisation. It could be to transform the business entirely, by using the data to improve the business model (XaaS), using different processes, or to get out of the business altogether.

Digital transformation is a journey (much as I hate the word, which business has rendered meaningless or obfuscatory, depending on the context), not a step.


I am well aware that I am not using the terms as some use them. And I am happy to be corrected by those who can show me I’m misunderstanding the underlying processes. But hopefully this will prompt a discussion, or at worst some brickbats.

Another ‘Web 2.0 isn’t what you think it is’ post

By | February 2, 2022
Border control, by Mussi Katz, Flickr

I really don’t want to add to the web3 debate (not least because I have skin in the game, advising a PR agency that works with DeFi firms), except to make some observations about its predecessor.

I feel on safer ground here because I was there; I know what I saw: Web 2.0 wasn’t what most people think it is, or was. It means slightly different things to different people. But here in essence is how it evolved. I don’t claim to have intimate knowledge of how it went down, but I did have enough of a view, as a WSJ technology columnist, to know some of the chronology and how some of those involved viewed it.

The key principles of Web 2.0 (though never stated) were: share; make things easy to use; encourage democratisation (of information, of participation, of feedback loops).

The key elements that made this possible were:

  • tagging — make things easier to find and share (del.icio.us, for example, or Flickr);
  • RSS — build protocols that make it easy for information to come to you (think blogs, but also podcasts, early Twitter; also torrents, P2P);
  • blogging tools — WordPress, Blogger and Typepad, for example — that made it easy to create content, and easy for people to share and comment on it;
  • wikis — make it easy for people to contribute knowledge, irrespective of background (Wikipedia being the best known).

Some would disagree with this, but that is the problem with calling it Web 2.0. People weren’t sitting around saying ‘let’s build Web 2.0!’ They were just building stuff that was good; but gradually a sort of consensus emerged that welcomed tools and ideas that felt in line with the zeitgeist. And that zeitgeist, especially after the bursting of the dot.com bubble in 2001, was: let’s not get hung up on producing dot.com companies or showing a bit of leg to VCs; let’s instead share what we can and figure it out as we go along. It’s noticeable that none of the companies or organisations I mention above made a ton of money. In fact the likes of RSS and Wikipedia were built on open standards and open-source software that remain so to this day.

Of course there were a lot of other things going on at the time, which all seemed important somehow, but which were not directly associated with Web 2.0: Hardware, like mobile phones, Palm Pilots and Treos; iPods. Communication standards: GPRS, 3G, Bluetooth, WiFi.

The important thing here is that we think of Web 2.0 also as Google, Apple, Amazon etc. But for most people they weren’t. Google was to some extent part of things because they built an awesome search engine that, importantly, had a very clean interface, didn’t cost anything, and worked far better than anything that had come before it. But that was generally regarded as a piece of plumbing, and Web 2.0 wasn’t interested in plumbing. The web was already there. What we wanted to do was to put information on top of it, to make that available to as many people as possible, and where possible to not demand payment for it. (And no, we didn’t call it user-generated content.)

Yes, they were idealistic times. Not many people at the heart of this movement were building things with monetisation in mind. Joshua Schachter built del.icio.us in 2003 while an analyst at Morgan Stanley; it was one of the first services, if not the first, to allow users to add whatever tags they wanted — in this case to bookmarks. It’s hard to express just how transformational this felt: we were allowed to add words to something online that helped us find that bookmark again, irrespective of any formal system. Del.icio.us allowed us to do something else, too: to share those bookmarks with others, and to search other people’s bookmarks via tags.
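
The mechanics were almost embarrassingly simple, which was rather the point. A hypothetical sketch of a del.icio.us-style store (names and URLs invented): free-form tags, no hierarchy, and everyone's bookmarks searchable by tag.

```python
from collections import defaultdict

class Bookmarks:
    """A tiny del.icio.us-style store: free-form tags, no hierarchy, no rules."""

    def __init__(self):
        self.by_tag = defaultdict(set)          # tag -> set of (user, url)

    def add(self, user, url, tags):
        for tag in tags:                        # any tag, any number of tags
            self.by_tag[tag].add((user, url))

    def search(self, tag):
        """Anyone can search everyone's bookmarks by tag."""
        return sorted(self.by_tag.get(tag, set()))

store = Bookmarks()
store.add("joshua", "https://example.com/folksonomy", ["tagging", "web2.0"])
store.add("jeremy", "https://example.com/rss-spec", ["rss", "web2.0"])
print(store.search("web2.0"))
```

The radical part wasn't the code; it was trusting users to supply the structure themselves.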

It sounds underwhelming now, but back then it was something else. It democratised online services in a way that hadn’t been done before, by building a system that was implicitly recognising the value of individuals’ contributions. It wasn’t trying to be hierarchical — like, say, Yahoo!, which forced every site into some form of Dewey Decimal-like classification system. Tagging was inherently democratic and, as important, trusting of users’ ability and responsibility to help others. (For the best analysis of tagging, indeed anything about this period, read the legendary David Weinberger.)

Wikipedia had a similar mentality, built on the absurd (at the time) premise that if you give people the right tools, they can organise themselves into an institution that creates and curates content on a global scale. For the time (and still now, if you think about it) this was an outrageous, counterintuitive idea that seemed doomed to fail. But it didn’t; the one that did fail was an earlier model that relied on academics to contribute to those areas in which they were specialists. Only when the doors were flung open, and anybody could chip in, was something created.

This is the lesson of Web 2.0 writ large. For me Really Simple Syndication (RSS) is the poster child of Web 2.0: a standard, carved messily out of competing versions, through which any site creating content could automatically assemble and deliver that content to any device that wanted it. No passwords, no signups, no abuse of privacy. This was huge: it suddenly allowed everyone — whether you were The New York Times or Joe Bloggs’s Blog on Bogs — to deliver content to interested parties in the same format, to be read in the same application (an RSS reader). Podcasts, and, although I can’t prove this, Twitter also used RSS to deliver content. Indeed, the whole way we ‘subscribe’ to things — think of following a group on Facebook or a person or list on Twitter — is rooted in the principles of RSS. In some ways RSS was a victim of its own success: it was a powerful delivery mechanism, but it had privacy baked in — users were never required to submit their personal information, saving them spam, for example. So the ideas of RSS were adopted by Social Media, but without the bits that put the controls in the hands of the user.
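
The reader side was just as simple: one piece of software could pull any number of publishers into a single chronological list, with no accounts or passwords. A sketch using the third-party feedparser library, with placeholder feed URLs:

```python
import time
import feedparser   # third-party library: pip install feedparser

SUBSCRIPTIONS = [   # placeholder feed URLs
    "https://example.com/nytimes/rss.xml",
    "https://example.com/joe-bloggs-blog-on-bogs/rss.xml",
]

river = []
for url in SUBSCRIPTIONS:
    feed = feedparser.parse(url)                 # the same parser works for every publisher
    source = feed.feed.get("title", url)
    for entry in feed.entries:
        when = entry.get("published_parsed") or time.gmtime(0)
        river.append((when, source, entry.get("title", "(untitled)")))

# One chronological reading list, newest first, regardless of who published what.
for when, source, title in sorted(river, reverse=True):
    print(time.strftime("%Y-%m-%d", when), source, "-", title)
```

Note what the sketch never asks for: who the reader is. That was the bit social media left out.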

And then there were the blogging tools themselves. Yes, blogging predated the arrival of services like Blogger, Typepad and WordPress. But those tools, appearing from late 1999, helped make it truly democratic, requiring no HTML or FTP skills on the part of the user, and positively encouraging the free exchange of ideas and comments. There was a subsequent explosion in blogging, which in some ways was more instrumental in ending the news media’s business model than Google and Facebook were. We weren’t doing it because we were nudged by some algorithm. We wrote and interacted because we wanted to have conversations, a chance for people to share ideas, in text or voice.

Searches for Web 2.0 (blue) vs Social media (red), 2004-2022, courtesy Google Trends

Web 2.0, then, started in 1999 with the first blogs, and was in steady decline by 2007, when VC money was pushing the likes of Social Media to scale in a way Web 2.0 hadn’t. Twitter was the first to the table, globally, as more ‘friendship’ oriented services like Friendster and MySpace, and later Facebook, slugged it out. By 2010 Google was dominant — it bought Blogger, boxed RSS into a corner with its Google Reader (which it subsequently canned), while Yahoo bought del.icio.us. And of course, there was the iPhone, and then the iPad, and by then the idea of mashing tools together to build a democratic (and largely desktop) universe was quietly forgotten as the content became the lure and we became the product.

Social Media is an industry; Web 2.0 was a movement of sorts. The writing was on the wall in 2005 when technical publisher Tim O’Reilly popularised the term, which his company then trademarked. (That didn’t end well for him.)

So what are the lessons to draw from this for web3? Well, one is to see there are two distinct historical threads: ‘Web 2.0’ and ‘Social Media’. To many of those involved there was a distinct shift from one to the other, and I’m not sure it was one many welcomed (hard though that was to see at the time). So if web3 is a departure, it’s worth thinking about what it’s a departure from. The other lesson: don’t get too hung up on defining yourself against something. The greatest parts of ‘Web 2.0’ were just things that people came up with that were cool, were welcome, and gave rise to other great ideas. Yes, there were principles, but it’s not as if they were written in stone, or even defined as such. And while there was some discussion of protocols it was really about what material and functionality could be built on the existing infrastructure, which was still a largely static one (the first phones to have certified WiFi didn’t appear until 2004).

I do think there are huge opportunities to think differently about the internet, and I do think there’s a decent discussion going on about web3 that points to something fundamental changing. My only advice would be not to get too hung up about sealing the border between web3 and Web 2.0, because the border is not what most of us think it is — or even a border. And for those of us natives of Web 2.0, I think it’s worth not feeling offended or ok boomer-ed, and instead following the discussions and development around web3. We may have more in common than you think.

What lies ahead

By | January 3, 2022
Radar, by Glynne Hather, Flickr

What lies ahead?

That’s a dangerous game, in this era, as I suggested in 2020 (The Changes A-coming). But with an eye on the tech future, I thought I’d rummage around in the basement of gadgets, and content, and the mechanisms both to deliver content to gadgets and to let the gadgets themselves become something more, to see what might be around the corner. This is a longish one, for which I apologise.

On the one hand, devices during COVID have been a miserable failure. The Apple Watch was supposed to be our path to a self-measuring future, but that has been something of a failure (Doctors say it’s time Apple Watch ticked all the health boxes). A lot of the blame seems to be placed at the feet of doctors, which strikes me as a cop out. If the devices’ success required changing the procedures, and increasing the cognitive workload, of one of the most time-poor and under-resourced professions on the planet, then I think our breathless excitement at this new era of wellness and health might have been, well, a little too breathless.

And don’t get me started on how we have failed to use our ubiquitous mobile devices to better manage COVID movements, contact tracing, travel and entry documentation and so on. Well, actually, do, as I have already written about it (The Future of Pandemic Preparedness: Digital Health Passports, for example). My general feeling is that the people who could have made a difference tended to focus on the wrong things (well, the right things, but at the wrong time: decentralised identity, experimental technology etc) when the situation was desperately crying out for something practical that could be realistically deployed within a useful timeframe. I compared the challenge a year ago with that of attempts to deploy an e-passport here. Both pieces were commissioned by Roche Diagnostics but the content was not edited or directed in a way I felt changed their intent, so I’m happy to keep my name atop them.

So yes, I don’t think the COVID era will be one we closely associate with leveraging personal technology for the benefit of the greater good. After all, these devices have been entirely designed to deliver a version of privacy that is at odds with one that governing institutions might get away with. Our devices are privacy conscious only insofar as they protect commercial and financial interactions. Apart from that they are specifically designed to ooze other forms of data, from location to apps used, search terms, etc. In fact, for Alphabet, Amazon and Meta, the devices in effect operate on a subsidy model, where the operating system, the device itself (think Kindle, Android devices) and the apps are given away at or below cost because of the high value of the user data sucked out of the device.

For governments this would be a boon, if the user were as indifferent to the implications of government surveillance as she is to commercial surveillance. But COVID has shown us there is no social contract between government and citizen in most democratic states that would allow such information to be (directly) drawn from the device. So there has, so far, been no alignment of interests between companies, individuals and governments that would allow our most prized possession to help us get out of this pandemic without lots of gnashing of teeth and wailing.

[Chart] Mobile subscription data (Source: Ericsson, downloaded 2022-01-03)

COVID, though, has had an impact on our relationship with these devices. Mobile subscriptions – smartphone and feature phone – grew very little year-on-year in 2020, and in many regions of the world actually shrank, according to Ericsson data (see above). It’s not surprising, I guess: for many of us, the relationship with our phone has loosened a little under lockdown. Or has it? On one hand, we’ve binge-watched ourselves to numbness, with Netflix, Apple, Amazon and others desperately trying to create enough content to keep us hooked. But on the other hand, a lot of the things that usually make a mobile device so compelling have lost their lustre, now that most of the time we’re stuck at home.

And so has the evolution of the mobile device stalled? Well, yes, in a way. Accelerometers, biosensors, cameras, gimbals — all of the machinery that makes our device so smart — are relatively useless when we’re in one place. This is perhaps what makes Mark Zuckerberg’s dream of a second life in his metaverse resonate more than it usually would: yes, we are beginning to hate our four walls, and the painful checks for and restrictions on travel. So even a legless torso and cartoon visage interacting with another might seem, well, worth a punt.

But I think that vision is dystopian, cynical and not where the interesting stuff is happening.

COVID has been about content, but post-COVID will be about devices. COVID was about maximising the consumption of immobile content — games, movies etc — but when we do break these chains, even if it’s only a pandemic-to-endemic state change, there is likely to be a relatively fast transformation of what we can expect our device to do.

The Eye of the Apple

Hear me out.

Predicting Apple’s next move is a mug’s game, but I’ll give it a shot. Part of my research work has involved looking at submarine internet cables, and there’s always one dog that doesn’t bark: Apple. Meta (declaration of interest: I’ve done some research work for them, indirectly), Google (commercial relationship in another area), Microsoft (ditto) and Amazon are all very busy in this space, essentially moving into owning as much of the delivery infrastructure as possible, matching the proportion of traffic they’re responsible for. Infrastructure here means cables, data centers, hardware. It’s a remarkable shift in these industries that has gone little-noticed.

But even less noticed has been the missing player: Apple. As far as I can work out, Apple has taken no significant stake in any submarine cable venture or player; neither has it built a vast network of data centers. It does have them, but not in the same number as, say, Google: Google has 23 publicly acknowledged data center locations; Apple has fewer than half that, and, indeed, stores increasing amounts of its data on Google’s servers (Apple Reportedly Storing Over 8 Million Terabytes of iCloud Data on Google Servers – MacRumors).

But this is undoubtedly changing as Apple gets more into the content business. Apple has something called Apple Edge Cache, where it offers to supply hardware to internet service provider (ISP) partners “to deliver Apple content directly to customers”. I won’t pretend to understand the exact arrangement here, but I think it’s fair to assume it’s a way for Apple to get its content closer to users, reducing latency, and a way for ISPs to demonstrate the quality of their service while probably reducing some of their international bills. Netflix and others offer something similar (Netflix’s Open Connect and Akamai’s network partnerships, for example).

But I don’t believe this is where Apple sees a road to the future. Indeed, though I do love some of the content on Apple TV+, it feels like a very traditional, backward-looking concept — all chrome logos and shadow, slick but somehow dated. I think the smart money is on converting the mobile phone, the watch, the iPad, the VR headset, into a (virtually) single device that interacts with its surroundings much more effectively, and in the process not only accesses the network but, in a certain way, becomes the network.

The Network is the Device

There are several elements to this. We saw that Apple was deeply involved in the development of 5G — the first time, I believe, a device manufacturer had that much of a role, though I could be mistaken — and early last year it posted job ads seeking research engineers for current and next generation networks (6G isn’t expected to be deployed for another decade, but these are generations of technologies, and like generations they’re more of a period than a date). But I think this time it’s not going to be about 6G so much as about AppleG. More recently the Bloomberg reporter who covered those ads, Mark Gurman, spotted another hiring spree, this time for wireless chips (Apple Builds New Team to Bring Wireless Chips In-House).

I would agree, however, with Jonathan Goldberg at Digits to Dollars, who before Christmas picked up the Bloomberg story and ran further with it (What Are They Building in There? | Digits to Dollars). This is about vertical integration, he wrote, but it’s also about a lot more:

Beyond that, Apple has some very ambitious plans for communications and they are looking to drive the industry in their direction. They have set themselves some very interesting objectives, or at least left us with some big mysteries. What are they building in there?

I was intrigued enough to try to follow up some of those clues, exposing my lack of technical knowledge on the way. But thinking about it not from the point of view of the stack (the inner workings, the supply chain, the network connectivity) but from the user perspective, I think there are a few clues that give us enough to make some reasonable guesses:

Apple has filed patents that focus on communication between devices that doesn’t involve connecting to networks. Most prominent are patents related to Apple’s project to build semi- and fully autonomous vehicles. Vehicle-to-everything (V2X) is a core part of the future of autonomous and semi-autonomous vehicles, because it makes no sense for urgent, data-heavy, ultrafast communication between devices (think car and streetlight, or car and bicycle) to go onto a network and then circle back to the other device. (Apple Reveals their work on Project Titan’s Vehicle-to-Vehicle Communications System for future Autonomous Vehicles – Patently Apple)

This is sometimes called sidelinking, and is already part of 5G standards. But for Apple it makes a lot of sense, if you move away from the notion of networks as synonymous with beacons, towers, backhaul, data centers and submarine cables. If your devices are everywhere, be they watch, glasses, computer, car, whatever, then doesn’t it make much more sense to think of them as the network? In the same way Apple moves its content as close as possible to the edge of the network, why not think of Beyond The Edge — all those devices, which you control remotely via your increasingly unified operating system — as the network itself?

Smells and Radar

This requires what we think of as the device to take on some new powers and heft. Part of that is the ability to connect to other devices, but another element is to make it smarter. We have been somewhat groomed to get excited about incremental improvements in the iPhone, but in terms of what new things it can actually do, there’s been very little to cheer about for a decade. It’s been five years since I wrote Nose job: smells are smart sensors’ last frontier and there’s been precious little progress on adding smell sensors to consumer devices (I see my old friend and source on all things olfactory Redg Snodgrass has been through three job changes since I wrote the piece, which shows how long ago that is). But smell is a complicated thing, and there’s unlikely to be enough commercial imperative to get it into a consumer device any time soon.

Before that, a natural function for the device to acquire is radar. This is a technology that’s already 90 years old, but its use case for mobile devices has, until recently, not been compelling enough to merit the miniaturisation process. That changed last year, with chipmakers like Celeno incorporating Doppler radar alongside Wi-Fi and Bluetooth into chips like its Denali. It’s perhaps not a surprise that Celeno is based in Israel. Apple Israel just happens to be looking for an engineer to focus on this very technology, among other things (Wireless Machine Learning Algorithms Engineer for sensing and localization – Jobs at Apple).

But what would you use radar for? It’s Apple, so I’m guessing it’s all about the user. The job ad mentions the successful applicant “will be part of an extraordinary group that pioneers various wireless technologies for localization purposes and for wireless sensing applications. You will work with different RF solutions, such as WiFi / Bluetooth / RADAR systems, to provide outstanding user experience for Apple users.” This is about sensing. Radar would enable our devices to do two things they’re currently not very good at:

  • sensing where we are to a high degree of accuracy. Currently this is done by GPS, when outdoors and not hidden by trees, and by triangulation, via cell-phone towers or by Wi-Fi signals. Bluetooth offers a little more granularity, but these all involve signals interacting with signals. What if our devices could bounce signals off everything, and build a virtual map of our location, even inside a cupboard, and respond appropriately? What if our device could detect movement, posture, alignment, movement of a limb, or even breathing? The Celeno press release explores this a little: “[T]he added radar function enables presence location and even posture recognition for supporting additional useful applications in buildings and homes, including human monitoring, elderly care and fall detection.”
  • combined with Wi-Fi, whose packets it would leverage, the radar would give the device the ability, at least according to Celeno, to “‘see’ through walls without requiring line of sight and/or dependence on light conditions. In addition, the technology does not depend on any Wi-Fi clients, wearables of any kind and does not invade privacy.”

I’m always wary of use cases associated with healthcare, because so few of these things end up being used, and in reality there’s not a whole lot of money in that line of product. More likely, I think, is that we see functions like radar, and LiDAR (which is already in the upper-end iPhones), being used to make our devices smarter and more aware, both of their surroundings and of each other. Apple has done more than any other company to make our devices talk to each other better, but I still feel that features like AirDrop, frankly, suck.

A Conscious Instrument

But this is not just about sharing photos and stuff over Bluetooth or Wi-Fi. This is about devices becoming ‘conscious’ of much more around them, and leveraging that to make themselves smarter and more useful. I think this is where Apple is going. They don’t care about 6G except insofar as it doesn’t dent their own plans. In their world there would be no carrier, no Wi-Fi sign-on, no SIM card, e- or otherwise. Instead the device would know exactly what the context is and optimise itself automatically. If it was a game, it would map the environment and build a peer network to bring whatever data and content was necessary from the edge to the AR-VR device(s). If it was a movie showing, it would scour the walls for the best projection screen, download the data and share it among the devices so everyone could hear the soundtrack in their own preferred language.

That is just the obvious stuff. Apple is always conspicuous by its absence, so it’s interesting to see what they’re supposedly ‘late’ to. Bendable phones? Samsung and others have gotten big into this, but follow the trajectory of the technology and it’s clear it’s an old technology that is more akin to lipstick and pigs than it is to the bleeding edge. Apple is kind of getting into VR, but slowly, and probably wisely. I’m no fan-boy — I think they’ve made numerous errors, and I think they abuse their users in ways that may come back to haunt them — but there’s no arguing they play for the long term. So the fact that they’re not getting into the infrastructure side of things as others are suggests to me that they see the value, as they always have, in the provision of mind-blowing experience, something everyone on the planet seems to be willing to pay a premium price for.

And that means, as it always has, stretching the definition of a device. And I think that means adding, one by one, the sensors and communications technology that enable the device to more intimately understand the user, their mood, their exact position, their habitat, their intent, and everything and everyone they care about. It’s always been about that. And I’d find it difficult to argue with it as a strategy.

Sponsored content: the bait and switch

By | February 27, 2024

Sponsored content on big media name platforms is not what you think it is.

Increasingly, companies are paying well-known journalism brands to produce and host sponsored content. The deal is this: you pay a big name media platform to write something nice about you and they put it on their website. If you squint you may see your puff piece alongside, or at least near, some of the big name media platform’s own content, created by their own journalists. You pay for the privilege, but (still squinting) what’s not to like? A big win, right?

Up to a point, Lord Copper.

I’ve spent time in both parts of this business — as a journalist for some of the top media brands in the world (Dow Jones, Thomson Reuters, the BBC) and as a consultant to companies putting sponsored content on these platforms — and I have concluded that, for my clients, this is not the best way to spend their money. Why?

I see two problems:

  1. Poor ideation of content. In a real newsroom, there’s a process to decide what the story is — what is going to be written. An editor or two will weigh in; the reporter will likely have to write a pitch in order to demonstrate the story is going to be a good one. The process is not necessarily adversarial, but it might be, because the parties are aligned to ensure the end product is as good as possible. The editor doesn’t want to commission a dud; the journalist wants to create a story that wins awards. When the content is sponsored, there are no such calibrated alignments. Once the platform has your money, they don’t really care what the content is. For sure, there’s some negotiation prior to the signing of a contract, but no-one with any real skill and experience in good journalism is in the discussion, asking the tough questions: is this a good story? Will someone want to read it? Aren’t we just saying what everyone expects us to say? The result is usually at best a bland story, at worst, a cringe-fest.
  2. Poor understanding about what is actually being sold. What am I actually getting for my money? Big media platforms essentially perform a kind of bait-and-switch in that they make it sound like the client is buying at least two things: access to the platform’s top-notch journalists, and placement of the content alongside the platform’s original, real journalism.

Let’s look at how this works in practice. Take Reuters, for example. (Declared interest: I worked at Reuters as a journalist from 1988-1997 and from 2012-2018, and it’s home to some of the best journalists in the world.)

Just the facts, ma’am

In Reuters’ case a separate marketing team was set up in 2017, and a service launched called Reuters Plus. In June 2018, the wording of the about page was as follows:

Just the facts

Since 1850 Reuters has provided society with the content it needs to be free, prosperous and informed. Today our content reaches more than one billion people every day and is distributed in 115 countries.

“We must continue to value facts and to recognise that trusted journalism – the kind we are committed to producing – makes a difference.” Stephen Adler, Editor in Chief, Reuters News

With feet on the ground in close to 200 international locations, Reuters is perfectly placed to tell your story to decision makers.

If I was a company looking to get my content on Reuters I might be forgiven for thinking this would be the way to do it. At the time Adler was the most senior practising journalist in Reuters, and the wording suggests strongly that Adler and his team of journalists would be involved in the process of telling the client’s “story to decision makers.”

To be fair, when it was pointed out to Reuters’ management that Adler’s name was being used in this way, the page was removed.

But the way these services are presented now is not a whole lot better. The website is now more deeply fused into the Reuters Agency website (which is the commercial part of Reuters selling content to media etc) and the service can be accessed from a menu alongside Reuters commercial editorial products (i.e., actual journalism).

The page itself states the following. I’ve italicised the areas where two separate elements are conflated — real Reuters journalism and content created for and paid for by a client:

The world listens to Reuters. That’s why when we tell stories for our partners – with a steadfast commitment to excellence, accuracy, and relevance – we create impact and provoke powerful responses.

Reuters content studio builds campaign content that helps you to connect with your audiences in meaningful and hyper-targeted ways. From full-service content creation to editorial event sponsorships, coupled with our unique content distribution capabilities, we tell your stories in a way that audiences have relied on for 170 years.

Elevate your campaigns with Reuters quality: Covering everything from written articles, videography, broadcast content, photography and events, our multifaceted approach to content creation is rooted in award-winning expertise and a commitment to the heritage of storytelling.

We combine inventive content creation with the science of data-driven strategy to make your stories work seamlessly across our premium platforms and social media channels and engage the right audiences.

Benefit from the power of Reuters: Reuters has always stood for trust and integrity, and we remain the world’s preferred source for news and insights across evolving platforms, channels, and media.

Our unrivaled insights into the big picture – globally and regionally – and the finest details across every industry and agenda, ensure that every story we tell is accurate, up to date, and sought after by audiences worldwide.

That’s why when Reuters tells your story, you can be sure that it will reach the right calibre of audience and provoke the response you need.

In fact, I felt that the lines between Reuters Plus (now called a “content studio”) and the journalism part of Reuters were even blurrier. Not one sentence in the above blurb is designed to clarify that the content produced by Reuters Plus is entirely separate from Reuters’ journalistic content. Even the terms get blurred: the word ‘partners’ was used to describe customers both of editorial content (journalism) and of sponsored content (not journalism as journalists know it). Reuters has also added other services to the mix that further blur the lines. Under a page called ‘sponsorship’ other services are offered, including event sponsorship and ‘editorial franchises’. (Sponsored content has also now been quietly renamed “sponsored articles”.)

Trust me, I’m a journalist, sort of

In short, I noticed no attempt to disabuse the customer of any belief that they were in essence paying for a Reuters journalist to write a positive story about their company/institution/government, and to have it pushed to all Reuters’ media customers and channels alongside Reuters’ real journalism.

And this is in essence the bait-and-switch taking place. I’m not suggesting Reuters is the sole culprit here; nearly every major mainstream media platform does this, but as a consultant my interest lies primarily with the client, and I feel there’s a lack of understanding (some say a wilful lack of understanding) of what is really happening here.

Trust me: no Reuters journalist, editor, writer, reporter, whatever, will be involved in helping you find a compelling angle for your content. The writing will likely be farmed out to a freelancer, or an in-house writer (here’s a recent ad from Reuters Plus for a ‘custom content writer’; notice the requirement does not prioritise journalistic experience so much as branded content: candidates should “Have 5+ years experience in branded content writing or journalism”). The kicker: one of the perks of the job, the ad proclaims, is the chance to

Work alongside Pulitzer Prize-winning journalists and a team who provide unmatched, award-winning coverage of the world’s most important stories

If working alongside means (possibly) being in the same building, then I guess that is a perk. In my experience there is zero contact between the marketing teams involved in the production of sponsored content and the (real) editorial teams in Reuters, or elsewhere. And that’s how it should be: journalists should not be aware of any commercial or other relationships between the company they work for and the companies, governments and individuals they write about. Implying there is such contact only damages the editorial function.

Journalism for most of these traditional platforms (WSJ, NYT, BBC, AP, Reuters etc) is not particularly lucrative, and it was probably inevitable that they would have to rent out their most valuable asset — their masthead — to generate some cash. The problem is this: most of it is done without the knowledge or approval of the journalists working for that masthead, and things go wrong when they find out. Reuters has gone through quite a few about-turns when journalists got wind of these rentals, like this one, in 2014. Understandable: at best these rent-outs undermine journalism because they not only dilute the brand, but encourage companies to think, “I don’t need to talk to the journalists who ask difficult questions; I can just buy some space and say what I want.”

And that’s the rub. Companies have allowed themselves to be misled into thinking this content they’re paying for is

a) going to appear alongside the platform’s own content, and to be largely indistinguishable from it and

b) somehow going to be as widely read as the platform’s own content.

Neither is likely to be true. If you’re relying on a marketing team and a content-writer to come up with something as snazzy as a piece of real journalism, you’re overestimating interest in your product/company and underestimating the effort put into real journalism.

Reality and Obscurity

So my advice — and once again, I’m declaring an interest in trying to rethink content produced by companies and institutions, so of course I would say this — is to think hard before committing funds to sponsored content in channels like these. There are some which work hard to deliver quality content, but never at any point in my experience as a consultant has the process come close to matching the rigour and effort involved in real newsrooms to ensure the content is really, really good, and is compelling enough to stand alongside original journalism.

Because that’s what is being sold here: the notion that your content will be as compelling as the rest of the content on the platform, because it’s gone through similar processes which will ensure it’s as widely read — and trusted — as that other content.

The reality is that in most instances this is simply not the case.

So what’s the answer? I’ll go into some case studies of alternatives in future columns, but for now, here’s a checklist when you’re considering commissioning sponsored content:

  • go in with a clear idea of what you’re really getting for your money. Don’t think in terms of trying to promote some grand vision or new product line; think in terms of what might be of interest to a reader: What would capture their attention, and provide real, original insight?
  • demand that someone with experience in real journalism handles your content, and encourage them to speak their mind. Let them guide you in determining angles, because their instinct about what people want to read will be your biggest asset;
  • demand to know who will be writing the content, and demand that person be an experienced (real) journalist;
  • keep the channels free of any input from marketing or other spin. Resist the urge to add specific product mentions or jargon; don’t tweak the text just for the sake of it; respect the writer’s knowledge of what works and what doesn’t;
  • be proud of what you’ve done, but don’t be under any illusions that this somehow replaces being open to real journalists who want to talk to you. Sponsored content is not an alternative to a well-staffed and experienced PR function; if it’s good, the content should spark greater interest among journalists in what you have to say.