Asha to Ashes: Microsoft’s Emerging Markets Conundrum

A piece I wrote with Devi in Delhi, and the help of a couple of other colleagues. 

Asha to Ashes: Microsoft’s emerging market conundrum

By Jeremy Wagstaff and Devidutta Tripathy

SINGAPORE/NEW DELHI | Thu Sep 5, 2013 9:22pm EDT

(Reuters) – Microsoft Corp’s acquisition of Nokia’s handset business gives the software behemoth control of its main Windows smartphone partner, but leaves a question mark over the bigger business it has bought: Nokia’s cheap and basic phones that still dominate emerging markets like India.

Microsoft Chief Executive Steve Ballmer has said he sees such phones – of which Nokia shipped more than 50 million last quarter – as an entree to more expensive fare.

“We look at that as an excellent feeder system into the smartphone world and a way to touch people with our services even on much lower-end devices in many parts of the world,” he said in a conference call to analysts on Tuesday.

But analysts warn that’s easier said than done.

The problem, said Jayanth Kolla, partner at Convergence Catalyst, an India-based telecom research and advisory firm, is that Microsoft simply lacks Nokia’s retail and supply chain experience in the Finnish company’s most important markets.

“The devices business, especially the non-smartphones business in emerging markets, is a completely different dynamic,” he said.

Kolla pointed to the need to manage tight supply chains, distribution, and building brands through word-of-mouth. “Microsoft doesn’t have it in its DNA to run operations at this level,” he said.

India is a case in point. Nokia has been there since the mid 1990s and the country accounted for 7 percent of its 2012 revenue while the United States generated just 6 percent, according to Thomson Reuters data. Its India roots run deep: it has a presence in 200,000 outlets, 70,000 of which sell only its devices. One of its biggest plants in the world is in the southern city of Chennai.

For sure, Nokia has slipped in India as elsewhere: After nearly two decades as the market leader it was unseated by Samsung Electronics Co Ltd in overall sales last quarter.

But it still sold more of the more basic feature phones than anyone else.

As recently as last October, market research company Nielsen ranked it the top handset brand. The Economic Times ranked it the country’s third most trusted brand.

LOYALTY RUNS DEEP

In a land of frequent power cuts and rugged roads, the sturdiness and longer battery life of Nokia’s phones have won it a loyal fan base – some of whom have stayed loyal when trading up.

Take Sunil Sachdeva, a Delhi-based executive, who has stuck with Nokia since his first phone. He has just bought his fifth: an upgrade to the Nokia Lumia smartphone running Microsoft’s mobile operating system.

“Technology-wise they are still the best,” he said of Nokia.

But Microsoft can’t take such loyalty for granted. Challenging it and Samsung are local players such as Karbonn and Micromax, which are churning out smartphones running Google Inc’s Android operating system for as little as $50.

Such players are also denting Nokia’s efforts to build its Asha brand, a line of touchscreen devices perched somewhere between a feature phone and a smartphone.

Nokia shipped 4.3 million Asha phones globally in the second quarter of this year, down from 5.0 million the previous quarter.

“The sales performance of the Asha line has been quite poor,” said Sameer Singh, Hyderabad-based analyst at BitChemy Ventures, an investor in local startups. “With increasing competition from the low-end smartphone vendors, I’m unsure how long that business will last.”

That leaves the cheap seats. Singh estimates that the Asia Pacific, Middle East and Africa accounted for two-thirds of Nokia’s feature phone volumes in the last quarter, at an average selling price of between 25 and 30 euros ($32.99 to $39.59).

“I don’t see how Microsoft can really leverage this volume,” he said. “The market is extremely price sensitive and margins are racing into negative territory.”

TOO BIG TO IGNORE

The quandary for Microsoft is that while the basic phone market may be declining, it may simply be too big to ignore.

“If you look at markets like India and Indonesia, more than 70 percent of the volume comes from the feature phone business,” Anshul Gupta, principal research analyst at Gartner, said. “It’s still a significant part of the overall market.”

That means that if Microsoft wants to herd this market up the value chain to its Windows phones, it needs to keep the Nokia and Asha brands afloat – while also narrowing the price gap between its smartphones and the feature phones and cheap smartphones.

Microsoft has hinted that lowering prices of smartphones would be a priority. The Windows Phone series includes the top-end Lumia 1020, which comes with a 41-megapixel camera, while it also sells simpler models such as the Lumia 610 and 620 aimed at first-time smartphone buyers.

“The lower price phone is a strategic initiative for the next Windows Phone release,” Terry Myerson, vice president of operating systems, said on the same conference call, while declining to provide details.

An option for Microsoft, analysts said, would be to shoe-horn services like Bing search, Outlook webmail and Skype, the Internet telephony and messaging application, into the lower-end phones as a way to drive traffic to those services and make the devices more appealing.

“So you can bundle services with these low-end products and that way you can reach a wider audience,” said Finland-based Nordea Markets analyst Sami Sarkamies.

But in the meantime Microsoft needs to brace for an assault on all fronts as emerging market rivals see an opportunity to eat further into Nokia’s market share. In India, said Convergence Catalyst’s Kolla, cheap local Android brands have been held back by Nokia’s strong promotion of its mid-tier Asha brand.

“Now, I expect them to pounce,” he said. ($1 = 0.7577 euros)

(Reporting By Jeremy Wagstaff in Singapore, Devidutta Tripathy in New Delhi, Bill Rigby in Seattle, Ritsuko Ando in Helsinki; Editing by Emily Kaiser)

Smartwatches: Coming Soon to a Cosmos Near You

This is a column I did for the BBC World Service, broadcast this week. 

There’s been a lot of talk that the big boys — by which I mean Apple and Samsung — are about to launch so-called smart watches. But how smart does a watch have to be before we start strapping them to our wrists in numbers big enough to make a difference?

First off, a confession. I’ve strapped a few things to my wrist in my time. Back in the 80s and 90s I used to love the Casio calculator watch called the Databank, though I can’t actually recall ever doing a calculation on it or putting more than a few phone numbers in there. About a decade ago I reviewed something called the Fossil Wrist PDA, a wrist-bound personal digital assistant. It didn’t take off. In fact, no smart watch has taken off.

So if the smartwatch isn’t new, maybe the world around them is? We’ve moved a long way in the past couple of years, to the point where every device we have occupies a slightly different spot to the one it was intended for. Our phones, for example, are not phones anymore but data devices. And even that has evolved: the devices have changed direction in size, from shrinking to getting larger, as we realise we want to do more on them.

That in turn has made tablets shrink. When Apple introduced the iPad, Steve Jobs famously said that was as small as a tablet could reasonably go, but Samsung proved him wrong with the phablet, and now we have an iPad Mini. All this has raised serious questions about the future of the laptop computer and the desktop PC.

But it shouldn’t. For a long time we thought that the perfect device would be something that does everything, but the drive to miniaturise components has actually had the opposite effect: we seem to be quite comfortable moving between devices and carrying a bunch of them around with us.

This all makes sense, given that our data is all stored in the cloud, and every device is connected to it either through WiFi, a phone connection or Bluetooth. We often don’t even know how our device is connecting — we just know it is.

So, the smartwatch optimists say, the time is ripe for a smartwatch. Firstly, we’ve demonstrated that we are able to throw out tired conventions about what a device should do. If our phone isn’t really our phone anymore then why not put our phone on our wrist? Secondly, the cloud solves the annoying problem of getting data in and out of the device.

Then there’s the issue of how we interact with it. It’s clear from the chequered history of the smartwatch that using our digits is not really going to work. We might be able to swipe or touch to silence an alarm or take a call, but we’re not going to be tapping out messages on a screen that size.

So it’s going to have to be voice. GeneratorResearch, a research company, reckons this would involve a small earpiece and decent voice-command software like Apple’s Siri. I’m not convinced we’re quite there yet, but I agree with them that it’s going to take someone of Apple’s heft to make it happen and seed the market.

In short, the smart watch might take off if it fits neatly and imaginatively into a sort of cosmos of devices we’re building around ourselves, where each one performs a few specific functions and overlaps with others on some. If it works out, the watch could act as a sort of central repository of all the things we need to know about — incoming messages, appointments, as well as things the cloud thinks we should know about, based on where we are: rain, traffic jams, delayed flights.

But more crucially it could become something that really exploits the frustratingly unrealised potential of voice: where we could more easily, and less self-consciously, talk to our devices and others without having to hold things to our ear, or be misunderstood.

In time, the smartwatch may replace the smartphone entirely.

I’m not completely convinced we’re as close as some think we are, but I’ve said that before and been proved wrong, so who knows?

Cuckoonomics

Here’s a piece I wrote for the BBC which went out today. (They often air some time after I’ve recorded them.) 

It’s very hard to be in the technology business these days because you don’t know when someone is going to be a cuckoo. Cuckoos, in case you are not an ornithologist, are what are called brood parasites, which means they lay their eggs in another bird’s nest — effectively outsourcing the whole brooding process.

Technology players have been playing this game for a while. The problem is that no one is quite sure who is the cuckoo, who is the sucker and what’s the nest. I call it cuckoonomics.

Take the recent spat between Apple and Google. Google was quite happy to have its Maps software on an iPhone — after all, it makes more money from an iPhone than it does from a phone running its own Android software — but it didn’t want to give away the farm. So it withheld the feature that let users navigate turn by turn. So Apple ditched the whole thing and went, somewhat disastrously, with its own version of maps.

Google in this case thought it was being a cuckoo, and the iPhone was the nest. But it didn’t want iPhone users enjoying the product so much that its own users jumped ship. 

In the old days technology was about hardware. Simple. You make something, put a sticker on it, and sell it. That’s all changed. Now it’s about software, about services, about experience. I may run an expensive telecommunications network but I can’t control what goes on it. Cuckoos offering video, games, messaging etc flock onto it, parking their eggs and reaping the benefits.

It happens in more subtle ways, though the implications may be just as drastic. Microsoft is about to launch a new version of its operating system called Windows 8. It’s quite different from before and a major gamble; not surprising, because Microsoft’s once cushy nest is being dismantled by Macs, mobiles and tablets.

It’s a brave attempt by Microsoft, but what’s interesting to me is how they’ve aimed their sights not at Apple but at Google. Microsoft have baked search so far into their new operating system they hope it will be where we do most of our stuff. From one place we can search all our apps, the web, our contact list, our saved notes and documents.

Of course this isn’t new. You can do this on a Mac, on an iPad, on an Android phone, even on a Windows PC. But it’s not been quite as well done before.

I’ll wager if Windows 8 catches on this will be one of its biggest features, and Google as a result will take a hit. Which is ironic because it’s been Google who have used cuckoonomics against Microsoft for more than a decade, gradually building a library of services around search that have ended up taking over Microsoft’s nest. Think Gmail taking over Outlook and Hotmail; Docs taking over Office, and then eventually the Chrome browser taking over Internet Explorer. 

What’s intriguing is that Microsoft is also trying the same trick with Facebook. Windows 8 dovetails quite nicely with your Facebook stuff but at no point does it look like Facebook. I couldn’t find a Facebook app for Windows 8 but it didn’t seem to matter; instead my Facebook friends, updates, photos and messages all appeared within Windows 8 — with rarely a Facebook logo in sight.

Which cuckoo is going to win? 

If we can’t imagine the past, what hope the future?

Another piece I recorded for the BBC

Up until we discovered a body in a glacier in the Italian Alps more than 20 years ago, we didn’t really have a clue about our ancestors. The body belonged to a man who died 5,000 years ago. While much of the interest has focused on how he died — it took scientists 10 years to discover he was killed by an arrow whose head was still lodged in his shoulder — much more interesting to me is that we had no idea how someone like this dressed.

Otzi, according to an excellent book by Bill Bryson, has confounded all assumptions: for one thing he had more gear than your average outdoorsy dude today, like

two birchbark canisters, sheath, axe, bowstave, quiver and arrows, small tools, some berries, a piece of ibex meat and two spherical lumps of birch fungus, each about the size of a large walnut and carefully threaded with sinew. One of the canisters had contained glowing embers wrapped in maple leaves, for starting fires. 

His clothes — leggings, garters and belt, a loincloth and hat — were made from skins and furs from red deer, bear, chamois, goat and cattle. He carried a rectangle of woven grass that might have been a cape or a sleeping mat — we don’t know. He was wearing boots that looked like birds’ nests on soles of stiffened bear skin, which looked awful until a foot and shoe expert recreated them and walked up a mountain. Turned out their grip was better than modern rubber, without giving blisters.

Now it’s great that we know this stuff, but it’s somewhat humbling to think how little we had imagined any of this. We spent several hundred years looking down on our ancestors, thinking they dressed and were equipped like Raquel Welch in One Million Years B.C.

Turns out that we lacked the imagination to figure out what our forebears looked like. We’d have done better to wander down to our nearest outdoor store than listen to the experts pontificate. And yet I’ve seen no collective mea culpa about this, no attempt to reassess what we think we know by trying to imagine a little harder.

And so to something even more depressing: if we’re so bad at imagining what the past looked like, what hope do we have for the future? We’ve generally been pretty poor at this, even in the short term. Bladerunner may have been a great movie, now thirty years old this year, but the world it depicts seven years hence appears to be completely without the one thing that already dominates and defines our world: mobile devices.

Sure we are supposed to be surrounded by robots that look so much like us we’d need a lie detector machine called a Voight-Kampff to tell the difference, and we’d be floating about in flying cars, but to yack with our love interest, we’d need to find a bar with a video payphone, and if someone wanted to reach us they’d have to track us down in the permanent rain to our favorite noodle stall.

Now our mobile devices are indispensable, wrapping the Internet around us in a way that few of us predicted even ten years ago. None of us predicted social networks like Facebook. None of us thought that nearly a billion people would sign up. I dread to think what we haven’t imagined about the next ten, 20, 30 years.

My money is on us all wearing bird nest boots. 

 

Smarter smartphones for smarter people

This is a piece I wrote for the BBC World Service.

So, the iPhone 5 is here, and while it will sell well, probably better than any phone before it, there’s a sense of anticlimax: this, we are told, is evolution, not revolution. None of the mind-bending sense of newness and change that the iPhone and iPad used to engender. This is a sign, we’re told, that the market is mature, that there’s not much more that can be done.

I’d like to suggest another way of looking at this. For sure, not every new product that comes out of Apple HQ can blow our minds. But that doesn’t mean the mobile device is now doomed for a stodgy and reliable plateau of incremental improvements, like cars, washing machines or TVs.

In fact, quite the opposite. The world of the mobile device has already made extraordinary changes to our world, and we’re only at the start of a much larger set of changes. Our problem is that we’re just not very good at judging where we sit amidst all this upheaval.

Consider these little factlets from a survey conducted last year by Oracle. At first glance they seem contradictory, but I’ll explain why they’re not.

More than half of those surveyed thought their mobile phone would replace their iPod/MP3 player by 2015. A year later when they asked them again, a third said it already had. Oracle found more or less the same was true of people’s global positioning systems, or GPS.

Then there’s this. More than two thirds of the people surveyed said they use a smartphone, and of those people, 43% have more than one.

In other words, more and more functions that used to be a separate device are now part of our mobile phone. And yet at the same time a significant chunk of users have more than one mobile phone.

What this means, I think, is that we are integrating mobile phones into our lives in a way that even those who spend time researching this kind of thing don’t really get. In fact we’ve integrated them so much we need two.

That’s because, of course, they’re not really phones: they’re devices that connect us to all sorts of things that we hold dear, whether it’s social, work or personal.

But there’s still a long way to go. The device of the future will make everything more seamless. A company in Thailand, for example, allows you to use your smartphone to open your hotel door, tweak the room lights and air con, order food and switch TV channels.

In other words interact with your surroundings. Some via connected devices, from air conditioning units to washing machines, from street signs to earthquake sensors. Other services will sift piles and piles of big data in the cloud, and push important information to us when we need it. Google already has something called Google Now which tries to anticipate your problems and needs before you do: a traffic jam up ahead, a sudden turn in the weather, a delayed flight.

Devices will also interact with the disconnected world, measuring it for us — whether it’s our blood sugar levels or the air quality. Sense movement, odors, colors, frequencies, speed. It may even, one day, see through walls for us.

So our smart phones are just starting to get smart. We’re already smart enough to see how useful they can be. The bits that are missing are the technologies that blend this all together. This could still take some time, but don’t for a moment think the mobile world is about to get boring.

The Tablet is the Computer

One thing discussed often and at great length in nerdy circles these days is this: Is the tablet—by which we really mean the Apple iPad, because it created the market, and presently accounts for nearly two thirds of it—a computer? A PC, if you will?

Some say that the iPad is not really a computer. It has no keyboard. People don’t sit at desks to use it. It lacks the horsepower of most of today’s computers. So they think it’s a big smartphone. I think they are wrong. They misunderstand what is happening.

This is not hard to see in action. Wandering around an airport cafe the other day, I noticed everyone had at least one device. But those with an iPad were by far the most comfortable, whether curled up in an armchair or sitting at a table. And they were doing everything: I saw one guy watching a movie, another writing a letter, another CEO-type playing Angry Birds. I was thrown out of the cafe before I was able to finish my research.

At the hairdressers no fashion magazines were being read: Everyone was cradling an iPad, oblivious to the time and their hair being teased into odd shapes.

So let’s look at the data.

Surveys by comScore, a metrics company, point to what is really happening. In studies of the U.S. last October and of Europe released this week [Registration required], they noticed that during the week tablet usage spikes at night—as computer usage drops off. So while during the work day folk are using their PCs, come evening they switch to tablets. (Mobile usage, however, remains flat from about 6 pm.) The drop in PC usage is even more pronounced in the U.S., while tablet usage in the evening continues to rise until about 11 pm:

[Chart: comScore data on tablet, PC and mobile usage by hour of day]

In other words, people are using their tablets as computers. Not as mobile devices. Not as replacements for their phone. They’re using them, in the words of a friend of mine, as a replacement for that ancient computer sitting in the corner gathering dust that gets booted up once in a while to write an email or a letter to Granny on.

Now not everyone is using tablets like this. The first surveys of tablet usage indicated people were using them as ‘TV buddies’—things to play with while watching TV. But this still doesn’t quite capture what is happening.

One study by Nielsen found last May that 3 out of 10 Americans were using their computer less frequently after buying a tablet. What’s surprising about this figure is that it’s higher than for all other devices—including gaming consoles, Internet-connected TVs and portable media players. Given the plethora of games and stuff you can get for a tablet, surely more people would be saying that they use those devices less, rather than their netbook, laptop or desktop, now they have a tablet?

That survey was done when less than 5% of U.S. consumers owned one. A year on, that figure is much higher. Pew’s Internet and American Life Project reported on Jan 23 that the number of American adults who owned a tablet grew from 10% to 19% over the holiday period; although their data may not be directly comparable with Nielsen’s, it sounds about right. And it represents an unprecedented adoption of a new device, or computing platform, or whatever you want to call it.

(Pew also surveys ebook readers and finds the same results. But I think we’ll see a serious divergence between these two types of device. Yes, some tablets are good for reading and some ereaders, like the Kindle Fire, look a lot like a tablet. But they’re different, and used in different ways. I think that while the markets will overlap even more, they’ll be more like the laptop and netbook markets, or the ultrabook and the PC market: they may do similar things but the way people use them, and the reason people buy them, will differ.)

This is rapidly altering the demographics of the average tablet user. Back in 2010, a few months after the first iPad was launched, 18-34 year olds accounted for nearly half the market, according to another Nielsen report. A year on, that figure was down to a little over a third, as older folk jumped aboard. Indeed the number of 55+ iPad users doubled in that period, ending up bigger than the 25-34 year old group.

(Pew’s figures suggest that while older folk have been slower to adopt, the rate of growth is picking up. Around a quarter of U.S. adults up to the age of 49 now have a tablet, a shocking enough figure in itself. Above 50 the number comes down. But the telling thing to me is that the rate of growth is more or less the same: about a fourfold increase between November 2010 and January 2012. While a lot of these may have been gifts over the holidays, it also suggests that the potential is there.)

So it’s pretty simple. The tablet, OK, the iPad came along and reinvented something that we thought no one wanted—a tablet device with no keyboard. But Apple’s design and marketing savvy, and the ecosystem of apps and peripherals, have made the tablet sexy again. Indeed, it has helped revive several industries that looked dead: the wireless keyboard, for example. ThinkOutside was a company in the early 2000s that made wonderful foldable keyboards for the Palm, but couldn’t make it profitable (and is now part of an apparently moribund company called iGo).

Now look: the website of Logitech, a major peripherals company, has the external keyboard and stand for the iPad as more or less its top product. Logitech reckon a quarter of tablet users want an external keyboard, and three quarters of them want their tablet “to be as productive as their laptop.” Most peripheral companies offer a kind of wireless keyboard, and there are more on the horizon.

And as BusinessWeek reported, the highest grossing app on the iPad appstore this Christmas wasn’t Angry Birds; it was a program for viewing and editing Microsoft Office documents, called QuickOffice. The app itself is not new: it’s been around since 2002, and a pared-down version came preinstalled on dozens of devices. But people wouldn’t shell out the extra $10 for the full version—until the iPad came along. Now they happily pay $20, and the company sold $30 million worth in 2011. (BusinessWeek links this to growing corporate interest in the iPad but you can see from comScore’s data that this is not necessarily correct. The tablet is a personal device that is mostly used outside the office.)

So. There’s a new industry out there, and it’s for a device that’s not a phone, though it has the same degree of connectivity; it’s not a desktop, though it should be able to do all the things a desktop can do; it’s not a laptop, though it should make the user as productive as a laptop can. And it’s many more things besides: a TV buddy, a sort of device to accompany your downtime in cafes, salons or on the couch.

Gartner, a research company, reckon that from about 17.5 million devices sold in 2010 there will be 325 million sold in 2015. An 18-fold increase. In the same period the annual sales of notebooks will only have doubled, and desktops will have grown by, er, 5%. Hard not to conclude from that that the tablet, OK, the iPad, is going to be everyone’s favorite computer—replacing the desktop, the laptop and whatever ultrabooks, netbooks or thinkbooks are the big thing in 2015.

(Update: This was written before Apple’s results. Tim Cook has confirmed the PC is their main competitor.) 

Inside the Web of Things

This is a slightly longer version of a piece I’ve recorded for the BBC World Service

I’ve long dreamed of an Internet of things, where all the stuff in my life speaks to each other instead of me having to do the talking. The vision is relatively simple: each gadget is assigned an Internet address and so can communicate with the others, and with a central hub (me, or my computer, or smartphone, or whatever).

The most obvious one is electricity. Attach a sensor to your fusebox and then you can see which of your myriad appliances is inflating your electricity bill. Great idea! Well, sort of. I found a Singapore-based company that was selling them, and asked to try one out. It was a nice, sleek device that promised to connect to my computer via WiFi and give me a breakdown of my electricity consumption. Woohoo.

Only it never worked. Turns out the device needed to be connected to the junction box by a pro called Ken, who tried a couple of times and then just sort of disappeared. I don’t mean he was electrocuted or vaporized, he just didn’t come back. The owner of the company said he didn’t really sell them anymore. Now the device is sitting in a cupboard.

Turns out that Cisco, Microsoft and Google tried the same thing. The tech website Gigaom reports that all three have abandoned their energy consumption projects. Sleek-looking devices but it turns out folk aren’t really interested in saving money. Or rather, they don’t want to shell out a few hundred bucks to be reminded their power bills are too high.

This might suggest that the Internet of things is dead. But that’d be wrong. The problem is that we’re not thinking straight. We need to come up with ways to apply to the web of things the same principles that made Apple tons of cash. And that means apps.

The Internet of things relies on sensors. Motion sensors which tell whether the device is moving, which direction it’s pointing in, whether it’s vibrating, its rotational angle, its exact position, its orientation. Then there are sensors to measure force, pressure, strain, temperature, humidity and light.

The iPhone has nearly all these. An infrared sensor can tell that your head is next to the phone so it can turn off the screen and stop you cancelling the call with your earlobe. (The new version can even tell how far away you are from the phone so it can activate its voice assistant Siri.)

But what makes all this powerful is the ecosystem of third party applications that have been developed for the iPhone. Otherwise it’s just a bunch of sensors. There are thousands of apps that make use of the iPhone’s sensors–most of them without us really thinking about it.

This is the way the Internet of things needs to go. We need to stop thinking about boring things like “power conservation” and just let the market figure it out. Right now I want a sensor that can tell me when the dryer is spinning out of control, which it tends to do, because then it starts moving around the room. Or help me find my keys.

In short, the Internet of things needs to commoditize the sensors and decentralize the apps that make those sensors work. Make it easy for us to figure out what we want to do with all this amazing technology and either give us a simple interface to do it ourselves, or make a software kit that lets programmy people do it for us.
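To make that concrete, here is a minimal sketch of the kind of rule such a software kit might let someone write: a hypothetical vibration check for that wandering dryer. The sensor and alert functions are invented placeholders, not any real kit’s API.

```python
# A minimal sketch, not a real product: a rule that watches a vibration
# sensor on the dryer and raises an alert when it shakes too much.
# read_vibration() and send_alert() are invented stand-ins for whatever
# a real sensor kit would actually provide.
import random
import time

SHAKE_THRESHOLD = 2.5  # arbitrary units; would be tuned to the real sensor


def read_vibration() -> float:
    """Stand-in for a real accelerometer reading."""
    return random.uniform(0.0, 4.0)


def send_alert(message: str) -> None:
    """Stand-in for a tweet, text message or push notification."""
    print("ALERT:", message)


for _ in range(6):  # check a handful of times for the demo
    if read_vibration() > SHAKE_THRESHOLD:
        send_alert("The dryer is spinning out of control again.")
    time.sleep(1)
```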

Which is why some people are pretty excited about Twine, a two-and-a-half-inch rubber square from a bunch of guys at MIT, which connects to WiFi and can be programmed via a very simple interface. Some examples: hang it around your infant’s neck and get it to send you a tweet every time it moves.

It may not be rocket science, but if you’ve got an infant-wandering problem it could be just what you needed.

Quaintness in Salt Lake

(This is the script for a piece I did for the BBC World Service. Posted here by request. Podcast here.)

Something rather quaint is going on in a Salt Lake City courtroom. A company called Novell, which you’d be forgiven for not having heard of, is suing Microsoft over a product called WordPerfect, which you also may not have heard of, and which it says was hobbled from running on something called Windows 95 to protect Microsoft’s own product, Word.

To be honest, you don’t need to know the ins and outs of this Microsoft law suit; nor do you really need to know much about Novell—once a giant in word processing software, and now a subsidiary of a company called The Attachmate Group, which I had never even heard of. Or, for that matter Windows 95—except that once upon a time people used to stay up all night to buy copies. Sound familiar, iPad and iPhone lovers?

It’s weird this case is going on, and I won’t bore you with why. But it’s a useful starting point to look at how the landscape has changed in some ways, and in others not at all. Microsoft is still big, of course, but no-one queues up for their offerings anymore: Indeed nobody even bought Vista, as far as I can work out. But back then, nearly every computer you would ever use ran Windows and you would use Microsoft Office to do your stuff. You couldn’t leave because you probably didn’t have a modem and the Internet was a place where weird hackers lived.

Now, consider this landscape: Apple make most of their money from phones and tablets. Google, which wasn’t around when Windows 95 was, now dominate search, but also own a phone manufacturer and have built an operating system. Amazon, which back then was starting out as a bookseller, is now selling tablets at cost as a kind of access terminal to books, movies, magazines and other things digital. Facebook, which wasn’t even a glint in Mark Zuckerberg’s 11-year-old eye at the time, is now the world’s biggest social network, but is really a vast walled garden where everything you do—from what you read, what you listen to, as well as how well you slept and who you had dinner with—is measured and sold to advertisers.

All these companies kind of look different, but they’re actually the same. Back in 1995 the PC was everything, and so therefore was the operating system and the software that ran on it. The web was barely a year old. Phones were big and clunky. So Microsoft used its power to dominate to sell us what made the most money: software.

Now, 15 or 16 years on, look how different it all is. Who cares about the operating system? Or the word processor? Or the PC? Everything is now mobile, hand-held, connected, shared, and what was expensive is now free, more or less. Instead, most of these companies now make their money through eyeballs, and gathering data about our habits, along with micropayments from data plans and apps, online games and magazines.

And to do this they all have to play the same game Microsoft played so well: dominate the chain, so that everything we do happens within a Hotel California-like walled garden we won’t ever leave. So my predictions for next year, most of which have been proved true in recent days: a Facebook phone which does nothing except through Facebook, an Amazon phone which brings everything from Amazon to your eyes and ears, but nothing else, an Apple-controlled telco that drops calls unless they’re on Apple devices. Google will push all its users into a social network, probably called Google+, and will punish those who don’t want to join by giving them misleading search results. Oh, and Microsoft. I’m not sure about them. Maybe we’ll find out in Salt Lake City.

The Siri Thing

I was asked to pen a few lines for a Guardian journalist on why I thought Siri was female in the U.S. and male in the UK. My quote was taken a tad out of context and so offended some folk who either didn’t know I was a technology columnist who makes a living out of irony and flip, or that I’m the most egregious, line-forming mumbler in British history. So here’s my contribution in its entirety. Make of it what you will.

I don’t know the reason why they chose male and female voices that way: it’s probably something prosaic about licensing, or they didn’t have a female British voice handy, or someone thought it would be good to try it that way first to see what happened.

But there’s plenty of literature to suggest that the gender of a voice is important to the listener. Men, according to researchers from Kansas State University,  tend to take more financial risk if they are given a video briefing voiced over by a woman; the opposite is also true. (Conclusions from this are undermined when it’s added that men are willing to take even more risks if there’s no voice-over at all, which possibly means the less information they’re given, the more comfortable they feel about charging off into the unknown. This might sound familiar.)

Indeed, the problem with most research on the subject is that it tends to be as confusing as that. A paper from academics at the University of Plymouth found that “the sex of a speaker has no effect on judgements of perceived urgency” but did say that “female voices do however appear to have an advantage in that they can portray a greater range of urgencies because of their usually higher pitch and pitch range.”

We do know this: male German drivers don’t like getting navigational instructions delivered in a female voice. There’s also something called presbycusis—basically hearing loss, where older people find it easier to hear men’s voices than women’s, and can’t tell the difference between high-pitched sounds like s or th.

But the bottom line is that Apple may have erred. Brits are notoriously picky about accents: class and regional, and, according to a study by the University of Edinburgh, can’t stand being told what to do by an American female voice. So far so good. But they also found that people don’t like what the researchers called a Male Southern British English voice either. Conclusion: until Siri can do regional female voices, it’s probably not going to be a huge success in the UK.

My tuppence worth: Americans speak loudly and clearly and are usually in a hurry, so it makes sense for them to have a female voice. British people mumble and obey authority, so they need someone authoritative and, well, not American female.

Media: Reducing Story Production Waste

In trying to change news to match the new realities of the Interwebs, media professionals are still somewhat stuck in old ways of doing things. One is failing to address the massive waste in news production–or at least parts of it.

So what potential waste is there? Well, these are the obvious ones:

  • Gathering: Reporters/trips/stories per trip/matching other outlets
  • Editing: The number of people who look at a story before it is published/time a story takes to work through the system

I’m more interested, however, in the amount of waste from material generated. Think of it like this:

Inputs:

  • Story idea
  • Logistics (travel/communications/reporting tools)
  • Interviews, multimedia and other material generated

Outputs:

  • Story
  • Photo
  • ?Video

Wastage:

  • All content not used in the story (some may be reused, e.g. photos or sidebars, but rarely)
  • All content used that’s not reused/repurposed.

This seems to me to be extremely wasteful for an industry in so much pain. Any other industry wouldn’t just look to pare back the factors of production but also to minimize the waste generated.

Any journalist will know just how much we’re talking about. Say you interview five people for a story. Even a stock market report is going to involve five interviews of at least five minutes. At about 150 words a minute that’s nearly 4,000 words. The stock market report itself is going to be about 500 words, maybe 600. That’s some 3,600 words–say 2,500, allowing for the reporter’s questions and some backchat–gone to waste. For every 500 words produced we had to throw out 2,000.
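As a back-of-the-envelope check on that arithmetic, here is a tiny sketch using the figures above; every number in it is an illustrative assumption, not real newsroom data.

```python
# Back-of-the-envelope estimate of story production waste (SPW).
# Every figure here is an illustrative assumption from the example above.
interviews = 5           # people interviewed for the stock market report
minutes_each = 5         # minimum length of each interview
words_per_minute = 150   # rough speaking rate
published_words = 500    # length of the finished story
overhead = 0.35          # assumed share lost to the reporter's questions and backchat

gathered = interviews * minutes_each * words_per_minute   # roughly 3,750 words
usable = gathered * (1 - overhead)                        # interviewee material worth keeping
wasted = usable - published_words                         # words that never see print

print(f"Gathered roughly {gathered} words, kept {published_words}, "
      f"threw away about {wasted:.0f}.")
```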

Yes, I know it’s not a very scientific way of doing things, but you get my point. Most journalists only write down the quotes they need for the story, and many will delete the notes they’ve taken if they’re typing them on the screen in the same document they’re writing the story on. So all that material is wasted.

A good reporter will keep the good stuff, even if it’s not used in the story, and will be able to find it again. But I don’t know of any editorial system that helps them do that–say, by tagging or indexing the material–let alone makes it available to other reporters on the same beat.
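As a rough sketch of what such tagging and indexing might look like, here is a toy index of interview material; the note fields, tags and names are hypothetical, and no real editorial system’s schema is implied.

```python
# A toy sketch of tagging and indexing interview material so it can be
# found again and shared across a beat. Fields, tags and names are
# hypothetical; no real editorial system is implied.
from collections import defaultdict

notes = [
    {"reporter": "A. Sharma", "beat": "telecoms",
     "tags": ["nokia", "asha", "retail"],
     "text": "Distributor says Asha stock is moving slowly outside the big cities..."},
    {"reporter": "B. Tan", "beat": "telecoms",
     "tags": ["microsoft", "windows phone", "pricing"],
     "text": "Analyst expects cheaper Lumia models in the next release..."},
]

# Build an inverted index: tag -> notes carrying that tag.
index = defaultdict(list)
for note in notes:
    for tag in note["tags"]:
        index[tag].append(note)

# Any reporter on the same beat could then pull everything filed under a tag.
for note in index["asha"]:
    print(note["reporter"], "-", note["text"][:60])
```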

This is where I think media needs to change most. It needs to assume that all material gathered by journalists, through interviews, research, even browsing, is potentially content. It needs to help journalists organise this material for research but, more importantly, to generate new content from it.

Take this little nugget, for example, in a New York Times story, Nokia Unveils a New Smartphone, but Not a Product of Its Microsoft Deal – NYTimes.com: The reporter writes of the interviewee, Nokia’s new chief executive Stephen Elop: “During the interview, he used the words ‘innovate’ or ‘innovation’ 24 times.”

I really like that. It really captures something that quotes alone don’t. We would call it “interview metadata”–information about the interview that is not actual quotes or color but significant, nonetheless.

Whether the journalist decided to count them early on during the interview, or took such good notes that a keyword search or manual count afterwards was enough, or whether he transcribed the whole thing in his hotel room later, I don’t know. (A quibble: I would have put the length of the interview in that sentence, rather than an earlier one, because it lends the data some context. Or one could include the total number of words in the interview, or compare it with another word, such as “tradition” or something. Even better, create a word cloud out of the whole interview.) (Update: here’s another good NYT use of metadata, this time the frequency of words in graduation speeches: Words Used in 40 Commencement Speeches – Class of 2011 – Interactive Feature – NYTimes.com)
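To show how little work that metadata takes once a transcript exists, here is a rough sketch of a keyword count; the transcript filename and the target words are placeholders for illustration.

```python
# Rough sketch: counting how often chosen words appear in an interview
# transcript. The filename and target words are placeholders.
import re
from collections import Counter

with open("interview_transcript.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)

for target in ["innovate", "innovation", "tradition"]:
    print(f"{target}: {counts[target]}")

# The same Counter could feed a word cloud of the whole interview.
print(counts.most_common(20))
```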

The point? Elop is an executive, and he has a message. He wants to convey the message, and so he is using carefully chosen words not only to ensure they’re in any quote that’s used, but also to subliminally convey to the journalist the angle he hopes the journalist will adopt. By taking the interview metadata and presenting it separately, the journalist can illustrate that objective, and strategy, to the reader.

And, of course, you’ve reduced the story production wastage, or SPW, significantly.

Media can help this process by developing tools and offering services to maximise the usefulness of material gathered during research and interviews, and to reduce the time a journalist spends on marshalling this material.

Suggestions?

  • Transcription services, where journalists can send a recording and get the material back within the hour (or even as the interview is conducted, if the technology is available).
  • Push some of the content production to the journalist: let them experiment with wordclouds and other data visualization tools, not only to create end product but to explore the metadata of what they’ve produced.
  • Explore and provide content research and gathering tools (such as Evernote) to journalists so they don’t have to mess around too much to create stuff drawing on existing material they’ve gathered for the story they’re working on, from previous research and interviews, and, hopefully, from that of colleagues.

A lot of my time training journalists these days is spent on these kinds of tools, and I’m always surprised at how little use is made of them. That needs to change if media is to find a way to make more use of the data it gathers in the process of creating stories.