Why we hate video calls

Good piece in the New Scientist about why we’ve always hated video calls:

When another New York Times reporter went to Pittsburgh in mid-1971, however, he found only 33 Picturephones in operation, with just 12 able to dial outside their own buildings. Aside from impracticalities such as cost, it seemed that, against all predictions, no one actually wanted video calling. Users were more interested in seeing graphics than face-to-face video conversation. At Bell Labs, Lucky recalls that the only person who called his Picturephone was his boss, Arno Penzias. “I found it very awkward because I had to stare at him,” he says.

More than that, I think the enduring non-appeal of video is that it doesn't come close to replacing talking face to face. Face-to-face talking is not about seeing the other person, or looking them in the eyes; it's about non-verbal communication: gestures, body language, touch. It's also about allowing other things to intervene: movement, distraction, interaction with objects.

Video calls are exhausting, because you are trying to replace all that with just maintaining eye contact, or at least giving the appearance of remaining engaged. It’s a new form of communication, and we’ve tried and rejected it. Whenever Cisco drag me over to their HQ for some elaborate video conference I always feel it’s a waste of time, and a major overengineering of a flawed medium.

Talking on the phone, meanwhile, suits us perfectly (although I've come to hate it almost as much as video calling). As George Costanza once said, after a phone conversation with a blind date:

George: She had to be impressed by that conversation, had to! It was a great performance. I am unbelievable on the phone. On the date they should just have two phones on the table at the restaurant, done.

Phone calls have become useful because we are able to transfer a lot of the body language and non-verbal cues into speech (and silence). We're still working on text chat, but we're getting there. It works; it's not exhausting. It communicates what we want to communicate and filters out what we don't, without, for the most part, reading anything into anything else.

Xiaomi Goes Virtually Edgeless By Using Ultrasound


Regular readers will know I've been looking out for this to happen for a while: the use of sound, or rather ultrasound, as a form of interface. Here's a Reuters piece I did on it a year ago: From pixels to pixies: the future of touch is sound | Reuters:

Ultrasound – inaudible sound waves normally associated with cancer treatments and monitoring the unborn – may change the way we interact with our mobile devices.

But the proof will be in the pudding, I reckoned:

Perhaps the biggest obstacle to commercialising mid-air interfaces is making a pitch that appeals not just to consumers’ fantasies but to the customer’s bottom line.

Norwegian start-up Elliptic Labs, for example, says the world’s biggest smartphone and appliance manufacturers are interested in its mid-air gesture interface because it requires no special chip and removes the need for a phone’s optical sensor.

Elliptic CEO Laila Danielsen says her ultrasound technology uses existing microphones and speakers, allowing users to take a selfie, say, by waving at the screen.

Gesture interfaces, she concedes, are nothing new. Samsung Electronics had infra-red gesture sensors in its phones, but says “people didn’t use it”.

Danielsen says her technology is better because it’s cheaper and broadens the field in which users can control their devices.

That day has arrived. Xiaomi's new MIX phone, Elliptic Labs says, is the first smartphone to use its Ultrasound Proximity Software:

INNER BEAUTY replaces the phone’s hardware proximity sensor with ultrasound software and allows the speaker to be completely invisible, extending the functional area of the screen all the way to the top edge of the phone.

Until now, all smartphones required an optical infrared hardware proximity sensor to turn off the screen and disable the touch functionality when users held the device up to their ear.

Without the proximity sensor, a user’s ear or cheek could accidentally trigger actions during a call, such as hanging up the call or dialing numbers while the call is ongoing.

However, INNER BEAUTY — built on Elliptic Labs’ BEAUTY ultrasound proximity software — uses patented algorithms not only to remove the proximity sensor, but also to hide the speaker behind the phone’s glass screen.

Besides eliminating the unsightly holes on a phone’s screen, Elliptic Labs’ technology eliminates common issues with hardware proximity sensors, such as their unreliability in certain weather conditions or in response to various skin colors as well as dark hair.

This is a good first step. The point, of course, for the company, is that it can push the display right to the top, which definitely looks nice (the front-facing camera, if you're wondering, is now at the bottom). But the use of ultrasound has lots of interesting implications, not least for how we interact with our phones. If gestures actually work, rather than just being said to work, they will make interacting with other devices as interesting as voice, maybe more so.
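To make the idea concrete, here is a minimal sketch of how echo-based proximity sensing can work in principle. This is not Elliptic Labs' actual algorithm; the function names, sample rate and the 5 cm threshold are illustrative assumptions. The phone plays an inaudible chirp through its earpiece, records the reflection with its microphone, and infers distance from the round-trip time of flight.

```python
# Illustrative sketch, not Elliptic Labs' algorithm: estimate ear/cheek
# proximity from the echo of an inaudible chirp played through the
# earpiece and picked up by the microphone.
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second at room temperature
SAMPLE_RATE = 48_000    # Hz; a typical phone audio pipeline

def estimate_distance(chirp: np.ndarray, recording: np.ndarray) -> float:
    """One-way distance (metres) to the nearest reflector, by time of flight."""
    # Cross-correlate the known chirp against the mic recording; the lag
    # of the strongest peak is the round-trip delay in samples. (A real
    # implementation would first cancel the direct speaker-to-mic path.)
    corr = np.correlate(recording, chirp, mode="valid")
    round_trip_s = np.argmax(np.abs(corr)) / SAMPLE_RATE
    return SPEED_OF_SOUND * round_trip_s / 2

def screen_should_sleep(distance_m: float, threshold_m: float = 0.05) -> bool:
    """Blank the display when something (an ear, say) is within ~5 cm."""
    return distance_m < threshold_m
```

The appeal for handset makers is exactly what the press release claims: the only hardware this needs is a speaker and a microphone the phone already has.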

iPad Pro Thoughts

Jean-Louis Gassée again hits the right note in his piece on the iPad Pro: Wrong Questions | Monday Note. Tim Cook shouldn’t go around saying it will replace the laptop. It might for him, but the laptop/PC has evolved to be used in myriad ways, not all of which are best suited to a big screen and unwieldy, optional keyboard. 

Why not say that the iPad Pro will helpfully replace a laptop for 60%, or 25% of conventional personal computer users? In keeping with Steve Jobs’ Far Better At Some Key Things formula, why not say that the iPad Pro is a great laptop replacement for graphic designers, architects, mechanical engineers, musicians, videographers…and that the audience will grow even larger as new and updated apps take advantage of the iPad Pro’s screen size, speed, and very likable Pencil.

And it’s not just that. Taking up his and others’ theme that at each stage of hardware evolution we’ve lacked the imagination to realise what these devices might best be used for, I imagine the big screen and power of the iPad Pro will yield uses that we so far have not considered. 

As with wearables, these devices are as much about creating new markets (this is something I've never been able to do before) or extending existing ones (I could do this before, but it wasn't much fun) as anything else. I'm not about to replace my laptop with an iPad Pro, but I could see a lot of things I would love to do with it: music editing, photo editing and organising, and maybe a bit of doodling. And as Horace Dediu's video The new iPad is like nothing we've ever seen before shows, there are lots of great visualization possibilities too.

Is it a work tool? Could be, for some industries. It’s not a very mobile beast. 

The question is: will developers see enough reward in supporting it with apps?

From pixels to pixies: the future of touch is sound

My piece on using sound and lasers to create 3-dimensional interfaces. It’s still some ways off, but it’s funky.


Screenshot from Ultrahaptics video demo

From pixels to pixies: the future of touch is sound | Reuters:

SINGAPORE | BY JEREMY WAGSTAFF

(The video version: The next touchscreen is sound you can feel | Reuters.com)

Ultrasound – inaudible sound waves normally associated with cancer treatments and monitoring the unborn – may change the way we interact with our mobile devices.

Couple that with a different kind of wave – light, in the form of lasers – and we’re edging towards a world of 3D, holographic displays hovering in the air that we can touch, feel and control.

UK start-up Ultrahaptics, for example, is working with premium car maker Jaguar Land Rover [TAMOJL.UL] to create invisible air-based controls that drivers can feel and tweak. Instead of fumbling for the dashboard radio volume or temperature slider, and taking your eyes off the road, ultrasound waves would form the controls around your hand.

‘You don’t have to actually make it all the way to a surface, the controls find you in the middle of the air and let you operate them,’ says Tom Carter, co-founder and chief technology officer of Ultrahaptics.

Such technologies, proponents argue, are an advance on devices we can control via gesture – like Nintendo’s Wii or Leap Motion’s sensor device that allows users to control computers with hand gestures. That’s because they mimic the tactile feel of real objects by firing pulses of inaudible sound to a spot in mid air.
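An aside for the technically curious: the arithmetic behind that mid-air pressure spot is worth a quick sketch. Each transducer in a flat array sits a slightly different distance from the target point, so each is delayed so that all the wavefronts arrive in phase and reinforce one another. The grid size, pitch and focal point below are made-up assumptions for illustration, not Ultrahaptics' specifications.

```python
# Illustrative sketch of phased-array focusing, the principle behind
# mid-air haptics: delay each transducer so every wavefront arrives at
# the focal point in step, creating a pressure spot you can feel.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
FREQUENCY = 40_000.0    # Hz; 40 kHz transducers are common in this field

def focusing_phases(elements_xy: np.ndarray, focus_xyz: np.ndarray) -> np.ndarray:
    """Phase offset (radians) for each element of a flat array in the z=0 plane."""
    elements = np.column_stack([elements_xy, np.zeros(len(elements_xy))])
    distances = np.linalg.norm(elements - focus_xyz, axis=1)  # metres
    wavelength = SPEED_OF_SOUND / FREQUENCY                   # about 8.6 mm
    # Elements nearer the focus fire later, so all arrivals coincide.
    return (2 * np.pi * (distances.max() - distances) / wavelength) % (2 * np.pi)

# Assumed geometry for illustration: a 16x16 grid at 1 cm pitch,
# focusing on a point 20 cm above the centre of the array.
grid = np.stack(np.meshgrid(np.arange(16), np.arange(16)), -1).reshape(-1, 2) * 0.01
phases = focusing_phases(grid, np.array([0.075, 0.075, 0.20]))
```

Sweep the focal point around and the "button" follows your hand; modulate the output and the spot takes on a texture you can feel.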

They also move beyond the latest generation of tactile mobile interfaces, where companies such as Apple and Huawei [HWT.UL] are building more response into the cold glass of a mobile device screen.

Ultrasound promises to move interaction from the flat and physical to the three dimensional and air-bound. And that’s just for starters.

By applying similar theories about waves to light, some companies hope to not only reproduce the feel of a mid-air interface, but to make it visible, too.

Japanese start-up Pixie Dust Technologies, for example, wants to match mid-air haptics with tiny lasers that create visible holograms of those controls. This would allow users to interact, say, with large sets of data in a 3D aerial interface.

‘It would be like the movie ‘Iron Man’,’ says Takayuki Hoshi, a co-founder, referencing a sequence in the film where the lead character played by Robert Downey Jr. projects holographic images and data in mid-air from his computer, which he is then able to manipulate by hand.

BROKEN PROMISES

Japan has long been at the forefront of this technology. Hiroyuki Shinoda, considered the father of mid-air haptics, said he first had the idea of an ultrasound tactile display in the 1990s and filed his first patent in 2001.

His team at the University of Tokyo is using ultrasound technology to allow people to remotely see, touch and interact with things or each other. For now, the distance between the two is limited by the use of mirrors, but one of its inventors, Keisuke Hasegawa, says this could eventually be converted to a signal, making it possible to interact whatever the distance.

For sure, promises of sci-fi interfaces have been broken before. And even the more modest parts of this technology are some way off. Lee Skrypchuk, Jaguar Land Rover’s Human Machine Interface Technical Specialist, said technology like Ultrahaptics’ was still 5-7 years away from being in their cars.

And Hoshi, whose Pixie Dust has made promotional videos of people touching tiny mid-air sylphs, says the cost of components needs to fall further to make this technology commercially viable. ‘Our task for now is to tell the world about this technology,’ he says.

Pixie Dust is in the meantime also using ultrasound to form particles into mid-air shapes, so-called acoustic levitation, and speakers that direct sound to some people in a space and not others – useful in museums or at road crossings, says Hoshi.

FROM KITCHEN TO CAR

But the holy grail remains a mid-air interface that combines touch and visuals.

Hoshi says touching his laser plasma sylphs feels like a tiny explosion on the fingertips, and would best be replaced by a more natural ultrasound technology.

And even laser technology itself is a work in progress.

Another Japanese company, Burton Inc, offers live outdoor demonstrations of mid-air laser displays fluttering like fireflies. But founder Hidei Kimura says he’s still trying to interest local governments in using it to project signs that float in the sky alongside the country’s usual loudspeaker alerts during a natural disaster.

Perhaps the biggest obstacle to commercializing mid-air interfaces is making a pitch that appeals not just to consumers’ fantasies but to the customer’s bottom line.

Norwegian start-up Elliptic Labs, for example, says the world’s biggest smartphone and appliance manufacturers are interested in its mid-air gesture interface because it requires no special chip and removes the need for a phone’s optical sensor.

Elliptic CEO Laila Danielsen says her ultrasound technology uses existing microphones and speakers, allowing users to take a selfie, say, by waving at the screen.

Gesture interfaces, she concedes, are nothing new. Samsung Electronics had infra-red gesture sensors in its phones, but says ‘people didn’t use it’.

Danielsen says her technology is better because it’s cheaper and broadens the field in which users can control their devices. Next stop, she says, is including touchless gestures into the kitchen, or cars.

(Reporting by Jeremy Wagstaff; Editing by Ian Geoghegan)
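A footnote on how that selfie-by-waving trick can work with nothing but the hardware already in a phone: the Doppler approach that systems like Elliptic Labs' are generally understood to build on. The details of their method are theirs; everything below (tone frequency, band width, thresholds) is an illustrative assumption. The phone plays a steady near-ultrasonic tone; a hand moving toward the phone compresses the reflected wave, shifting it up in frequency, and a receding hand shifts it down.

```python
# Illustrative sketch of Doppler gesture sensing with a phone's own
# speaker and microphone: play a steady inaudible tone and watch for
# frequency shifts in the reflection as a hand moves.
import numpy as np

SAMPLE_RATE = 48_000  # Hz
TONE_HZ = 20_000.0    # near-ultrasonic pilot tone, inaudible to most adults

def dominant_shift(frame: np.ndarray) -> float:
    """Frequency offset (Hz) of the strongest echo relative to the pilot tone."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    # Only inspect a narrow band around the tone; the rest is background.
    band = (freqs > TONE_HZ - 500) & (freqs < TONE_HZ + 500)
    return freqs[band][np.argmax(spectrum[band])] - TONE_HZ

def classify(shift_hz: float) -> str:
    """A hand moving toward the mic shifts the echo up; away shifts it down."""
    if shift_hz > 20:
        return "approach"  # e.g. the wave that triggers the selfie
    if shift_hz < -20:
        return "retreat"
    return "idle"
```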

Force field: Apple’s pressure-based screens promise a world beyond cold glass

A piece looking at the technology behind the pressure sensing. My prediction: once people play with it they'll find it hard to go back to the old way of doing things. Typing on a touchscreen may one day feel natural, maybe even enjoyable.

Force field: Apple’s pressure-based screens promise a world beyond cold glass | Reuters:

SINGAPORE/TAIPEI | BY JEREMY WAGSTAFF AND MICHAEL GOLD

By adding a more realistic sense of touch to its iPhone, Apple Inc may have conquered a technology that has long promised to take us beyond merely feeling the cold glass of our mobile device screens.

In its latest iPhones, Apple included what it calls 3D Touch, allowing users to interact more intuitively with their devices via a pressure-sensitive screen which mimics the feel and response of real buttons.

In the long run, the force-sensitive technology also promises new or better applications, from more lifelike games and virtual reality to adding temperature, texture and sound to our screens.

‘Force Touch is going to push the envelope of how we interact with our screens,’ says Joel Evans, vice president of mobile enablement at Mobiquity, a mobile consultancy.

The fresh iPhones, unveiled on Wednesday, incorporate a version of the Force Touch technology already in some Apple laptop touchpads and its watches. Apple also announced a stylus that includes pressure sensing technology.

As with previous forays, from touch screens to fingerprint sensors, Apple isn’t the first with this technology, but by combining some existing innovations with its own, it could leverage its advantage of control over hardware, interface and the developers who could wrap Force Touch into its apps.

‘Here we go again. Apple’s done it with gyroscopes, accelerometers, they did it with pressure sensors, they’ve done it with compass, they’ve been great at expediting the adoption of these sensors,’ said Ali Foughi, CEO of US-based NextInput, which has its own technology, trademarked ForceTouch. ‘Apple is at the forefront.’

TOUCHY FEELY

Haptic technology – a tactile response to touching an interface – isn’t new, even in mobile devices. Phones have long vibrated to alert users of incoming calls in silent mode, or when they touch an onscreen button.

But efforts to go beyond that have been limited.

BlackBerry incorporated pressure sensing into its Storm phone in 2008. And Rob Lacroix, vice president of engineering at Immersion Corp, said his company worked in 2012 with Fujitsu on the Raku-Raku Smartphone, an Android phone that could distinguish between a soft and firm touch to help users unfamiliar with handheld devices.

But most efforts have been hamstrung by either a poor understanding of the user’s needs, or technical limitations. A vibrating buzz, for instance, has negative connotations, causing most people to turn off any vibration feature, says James Lewis, CEO of UK-based Redux, which has been working on similar touch technology for several years.

The technology powering vibrations is also primitive, he said, meaning there’s a slight delay and a drain on the battery. Early versions of pressure-sensing technology also required a slight gap between screen and enclosure, leaving it vulnerable to the elements.

Apple seems to have solved such problems, experts said, judging from their trackpads and the Apple Watch. Indeed, the trackpad carries the same sensation of a physical click as its predecessors, but without the actual pad moving at all.

The result: In the short term, Force Touch may simply make interacting with a screen more like something we’d touch in real life – a light switch, say, or a physical keyboard. With Force Touch, the device should be able to tell not only whether we are pressing the screen, but how firmly. It should in turn respond with a sensation – not just a vibration, but with a click – even if that click is itself a trick of technology.

‘What we’re going to see initially is putting life back into dead display,’ said Redux’s Lewis. ‘We just got used to the cold feel of glass.’
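One detail buried in "how firmly": the software has to turn a continuous, noisy force reading into discrete press levels without the response fluttering when a finger hovers near a threshold. Here is a minimal sketch of that debouncing logic, with hypothetical thresholds; Apple's actual force units and cut-offs are not public.

```python
# Illustrative debouncing sketch with hypothetical thresholds; Apple's
# actual force units and cut-offs are not public.
LIGHT, FIRM = 0.3, 0.7  # press-down thresholds, arbitrary sensor units
HYSTERESIS = 0.05       # force must fall this far below a threshold to release

def update_level(force: float, level: int) -> int:
    """Press level: 0 = none, 1 = light press, 2 = firm press."""
    # Rising edges: promote as soon as a threshold is crossed.
    if force >= FIRM:
        return 2
    if force >= LIGHT and level < 2:
        return 1
    # Falling edges: demote only after clearing the hysteresis band,
    # so a wavering finger doesn't make the haptic click flutter.
    if level == 2 and force >= FIRM - HYSTERESIS:
        return 2
    if level >= 1 and force >= LIGHT - HYSTERESIS:
        return 1
    return 0
```

Pair each level change with a haptic pulse and you get the illusion of a physical click at two depths, even though nothing on the device actually moves.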

HARD PRESSED

To be sure, mobile is not the first industry to flirt with haptics.

For example, for car drivers, Redux demonstrates a tablet-like display which creates the illusions of bumps and friction when you run your fingers over the glass, mimicking physical buttons and sliders so your eyes don’t need to leave the road.

Mobiquity’s technical adviser Robert McCarthy points to several potential uses of Apple’s technology – measuring the force of touch when entering a password, say, to indicate how confident the user is of their selection, or keying in a numeric passcode using different pressure levels as an extra layer of security.

While Apple’s adoption of the technology has awoken the mobile industry to its possibilities, it was pipped to the post by Chinese handset maker Huawei, which this month unveiled one model with what it also tagged Force Touch technology. Pressing harder in a photo app, for example, allows you to zoom in on a picture without the usual two-finger spread.

Other manufacturers are exploring how to make touching a device more friendly, and more advanced, says Freddie Liu, CFO of Taiwan-based TPK Holding Co Ltd, an Apple supplier.

‘This is just the beginning for Force Touch,’ he said.

(Reporting by Jeremy Wagstaff and Michael Gold, with additional reporting by Reiji Murai in TOKYO; Editing by Ian Geoghegan and Raju Gopalakrishnan)

Factbox: iPhone 3D Touch suppliers and haptics companies | Reuters

The path to a wearable future lies in academia

The path to a wearable future lies in academia | Reuters:

My oblique take on wearables.


For a glimpse of what is, what might have been and what may lie ahead in wearable devices, look beyond branded tech and Silicon Valley start-ups to the messy labs, dry papers and solemn conferences of academia.

There you’d find that you might control your smartphone with your tongue, skin or brain; you won’t just ‘touch’ others through a smartwatch but through the air; and you’ll change how food tastes by tinkering with sound, weight and color.

Much of today’s wearable technology has its roots in these academic papers, labs and clunky prototypes, and the boffins responsible rarely get the credit some feel they deserve.

‘Any academic interested in wearable technology would look at today’s commercial products and say “we did that 20 years ago”,’ said Aaron Quigley, Chair of Human Interaction at the University of St. Andrews in Scotland.

Take multi-touch – where you use more than one finger to interact with a screen: Apple (AAPL.O) popularized it with the iPhone in 2007, but Japanese academic Jun Rekimoto used something similar years before.

And the Apple Watch? Its Digital Touch feature allows you to send doodles, ‘touches’ or your heartbeat to other users. Over a decade ago, researcher Eric Paulos developed something very similar, called Connexus, that allowed users to send messages via a wrist device using strokes, taps and touch.

‘I guess when we say none of this is new, it’s not so much trashing the product,’ says Paul Strohmeier, a researcher at Ontario’s Human Media Lab, ‘but more pointing out that this product has its origins in the research of scientists who most people will never hear of, and it’s a way of acknowledging their contributions.’

VAMBRACES, KIDS’ PYJAMAS

Those contributions aren’t all pie-in-the-sky.

Strohmeier and others are toying with how to make devices easier to interact with. His solution: DisplaySkin, a screen that wraps around the wrist like a vambrace, or armguard, adapting its display relative to the user’s eyeballs.

Other academics are more radical: finger gestures in the air, for example, or a ring that knows which device you’ve picked up and automatically activates it. Others use the surrounding skin – projecting buttons onto it or pinching and squeezing it. Another glues a tiny touchpad to a fingernail so you can scroll by running one finger over another.

Then there’s connecting to people, rather than devices.

Mutual understanding might grow, researchers believe, by conveying otherwise hidden information: a collar that glows if the wearer has, say, motion sickness, or a two-person seat that lights up when one occupant has warm feelings for the other.

And if you could convey non-verbal signals, why not transmit them over the ‘multi-sensory Internet’? Away on business? Send a remote hug to your child’s pyjamas; or deliver an aroma from one phone to another via a small attachment; or even, according to researchers from Britain at a conference in South Korea last month, transmit tactile sensations to another person through the air.

And if you can transmit senses, why not alter them?

Academics at a recent Singapore conference focused on altering the flavor of food. Taste, it seems, is not just a matter of the tongue, it’s also influenced by auditory, visual and tactile cues. A Japanese team made food seem heavier, and its flavor change, by secretly adding weights to a fork, while a pair of British academics used music, a virtual reality headset and color to make similar food seem sourer or sweeter to the eater.

MAKING THE GRADE

It’s hard to know just which of these research projects might one day appear in your smartphone, wearable, spoon or item of clothing. Or whether any of them will.

‘I don’t think I’m exaggerating when I say that 99 percent of research work does not end up as ‘product’,’ says Titus Tang, who recently completed a PhD at Australia’s Monash University, and is now commercializing his research in ubiquitous sensing for creating 3D advertising displays. ‘It’s very hard to predict what would turn out, otherwise it wouldn’t be called research.’

But the gap is narrowing between the academic and the commercial.

Academics at the South Korean conference noted that with tech companies innovating more rapidly, ‘while some (academic) innovations may truly be decades ahead of their time, many (conference) contributions have a much shorter lifespan.’

‘Most ‘breakthroughs’ today are merely implementations of ideas that were unimplementable in that particular time. It took a while for industry to catch up, but now they are almost on par with academic research,’ says Ashwin Ashok of Carnegie Mellon.

Pranav Mistry, 33, has risen from a small town in India’s Gujarat state to be director of research at Samsung America (005930.KS). His Singapore conference keynote highlighted a Samsung project where a camera ‘teleports’ viewers to an event or place, offering a real-time, 3D view.

But despite a glitzy video, Samsung logo and sleek black finish, Mistry stressed it wasn’t the finished product.

He was at the conference, he told Reuters, to seek feedback and ‘work with people to make it better.’

(Editing by Ian Geoghegan)

Smartwatches: Coming Soon to a Cosmos Near You

This is a column I did for the BBC World Service, broadcast this week. 

There’s been a lot of talk that the big boys, by which I mean Apple and Samsung, are about to launch so-called smart watches. But how smart does a watch have to be before we start strapping them to our wrists in numbers big enough to make a difference?

First off, a confession. I’ve strapped a few things to my wrist in my time. Back in the 80s and 90s I used to love the Casio calculator watch called the Databank, though I can’t actually recall ever doing a calculation on it or putting more than a few phone numbers in there. About a decade ago I reviewed something called the Fossil Wrist PDA, a wrist-bound personal digital assistant. It didn’t take off. In fact, no smart watch has taken off.

So if the smartwatch isn’t new, maybe the world around it is? We’ve moved a long way in the past couple of years, to the point where every device we have occupies a slightly different spot to the one it was intended for. Our phones, for example, are not phones anymore but data devices. And even that has evolved: the devices have changed direction in size, from shrinking to growing larger, as we realise we want to do more on them.

That in turn has made tablets shrink. When Apple introduced the iPad, Steve Jobs famously said that was as small as a tablet could reasonably go, but Samsung proved him wrong with the phablet, and now we have an iPad Mini. All this has raised serious questions about the future of the laptop computer and the desktop PC.

But it shouldn’t. For a long time we thought that the perfect device would be something that does everything, but the drive to miniaturise components has actually had the opposite effect: we seem to be quite comfortable moving between devices and carrying a bunch of them around with us.

This all makes sense, given that our data is all stored in the cloud, and every device is connected to it either through WiFi, a phone connection or Bluetooth. We often don’t even know how our device is connecting — we just know it is.

So, the smartwatch optimists say, the time is ripe for a smartwatch. Firstly, we’ve demonstrated that we are able to throw out tired conventions about what a device should do. If our phone isn’t really our phone anymore then why not put our phone on our wrist? Secondly, the cloud solves the annoying problem of getting data in and out of the device.

Then there’s the issue of how we interact with it. It’s clear from the chequered history of the smartwatch that using our digits is not really going to work. We might be able to swipe or touch to silence an alarm or take a call, but we’re not going to be tapping out messages on a screen that size.

So it’s going to have to be voice. GeneratorResearch, a research company, reckons this would involve a small earpiece and decent voice-command software like Apple’s Siri. I’m not convinced we’re quite there yet, but I agree with them that it’s going to take someone of Apple’s heft to make it happen and seed the market.

In short, the smart watch might take off if it fits neatly and imaginatively into a sort of cosmos of devices we’re building around ourselves, where each one performs a few specific functions and overlaps with others on some. If it works out, the watch could act as a sort of central repository of all the things we need to know about — incoming messages, appointments, as well as things the cloud thinks we should know about, based on where we are: rain, traffic jams, delayed flights.

But more crucially it could become something that really exploits the frustratingly unrealised potential of voice: where we could more easily, and less self-consciously, talk to our devices and others without having to hold things to our ear, or be misunderstood.

In time, the smartwatch may replace the smartphone entirely.

I’m not completely convinced we’re as close as some think we are, but I’ve said that before and been proved wrong, so who knows?

Smarter smartphones for smarter people

This is a piece I wrote for the BBC World Service.

So, the iPhone 5 is here, and while it will sell well, probably better than any phone before it, there’s a sense of anticlimax: this, we are told, is evolution, not revolution. None of the mind-bending sense of newness and change that the iPhone and iPad used to engender. This is a sign, we’re told, that the market is mature, that there’s not much more that can be done.

I’d like to suggest another way of looking at this. For sure, not every new product that comes out of Apple HQ can blow our minds. But that doesn’t mean the mobile device is now doomed to a stodgy and reliable plateau of incremental improvements, like cars, washing machines or TVs.

In fact, quite the opposite. The mobile device has already changed our world in extraordinary ways, and we’re only at the start of a much larger set of changes. Our problem is that we’re just not very good at judging where we sit amidst all this upheaval.

Consider these little factlets from a survey conducted last year by Oracle. At first glance they seem contradictory, but I’ll explain why they’re not.

More than half of those surveyed thought their mobile phone would replace their iPod/MP3 player by 2015. A year later, when Oracle asked them again, a third said it already had. Oracle found more or less the same was true of people’s global positioning system, or GPS, devices.

Then there’s this. More than two thirds of the people surveyed said they use a smartphone, and of those people, 43% have more than one.

In other words, more and more functions that used to be a separate device are now part of our mobile phone. And yet at the same time a significant chunk of users have more than one mobile phone.

What this means, I think, is that we are integrating mobile phones into our lives in a way that even those who spend time researching this kind of thing don’t really get. In fact we’ve integrated them so much we need two.

That’s because, of course, they’re not really phones: they’re devices that connect us to all sorts of things that we hold dear, whether it’s social, work or personal.

But there’s still a long way to go. The device of the future will make everything more seamless. A company in Thailand, for example, allows you to use your smartphone to open your hotel door, tweak the room lights and air con, order food and switch TV channels.

In other words, interact with your surroundings. Some of this will happen via connected devices, from air conditioning units to washing machines, from street signs to earthquake sensors. Other services will sift piles and piles of big data in the cloud, and push important information to us when we need it. Google already has something called Google Now which tries to anticipate your problems and needs before you do: a traffic jam up ahead, a sudden turn in the weather, a delayed flight.

Devices will also interact with the disconnected world, measuring it for us, whether it’s our blood sugar levels or the air quality. They’ll sense movement, odors, colors, frequencies, speed. They may even, one day, see through walls for us.

So our smart phones are just starting to get smart. We’re already smart enough to see how useful they can be. The bits that are missing are the technologies that blend this all together. This could still take some time, but don’t for a moment think the mobile world is about to get boring.

The Tablet is the Computer

One thing discussed often and at great length in nerdy circles these days is this: Is the tablet—by which we really mean the Apple iPad, because it created the market, and presently accounts for nearly two thirds of it—a computer? A PC, if you will?

Some say that the iPad is not really a computer. It has no keyboard. People don’t sit at desks to use it. It lacks the horsepower of most of today’s computers. So they think it’s a big smartphone. I think they are wrong. They misunderstand what is happening.

This is not hard to see in action. Wandering around an airport cafe the other day, I noticed everyone had at least one device. But those with an iPad were by far the most comfortable, whether curled up in an armchair or sitting at a table. And they were doing everything: I saw one guy watching a movie, another writing a letter, another CEO-type playing Angry Birds. I was thrown out of the cafe before I was able to finish my research.

At the hairdressers no fashion magazines were being read: Everyone was cradling an iPad, oblivious to the time and their hair being teased into odd shapes.

So let’s look at the data.

Surveys by comScore, a metrics company, point to what is really happening. In studies of the U.S. last October and of Europe released this week [Registration required], they noticed that during the week tablet usage spikes at night—as computer usage drops off. So while during the work day folk are using their PCs, come evening they switch to tablets. (Mobile usage, however, remains flat from about 6 pm.) The drop in PC usage is even more pronounced in the U.S., while tablet usage in the evening continues to rise until about 11 pm.


In other words, people are using their tablets as computers. Not as mobile devices. Not as replacements for their phone. They’re using them, in the words of a friend of mine, as a replacement for that ancient computer sitting in the corner gathering dust, the one that gets booted up once in a while to write an email or a letter to Granny.

Now not everyone is using tablets like this. The first surveys of tablet usage indicated people were using them as ‘TV buddies’—things to play with while watching TV. But this still doesn’t quite capture what is happening.

One study by Nielsen found last May that 3 out of 10 Americans were using their computer less frequently after buying a tablet. What’s surprising about this figure is that it’s higher than for all other devices—including gaming consoles, Internet-connected TVs and portable media players. Given the plethora of games and entertainment you can get for a tablet, you’d expect people to report using those devices less, now they have a tablet, rather than their netbook, laptop or desktop.

That survey was done when less than 5% of U.S. consumers owned one. A year on, that figure is much higher. Pew’s Internet and American Life Project reported on Jan 23 that the number of American adults who owned a tablet grew from 10% to 19% over the holiday period; although their data may not be directly comparable with Nielsen’s it sounds about right. And it represents an unprecedented adoption of a new device, or computing platform, or whatever you want to call it.

(Pew also surveys ebook readers and finds the same results. But I think we’ll see a serious divergence between these two types of device. Yes, some tablets are good for reading and some ereaders, like the Kindle Fire, look a lot like a tablet. But they’re different, and used in different ways. I think that while the markets will overlap even more, they’ll be more like the laptop and netbook markets, or the ultrabook and the PC market: they may do similar things but the way people use them, and the reason people buy them, will differ.)

This is rapidly altering the demographics of the average tablet user. Back in 2010, a few months after the first iPad was launched, 18-34 year olds accounted for nearly half the market, according to another Nielsen report. A year on, that figure was down to a little over a third, as older folk jumped aboard. Indeed the number of 55+ iPad users doubled in that period, overtaking the number of 25-34 year old users.

(Pew’s figures suggest that while older folk have been slower to adopt, the rate of growth is picking up. Around a quarter of adults up to the age of 49 now have a tablet in the U.S. (a shocking enough figure in itself). Above 50 the number comes down. But the telling thing to me is that the rate of growth is more or less the same: about a fourfold increase between November 2010 and January 2012. While a lot of these may have been gifts over the holidays, it also suggests that the potential is there.)

So it’s pretty simple. The tablet, OK, the iPad came along and reinvented something that we thought no one wanted—a tablet device with no keyboard. But Apple’s design and marketing savvy, and the ecosystem of apps and peripherals, have made the tablet sexy again. Indeed, it has helped revive several industries that looked dead: the wireless keyboard, for example. ThinkOutside was a company in the early 2000s that made wonderful foldable keyboards for the Palm, but couldn’t make it profitable (and is now part of an apparently moribund company called iGo).

Now look: the website of Logitech, a major peripherals company, has the external keyboard and stand for the iPad as more or less its top product. Logitech reckon a quarter of tablet users want an external keyboard, and three quarters of them want their tablet “to be as productive as their laptop.” Most peripheral companies offer a kind of wireless keyboard, and there are more on the horizon.

And as BusinessWeek reported, the highest grossing app on the iPad appstore this Christmas wasn’t Angry Birds; it was a program for viewing and editing Microsoft Office documents, called QuickOffice. The app itself is not new: it’s been around since 2002, and a pared-down version came preinstalled on dozens of devices. But people wouldn’t shell out the extra $10 for the full version—until the iPad came along. Now they happily pay $20 and the company sold $30 million’s worth in 2011. (BusinessWeek links this to growing corporate interest in the iPad but you can see from comScore’s data that this is not necessarily correct. The tablet is a personal device that is mostly used outside the office.)

So. There’s a new industry out there, and it’s for a device that’s not a phone, though it has the same degree of connectivity; it’s not a desktop, though it should be able to do all the things a desktop can do; it’s not a laptop, though it should make the user as productive as a laptop can. And it’s many more things besides: a TV buddy, a sort of device to accompany your downtime in cafes, salons or on the couch.

Gartner, a research company, reckon that from about 17.5 million devices sold in 2010 there will be 325 million sold in 2015. An 18-fold increase. In the same period the annual sales of notebooks will only have doubled, and desktops will have grown by, er, 5%. Hard not to conclude from that that the tablet, OK, the iPad, is going to be everyone’s favorite computer—replacing the desktop, the laptop and whatever ultrabooks, netbooks or thinkbooks are the big thing in 2015.

(Update: This was written before Apple’s results. Tim Cook has confirmed the PC is their main competitor.) 

Quaintness in Salt Lake

(This is the script for a piece I did for the BBC World Service. Posted here by request. Podcast here.)

Something rather quaint is going on in a Salt Lake City courtroom. A company called Novell, which you’d be forgiven for not having heard of, is suing Microsoft over a product called WordPerfect, which you also may not have heard of; Novell says Microsoft hobbled WordPerfect’s ability to run on something called Windows 95 in order to protect its own product, Microsoft Word.

To be honest, you don’t need to know the ins and outs of this Microsoft lawsuit; nor do you really need to know much about Novell—once a giant in word processing software, and now a subsidiary of a company called The Attachmate Group, which I had never even heard of. Or, for that matter, Windows 95—except that once upon a time people used to stay up all night to buy copies. Sound familiar, iPad and iPhone lovers?

It’s weird this case is going on, and I won’t bore you with why. But it’s a useful starting point to look at how the landscape has changed in some ways, and in others not at all. Microsoft is still big, of course, but no-one queues up for their offerings anymore: Indeed nobody even bought Vista, as far as I can work out. But back then, nearly every computer you would ever use ran Windows and you would use Microsoft Office to do your stuff. You couldn’t leave because you probably didn’t have a modem and the Internet was a place where weird hackers lived.

Now, consider this landscape: Apple make most of their money from phones and tablets. Google, which wasn’t around when Windows 95 was, now dominate search, but also own a phone manufacturer and have built an operating system. Amazon, which back then was starting out as a bookseller, is now selling tablets at cost as a kind of access terminal to books, movies, magazines and other things digital. Facebook, which wasn’t even a glint in Mark Zuckerberg’s 11-year-old eye at the time, is now the world’s biggest social network, but is really a vast walled garden where everything you do—from what you read, what you listen to, as well as how well you slept and who you had dinner with—is measured and sold to advertisers.

All these companies kind of look different, but they’re actually the same. Back in 1995 the PC was everything, and so therefore was the operating system and the software that ran on it. The web was barely out of its infancy. Phones were big and clunky. So Microsoft used its dominance to sell us what made the most money: software.

Now, 15 or 16 years on, look how different it all is. Who cares about the operating system? Or the word processor? Or the PC? Everything is now mobile, hand-held, connected, shared, and what was expensive is now free, more or less. Instead, most of these companies now make their money through eyeballs, and gathering data about our habits, along with micropayments from data plans and apps, online games and magazines.

And to do this they all have to play the same game Microsoft played so well: dominate the chain, capturing everything we do inside a Hotel California-like walled garden we won’t ever leave. So my predictions for next year, most of which have been proved true in recent days: a Facebook phone which does nothing except through Facebook; an Amazon phone which brings everything from Amazon to your eyes and ears, but nothing else; an Apple-controlled telco that drops calls unless they’re on Apple devices. Google will push all its users into a social network, probably called Google+, and will punish those who don’t want to join by giving them misleading search results. Oh, and Microsoft. I’m not sure about them. Maybe we’ll find out in Salt Lake City.