BBC: The Rise of Disappearables

The transcript of my BBC World Service piece on wearables. Reuters original story here

Forget ‘wearables’, and even ‘hearables’, if you’ve ever heard of them. The next big thing in mobile devices: ‘disappearables’.

Unless it really messes up, Apple is going to do for wearables with the Watch what it has done for music players with the iPod, for phones with the iPhone, and for tablets with the iPad. But even as Apple piques consumer interest in wrist-worn devices, the pace of innovation and the tumbling cost, and size, of components will make wearables smaller and smaller. So small, some in the industry say, that no one will see them. In five years, wearables like the Watch could be overtaken by hearables – devices with tiny chips and sensors that can fit inside your ear. They, in turn, could be superseded by disappearables – technology tucked inside your clothing, or even inside your body.

This all may sound rather unlikely, until you consider that the iPhone is only 8 years old, and see what has happened to the phone since then. Not only is the smartphone a status symbol in the salons of New York; it’s also something billions of people can afford. So it seems highly plausible that the watch as a gizmo will seem quaint in 10 years — as quaint as our feature phone, netbook or MP3 player is now.


So how is this all going to play out? Well this year you’ll be able to buy a little earpiece which contains a music player, 4 gigabytes of storage, a microphone to take phone calls – just nod your head to accept – and sensors that monitor your position, heart rate and body temperature.

Soon after that you’ll be able to buy contact lenses that can measure things like glucose levels in tears. Or swallow a chip the size of a grain of sand, powered by stomach juices and transmitting data about your insides via Bluetooth. For now everyone is focused on medical purposes, but there’s no reason that contact lens couldn’t also be beaming stuff back to you in real time — handy if you’re a politician, letting you gauge the response to your speech and tweak it as you go.

Or you’re on a date and need feedback on your posture, your gait, the quality of your jokes.

In short, hearables and wearables will become seeables and disappearables. We won’t see these things because they’ll be buried in fabric, on the skin, under the skin and inside the body. We won’t attack someone for wearing Google Glass because we won’t know they’re wearing it.

Usual caveats apply. This isn’t as easy as it looks, and there’ll be lots of slips on the way. But the underlying technologies are there: components are getting smaller and cheaper, so why not throw a few extra sensors into a device, even if you haven’t activated them and aren’t quite sure what they could be used for?

Then there’s the ethical stuff. As you know, I’m big on this, and we probably haven’t thought it all through. Who owns all this data? Is it being crunched properly by people who know what they’re doing? What are bad guys and governments doing in all this, as they’re bound to be doing something? And how can we stop people collecting data on us if we don’t want them to?

All good questions. But all questions we should be asking now, of the technologies already deployed in our street, in our office, in the shops we frequent, in the apps we use and the websites we visit. It’s not the technology that’s moving too fast; it’s us moving too slow.

Once the technology is too small to see it may be too late to have that conversation.  


The path to a wearable future lies in academia | Reuters

My oblique take on wearables

For a glimpse of what is, what might have been and what may lie ahead in wearable devices, look beyond branded tech and Silicon Valley start-ups to the messy labs, dry papers and solemn conferences of academia.

There you’d find that you might control your smartphone with your tongue, skin or brain; that you won’t just ‘touch’ others through a smartwatch but through the air; and that you’ll change how food tastes by tinkering with sound, weight and color.

Much of today’s wearable technology has its roots in these academic papers, labs and clunky prototypes, and the boffins responsible rarely get the credit some feel they deserve.

‘Any academic interested in wearable technology would look at today’s commercial products and say “we did that 20 years ago”,’ said Aaron Quigley, Chair of Human Computer Interaction at the University of St Andrews in Scotland.

Take multi-touch – where you use more than one finger to interact with a screen: Apple (AAPL.O) popularized it with the iPhone in 2007, but Japanese academic Jun Rekimoto used something similar years before.

And the Apple Watch? Its Digital Touch feature allows you to send doodles, ‘touches’ or your heartbeat to other users. Over a decade ago, researcher Eric Paulos developed something very similar, called Connexus, that allowed users to send messages via a wrist device using strokes, taps and touch.

‘I guess when we say none of this is new, it’s not so much trashing the product,’ says Paul Strohmeier, a researcher at Ontario’s Human Media Lab, ‘but more pointing out that this product has its origins in the research of scientists who most people will never hear of, and it’s a way of acknowledging their contributions.’

VAMBRACES, KIDS’ PYJAMAS

Those contributions aren’t all pie-in-the-sky.

Strohmeier and others are toying with how to make devices easier to interact with. His solution: DisplaySkin, a screen that wraps around the wrist like a vambrace, or armguard, adapting its display relative to the user’s eyeballs.

Other academics are more radical: finger gestures in the air, for example, or a ring that knows which device you’ve picked up and automatically activates it. Others use the surrounding skin – projecting buttons onto it or pinching and squeezing it. Another glues a tiny touchpad to a fingernail so you can scroll by running one finger over another.

Then there’s connecting to people, rather than devices.

Mutual understanding might grow, researchers believe, by conveying otherwise hidden information: a collar that glows if the wearer has, say, motion sickness, or a two-person seat that lights up when one occupant has warm feelings for the other.

And if you could convey non-verbal signals, why not transmit them over the ‘multi-sensory Internet’? Away on business? Send a remote hug to your child’s pyjamas; or deliver an aroma from one phone to another via a small attachment; or even, according to researchers from Britain at a conference in South Korea last month, transmit tactile sensations to another person through the air.

And if you can transmit senses, why not alter them?

Academics at a recent Singapore conference focused on altering the flavor of food. Taste, it seems, is not just a matter of the tongue, it’s also influenced by auditory, visual and tactile cues. A Japanese team made food seem heavier, and its flavor change, by secretly adding weights to a fork, while a pair of British academics used music, a virtual reality headset and color to make similar food seem sourer or sweeter to the eater.

MAKING THE GRADE

It’s hard to know just which of these research projects might one day appear in your smartphone, wearable, spoon or item of clothing. Or whether any of them will.

‘I don’t think I’m exaggerating when I say that 99 percent of research work does not end up as ‘product’,’ says Titus Tang, who recently completed a PhD at Australia’s Monash University, and is now commercializing his research in ubiquitous sensing for creating 3D advertising displays. ‘It’s very hard to predict what would turn out, otherwise it wouldn’t be called research.’

But the gap is narrowing between the academic and the commercial.

Academics at the South Korean conference noted that with tech companies innovating more rapidly, ‘while some (academic) innovations may truly be decades ahead of their time, many (conference) contributions have a much shorter lifespan.’

‘Most ‘breakthroughs’ today are merely implementations of ideas that were unimplementable in that particular time. It took a while for industry to catch up, but now they are almost on par with academic research,’ says Ashwin Ashok of Carnegie Mellon.

Pranav Mistry, 33, has risen from a small town in India’s Gujarat state to be director of research at Samsung America (005930.KS). His Singapore conference keynote highlighted a Samsung project where a camera ‘teleports’ viewers to an event or place, offering a real-time, 3D view.

But despite a glitzy video, Samsung logo and sleek black finish, Mistry stressed it wasn’t the finished product.

He was at the conference, he told Reuters, to seek feedback and ‘work with people to make it better.’

(Editing by Ian Geoghegan)

Smartwatches: Coming Soon to a Cosmos Near You

This is a column I did for the BBC World Service, broadcast this week. 

There’s been a lot of talk that the big boys — by which I mean Apple and Samsung — are about to launch so-called smart watches. But how smart does a watch have to be before we start strapping one to our wrists in numbers big enough to make a difference?

First off, a confession. I’ve strapped a few things to my wrist in my time. Back in the 80s and 90s I used to love the Casio calculator watch called the Databank, though I can’t actually recall ever doing a calculation on it or putting more than a few phone numbers in there. About a decade ago I reviewed something called the Fossil Wrist PDA, a wrist-bound personal digital assistant. It didn’t take off. In fact, no smart watch has taken off.

So if the smartwatch isn’t new, maybe the world around it is? We’ve moved a long way in the past couple of years, to the point where every device we have occupies a slightly different spot to the one it was intended for. Our phones, for example, are not phones anymore but data devices. And even that has evolved: the devices have reversed direction in size, from shrinking to growing, as we realise we want to do more on them.

That in turn has made tablets shrink. When Apple introduced the iPad, Steve Jobs famously said it was as small as a tablet could reasonably go, but Samsung proved him wrong with the phablet, and now we have an iPad Mini. All this has raised serious questions about the future of the laptop computer and the desktop PC.

But it shouldn’t. For a long time we thought that the perfect device would be something that does everything, but the drive to miniaturise components has actually had the opposite effect: we seem to be quite comfortable moving between devices and carrying a bunch of them around with us.

This all makes sense, given that our data is all stored in the cloud, and every device is connected to it either through WiFi, a phone connection or Bluetooth. We often don’t even know how our device is connecting — we just know it is.

So, the smartwatch optimists say, the time is ripe for a smartwatch. Firstly, we’ve demonstrated that we are able to throw out tired conventions about what a device should do. If our phone isn’t really our phone anymore then why not put our phone on our wrist? Secondly, the cloud solves the annoying problem of getting data in and out of the device.

Then there’s the issue of how we interact with it. It’s clear from the chequered history of the smartwatch that using our digits is not really going to work. We might be able to swipe or touch to silence an alarm or take a call, but we’re not going to be tapping out messages on a screen that size.

So it’s going to have to be voice. Generator Research, a research company, reckons this would involve a small earpiece and decent voice-command software like Apple’s Siri. I’m not convinced we’re quite there yet, but I agree with them that it’s going to take someone of Apple’s heft to make it happen and seed the market.

In short, the smart watch might take off if it fits neatly and imaginatively into a sort of cosmos of devices we’re building around ourselves, where each one performs a few specific functions and overlaps with others on some. If it works out, the watch could act as a sort of central repository of all the things we need to know about — incoming messages, appointments, as well as things the cloud thinks we should know about, based on where we are: rain, traffic jams, delayed flights.

But more crucially it could become something that really exploits the frustratingly unrealised potential of voice: where we could more easily, and less self-consciously, talk to our devices and others without having to hold things to our ear, or be misunderstood.

In time, the smartwatch may replace the smartphone entirely.

I’m not completely convinced we’re as close as some think we are, but I’ve said that before and been proved wrong, so who knows?

Smarter smartphones for smarter people

This is a piece I wrote for the BBC World Service..

So, the iPhone 5 is here, and while it will sell well, probably better than any phone before it, there’s a sense of anticlimax: this, we are told, is evolution, not revolution. None of the mind-bending sense of newness and change that the iPhone and iPad used to engender. This is a sign, we’re told, that the market is mature, that there’s not much more that can be done.

I’d like to suggest another way of looking at this. For sure, not every new product that comes out of Apple HQ can blow our minds. But that doesn’t mean the mobile device is now doomed to a stodgy and reliable plateau of incremental improvements, like cars, washing machines or TVs.

In fact, quite the opposite. The world of the mobile device has already made extraordinary changes to our world, and we’re only at the start of a much larger set of changes. Our problem is that we’re just not very good at judging where we sit amidst all this upheaval.

Consider these little factlets from a survey conducted last year by Oracle. At first glance they seem contradictory, but I’ll explain why they’re not.

More than half of those surveyed thought their mobile phone would replace their iPod/MP3 player by 2015. A year later, when Oracle asked again, a third said it already had. Oracle found much the same was true of people’s global positioning systems, or GPS devices.

Then there’s this. More than two thirds of the people surveyed said they use a smartphone, and of those people, 43% have more than one.

In other words, more and more functions that used to be a separate device are now part of our mobile phone. And yet at the same time a significant chunk of users have more than one mobile phone.

What this means, I think, is that we are integrating mobile phones into our lives in a way that even those who spend time researching this kind of thing don’t really get. In fact we’ve integrated them so much we need two.

That’s because, of course, they’re not really phones: they’re devices that connect us to all sorts of things that we hold dear, whether it’s social, work or personal.

But there’s still a long way to go. The device of the future will make everything more seamless. A company in Thailand, for example, allows you to use your smartphone to open your hotel door, tweak the room lights and air con, order food and switch TV channels.

In other words, interact with your surroundings. Some of this will happen via connected devices, from air conditioning units to washing machines, from street signs to earthquake sensors. Other services will sift piles and piles of big data in the cloud, and push important information to us when we need it. Google already has something called Google Now, which tries to anticipate your problems and needs before you do: a traffic jam up ahead, a sudden turn in the weather, a delayed flight.

Devices will also interact with the disconnected world, measuring it for us — whether it’s our blood sugar levels or the air quality. They’ll sense movement, odors, colors, frequencies, speed. They may even, one day, see through walls for us.

So our smartphones are just starting to get smart. We’re already smart enough to see how useful they can be. The bits that are missing are the technologies that blend this all together. This could still take some time, but don’t for a moment think the mobile world is about to get boring.

Taking Shady RAT to the Next Level

I know I’ve drawn attention to this before, but the timeline of McAfee’s Operation Shady RAT by Dmitri Alperovitch raises questions again about WikiLeaks’ original data.

Alperovitch points out that their data goes back to mid-2006:

We have collected logs that reveal the full extent of the victim population since mid-2006 when the log collection began. Note that the actual intrusion activity may have begun well before that time but that is the earliest evidence we have for the start of the compromises.

This was around the time that Julian Assange was building up his content; as he recounted in emails at the time, his hard drives were filling up with eavesdropped documents:

We have received over 1 million documents from 13 countries, despite not having publicly launched yet! (Wikileaks Leak, Jan, 2007)

Although Assange has since denied the material came from eavesdropping, it seems clear that it was, until McAfee’s report, the earliest example of a significant trove of documents and emails stolen by China-based hackers. This may have been the same channel stumbled upon a year later by Egerstad (Dan Egerstad’s Tor exit nodes get him arrested and proves a point I made in July | ZDNet).

There were, however, reports in mid-2006 of large-scale theft of documents: the State Dept (May), NIPRNet (June), the US War College (Sept) and German organisations (October).

I would like to see more data from McAfee and, in the interests of transparency, at least the metadata from the still-unrevealed WikiLeaks stash, in order to do some comparing of notes and triangulation. I’d also like to see this material compared with the groundbreaking work by three young Taiwanese white hats, who have sifted through malware samples to try to group together some of these APTs: APT Secrets in Asia – InSun的日志 – 网易博客.

The work has just begun.

Radio Australia Stuff, Jan 9 2009

For those listening to my slot on Radio Australia’s Breakfast Show, here’s what I was talking about:

Word Processing: Still in the Dark Ages

image

I’m amazed by how word processing is still in the dark ages, considering it’s what we spend most of our day doing. Case in point is Microsoft Word 2007, which throws all sorts of weirdness—artefacts, I guess we’d call them—into text. Try scrolling through a longish document—anything over 5,000 words—and you get this kind of thing (see above), where three lines repeat themselves. And it’s not just a brief trick-of-the-eye type thing. It sits there like a dumb duck until you fiddle with it and it goes away.

I’m very surprised that this kind of thing happens, and that it happens on such a regular basis. These are not complicated files that contain big tables or fancy graphics, or imported ones. They’re normal Word files.

It tends to confirm my suspicion that software developers rarely concentrate on honing the functions we actually spend most time in. There’s a tendency to add features, or change interfaces, and to find value not in making sure the basics work well, but in the stuff around them. (And no, sadly, OpenOffice.org isn’t a whole lot better.)

I guess what I’d like to see is someone come up with a real word processor: something that really processes words properly. So far I don’t think we’re there.

Are We Too Obsessed With Our Cars?

More stuff from the observant and thought-provoking Nokia researcher Jan Chipchase: drivers protecting their cars in Beijing from urinating canines: 

Beijing, 2008

As Jan points out, lots of issues arise with this: how confused must dogs get when their choice of territory marker moves around? (Or maybe that’s exactly what they want—expanding territory without having to do anything.) But for me the weird thing is the relationship people have with their cars these days. Cars seem to be much cleaner than they used to be.

Even in dusty, messy places like Jakarta you knew you could always beat the other guy to a spare corner of road just because you cared less about your car’s bodywork than he did. Here in Singapore people’s cars are so shiny you could eat off them. Apartments may look a mess but the car—the most visible reflection of people’s affluence—is glittering.

Then there’s the thing about hotels, restaurants and clubs allowing the owners of fancy cars to park right outside the lobby. Never quite understood how that works in practice. How do you know whether your car is fancy enough to merit this treatment? Would my Kijang KF42 cut it? It had a chrome trim thing going on which I thought was quite fancy.

Jan Chipchase – Future Perfect: Ownership = Target

Evernote’s Smart New Look

I like Evernote but I’ve always found the notes a bit messy: different fonts, lots of weird formatting, and not particularly easy to read and scan through.

That seems to have changed with their latest version, where the notes are decently sized, free of too much extraneous stuff and easily distinguished from each other with elegant gray space.

Amazing how a few user interface tweaks—to make things simpler and more intuitive rather than to impress and show off—can turn a maybe into a must-have.

The Alarm Clock is Dead, Long Live the Cellphone

Gadgets, like software and services, often end up being used in ways the creator didn’t intend. But how many companies make the most of this opportunity?

Take the cellphone. More than a third of Brits use their mobile phone as an alarm clock, according to a survey by British hotel chain Travelodge (thanks textually.org):

Budget hotel chain Travelodge quizzed 3,000 respondents on waking up habits and 71% of UK adults claimed that alarm clocks are now obsolete. The faithful bedside companion has been cast off in favour of the modern must-have, a mobile phone. Sixteen million Brits (36%) now prefer using the latest ring tone to rouse them from sleep rather than the shrill bleeping of an alarm clock.

Why? The article doesn’t say, but the answers are pretty obvious:

  • Who wants to take an extra device with you when you travel?
  • Ever come across an alarm clock with a dozen different ring tones?
  • Ever tried to program an alarm clock you’re not familiar with?
  • Ever tried to rely on wake up services?
  • Most alarm clocks are badly designed.

This might even reveal itself in the Alarm Clock Law: if another device can handle the task of a dumber gadget, it will replace it. So does that mean that the alarm clock is dead?

Not exactly. The alarm clock performs a single function: wake the person up. But that has turned out not to be as easy as it looks. While the design of most alarm clocks has been outsourced to the brain-dead, other designers have recognised the potential of alarm clocks that don’t merely wake the owner, but keep them awake long enough to get up.

This list, for example, illustrates the thriving world of alarm clock design (think Clocky, which has wheels and rolls away so you have to chase it down to silence it). And in this post about Seth Godin last September there were a bunch of responses suggesting that in fact alarm clock designers have tried to add features to keep the alarm clock relevant. As one of the commenters pointed out, the problem is that we’re just not ready to pay more for those features because alarm clocks have become a commodity.

I suspect it’s a bit more complicated than this. There may be other factors:

  • the decline of radio, and therefore the decline of alarm-clock-radios (34% of respondents wake up to the radio in the Travelodge survey);
  • We travel more and carry more gadgets with us, so something had to stay behind;
  • As home alarm clocks became more sophisticated (music, radio, mains-powered) so we were less likely to take them on the road with us;
  • Then there’s security: I know I stopped bringing an old-style ticking alarm clock with me because it made airport security professionals nervous.

Perhaps most important, we have developed a comfort level with our cellphone’s inner workings, and few of us would like to entrust a morning alarm to something or someone we don’t know.

Cellphone manufacturers, to their credit, seem to have acknowledged this new role: I tried to find the alarm function on a Nokia 6120 and did so in five seconds. I bet it would take me longer on any digital alarm clock. The process is quick and painless, and a little bell logo on the home screen reassuringly indicates it’s set. The alarm itself is cute and starts out unobtrusively but then gets louder until you’re up and about.

Or, more ominously, have thrown the phone across the room where it now sits in pieces. Maybe there is something to be said for keeping the alarm clock separate.
