Xiaomi Goes Virtually Edgeless By Using Ultrasound


Regular readers will know I’ve been looking out for this to happen for a while: the use of sound, or rather ultrasound, as a form of interface. Here’s a Reuters piece I did on it a year ago:  From pixels to pixies: the future of touch is sound | Reuters:

Ultrasound – inaudible sound waves normally associated with cancer treatments and monitoring the unborn – may change the way we interact with our mobile devices.

But the proof will be in the pudding, I reckoned:

Perhaps the biggest obstacle to commercialising mid-air interfaces is making a pitch that appeals not just to consumers’ fantasies but to the customer’s bottom line.

Norwegian start-up Elliptic Labs, for example, says the world’s biggest smartphone and appliance manufacturers are interested in its mid-air gesture interface because it requires no special chip and removes the need for a phone’s optical sensor.

Elliptic CEO Laila Danielsen says her ultrasound technology uses existing microphones and speakers, allowing users to take a selfie, say, by waving at the screen.

Gesture interfaces, she concedes, are nothing new. Samsung Electronics had infra-red gesture sensors in its phones, but, she says, “people didn’t use it”.

Danielsen says her technology is better because it’s cheaper and broadens the field in which users can control their devices.

That day has arrived. Xiaomi’s new MIX phone, Elliptic Labs says, is the first smartphone to use its Ultrasound Proximity Software:

INNER BEAUTY replaces the phone’s hardware proximity sensor with ultrasound software and allows the speaker to be completely invisible, extending the functional area of the screen all the way to the top edge of the phone.

Until now, all smartphones required an optical infrared hardware proximity sensor to turn off the screen and disable the touch functionality when users held the device up to their ear.

Without the proximity sensor, a user’s ear or cheek could accidentally trigger actions during a call, such as hanging up the call or dialing numbers while the call is ongoing.

However, INNER BEAUTY — built on Elliptic Labs’ BEAUTY ultrasound proximity software — uses patented algorithms not only to remove the proximity sensor, but also to hide the speaker behind the phone’s glass screen.

Besides eliminating the unsightly holes on a phone’s screen, Elliptic Labs’ technology eliminates common issues with hardware proximity sensors, such as their unreliability in certain weather conditions or in response to various skin colors as well as dark hair.
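The trick, for what it’s worth, is plain time-of-flight: the phone’s speaker emits an inaudible chirp and the microphone times the echo. Here’s a simplified sketch of the arithmetic — not Elliptic Labs’ patented algorithm, and the echo delay would of course come from real audio hardware rather than a hard-coded number:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def distance_from_echo(echo_delay_s: float) -> float:
    """Distance to the reflecting object, given the round-trip echo delay."""
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2

def screen_should_turn_off(echo_delay_s: float, threshold_m: float = 0.05) -> bool:
    """Mimic a proximity sensor: anything closer than ~5 cm counts as 'phone at ear'."""
    return distance_from_echo(echo_delay_s) < threshold_m

# A ~0.2 ms round trip is about 3.4 cm -- close enough to blank the screen.
print(screen_should_turn_off(0.0002))  # True
```

The 5 cm threshold is an invented figure; the point is only that a speaker, a microphone and a clock are enough to stand in for a dedicated infrared part.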

This is a good first step. The point here, of course, for the company, is that they can push the display right to the top, which definitely looks nice (the front-facing camera, if you’re wondering, is now at the bottom). But the use of ultrasound has lots of interesting implications — not least for how we interact with our phones. If gestures actually work, rather than merely being claimed to work, they will make interacting with other devices as interesting as voice, maybe more so.

Smartwatches: Coming Soon to a Cosmos Near You

This is a column I did for the BBC World Service, broadcast this week. 

There’s been a lot of talk that the big boys — by which I mean Apple and Samsung — are about to launch so-called smart watches. But how smart does a watch have to be before we start strapping them to our wrists in numbers to make a difference?

First off, a confession. I’ve strapped a few things to my wrist in my time. Back in the 80s and 90s I used to love the Casio calculator watch called the Databank, though I can’t actually recall ever doing a calculation on it or putting more than a few phone numbers in there. About a decade ago I reviewed something called the Fossil Wrist PDA, a wrist-bound personal digital assistant. It didn’t take off. In fact, no smart watch has taken off.

So if the smartwatch isn’t new, maybe the world around it is? We’ve moved a long way in the past couple of years, to the point where every device we have occupies a slightly different spot to the one it was intended for. Our phones, for example, are not phones anymore but data devices. And even that has evolved: devices have reversed direction in size, from shrinking to growing, as we realise we want to do more on them.

That in turn has made tablets shrink. When Apple introduced the iPad, Steve Jobs famously said that was the smallest the tablet could reasonably go, but Samsung proved him wrong with the phablet, and now we have an iPad Mini. All this has raised serious questions about the future of the laptop computer and the desktop PC.

But it shouldn’t. For a long time we thought that the perfect device would be something that does everything, but the drive to miniaturise components has actually had the opposite effect: we seem to be quite comfortable moving between devices and carrying a bunch of them around with us.

This all makes sense, given that our data is all stored in the cloud, and every device is connected to it either through WiFi, a phone connection or Bluetooth. We often don’t even know how our device is connecting — we just know it is.

So, the smartwatch optimists say, the time is ripe for a smartwatch. Firstly, we’ve demonstrated that we are able to throw out tired conventions about what a device should do. If our phone isn’t really our phone anymore then why not put our phone on our wrist? Secondly, the cloud solves the annoying problem of getting data in and out of the device.

Then there’s the issue of how we interact with it. It’s clear from the chequered history of the smartwatch that using our digits is not really going to work. We might be able to swipe or touch to silence an alarm or take a call, but we’re not going to be tapping out messages on a screen that size.

So it’s going to have to be voice. GeneratorResearch, a research company, reckons this would involve a small earpiece and decent voice-command software like Apple’s Siri. I’m not convinced we’re quite there yet, but I agree with them that it’s going to take someone of Apple’s heft to make it happen and seed the market.

In short, the smart watch might take off if it fits neatly and imaginatively into a sort of cosmos of devices we’re building around ourselves, where each one performs a few specific functions and overlaps with others on some. If it works out, the watch could act as a sort of central repository of all the things we need to know about — incoming messages, appointments, as well as things the cloud thinks we should know about, based on where we are: rain, traffic jams, delayed flights.

But more crucially it could become something that really exploits the frustratingly unrealised potential of voice: where we could more easily, and less self-consciously, talk to our devices and others without having to hold things to our ear, or be misunderstood.

In time, the smartwatch may replace the smartphone entirely.

I’m not completely convinced we’re as close as some think we are, but I’ve said that before and been proved wrong, so who knows?

The Real Revolution

This is also a podcast, from my weekly BBC piece. 

While folks at the annual tech show in Vegas are getting all excited about a glass-encased laptop, the world’s thinnest 55″ TV and a washing machine you can control from your phone, they may be forgiven for missing the quiet sound of a milestone being crossed: there are now more smartphones in the world than there are ordinary phones.

According to New York-based ABI Research, 3G and 4G handsets now account for more than half of the total mobile phone market. Those old ‘dumb phones’ and the so-called feature phones–poor relations to computer-like iPhone and Android devices–are now officially in decline.

This is, in the words of ABI Research’s Jake Saunders, “an historic moment.” While IDC, another analyst company, noticed that this happened in Western Europe in the second quarter of last year, Saunders points out: “It means not just mobile phone users in Developed Markets but also Emerging Market end-users are purchasing 3G handsets.”

So why is this a big issue? Well, a few years back it would have been hard to convince someone in an emerging market to shell out several hundred bucks for a phone. A phone for these folks was good for talking and sending text messages. That was a lot. And enough for most people–especially when the handset cost $20 and the monthly bill was even less.

Now, with prices falling and connectivity improving in the developing world a cellphone is so much more: It’s a computer. It’s an Internet device. It’s a portable office and shop front. It’s a music player. A TV. A video player. A way to stay in touch via Facebook and Twitter.

And for the industry these people in emerging markets are a life saver. For example: The developed world is pretty much saturated with smartphones. People aren’t buying them in the numbers they used to.

But that’s not to say the feature phone is dead. In fact, for some companies it’s still an important part of their business. Visionmobile, a UK based mobile phone research company, says that Nokia–busy launching its new Windows Lumia phones in Vegas–is still the king of feature phones, accounting for more than a quarter of the market.

And they just bought a small company called, confusingly, Smarterphone, which makes a feature phone interface look more like a smartphone interface. So clearly at least one company sees a future in this non-smartphone world. In a place like Indonesia, where the BlackBerry leads the smartphone pack, nearly 90% of phones sold in the third quarter of last year were feature phones, according to IDC.

So companies see a big chance for growth in these parts of the world. But they also need the spectrum. If you’re a mobile operator your biggest problem now is that smartphone users do a lot of downloading. That means bandwidth. The problem is that one piece of spectrum is for that 3G smartphone, and another is for your old-style 2G phone. The sooner you can get all your customers to upgrade their handset to 3G, the sooner you can switch that part of the spectrum you own to 3G.

So this is a big moment. We’re seeing a tipping point in the world’s use of the cellphone, from a simple, dumb communication device to something vastly more useful, vastly more exciting, vastly more lucrative. All those people moving over to smartphones are where the growth is.

ABI Research reckons there’ll be 1.67 billion handsets sold this year. That’s one in four people buying a new device. Forget fancy Vegas. The real revolution just started.

Inside the Web of Things

This is a slightly longer version of a piece I’ve recorded for the BBC World Service

I’ve long dreamed of an Internet of things, where all the stuff in my life speaks to each other instead of me having to do the talking. The vision is relatively simple: each gadget is assigned an Internet address and so can communicate with the others, and with a central hub (me, or my computer, or smartphone, or whatever).

The most obvious one is electricity. Attach a sensor to your fusebox and you can see which of your myriad appliances is inflating your electricity bill. Great idea! Well, sort of. I found a Singapore-based company that was selling them, and asked to try one out. It was a nice, sleek device that promised to connect to my computer via WiFi and give me a breakdown of my electricity consumption. Woohoo.

Only it never worked. Turns out the device needed to be connected to the junction box by a pro called Ken, who tried a couple of times and then just sort of disappeared. I don’t mean he was electrocuted or vaporized, he just didn’t come back. The owner of the company said he didn’t really sell them anymore. Now the device is sitting in a cupboard.

Turns out that Cisco, Microsoft and Google tried the same thing. The tech website Gigaom reports that all three have abandoned their energy consumption projects. Sleek-looking devices but it turns out folk aren’t really interested in saving money. Or rather, they don’t want to shell out a few hundred bucks to be reminded their power bills are too high.

This might suggest that the Internet of things is dead. But that’d be wrong. The problem is that we’re not thinking straight. We need to come up with ways to apply to the web of things the same principles that made Apple tons of cash. And that means apps.

The Internet of things relies on sensors. Motion sensors which tell whether the device is moving, which direction it’s pointing in, whether it’s vibrating, its rotational angle, its exact position, its orientation. Then there are sensors to measure force, pressure, strain, temperature, humidity and light.

The iPhone has nearly all of these. An infrared sensor can tell that your head is next to the phone, so it can turn off the screen and stop you cancelling the call with your earlobe. (The new version can even tell how far away you are from the phone, so it can activate its voice assistant Siri.)

But what makes all this powerful is the ecosystem of third-party applications that have been developed for the iPhone. Otherwise it’s just a bunch of sensors. There are thousands of apps that make use of the iPhone’s sensors–most of them without us really thinking about it.

This is the way the Internet of things needs to go. We need to stop thinking boring things like “power conservation” and just let the market figure it out. Right now I want a sensor that can tell me when the dryer is spinning out of control, which it tends to do, because then it starts moving around the room. Or help me find my keys.
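That dryer alarm is a nice illustration of how thin the app layer could be. Assuming a hypothetical stick-on sensor that streams acceleration magnitudes (everything here is invented for illustration), a wobble detector is a few lines:

```python
from statistics import pstdev

def dryer_out_of_control(samples: list[float], threshold: float = 2.0) -> bool:
    """Flag a wobbling dryer when the spread of recent accelerometer
    magnitudes (in m/s^2) exceeds a tuned threshold."""
    return pstdev(samples) > threshold

steady = [9.8, 9.9, 9.8, 10.0, 9.7]      # drum spinning smoothly
wobbling = [4.0, 15.0, 6.5, 18.0, 3.2]   # machine walking across the room
print(dryer_out_of_control(steady))      # False
print(dryer_out_of_control(wobbling))    # True
```

The threshold would need tuning per machine, of course — which is exactly the sort of fiddling an app marketplace, rather than an appliance maker, is good at.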

In short, the Internet of things needs to commoditize the sensors and decentralize the apps that make those sensors work. Make it easy for us to figure out what we want to do with all this amazing technology, and either give us a simple interface to do it ourselves or make a software kit that lets programmy people do it for us.

Which is why some people are pretty excited about Twine, from a bunch of guys out of MIT: a two-and-a-half-inch rubber square which connects to WiFi and can be programmed via a very simple interface. One example: hang it around your infant’s neck and get it to send you a tweet every time it moves.

It may not be rocket science, but if you’ve got an infant-wandering problem it could be just what you needed.

Media: Reducing Story Production Waste

In trying to change news to match the new realities of the Interwebs, media professionals are still somewhat stuck in old ways of doing things. One is to fail to address the massive waste in news production–or at least parts of it.

So what potential waste is there? Well, these are the obvious ones:

  • Gathering: Reporters/trips/stories per trip/matching other outlets
  • Editing: The number of people who look at a story before it is published/time a story takes to work through the system

I’m more interested, however, in the amount of waste from material generated. Think of it like this:

Inputs:

  • Story idea
  • Logistics (travel/communications/reporting tools)
  • Interviews, multimedia and other material generated

Outputs:

  • Story
  • Photo
  • ?Video

Wastage:

  • All content not used in the story (some may be reused, e.g. photos and sidebars, but rarely)
  • All content used that’s not reused/repurposed.

This seems to me to be extremely wasteful in an industry in so much pain. Any other industry wouldn’t just look to pare back on factors of production but to also minimize the waste generated.

Any journalist will know just how much we’re talking about. Say you interview five people for a story. Even a stock market report is going to involve five interviews of at least five minutes. At about 150 words a minute that’s nearly 4,000 words. Knock off, say, 1,250 for the reporter’s questions and some backchat and you’re left with about 2,500 words of material. The stock market report itself is going to be about 500 words, maybe 600. So for every 500 words published, roughly 2,000 went to waste.
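The back-of-envelope sums fit in a few lines of Python (the 1,250 words knocked off for the reporter’s own questions and backchat is just an assumption to make the arithmetic concrete):

```python
interviews = 5
minutes_each = 5
words_per_minute = 150
questions_and_backchat = 1_250  # assumed share of the reporter's own words
story_length = 500

words_gathered = interviews * minutes_each * words_per_minute
material = words_gathered - questions_and_backchat
wasted = material - story_length
print(words_gathered, material, wasted)  # 3750 2500 2000
```

Not science, as I say — but it makes the ratio plain: four words discarded for every one published.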

Yes, I know it’s not a very scientific way of doing things, but you get my point. Most journalists only write down the quotes they need for the story, and many will delete the notes they’ve taken if they’re typing them on the screen in the same document they’re writing the story on. So all that material is wasted.

A good reporter will keep the good stuff, even if it’s not used in the story, and will be able to find it again. But I don’t know of any editorial system that helps them do that–say, by tagging or indexing the material–let alone to make that available to other reporters on the same beat.

This is where I think media needs to change most. It needs to assume that all material gathered by journalists, through interviews, research, even browsing, is potentially content. It needs to help journalists organise this material for research, but, more importantly to generate new content from.

Take this little nugget, for example, in a New York Times story, Nokia Unveils a New Smartphone, but Not a Product of Its Microsoft Deal – NYTimes.com. The reporter writes of the interviewee, Nokia’s new chief executive Stephen Elop: “During the interview, he used the words ‘innovate’ or ‘innovation’ 24 times.”

I really like that. It really captures something that quotes alone don’t. We would call it “interview metadata”–information about the interview that is not actual quotes or color but significant, nonetheless.

Whether the journalist decided to count them early on during the interview, or took such good notes that a keyword search or manual count afterwards was enough, or whether he transcribed the whole thing in his hotel room later, I don’t know. (A quibble: I would have put the length of the interview in that sentence, rather than an earlier one, because it lends the data some context. Or one could include the total number of words in the interview, or compare the count with another word, such as “tradition”. Even better, create a word cloud out of the whole interview.) (Update: here’s another good NYT use of metadata, this time the frequency of words in graduation speeches: Words Used in 40 Commencement Speeches – Class of 2011 – Interactive Feature – NYTimes.com)

The point? Elop is an executive, and he has a message. He wants to convey the message, and so he is using carefully chosen words to not only ensure they’re in any quote that’s used, but also to subliminally convey to the journalist the angle he hopes the journalist will adopt. By taking the interview metadata and presenting it separately, that objective, and strategy, will be well illustrated to the reader.
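Extracting that sort of interview metadata is trivial once a transcript exists; a sketch, using a made-up scrap of transcript rather than the real Elop interview:

```python
from collections import Counter
import re

transcript = """We have to innovate. Innovation is in Nokia's DNA,
and we will innovate faster than anyone. Innovation, innovation, innovation."""

# Lower-case and split into words (keeping apostrophes for words like "Nokia's").
words = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(words)

# Count every "innovate"/"innovation"-type word the interviewee used.
innovate_ish = sum(n for w, n in counts.items() if w.startswith("innovat"))
print(innovate_ish)            # 6
print(counts.most_common(3))   # the raw material for a word cloud
```

The same `Counter` output feeds straight into a word-cloud tool, which is the kind of light-touch visualization a newsroom could hand journalists as a standard step after transcription.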

And, of course, you’ve reduced the story production wastage, or SPW, significantly.

Media can help this process by developing tools and offering services to maximise the usefulness of material gathered during research and interviews, and to reduce the time a journalist spends on marshalling this material.

Suggestions?

  • Transcription services, where journalists can send a recording and get the material back within the hour (or even as the interview is conducted, if the technology is available).
  • Push some of the content production to the journalist: let them experiment with wordclouds and other data visualization tools, not only to create end product but to explore the metadata of what they’ve produced.
  • Explore and provide content research and gathering tools (such as Evernote) to journalists so they don’t have to mess around too much to create stuff drawing on existing material they’ve gathered for the story they’re working on, from previous research and interviews, and, hopefully, from that of colleagues.

A lot of my time training journalists these days is in these kinds of tools, and I’m always surprised at how little they are made use of. That needs to change if media is to find a way to make more use of the data it gathers in the process of creating stories.

Lost in the Flow of The Digital Word

My weekly column as part of the Loose Wire Service; hence the lack of links.

By Jeremy Wagstaff

A few weeks ago I wrote about the emergence of the digital book, and how, basically, we should get over our love affair with its physical ancestor and realize that, as with newspapers, rotary dial phones and reel-to-reel tape decks, the world has moved on. Digital rules, and ebooks now make more sense than papyrus.

Not everyone was happy. My bookseller friends won’t talk to me anymore, and don’t even mention my author ex-buddies. One person told me I was “brave” (I think he meant foolhardy) in saying something everyone else thought, but didn’t yet dare mention.

But the truth is that a lot of people have already moved on. Amazon is now selling more ebooks than hardbacks. It’s just about to bring out a Kindle that will sell for about $130. When it hits $100—by Christmas, probably—it’s hard not to imagine everyone getting one in their stocking.

By the end of next year, you’ll be more likely to see people reading on a digital device than a print version. Airlines will hand them out at the beginning of the flight instead of newspapers, along with a warning during the security demonstration not to steal them. (I was on a flight the other day that reminded people it was a serious offence to steal the lifejackets. What kind of people take planes and then steal the one thing standing between them and a watery grave?)

But what interests me is the change in the pattern of reading that this is already engendering. (The ereading, not the theft of flotation devices.) I go to Afghanistan quite a bit and it’s common to see Kindles and Sony eBook Digital Book Readers in the airport lounge. Of course, for these guys—most of them contractors, aid workers or soldiers—the ereader makes a lot of sense.

There are indeed booksellers in Kabul but it’s not exactly a city for relaxed browsing, and lugging in three or four months’ worth of reading isn’t ideal—especially when you can slot all that into one device that weighs less than a hardback, and to which you can download books when you feel like it.

Those who use Kindles and similar devices say that they read a lot more, and really enjoy it. I believe them. But there’s more. Amazon now offers applications for the iPhone (and the iPad) as well as the Android phone and the BlackBerry. Download that and you’re good to go. 

The first response of friends to the idea of reading on a smart phone is: “too small. Won’t work.”

Until, of course, they try it. Then opposition seems to melt away. One of my Kabul colleagues, no spring chicken, reads all his books on his iPhone 4. When the Android app came out a few weeks ago I tried it on my Google Nexus One.

And that’s when I realized how different digital books are.

Not just from normal books. But from other digital content.

I look at it like this: Written content is platform agnostic. It doesn’t care what it’s written/displayed on. We’ll read something on a toilet wall if it’s compelling enough (and who doesn’t want to learn about first-hand experience of Shazza’s relaxed favor-granting policies?)

We knew this already. (The fact that content doesn’t care about what it’s on, not how Shazza spends her discretionary time.) We knew that paper is a great technology for printing on, but we knew it wasn’t the only one. We also knew the size of the area upon which the text is printed doesn’t matter too much either. From big notice boards to cereal packets to postage-stamps, we’ll read anything.

So it should come as no surprise that reading on a smartphone is no biggie. The important thing is what Mihály Csíkszentmihályi defined as flow: Do we lose ourselves in the reading? Do we tune out what is around us?

Surprisingly, we do. Usually, if I’m in a queue for anything I get antsy. I start comparing line lengths. I curse the people in front for being so slow, the guy behind me for sneezing all over my neck, the check-in staff for being so inept.

But then I whip out my phone and start reading a book and I’m lost. The shuffling, the sneezing, the incompetence are all forgotten, the noise reduced to a hum as I read away.

Now it’s not that I don’t read other stuff on my cellphone. I check my email, I read my Twitter, Facebook and RSS feeds. But it’s not the same. A book is something to get absorbed in. And, if you’re enjoying the book, you will. That’s why we read them.

So it doesn’t really matter what the device is, so long as the content is good (and this is why talk of turning ebooks into interactive devices is hogwash. All-singing, all-dancing multimedia swipe and swoosh is not what flow is all about—and what books are all about.)

This is what differentiates book content from other kinds of digital content. We’re actually well primed to pick up the thread of reading from where we left off—how many times do you notice that you’re able to jump to the next unread paragraph of a book you put down the night before without any effort? Our brains are well-trained to jump back into the narrative thread a book offers.

There’s another thing at work here.

Previously we would only rarely have considered picking up a book to read for short bursts. But the cellphone naturally lends itself to that. You’ll see a few people in queues reading physical books, but the effort required is often a bit too much. It looks more defiantly bohemian than cozy. Not so with the phone, which is rarely far from our grasp.

This is one reason why friends report reading more with these devices. They may carve the process into smaller slices, but the flow remains intact.

And one more thing: The devices enable us to keep several books on the go at once. Just as we would listen to different music depending on our mood, time of day, etc, so with books we switch between fiction and non-fiction, humor, pathos, whatever. Only having a pile of books in your bag wasn’t quite as practical as having one by your bedside.

Now with ebooks that’s no longer an issue.

This is all very intriguing, and flies in the face of what we thought was happening to us in our digital new world: We thought attention spans were shrinking, that we weren’t reading as much as before, that we were slaves to our devices rather than the other way around.

I don’t believe it to be so. Sure, there are still phone zombies who don’t seem to be able to lift their gaze from their device, and respond to its call like a handmaiden to her mistress. But ebooks offer a different future: That we are able to conquer distraction with flow, absorb knowledge and wisdom in the most crowded, uncivilized of places, and, most importantly, enjoy the written word as much as our forebears did.

Praise be to Kindle. And the smart phone.

The Future: Findability

We only noticed three months later, but we passed something of a milestone last December. I’m hoping it might, finally, wake us up to the real power of the Web: findability.

According to Ericsson, a mobile network company, in December we exchanged more data over our mobile devices than we talked on them. In short, we now do more email, social networking, all that stuff, on our mobile phones and mobile-connected laptops than we do voice.

Quite a turning point.

But a turning point of what, exactly?

Well, the conventional wisdom is that we will use our cellphone (or a netbook with a cellphone connection) to do all the things we used to do, or still do, on our desk-tethered laptop or PC. According to a report released this month by Sandvine, another network company, one in five mobile data subscribers use Facebook, and video-sharing website YouTube accounts for at least a tenth of all traffic.

But the conclusions they draw from this are wrong.

The thinking is that we’re somehow interested only in doing things that we did at our desk, even when we’re in the open air. Or on the couch.

Well, OK, but it betrays a lack of imagination of what we’ll do when we’re really untethered.

When we have access to everything the Internet has to offer–and when the Internet has access to us–then we’ll have findability. By that we mean we can find the answer to pretty much every question we ask, from where’s the nearest 24-hour pizza place to what’s the capital of Slovakia. Or who was in that movie with John Cusack about a hit man returning to his high school prom?

We know that we know all this, even if we don’t know it. Because we have all this at our fingertips, because we have the Internet. No longer do we care about hoarding information because we know the Internet’s hoarding it for us, and Google or someone, is there to help us find it in a microsecond.

That’s one bit of findability. But there’s another bit. Connect all this to other bits of information about ourselves, drawn from sensors and other chips inside the device: where we are, what time of day it is, what that building in front of us is, who we’re with, what language they’re speaking, our body temperature, whether we’re moving or stationary, whether we’re upright, sitting or lying flat, whether our eyes are closed, whether we used voice, touch, eyes, keys or gestures to pose whatever question was on our mind.

All that adds extra layers of information to findability, by giving context to our search for information. Only our imagination can tell us how all these bits and pieces of data can be useful to us, but if you’ve used a map on your smartphone you’ll already get a glimpse of its potential.
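One way to picture it: a query stops being a bare string and becomes a string plus a bundle of sensor context for the search backend to rank against. A toy sketch — every field name here is invented for illustration, not any real search API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    lat: float
    lon: float
    when: datetime
    moving: bool

def build_query(text: str, ctx: Context) -> dict:
    """Attach device context so a hypothetical search backend can rank
    results by relevance to where and when the question was asked."""
    return {
        "q": text,
        "near": (ctx.lat, ctx.lon),
        "hour": ctx.when.hour,
        "on_the_move": ctx.moving,
    }

# Asking for pizza at 11.30pm, standing still in Singapore.
ctx = Context(lat=1.29, lon=103.85, when=datetime(2010, 4, 12, 23, 30), moving=False)
print(build_query("24-hour pizza place", ctx))
```

The backend can then prefer places that are open at that hour and close to those coordinates — context doing the work that a longer typed query would otherwise have to.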

Last December, we passed into this new era: the era in which the Internet moves beyond the desk and the lap and starts to mesh with our lives, so that it is all around us. Where we, where everything, can be found.

Tony’s Camera


Tony’s Camera
Originally uploaded by Loose Wire.

How many people, I wonder, have had this experience: a nightmare with a smartphone and a return to trusty basics. My friend Tony has a BlackBerry, but this is his phone of choice; after his N91 died in midflight (literally) he decided he wouldn’t take a chance on a phone being anything more than a phone anymore. This ancient museum-piece is now his main phone and he’s very happy with it. And he’s in telecoms, too!

Bye Bye, Laptop?

[image]

The day seems to be getting closer when we can do something that would seem to be pretty obvious: access our pocket-sized smartphone via a bigger screen, keyboard and a mouse. Celio Corp says it’s close.

Celio Corp has two products. The first is the Mobile Companion (pictured above), a laptop-like thing that includes an 8″ display, a full-function keyboard, and a touchpad mouse. At 1 x 6 x 9 inches and weighing 2 lbs, the Mobile Companion promises over 8 hours of battery life and boots instantly. After loading a driver on your smartphone you can access it via a USB cable or Bluetooth. (You can also charge the smartphone via the same USB connection.)

Uses? Well, you can say goodbye to coach cramp, where you’re unable to use a normal laptop. You can input data more easily than you might if you just had your smartphone with you. And, of course, you don’t need to bring your laptop.

The second product might be even better. The Smartphone Interface System is, from what I can work out, a small Bluetooth device that connects your smartphone, not to the Mobile Companion, but to a desktop computer, public display or a conference room projector  — these devices connect via a cable to the Interface, like this:

[image]

The important bit about both products is that the Redfly software renders the smartphone data so it fits on the new display. (This will be quite tricky and, because it will be carried via Bluetooth, will need quite a bit of compression. The maximum output resolution is WVGA, i.e. 800 x 480, so don’t expect stunning visuals, but it’ll be better than having all your colleagues crowding around your smartphone.)
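Fitting an arbitrary phone screen onto an 800 x 480 panel without distortion is, at heart, an aspect-fit calculation. A sketch of the geometry only — not Redfly’s actual code, which also has to handle the compression side:

```python
def aspect_fit(src_w: int, src_h: int,
               dst_w: int = 800, dst_h: int = 480) -> tuple[int, int]:
    """Largest size that fits inside the destination panel while keeping
    the source aspect ratio; the leftover area is letterboxed."""
    scale = min(dst_w / src_w, dst_h / src_h)
    return round(src_w * scale), round(src_h * scale)

# A 320x240 Windows Mobile screen doubles cleanly to 640x480.
print(aspect_fit(320, 240))  # (640, 480)
```

A portrait 480 x 800 phone screen, by contrast, would shrink to 288 x 480 with black bars at the sides — one reason the result on a big display will look serviceable rather than stunning.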

The bad news? Redfly isn’t launched yet, and will for the time being be available only for Windows Mobile Devices. Oh, and according to UberGizmo, it will cost $500. The other thing is that you shouldn’t confuse “full function keyboard” with “full size keyboard”: this vidcap from PodTech.net gives you an idea of the actual size of the thing:

[image]

This is the keyboard size relative to Celio CEO Kirt Bailey’s digits:

[image]

Until I try the thing out and feel sure that the keyboard doesn’t make the same compromises as the Eee PC, I’d rather use my Stowaway keyboard.

For those of you looking for software to view your mobile device on your desktop computer, you might want to check out My Mobiler. It’s free software that purports to do exactly that for Windows Mobile users.