Pen Computing Is Still About the Pen

November 22, 2011


I’ve always loved the idea of pens that work with your computer, either transcribing your handwritten notes or faithfully reproducing your drawings on screen, but the promise has always dwarfed the reality. Is LiveScribe different?

LiveScribe, launched at last week’s D conference, differs from previous digital pens in several ways: as well as capturing what you write, it captures what it hears, and can link the two together. Tap on a word you’ve written and it will jump to that part of the recording. Write something and have the pen translate it into Japanese or Swedish [sic].

All of this sounds amazing. It’s the sort of thing that has the potential to revolutionize the way we work. Its inventor, Jim Marggraff, says “we can see this changing the world.” Even my colleague Walt Mossberg got excited: according to the San Francisco Chronicle, he “was so determined to have the scoop on the pen, and to unveil it at his conference, Marggraff said, that [New Yorker’s Ken] Auletta (who was writing a profile of Mossberg and in the room when Marggraff was giving a demo) was not allowed to write about it.” Walt doesn’t get excited about stuff very often.

But what I find interesting is how hard it has been to get pen- or paper-based computing to where we are now. LiveScribe, for example, is Marggraff’s vision, but it also incorporates a lot of technology that came before. He himself invented the LeapPad (1999), “essentially a cross between a talking book and an educational videogame console,” in the words of a 2005 WIRED profile of Marggraff. It made LeapFrog, the company behind it, the fastest-growing company in history.

Then Marggraff came up with the Fly, a kids’ version of LiveScribe, which used technology provided by Sweden-based Anoto, the company that developed the technology behind other pen- or paper-based computing systems, including Logitech’s io Pen.

By then LeapFrog’s fortunes had taken a serious dive, so a lot was riding on the Fly. Oddly, and I can’t find an explanation for this, Marggraff then quit the company and joined Anoto (before the WIRED article had actually hit the streets). A year later Marggraff left again to form LiveScribe, although Anoto remains a partner of LiveScribe, according to this press release. (Anoto helps develop and market the pen in return for cash and royalties. Hence the Swedish translation, I guess.)

So while there may seem to be a buzz about this product (and there should be; it’s got some great features) it’s actually just the latest in a series of innovations that, at least for the adult/professional market, have dazzled more than they have actually won users over. For some reason the pens themselves are never as exciting to users as the idea of pens that do more. Walt may be excited by the product, and so am I. But I’ve learned that’s not always enough: journalists (well, me) have been excited by earlier incarnations of the digital pen, and those don’t seem to have caught on either.

Why? I think it’s a few factors. Part of it is that all these products seem too fiddly, or at least require a change of habit. Another is dependability: we need to know they’ll always work, or we won’t trust them to do the job alone (recording interviews and writing notes are the sort of things you don’t want to mess up). Thirdly, it’s because we’re weird about our pens: we either have pens we love and wouldn’t part with, or else we buy a particular brand we like by the truckload and lose them. In either case it’s because we like the way they write, and the io Pen and its cousins all failed to understand that, giving us just a basic Biro-type nib that doesn’t make us want to write. It’s like selling us a beautiful new laptop with a keyboard from Walmart.

So, my soapbox lesson for the day: paper- or pen-based computing is a great notion, and may yet have its day, but developers need to understand that whatever the gizmo can do, it’s first and foremost a pen. (Just as a smartphone, however snazzy, is first and foremost a phone.) So make it a great pen first, and then add the bells and whistles. Offer all sorts of cartridge types and colored inks and gels. Make it a pleasure to doodle with, and then add the technology. Then we’ll grab hold of the rest of the technology, and this time we may not let go.

Foleo, Surface, Stumbling, etc.

November 22, 2011

There’s lots of news out there which I won’t bother you with because you’ll be reading it elsewhere. But here are some links in case:

  • Palm has a new mini laptop called the Foleo. I like the idea, but I fear it will go the way of the LifeDrive, which I also kinda liked.
  • Microsoft has launched a desktop (literally) device called the Surface. Which looks fun, and embraces the idea of moving beyond the keyboard not a moment too soon, but don’t expect to see it anywhere in your living room any time soon.
  • eBay buys StumbleUpon, a group bookmarking tool I’ve written a column about somewhere. I don’t use StumbleUpon that much but I love the idea of a community-powered browsing guide. Let’s hope eBay doesn’t mess it up like they seem to be doing with Skype.
  • Microsoft releases a new version of LiveWriter, their blogging tool. Scoble says Google is planning something similar. True?

Oh, and Google Reader now works offline. Here are my ten minutes with it, and a how-to guide in ten steps. This is big news, because it’s the first step Google has made toward making its tools available offline. I’ve found myself using their stuff more and more, so the idea of being able to use Reader, Calendar, Docs and Gmail offline is an exciting one. (We’re not there yet, but Google Reader is a start.)

This brings me, once again, to plead with anyone offering an RSS feed of their stuff to put the whole post in the feed. Offline browsing is not going to work if you can only read an extract.

The Shift to a Mobile Web

November 22, 2011

This, more than anything else, will probably push the shift from desktop browsing to mobile browsing. The more restrictions workers face on their office computers from blinkered employers, the more natural it will be for them to turn to their mobiles:

A nationwide study by T-Mobile UK has revealed that over a quarter of the UK’s workforce, still deprived of web access, are now turning to the Internet on their mobile – as employers enforce blanket bans on net usage.

A few points worth making here:

  • It’s an umbilical thing: offices misunderstand the use of the Web, which is probably why they ban it. It’s no longer just about surfing for information, shopping or football scores (although it’s still that too). It’s about staying connected. The Internet is no longer just a resource of information (and, cough, images) but a way of “checking in” with one’s network, whether it’s on Facebook, MySpace, Twitter, Skype, or wherever. Offices need to cope with this somehow, or they’ll lose the attention of their workers.
  • A different screen, a different app: the shift to the mobile web under this negative pressure from the workplace will create huge demand for mobile web apps that work quickly and efficiently. And it’s not the only pressure: browsing is a quite different experience on a mobile phone. Browsers are already developing ways to reshape information to fit the screen, but a smarter approach would be to find new ways of delivering information via the phone (WidSets has made a start in this direction).
  • Toilets, the unsung productivity hive: Techdirt rightly points to the part of the survey which shows that 15% of users “resorted to hiding in the toilet just to get online.” Working from home, I do this with my laptop, frankly. But it’s not really about resorting to anything: it’s what the mobile world is for. We used to read the newspaper on the john; why not a mobile phone?

History will find it weird, not that we connect to the Web on the john with a device once designed to make phone calls, but that for 15 years we had to do that via a big hunk of metal, plastic and wires sitting in the middle of what used to be a big open space called a desk.

Escape to Streetlevel

November 22, 2011


Next up: cities you can drive through. Not from above, and not fake worlds where everyone has big chests, but real cities, from all angles. It’s called EveryScape.

The company calls it “the world’s first interactive eye-level search that offers Web users a totally immersive world on the Internet”: a “virtual experience of all metropolitan, suburban and rural areas in which visitors can share their stories and opinions about real-life daily experiences against a photo-realistic backdrop ranging from streets and cities, communities, restaurants, schools, real estate and the like.” OK, I’m not crazy about the lingo, but the idea is a cool one: just try the preview of San Francisco’s Union Square.

Using a Flash-enabled browser you move through the terrain at ground level (in the middle of the street), and can tilt your view through all angles. You can click on certain markers for more information, or enter certain buildings. You “window shop storefronts as well as tour the inside of those stores, see their offerings, and access published reviews and other information.” You can add content such as “relevant links, personal reviews, rankings” and things like “a ‘For Rent’ sign and an apartment tour.”


Putting the stuff together doesn’t sound as hard as you would expect. EveryScape’s HyperMedia Technology Platform means anyone with an SLR camera can take pictures and upload them; EveryScape hopes to tap “into local communities and users to assist in building out a visual library of content that will cover the entire world.” A sort of Google Earth at ground level.

Great idea, though of course you can imagine there’ll be a lot of commercial elements to all this. It’s hard to imagine ordinary Joes being allowed to plaster streets with their virtual graffiti, or anything else that gets in the way of advertising opportunities. The only other concern I have, off the top of my head, is that Google Earth made some of us wonder whether, after seeing every corner of the globe from a bird’s-eye view, we’d feel the same urge to travel. Now, after wandering the virtual streets of San Francisco, will we lose our wanderlust?

EveryScape plans to launch in 10 U.S. metropolitan areas this year.

CAPTCHA Gets Useful

November 22, 2011


An excellent example of taking a tool that already exists and putting the effort it consumes to good use: CAPTCHA forms. The AP writes from Pittsburgh:

Researchers estimate that about 60 million of those nonsensical jumbles are solved every day around the world, taking an average of about 10 seconds each to decipher and type in.

Instead of wasting time typing in random letters and numbers, Carnegie Mellon researchers have come up with a way for people to type in snippets of books to put their time to good use, confirm they are not machines and help speed up the process of getting searchable texts online.

“Humanity is wasting 150,000 hours every day on these,” said Luis von Ahn, an assistant professor of computer science at Carnegie Mellon. He helped develop the CAPTCHAs about seven years ago. “Is there any way in which we can use this human time for something good for humanity, do 10 seconds of useful work for humanity?”
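As a quick sanity check on those figures (my arithmetic, not the AP’s): 60 million solves at about 10 seconds each works out to roughly 167,000 hours a day, in the same ballpark as von Ahn’s 150,000:

```python
# Back-of-envelope check of the quoted CAPTCHA figures.
captchas_per_day = 60_000_000   # solves per day, per the AP estimate
seconds_each = 10               # average seconds spent per CAPTCHA

hours_per_day = captchas_per_day * seconds_each / 3600
print(f"{hours_per_day:,.0f} hours/day")  # about 167,000
```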

The project, reCAPTCHA, uses people’s decipherings to work through those books being digitized by the Internet Archive that can’t be converted using ordinary OCR, where the results come out like this:

[Image: a sample of garbled OCR output]

Those words are served up as CAPTCHAs, and the results are then fed back into the scanning engine. Here’s the neat bit, though, as explained on the website:

But if a computer can’t read such a CAPTCHA, how does the system know the correct answer to the puzzle? Here’s how: Each new word that cannot be read correctly by OCR is given to a user in conjunction with another word for which the answer is already known. The user is then asked to read both words. If they solve the one for which the answer is known, the system assumes their answer is correct for the new one. The system then gives the new image to a number of other people to determine, with higher confidence, whether the original answer was correct.

Which I think is kind of neat. The only problem might occur if people know this and try to game the system by getting one word right and the other wrong. But how would they know which is which?
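The pairing-and-voting scheme described in that quote is simple enough to sketch in code. This is my own toy reconstruction in Python, not reCAPTCHA’s actual implementation: the three-vote acceptance threshold and all the names here are assumptions.

```python
import random


class RecaptchaSketch:
    """Toy sketch of the reCAPTCHA pairing scheme described above.

    Each challenge pairs a control word (answer known) with an
    unknown word OCR could not read. A guess for the unknown word
    only counts if the user got the control word right, and the
    answer is accepted once enough users agree on it.
    """

    VOTES_NEEDED = 3  # assumed threshold, not reCAPTCHA's real figure

    def __init__(self, known, unknown_words):
        self.known = known                              # word_id -> verified answer
        self.guesses = {w: [] for w in unknown_words}   # word_id -> trusted guesses
        self.solved = {}                                # word_id -> accepted answer

    def challenge(self):
        # Serve one control word alongside one unknown word; the user
        # cannot tell which is which.
        return random.choice(list(self.known)), random.choice(list(self.guesses))

    def submit(self, control, control_answer, unknown, unknown_answer):
        # Only trust the guess for the unknown word if the user solved
        # the control word correctly.
        if control_answer != self.known[control]:
            return False
        votes = self.guesses[unknown]
        votes.append(unknown_answer)
        # Accept once enough independent users agree on the same answer.
        top = max(set(votes), key=votes.count)
        if votes.count(top) >= self.VOTES_NEEDED:
            self.solved[unknown] = top
        return True
```

This also suggests why the gaming attack in my last paragraph fails: a wrong answer on the control word throws the whole submission away, and a wrong answer on the unknown word is simply outvoted by other users.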