A Call for Diminished Reality

(a copy of my weekly syndicated column. Podcast from the BBC here.)

By Jeremy Wagstaff

I was walking the infant the other day, when I saw a vision of my future.  A mother and father, out with their son and dog.  The mother sat on a park bench, dog sitting obediently at her feet as she flicked absent-mindedly at her iPhone.

In the playground, the boy wove his way through a tunnel, across some ropes, down a slide–the father nearby, lost in his own iPhone. Occasionally he would waken from his 3G trance and, without looking up, point the phone at his son as if scanning him for radiation.  The resulting photo probably went straight to his Facebook page.  Ah, happy families, connected by place but detached by devices.

It’s a familiar lament.  Our devices distract us so much that we can’t ignore them.  We ignore our kith and kin but obey their beeps, walk into traffic or drive into pedestrians to heed their call.  And the solutions are usually less than imaginative, or practical: holidays where you check your devices in at the gate, or put them in a glove compartment, or (shock) leave them at home entirely.

I have tried all these and they don’t work.  Which is why I fear I will be that family. Perhaps I already am; desperate to catch my infant’s first steps, words, or symphony, I think it more important that my cellphone camera is there, somehow, than I am. This is silly.  But I think I have found the answer in something called augmented reality.

Augmented reality is where our devices use their camera and positioning capability to add layers of information to what is in front of us: little pointers appear on the screen detailing where the nearest ATM or Chinese restaurant is, or how far away and in what direction the nearest Twitter user is. The reality is the scene in front of us viewed through our camera; the augmented bit is the layers of extra information laid on top.

This is not new, but it’s becoming more popular.  And it’s kind of fun.  It is related to another technology that adds a layer onto what we see: so-called heads-up displays, which project information onto the windscreen of our airplane, car or goggles to help us identify a target, a runway, or an obstacle in the road.

Interesting, but I think they’ve got it all backwards.  Our problem is not that we need more information overlain on the world; we need the world overlain on the screens that command us.  We spend so little time interacting with the world now that we need technology to help us reintroduce the real world into our lives.

I don’t think handing over our devices to well-intentioned guards at hotel gates is going to do it.  I think we need to find a way to fit the real world into our device.

Which is why, two years ago, I got very excited about an application for the iPhone called Email n Walk.  It’s a simple application that overlays an email interface on top of whatever is in front of you.  The iPhone’s camera sees the scene for you, but instead of putting lots of pins about ATMs, Chinese restaurants and Twitter users on the image, it shows the bare bones of whatever email you’re typing.  You can type away as you walk while still seeing where you’re going.

Brilliant.  And of course, as with all brilliant things, it got lots of media attention and promptly disappeared.  The app is still there on Apple’s software shop, but the company’s home page makes no mention of it.  I tried to reach the developers but have yet to hear back.

They’re careful not to claim too much for the software. “We can’t take any responsibility for your stupidity, so please don’t go walking into traffic, off of cliffs, or into the middle of gunfights while emailing,” they say.  But it’s an excellent solution to our problem of not being able to drag our eyes from our screens, even to watch our son clambering over a climbing frame.

It’s not augmented reality, which purports to enrich our lives by adding information to them.  It’s a recognition that our reality is already pretty hemmed in, squeezed into a 7 by 5 cm frame, and so it tries to bring a touch of the real world into that zone.  I believe this kind of innovation should be built into every device, allowing us to at least get a glimmer of the real world.

Indeed, there are signs that we’re closer to this than we might expect. Samsung last month unveiled what may be the world’s first transparent laptop display, meaning you can see through it both when it’s turned on and when it’s turned off. I don’t pretend that it’s a good solution to the growing impoverishment of our lives, which is why I have no hesitation in calling this inversion of augmented reality ‘diminished reality.’

And now, if you’ll excuse me, my daughter is making funny faces at me through the screen, so I’d better grab a photo of it for my Facebook page.

How Long Was the iPhone Location Vulnerability Known?

I’m very intrigued by the Guardian’s piece “iPhone keeps record of everywhere you go”, but I’m wondering how new this information is, and whether other, less transparent folk have already been using this gaping hole. Charles Arthur writes:

Security researchers have discovered that Apple’s iPhone keeps track of where you go – and saves every detail of it to a secret file on the device which is then copied to the owner’s computer when the two are synchronised.

The file contains the latitude and longitude of the phone’s recorded coordinates along with a timestamp, meaning that anyone who stole the phone or the computer could discover details about the owner’s movements using a simple program.

For some phones, there could be almost a year’s worth of data stored, as the recording of data seems to have started with Apple’s iOS 4 update to the phone’s operating system, released in June 2010.

But it seems that folk on a forum have already been talking about it since January: Convert Iphone 4 Consolidated.db file to Google earth:

Someone called Gangstageek asked on Jan 6:

Is there a way to, or a program (for the PC) that can read the Consolidated.db file from the Iphone 4 backup folder and accurately translate the cell locations and timestamps into Google earth?

Other forum members helped him out. Indeed, an earlier forum thread, from November 2010, looked at the same file. kexan wrote on Nov 26:

We are currently investigating an iphone used during a crime, and we have extracted the geopositions located within consilidated.db for analysis. During this we noticed that multiple points have the same unix datestamp. We are unsure what to make of this. Its kind of impossible to be on several locations at once, and the points are sometimes all over town.

Going back even further, Paul Courbis wrote on his site (translated from the French), including a demo:

Makes it relatively easy to draw the data on a map to get an idea of the places visited by the owner of the iPhone.

I don’t have an iPhone so I’ve not been able to test this. But I’m guessing that this issue may already have been known for some time by some folk. Indeed, there are tools in use by police and others that may already have exploited this kind of vulnerability.
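For the curious, here’s a minimal sketch of the kind of ‘simple program’ Arthur describes, one that prints a timeline of the recorded positions. It assumes, based on the public write-ups, that consolidated.db is an ordinary SQLite database with a CellLocation table containing Timestamp, Latitude and Longitude columns, and that the timestamps count seconds from Apple’s 2001 epoch; if your copy differs, adjust accordingly.

```python
import sqlite3
from datetime import datetime, timedelta

# Path to the consolidated.db copied from an iTunes backup (illustrative only).
DB_PATH = "consolidated.db"

# Apple's Core Data timestamps count seconds from 1 January 2001 UTC,
# not the Unix epoch, so we offset accordingly.
MAC_EPOCH = datetime(2001, 1, 1)

def dump_locations(db_path):
    """Print each recorded position with a human-readable timestamp."""
    conn = sqlite3.connect(db_path)
    try:
        # Table and column names are assumptions based on public descriptions
        # of the iOS 4 location cache; adjust if your copy differs.
        rows = conn.execute(
            "SELECT Timestamp, Latitude, Longitude FROM CellLocation ORDER BY Timestamp"
        )
        for ts, lat, lon in rows:
            when = MAC_EPOCH + timedelta(seconds=ts)
            print("%s  %.5f, %.5f" % (when.strftime("%Y-%m-%d %H:%M:%S"), lat, lon))
    finally:
        conn.close()

if __name__ == "__main__":
    dump_locations(DB_PATH)
```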

The Sleazy Practice of Internal Linking


It’s a small bugbear but I find it increasingly irritating, and I think it reflects a cynical intent to mislead on the part of the people who do it, so I’m going to vent my spleen on it: websites that point links in their content not at the site being written about, but at another page on their own website.

An example: TechCrunch reviews Helium, a directory of user-generated articles. But click on the word Helium, and it doesn’t take you, as you might reasonably expect, to the website Helium, but to a TechCrunch page about Helium. If you want to actually find a link to the Helium page, you need to go there first.

I find this misleading, annoying and cynical on the part of the websites that do it. First off, the time-honored tradition of the net dictates that a linked website name should point to the website itself. Secondly, TechCrunch and its ilk are clearly trying to keep eyeballs by forcing readers to go to another internal page, with all the ads, before finding the link itself. Thirdly, because I’m a PersonalBrain user and I like to drag links into my plex (that’s what we PBers call it), it’s a pain.

Fourthly, it’s clearly a policy that even TechCrunch has trouble enforcing. In the case above, the original post had the word Helium linking directly to the website itself, but it was subsequently edited to link to the internal TechCrunch page (as noticed by a reader of the site). If you subscribe to the TechCrunch feed, that’s what you’ll still see.


TechCrunch isn’t alone in this, by the way. StartupSquad does it too (a particularly egregious example here has five links in a row which don’t link to the actual sites). For an example of how it should be done, check out Webware, which links the site’s name to the site itself, with an internal review as a parenthetical link following. Like this, in Rafe Needleman’s look at companionship websites: click on Hitchsters and you go to the site; click on ‘review’ and you go to a review.


It’s a nuisance more than a crime, but to me it still undermines a central tenet of the web: links should be informative and not misleading. If you are linking to anything other than what your reader would expect, then you’re just messing around with them.

The Limits of the Ribbon Revolution

[Screenshot: the Ribbon interface]

The Microsoft Office Ribbon is really starting to take off. I’ve seen it in three applications in the past two days: a mindmapping program (I’m under embargo so can’t say which one), SmartDraw 2007, and even something like Mindomo (tenminut.es review here), an online mindmapping program. Here are a bunch of other programs using the Ribbon: Essential Studio, Radius, SandRibbon, etc.

I was positive about the ribbon in a recent WSJ.com column (subscription only), which has led to death threats and old friends no longer talking to me. But I felt, and still feel, that the ribbon is a big step forward in interface design. I’m just not sure it will last. Here’s why.

Limits: The ribbon only makes sense for some programs. But which? The obvious distinction is between navigation and creativity/productivity: it makes no sense for a browser, for example. But why not for something like Visio? I have to assume it’s not laziness or lack of time that explains why quite a few programs in the Office 2007 stable don’t actually use the ribbon (besides Visio, Outlook and Publisher don’t either), so presumably it was decided the ribbon didn’t suit those programs. So we’re stuck now with two competing interface approaches, menus and ribbons. Is that making things simpler?

Licensing: Microsoft will only license it to non-competing programs. So don’t expect to see it in all the programs that might most benefit from it. Instead, expect to see OpenOffice et al develop something like a ribbon which is similar enough to look, well, similar, but not similar enough to flatten the learning curve. It may already have happened, since one or two of the ribbons I’ve seen don’t feel very much like Microsoft’s design. (How many of these Ribbon look-alikes are actually licensees?)

Poor design: The ribbon is designed, among other things, to increase the amount of space available for you to do stuff. Some programmers don’t seem to get this: the new SmartDraw, for example, has no way I can see of minimizing the Ribbon, severely reducing the amount of space to actually draw in. Ditto with Mindomo. (And as several readers of the column pointed out, why can’t we move the Ribbon around the screen or customize it? What is this? 1992?)

[Screenshot: Mindomo’s ribbon]

Standards: The ribbon is supposed to be intuitive, and it is. Once you get it, there are very few commands that are elusive. The commands are grouped together well — more intuitively than the old menu system. But inevitably, as more programs adopt the ribbon approach, users will get confused and mis-remember where functions live. Mindomo, for example, doesn’t really follow the logic of other ribbon interfaces (‘Topic’ is the second ribbon name, and ‘Task Info’ the third; no logic for me there, and I’m a seasoned mindmapper). SmartDraw has just one main ribbon and then smaller sub-ribbons on the right, which makes some sense but requires a whole new attitude, not to mention weird mouse movements to get there:

[Screenshot: SmartDraw’s ribbon]

Opting out: The big complaint about the Ribbon Revolution is that there’s no opting out of it. In none of the programs I have looked at is there a way to say “Ribbon? No thanks, give me back my menus. It took me 15 years to learn them and I want to stick with them.” I think it’s a mistake not to give people that option.

I’m not saying the Ribbon is a bad idea. I think it’s great in Word and Excel. But it’s already beginning to feel that it should have been more flexible in its design. If Microsoft is serious about making this the new user interface, then it needs to take a long hard look at how it’s used beyond the narrow Office cubicle cluster.

What Goes Around…

I’m belatedly playing with Microsoft’s new Windows Live Writer. I like it, but then I’ve always been a fan of blog writing tools. Here’s a list of them I started keeping, although I’m pretty sure it’s out of date by now.

But does it not strike you as somewhat strange that we’ve gotten to this point? I mean, those blog writing tools were available nearly three years ago, doing pretty much what Windows Live Writer does now — WYSIWYG authoring, HTML source code editing, Web preview mode, adding photos, compatibility with different blog services, some weird formatting and error messages, and so on. In fact the only thing it’s got that the others don’t have, map publishing, doesn’t yet work. Oh, and it’s free. But otherwise Dmitry Chestnykh of BlogJet seems to have a point when he claims Microsoft has ripped off his software.

So is this where Web 2.0 has taken us? All the way back to a small software tool that lets us write our blog postings offline so we can upload them later?

How to Send Screenshots to Flickr

Here’s a cool plug-in that allows you to capture and post screenshots directly to Flickr: SnagIt Profile for Flickr

Uploading a capture to Flickr, and then sharing a link to that image, is much easier and more universal than sharing image files as attachments. Using the SnagIt to Flickr profiles, you get the best of both tools – SnagIt’s high-quality screen captures with the ease of sharing on Flickr.

SnagIt is made by the same folks who make Camtasia, one of the better screencasting programs. So it’s not surprising there’s a screencast of how it works too. (Here’s a directory of screencasting tools in case you missed it.)
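If you’d rather roll your own, the same idea can be sketched in a few lines of Python. This is purely illustrative, assuming the third-party Pillow and flickrapi packages and a Flickr API key and secret of your own; the file name and title below are just placeholders.

```python
# A rough sketch: grab the screen with Pillow and push it to Flickr
# using the third-party flickrapi package. Key and secret are placeholders.
import flickrapi
from PIL import ImageGrab

API_KEY = "your-flickr-api-key"        # placeholder
API_SECRET = "your-flickr-api-secret"  # placeholder

def capture_and_upload(path="screenshot.png", title="Screen capture"):
    # Capture the full screen and save it locally (Windows/macOS only).
    ImageGrab.grab().save(path)

    # Authorise against Flickr (opens a browser the first time) and upload.
    flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET)
    flickr.authenticate_via_browser(perms="write")
    flickr.upload(filename=path, title=title, is_public=0)

if __name__ == "__main__":
    capture_and_upload()
```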

Google Finance Raises the Bar (Chart)

Google’s new Finance site is worth a look, but it’s only when you look at the graphs that you realise how good it is. Take this snippet from the chart on the Apple page:

[Screenshot: Google Finance chart for Apple]

The overall chart is a great example of beauty and graphical simplicity. The letters correspond to a scrolling column of stories that tie in, date-wise, to the stock price. Drag the main chart around and the column of stories moves up or down to reflect the period now visible in the chart window. The bit at the top, meanwhile, is a longer timeline-plus-chart which can be used to narrow or expand the larger chart below, by grabbing the two little vertical tweezer things (there’s probably a fancy name for them) and pulling them together or apart.

There’s nothing revolutionary in here, but it’s executed very well, and to me is the first time that Google has really shown that it intends to eat into Yahoo!’s core market big time. There’s lots of other data on the page — all that one would need to make an investment decision, I’d wager — but the chart is the thing. What would be an afterthought elsewhere turns into a really dynamic tool. Exciting stuff.

How to Split Your Screen Down the Middle

Here’s something for the directory of monitor extenders — stuff that increases the size, scope or general bendiness of your screen — SplitView, from the guys who brought you DiskView:

SplitView increases productivity by making it easy to work with two applications side by side. It helps make full use of your high resolution monitor and gives the benefit of dual-monitors without their associated cost.

Given it costs $19, that statement is indeed true. The problem is simple. Having two monitors is great — if you haven’t done it yet, you haven’t lived — but it’s also neat because you can pretty much keep them separate, a bit like having two desks to play with. That’s because Windows treats the two screens as one for some functions – moving windows and whatnot — but as two for functions like maximising programs etc. Very useful if you’re moving between two documents, or dragging and dropping text using the mouse.

But what happens if you have one supersized monitor, with high resolution? You have all that real estate, but not the same duality, if you get my drift. This is where SplitView jumps in. A small program that incorporates itself into the pull-down resize menu in the top left-hand corner of a window (right-clicking on its icon in the taskbar at the bottom of the screen has the same effect), SplitView lets you make a program take up the left or right half of the screen in one move (or via keyboard shortcuts). So now you have two monitors in one.
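To get a feel for the trick SplitView automates, here’s a minimal, Windows-only sketch in Python using ctypes and standard Win32 calls; it simply snaps whatever window is in the foreground to the left half of the primary screen. It’s an illustration of the idea, not how SplitView itself is built.

```python
import ctypes

user32 = ctypes.windll.user32  # standard Win32 user32.dll

def snap_left():
    """Resize the foreground window to fill the left half of the primary screen."""
    hwnd = user32.GetForegroundWindow()
    screen_w = user32.GetSystemMetrics(0)  # SM_CXSCREEN: primary screen width
    screen_h = user32.GetSystemMetrics(1)  # SM_CYSCREEN: primary screen height
    user32.MoveWindow(hwnd, 0, 0, screen_w // 2, screen_h, True)

if __name__ == "__main__":
    snap_left()
```

SplitView obviously goes well beyond this (menu integration, keyboard shortcuts, remembering placement), but the underlying move is just this kind of window resizing.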

I can imagine this would also be useful for those of us used to dual monitors but forced into single screendom when on the road. Now your laptop can be split in two, making it easy to drag and drop and stuff. Its author, Rohan, says he wrote it “as a ‘me-ware’ – something i needed myself, and then productized it.” Good productizing, Rohan.