The Connections Our Buttons Make

May 16, 2007

Once we create all that attention data, think of the wacky things we can do with it.

I’ve been banging on about attention data for a while now, and I apologise. (For an explanation and a bit of background, go here.) But I can’t help seeing stuff through that prism nowadays. Take this camera called Buttons, which doesn’t take pictures but instead records the time, and then searches the Internet for photographs taken at that very second:

It is a camera that will capture a moment at the press of a button. However, unlike a conventional analog or digital camera, this one doesn’t have any optical parts. It allows you to capture your moment but in doing so, it effectively separates it from the subject. Instead, as you will memorize the moment, the camera memorizes only the time and starts to continuously search on the net for other photos that have been taken in the very same moment.

Basically the camera is a phone inside a sort of camera case. Press the button and the phone searches Flickr for photos taken at that moment. (Of course, this may take a little time.)
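For the curious, the plumbing is simple enough to sketch. Flickr’s public API offers a flickr.photos.search method that can filter on taken dates, which is presumably the sort of query Buttons fires off. Here’s a minimal Python sketch of that kind of lookup; the API key is a placeholder, and the parameter choices are my guess at the general approach, not Buttons’ actual code:

```python
import time

import requests

FLICKR_REST = "https://api.flickr.com/services/rest/"
API_KEY = "YOUR_FLICKR_API_KEY"  # placeholder; Flickr requires a registered key


def mysql_datetime(ts: float) -> str:
    """Format a unix timestamp as the MySQL-style datetime Flickr expects."""
    return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(ts))


def photos_taken_at(moment: float, window_seconds: int = 1) -> list:
    """Return Flickr photos whose 'taken' date falls within
    window_seconds of the given moment."""
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "min_taken_date": mysql_datetime(moment - window_seconds),
        "max_taken_date": mysql_datetime(moment + window_seconds),
        "format": "json",
        "nojsoncallback": 1,
    }
    resp = requests.get(FLICKR_REST, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["photos"]["photo"]


# "Press the button": record the current moment, then go looking.
if __name__ == "__main__":
    for photo in photos_taken_at(time.time()):
        print(photo["id"], photo["title"])
```

Worth noting: a photo taken at that very second only becomes findable once its owner uploads it, which is why the search “may take a little time” — and Flickr’s taken dates come from each camera’s own clock, so exact-second matches are inherently fuzzy.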

A lovely idea and a fascinating one. I seem to recall a photography project here where individuals were given cameras and told to take photos at the exact same moment around the city. Danged if I can remember what it was called. But as Tim O’Reilly points out in the comments on a post by Nikolaj Nyholm, it has even greater potential beyond the variable of time:

I imagine that with geolocation, you could potentially go one better. Imagine a camera that does take a picture, but also initiates a search for all other pictures taken at that same location (and optionally at the same time of day/year.)

Less poetic a vision than that of Sascha Pohflepp, creator of Buttons, but possibly more relevant to many users. I’d certainly love to see Google Earth and the like make more use of time in their layers, so that it’s possible to see historical changes in a place (say, 3D models of old buildings that no longer exist, or photos like those extraordinary collections created by UNEP depicting changes in the environment).
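O’Reilly’s variant swaps the time axis for (or combines it with) a place. The same flickr.photos.search method accepts lat, lon and radius parameters, so a location-based Buttons might look something like the sketch below. Again, this is a rough sketch rather than anyone’s actual implementation; Flickr asks that geo queries be paired with a limiting parameter such as a date bound, hence the min_taken_date:

```python
import requests

FLICKR_REST = "https://api.flickr.com/services/rest/"
API_KEY = "YOUR_FLICKR_API_KEY"  # placeholder, as before


def photos_taken_near(lat, lon, radius_km=1.0, since="2007-01-01 00:00:00"):
    """Return geotagged Flickr photos within radius_km of a point.
    Flickr wants geo queries paired with a limiting agent, hence
    the date bound (a MySQL-style datetime string)."""
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "lat": lat,
        "lon": lon,
        "radius": radius_km,  # kilometres; Flickr caps this at 32
        "min_taken_date": since,
        "format": "json",
        "nojsoncallback": 1,
    }
    resp = requests.get(FLICKR_REST, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["photos"]["photo"]


# e.g. everything shot within a kilometre of Trafalgar Square:
# photos_taken_near(51.508, -0.128)
```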

But the main idea here is to use the metadata embedded in attention streams (in this case, when or where a photo was taken) and match it with metadata from other streams. A bit like Last.fm et al., where similarities are found between the music two quite separate people are listening to. The goal, as Sascha puts it, is to subordinate the device to the bigger purpose of connecting people:

Even more so, it reduces the cameras to their networked buttons in order to create a link between two individuals.
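To make the matching idea concrete, here’s a toy sketch — nothing to do with Sascha’s actual implementation: given two people’s photo timestamps, pair up any shots taken within a few seconds of each other, Last.fm-style.

```python
from bisect import bisect_left


def shared_moments(stream_a, stream_b, tolerance=2.0):
    """Toy matcher: given two sorted lists of unix timestamps (one per
    person's photo stream), return the pairs taken within `tolerance`
    seconds of each other -- the 'link between two individuals'."""
    matches = []
    for ts in stream_a:
        i = bisect_left(stream_b, ts)
        for j in (i - 1, i):  # nearest neighbours on either side
            if 0 <= j < len(stream_b) and abs(stream_b[j] - ts) <= tolerance:
                matches.append((ts, stream_b[j]))
    return matches


# Two people pressing their buttons around the same moment:
alice = [1179302400.0, 1179306000.0, 1179309600.0]
bob = [1179302401.5, 1179310000.0]
print(shared_moments(alice, bob))  # -> [(1179302400.0, 1179302401.5)]
```

A real service would do this with an index over millions of streams rather than a scan, of course, but the principle — join on shared metadata, surface the human connection — is the same.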

The possibilities are endless, but it’s too early in the morning for me to think of any.
