Let’s face it: we’re not about to have driverless cars in our driveways any time soon. Soonest: a decade. Latest: a lot longer, according to the people I’ve spoken to.
But in some ways, if you’ve got the dosh, you can already take your foot off the gas and your hands off the steering wheel. Higher-end cars have what are called active safety features, which warn you if you stray out of your lane, or if you’re about to fall asleep, or which let the car take over the driving if you’re in heavy, slow-moving traffic. Admittedly these are just glimpses of what could happen, and take the onus off you for only a few seconds, but they’re there. Already.
The thinking behind all this: More than 90% (roughly, depends who you talk to) of all accidents are caused by human error. So, the more we have the car driving, the fewer the accidents. And there is data that appears to support that. The US-based Insurance Institute for Highway Safety found that forward collision warning systems led to a 7% reduction in collisions between vehicles.
But that’s not quite the whole story. For one thing, performing these feats isn’t easy. Getting a car, for example, to recognise a wandering pedestrian is one of the thorniest problems a scientist working in computer vision could tackle, because you and I may look very different — unlike, say, another car, or a lamppost, or a traffic sign. We’re tall, short, fat, thin, we wear odd clothes and we are unpredictable — just because we’re walking towards the kerb at a rate of knots, does that mean we’re about to walk into the road?
Get this kind of thing wrong and you might have a top-of-the-range Mercedes Benz slam on the brakes for nothing. The driver might forgive the car’s computer the first time, but not the second. And indeed, this is a problem for existing safety features — is that beep warning you that you’re reversing too close to an object, or that you haven’t put your seatbelt on, or that you’re running low on windscreen fluid, or because you’re straying into oncoming traffic? We quickly filter out warning noises and flashing lights, as airplane designers have found to their (and their pilots’) cost.
Indeed, there’s a school of thought that says we’re making a mistake by even partially automating this kind of thing. For one thing, we need to know exactly what is going on: are we counting on our car to warn us about things that might happen and, in the words of the tech industry, “mitigate for us”? Or are these interventions just things that might happen some of the time, if we’re lucky, but not something we can rely on?
If the latter, what exactly is the point? What would be the point of an airbag that can’t be counted on to deploy, or seatbelts that only work some of the time? And then there’s the bigger, philosophical issue: for those people learning to drive for the first time, what are these cars telling them — that they don’t have to worry too much about sticking to lanes, because the car will do it for them? And what happens when they find themselves behind the wheel of a car that doesn’t have those features?
Maybe it’s a good thing we’re seeing these automated features now — because it gives us a chance to explore these issues before the Google car starts driving itself down our street and we start living in a world, not just of driverless cars, but of cars that people don’t know how to drive.
This is a piece I wrote for the BBC World Service, based on a Reuters story.