Reuters: Making cars safer: have the driver do less

A piece I wrote for Reuters. BBC version here

Making cars safer: have the driver do less

By Jeremy Wagstaff

SINGAPORE Tue Nov 11, 2014 4:00pm EST

Nov 12 (Reuters) – As millions of cars are recalled over potentially lethal air bags, designers are trying to reduce the need for the devices altogether – using sensors, radar, cameras and lasers to prevent collisions in the first place.

With driver error blamed for over 90 percent of road accidents, the thinking is that it would be better to have drivers do less of the driving. The U.S.-based Insurance Institute for Highway Safety found that forward-collision warning systems cut vehicle-to-vehicle crashes by 7 percent – not a quantum leap, but a potential life saver. Nearly 31,000 people died in car accidents in 2012 in the United States alone.
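
The arithmetic behind such a forward-collision warning is simple enough to sketch. The snippet below is purely illustrative; the threshold and sensor inputs are assumptions, not any carmaker’s actual logic.

```python
# Minimal sketch of a forward-collision warning check (illustrative only;
# the threshold and inputs are assumptions, not any manufacturer's logic).

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:          # not closing on the car ahead
        return float("inf")
    return gap_m / closing_speed_mps

def should_warn(gap_m: float, own_speed_mps: float, lead_speed_mps: float,
                warn_threshold_s: float = 2.5) -> bool:
    """Warn the driver when the time to collision drops below a threshold."""
    ttc = time_to_collision(gap_m, own_speed_mps - lead_speed_mps)
    return ttc < warn_threshold_s

# Example: 30 m behind a car ahead, closing at 5 m/s -> 6 s to impact, no warning yet.
print(should_warn(gap_m=30.0, own_speed_mps=25.0, lead_speed_mps=20.0))  # False
```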

“Passive safety features will stay important, and we need them. The next level is now visible. Autonomous driving for us is clearly a strategy to realise our vision for accident-free driving,” said Thomas Weber, global R&D head at Mercedes-Benz.

While giving a computer full control of a car is some way off, there’s a lot it can do in the meantime.

For now, in some cars you can take your foot off the pedal and hands off the wheel in slow-moving traffic, and the car will keep pace with the vehicle in front; it can jolt you awake if it senses you’re nodding off; alert you if you’re crossing into another lane; and brake automatically if you don’t react to warnings of a hazard ahead.

How close this all comes to leaving the driver out of the equation was illustrated by an experiment at Daimler last year: adding just a few off-the-shelf components to an S-class Mercedes, a team went on a 100 km (62 mile) ride in Germany without human intervention. “The project was about showing how far you can go, not just with fancy lasers, but with stuff you can buy off the shelf,” said David Pfeiffer, one of the team.

Such features, however, require solving thorny problems, including how to avoid pedestrians.

While in-car cameras are good at identifying and classifying objects, they don’t work so well in fog or at night. Radar, on the other hand, can calculate the speed, distance and direction of objects and works well in limited light, but can’t tell a pedestrian from a pole. While traffic signs are stationary and similar in shape, people are often neither.

For a better fix there’s LiDAR – which uses laser light much as radar uses radio waves – creating a picture of objects from the reflected pulses. Velodyne’s sensors on Google’s autonomous car, for example, use up to 64 laser beams spinning 20 times per second to create a 360-degree, 3D view of up to several hundred metres around the car.
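
To give a feel for how those spinning beams become a 3D picture: each laser return is just a distance plus two angles, which trigonometry turns into a point in space. The sketch below is illustrative only; the beam angles are made up, not Velodyne’s specification.

```python
import math

# Rough sketch of how a spinning multi-beam LiDAR return becomes a 3D point.
# Illustrative only: the beam elevation angles below are assumptions.

def lidar_return_to_point(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one laser return (distance + two angles) to x, y, z in metres."""
    az = math.radians(azimuth_deg)       # angle around the vertical axis (the spin)
    el = math.radians(elevation_deg)     # fixed tilt of this particular beam
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# One full rotation of a hypothetical 64-beam unit: every beam fires at many
# azimuths, and each return becomes one point in the 360-degree cloud.
beams_deg = [i * 0.5 - 16.0 for i in range(64)]     # assumed elevations, -16.0 .. +15.5 deg
cloud = [lidar_return_to_point(50.0, az, el)
         for az in range(0, 360, 5)                 # coarse azimuth steps for the example
         for el in beams_deg]
print(len(cloud), "points from one sweep in this toy example")
```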

Mercedes’ ‘Stop-and-Go Pilot’ feature matches the speed of the car in front in slow traffic and adjusts steering to stay in lane using two ultrasonic detectors, five cameras and six radar sensors. “This technology is a first major step,” said R&D chief Weber. “(However distracted the driver is), the system mitigates any accident risk in front.”
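The control idea behind such traffic-jam assistants can be caricatured in a few lines: nudge the car’s speed toward that of the vehicle ahead while holding a time gap. The sketch below is just the flavour of it; the gains and gap are assumptions, not Mercedes’ actual control law.

```python
# Toy longitudinal controller in the spirit of traffic-jam assistants:
# follow the car ahead while keeping a time gap. Gains, limits and the gap
# are assumptions for illustration, not Mercedes' Stop-and-Go Pilot.

def follow_acceleration(gap_m: float, own_speed: float, lead_speed: float,
                        desired_time_gap_s: float = 1.8,
                        k_gap: float = 0.3, k_speed: float = 0.8) -> float:
    """Return a commanded acceleration (m/s^2) from gap and speed errors."""
    desired_gap = max(5.0, desired_time_gap_s * own_speed)   # never closer than 5 m
    gap_error = gap_m - desired_gap
    speed_error = lead_speed - own_speed
    accel = k_gap * gap_error + k_speed * speed_error
    return max(-4.0, min(2.0, accel))                        # clamp to comfortable limits

# Crawling traffic: 5 m behind a car doing 2 m/s while we do 3 m/s
# -> a small negative number, i.e. gentle braking.
print(follow_acceleration(gap_m=5.0, own_speed=3.0, lead_speed=2.0))
```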

HOLY GRAIL

The next stage, experts say, is a road network that talks to cars, and where cars talk to each other. General Motors has said its 2017 Cadillac CTS will exchange location, direction and speed data with oncoming vehicles via a version of Wi-Fi.
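
The payload such car-to-car systems exchange is modest: position, heading, speed and a timestamp, broadcast several times a second. The sketch below shows the kind of message involved; the field names and JSON encoding are illustrative assumptions, not GM’s or any standard’s actual format.

```python
import json
import time
from dataclasses import dataclass, asdict

# Illustration of the kind of status message vehicle-to-vehicle systems
# broadcast. Field names and the JSON encoding are assumptions for the
# example, not the actual message format used on the road.

@dataclass
class VehicleStatusMessage:
    vehicle_id: str
    timestamp: float      # seconds since the epoch
    latitude: float
    longitude: float
    heading_deg: float    # direction of travel, 0 = north
    speed_mps: float

def encode(msg: VehicleStatusMessage) -> bytes:
    """Serialise a status message for broadcast over the radio link."""
    return json.dumps(asdict(msg)).encode("utf-8")

def closing_fast(own: VehicleStatusMessage, other: VehicleStatusMessage,
                 threshold_mps: float = 30.0) -> bool:
    """Crude oncoming-vehicle check: roughly opposed headings, high combined speed."""
    opposed = abs(((own.heading_deg - other.heading_deg) % 360) - 180) < 30
    return opposed and (own.speed_mps + other.speed_mps) > threshold_mps

msg = VehicleStatusMessage("car-42", time.time(), 42.33, -83.05, 90.0, 18.0)
print(len(encode(msg)), "bytes to broadcast")
```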

Other approaches include using cameras to monitor the driver. Abdelaziz Khiat, at Nissan Motor’s research centre in Japan, uses cameras to track the driver’s face to detect yawns, a drooping head suggesting drowsiness, or frowns that may indicate the onset of road rage.
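
Nissan hasn’t published its algorithm, but the gist – flag a driver whose eyes stay shut or whose head droops for more than a blink’s worth of frames – can be caricatured like this, with the thresholds and per-frame inputs being assumptions for illustration.

```python
from collections import deque

# Crude sketch of camera-based drowsiness flagging. The thresholds and the
# idea of feeding per-frame face measurements are illustrative assumptions;
# Nissan's actual system is not public.

class DrowsinessMonitor:
    def __init__(self, frames_per_second: int = 30, window_s: float = 2.0):
        self.window = deque(maxlen=int(frames_per_second * window_s))

    def update(self, eye_openness: float, head_pitch_deg: float) -> bool:
        """Feed one frame's measurements; return True if the driver looks drowsy.

        eye_openness: 0.0 (closed) .. 1.0 (wide open), from a face tracker.
        head_pitch_deg: positive when the head droops forward.
        """
        drowsy_frame = eye_openness < 0.2 or head_pitch_deg > 25.0
        self.window.append(drowsy_frame)
        # Alert only if most recent frames look drowsy, to ride out normal blinks.
        return len(self.window) == self.window.maxlen and \
            sum(self.window) / len(self.window) > 0.8

monitor = DrowsinessMonitor()
for _ in range(60):                       # two seconds of nearly closed eyes
    alert = monitor.update(eye_openness=0.1, head_pitch_deg=5.0)
print("wake the driver:", alert)
```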

These advanced safety features are fine – if you can afford them. The Insurance Institute survey found that forward collision warning systems were available in fewer than one in every 20 registered vehicles in 2012.

In key markets across emerging Asia, says Klaus Landhaeusser, regional head of government relations at Bosch, many first-time car buyers don’t want to spend more than $2,500. For that, he said, “you won’t be able to introduce any safety features.”

Road conditions are also key. “It will be a long time before we have software and algorithms that can see everything happening” on the roads in emerging markets, said Henrik Kaar, at auto safety equipment market leader Autoliv Inc.

And not everyone welcomes this progress. Some drivers complain the technology is intrusive or inconsistent. “If a safety feature is seen as intrusive or bothersome, a driver may try to circumvent or disable it,” said Chris Hayes, a vice president at insurer Travelers.

The key appears to be ensuring that while humans remain in charge of the vehicle, they have good information and features that correct the errors they make.

“For a long time, people thought it was an all-or-nothing jump between humans in charge and fully autonomous vehicles,” said Michael James, senior research scientist at Toyota Motor’s U.S. technical centre. “I don’t think that’s the case anymore. People see it as a more gradual transition.”

 

(Additional reporting by Norihiko Shirouzu; Editing by Ian Geoghegan)

BBC: Cars we can’t drive

Let’s face it: we’re not about to have driverless cars in our driveway any time soon. Soonest: a decade. Latest: a lot longer, according to the folk I’ve spoken to.

But in some ways, if you’ve got the dosh, you can already take your foot off the gas and hands off the steering wheel. Higher-end cars have what are called active safety features, such as warning you if you stray out of your lane or if you’re about to fall asleep, or letting the car take over the driving in heavy, slow-moving traffic. Admittedly these are just glimpses of what could happen, and take the onus off you for only a few seconds, but they’re there. Already.

The thinking behind all this: More than 90% (roughly, depends who you talk to) of all accidents are caused by human error. So, the more we have the car driving, the fewer the accidents. And there is data that appears to support that. The US-based Insurance Institute for Highway Safety found that forward collision warning systems led to a 7% reduction in collisions between vehicles.

But that’s not quite the whole story. For one thing, performing these feats isn’t easy. Getting a car, for example, to recognise a wandering pedestrian is one of the thorniest problems a computer-vision scientist could tackle, because you and I may look very different — unlike, say, another car, or a lamppost, or a traffic sign. We’re tall, short, fat, thin, we wear odd clothes and we are unpredictable — just because we’re walking towards the kerb at a rate of knots, does that mean we’re about to walk into the road?
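
For what it’s worth, getting some pedestrian detection running is the easy part — OpenCV has shipped a classic HOG-based people detector for years. The hard part is that every detection comes with a confidence score, and deciding where to draw the line is exactly where the trouble starts. A rough sketch, in which the image filename and the confidence cut-off are assumptions:

```python
import cv2          # OpenCV's classic HOG + SVM people detector (pre-deep-learning)
import numpy as np

# Off-the-shelf pedestrian detection: easy to run, but every box comes with a
# confidence weight, and low-confidence hits are exactly the false alarms the
# article worries about. The cut-off below is an arbitrary choice.

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street_scene.jpg")              # any dashcam-style still image
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
weights = np.asarray(weights).reshape(-1)

for (x, y, w, h), weight in zip(boxes, weights):
    if weight > 0.5:                                # assumed confidence cut-off
        print(f"possible pedestrian at x={x}, y={y}, score={weight:.2f}")
```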

Get this kind of thing wrong and you might have a top-of-the-range Mercedes-Benz slam on the brakes for nothing. The driver might forgive the car’s computer the first time, but not the second. And indeed, this is a problem for existing safety features — is that beep warning you that you’re reversing too close to an object, that you haven’t put your seatbelt on, that you’re running low on windscreen fluid, or that you’re straying into oncoming traffic? We quickly filter out warning noises and flashing lights, as airplane designers have found to their (and their pilots’) cost.

Indeed, there’s a school of thought that says we’re making a mistake by even partially automating this kind of thing. For one thing, we need to know exactly what is going on: are we counting on our car to warn us about things that might happen and, in the words of the tech industry, “mitigate for us”? Or are these interventions just things that might happen some of the time, if we’re lucky, but not something we can rely on?

If it’s the latter, what exactly is the point? What would be the point of an airbag that can’t be counted on to deploy, or seatbelts that only work some of the time? And then there’s the bigger, philosophical issue: what are these cars telling people learning to drive for the first time? That they don’t have to worry too much about sticking to lanes, because the car will do it for them? And what happens when they find themselves behind the wheel of a car that doesn’t have those features?

Maybe it’s a good thing we’re seeing these automated features now — because it gives us a chance to explore these issues before the Google car starts driving itself down our street and we start living in a world, not just of driverless cars, but of cars that people don’t know how to drive.

This is a piece I wrote for the BBC World Service, based on a Reuters story.