
Driverless Google car hits a bus, but don't panic

ITEM: A Google self-driving car hit a bus in California last month.

Remain calm.

According to media reports – based on the accident report California’s Department of Motor Vehicles released publicly on Monday – the Google car ran into the bus at an intersection in Mountain View, CA, while trying to avoid sandbags in the road.

While the Google Lexus SUV was in autonomous mode at the time, a driver was also in the car. The driver had the option to take manual control, but thought the bus would slow down as the car pulled out into the intersection.

It didn’t.

A few caveats are in order at this point:

1. No one was hurt.

2. Google’s car was moving at 2 mph and the bus at 15 mph. So we’re not talking action-movie-level car crashes here.

3. Google cars have been involved in about a dozen minor accidents since the company first started testing them on the streets in 2014 – and all were the fault of the other drivers involved.

However, this is the first time the Google car was at fault. And while the fault could technically be assigned to the driver for not overriding the autonomous controls, legally it could also lie with the car’s artificial intelligence, according to a letter the US National Highway Traffic Safety Administration (NHTSA) sent to Google earlier this month.

That letter not only creates a strange legal framework in which the car manufacturer or the AI software developer could be held liable for road accidents involving driverless cars, but also affects how driverless cars will be designed, Technology Review reports:

Google’s efforts to create a fully automated vehicle without a steering wheel or pedals had previously been stymied by California regulations stipulating that a “driver” must be behind the wheel in case of emergency.

In a letter addressed to Chris Urmson, the head of the self-driving car project, the NHTSA essentially overrules that. “If no human occupant of the vehicle can actually drive the vehicle, it is more reasonable to identify the ‘driver’ as whatever (as opposed to whoever) is doing the driving,” the letter reads. “We agree with Google its SDV [self-driving vehicle] will not have a ‘driver’ in the traditional sense that vehicles have had drivers during the last more than one hundred years.”

That raises the question of whether passengers in autonomous vehicles could take control in an emergency if the car has no steering wheel or brake pedal. On the other hand, MIT research scientist Bryan Reimer, who studies self-driving systems, told Technology Review that it’s arguably safer to prevent passengers from intervening:

Reimer’s own research has shown that managing human behavior behind the wheel of a self-driving car may be the most problematic issue of all. Drivers generally do a poor job of monitoring automated systems, and so they are rarely able to retake control quickly.

“Most of the auto industry, and Google, are still treating it as a technology issue,” Reimer says. “But if you want to get innovation to market, the hard stuff is the most unpredictable—us.”

If so, the latest Google car accident shows that we still have a ways to go before driverless cars can handle themselves.

Meanwhile, according to the BBC, Google says it has refined its self-driving algorithm since the bus collision:

"From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future."

Interesting times, eh?