From Forbes today:
NTSB Report On Tesla Autopilot Accident Shows What's Inside And It's Not Pretty For FSD
Sep 6, 2019 | Transportation
I cover robocar technology & previously worked on Google's car team.
[Image: The Tesla Model S after hitting the fire truck. The driver walked away, fortunately. Credit: NTSB]
It seems the world picks on Tesla for crashes, and the release of the NTSB report on a January 2019 Model S Autopilot crash has generated lots of commentary and analysis.
We pick on Tesla because Autopilot is out there driving far more miles than other systems, and it’s an incomplete driver assist system being monitored by ordinary drivers, so it’s going to have a lot more crashes. As a driver assist system, fault for most of these crashes still lies with the supervising driver, but ever since Tesla declared that Autopilot is on the cusp of morphing into a “full self driving” product of some type later this year, it has been natural to examine the sort of mistakes it’s making. Tesla Full Self Driving will be a new system, but it will almost surely use most of the core components found in Autopilot. How those components perform, and how well Tesla improves them, are areas of serious inquiry.
When the NTSB gets involved, as it has in several Tesla crashes, we get a window into the internals of these crashes that we don’t otherwise get. Because of NTSB rules, Tesla is not allowed to talk until the investigation is over, and the investigations into fatalities are involved and still going on. This crash did not involve injuries, but it did involve the Model S crashing into the back of a parked fire truck that was deliberately blocking the left (carpool) lane so that crews could safely help the victim of a motorcycle accident.
This is a scary situation for all vehicles. The fire truck was deliberately stopped in the lane, angled slightly so it would not look like it was actually driving in the lane, making it clear, to humans at least, that it was deliberately closing the lane. Even so, the Tesla found itself following another vehicle which blocked its view of the situation. That leading car, seeing the fire truck, changed quickly into the lane to the right, as you would expect, suddenly revealing the parked fire truck with about 4 seconds left to act.
The Tesla Autopilot and its driver did not react until about 0.5 seconds before the crash. Or rather, the Autopilot reacted in the worst possible way: it sped up. Fortunately the car was going only 20 mph in the traffic jam, and only got up to 30 mph before hitting the truck.
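To put those numbers in perspective, here is a rough back-of-the-envelope calculation. Nothing in it comes from the report beyond the speeds and times cited above; the assumption of roughly constant acceleration is mine.

```python
# Rough distance arithmetic from the figures cited above.
# About 4 seconds of warning, accelerating from ~20 mph to ~30 mph.

MPH_TO_MS = 0.44704          # metres per second in one mile per hour

v_start = 20 * MPH_TO_MS     # ~8.9 m/s when the lead car moved aside
v_impact = 30 * MPH_TO_MS    # ~13.4 m/s at the moment of impact
warning_time = 4.0           # seconds between the reveal and the crash
reaction_margin = 0.5        # seconds before impact when the system finally reacted

# Assuming roughly constant acceleration over those 4 seconds, the car
# covered about its average speed times the time:
distance_to_truck = 0.5 * (v_start + v_impact) * warning_time    # ~45 m

# In the final half second before impact the car covers:
distance_left_at_reaction = v_impact * reaction_margin           # ~6.7 m

print(f"Approximate distance to the truck at the reveal: {distance_to_truck:.0f} m")
print(f"Distance remaining when the system finally reacted: {distance_left_at_reaction:.1f} m")
```

In other words, the truck was only a few car lengths away by the time anything happened.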
Teslas, and systems like them, have problems with vehicles stopped on the road ahead of them, especially when they are revealed with little warning. They have this problem for a few reasons: radar returns from stationary objects are hard to tell apart from returns off signs, bridges, guardrails and other roadside clutter, so adaptive cruise systems tend to discount them; camera-based vision without LIDAR or maps is not yet reliable at judging the distance to a stopped object; and when the object appears with only seconds of warning, there is very little time to recover from a misclassification.
As such, when the lead car veered off, the Tesla decided that the lane in front of it was suddenly wide open. It calculated that there was nobody in front for 120 meters and that it should immediately speed up. (I think that Tesla’s TACC, its Traffic-Aware Cruise Control, is a little too eager about that in general. This also happened in the tragic fatality in Silicon Valley.)
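To make that failure mode concrete, here is a deliberately simplified sketch of how a radar-plus-camera cruise controller can end up accelerating at a parked truck. This is not Tesla’s code; the structure and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    distance_m: float      # range to the object
    rel_speed_ms: float    # closing speed relative to our car
    own_speed_ms: float    # our car's current speed

def is_plausible_lead_vehicle(r: RadarReturn) -> bool:
    """Many adaptive cruise systems discount returns that appear stationary
    relative to the ground, because bridges, signs and guardrails produce
    identical-looking returns. A parked fire truck is exactly such a return."""
    ground_speed = r.own_speed_ms - r.rel_speed_ms
    return ground_speed > 1.0   # moving objects only; stationary ones are ignored

def choose_speed(returns: list[RadarReturn], set_speed_ms: float, own_speed_ms: float) -> float:
    """If no 'valid' lead vehicle is seen, accelerate toward the set speed."""
    leads = [r for r in returns if is_plausible_lead_vehicle(r)]
    if not leads:
        # Lane looks clear (e.g. "nobody in front for 120 meters"): speed up.
        return set_speed_ms
    nearest = min(leads, key=lambda r: r.distance_m)
    return min(set_speed_ms, own_speed_ms - nearest.rel_speed_ms)  # roughly match the lead

# The lead car has just veered away; the only return left is the stopped truck.
truck = RadarReturn(distance_m=40.0, rel_speed_ms=9.0, own_speed_ms=9.0)  # closing at our own speed
print(choose_speed([truck], set_speed_ms=29.0, own_speed_ms=9.0))  # -> 29.0: it accelerates
```

The point is not that any real system is this crude, only that once stationary returns are filtered out, “nothing ahead” and “a parked fire truck ahead” look identical to the speed controller.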
As it sped up, it finally detected that the truck was there. A new interpretation of the camera and radar data convinced it that the radar return from the truck, which it had been dismissing as not really there, was in fact something to worry about. It issued the “forward collision warning” beeps to the driver, who did not react. The Automatic Emergency Braking did not activate yet; it usually gives the driver some time to react first.
Wham.
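That staging, warning first and automatic braking only later, is typically driven by time-to-collision. Here is a minimal sketch of the idea; the thresholds are my own illustrative guesses, not Tesla’s.

```python
def collision_response(distance_m: float, closing_speed_ms: float) -> str:
    """Decide between warning the driver and braking automatically, based on
    time-to-collision (TTC). Thresholds are illustrative only."""
    if closing_speed_ms <= 0:
        return "no action"                     # not closing on anything
    ttc = distance_m / closing_speed_ms
    if ttc < 0.7:
        return "automatic emergency braking"   # driver clearly isn't going to react in time
    if ttc < 2.5:
        return "forward collision warning"     # beep and give the driver a chance to respond
    return "no action"

# Roughly 20 m from a stopped truck while closing at ~13.4 m/s (30 mph):
print(collision_response(distance_m=20.0, closing_speed_ms=13.4))  # -> forward collision warning
```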
There’s also speculation about what the driver was doing. The driver understood Autopilot. There are accusations that the driver was distracted, possibly looking at a phone, and not looking up in the seconds leading to the crash, according to a witness in the next lane. Clearly, the driver didn’t do his job here.
The report also contains the now very common incorrect interpretation of Tesla’s system for detecting hands on the wheel, claiming that “The system detected driver’s hands on the steering wheel for only 78 seconds out of 29 minutes and 4 seconds during which the Autopilot was active.”
Tesla does not have a system to detect hands on the wheel. Instead it detects the application of modest steering force on the wheel. You can, and many drivers do, keep your hands on the wheel without applying steering force, perhaps only briefly applying some force every so often to keep Tesla’s system happy and avoid the warnings. This is not at all out of the ordinary. The driver says he was holding the wheel in what might be deemed a fairly light way, with his hand resting on his knee while holding the wheel so he could torque it from time to time. This is not what one would normally recommend as a “ready to grab” position, but it is what some people do.
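The distinction matters because what gets logged is steering torque, not touch. A toy sketch of why a light but real grip shows up as “hands off” in the log; the numbers are invented for illustration, not Tesla’s calibration.

```python
TORQUE_THRESHOLD_NM = 0.3   # illustrative; a real system's threshold is calibrated and not public

def logs_as_hands_on(measured_torque_nm: float) -> bool:
    """The car can only sense torque applied to the wheel. A hand resting lightly
    on the rim applies almost none, so it is logged as 'hands off' even though
    the driver is physically holding the wheel."""
    return abs(measured_torque_nm) >= TORQUE_THRESHOLD_NM

print(logs_as_hands_on(0.05))  # light resting grip   -> False ("hands not detected")
print(logs_as_hands_on(0.50))  # brief deliberate tug -> True  ("hands detected")
```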
I’ve noticed a disturbing pattern in these incorrect reports about whether a driver had his or her hands on the wheel of a Tesla. I have to wonder why Tesla doesn’t correct this error. The cynic in me wonders if they might like the error, because it makes the drivers who have crashes sound more negligent than they may have been.
That said, there have been calls for Tesla to improve its system for assuring driver attention, as other carmakers have done. This could include using the internal camera to track the driver’s gaze and detect when their eyes have been off the road for too long, which is again something others have done.
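A camera-based attention monitor is conceptually simple: keep a timer of how long the driver’s gaze has been off the road and escalate when it runs too long. A sketch of the idea, with thresholds invented for illustration rather than taken from any particular carmaker:

```python
class GazeMonitor:
    """Track how long the driver's eyes have been off the road and escalate.
    Thresholds are illustrative, not taken from any particular carmaker."""

    def __init__(self, warn_after_s: float = 2.0, disengage_after_s: float = 6.0):
        self.warn_after_s = warn_after_s
        self.disengage_after_s = disengage_after_s
        self.eyes_off_road_s = 0.0

    def update(self, eyes_on_road: bool, dt_s: float) -> str:
        self.eyes_off_road_s = 0.0 if eyes_on_road else self.eyes_off_road_s + dt_s
        if self.eyes_off_road_s >= self.disengage_after_s:
            return "slow down and demand takeover"
        if self.eyes_off_road_s >= self.warn_after_s:
            return "visual/audible attention warning"
        return "ok"

monitor = GazeMonitor()
for _ in range(30):                       # 3 seconds of looking at a phone, sampled at 10 Hz
    status = monitor.update(eyes_on_road=False, dt_s=0.1)
print(status)                             # -> "visual/audible attention warning"
```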
Conclusions
Stopped vehicles continue to be a problem for systems based only on camera+radar. LIDAR easily solves this problem. Maps can also assist greatly with this problem, by telling you the places where fixed objects will produce radar returns that look like stopped vehicles. Tesla avoids both technologies. They believe they will produce computer vision systems using their new hardware that can calculate how far away everything is in a camera image by understanding the image. At present this is not reliable.
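The way a map helps is by whitelisting the places where fixed infrastructure is known to throw back stationary radar returns, so that a stationary return anywhere else can be taken seriously as a possible stopped vehicle. A simplified sketch; the map format and distances are hypothetical.

```python
import math

# Hypothetical map of locations (x, y in metres, in some local frame) where
# bridges, overhead signs and guardrails are known to produce stationary returns.
KNOWN_FIXED_REFLECTORS = [(120.0, 3.5), (410.0, -2.0)]

def likely_stopped_vehicle(return_x: float, return_y: float, tolerance_m: float = 5.0) -> bool:
    """A stationary radar return that is NOT near any mapped fixed object
    deserves attention: it may be a stopped vehicle in the lane."""
    for fx, fy in KNOWN_FIXED_REFLECTORS:
        if math.hypot(return_x - fx, return_y - fy) <= tolerance_m:
            return False   # explained by mapped infrastructure; safe to discount
    return True            # unexplained stationary object: treat as a possible vehicle

print(likely_stopped_vehicle(121.0, 3.0))   # near a mapped sign gantry -> False
print(likely_stopped_vehicle(260.0, 0.0))   # nothing mapped here -> True, slow down
```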
This is not the first Autopilot crash in this precise situation: you’re following a car which veers to the right, suddenly revealing the thing ahead on the road that caused it to veer. Tesla knows about this and is not making enough progress. In addition to the obvious steps of improving perception, Tesla could decide to treat “the car I am following suddenly veers away” as a special caution situation: don’t accelerate quickly, pay more attention to radar returns from stopped objects, be wary of concluding that the road ahead is suddenly clear when the traffic is actually thick, and perhaps even leave it to the human to accelerate.
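That special-case handling could be as simple as a short-lived caution state, entered whenever the lead vehicle cuts out, during which the system refuses to accelerate and takes stationary returns seriously. A sketch of the idea; this is my own construction, not something Tesla has described.

```python
class CutOutCaution:
    """When the car we were following suddenly leaves the lane, hold speed for a
    few seconds instead of accelerating, and treat stationary radar returns as
    real obstacles. Durations are illustrative."""

    def __init__(self, caution_duration_s: float = 4.0):
        self.caution_duration_s = caution_duration_s
        self.time_left_s = 0.0

    def lead_vehicle_cut_out(self) -> None:
        self.time_left_s = self.caution_duration_s

    def tick(self, dt_s: float) -> None:
        self.time_left_s = max(0.0, self.time_left_s - dt_s)

    def may_accelerate(self) -> bool:
        return self.time_left_s == 0.0

    def trust_stationary_returns(self) -> bool:
        return self.time_left_s > 0.0

caution = CutOutCaution()
caution.lead_vehicle_cut_out()       # the car ahead just darted into the next lane
print(caution.may_accelerate())      # -> False: hold speed, don't surge at whatever was revealed
```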
Their Full Self Driving system will have better perception and more compute power, but it won’t have better sensors, and they say it won’t have maps. They have said they will soon have a driver-monitored full self-drive system (rumored to be in their version 10 software release). They have also said that next year it won’t need that driver monitoring, in a technical if not regulatory sense. The signs don’t point to this.
I founded ClariNet, the world's first internet-based business, am Chairman Emeritus of the Electronic Frontier Foundation, and a director of the Foresight Institute.