Personally, I would consider success to be when it drives statistically better than _me_, not just _most people_. It really depends on how you aggregate the data: the "average driver's" crash rate is dragged up by a minority of very bad drivers, so it can sit well above what a typical driver actually achieves. FSD could clear that bar while still driving worse than the typical driver, in which case, if everyone bought a Tesla and used FSD, the total crash rate per mile driven could actually go up, despite FSD being "statistically better than most people" by that measure.
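To put toy numbers on that (every figure below is invented purely to illustrate the aggregation point, not real crash data), here's a quick sketch of how a system could be "better than the average driver" while still making the fleet-wide crash rate worse if everyone switched:

```python
# Toy illustration: 90 typical drivers plus 10 very bad, low-mileage drivers.
# Rates are crashes per million miles; all numbers are made up.
typical = {"count": 90, "rate": 2.0, "miles": 12_000}   # decent drivers, lots of miles
bad     = {"count": 10, "rate": 20.0, "miles": 4_000}   # rare, terrible, low-mileage drivers
fsd_rate = 3.0  # hypothetical FSD crash rate per million miles

groups = [typical, bad]

# "Average driver" rate: a plain average across drivers, skewed up by the bad minority.
avg_driver_rate = sum(g["count"] * g["rate"] for g in groups) / sum(g["count"] for g in groups)

# Fleet-wide rate: total crashes divided by total miles actually driven.
total_miles = sum(g["count"] * g["miles"] for g in groups)
total_crashes = sum(g["count"] * g["miles"] * g["rate"] / 1e6 for g in groups)
fleet_rate = total_crashes / total_miles * 1e6

print(f"Average driver's rate: {avg_driver_rate:.2f} per million miles")  # ~3.80
print(f"Fleet-wide rate today: {fleet_rate:.2f} per million miles")       # ~2.64
print(f"FSD rate:              {fsd_rate:.2f} per million miles")
# FSD "beats the average driver" (3.0 < 3.8), yet if every mile were driven
# by FSD, the fleet-wide rate would rise from ~2.64 to 3.0.
```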
The big benefit of AI is that it wouldn't get tired, impatient, or distracted, which could be a real safety boon even if it were slightly less capable than a normal driver at their best.
That said, FSD drives so badly here that I would demand a human driver who drove this way have their license revoked.
That's something I hadn't thought about, and it makes sense. We hold drivers accountable, but who do we hold accountable when the car is partially self-driving yet the driver is supposed to be ultimately responsible?
A lawyer would say Tesla, because $, but at this point I would still say the driver is responsible, especially with 'beta' in the software's name and drivers explicitly told to pay attention and be ready to take control.
I don't envy car manufacturers working on this. They're facing exactly the same battle that Ford did when the 'horseless carriage' was becoming popular. We knew even then that humans are, honestly speaking, terrible drivers who frequently get distracted or lose control of the vehicle.
I expect self-driving to follow exactly the same course horseless carriages did: heavy early resistance and lots of bad incidents, then acceptance, then becoming utterly commonplace, with the same track record of decreasing accidents and improving safety measures over time, until it eventually ceases to even be a controversy.
I mean, this is what I'm getting at: if, by the measures that matter, like preventing deaths, injuries, and damage, it's statistically better than humans (and there's a whole discussion on how you collect and judge that data), even if its behavior is weirdly unlike a human driver's, is that success? Is that good enough?
I'm not even asserting it, just spitballing, because this is fascinating to me.
Yeah, it's something I'd be interested in reading more about: how do we define "better" in this context? Better for whom: society as a whole? Pedestrians? Drivers of self-driving cars? Passengers of self-driving cars? Drivers of non-self-driving cars? And so on...
Precisely. The impression I got from the video was that the car was way more cautious, even more courteous, than an overly cautious human driver would be. It definitely fucked up, but if we define "better" as driving exactly like a skilled human driver, then I expect it to kill exactly as many people as we do.
The counter-intuitive advice people are given for things like an animal suddenly leaping into the road is to just hit it, because slamming on the brakes can and has led to the car's occupants dying for the sake of a squirrel.
What do we expect a self-driving car to do in that situation? Just imagine the outrage and headlines if a Tesla does what humans are told to do and kills a dog or cat!