This is whataboutism. Humans are not the subject of scrutiny for this video. It was the human who countermanded the dangerous decisions made by the FSD system.
No, it's accurately pointing out that perfect driving by an AI is unachievable. If we demand perfection, then 'self-driving' will literally never be a thing. Ever.
The standard should be "safer than human driving," shouldn't it?
And when you realize that humans are already terrible drivers, it doesn't seem so reckless to hold AI driving to a realistic, achievable standard instead of an ideological one.
Another way to look at it: imagine two different human drivers. One is clearly "better" and faster than the other but wrecks occasionally; the other is a "bad" driver who never does.
Which is a more desirable model for an AI to emulate?
Humans have a semantic model of the world. "AI," so far, does not. When human drivers see things, they can usually discern what they mean, even with poor, high-latency sensory input, far faster and better than "AI" can. The problem to solve with self-driving is not sensor field of view or latency; of course a machine can do better there. It is integrating what is sensed into a reasonable plan of action, and "AI" demonstrably sucks at this.
Humans are absolutely the subject of scrutiny for this video, and for anything else that will ever relate to autonomous driving, because we're the only other source of drivers for automobiles.
The only bar to clear, and I mean the only bar, is: "Do autonomous vehicles kill fewer human beings than human drivers do?"
That's it. When the death toll drops by even one human being below what human drivers cause, it's time to switch over, because like every single mechanical, electronic, and computerized tool, it will only continue to reduce the number of human deaths as we progress with its development.
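To be concrete about what that bar means, here's a minimal sketch. Every number in it is a made-up placeholder, and the one adjustment I've made is normalizing by miles driven, since raw death counts aren't comparable between a fleet of millions of human drivers and a small autonomous fleet:

```python
# Hypothetical figures, purely for illustration; real numbers would come from
# NHTSA data and fleet disclosures. Raw death counts aren't comparable across
# fleets of different sizes, so normalize per vehicle-mile first.
human_deaths, human_miles = 40_000, 3.2e12   # assumed annual deaths / vehicle-miles
av_deaths,    av_miles    = 10,     1.0e9    # assumed autonomous-fleet figures

human_rate = human_deaths / human_miles      # deaths per mile, human-driven
av_rate    = av_deaths / av_miles            # deaths per mile, autonomous

# The bar as stated: switch over as soon as the AV rate dips below the human rate.
print("switch over" if av_rate < human_rate else "not yet")
```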
I get that autonomous vehicle deaths freak people out because they do things a human would never do. But so what? If 53,000 people die in automobile accidents where they drive into each other because they stopped paying attention to the road to reach back and slap their 10-year-old who won't leave his sister alone, it makes absolutely zero difference compared to the car driving into the ocean and drowning its driver. Same result: dead person. We only think it matters because we like to think we have control over our lives. You don't, and you never will. You could have a massive aneurysm the next time you're driving and slam your vehicle right into an oak tree. The Universe is a random, uncaring system.
Your life is just a series of mostly random events that you shape into a coherent story because accepting that shit just happens and you have to deal with it is, like, a real bummer, man.
I see this sentiment often. But is this comparison fair? For example, what does the distribution of risk look like in each cohort (AI vs human drivers)?
Presumably the risk of an accident is distributed relatively evenly among all AI drivers (they are using the same AI, after all). But is the risk of a car accident evenly distributed among all people? Not even close. It's perfectly possible to reduce the overall population risk while simultaneously increasing the risk for a given individual by an order of magnitude.
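A toy calculation (all numbers hypothetical) shows how that happens when human risk is heavily skewed toward a minority of bad drivers while AI risk is uniform:

```python
# Hypothetical cohort: a few risky human drivers carry most of the crash risk.
risky_n, risky_p = 10, 0.010     # 10 drivers at 1.0% annual crash probability
safe_n,  safe_p  = 90, 0.00005   # 90 drivers at 0.005% annual crash probability
ai_p = 0.0005                    # every AI car at a uniform 0.05%

human_avg = (risky_n * risky_p + safe_n * safe_p) / (risky_n + safe_n)

print(f"human average risk: {human_avg:.5f}")                 # ~0.00100
print(f"AI uniform risk:    {ai_p:.5f}")                      # 0.00050
print(f"population risk falls ~{1 - ai_p / human_avg:.0%}")   # ~50% lower overall
print(f"but a safe driver's risk rises {ai_p / safe_p:.0f}x") # 10x: an order of magnitude
```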
Would you be willing to assume a greater risk of accidental death _personally_ to decrease overall risk of death? Not a question I imagine reaches broad consensus...
And what about the soft problems? Like responsibility. A self-driving car runs off the road and kills your daughter. Now what? Tesla is certainly not going to accept responsibility. So... you just “chalk it up” as bad luck? At least the current paradigm has the _ability_ to offer closure after a tragedy.
Reducing “self-driving cars” to a single metric is not only mathematically dubious, it’s ethically abhorrent and just plain stupid. I expect better.