I've been using the latest v13.2.2 and it regularly goes 100% on its own door to door over 1-2 hour trips that I've driven, navigating side roads, highways, lane changes, road blocks, everything, without a single intervention from me. I just sit and watch, it's incredible actually. Again, this is NOT just highway experience, but door to door. I've driven the earlier versions and they were pretty good, but this latest one (v13.2.2) is a huge improvement that makes me feel it's arrived.
That is a fundamental misunderstanding of the reliability level needed for fully autonomous vehicle operation. A fully self-driving vehicle operating outside of a testing safety protocol, in the hands of consumers, requires an average interval between disengagements not on the order of 1-2 hours, but on the order of 1,000-2,000 hours, to be considered in the vicinity of fully self-driving. You are literally presenting anecdotes that fall short by a factor of 1,000x of the evidence needed to show "it's arrived".
Individual driver experience is basically useless for assessing whether it has "arrived", due to the fundamental lack of data any one human can generate. However, individual driver experience is adequate for assessing that it has not "arrived".
To explain, suppose a manufacturer claims that their widget fails 1 in 1,000,000 times, and suppose a regular human can use 1,000 units. Even if that human finds zero failures in their 1,000-unit random sample, that does not provide evidence that the manufacturer's claim is true. You need over 1,000,000 samples, usually on the order of 10,000,000 samples, and you need to observe a number of failures consistent with the claimed rate, to make any such claim in a statistically rigorous fashion.
In contrast, suppose 100,000 samples are collected and 10 failures are observed, a rate of 1 in 10,000. That failure interval is already 10x longer than the number of units any individual person would ever use, yet you can assert with high confidence that the manufacturer's claim is a lie, even though you have not observed the 1,000,000+ samples that would be required to prove the claim is true. And if a regular human discovers more than 1 failure in their 1,000 units, an observed failure rate of over 1 in 1,000, you can assert with extreme confidence that the manufacturer is lying. The bar for adequate data to establish statistical significance is the number of failures, not the number of units.
Individual drivers are over a factor of 1,000x away from establishing the requisite reliability level, and thus have 0.1% of the evidentiary power needed to establish success, but they can be used to establish failure. Even a literal lifetime of human driving with zero faults is barely adequate to establish success, while even a handful of failures over that same lifetime is sufficient to reject the claim of a reliability level adequate for fully self-driving operation outside of a testing protocol in the hands of consumers.
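To put numbers on the asymmetry above, here is a back-of-envelope sketch using the widget example's figures and a simple binomial model (the model and the specific probabilities are my own working, not from the article):

```python
from math import comb

def prob_at_least_k_failures(n, p, k):
    """P(X >= k) for X ~ Binomial(n trials, per-trial failure prob p)."""
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

claimed_rate = 1e-6   # manufacturer's claim: 1 failure per 1,000,000 uses
n_personal = 1_000    # units a single person might plausibly ever use

# Zero failures in 1,000 uses proves nothing: it is the near-certain
# outcome under the claim (chance of seeing any failure is ~0.1%)...
p_any_if_claim_true = prob_at_least_k_failures(n_personal, claimed_rate, 1)

# ...and there is still a ~37% chance of seeing zero failures even if
# the true rate is 1,000x worse than claimed (1 in 1,000).
p_any_if_1000x_worse = prob_at_least_k_failures(n_personal, 1e-3, 1)

# But 2+ failures in 1,000 uses is damning: if the claim were true,
# the probability of that outcome is about 5 in 10,000,000.
p_two_if_claim_true = prob_at_least_k_failures(n_personal, claimed_rate, 2)

print(p_any_if_claim_true, p_any_if_1000x_worse, p_two_if_claim_true)
```

Zero observed failures is consistent with both the claimed rate and a rate 1,000x worse, so it distinguishes nothing; a couple of observed failures all but rules the claim out. That is the sense in which the bar is the number of failures, not the number of units.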
This is basic statistics, and it is literally impossible that the people at Tesla do not know that the evidence needed to support their claims is over 1,000x more than the auditable evidence they have presented. They simply choose to deceive their customers to support their stock price, then blast out soundbite arguments that sound good, but are intentionally deceptive, to overwhelm the discourse.
I'm surprised this article doesn't mention the elephant in the room. With Musk's influence over the Trump administration, it seems overwhelmingly likely that Tesla will achieve Full Self Driving by changing the regulatory framework to allow whatever Tesla currently has to be called (and used as) self-driving. It's that simple. Will your Tesla suddenly become an autonomous vehicle? Obviously not; you can't just change the regulations and hope reality accommodates you.
So what we'll probably end up with is real self driving Waymos all over the place and fake self-driving Teslas that 'self-drive' as long as you're still really driving.
The only real concern I have is whether Musk exploits his position to shape the regulations so that both Waymo and Tesla land in the same "self-driving" bucket, where they get categorized the same and both still require drivers, essentially using the regulations to kneecap any rival that is ahead of Tesla.
The other side of it is that I think we'd all be very happy if Musk went back to just lying about his electric vehicles.
I don't know if machine learning can ever match the human brain for that. The brain does a lot of fairly advanced inferences that require a deep understanding of the world and the people and things in it.
Still, I'm not sure how much additional sensor input would help the ML. If you had to drive by "touch" (LIDAR), you probably shouldn't be allowed to drive. It might be useful when the visual system has failed, to stop the vehicle before it hits something, but if the visual system fails that often then the system wouldn't be usable for any purpose.
Visual input combined with a lot of mental projection and inference and understanding/experience and good split second decision making.
Examples: How does an FSD vehicle use its camera to identify black ice on an overpass? How does it identify that a heavy truck tire has just exploded in its lane a few cars ahead on the freeway? Or a bumper dropped from a car ahead, or a large truck losing its load?
All these and more have happened to me and I lived to tell about it.
Waymo's approach to autonomy and its robotaxi service are both very different from Tesla's plan for general-purpose autonomous driving plus an AirBnB-like taxi fleet.
In addition to what will turn out to be a foolish fixation on cameras as the only sensor, Tesla FSD has nowhere near as much mapping data, nor the real-time traffic data Waymo can source from Google Maps, nor in-vehicle monitoring of passengers and vehicle condition.
Elon has his online claque for distorting reality. But reality is still there. That's why Elon wants to buy the whole government to get bigger reality-denial tools.
Sure, Musk is and has always been over-optimistic about the state of self-driving, but the OP reads too much like a hit piece intended to trigger outrage.
I'm not sure the OP's data justifies its conclusions.
The entire article is arguing that Tesla has presented insufficient auditable evidence to support their conclusions and separately has done so to deceive and defraud their customers. Are you disagreeing with the first or second conclusion?
Do you think that Tesla has presented sufficient auditable evidence to support their conclusion? As the article notes, they have presented zero auditable first-party evidence of their claims despite copious data collection. However, as the article notes, the CEO has pointed to third-party data as representative of their claims of improvement. But, as the article notes, the data that very same CEO points to emphatically does not support the CEO's specific claims or projections.
Are you arguing that the data/evidence cited is low-quality? Then the CEO's claims are also supported by low-quality evidence and thus the conclusion of the article, that Tesla has presented insufficient auditable evidence of their claims, is correct.
The only way the data in the article does not justify its conclusions is if the evidence is "high-quality" and supports Tesla's specific quantitative claims. I do not see how the data showing a 2.7x improvement is support for Tesla's claims of a ~5-10x improvement.
For that matter, the amount of data Tesla would need to support their claim is so minuscule that it would be trivial for any half-competent testing process to produce auditable, statistically sound first-party evidence of their claims prior to deployment, and trivial to verify during deployment within a week. They have chosen to present no such evidence, instead opting to point at low-quality crowdsourced third-party data in preference to their own, despite the fact that such data disproves their own claims.
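To show why verification would be trivial, here is my own back-of-envelope estimate. The claimed reliability level, target precision, and fleet figures below are all illustrative assumptions on my part, not Tesla numbers:

```python
# Sketch: how much fleet driving is needed to verify a claimed
# reliability level, using simple Poisson event counting.

claimed_hours_between_events = 1_000   # assumed: one disengagement per 1,000 hours

# Observing k events pins the rate down to a relative 95% uncertainty
# of roughly +/- 2/sqrt(k). To verify the rate within ~50%, you need
# about (2 / 0.5)**2 = 16 observed events.
target_relative_error = 0.5
events_needed = (2 / target_relative_error) ** 2

fleet_hours_needed = events_needed * claimed_hours_between_events

# A hypothetical fleet of 10,000 active vehicles at 1 hour/day each:
fleet_size, hours_per_day = 10_000, 1
days_to_verify = fleet_hours_needed / (fleet_size * hours_per_day)

print(fleet_hours_needed, days_to_verify)  # 16,000 fleet hours, ~1.6 days
```

Even under these conservative assumptions, a modest fraction of a large consumer fleet accumulates the necessary evidence in days, which is why "a week" is, if anything, generous.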
There is something to be said for pointing at third-party evidence supporting your claims, as it minimizes the risk of a conflict of interest and looks more unbiased. But deliberately pointing at third-party data that disproves your claims? You would either not comment on it, or release your own first-party data showing better results (which runs the risk of critics arguing it is biased). As I noted previously, it would be trivial to collect and present such first-party data both prior to and after release to demonstrate that their statements were not made with a disregard for the truth, and yet they have not done so.
So, we are left with three conclusions:
1. They do not collect first party data. Their claims are made with a disregard for the truth or falsity of the statement. That meets the standard for fraud and the second conclusion of the article.
2. They do collect first party data, but the data is worse than claimed and yet they state otherwise. That meets the standard for fraud and the second conclusion of the article.
3. They do collect first party data, and the data is equal to or better than claimed. Despite that, they cite third party data that disproves their own claims because they are so humble. This coming from a CEO who routinely makes statements so outrageous and untrue that, legally, no reasonable person could believe them. Who has historically run smear campaigns [1] against whistleblowers highlighting the dangers of Autopilot when Tesla thinks it can argue the other side. That company, which publicly rails against "wrong" data, is intentionally citing data that disproves their own claims, while sitting on better data that proves them, out of a sense of humility. Seems legit.
It's not very surprising to me; Elon Musk has a habit of over-promising and under-delivering.
Self driving is a really complex problem. Although we do have self-driving vehicles from Waymo, I believe the type of self-driving a Tesla would have to do is a little different, and more challenging.
It might be that getting 80% of the way there was the easy part, and the last 20% is a nightmare problem to solve, if it's even possible to solve with our current paradigms of ML.
I don't blame Musk for having promised full self driving 10 years ago, but after a decade, we all should know better.