
Why couldn't AI do that? It's better than any human at this stuff.


You could certainly train an AI to navigate that particular maze of netting, but I'm far from convinced you could train one to handle a near-infinite variety of novel, hostile countermeasures absent from its training corpus.

It seems trivial to confuse a Tesla's AI, and I'm assuming Tesla is fairly near the top of the game here, yes?

https://www.youtube.com/watch?v=U1MigIJXJx8

This sort of intentionally hostile pathological case is of course rare in real-world driving. It will not be rare in warfare.
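
To make the mechanism concrete, here's a minimal adversarial-example sketch (FGSM, in PyTorch). Everything in it is illustrative: the tiny untrained ConvNet is a hypothetical stand-in, not anyone's actual driving stack, and the pixel budget is an assumed figure.

    # Fast Gradient Sign Method: nudge every pixel in the direction
    # that increases the model's loss, within a small budget eps.
    # The toy ConvNet is a hypothetical stand-in for illustration.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.Flatten(),
        nn.Linear(8 * 32 * 32, 10),
    )
    model.eval()

    x = torch.rand(1, 3, 32, 32, requires_grad=True)  # toy "camera frame"
    label = model(x).argmax(1)                        # current prediction

    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()  # gradient of the loss w.r.t. the input pixels

    eps = 8 / 255  # assumed budget, small enough to be near-invisible
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1)
    print("prediction flipped:", (model(x_adv).argmax(1) != label).item())

Physical attacks like the projected images in the video work on the same principle: small, deliberately chosen input changes that a human barely registers but that push the model across a decision boundary.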

And a drone has to operate fully in three dimensions, unlike a Tesla, which effectively operates in two.

An autonomous drone will also have far more constrained computing resources than a Tesla, due to size, weight, and power limits.
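
Some back-of-envelope numbers to illustrate; every figure here is an assumption for the sake of the sketch (a 100 Wh battery, ~200 W hover draw, 5 W vs. a car-class ~70 W of compute), not a measurement of any real aircraft:

    # Illustrative SWaP arithmetic; all figures are assumptions.
    BATTERY_WH = 100.0  # assumed battery capacity
    HOVER_W = 200.0     # assumed motor draw in hover

    def flight_minutes(compute_w: float) -> float:
        """Endurance once the compute load is added to the hover draw."""
        return BATTERY_WH / (HOVER_W + compute_w) * 60.0

    print(f"5 W onboard compute: {flight_minutes(5):.0f} min")   # ~29 min
    print(f"70 W car-class load: {flight_minutes(70):.0f} min")  # ~22 min

Every watt spent thinking comes straight out of flight time, which is why a drone can't just carry the same compute stack a car does.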


Self-driving cars are a political problem, not a technical one. We have self-driving cars on Mars.

Autonomous drones, yeah, well... https://www.tudelft.nl/en/2025/lr/autonomous-drone-from-tu-d...


Perseverance "self-drives" at 0.1 mph across a nearly flat, static landscape with no threats or moving objects.

(Sure, it's a hostile landscape with regard to dust/temperature/radiation. And getting to Mars and landing safely is a fiendishly difficult task. But those aren't concerns of the self-driving system...)

https://science.nasa.gov/resource/how-perseverance-drives-on...

The problem space is simply not comparable to that of a drone that needs to navigate an actively hostile, evolving environment in three dimensions at two orders of magnitude greater speed.
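
The speed gap alone transforms the perception/planning budget. A quick sketch with assumed speeds: the rover's ~0.1 mph from the link above, and 10 mph standing in for the "two orders of magnitude" figure (real attack drones fly far faster):

    # Time available to perceive, plan, and act per meter travelled.
    MPH_TO_MS = 0.44704  # miles per hour -> meters per second

    def seconds_per_meter(speed_mph: float) -> float:
        return 1.0 / (speed_mph * MPH_TO_MS)

    print(f"rover at 0.1 mph: {seconds_per_meter(0.1):.1f} s per meter")  # ~22.4 s
    print(f"drone at 10 mph:  {seconds_per_meter(10):.2f} s per meter")   # ~0.22 s

Perseverance can stop and think between every step; a drone has to close the whole sense-plan-act loop hundreds of times over the same distance, in an environment that's actively trying to bring it down.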


Sometimes I feel like the entire AI field just never watched The Terminator.


Campy, low-budget action movies are not a sound basis for forming public policy.


Whether it should is a different question. It certainly can.



