
I don't think any AI tech should be stifled or hobbled because it affects jobs or livelihoods. I think we can live in a world where people don't have to work much at all, and that's okay. People averaged something like 15 hours per week of work back in the Middle Ages...

I think where it needs to be regulated, of course, is when it crosses into the realm of weaponry, and when it could do harm to people. Self-driving cars will be awesome, but they can also do a lot of harm.

When they crash, who's at fault: the driver or the manufacturer? Can the developer be sued for negligent homicide or vehicular manslaughter because the software had a bug?

I am optimistic and excited about what AI will bring us, but I'm certain there are plenty of places where regulation will be needed...



For an idea of what self-driving car regulation should look like, take a look at elevators. An elevator is basically a simple AI that could easily kill people. So there are complex regulations covering licensing, inspection, testing, repair, and safety requirements. For the most part, this system works and elevators don't kill people. And if they do, liability depends on who messed up. (Source: I have an elevator and need to deal with the regulations.)

Obviously self-driving cars are much more complex, but elevators are an example of how existing dangerous autonomous systems are regulated, and I expect self-driving cars would be handled similarly.


Thanks for providing an existing real-world example I can use as a point of comparison when thinking about AI regulation in general!


> When they crash, who's at fault: the driver or the manufacturer?

If they crash due to a manufacturing defect, the manufacturer is liable; if they crash due to operator negligence, the operator is liable; if both factors are involved, both may be liable (potentially fully, in each case), and in various circumstances the vehicle owner and/or the driver's employer may also be liable.

At least, that's how it is with non-automated vehicles, and I see no reason the same rules wouldn't work fine for automated vehicles.

> Can the developer be sued for negligent homicide or vehicular manslaughter because the software had a bug?

Generally, vehicular manslaughter is a special offense that applies only to vehicle operators. Anything from negligent homicide up through depraved-heart murder could, in principle, be applicable to a software (or other) defect, depending on knowledge and other factors.


But the operator is the AI... it has full control. You could be sleeping at the wheel, safe in the knowledge that your car will get you to point B. So really, 90% or more of crashes could be the AI's fault. And even if that's 90% of 95% fewer crashes across the board, which is still an AMAZING improvement, when someone dies, someone gets sued or sent to prison. Someone has to answer, even if the answer is just "that's life and shit happens"...


> When they crash, who's at fault: the driver or the manufacturer? Can the developer be sued for negligent homicide or vehicular manslaughter because the software had a bug?

Why not apply the same standards as for human employees?

Either the user, the employee (the program), or the company/employer is at fault. The default is the employer, unless either the user or the program deliberately (and presumably successfully) tried to cause the malfunction.

Meaning that if the human user or the program merely makes mistakes, even incredibly stupid ones, it's the company's fault.



