
uh -- what implications does this have for driving AI? Possibly none, if the acceptance criterion for AI is: "it kills fewer people than humans would -- possibly different people, but fewer overall"?



None, unless you think that people are somehow able to side-step this problem in a way that a computer can't mimic.


The paper specifically describes several situations in which humans are the ones making the decision, and the result is the same. There is no bounded-time decision procedure that can take continuous (i.e. physical) inputs and always render a discrete decision.
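The unbounded-decision-time claim can be illustrated with a toy model (a sketch of my own, not anything from the paper): a bistable system with an unstable equilibrium at zero, where the time to commit to either stable state grows without bound as the input approaches the balance point.

```python
def settle_time(x0, threshold=0.9, dt=1e-3, t_max=50.0):
    """Integrate dx/dt = x - x^3, a bistable 'decision' system with
    stable states near +/-1 and an unstable equilibrium at 0.
    Returns the time at which |x| first exceeds `threshold`, or
    t_max if the system hasn't committed to either side by then."""
    x, t = x0, 0.0
    while t < t_max:
        if abs(x) >= threshold:
            return t
        x += (x - x**3) * dt  # forward Euler step
        t += dt
    return t_max

# The closer the input starts to the balance point, the longer the
# decision takes -- roughly logarithmically in 1/|x0| -- so no fixed
# deadline suffices for all inputs.
for x0 in (1e-1, 1e-3, 1e-6, 1e-9):
    print(x0, settle_time(x0))
```

An input of exactly zero never decides at all, and by continuity there are nonzero inputs that exceed any deadline you pick.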


Driving AI doesn't require continuous input variables. Approximations are good enough in the real world.


What do you mean? All the inputs are continuous: light sensors, LIDAR, infrared, mechanical sensor inputs from the driver. Sure, the sensor package's hardware/firmware/software converts these to discrete inputs before they reach the driving AI, but all of the Buridan's Paradox results still apply to those sensor packages. They can't perform their tasks in a bounded amount of time -- either they will sometimes fail to make a decision at all, or they will render an invalid output (e.g. rather than outputting a voltage corresponding to logical 0 or 1, they will go into a metastable or astable output mode that is not a valid output voltage).


>rather than outputting a voltage corresponding to logical 0 or 1, they will go into a metastable or astable output mode that is not a valid output voltage

It sounds like you could easily avoid this. You can use a microcontroller, read an analog input on one pin, and set an output pin based on it. You will always output a valid voltage. Sensors have noise anyway, so it's not a big deal if the output is occasionally slightly wrong.
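For what it's worth, the usual way this gets done is with hysteresis (a Schmitt trigger), which can be sketched in software in a few lines -- the thresholds here are made-up values for illustration:

```python
def schmitt(samples, low=1.4, high=1.6, state=0):
    """Software Schmitt trigger: hysteresis guarantees the output is
    always a valid 0 or 1, at the cost of making an arbitrary (but
    valid) choice while the input sits inside the dead band."""
    out = []
    for v in samples:
        if v >= high:
            state = 1
        elif v <= low:
            state = 0
        # inside (low, high): hold the previous state
        out.append(state)
    return out

print(schmitt([0.2, 1.5, 1.7, 1.55, 1.3, 1.5]))  # -> [0, 0, 1, 1, 0, 0]
```

The catch, per the metastability argument above, is that this only moves the problem: the microcontroller's own input latch still has to compare a continuous voltage against a threshold within a clock period. Real designs add synchronizer flip-flops, which make metastable failures exponentially unlikely -- but not impossible.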



