That depends on how good we get at interpretability. If the models can not only do the job but are also structured to permit an explanation of how they did it, we get the confirmation. Or not, if it turns out that the explanation is faulty.
