
Much of the time, knowing a human has to take over isn't one of the harder problems: either the AI can't map the user's input to any continuation with high probability, or it interprets the input as an expression of frustration or an assertion that it's wrong.
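Those two hand-off signals can be sketched as a simple escalation check. This is a minimal illustration, not a real system: the threshold, the keyword list, and the function name are all assumptions, and a production bot would use a classifier rather than keyword matching.

```python
# Hypothetical hand-off heuristic for a support bot. Escalate when the
# model's best continuation is low-confidence, or when the user's message
# signals frustration or asserts the bot is wrong. The threshold and the
# marker list below are illustrative assumptions, not real values.

FRUSTRATION_MARKERS = {"wrong", "incorrect", "useless", "human", "agent"}

def should_escalate(top_prob: float, user_message: str,
                    threshold: float = 0.5) -> bool:
    """Return True if the conversation should be handed to a human."""
    # Case 1: no continuation is likely enough -- the model is lost.
    if top_prob < threshold:
        return True
    # Case 2: the user sounds frustrated or says the bot got it wrong.
    words = {w.strip(".,!?").lower() for w in user_message.split()}
    return bool(words & FRUSTRATION_MARKERS)
```

A real deployment would replace both checks with learned models (a confidence calibrator and a sentiment/intent classifier), but the two-branch structure is the same.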

The challenge is when the AI has to interpret questions about things that can be expressed in syntactically similar ways but with very different, or precisely opposite, meanings; then it can be very confidently (and plausibly) wrong about things like price changes and tax, event timings, refunds, etc.


