
I think the PI approach still requires that the agents in the game have a shared internal model of the game.

You can make Searle's Chinese Room argument, but I've always found "assume there exists a book that contains every conversation ever" to be a flawed premise.



Well, if there've been about 120B humans ever, and we speak fewer than 1B words per lifetime, and the average word takes 1 byte to store, that's about a fifth of all data stored in AWS (according to Wolfram Alpha). It's undoubtedly a lot, and yet clearly within human capability. And of course that ignores optimizations that'd certainly drop that high estimate by many orders of magnitude.
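Rough sketch of that arithmetic, for anyone who wants to check it (the 120B humans, <1B words per lifetime, and 1 byte per word are the rough estimates above, not measured values):

    # Back-of-envelope check of the estimate above. All three inputs are the
    # rough figures from the comment, not exact numbers.
    humans_ever = 120e9          # ~120 billion humans ever born
    words_per_lifetime = 1e9     # upper bound on words spoken per lifetime
    bytes_per_word = 1           # assumed (compressed) storage per word

    total_bytes = humans_ever * words_per_lifetime * bytes_per_word
    print(f"{total_bytes:.1e} bytes ~= {total_bytes / 1e18:.0f} exabytes")
    # -> 1.2e+20 bytes ~= 120 exabytes

So on those assumptions the total is on the order of 10^20 bytes, which is where the "fifth of AWS" comparison comes from.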


I think you're misunderstanding Searle's Chinese Room. It has a response for every possible sequence of conversation. It doesn't store every conversation that has happened; it stores every conversation that's possible, whether it will ever happen or not.

It would be able to handle the following exchange:

Person: "Here's a cool question, ready?" Room: "Ready." Person: "What was the last message I sent to you?"

It can respond appropriately to the following sentence:

Person: "Hey, I'm gonna say something. Here is a sentence. Can you repeat the previous sentence back to me?"

Otherwise, why bother with all of this AI stuff? Just build Searle's Chinese Room as an index and you have a perfect chatbot.
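And that's the catch: the number of possible conversations dwarfs the number that have actually happened. A quick illustration (the vocabulary size and exchange length are made-up round numbers, purely for scale):

    # Illustrative only: counting *possible* exchanges, not actual ones.
    vocab = 10_000         # hypothetical working vocabulary
    length_in_words = 50   # hypothetical short exchange

    possible = vocab ** length_in_words   # 50-word sequences over a 10k vocab
    print(f"~10^{len(str(possible)) - 1} possible 50-word exchanges")
    # -> ~10^200, versus the ~10^20 bytes estimated upthread for everything
    #    humans have ever actually said

Even after throwing out the overwhelming share of gibberish sequences, the coherent ones alone are far beyond anything you could enumerate and index, which is why nobody builds the chatbot that way.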



