Hacker News

> Now ChatGPT is pretty much Skinner’s work come to life.

I'm pretty sure that's based on a misunderstanding of Skinner, ChatGPT, or both.



Or it's a very lossy compression of what I mean. I have studied both quite extensively: not 10,000 hours each, but hundreds for sure.


Did Skinner have anything to say about how the reinforcement works? With LLMs you need the right sort of architecture, and the same holds for neurons, even though they don't use backpropagation. Only humans are known to have language in the full sense, and there has to be some neural reason why that is. Maybe you could make an argument for cetaceans or certain birds, but they too must have the neural architecture for it.


Skinner (and behaviorists in general) did establish various 'laws' of behavioral reinforcement that tend to hold in simple cases, such as pigeons pecking keys in return for food. Of course, these laws had nothing interesting to say about language acquisition. I challenge anyone who thinks otherwise to actually try reading Verbal Behavior. It's an incredibly turgid and uninsightful book.
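The kind of simple-case law being described can be sketched with a Bush-Mosteller linear-operator model (a classic behaviorist-era learning rule): the probability of the reinforced response drifts toward the reinforcement rate. This is a toy illustration, not Skinner's own formalism; all parameter names and values here are made up.

```python
import random

def bush_mosteller(trials=1000, alpha=0.1, reward_prob=0.8, seed=0):
    """Toy Bush-Mosteller model of operant conditioning.

    On each trial the 'pigeon' pecks with probability p; a rewarded
    peck nudges p toward 1, an unrewarded peck nudges it toward 0.
    (alpha, reward_prob, and seed are illustrative assumptions.)
    """
    rng = random.Random(seed)
    p = 0.5  # initial response probability
    for _ in range(trials):
        pecked = rng.random() < p
        if pecked and rng.random() < reward_prob:
            p += alpha * (1 - p)  # reinforcement strengthens the response
        elif pecked:
            p -= alpha * p        # nonreinforcement weakens it
    return p

print(bush_mosteller())  # drifts toward the 0.8 reinforcement rate
```

The equilibrium of this update is the reinforcement probability itself, which is why such models describe key-pecking well while saying nothing about how sentences get composed.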


The difference between hundreds and 10k hours is roughly 10k hours.



