There are two separate issues that folks are getting tripped up on here. The first is that the most powerful AI systems do not keep learning once they are trained and deployed. There are a bunch of hard problems here: e.g. known unsupervised learning techniques have been far less successful, and inference-only deployment gets its cost-effectiveness precisely by being decoupled from training. It seems plausible that we will solve some of these, though I don't know about others.
The way I have been thinking about the other bit is that LLMs are functionally pretty similar to the linguistic parts of a brain attached to a brain stem (the harness is the brain stem). They don't have long-term memory, the capacity for inspiration, theory of mind, prioritization, etc., because they just don't have analogues of the parts of the brain that do those things. We have a good sense of how to make some of those (e.g. vision), but not all.
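To make that analogy a bit more concrete, here's a rough sketch of what I mean by a harness, assuming a hypothetical `call_llm()` stand-in rather than any particular model API: the model call itself is stateless, and anything that looks like memory or prioritization is supplied by the loop wrapped around it.

```python
# Rough sketch of a "harness" around a stateless LLM. call_llm is a
# hypothetical stand-in, not a real API; the point is the division of labor:
# memory and next-step decisions live in the harness, not the model.

def call_llm(prompt: str) -> str:
    """Stand-in for a model call; the model retains nothing between calls."""
    return "..."  # placeholder response

def run_harness(goal: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = []  # "long-term memory" is just state the harness keeps
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nNotes so far: {memory}\nWhat is the next step?"
        step = call_llm(prompt)   # each call starts from scratch; only the prompt carries state
        memory.append(step)       # the harness decides what to remember and what to do next
    return memory

if __name__ == "__main__":
    print(run_harness("summarize recent results"))
```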
The common ground here is that some fundamental research needs to happen. All of these problems would have to be solved before AI could become independently dangerous. On the other hand, it's already proving mildly dangerous in human hands right now - that is the immediate threat.