This won't make a dent in the logical armor of AI optimists:
[ ] If you are not intimately familiar with the development of AI, your warnings on safety can be disregarded due to your basic ignorance about the development of AI
[x] If you are intimately familiar with the development of AI, your warnings on safety can be disregarded due to potential conflicts of interest and Kool-Aid drinking
I think they aren't the full answer, no matter how much they're scaled up. But they may be one essential element of a working solution, and perhaps we're only one or two brilliant insights away from the rest. I also think that some of the money being invested in the LLM craze will be directed into the search for those other brilliant insights.
Many teams are trying to combine their ideas with LLMs, because despite their weaknesses, LLMs (and related concepts such as RLHF, transformers, self-supervised learning, and internet-scale datasets) have made some remarkable gains. Those teams come from across the whole spectrum of ML and AI research, and they wish to use their ideas to overcome some of the weaknesses of current-day LLMs. Do you also think that none of these children can lead to AGI? Why not?
LLMs don't have to be smart enough to be AGI. They just have to be smart enough to create AGI. And if creating something smarter than yourself sounds crazy, remember that we were created by simpler ancestors that we now effortlessly dominate.
I don't disagree with the general notion, but it seems to me that LLMs being smart enough to create AGI is even more far-fetched than their being smart enough to be AGI.
All I’d like to see from AI safety folks is an empirical argument demonstrating that we’re remotely close to AGI, and that AGI is dangerous.
Sorry, but sci-fi novels are not going to cut it here. If anything, the last year and a half has just supported the notion that we're not close to AGI.
The flipside: it's equally hard for people who assume AI is safe to establish empirical criteria for safety and behavior. Neither side of the argument has a strong empirical basis, because we know of no precedent for an event like the rise of non-biological intelligence.
If AGI happens, even in retrospect, there may not be a clear line between "here is non-AGI" and "here is AGI". As far as we know, there wasn't a dividing line like this during the evolution of human intelligence.
I find it delightfully ironic that humans are so bad at the things we criticise AI for not being able to do, such as extrapolating to outside our experience.
As a society, we don't even agree on the meaning of each of the initials in "AGI", and many of us use the triplet to mean something (super-intelligence) that isn't even one of those initials. For your claim to be true, AGI has to mean a higher standard than "intern of all trades, senior of none", because that's roughly what LLMs already do.
Expert-at-everything-level AGI is dangerous because, by definition, it can do anything a human can do[0], and that includes triggering a world war by assassinating an archduke, inventing the atom bomb, and, in at least four cases (Ireland, India, the USSR, Cambodia), killing several million people by mismanaging a country it came to rule through political machinations, which are just another skill.
When it comes to AI alignment, last I checked we don't know what we even mean by the concept: if you have two AIs, there isn't even a metric you can use to say which one is more aligned.
If I gave a medieval monk two lumps of U-238 and two more of U-235, they would not have the means to determine which pair was safe to bash together and which would kill them in a blue flash. That's where we're at with AI right now. And like the monks in this metaphor, we also don't have the faintest idea if the "rocks" we're "bashing together" are "uranium", nor what a "critical mass" is.
Sadly, this ignorance isn't a shield: evolution made us without any intentionality behind it, so we don't know how to recognise "unsafe" when we do it, we don't know whether we might do it by accident, and we don't know how to do it on purpose in order to say "don't do that". Because of this, we may be doing cargo-cult "intelligence" and/or "safety" at any given moment and at any given scale, making us fractally wrong[1] about basically every aspect, including which aspects we should even care about.
[0] If you think it needs a body, I'd point out we've already got plenty of robot bodies for it to control; the software for them is the hard bit
[ ] If you are not intimately familiar with the development of AI, your warnings on safety can be disregarded due to your basic ignorance about the development of AI
[x] If you are intimately familiar with the development of AI, your warnings on safety can be disregarded due to potential conflicts of interest and Kool-Aid drinking
Unbridled optimism lives another day!