The concept of AGI is so poorly defined that it’s hard for me to even take it seriously. And yet some people have leapfrogged over that and gone directly to existential doom. Part of me thinks it’s a result of shifting conversations about real, known X-risk like climate change, for which the solutions are hard but entirely possible, to conversations where not only is the solution unknown, it is unknowable, because the problem is not defined.
It can be helpful to replace “AI” with “automated systems”, because that is a more circumscribed concept. There are plenty of practical concerns about automated systems. The primary X-risk would be automation eviscerating the economy. Another highly salient issue is automation exacerbating existing inequality. A third would be corporatism: the widening gap between our regulatory capabilities and corporate power, driven by corporate capture and technological progress.
But these are passé issues to the AGI crowd. It’s not cool to talk about that. They’d rather talk about, let’s face it, science fiction that demands to be taken seriously.
Part of me thinks it’s a result of shifting conversations about real, known X-risk...
I don't believe that "conversations" are going to solve any problem, or pose any challenge to them. More likely these fears are popular because they're the only reaction that generates enough clicks.
The thing is so new that the people doing something really interesting and productive with it are too busy to weigh in.
The primary X-risk would be automation eviscerating the economy.
We don't need AGI to have transformational AI that is extremely disruptive to our civilization. With some improvements, LLMs threaten to make a lot of people's white-collar jobs redundant. We've also seen AI flying jet fighters better than human pilots, drone autonomy, all kinds of military applications, and Boston Dynamics robots positioned to replace manual workers. None of these qualify as AGI, but AI doesn't need to be sentient or truly conscious in order to
(a) potentially self-replicate, if properly set up for it
(b) centralize a lot of power in the hands of a few
"Automated systems" works for me. The problem, imho, is hooking up the wheelworks of society to an automated system that doesn't work properly. Human error is somehow more accepted.