
Let's try to rewrite this in a somewhat more dispassionate style:

A pragmatic perspective requires accepting present reality as it is, rather than hypothesizing an exaggerated version of what could be. Not all concerns surrounding existential risks in technology are grounded in empirical evidence. When it comes to artificial intelligence, for instance, current models operate at speeds vastly exceeding human cognition. However, this does not equate to sentient consciousness or personal motivation. The projection of human traits onto these models may be misplaced, as AI systems do not possess inherently human drives or desires.

Many misconceptions about reinforcement learning and its capabilities abound. The development of systems that can translate abstract objectives into detailed subtasks remains a distant prospect. There seems to be a pervasive certainty about the risks associated with these models, yet concrete evidence of such dangers is still wanting.

This belief system, one might argue, shares certain characteristics with a doomsday cult. There is a narrative that portrays a small group of technologists as our only defense against a looming, catastrophic end. These artificial intelligence models, which were engineered after extensive research, are often misinterpreted as inscrutable entities capable of outsmarting and eradicating humanity, while simultaneously being so simplistic as to obsess over trivial tasks.

Alternatively, these AI models could be viewed as valuable tools for knowledge compression and distribution, enabling the advancement of civilization. As a result, societal education levels could improve and the cost of goods and services might decrease, potentially enriching human life on a global scale. While there is a tendency to worry about every potential hazard, optimism about the future is not unfounded given the trajectory of human progress.

There are certainly different perspectives on this issue. Some adhere to a more fatalistic viewpoint, while others are working towards a brighter future for humanity. Regardless, once the present fears subside, everyone is invited to participate in shaping our collective future.



Hahaha, thanks ChatGPT! This is better said than my snarky, frustrated-at-the-FUD version, and I can learn from the approach.


No, it's really not, because your riff on 'shoggoths that are both so brilliant as to be dangerous, yet so stupid that they maximize paperclips' touches on an important point that the summarized version completely omits.

AI is exactly that kind of stupid. What it lacks isn't 'brilliance' but intentionality. It can do all sorts of rhetorical party tricks, including those that are good at influencing humans; it can even, very likely, work out WHICH lines of argument are good at influencing humans from context. And yet it has no intentionality. It's wholly incapable of thinking 'wait, I'm making people turn the world to paperclips. This is stupid.'

So it IS likely to turn its skills to paperclip maximization, or any other hopelessly quixotic and destructive pursuit. It just needs a stupid person to ask it to do that… and we're not short of stupid people.

So what you said was better, snark and all :)



