
I oppose regulating what calculations humans may perform in the strongest possible terms.


Ten years ago, even five years ago, I would have said exactly the same thing. I am extremely pro-FOSS.

Forget the particulars for just a moment. Forget arguments about the probability of the existential risk, whatever your personal assessment of that risk is.

Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity, based solely on their unilateral assessment of those risks?

Because lately it seems like people can't even agree on that much, or worse, won't even answer the question without dodging it and playing games of rhetoric.

If we can agree on that, then the argument comes down to: how do we fairly evaluate an existential risk, taking it seriously, and determine at what point an existential risk becomes sufficient that people can no longer take unilateral actions that incur that risk?

You can absolutely argue that you think the existential risk is unlikely. That's an argument that's reasonable to have. But for the time when that argument is active and ongoing, even assuming you only agree that it's a possibility rather than a probability, are we as a species in fact capable of handling even a potential existential risk like this by some kind of consensus, rather than a free-for-all? Because right now the answer is looking a lot like "no".


No, we can't. People have never trusted each other enough to accept the risk of being marginalised in the name of safety. We don't trust people. Other people are out to get us, or to get ahead. We still think mostly in tribal logic.

If they say "safety" we hear "we want to get an edge by hindering you", or "we want to protect our nice social position by blocking others who would use AI to bootstrap themselves". Or "we want AI to misrepresent your position because we don't like how you think".

We are adversaries that collaborate and compete at the same time. That is why open source AI is the only way ahead: it places the least amount of control of some people over others.

Even AI safety experts accept that humans misusing AI is a more realistic scenario than AI rebelling against humans. The main problem is that we know how people think and we don't trust them. We are still waging holy wars among ourselves.


>Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity, based solely on their unilateral assessment of those risks?

No, we cannot, because that isn't practical. Any of the nuclear-armed countries could launch a nuclear strike tomorrow (hypothetically - but then again, isn't all "omg AI will kill us all" hypothetical, anyway?) - and they absolutely do not need the consent of humanity, much less of their own citizenry.

This is, honestly, not a great argument.


>Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity, based solely on their unilateral assessment of those risks?

Politicians do this every day.


at least the population had some say over their appointment and future reappointment

how do we get Sam Altman removed from OpenAI?

asking for a (former) board member


> Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity

This has nothing to do with should. There are at the very least a handful of people who can, today, unilaterally take risks with the future of humanity without the consent of humanity. I do not see any reason to think that will change in the near future. If these people can build something that they believe is the equivalent of nuclear weapons, you better believe they will.

As they say, the cat is already out of the bag.


Hmm.

So, wealth isn't distributed evenly, and computers of any specific capacity are getting cheaper (not Moore's Law any more, IIRC, but still getting cheaper).

Suppose there's a threshold that requires X operations and currently costs Y dollars, and that only a few thousand individuals (and rather more corporations) can afford it.

Halve the cost, either by cheaper computers or by algorithmic reduction of the number of operations needed, and you much more than double the number of people who can do it.
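A minimal sketch of that claim, not from the thread: if wealth has a heavy (Pareto) tail, the number of people above a cost threshold scales as cost^(-alpha), so halving the cost multiplies the count by 2^alpha, which is more than 2 whenever alpha > 1. The alpha value, minimum-wealth parameter, cost figure, and population size below are all hypothetical, chosen only to illustrate the scaling.

  # Illustrative sketch: affordability under an assumed Pareto-tailed wealth
  # distribution. All parameter values are hypothetical.

  alpha = 1.5          # assumed Pareto tail exponent for wealth
  w_min = 10_000       # assumed lower bound of the modelled tail (dollars)
  population = 8e9     # people covered by the model (rough, illustrative)

  def can_afford(cost):
      """Expected number of people with wealth above `cost` under Pareto(alpha)."""
      return population * (w_min / cost) ** alpha

  cost = 100e6  # hypothetical price of the compute threshold today
  print(f"at ${cost:,.0f}: {can_afford(cost):,.0f} people")     # ~8,000
  print(f"at ${cost/2:,.0f}: {can_afford(cost/2):,.0f} people")  # ~22,600
  # Halving the cost multiplies the count by 2**alpha ≈ 2.8, i.e. well more than double.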


> Can we agree that people should not be able to unilaterally take existential risks with the future of humanity without the consent of humanity, based solely on their unilateral assessment of those risks?

No we cannot, at least not without some examples showing that the risk is actually existential. Even if we did "agree" (which would necessarily mean an international treaty), the situation would be volatile, much like nuclear non-proliferation and disarmament. Even if signatories did not secretly keep a small AGI team going (which is very likely they would), they would restart one as soon as there was any doubt about a rival sticking to the treaty.

More than that, international pariahs would not sign, or would sign and ignore the provisions. Luckily Iran, North Korea and their friends probably don't have the resources and people to get anywhere, but it's far from a sure thing.


Humans can't handle potential existential risks. The Moloch trap is that everyone signs the paper and immediately subverts it. In the painfully predictable scenario, "only criminals have guns", and the cops aren't on your side.


Given how dangerous humans can be (they can invent GPT-4), maybe we should just make sure education is forbidden and educated people are jailed. Just to be sure. /s



