Hacker News

I spent years thinking it's crazy to get your morals from a random book, but then I read this stuff and I understand the pitfalls of leaving it all wide open. This idea that we each define our own morality works extremely poorly. I used to think religion was a product of the old times of almost nonexistent education and, honestly, dumb "ancient world" people (lack of nutrition, lead poisoning...).

But I don't know; clearly smart and educated people can believe extremely dumb things. The smarter they are, the weirder the setups they get themselves into. One almost comes to appreciate the existence of religions with many centuries of history to "chill out" and see what works, instead of any of the new age cults.



I’m not religious and am open-minded about new stuff, but I ultimately decided the ancient Stoics' system of morals works perfectly for me, and it is more time-tested than even Christianity. I think the rationalists' decision to use consequentialist morality in everyday life is a huge mistake, and impossible to get right in practice. Humans cannot predict the future outcomes of our actions very well, but we can easily remember and follow a simple set of core values.


If you look at most great evil perpetrated by mankind, it's almost always someone trying to "do good". Sure, there are serial killers and crimes of passion, but they kill only one or a few. The people who have killed millions, driven whole regions into famine and death, destroyed lives on an industrial scale... they were 'doing good'.

Utilitarian consequentialism is the most evil of all philosophies. If someone is merely sadistic, their sadism will be sated after some finite amount of abuse. If someone is greedy, their harm will at least be limited to what they can profit from, and there is no profit in ruling over a cinder. But the harm possible from someone convinced that their actions are good in some abstract sense is without any bound.

The irony in the lesswrongers' fixation on consequentialist morality is that their great fear of machine superintelligence is derived from an expectation that its evil will arise from the same sorts of reasoning they engage in themselves. They simply fail to pause and ask, "Are we the baddies?"

I usually leave it out of my complaints because they're so ineffectual that it's not a real threat. But the LW solution to "unaligned" machine intelligence is to create an AI god in their own image first, so that it can enslave humanity and all other lifeforms within its lightcone for their own best interest, and suppress the creation of any competing "unaligned" god. So they're literally out to create the very thing they fear, but with the hubris to imagine that if it were theirs, it would be "good". That is the ultimate fantasy of both the utilitarian consequentialist and most authoritarian mass murderers. If their doomsday fears come true, my bet is that it will be at their own hand or that of their followers.

It's also the ultimate horseshoe for militant internet atheists-- "there is no God; and this is OK" becomes "there is no God; and we're gonna create one in our own image".


Interesting observation that the rationalists' doomsday AI scenarios are all essentially just the obvious horrific consequences of utilitarian consequentialism backed by unlimited power, yet they insist on trying to live by acting this way themselves.



