I was surprised that an article on the problems of decision making under uncertainty reached such a narrow conclusion, without much consideration of the other ways in which we might each fail to get what we hope for from AI.
For example, given the summary of the evidence in the article, I see a risk that AI will be used to create some kind of hyper-meta-super AI that can audit/validate/certify the AI engines and/or their output. When that happens, the next obvious step is to protect the public by mandating that all software be validated by the hyper-meta-super AI. The owner of the hyper-meta-super AI becomes a single enormous winner, small software producers will not be able to afford compliance, and the only remaining exemptions from the resulting software monoculture will be those granted to players too big to fail.