Suchir’s suicide (if it was a suicide) is a tragedy. I happen to share some of his views, and I am negative on the impact of current ML tech on society—not because of what it can do, but precisely because of the way it is trained.
The ends do not justify the means—and it is easy to see the means having wide-ranging systemic effects beyond the ends, even if we pretended those ends were well-defined and planned (which, aside from making a profit, they clearly are not: just think of the nebulous ideas and contention around AGI).
I enjoy using generative AI but have significant moral qualms about how the models are trained. The companies flagrantly ignore copyright law for a significant portion of their training data. The fact that they do enter into licensing agreements with some publishers strongly suggests they know they are breaking the law.