Checksumming the data isn't done out of paranoia; it's simply a consequence of having to detect which blocks are unusable before the Reed-Solomon algorithm can be run.
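To make that concrete, here's a minimal Python sketch (the block layout and helper names are made up for illustration): each block is stored alongside its own SHA-256, and a verification pass turns checksum mismatches into exactly the list of erasure positions a Reed-Solomon decoder needs.

```python
import hashlib

def store_block(data: bytes) -> dict:
    # Persist the block together with its own checksum (illustrative layout).
    return {"data": data, "sha256": hashlib.sha256(data).hexdigest()}

def find_erasures(blocks: list) -> list:
    # A block whose contents no longer match its stored checksum is treated
    # as an erasure: we know *where* the damage is, which is exactly what a
    # Reed-Solomon decoder needs in order to rebuild it from parity blocks.
    return [
        i for i, b in enumerate(blocks)
        if hashlib.sha256(b["data"]).hexdigest() != b["sha256"]
    ]

# Four data blocks, one of which silently rots on disk.
blocks = [store_block(bytes([i]) * 8) for i in range(4)]
blocks[2]["data"] = b"\xff" * 8    # simulated bitrot
print(find_erasures(blocks))       # -> [2], handed to the RS decoder as an erasure position
```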
I'd also assume that a sufficient number of these corruption events serves as a signal to "heal" the system by migrating the affected data blocks onto different machines.
Overall, I'd say the things that you mentioned are pretty typical of a storage system, and are not at all specific to S3 :)
The S3 checksum feature applies to objects, so that’s entirely orthogonal to erasure codes. Unless you know something I don’t and SHA256 has commutative properties. You’d still need to compute the object hash independently of any blocks.
It's not entirely orthogonal; RAID5 plus stripe-level CRC (or better) can reliably correct bitrot at any single position in a stripe, whereas RAID5 alone can only report an error. My guess is that S3 and other large object stores have the equivalent of stripe-level checksums for this purpose.
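As a toy illustration of that, assuming a RAID5-style single-parity stripe with a CRC32 per block (the layout is invented for the example): the parity alone only says the stripe is inconsistent, but the per-block CRC identifies which block rotted, and XOR of the survivors rebuilds it.

```python
import zlib
from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR across equally sized blocks.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def make_stripe(data_blocks: list) -> dict:
    # RAID5-style stripe: data blocks plus one XOR parity block,
    # with a CRC32 per block so corruption can be *located*, not just detected.
    blocks = data_blocks + [xor_blocks(data_blocks)]
    return {"blocks": blocks, "crcs": [zlib.crc32(b) for b in blocks]}

def scrub(stripe: dict) -> None:
    # Parity alone only says "something in this stripe is wrong"; the
    # per-block CRCs say *which* block, so XOR of the rest repairs it.
    bad = [i for i, b in enumerate(stripe["blocks"])
           if zlib.crc32(b) != stripe["crcs"][i]]
    if len(bad) == 1:
        i = bad[0]
        survivors = [b for j, b in enumerate(stripe["blocks"]) if j != i]
        stripe["blocks"][i] = xor_blocks(survivors)

stripe = make_stripe([b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"])
stripe["blocks"][1] = b"BBBBXBBB"   # silent bitrot in block 1
scrub(stripe)
print(stripe["blocks"][1])          # -> b'BBBBBBBB', rebuilt from parity
```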
I’m positive something like this is the case. Yet it’s entirely orthogonal to the object hash in the user-facing feature, which would need to be computed separately.
For append-only or write-once objects, or for BLAKE3 and other fully parallelizable hashes, it's possible to store the intermediate hash-function state with each chunk or stripe, so that once the final bytes of the data have been hashed, that state yields the user-facing checksum as well.
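A minimal in-process sketch of that idea, using hashlib's .copy() on SHA-256 as a stand-in for persisting the intermediate state (a real store would serialize the state into the chunk's metadata, and BLAKE3's tree structure extends the trick to parallel writes): each append snapshots the running state, so the last snapshot finalizes into the user-facing checksum without rereading earlier data.

```python
import hashlib

class AppendOnlyObject:
    """Hypothetical append-only object that carries the running SHA-256 state."""

    def __init__(self) -> None:
        self._state = hashlib.sha256()  # running hash over everything appended so far
        self._snapshots = []            # one intermediate-state snapshot per chunk

    def append(self, chunk: bytes) -> None:
        # In a real store this snapshot would be persisted alongside the chunk;
        # hashlib's .copy() stands in for serializing the hash state.
        self._state.update(chunk)
        self._snapshots.append(self._state.copy())

    def checksum(self) -> str:
        # Finalizing the latest snapshot yields the user-facing object hash
        # without rereading any earlier chunks.
        return self._snapshots[-1].hexdigest()

obj = AppendOnlyObject()
obj.append(b"part one, ")
obj.append(b"part two")
assert obj.checksum() == hashlib.sha256(b"part one, part two").hexdigest()
print(obj.checksum())
```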