Storage buffers are blocks of 32-bit values in memory, so why not just use a 32-bit texture as a storage buffer? Is their implementation actually any different?
You can't do random writes to textures in WebGL, and random write access is something the vast majority of algorithms require. Some hacks exist, but all of them come with severe limitations and performance penalties.
Treating a texture as a 'poor man's storage buffer' often works, but it's much more awkward than a real storage buffer, and you're giving up a lot of usage scenarios where WebGPU is simply more flexible, even simple things like populating a storage buffer in a compute shader and then binding it as a vertex or index buffer (sketched below).
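To make that vertex-buffer example concrete, here's a rough sketch using the WebGPU JavaScript API. The WGSL, buffer layout, and names like `fillPipeline` are just illustrative, not anything from the spec or the thread; the point is that one buffer carries both `STORAGE` and `VERTEX` usage, which has no WebGL equivalent:

```ts
// Illustrative sketch: fill a buffer in a compute shader, then bind the
// same buffer as a vertex buffer. Assumes a browser with WebGPU enabled.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter!.requestDevice();

const vertexCount = 3;

// One buffer, usable as a storage buffer (compute writes) and a vertex buffer (render reads).
const positions = device.createBuffer({
  size: vertexCount * 2 * 4, // one vec2<f32> per vertex
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.VERTEX,
});

const fillModule = device.createShaderModule({
  code: /* wgsl */ `
    @group(0) @binding(0) var<storage, read_write> positions : array<vec2f>;

    @compute @workgroup_size(64)
    fn main(@builtin(global_invocation_id) id : vec3u) {
      if (id.x >= arrayLength(&positions)) { return; }
      // Random-access write: each invocation writes its own slot.
      let angle = f32(id.x) * 2.094395; // ~120 degrees apart
      positions[id.x] = vec2f(cos(angle), sin(angle)) * 0.8;
    }
  `,
});

const fillPipeline = device.createComputePipeline({
  layout: 'auto',
  compute: { module: fillModule, entryPoint: 'main' },
});

const bindGroup = device.createBindGroup({
  layout: fillPipeline.getBindGroupLayout(0),
  entries: [{ binding: 0, resource: { buffer: positions } }],
});

const encoder = device.createCommandEncoder();
const computePass = encoder.beginComputePass();
computePass.setPipeline(fillPipeline);
computePass.setBindGroup(0, bindGroup);
computePass.dispatchWorkgroups(Math.ceil(vertexCount / 64));
computePass.end();

// Later, in a render pass, the exact same buffer is just a vertex buffer:
//   renderPass.setVertexBuffer(0, positions);
//   renderPass.draw(vertexCount);

device.queue.submit([encoder.finish()]);
```

No readback, no copy through a texture, no encoding tricks: the compute output feeds the vertex fetch directly.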
It would have made sense five years ago, when it wasn't clear that WebGPU would be delayed for so long, but now that WebGPU support in browsers is actually close to the finish line, it's probably not worth the hassle.