Azure has the LS series of VMs [1], which can have up to ten 1.92 TB NVMe disks attached directly to the host. We use these for spilling big data to disk during high-performance computation rather than for single-machine persistence, so we don't bother with RAID replication in the first place.
Though it is a bit disappointing that while Microsoft advertises this as "good for distributed persistent stores", there are no obvious SLAs that I could rely on for actually trusting such a cluster with my data persistence.
Well, attaching 10 PCIe devices will give your CPU a very hard time if all of them are supposed to be kept busy. The speed of copying from memory, or between devices, becomes the bottleneck. Another problem is that such a machine also needs a huge amount of memory for that copying to work, and if you want it to work well, you need high-end hardware to actually pull it off. In such a system, your CPU will prevent you from exploiting the possible benefits of parallelization. It seems beefy, but it's entirely possible that a distributed solution you could build for a fraction of the cost would perform just as well.
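To make the bottleneck concrete, here's a back-of-envelope sketch. The per-drive and DRAM-bandwidth figures are assumptions for illustration, not Azure specs:

```python
# Back-of-envelope: can the memory bus keep up with 10 NVMe drives?
# All figures below are assumed for illustration, not measured.
NVME_SEQ_GBPS = 3.5    # assumed per-drive sequential throughput (GB/s)
N_DRIVES = 10
MEM_BW_GBPS = 200.0    # assumed aggregate DRAM bandwidth of the host (GB/s)

aggregate_io = NVME_SEQ_GBPS * N_DRIVES  # raw device throughput, combined

# Spilling data typically touches memory at least twice per byte
# (produce the buffer, then DMA it out), so effective demand doubles:
mem_demand = 2 * aggregate_io

print(f"aggregate NVMe throughput: {aggregate_io:.0f} GB/s")
print(f"memory traffic to sustain it: ~{mem_demand:.0f} GB/s "
      f"({100 * mem_demand / MEM_BW_GBPS:.0f}% of assumed DRAM bandwidth)")
```

Even with generous assumptions, keeping all ten drives saturated eats a large slice of the machine's memory bandwidth before the CPU does any useful work with the data.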
This situation may not be reflected in Azure pricing (the calculator gives $7.68/h for L80as_v3): if MS has such hardware, it would be a waste to let it stand idle, so they're incentivized to rent it out even at a discount (their main profit is from traffic anyway). So you may not be getting an adequate reading of the situation if you try to judge it by the price (rather than the cost). And this is only the price of the VM; I'm scared to think how much you'd pay if you actually utilized it to its full potential.
Also, since it claims to have 80 vCPUs, well... it's either a very expensive server, or it's, again, a distributed system where you simply don't see the distributed part. I haven't dealt with such hardware firsthand, but in our DC we have a GigaIO PCIe TOR switch which would, in principle, allow that much memory in a single VM (we don't use it like that). That thing, together with the rest of the hardware setup, costs a six-digit number of dollars. I imagine something similar must exist for CPU sharing / aggregation.
> The high throughput and IOPS of the local disk makes the Lasv3-series VMs ideal for NoSQL stores such as Apache Cassandra and MongoDB.
This is cringe-worthy. Cassandra is an abysmally poor performer when it comes to I/O; it cannot saturate the system at all. For comparison, if you take old-reliable PostgreSQL or MySQL, then with a lot of effort you may get them to dedicate up to 30% of CPU time to I/O. The reason for that relatively low utilization (compared to writing to the disk directly) is the need to synchronize, which doesn't align well with how the disk wants to destage writes.
Cassandra is in a class of its own when it comes to I/O. You'd be happy to hit 2-3% CPU utilization in the same context where PostgreSQL hits 30%. I have no idea what it's doing to cause such poor performance, but if I had to guess: some application logic doing expensive calculations serialized with I/O, or just waiting on mutexes...
So, yeah... someone who wanted Cassandra to perform well would probably need that kind of a beefy machine :D But whether that's sound advice -- I don't know.
[1]: https://learn.microsoft.com/en-us/azure/virtual-machines/las...