With 10 times the memory usage and 100 times the compute power, maybe you could replace floats with something that behaves more like real numbers and covers mostly the same range.
But the resulting type is still going to have its own limitations and sharp edges. Floats are not the right tool for every job, but they are quite good at the jobs they are right for. Learning how they work is more useful than lamenting their existence.
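To make those sharp edges concrete, here's the classic Python example: 0.1 and 0.2 have no exact binary representation, and once you know that, the result stops being mysterious.

```python
from fractions import Fraction

# 0.1 and 0.2 have no exact binary representation, so the
# nearest doubles get stored instead and the sum misses 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Fraction exposes the exact value the double actually holds:
print(Fraction(0.1))     # 3602879701896397/36028797018963968 (= m / 2**55)
```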
With densely packed decimal (3 digits in 10 bits), you can reduce the space overhead to as little as 2.4% (1024 bit patterns for 1000 values). IEEE 754 has even standardized base-10 floating-point formats (e.g. decimal64). I suspect that with dedicated hardware, you could bring the compute cost down to 2-3x that of binary FP.
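Python's `decimal` module is a handy way to poke at base-10 floating point (it follows the General Decimal Arithmetic spec that the IEEE decimal formats grew out of). A quick sketch, capping precision at decimal64's 16 significant digits:

```python
from decimal import Decimal, getcontext

# Cap precision at 16 significant digits, matching decimal64.
getcontext().prec = 16

# In base 10, 0.1 and 0.2 are exact, so this sum is exact too:
print(Decimal('0.1') + Decimal('0.2'))  # 0.3

# But decimal floats still round; they just round in base 10:
print(Decimal(1) / Decimal(3))          # 0.3333333333333333
```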
However, I read the post I responded to as decrying all floating-point formats, regardless of base. That leaves only fixed point (fancy integers) and rationals. To cover the same range as double precision, you'd need about 2048 bits for either alternative: a double's exponent spans roughly 2^-1022 to 2^1023, so a fixed-point value needs that many bits of integer and fraction, and a rational needs a numerator and denominator of about 1024 bits each. And rational arithmetic is really slow because every operation has to reduce its result with a GCD.
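A rough sketch of why rationals hurt, using Python's `fractions.Fraction` (which reduces through a GCD on every operation): summing the harmonic series keeps the denominator growing, so each exact addition gets slower, while the float version stays at a fixed 64 bits.

```python
from fractions import Fraction
import time

total = Fraction(0)
start = time.perf_counter()
for n in range(1, 2001):
    # Each += reduces the result with a GCD, on ever-larger integers.
    total += Fraction(1, n)
print(f"{time.perf_counter() - start:.3f}s for 2000 exact additions")

# The denominator is now thousands of bits wide; the float loop
# below does the same work in constant space and far less time.
print(total.denominator.bit_length())
print(sum(1 / n for n in range(1, 2001)))
```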