I think the spec just means: assume IEEE 754. In the case of 0.1, which cannot be represented exactly, software should assume that `0.1` will be represented as the nearest binary64 value, approximately `0.100000000000000005551115123126`. Relying on `0.1` being parsed as the exact value 0.1 is not widely interoperable.
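For illustration, here is a small Python sketch (assuming CPython, whose `float` is binary64) that prints the exact value the double nearest to 0.1 actually holds:

```python
from decimal import Decimal

# Decimal(float) converts the binary64 value exactly, so this prints every
# digit of the double nearest to 0.1.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```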
Relatedly, what about integers like 9007199254740995? Is that a legal integer, since it rounds to 9007199254740996?
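A quick way to check this (a Python sketch, binary64 floats assumed) is to see which double that integer rounds to:

```python
# 2**53 == 9007199254740992; above that, consecutive doubles are 2 apart,
# so the odd integer 9007199254740995 has no exact representation.
print(float(9007199254740995))                              # 9007199254740996.0
print(float(9007199254740995) == float(9007199254740996))  # True
```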
It does seem unclear what it means to exceed precision (given that rounding is such an expected part of how we use these numbers). Magnitude feels easier, as at least you definitely run out of bits in the exponent.
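As a rough illustration of running out of exponent bits (Python sketch, binary64 assumed):

```python
import sys

print(sys.float_info.max)  # 1.7976931348623157e+308, the largest finite double
print(float("1e309"))      # inf: the requested magnitude exceeds the exponent range
```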
I think the spec is saying that it is the message that should not express greater magnitude or precision, not 'the number'.
So including the string "0.1" in a message is fine because v = 0.1 implies 0.05 < v < 0.15, but including `0.100000000000000000000000000000000000` would not be.
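Both strings parse to the same double, so the extra digits only assert precision the format cannot carry. A minimal Python check (binary64 assumed):

```python
# The long literal and "0.1" round to the same binary64 value.
print(float("0.1") == float("0.100000000000000000000000000000000000"))  # True
```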
I'm confused by this.
What is the precision of 0.1, relative to IEEE 754?
If I read it correctly, that statement is saying:
^ How do I calculate these values?
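One way to calculate them, assuming "these values" means the exact decimal expansion of the stored double and its nearest representable neighbours (Python 3.9+ for `math.nextafter`):

```python
import math
from decimal import Decimal

x = 0.1
exact = Decimal(x)                       # exact value of the double that stores 0.1
below = Decimal(math.nextafter(x, 0.0))  # nearest representable double below
above = Decimal(math.nextafter(x, 2.0))  # nearest representable double above

print(exact)
print(below)
print(above)
# Any decimal string closer to `exact` than to `below` or `above` parses back
# to the same double as "0.1".
```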