I-JSON (short for "Internet JSON") is a restricted profile of JSON designed to maximize interoperability and increase confidence that software can process it successfully with predictable results.
Why restrict to 54-bit signed integers? Is there some common language I’m not thinking of that has this as its limit?
Edit: Found it myself, it’s the range where you can store an integer in a double-precision float without error. I suppose that makes sense for maximum compatibility, but it feels gross if we’re already identifying value types. I don’t come from a web-dev/JS background, though, so maybe it makes more sense there.
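(For what it’s worth, that range can be checked directly; a minimal sketch in TypeScript, where every number is an IEEE 754 double:)

```ts
// Both endpoints of the exactly-representable integer range
// survive a JSON round trip without error.
const lo = -(2 ** 53) + 1;  // -9007199254740991
const hi = (2 ** 53) - 1;   //  9007199254740991
console.log(JSON.parse(JSON.stringify(lo)) === lo);  // true
console.log(JSON.parse(JSON.stringify(hi)) === hi);  // true
```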
Because number is a double, and IEEE 754 gives the mantissa of a double-precision number 53 bits of precision (52 stored bits plus one implicit leading bit), plus a sign bit.
Meaning, it’s the highest integer precision that a double-precision value can express.
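That boundary is exposed directly in JavaScript; a short sketch in TypeScript (every JS number is an IEEE 754 double):

```ts
// 2^53 - 1 is the largest integer below which every integer
// is exactly representable as a double; JS names it MAX_SAFE_INTEGER.
const max = Number.MAX_SAFE_INTEGER;         // 9007199254740991 === 2 ** 53 - 1
console.log(max === 2 ** 53 - 1);            // true
console.log(Number.isSafeInteger(max));      // true
console.log(Number.isSafeInteger(max + 1));  // false: 2^53 is representable, but no longer "safe"
```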
I suppose that makes sense for maximum compatibility, but feels gross if we’re already identifying value types.
It’s not about compatibility. It’s because JSON has only a single number type, which covers both floating point and integers, and number is implemented as a double-precision value. If you have to express integers with a double-precision type, once you go beyond 53 bits you start to lose precision, which goes completely against the notion of an integer.
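To make that loss of precision concrete, a minimal sketch in TypeScript (the same behaviour applies in any language that parses JSON numbers into doubles):

```ts
// Above 2^53, adjacent doubles are 2 apart, so odd integers
// silently round to an even neighbour.
const n = 2 ** 53;                            // 9007199254740992
console.log(n + 1 === n);                     // true: 2^53 + 1 rounds back to 2^53
console.log(n + 2);                           // 9007199254740994, exactly representable
console.log(JSON.parse("9007199254740993"));  // prints 9007199254740992
```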
I don’t think you realize just how much code is written in JavaScript these days.