The following examples demonstrate how numbers are represented internally and how the precision setting affects numeric comparisons:

set decimals to 18 // to see as many digits as possible

set precision to 16 // maximum

? 0.5 // 0.500000000000000000 exact

? 0.375 // 0.375000000000000000 exact

? 0.4 // 0.400000000000000022 16 digits precision

? 1/9 // 0.111111111111111105 16 digits precision

? 12345.4 // 12345.399999999999640000 11 digits precision

? 123456789.4 // 123456789.400000006000000000 7 digits precision
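
The shrinking count of reliable fractional digits reflects the underlying IEEE 754 double-precision format, which carries roughly 15 to 16 significant decimal digits in total: the more digits the integer part uses, the fewer are left for the fraction. As a side illustration only (this is Python, not the language documented here, and is merely a sketch of the same idea), the exact value actually stored for each literal can be displayed like this:

# Python sketch: show the exact binary value stored for each literal.
# Decimal(x) converts the stored double exactly, with no extra rounding.
from decimal import Decimal

for x in (0.5, 0.4, 12345.4, 123456789.4):
    print(x, "is stored as", Decimal(x))

The printed values make it clear where each literal's exact stored value departs from what was typed. The examples continue below with subtraction and comparison of these values.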

? 12345.4 - 12345 // 0.399999999999636202 11 digits precision

? 12345.4 - 12345 == 0.4 // False, too much precision attempted

set precision to 10

? 12345.4 - 12345 == 0.4 // True

set decimals to 0 // Has no effect on comparisons

? 12345.4 - 12345 == 0.4 // Still True

set precision to 16

? 12345.4 - 12345 == 0.4 // Still False

set precision to 11

? 12345.4 - 12345 == 0.4 // True

set precision to 12

? 12345.4 - 12345 == 0.4 // True

Note that the final comparison at 12 digits returns True only because the calculated value and the literal 0.4 happen to round to the same 12-digit result. In fact, there are only 11 digits of precision in the calculated value; the 12th digit is the first rounding digit.
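
For readers who want the comparison rule spelled out, the following sketch (again in Python, purely illustrative; it is not how set precision is implemented) rounds both operands to a chosen number of significant digits before comparing, and it reproduces the True/False pattern shown above. The helper name approx_equal is made up for this illustration:

# Python sketch: compare two doubles at a limited number of significant digits.
# This only models the behaviour demonstrated above.
def approx_equal(a, b, sig_digits):
    # Round both values to sig_digits significant digits by formatting them
    # in scientific notation, then compare the rounded text.
    fmt = "{:." + str(sig_digits - 1) + "e}"
    return fmt.format(a) == fmt.format(b)

diff = 12345.4 - 12345               # stored as 0.399999999999636202...
print(approx_equal(diff, 0.4, 16))   # False - the values differ within 16 digits
print(approx_equal(diff, 0.4, 12))   # True  - both round to 0.400000000000
print(approx_equal(diff, 0.4, 11))   # True
print(approx_equal(diff, 0.4, 10))   # True

At 12 digits both operands still collapse to the same rounded value because the calculated result's 12th digit is exactly where rounding takes over, which is the point made in the note above.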