swiftcoder wrote:
The IEEE specification doesn't fully specify all bits in a float for all values. Many architectures/implementations have multiple different representations of zero (and often many for -0 as well), INF and NAN are also quite badly defined.
As long as IEEE 754 is implemented correctly (the math on the Cell BE, for example, is not IEEE 754 compliant), all results except NaNs are bit-identical, provided the same settings are used (rounding mode, precision, and so on). Even for NaNs the meaning, quiet or signaling, is exactly defined; only the bit pattern may vary between architectures. So as long as you interpret a NaN on the same architecture that produced it, you get identical results.
x86 FPU and SSE differ in regard to subnormal numbers, I assume the FPU is non-compliant in this regard since PowerPC yields the same results as SSE in my tests.
That does not mean you get the same results on two different architectures (even if both are x86) without special precautions.
swiftcoder wrote:
Also remember that x86 and newer actually deal with 80-bit floats, and truncate before storing in a 32-bit memory location, which can cause problems of its own.
That's not true when you are using SSE: there the precision depends on the instruction, 32-bit or 64-bit, with no extended intermediate.
x86_64 compilers by default use SSE instead of the old x87 FPU registers to do math, so floats and doubles are computed in their true 32-bit / 64-bit precision.
Most of the time you can ignore these problems, but they explain why comparing two floats with == or != is a bad idea. If you are writing a lock-stepped simulation, like an RTS, you have to take care of these issues. Another case is a replay system that stores only the inputs to drive the simulation.