The book may be wrong, but the idea behind it is likely correct: the book says 10 bytes, while the program below says 8 bytes.
x86 CPUs used to off-load floating point operations to an x87 coprocessor. That coprocessor used an extended precision floating point format that consists of 80 bits: 1 sign bit, 15 exponent bits, and a 64-bit significand (no implicit integer bit). On x86 machines, "long double" refers to this extended precision format.
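If you want to check this on your own machine, here is a small sketch of mine (not from the book) that queries the format through std::numeric_limits; the values in the comments are what I'd expect on x86:

#include <iostream>
#include <limits>

int main() {
    using lim = std::numeric_limits<long double>;
    // On x86 (x87 extended precision) this prints 64 and 16384:
    // a 64-bit significand (with explicit integer bit) and a 15-bit exponent field.
    std::cout << "significand bits: " << lim::digits << '\n';
    std::cout << "max exponent:     " << lim::max_exponent << '\n';
    return 0;
}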
Even though there's only 80 bits of actual data (10 bytes), these numbers are padded up to the next multiple of the system's word size to preserve alignment in memory, and that's why sizeof(long double) is 12 bytes (96 bits) on a 32-bit x86 machine and 16 bytes (128 bits) on an x86-64 machine. Note that sizeof includes the padding.
AFAIK, there are no common ARM CPUs that implement the x87 format, so long double is just an IEEE-754 double precision float on 32-bit ARM, and an IEEE-754 quad precision float on 64-bit ARM. These give you sizeof(long double) of 8 and 16 respectively (without any padding).
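You can see the padding (or lack of it) directly with a quick sketch like this one of mine; the comments list the sizes I'd expect per platform, but the actual values depend on the compiler and ABI:

#include <cstdio>

int main() {
    // Typical results: 12 on 32-bit x86, 16 on x86-64,
    // 8 on 32-bit ARM, 16 on AArch64.
    std::printf("sizeof(long double)  = %zu\n", sizeof(long double));
    std::printf("alignof(long double) = %zu\n", alignof(long double));
    return 0;
}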
Note that the C and C++ standards allow implementations to use the same precision for double and long double. In my opinion, this is quite silly: Either double is enough for your application, and you should use double everywhere, or double is not precise enough, and in that case you want a type that is precise enough on all platforms, and not something that silently gets downgraded to double on 32-bit ARM.
If you truly need higher precision than what's offered by double, you can use the new C++23 <stdfloat> types such as std::float128_t, which is the same on all platforms (if the compiler supports it).
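For example (my sketch, assuming a C++23 compiler such as a recent GCC on x86-64 that provides std::float128_t and the f128 literal suffix):

#include <cstdio>
#include <stdfloat>

int main() {
    // std::float128_t is IEEE-754 binary128 (113-bit significand, 15-bit
    // exponent) on every platform that provides it, unlike long double.
    std::float128_t x = 1.0f128 / 3.0f128;
    std::printf("sizeof(std::float128_t) = %zu\n", sizeof x);  // 16 bytes
    return 0;
}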