#BUGFIX by cg
class: LimitedPrecisionReal class
comment/format in: #documentation
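The precision differences that the updated class comment illustrates can also be checked outside Smalltalk. Below is a minimal Python sketch (the helpers `to_float32` and `two_sum` are illustrative names, not part of ST/X): it shows the single- vs. double-precision representations of e, and the error-free "two-sum" trick that double-double/quad-double classes in the spirit of QDouble are built on.

```python
import math
import struct

def to_float32(x):
    """Round-trip x through IEEE-754 single precision (32 bits)."""
    return struct.unpack('f', struct.pack('f', x))[0]

e64 = math.e             # IEEE double: ~16 significant decimal digits
e32 = to_float32(e64)    # IEEE single: ~7 significant decimal digits

print(f"{e32:.25f}")     # 2.7182817459106445312500000
print(f"{e64:.25f}")     # 2.7182818284590450907955983
# true value:              2.7182818284590452353602874...

def two_sum(a, b):
    """Error-free addition: returns (s, err) with a + b == s + err exactly.
    Chaining numbers as such (hi, lo) pairs of doubles is the idea behind
    double-double / quad-double soft-float arithmetic."""
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err

# 1e-17 is below the resolution of a single double near 1.0,
# but the (s, err) pair preserves it exactly:
print(two_sum(1.0, 1e-17))   # (1.0, 1e-17)
```

Note how the single-precision value already diverges from e in the eighth significant digit, while the pair returned by `two_sum` retains information that a lone double would silently drop.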
--- a/LimitedPrecisionReal.st Mon Jun 19 17:30:41 2017 +0200
+++ b/LimitedPrecisionReal.st Mon Jun 19 18:27:15 2017 +0200
@@ -104,12 +104,13 @@
The only really portable sizes are IEEE-single and IEEE-double floats (i.e. ShortFloat and Float instances).
These are supported on all architectures.
Some do provide an extended precision floating point number,
- however, the downside is that CPU-architects did not agree on a common format and precision.
+ however, the downside is that CPU architects did not agree on a common format and precision; some use 80 bits,
+ others 96, and others even 128.
See the comments in the LongFloat class for more details.
We recommend using Float (i.e. IEEE doubles) unless higher precision is absolutely required,
and taking care of machine dependencies in the code otherwise.
- For higher precision needs, you may also try the new QDouble class, which gives you >200bits (60digits) of precision
- on all machines (at a performance price, though).
+ For higher precision needs, you may also try the new QDouble class, which gives you >200 bits (60 digits)
+ of precision on all machines (at a noticeable performance price, though).
Range and Precision of Storage Formats:
@@ -146,7 +147,16 @@
QDoubles are special soft floats; slower in performance, but providing 4 times the precision of regular doubles.
-
+ To see the differences in precision:
+
+ '%60.58f' printf:{ 1 asShortFloat exp } -> '2.718281*74591064453125' (32 bits)
+ '%60.58f' printf:{ 1 asFloat exp } -> '2.718281828459045*090795598298427648842334747314453125' (64 bits)
+ '%60.58f' printf:{ 1 asLongFloat exp } -> '2.718281828459045235*4281681079939403389289509505033493041992' (only 80 valid bits on x86)
+
+ '%60.58f' printf:{ 1 asQDouble exp } -> '2.7182818284590452353602874713526624977572470936999595749669*8' (>200 bits)
+
+ correct value is: 2.71828182845904523536028747135266249775724709369995957496696762772407663035354759457138217852516642742746
+
Bulk Containers:
================
If you have a vector or matrix (and especially: large ones) of floating point numbers, the well known