LimitedPrecisionReal.st
changeset 22787 b4f6d9a8bf5e
parent 22786 7375ac3023c4
child 22851 df11e296e70d
--- a/LimitedPrecisionReal.st	Tue May 08 10:40:28 2018 +0200
+++ b/LimitedPrecisionReal.st	Tue May 08 10:49:41 2018 +0200
@@ -167,31 +167,38 @@
     For this, the bulk numeric containers are provided, which keep the elements unboxed and properly aligned.
     Use them for matrices and large numeric vectors. They also provide some optimized bulk operation methods,
     such as adding, multiplying etc.
+    Take a look at FloatArray, DoubleArray, HalfFloatArray etc.
 
     
     Comparing Floats:
     =================
-    Due to rounding errors (usually on the last bit), you shalt not compare two floating point numbers
-    using the #= operator. For example, the value 0.1 cannot be represented as a sum of powers of two fractions,
+    Due to rounding errors (usually in the last bit(s)), thou shalt not compare two floating point numbers
+    using the #= operator. For example, the value 0.1 cannot be represented as a finite sum of powers-of-two fractions,
     and will therefore always be an approximation with a half bit error in the last bit of the mantissa.
     Usually, the print functions take this into consideration and return a (faked) '0.1'.
-    However, this half bit error may accumulate, for example, when multiplying that by 10, the error may get large
-    enough to be no longer pushed under the rug by the print function, and you will get '0.9999999999999' from it.
+    However, this half bit error may accumulate; for example, when multiplying that value by 0.1 and then by 100,
+    the error may become large enough that it is no longer pushed under the rug by the print function,
+    and you will get something like '0.9999999999999' from it.
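+    You can watch this drift in a workspace; the exact strings printed depend on the
+    print function's rounding and may differ between systems:
+        0.1 printString.
+        (0.1 * 0.1) printString.
+        (0.1 * 0.1 * 100) printString.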
 
-    Also, comparing against a proper 1.0 (which is representable as an exact power of 2), you will get a false result.
-    (i.e. 0.1 * 10 ~= 0.1)
+    Also, when comparing against a proper 1.0 (which is exactly representable, being a power of 2),
+    you will get a false result;
+    i.e. (0.1 * 0.1 * 100 ~= 1.0) and (0.1 * 0.1 * 100 - 1.0) ~= 0.0
     This often confuses non-computer scientists (and occasionally even some of those).
-    For this, you should always provide an epsilon value, when comparing two numbers. The epsilon value is
-    the distance you accept two number to be apart to be still considered equal. Effectively the epsilon says
-    are those nearer than this epsilon?.
+
+    For this reason, you should always provide an epsilon value when comparing two non-integer numbers.
+    The epsilon value is the distance you accept two numbers to be apart while still considering them equal.
+    Effectively, the epsilon comparison asks: "are those two nearer to each other than this epsilon?"
+
     Now we could ask: "is the delta between the two numbers smaller than 0.00001?",
     and get a reasonable answer for big numbers. But what if we compare two tiny numbers?
-    Then a reasonable epsilon must also be much smaller.
+    Then a reasonable epsilon must also be much smaller!!
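+    A naive fixed-epsilon check (the 0.00001 cutoff here is only an arbitrary illustration) would be:
+        (a - b) abs < 0.00001
+    which works around magnitude 1, but is far too coarse for numbers near 1e-10
+    and far too strict for numbers near 1e+10.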
+
     Actually, the epsilon should always be computed dynamically depending on the two values compared.
     That is what the #isAlmostEqualTo:nEpsilon: method does for you. It does not take an absolute epsilon,
     but instead the number of distinct floating point numbers that the two compared floats may be apart.
-    That is: the number of actually representable numbers between those two. Effectively, that is the difference between
-    the two mantissas, when the numbers are scaled to the same exponent, taking the number of mantissa bits into account.
+    That is: the number of actually representable numbers between those two. 
+    Effectively, that is the difference between the two mantissas, 
+    when the numbers are scaled to the same exponent, taking the number of mantissa bits into account.
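+    For example (the nEpsilon argument of 2 is just an illustrative tolerance,
+    allowing the two values to be up to 2 representable floats apart):
+        (0.1 * 0.1 * 100) isAlmostEqualTo: 1.0 nEpsilon: 2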
 
     [author:]
         Claus Gittinger