Error rates for log-gamma
I've ported the test data from Boost for log-gamma and gamma, and thought the results might be helpful. In some cases OCT does better than GSL and Rmath; on other intervals it does worse, sometimes much worse. See, for example, the intervals near 0, -10, and -55.
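For context, the errors below are relative errors expressed in units of machine epsilon (ε). A minimal sketch of that measurement, assuming double-float inputs; `error-in-eps` is my placeholder name, not the harness's:

```lisp
;; Relative error of COMPUTED against a REFERENCE value, in units of
;; double-float epsilon. Sketch only; the real harness is in the attached log.
(defun error-in-eps (computed reference)
  (if (zerop reference)
      (/ (abs computed) double-float-epsilon)  ; fall back to absolute error
      (/ (abs (/ (- computed reference) reference))
         double-float-epsilon)))
```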
| Test | OCT | Boost | GSL 2.1 | Rmath 3.2.3 |
|---|---|---|---|---|
| Factorial | Max = 2.10e+1ε (Mean = 1.32e+0ε) | Max = 0ε (Mean = 0ε) | Max = 33.6ε (Mean = 2.78ε) | Max = 1.55ε (Mean = 0.592ε) |
| near 0 | Max = 8.35e+15ε (Mean = 2.42e+15ε) | Max = 0ε (Mean = 0ε) | Max = 5.21ε (Mean = 1.57ε) | Max = 0ε (Mean = 0ε) |
| near 1 | Max = 3.91e+1ε (Mean = 1.45e+1ε) | Max = 0ε (Mean = 0ε) | Max = 442ε (Mean = 88.8ε) | Max = 7.99e+04ε (Mean = 1.68e+04ε) |
| near 2 | Max = 3.91e+1ε (Mean = 1.46e+1ε) | Max = 0ε (Mean = 0ε) | Max = 1.17e+03ε (Mean = 274ε) | Max = 2.63e+05ε (Mean = 5.84e+04ε) |
| near -10 | Max = 6.91e+15ε (Mean = 1.70e+15ε) | Max = 0ε (Mean = 0ε) | Max = 24.9ε (Mean = 4.6ε) | Max = 4.22ε (Mean = 1.26ε) |
| near -55 | Max = 1.82e+14ε (Mean = 8.74e+13ε) | Max = 0ε (Mean = 0ε) | Max = 7.02ε (Mean = 1.47ε) | Max = 250ε (Mean = 60.9ε) |
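Each row's Max/Mean is taken over all test points in that interval. A hypothetical sketch of how one row could be produced, assuming the ported Boost data is a list of `(x . expected)` pairs and `log-gamma` is the function under test (both names are placeholders, not the actual API):

```lisp
;; Maximum and mean epsilon-error over one interval's test points,
;; using ERROR-IN-EPS from above. LOG-GAMMA is an assumed name.
(defun interval-errors (points)
  (loop for (x . expected) in points
        for err = (error-in-eps (log-gamma x) expected)
        maximize err into max-err
        sum err into total
        count t into n
        finally (return (values max-err (/ total n)))))

;; e.g. (interval-errors *near-0-points*) would yield the Max and Mean
;; for the "near 0" row, given that interval's test points.
```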
This is with double-floats. Could the discrepancy be related to Common Lisp having 62 bits to work with rather than the 64 bits of the C implementations? Even if that were the case, I'd expect some difference, but not the orders of magnitude that some of the intervals show.
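For what it's worth, a quick REPL check (my addition, not from the log) of what precision SBCL double-floats actually carry; any tag-bit limit applies to fixnums, while double-floats are boxed IEEE-754 binary64 values:

```lisp
;; Standard introspection of double-float precision on SBCL.
(float-radix 1.0d0)   ; => 2
(float-digits 1.0d0)  ; => 53, the full binary64 significand
double-float-epsilon  ; => ~1.11d-16, about 2^-53
```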
I've attached the log file from the SBCL test run here, which has all the details.