ISO/IEC JTC1/SC22/WG21 N1884

N1884=05-0144
2005-08-26

Further Restrictions on Special Math Functions

P.J. Plauger & Chris Walker
Dinkumware, Ltd.


INTRODUCTION

We've encountered several additional problems in implementing the special
math functions for TR1. You can argue that that's our problem, as
implementers, but we believe that the issue is not so simple. What we've
learned is that the problem areas we've encountered are also problem
areas for potential users of the functions, for reasons we explain below.
Thus, we recommend that the problematic domains, or whole functions in
two cases, be removed from the special math requirements.

The unifying principle at work here is what we've come to call the
*sensitivity* of functions. We follow the conventional rule that a
library function should treat each input value x as if it were exact.
(You don't know where it's been, in any case.) It is the job of the
implementers to return the nearest internal representation corresponding
to the actual value of the function for a given x. Nevertheless, it's
worth asking what happens to the output value if you change x by the
smallest possible amount -- conventionally 1 ULP (unit in the last
place) -- in either direction. We take the larger in magnitude of the
two resulting changes in the function value, also expressed in ULP,
as the *sensitivity* of the function for the input value x.
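
As a concrete (and purely illustrative) sketch of how such a
sensitivity can be estimated, the C++ program below perturbs the
argument by 1 ULP in each direction with the C99 nextafter function
and expresses the larger resulting change in units of the ULP of the
function value. tgamma serves only as a sample function here; nothing
in the proposal depends on this code.

    #include <math.h>
    #include <stdio.h>

    /* Estimate the sensitivity, in ULP of the result, of f at x:
       perturb x by 1 ULP in each direction and take the larger change
       in f(x), measured in ULP of f(x). Assumes f(x) is finite and
       nonzero. */
    double sensitivity(double (*f)(double), double x)
    {
        double fx = f(x);
        double ulp_fx = fabs(nextafter(fx, HUGE_VAL) - fx);
        double up = fabs(f(nextafter(x, HUGE_VAL)) - fx);
        double down = fabs(f(nextafter(x, -HUGE_VAL)) - fx);
        return (up > down ? up : down) / ulp_fx;
    }

    int main()
    {   /* tgamma grows rapidly, so its sensitivity grows with x */
        printf("sensitivity of tgamma at 20.5: %g\n",
            sensitivity(tgamma, 20.5));
        return 0;
    }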

It is well understood that large sensitivity causes problems for the
programmer. A small error in the input value can lead to surprisingly
large errors in the function value. It doesn't help for a library to do
a perfect job if the program cannot guarantee perfect function inputs --
and most programs can't. Large sensitivity is not necessarily an excuse
for the implementers to do a sloppy job, however; if there's any chance
that the input is correct, then the most meaningful output should be
generated. But it should come as no surprise that the biggest
challenges to implementers typically occur where sensitivity is high.
So the community is not well served by demanding great accuracy for
domains that are not likely to be usable in a meaningful program.

The commonest cause of high sensitivity is a very large ratio of
first derivative to function value, |f'(x)/f(x)|. tgamma(x), for
example, is notoriously sensitive for large function values. But the
peculiarities of floating-point arithmetic create other cases as
well:

-- If a function has a zero value at an irrational argument value,
such as expint (which has one such zero) or lgamma (which has many),
the region near the zero has high sensitivity. The implementer can
patch a small number of such zeros, but an infinite number requires
heroic measures that are probably not worthwhile.

-- A cyclic function, such as sin(x) or a Bessel function, gets
progressively more sensitive as the argument grows larger. Put simply,
each ULP change in the argument makes ever larger leaps around the
unit circle, so the sequence of function values becomes erratic (the
sketch following this list makes the effect concrete). Techniques
exist for reducing arguments for many of the cyclic functions; whether
they're worth employing beyond a certain sensitivity is debatable.

-- A really fast-moving function, such as assoc_laguerref for large
order, can actually toggle between plus and minus infinity with 1-ULP
changes in the argument value. Determining the proper sign for
each function value can be both difficult and of only academic
interest.
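
To make the cyclic case concrete, the illustrative sketch below steps
a large argument to sinf through successive representable values.
Near 1.0e8f, one ULP of the argument is 8.0f, far larger than the
period of the sine, so 1-ULP neighbors produce essentially unrelated
results scattered across [-1, 1]. The program assumes a C99-compatible
library providing sinf and nextafterf; its exact output will vary.

    #include <math.h>
    #include <stdio.h>

    int main()
    {   /* At x near 1.0e8f, 1 ULP of the argument is 8.0f, which is
           larger than the period 2*pi of sinf, so each 1-ULP step
           lands at an effectively unrelated point of the cycle. */
        float x = 1.0e8f;
        for (int i = 0; i < 5; ++i)
        {
            printf("sinf(%.9g) = %.9g\n", x, sinf(x));
            x = nextafterf(x, HUGE_VALF);
        }
        return 0;
    }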

The most useful specification for a math library would probably be
one that identifies regions of "high" sensitivity -- and provides
a defensible definition of "high" in the bargain. We at Dinkumware
may eventually produce such a document, but not right now. The
best we can offer at the moment is a few more specification changes
for the special math functions, to eliminate known problem areas.


SUGGESTED CHANGES

assoc_laguerre -- The effect should also be implementation-defined
for m >= 128.

assoc_laguerref -- If n >= 75 and m >= 75, IEEE 32-bit is pretty
meaningless (toggles between plus and minus infinity). For example,
the value of assoc_laguerref(127, 127, 41.85287) is -0.64511883e44
(far outside float range), with a sensitivity of 24920640. We can
probably get the signs right
by storing a table of roots, but it's hard to imagine a program that
would make good use of the function values.
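
As an aside, figures like the one above can be probed with the same
1-ULP perturbation described earlier. The sketch below is illustrative
only: it assumes a TR1 library that supplies assoc_laguerref (the
header through which the TR1 additions are reached varies by
implementation), and no particular output is promised. Since the true
values near this point are on the order of 1e44, far beyond float
range, the most an IEEE 32-bit result can convey is a sign.

    #include <math.h>       /* C99 nextafterf */
    #include <stdio.h>
    #include <cmath>        /* TR1 additions; placement varies */

    int main()
    {   /* evaluate the quoted point and its two 1-ULP neighbors */
        float x = 41.85287f;
        float args[3] =
            { nextafterf(x, 0.0f), x, nextafterf(x, HUGE_VALF) };
        for (int i = 0; i < 3; ++i)
            printf("assoc_laguerref(127, 127, %.9g) = %g\n",
                args[i], std::tr1::assoc_laguerref(127, 127, args[i]));
        return 0;
    }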

assoc_legendre -- The effect should also be implementation-defined
for m >= 128.

assoc_legendref -- If n >= 35 and m >= 35, IEEE 32-bit is pretty
meaningless (toggles between plus and minus infinity). For example,
the value of assoc_legendref(50, 35, 0.17249578) is -0.33007636e52,
with a sensitivity of 27800896.

conf_hyperg -- This function has so many sensitive subdomains that
it is essentially useless computationally (however pretty it is as
a unifying concept mathematically). It should be removed from TR1.

hermitef -- If n >= 47 and |x| < 10, IEEE 32-bit is pretty
meaningless (toggles between plus and minus infinity). For example,
the value of hermitef(127, 0.19673838) is -0.13458131e119,
with a sensitivity of 31215745. We can probably get the signs right
by storing a table of roots, but it's hard to imagine a program that
would make good use of the function values.

hyperg -- This function has so many sensitive subdomains that
it is essentially useless computationally (however pretty it is as
a unifying concept mathematically). It should be removed from TR1.

sph_legendre -- The effect should also be implementation-defined
for m >= 128.