Comparison in C++

ISO/IEC JTC1 SC22 WG21 N4367 - 2015-02-08

Lawrence Crowl, Lawrence@Crowl.org

Introduction
    Definition of Equivalence and Order
    Algorithm Order Requirements
Correctness
    Analysis of Floating Point
        Not-a-Number
        Signed Zero
    Other Scalar Types
    User-Defined Types
    Boolean Comparison Functions
    Consistency Between Relations
    Algorithms for Comparing Composite Types
        Memory Representation
        Unanimous
        Uncontended
        Lexicographical
        Order Summary
Performance
Checking
Function Defaults
    Generic Less and Equal
    Generic Order
    Explicit Defaults for Order
Operator Defaults
References

Introduction

Comparisons have many different uses: partitioning over the domain of the type, sorting, memoization, and iteration. These different uses have different requirements. We show that the current language fails to completely address those differences. We also show how to address them.

Definition of Equivalence and Order

A comparison is a query on a relation that provides equivalence, order, or both.

All equivalence and order relations are transitive (a∘b and b∘c implies a∘c).

All equivalence relations are reflexive (a~a) and symmetric (a~b implies b~a). An equality (or congruence) relation is an equivalence relation that is also substitutable (a = b implies f(a) = f(b)).

Strict orders are irreflexive (not a<a), i.e. an element is not less than itself. C++ does not use non-strict orders, so the strict adjective is often implied in the remainder of this paper. Strict orders are also asymmetric (a<b implies not b<a). These axioms together constitute a partial order. A weak order also partitions the elements into ordered equivalence classes; that is, being unordered with respect to an element is a transitive relation (a<b implies a<c or c<b). A total order is trichotomous (exactly one of a<b, a=b, b<a), where equality is as defined above.

category      properties                                     relation
equivalence   reflexive: a~a
              symmetric: a~b implies b~a
              transitive: a~b and b~c implies a~c            (weak) equivalence
              substitutable: a = b implies f(a) = f(b)       equality (congruence)
order         irreflexive: not a<a
              asymmetric: a<b implies not b<a
              transitive: a<b and b<c implies a<c            (strict) partial order
              ordered partitions: a<b implies a<c or c<b     (strict) weak order
              trichotomous: exactly one of a<b, a=b, b<a     (strict) total order

Each named relation has the listed property together with all the properties above it in its category.

Algorithm Order Requirements

Different algorithms require orders of different strengths.

algorithm                                 requirements
partitioning the domain for computation   specific to the type
topological sort                          partial order
normal sort                               weak order
indexing                                  total order
memoization                               weak order and equality

Correctness

The C++ standard, by default, uses the comparison operators as though they implemented the requirements above. Programmers often do not understand those requirements, or do not know whether the types they use meet them. The consequences of failing to meet them can be significant, for example: sorting does not sort, it merely shuffles data around; memoization yields wrong results; and memo tables fill with entries that can never be found again.

Analysis of Floating Point

IEEE floating-point arithmetic has several kinds of special values: positive and negative zero, positive and negative infinity (aka Inf), and positive and negative not-a-number (aka NaN). The definition of comparison between them implies a strength for the comparison relations. See section "5.11 Details of comparison predicates" of [IEEE].

Not-a-Number

The expression NaN==NaN is false, so the operation is not reflexive, and therefore the floating-point operator== does not form an equivalence relation. When NaN values are part of a sequence of memoization calls, they never compare equal to prior queries, so every query not only fails to reuse prior computation, it allocates a new memo for every call. That is, NaNs induce pathological behavior in memoization.
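
As a minimal sketch of that behavior (the memo table here is merely illustrative):

#include <cmath>
#include <unordered_map>

// Because NaN==NaN is false, an operator==-based memo table never recognizes
// a NaN key it has already stored.
void memo_example() {
  std::unordered_map<double, double> memo;   // memoizes some function of x
  double nan = std::nan("");
  memo[nan] = 1.0;                           // first call: a new entry
  memo[nan] = 1.0;                           // same key: yet another new entry
  // memo.size() == 2: every NaN query allocates a fresh, unreachable memo.
}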

For normal floating-point values x<y, neither x<NaN nor NaN<y holds, and therefore floating-point does not have ordered partitions. So, the floating-point operator< does not form a weak order and therefore does not form a total order. It is, however, a partial order.

A partial order is insufficient for good behavior with std::sort. With g++ 4.8.2, sorting the input {-NaN,-1.0,+0.0,+1.0,+NaN} in various permutations yields wildly different output:

{-NaN,-1.0,+0.0,+1.0,+NaN}
{+NaN,-1.0,+0.0,+1.0,-NaN}
{-1.0,-NaN,+0.0,+NaN,+1.0}
{-1.0,+0.0,+1.0,-NaN,+NaN}
{-1.0,-NaN,+1.0,+NaN,+0.0}
{+NaN,-NaN,-1.0,+0.0,+1.0}

The problem here is that an incomparable value tends to partition the sort.
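
A sketch of such an experiment (illustrative only; the behavior is unspecified and will vary):

#include <algorithm>
#include <cmath>
#include <vector>

// Sorting values that include NaN with the built-in operator<, which is only
// a partial order; the result depends on the permutation supplied.
void sort_example() {
  double nan = std::nan("");
  std::vector<double> v = { nan, -1.0, +0.0, +1.0, -nan };
  std::sort(v.begin(), v.end());
  // v may or may not be sorted; incomparable NaNs partition the sort.
}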

Relying on the advice "do not use NaN" may still lead to rare and subtle failures, particularly because comparisons on NaN do not produce a signal. The consequences within large software, which often has weak numerical analysis, if any, are mostly unknowable.

Signed Zero

Even in the absence of NaN, signed zero still causes problems. While +0==-0 is true, 1.0/-0.0 is -Inf and 1.0/+0.0 is +Inf. That is, operator== fails substitutability, and therefore, even in the absence of NaN, floating-point operator== is not an equality comparison.

Because operator== is not an equality, memoization yields incorrect results. The first query on an element in an equivalence class is taken as the result for all queries on that class. For example, memoizing the function 1/x with the sequence {+0.0,-0.0,-0.0} yields {+Inf,+Inf,+Inf} rather than {+Inf,-Inf,-Inf}.
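
A minimal sketch of that memoization (the memo table is illustrative only):

#include <map>

// The memo keys +0.0 and -0.0 fall into the same equivalence class, so the
// first result answers all later queries in that class.
double memo_reciprocal(double x) {
  static std::map<double, double> memo;   // keyed by the built-in comparison
  auto it = memo.find(x);                 // -0.0 is found in the +0.0 slot
  if (it != memo.end()) return it->second;
  return memo[x] = 1.0 / x;
}
// memo_reciprocal(+0.0) returns +Inf; memo_reciprocal(-0.0) then also returns
// +Inf rather than -Inf.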

In the absence of NaN, operator< provides a weak order. This meets the requirements of std::sort, but may surprise programmers because the order of the sort output depends on the order of the sort input. Sorting {-1.0,-0.0,+0.0,-0.0,+1.0} in various permutations yields different output:

{-1.0,-0.0,+0.0,-0.0,+1.0}
{-1.0,-0.0,+0.0,-0.0,+1.0}
{-1.0,-0.0,-0.0,+0.0,+1.0}
{-1.0,+0.0,-0.0,-0.0,+1.0}

Signed zeros result from operations that underflow, and since inexact signals are rarely enabled, one cannot readily dismiss signed zero as too rare to consider. The consequences within large software, which often has weak numerical analysis, if any, are mostly unknowable.

Other Scalar Types

Most current systems use a flat linear memory space with simple pointers, which fall naturally into a total order. But flat memory spaces were not always the case and may not be so in the future. While the operating system may be able to provide a total order, it may not do so efficiently. Comparison between pointers into the same object is almost certain to be efficient. Comparison between general pointers may require an operating system call to translate a capability into a segment index, and is therefore unlikely to be efficient. Distinguishing between such comparisons gives the best chance of preserving C++ program performance on future systems.

Most modern systems use two's complement arithmetic for signed numbers. However, the standard also supports one's complement and signed magnitude representations. Both of these formats have signed zero and thus have some of the same issues as IEEE floating-point numbers.

User-Defined Types

More importantly, user-defined types are often composite types. Programmers write the comparison operators to partition the value domain in ways that serve the algorithms using those types. It is easy to produce a partially ordered type in this environment, and so those operators are increasingly unlikely to meet the algorithmic requirements. Some types explicitly do not provide comparison operators simply to prevent inappropriate comparison, e.g. std::complex provides equality operators but not relational operators.

Boolean Comparison Functions

The core of the problem is in using the same operators for both domain-specific computation and utility algorithms. The solution is to separate those two uses.

Because we cannot change the meanings of the existing operators, we leave the operators to user-defined semantics and introduce new functions for the utility algorithms.

We introduce the following functions in namespace std in analogy to std::less:

template<typename T> bool partial_less(const T&,const T&);
template<typename T> bool weak_less(const T&,const T&);
template<typename T> bool total_less(const T&,const T&);

These functions provide a partial order, a weak order, and a total order, respectively.

Separate facilities for a total order beyond the operators are not new. IEEE floating-point [IEEE] provides a totalOrder function in section "5.10 Details of totalOrder predicate". Likewise, Java provides both Comparable and Comparator, which provide a total order [Java].
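
For concreteness, one possible shape of a total_less specialization for double, in the spirit of IEEE totalOrder (a sketch only, not proposed wording; the helper name total_less_double is illustrative):

#include <cstdint>
#include <cstring>

// Order doubles as -NaN < -Inf < finite values < +Inf < +NaN, with -0.0
// before +0.0, by mapping the sign-magnitude bit pattern to an unsigned key
// whose natural order matches the desired total order.
inline bool total_less_double(double a, double b) {
  auto key = [](double x) {
    std::uint64_t u;
    std::memcpy(&u, &x, sizeof u);          // bit pattern of the double
    return (u & 0x8000000000000000ull) ? ~u : (u | 0x8000000000000000ull);
  };
  return key(a) < key(b);
}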

We also provide the following complementary functions, in analogy to std::equal_to:

template<typename T> bool partial_unordered(const T&,const T&);
template<typename T> bool weak_equal(const T&,const T&);
template<typename T> bool total_equal(const T&,const T&);

These functions return true when neither argument is less than the other according to the corresponding less function. The function weak_equal forms an equivalence and the function total_equal forms an equality.
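
As a sketch of that relationship (generic fallbacks only; real specializations would likely be more direct):

template<typename T> bool weak_less(const T&, const T&);     // declared above
template<typename T> bool partial_less(const T&, const T&);  // declared above

template<typename T> bool weak_equal(const T& a, const T& b) {
  return !weak_less(a, b) && !weak_less(b, a);        // same equivalence class
}
template<typename T> bool partial_unordered(const T& a, const T& b) {
  return !partial_less(a, b) && !partial_less(b, a);  // neither argument is less
}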

Specializations for these functions will need to be provided for all the standard types. Fortunately, they are a pure extension.

Consistency Between Relations

Programmers will likely expect the above functions to deliver consistent results. We may not be able to provide that consistency in the standard, but we can define it.

A function g refines a function f when for every a and b, f(a,b) implies g(a,b). That is, if a call to f is true, so is the corresponding call to g.

A set of order functions is consistent when the stronger functions refine the weaker ones: weak_less refines partial_less, total_less refines weak_less, and correspondingly partial_unordered refines weak_equal and weak_equal refines total_equal.

A set of order functions is consistent with an operator set when the refinement also holds across the two sets, e.g. when weak_less refines operator< and weak_equal refines operator==.

Algorithms for Comparing Composite Types

There are several algorithms for comparing composite types.

Memory Representation

Using memcmp provides a total order based on memory representation. Unfortunately, it will likely not deliver results consistent with the operators. More importantly, it will certainly not deliver expected results for types with indirect data, such as std::string. Due to these issues, we will not consider it further.

Unanimous

The algorithm requires that every corresponding field be less:

T<U iff forall i, T[i]<U[i]

The composite relation is obviously irreflexive and transitive, so it provides a partial order. But while (2,2)<(3,3), neither (2,2)<(1,4) nor (1,4)<(3,3), so the resulting order is neither weak nor total.
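
An illustrative sketch on a two-field type (the type P is invented for exposition):

struct P { int x, y; };

// Unanimous: every field must be less.
bool unanimous_less(const P& a, const P& b) {
  return a.x < b.x && a.y < b.y;
}
// unanimous_less({2,2}, {3,3}) is true, yet {2,2} and {1,4} are unordered,
// as are {1,4} and {3,3}.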

Uncontended

The algorithm requires that at least one field be less and no field be greater:

T<U iff (exists i such that T[i]<U[i]) and (forall j, not U[j]<T[j])

The composite relation is obviously irreflexive. Given a field with a partial order, though, the relation is not transitive, e.g. (2,1)<(NaN,2) and (NaN,2)<(1,3) but not (2,1)<(1,3). So, the resulting relation is not even an order. A partial order breaks the algorithm.

Even with fields totally ordered, (2,2)<(3,3) but neither (2,2)<(1,4) nor (1,4)<(3,3), so the resulting relation is neither weakly nor totally ordered. It is, however, partially ordered.
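
A corresponding sketch on the same kind of illustrative two-field type:

struct P { int x, y; };

// Uncontended: at least one field less, and no field greater.
bool uncontended_less(const P& a, const P& b) {
  bool some_less    = (a.x < b.x) || (a.y < b.y);
  bool none_greater = !(b.x < a.x) && !(b.y < a.y);
  return some_less && none_greater;
}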

Lexicographical

The algorithm requires that every field before the deciding field compare equivalent and that the deciding field compare less:

T<U iff exists i such that (T[i]<U[i] and (forall j<i, neither T[j]<U[j] nor U[j]<T[j]))

The composite relation is obviously irreflexive. Given a field with a partial order, though, the relation is not transitive, e.g. (2,1)<(NaN,2) and (NaN,2)<(1,3) but not (2,1)<(1,3). So, the resulting relation is not even an order. A partial order breaks the algorithm.

When fields are at least weakly ordered, the order is defined by the first non-equivalent field, which in turn is either weakly or totally ordered. So, the strength of the composite relation is weak if any field is weak, otherwise it is total.
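
A corresponding sketch, again on an illustrative two-field type:

struct P { int x, y; };

// Lexicographical: the first non-equivalent field decides.
bool lexicographical_less(const P& a, const P& b) {
  if (a.x < b.x) return true;
  if (b.x < a.x) return false;
  return a.y < b.y;   // first field equivalent: fall through to the next
}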

Order Summary

The order produced by the algorithms is summarized as follows.

                    resulting order for weakest field order being
algorithm           partial     weak        total
unanimous           partial     partial     partial
uncontended         failure     partial     partial
lexicographical     failure     weak        total

Where the algorithms do not fail, lexicographical refines uncontended which refines unanimous.

Performance

The standard typically uses a lexicographical comparison, which is built on operator<. The algorithms often must visit a field twice, i.e. call both a.f<b.f and b.f<a.f. For a composite type with l layers and n fields per layer, the worst-case complexity is O((2*n)^(l+1)).

Algorithms that visit each field pair once can be substantially cheaper.

Rather than rely on a boolean less, we adopt the approach of strcmp et al. and return a three-state value indicating less, greater, or neither. In addition to the historical C functions, the approach is also used by Java's Comparable and Comparator [Java].

The functions are partial_order, weak_order, and total_order.

Checking

At present, there is generally no diagnostic when an operator< does not meet the requirements of the algorithm that uses it.

We create different function signatures for the different order functions by having different three-state return types. Algorithms embodying certain requirements would require a comparison function that returned the appropriate type.

The three-state return types are:


enum class partial_ordering { less, unordered, greater };
enum class weak_ordering { less, equivalent, greater };
enum class total_ordering { less, equal, greater };

and the corresponding order functions are


template<typename T> partial_ordering partial_order(T,T);
template<typename T> weak_ordering weak_order(T,T);
template<typename T> total_ordering total_order(T,T);
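
As an illustrative sketch (the type P and the scalar helper are invented for exposition), a lexicographical composition over these return types examines each field with a single call:

enum class weak_ordering { less, equivalent, greater };

// A weak order for a scalar field.
weak_ordering weak_order(int a, int b) {
  return a < b ? weak_ordering::less
       : b < a ? weak_ordering::greater
                : weak_ordering::equivalent;
}

struct P { int x, y; };

// Lexicographical composition: each field is examined by exactly one call.
weak_ordering weak_order(const P& a, const P& b) {
  weak_ordering o = weak_order(a.x, b.x);
  if (o != weak_ordering::equivalent) return o;
  return weak_order(a.y, b.y);
}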

Function Defaults

There is a potentially significant number of new specializations introduced by the above functions. We can reduce some of the coding with generic implementations or explicit defaults.

Generic Less and Equal

Given order functions, we can infer the less and equal functions.


template<typename T> bool total_less(T a, T b) {
  return total_order(a,b) == total_ordering::less;
}
template<typename T> bool total_equal(T a, T b) {
  return total_order(a,b) == total_ordering::equal;
}

Similar definitions exist for the other strengths.

These functions can of course be specialized for efficiency.

Generic Order

A total order is also a weak order and a weak order is also a partial order, so we can use a generic implementation for a weaker version using a stronger version.


template<typename T> weak_ordering weak_order(T a, T b) {
  total_ordering o = total_order(a,b);
  switch ( o ) {
    case total_ordering::less: return weak_ordering::less;
    case total_ordering::greater: return weak_ordering::greater;
    case total_ordering::equal: return weak_ordering::equivalent;
  }
}

A similar definition exists for partial_order. However, there is no generic version for total_order.

These functions can of course be specialized for efficiency.

Explicit Defaults for Order

For classes of one field, the defaults for order functions can simply reflect the order of that field.

We can provide an explicit default definition for a composite total order using the lexicographical algorithm. Such a default would require that all fields also provide a total order. However, the lexicographical algorithm on fields may not be consistent with the operators. Without automatic consistency, it is not clear that we should provide default definitions.

Much the same argument applies to a composite weak order default, except that the requirement is that all fields provide at least a weak order.

There are two reasonable algorithms for a composite partial order. One provides more refined results while the other has more generality. A default here seems less than obvious.

Operator Defaults

Given the approach of separating comparison for utility algorithms from domain-specific comparison, operator defaults should support the programmer, not enforce a goal of total order.

For types of a single field, the default should be to simply reflect the operator for the field. This approach is consistent with wrapper abstractions.

For classes that have more than one field, or for those that have one field but no corresponding operator, there are some uncontroversial defaults.

  1. Derive a!=b from !(a==b).
  2. Derive a>b from b<a.

There are also some more controversial defaults.

  1. Derive a<=b from a<b || a==b versus from !(a>b).
  2. Derive a>=b from a>b || a==b versus from !(a<b).

The former directly reflects the reading of the operator names, while the latter assumes at least a consistent weak ordering. The latter assumption does not hold for floating point, and probably not for many user-defined types.
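
A small illustration of the difference (illustrative values only):

#include <limits>

// For floating point, the two candidate defaults for a<=b disagree when a NaN
// is involved.
void defaults_example() {
  double a = std::numeric_limits<double>::quiet_NaN();
  double b = 1.0;
  bool direct  = (a < b) || (a == b);   // false: NaN is unordered and unequal
  bool negated = !(b < a);              // true: b<a is also false
  (void)direct; (void)negated;
}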

The remaining operator< and operator== are too specific to the value domain and the functions that operate on it to have reasonable defaults.

References

[IEEE]
IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2008, http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=4610933
[Java]
The Java™ Tutorials, Object Ordering, http://docs.oracle.com/javase/tutorial/collections/interfaces/order.html