C++ Data-Dependency Ordering: Atomics

ISO/IEC JTC1 SC22 WG21 N2359 = 07-0219 - 2007-08-03

Paul E. McKenney, paulmck@linux.vnet.ibm.com

Introduction

This document presents an interface and minimal implementation for preservation of data dependency ordering to expedite access to dynamic linked data structures that are read frequently and seldom modified.

This proposal is an addendum to N2324 (or one of its descendants), describing the rationale for dependency ordering and corresponding extensions to the atomics API. The companion document N2360 describes an addendum to the memory model in N2334 (or one of its descendants), and N2361 describes annotations to function arguments and return values to permit dependency chains crossing module boundaries to be supported while still permitting the compiler to partake of dependency-breaking optimizations.

This proposal is expected to have minimal effect on strongly ordered machines (e.g., x86) and on weakly ordered machines that do not support data dependency ordering (e.g., Alpha). It has no effect on implementations that refrain from breaking dependency chains. The major burden of this proposal would fall on weakly ordered machines that do order data-dependent operations, such as ARM, Itanium, and PowerPC. Even for these architectures, a fully conforming compiler could use the same approach as is used for weakly ordered machines that do not support data dependency ordering, albeit at a performance penalty.

This proposal enforces only data dependencies, not control dependencies. If experience indicates that control dependencies also need to be enforced, a separate proposal will be put forward for them.

This proposal is based on N2153 by Silvera, Wong, McKenney, and Blainey, on N2176 by Hans Boehm, on N2195 by Peter Dimov, on N2260 by Paul E. McKenney, on discussions on the cpp-threads list, and on discussions in the concurrency workgroup at the 2007 Oxford and Toronto meetings.

Rationale

Low-overhead access to read-mostly concurrent data structures
Read-mostly concurrent data structures are quite common both in operating-system kernels and in server-style applications. Examples include data structures representing outside state (such as routing tables), software configuration (modules currently loaded), hardware configuration (storage device currently in use), and security policies (access control permissions, firewall rules). Read-to-write ratios well in excess of a billion to one are quite common.

In such cases, use of data dependency ordering has resulted in order-of-magnitude speedups and similar improvements in scalability.

Deterministic access to read-mostly concurrent data structures
Maintaining data dependency ordering enables readers to access shared data structures in O(1) time, without the need for locking or for the retries that are often required for lock-free data structure algorithms.
Low-overhead publish-subscribe semantics for the case where the publication is mediated by a pointer to the data being published
The several weakly ordered machines that support data dependency ordering provide very low subscription overhead in this case. Maintaining data dependency ordering enables readers to access shared data structures without the need for the expensive lock acquisitions, atomic instructions, or memory fences that would otherwise be required.

A simplified example use of data dependency ordering found within the Linux kernel looks something like the following:

struct foo {
	int a;
	struct foo *next;
};
struct foo *head = NULL;
DEFINE_SPINLOCK(mylock);

void insert(int a)
{
	struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL); /* cannot fail */

	spin_lock(&mylock);
	p->a = a;
	p->next = head;
	smp_wmb();  /* Can be thought of as a store-release fence. */
	head = p;
	spin_unlock(&mylock);
}

int getfirstval(void)
{
	struct foo *q;
	int retval;

	q = rcu_dereference(head);  /* See discussion below. */
	assert(q != NULL);
	retval = q->a;
	return retval;
}
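
For concreteness, the same example might be recoded to the atomics interface proposed here roughly as follows. This is only a sketch: it assumes an atomic<foo *> specialization with load() and store() members as in N2324, substitutes a generic mutex and operator new for the kernel spinlock and kmalloc(), uses the memory_order_dependency ordering introduced later in this paper in place of rcu_dereference(), and omits headers and namespace qualifications in the style of the paper's other snippets.

struct foo {
	int a;
	foo *next;
};
atomic<foo *> head;	/* assumed initialized to NULL */
mutex mylock;		/* stands in for the kernel spinlock */

void insert(int a)
{
	foo *p = new foo;	/* assumed not to fail */

	mylock.lock();
	p->a = a;
	p->next = head.load(memory_order_relaxed);
	head.store(p, memory_order_release);	/* publish the new element */
	mylock.unlock();
}

int getfirstval(void)
{
	foo *q = head.load(memory_order_dependency);	/* head of the dependency chain */
	assert(q != NULL);
	return q->a;	/* ordered after the load of head by the data dependency */
}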

More elaborate examples are described in a presentation at the Oxford 2007 meeting describing use cases from the Linux kernel beginning on slide 37, including traversal of multiple levels of pointers, indexing arrays, and casts. The effect of the above code is to return the value at the head of the list with little more (or even no more) overhead than would be required if the list were immutable, but while still allowing updates. The rcu_dereference() API used in getfirstval() can be implemented in different ways, optimized for different classes of machines:

  1. On machines with strong memory ordering (e.g., TSO), rcu_dereference() simply prevents the compiler from performing optimizations that would order operations with data dependencies on q before the load from head. In this case, the code relies on the strong ordering to prevent the assignment to retval from seeing the pre-initialized version of the ->a field.
  2. On machines with weak memory ordering, but that enforce ordering based on data dependencies, rcu_dereference() again prevents the compiler from performing optimizations that would order operations with data dependencies on q before the load from head. However, in this case, the code relies on the machine's enforcement of data-dependency ordering to prevent the assignment to retval from seeing the pre-initialized version of the ->a field.
  3. On machines with weak memory ordering that enforce ordering based on data dependencies, but whose compilers refrain from breaking data dependencies, no further action need be taken.
  4. On other machines, namely those with weak memory ordering but with no enforcement of ordering based on data dependencies, rcu_dereference() is promoted to a load-acquire operation. Because this prevents any subsequent memory reference from being reordered before the load from head, it necessarily prevents any operation depending on q from being so reordered.
  5. For completeness, any compiler that avoids optimizations that break dependency chains may simply ignore these primitives.
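
As a rough illustration of the classes above, a library-level shim approximating the effect of a dependency-ordered load might be sketched as follows. The configuration macros STRONGLY_ORDERED and WEAK_WITH_DEPENDENCY_ORDERING are hypothetical, and the sketch assumes the N2324 atomics interface; it is an illustration, not a proposed implementation.

template<class T>
T *dependency_load(atomic<T *> &x)
{
#if defined(STRONGLY_ORDERED) || defined(WEAK_WITH_DEPENDENCY_ORDERING)
	/* Classes 1-3: no ordering instruction need be emitted; the compiler
	   must merely refrain from (or compensate for) optimizations that
	   break dependencies on the returned value. */
	return x.load(memory_order_relaxed);
#else
	/* Class 4 (weakly ordered, no dependency ordering, e.g., Alpha):
	   promote the load to a load-acquire, which orders all subsequent
	   memory references, including the dependent ones. */
	return x.load(memory_order_acquire);
#endif
}

Note that on a strongly ordered machine the memory_order_acquire branch would in any case compile to an ordinary load, so the distinction matters only for weakly ordered machines lacking dependency ordering.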

Machines in the second class are not well served by prior proposals that omit data-dependency ordering, including N2324. The remainder of this paper describes how to augment N2324 with data-dependency ordering.

Prior Approaches

N2324 would require that these machines implement rcu_dereference() using either an acquire fence or a load-acquire. In both cases, this prohibits useful classes of compiler optimizations that involve code motion that does not break dependencies on the load from head. Worse yet, this requires emitting a heavyweight memory barrier for the second class of machines, which can result in unacceptable performance degradation.

In N2195, Peter Dimov proposes an atomic_load_address() template function that protects a single level of indirection. Although this suffices for the very simple example above, it does not handle other examples given in a presentation at the Oxford 2007 meeting describing use cases from the Linux kernel (beginning on slide 37). In particular, N2195 does not support data dependencies that traverse multiple levels of indirection, nor does it support dependencies that traverse array accesses.

In N2260, Paul E. McKenney presents an API, and this proposal adapts that work to the existing atomics proposal.

Dependency Chains

This proposal requires the programmer to explicitly mark the heads of data-dependency chains: the head of a data-dependency chain is an explicitly marked load of a pointer or an integer from a shared variable. The value loaded is within the data-dependency chain. Any value produced by a computation that takes as input a value within the data-dependency chain is itself within the data-dependency chain, but only if the computation does not cross an unannotated function-call argument or function-return boundary.

Given any subsequent load, store, or read-modify-write operation by that same thread whose address is taken from the data-dependency chain, that operation is said to have a data dependency on the head of the data-dependency chain. In the case of load and read-modify-write operations, the value returned by the operation is within the data-dependency chain. In the case of store and read-modify-write operations, the value returned by a subsequent access by that same thread to the stored-to location is also within the data-dependency chain, but only if no unannotated function-call argument or function-return boundary has been crossed in the meantime.
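
A short sketch may help make these rules concrete. The declarations below (node, nhead, unrelated, and chain_example) are purely illustrative:

struct node {
	int data[10];
	int len;
};
atomic<node *> nhead;
int unrelated;

int chain_example(void)
{
	node *p = nhead.load(memory_order_dependency);	/* head: p is within the chain */
	int *q = p->data;	/* computed from p: q is within the chain */
	int i = p->len - 1;	/* computed from p: i is within the chain */
	int r1 = q[i];		/* address derived from the chain: this load is
				   ordered after the load from nhead */
	int r2 = unrelated;	/* address not derived from the chain: no
				   ordering is implied for this load */
	return r1 + r2;
}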

The compiler is required to build data-dependency chains before doing any optimizations. An alternative proposal in N2195 introduces the notion of dynamic dependencies. Use of dynamic dependencies would permit the data-dependency chains to be scanned after performing those optimizations that do not break dynamic data-dependency chains.

A dependency chain is thus ended by the death of the register or variable containing a value within the data-dependency chain, or when the value flows through an unannotated function argument or is passed back as an unannotated function return value.
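
Continuing the sketch above, the following illustrates a chain being ended by an unannotated function boundary (identity() and broken_chain_example() are again hypothetical names):

node *identity(node *p)		/* argument and return value unannotated */
{
	return p;
}

int broken_chain_example(void)
{
	node *p = nhead.load(memory_order_dependency);	/* head of the chain */
	node *q = identity(p);	/* the value crosses an unannotated call and
				   return, so q is no longer within the chain */
	return q->len;		/* NOT ordered after the load from nhead by
				   this proposal */
}

Were the argument and return value of identity() annotated as proposed in N2361, q would remain within the chain and the final load would be ordered.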

Compilers can avoid tracing dependency chains by emitting a load-acquire for the head of the dependency chain. As noted earlier, this can be a reasonable solution for strongly ordered machines in which a load-acquire operation emits no code, but merely suppresses code-motion operations that would reorder subsequent code before the head of the dependency chain. It is also appropriate for weakly ordered machines that do not order data dependencies. Compilers can also avoid tracing dependency chains by avoiding those optimizations that break these chains.

The pointer or integer at the head of the dependency chain must be such that loads and stores to it are atomic. Some implementations may provide such atomicity given proper alignment. Other implementations may require that the pointer or integer at the head of the dependency chain be declared to be atomic as described in N2145. This document assumes that the load at the head of the dependency chain is an atomic as described in N2145.

Current Approach

This proposal augments N2324 by adding a memory_order_dependency option that may be supplied to operations for which data-dependency semantics are permitted. The memory_order enumeration in N2324 would then read as follows, keeping the enumeration in rough order of increasing memory-ordering strength:

typedef enum memory_order {
    memory_order_relaxed, memory_order_dependency, memory_order_acquire,
    memory_order_release, memory_order_acq_rel, memory_order_seq_cst
} memory_order;

Behavior on Dependency Examples

In N2176, Hans Boehm lists a number of example optimizations that can break dependency chains, which are discussed in the following sections.

Example 1

N2176 example code:

r1 = x.load(memory_order_relaxed);
r2 = *r1;

Recoding to this proposal's API:

r1 = x.load(memory_order_dependency);
r2 = *r1;

Assuming that x is an atomic, the x.load(memory_order_dependency) will form the head of a dependency chain. Because there are no function calls, the dependency chain extends to the indirection through r1, so the dependency is ordered.

Example 2

N2176 example code:

r1 = x.load(memory_order_relaxed);
r3 = &a + r1 - r1;
r2 = *r3;

This could legitimately be optimized to the following, breaking the dependency chain:

r1 = x.load(memory_order_relaxed);
r3 = &a;
r2 = *r3;

However, recoding to this proposal's API:

r1 = x.load(memory_order_dependency);
r3 = &a + r1 - r1;
r2 = *r3;

Again assuming that x is an atomic, the x.load(memory_order_dependency) will form the head of a dependency chain. Because there are no function calls, the dependency chain extends to the indirection through r1, so the dependency is ordered. Because the dependency chains must be traced prior to optimization, if the optimization is performed, a countervailing memory fence or artificial data dependency must be inserted.

Example 3

N2176 example code, recoding to this proposal's API:

r1 = x.load(memory_order_dependency);
if (r1 == 0)
	r2 = *r1;
else
	r2 = *(r1 + 1);

Assuming that x is an atomic, the x.load(memory_order_dependency) will form the head of a dependency chain. Because there are no function calls, the dependency chain extends to the indirection through r1, so the dependency is ordered.

Example 3'

N2176 example code, as modified during email discussions, where x is known to be either 0 or 1:

if (x.load(memory_order_dependency))
	...
else
	...
y = 42 * x / 13;

This might be optimized to the following:

if (x.load(memory_order_dependency)) {
	...
	y = 3;
} else {
	...
	y = 0;
}

Assuming that x is an atomic, the x.load(memory_order_dependency) will form the head of a dependency chain. Because there are no function calls, the dependency chain extends to the assignment to y, so the dependency is ordered. If the underlying machine preserves control-dependency ordering for writes, this optimization is perfectly legal. If the underlying machine does not preserve control-dependency ordering, then either this optimization must be avoided, a memory fence must be emitted after the load of x, or an artificial data dependency must be manufactured. An example artificial data dependency might be as follows:

if (r1 = x.load(memory_order_dependency)) {
	...
	y = 3;
} else {
	...
	y = 0;
}
y = y + r1 - r1;

The compiler would need to decide whether the add and subtract are less expensive than the multiply and divide.

Example 4

N2176 example code:

r1 = x.load(memory_order_relaxed);
if (r1)
	r2 = y.a;
else
	r2 = y.a;

This might be optimized to the following in order to break dependency chains:

r1 = x.load(memory_order_relaxed);
r2 = y.a;

This is a control dependency, so it falls outside the scope of this proposal.

Example 5

N2176 example code:

r1 = x.load(memory_order_relaxed);
if (r1)
	f(&y);
else
	g(&y);

Assuming that x is an atomic and that the load is recoded to x.load(memory_order_dependency), it will form the head of a dependency chain. The question is then whether the prototypes and definitions of functions f and g have their arguments annotated. If they are so annotated, the dependency chains propagate into f and g; otherwise, they do not. A proposal for such annotation may be found in N2361.

Example 6

N2176 example code:

r2 = x.load(memory_order_dependency);
r3 = r2->a;

Without the x.load(memory_order_dependency), the following data-dependency-breaking optimization, which reuses the value r1 returned by an earlier load from x, would be legal:

r2 = x.load(memory_order_dependency);
r3 = r1->a;
if (r1 != r2) r3 = r2->a;

However, assuming that x is an atomic, the x.load(memory_order_dependency) will form the head of a dependency chain. Because there are no function calls, the dependency chain extends to the indirection through r2, so the dependency is ordered and the optimization prohibited, at least in absence of a compensating fence or artificially generated data dependency.

Example 7

N2176 example code:

r1 = x.load(memory_order_dependency);
r2 = a[r1->index % a_size];

If the variable a_size is known to the compiler to have the value one, then there might be a temptation to optimize as follows:

r1 = x.load(memory_order_dependency);
r2 = a[0];

However, again assuming that x is an atomic, the x.load(memory_order_dependency) will form the head of a dependency chain. Because there are no function calls, the dependency chain extends to the indirection through r1, so the dependency is ordered. Therefore, this optimization is prohibited unless accompanied by a compensating memory barrier or artificial data dependency. Note that under Peter Dimov's notion of dynamic dependencies described in N2195, this optimization would be legal, even when the dependency ordering was marked.

Alternatives Considered