Document Number: P3323R0.

Date: 2024-06-10.

Reply to: Gonzalo Brito Gadeschi <gonzalob _at_ nvidia.com>.

Authors: Gonzalo Brito Gadeschi, Lewis Baker.

Audience: SG1.

# cv-qualified types in atomic and atomic_ref

## Summary

Addresses LWG#4069 and LWG#3508 by clarifying that cv-qualified types are not supported by `std::atomic<T>` and by specifying how they are supported by `std::atomic_ref<T>`.

## Motivation

CWG#2094 made `is_trivially_copyable_v<volatile ...-type>` (integer, pointer, floating-point) true, leading to LWG#3508 and LWG#4069.

Supporting `atomic_ref<volatile T>` can be useful for atomically accessing objects of type `T` stored in shared memory where the object was not created as an `atomic<T>`.

## Resolution for `std::atomic`

`std::atomic<...-type>` specializations apply only to cv-unqualified types.

*Proposed resolution*: restrict `std::atomic<T>` to types `T` for which `same_as<T, remove_cv_t<T>>` is true.
*Rationale*: the `atomic<volatile int>` use case is served by `volatile atomic<int>`, i.e., there is no need to support `atomic<volatile T>`.
*Impact*: libstdc++ and libc++ already fail to compile `atomic<volatile T>`. MSVC accepts it, but usage is limited, e.g., because `fetch_add` only exists on the specializations, not on the primary template.
*Proposed wording*:

Modify [atomics.types.generic.general]:

The template argument for `T` shall meet the *Cpp17CopyConstructible* and *Cpp17CopyAssignable* requirements. The program is ill-formed if any of `is_trivially_copyable_v<T>`, `is_copy_constructible_v<T>`, `is_move_constructible_v<T>`, `is_copy_assignable_v<T>`, ~~or~~ `is_move_assignable_v<T>`, or `same_as<T, remove_cv_t<T>>` is false.

## Resolution for `std::atomic_ref`

LWG#3508 also points out this problem, and indicates that for const-qualified types it is not possible to implement atomic store or atomic read-modify-write operations.

`std::atomic_ref<...-type>` specializations apply only to cv-unqualified types.

*Proposed resolution*: specify `std::atomic_ref<T>` for cv-qualified `T` by restricting support for `volatile`-qualified types to lock-free atomics and restricting support for `const`-qualified types to atomic read operations.
*Rationale*: `atomic_ref`'s goal of improving concurrency support when interfacing with third-party types, which may use `volatile int` for historical reasons, needs `std::atomic_ref<volatile int>`: the `atomic_ref` itself is not `volatile`; the data it references is.
*Impact*: libstdc++ and libc++ (among others) would need to implement it.
*Wording*:

Modify [atomics.ref.generic.general]:

```
namespace std {
template<class T> struct atomic_ref {
private:
T* ptr; // exposition only
public:
using value_type = ~~T~~remove_cv_t<T>;
static constexpr size_t required_alignment = implementation-defined;
static constexpr bool is_always_lock_free = implementation-defined;
bool is_lock_free() const noexcept;
explicit atomic_ref(T&);
atomic_ref(const atomic_ref&) noexcept;
atomic_ref& operator=(const atomic_ref&) = delete;
void store(~~T~~value_type, memory_order = memory_order::seq_cst) const noexcept;
~~T~~value_type operator=(~~T~~value_type) const noexcept;
~~T~~value_type load(memory_order = memory_order::seq_cst) const noexcept;
operator ~~T~~value_type() const noexcept;
~~T~~value_type exchange(~~T~~value_type, memory_order = memory_order::seq_cst)
const noexcept;
bool compare_exchange_weak(~~T~~value_type&, ~~T~~value_type,
memory_order, memory_order)
const noexcept;
bool compare_exchange_strong(~~T~~value_type&, ~~T~~value_type,
memory_order, memory_order)
const noexcept;
bool compare_exchange_weak(~~T~~value_type&, ~~T~~value_type,
memory_order = memory_order::seq_cst)
const noexcept;
bool compare_exchange_strong(~~T~~value_type&, ~~T~~value_type,
memory_order = memory_order::seq_cst)
const noexcept;
void wait(~~T~~value_type, memory_order = memory_order::seq_cst) const noexcept;
void notify_one() const noexcept;
void notify_all() const noexcept;
};
}
```

- An `atomic_ref` object applies atomic operations ([atomics.general]) to the object referenced by `*ptr` such that, for the lifetime ([basic.life]) of the `atomic_ref` object, the object referenced by `*ptr` is an atomic object ([intro.races]).
- The program is ill-formed if `is_trivially_copyable_v<T>` is false.
- The lifetime ([basic.life]) of an object referenced by `*ptr` shall exceed the lifetime of all `atomic_ref`s that reference the object. While any `atomic_ref` instances exist that reference the `*ptr` object, all accesses to that object shall exclusively occur through those `atomic_ref` instances. No subobject of the object referenced by `atomic_ref` shall be concurrently referenced by any other `atomic_ref` object.
- Atomic operations applied to an object through a referencing `atomic_ref` are atomic with respect to atomic operations applied through any other `atomic_ref` referencing the same object.

  [Note 1: Atomic operations or the `atomic_ref` constructor can acquire a shared resource, such as a lock associated with the referenced object, to enable atomic operations to be applied to the referenced object. — end note]
- The program is ill-formed if `is_always_lock_free` is `false` and `is_volatile_v<T>` is `true`.

Modify [atomics.ref.ops] as follows:

33.5.7.2 Operations [atomics.ref.ops]

```
static constexpr size_t required_alignment;
```

- The alignment required for an object to be referenced by an atomic reference, which is at least `alignof(T)`.
- [Note 1: Hardware could require an object referenced by an `atomic_ref` to have stricter alignment ([basic.align]) than other objects of type `T`. Further, whether operations on an `atomic_ref` are lock-free could depend on the alignment of the referenced object. For example, lock-free operations on `std::complex<double>` could be supported only if aligned to `2*alignof(double)`. — end note]

```
static constexpr bool is_always_lock_free;
```

- The static data member `is_always_lock_free` is `true` if the `atomic_ref` type's operations are always lock-free, and `false` otherwise.

```
bool is_lock_free() const noexcept;
```

- Returns: `true` if operations on all objects of the type `atomic_ref<T>` are lock-free, `false` otherwise.

```
atomic_ref(T& obj);
```

- Preconditions: The referenced object is aligned to `required_alignment`.
- Postconditions: `*this` references `obj`.
- Throws: Nothing.

```
atomic_ref(const atomic_ref& ref) noexcept;
```

- Postconditions: `*this` references the object referenced by `ref`.

void store(~~T~~value_type desired, memory_order order = memory_order::seq_cst) const noexcept;

- Constraints: `is_const_v<T>` is `false`.
- Preconditions: `order` is `memory_order::relaxed`, `memory_order::release`, or `memory_order::seq_cst`.
- Effects: Atomically replaces the value referenced by `*ptr` with the value of `desired`. Memory is affected according to the value of `order`.

~~T~~value_type operator=(~~T~~value_type desired) const noexcept;

- Constraints: `is_const_v<T>` is `false`.
- Effects: Equivalent to:

```
store(desired);
return desired;
```

~~T~~value_type load(memory_order order = memory_order::seq_cst) const noexcept;

- Preconditions: `order` is `memory_order::relaxed`, `memory_order::consume`, `memory_order::acquire`, or `memory_order::seq_cst`.
- Effects: Memory is affected according to the value of `order`.
- Returns: Atomically returns the value referenced by `*ptr`.

operator ~~T~~value_type() const noexcept;

- Effects: Equivalent to: `return load();`

~~T~~value_type exchange(~~T~~value_type desired, memory_order order = memory_order::seq_cst) const noexcept;

- Constraints: `is_const_v<T>` is `false`.
- Effects: Atomically replaces the value referenced by `*ptr` with `desired`. Memory is affected according to the value of `order`. This operation is an atomic read-modify-write operation ([intro.multithread]).
- Returns: Atomically returns the value referenced by `*ptr` immediately before the effects.

bool compare_exchange_weak(~~T~~value_type& expected, ~~T~~value_type desired,
memory_order success, memory_order failure) const noexcept;
bool compare_exchange_strong(~~T~~value_type& expected, ~~T~~value_type desired,
memory_order success, memory_order failure) const noexcept;
bool compare_exchange_weak(~~T~~value_type& expected, ~~T~~value_type desired,
memory_order order = memory_order::seq_cst) const noexcept;
bool compare_exchange_strong(~~T~~value_type& expected, ~~T~~value_type desired,
memory_order order = memory_order::seq_cst) const noexcept;

- Constraints: `is_const_v<T>` is `false`.
- Preconditions: `failure` is `memory_order::relaxed`, `memory_order::consume`, `memory_order::acquire`, or `memory_order::seq_cst`.
- Effects: Retrieves the value in `expected`. It then atomically compares the value representation of the value referenced by `*ptr` for equality with that previously retrieved from `expected`, and if `true`, replaces the value referenced by `*ptr` with that in `desired`. If and only if the comparison is `true`, memory is affected according to the value of `success`, and if the comparison is `false`, memory is affected according to the value of `failure`. When only one `memory_order` argument is supplied, the value of `success` is `order`, and the value of `failure` is `order` except that a value of `memory_order::acq_rel` shall be replaced by the value `memory_order::acquire` and a value of `memory_order::release` shall be replaced by the value `memory_order::relaxed`. If and only if the comparison is `false` then, after the atomic operation, the value in `expected` is replaced by the value read from the value referenced by `*ptr` during the atomic comparison. If the operation returns `true`, these operations are atomic read-modify-write operations ([intro.races]) on the value referenced by `*ptr`. Otherwise, these operations are atomic load operations on that memory.
- Returns: The result of the comparison.
- Remarks: A weak compare-and-exchange operation may fail spuriously. That is, even when the contents of memory referred to by `expected` and `ptr` are equal, it may return `false` and store back to `expected` the same memory contents that were originally there.

[Note 2: This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop. When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable. — end note]

void wait(~~T~~value_type old, memory_order order = memory_order::seq_cst) const noexcept;

- Preconditions: `order` is `memory_order::relaxed`, `memory_order::consume`, `memory_order::acquire`, or `memory_order::seq_cst`.
- Effects: Repeatedly performs the following steps, in order:
  - (23.1) Evaluates `load(order)` and compares its value representation for equality against that of `old`.
  - (23.2) If they compare unequal, returns.
  - (23.3) Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
- Remarks: This function is an atomic waiting operation ([atomics.wait]) on atomic object `*ptr`.

`void notify_one() const noexcept;`

- Effects: Unblocks the execution of at least one atomic waiting operation on `*ptr` that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
- Remarks: This function is an atomic notifying operation ([atomics.wait]) on atomic object `*ptr`.

`void notify_all() const noexcept;`

- Effects: Unblocks the execution of all atomic waiting operations on `*ptr` that are eligible to be unblocked ([atomics.wait]) by this call.
- Remarks: This function is an atomic notifying operation ([atomics.wait]) on atomic object `*ptr`.

Modify [atomics.ref.int]:

33.5.7.3 Specializations for integral types [atomics.ref.int]

- There are specializations of the `atomic_ref` class template for all integral types except cv `bool` ~~the integral types~~ `char`, `signed char`, `unsigned char`, `short`, `unsigned short`, `int`, `unsigned int`, `long`, `unsigned long`, `long long`, `unsigned long long`, `char8_t`, `char16_t`, `char32_t`, `wchar_t`, and any other types needed by the typedefs in the header `<cstdint>`. For each such possibly cv-qualified type *integral-type*, the specialization `atomic_ref<`*integral-type*> provides additional atomic operations appropriate to integral types.

  [Note 1: The specialization `atomic_ref<bool>` uses the primary template ([atomics.ref.generic]). — end note]
- The program is ill-formed if `is_always_lock_free` is `false` and `is_volatile_v<T>` is `true`.

```
namespace std {
template<> struct atomic_ref<*integral-type*> {
private:
*integral-type** ptr; // *exposition only*
public:
using value_type = remove_cv_t<*integral-type*>;
using difference_type = value_type;
static constexpr size_t required_alignment = *implementation-defined*;
static constexpr bool is_always_lock_free = *implementation-defined*;
bool is_lock_free() const noexcept;
explicit atomic_ref(*integral-type*&);
atomic_ref(const atomic_ref&) noexcept;
atomic_ref& operator=(const atomic_ref&) = delete;
void store(*integral-type*value_type, memory_order = memory_order::seq_cst) const noexcept;
*integral-type*value_type operator=(*integral-type*value_type) const noexcept;
*integral-type*value_type load(memory_order = memory_order::seq_cst) const noexcept;
operator *integral-type*value_type() const noexcept;
*integral-type*value_type exchange(*integral-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
bool compare_exchange_weak(*integral-type*value_type&, *integral-type*value_type,
memory_order, memory_order) const noexcept;
bool compare_exchange_strong(*integral-type*value_type&, *integral-type*value_type,
memory_order, memory_order) const noexcept;
bool compare_exchange_weak(*integral-type*value_type&, *integral-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
bool compare_exchange_strong(*integral-type*value_type&, *integral-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*integral-type*value_type fetch_add(*integral-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*integral-type*value_type fetch_sub(*integral-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*integral-type*value_type fetch_and(*integral-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*integral-type*value_type fetch_or(*integral-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*integral-type*value_type fetch_xor(*integral-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*integral-type*value_type fetch_max(*integral-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*integral-type*value_type fetch_min(*integral-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*integral-type*value_type operator++(int) const noexcept;
*integral-type*value_type operator--(int) const noexcept;
*integral-type*value_type operator++() const noexcept;
*integral-type*value_type operator--() const noexcept;
*integral-type*value_type operator+=(*integral-type*value_type) const noexcept;
*integral-type*value_type operator-=(*integral-type*value_type) const noexcept;
*integral-type*value_type operator&=(*integral-type*value_type) const noexcept;
*integral-type*value_type operator|=(*integral-type*value_type) const noexcept;
*integral-type*value_type operator^=(*integral-type*value_type) const noexcept;
void wait(*integral-type*value_type, memory_order = memory_order::seq_cst) const noexcept;
void notify_one() const noexcept;
void notify_all() const noexcept;
};
}
```

- Descriptions are provided below only for members that differ from the primary template.
- The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 148.

*integral-type*value_type fetch_key(*integral-type*value_type operand,
memory_order order = memory_order::seq_cst) const noexcept;

- Constraints: `is_const_v<`*integral-type*> is `false`.
- Effects: Atomically replaces the value referenced by `*ptr` with the result of the computation applied to the value referenced by `*ptr` and the given `operand`. Memory is affected according to the value of `order`. These operations are atomic read-modify-write operations ([intro.races]).
- Returns: Atomically, the value referenced by `*ptr` immediately before the effects.
- Remarks: Except for `fetch_max` and `fetch_min`, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.

  [Note 2: There are no undefined results arising from the computation. — end note]
- For `fetch_max` and `fetch_min`, the maximum and minimum computation is performed as if by the `max` and `min` algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.

*integral-type*value_type operator *op*=(*integral-type*value_type operand) const noexcept;

- Constraints: `is_const_v<`*integral-type*> is `false`.
- Effects: Equivalent to: `return fetch_`*key*(operand) *op* operand;

Modify [atomics.ref.float]:

33.5.7.4 Specializations for floating-point types [atomics.ref.float]

- There are specializations of the `atomic_ref` class template for all ~~cv-unqualified~~ floating-point types. For each such possibly cv-qualified type *floating-point-type*, the specialization `atomic_ref<`*floating-point-type*> provides additional atomic operations appropriate to floating-point types.
- The program is ill-formed if `is_always_lock_free` is `false` and `is_volatile_v<T>` is `true`.

```
namespace std {
template<> struct atomic_ref<*floating-point-type*> {
private:
*floating-point-type** ptr; // exposition only
public:
using value_type = remove_cv_t<*floating-point-type*>;
using difference_type = value_type;
static constexpr size_t required_alignment = *implementation-defined*;
static constexpr bool is_always_lock_free = *implementation-defined*;
bool is_lock_free() const noexcept;
explicit atomic_ref(*floating-point-type*&);
atomic_ref(const atomic_ref&) noexcept;
atomic_ref& operator=(const atomic_ref&) = delete;
void store(*floating-point-type*value_type, memory_order = memory_order::seq_cst) const noexcept;
*floating-point-type*value_type operator=(*floating-point-type*value_type) const noexcept;
*floating-point-type*value_type load(memory_order = memory_order::seq_cst) const noexcept;
operator *floating-point-type*value_type() const noexcept;
*floating-point-type*value_type exchange(*floating-point-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
bool compare_exchange_weak(*floating-point-type*value_type&, *floating-point-type*value_type,
memory_order, memory_order) const noexcept;
bool compare_exchange_strong(*floating-point-type*value_type&, *floating-point-type*value_type,
memory_order, memory_order) const noexcept;
bool compare_exchange_weak(*floating-point-type*value_type&, *floating-point-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
bool compare_exchange_strong(*floating-point-type*value_type&, *floating-point-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*floating-point-type*value_type fetch_add(*floating-point-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*floating-point-type*value_type fetch_sub(*floating-point-type*value_type,
memory_order = memory_order::seq_cst) const noexcept;
*floating-point-type*value_type operator+=(*floating-point-type*value_type) const noexcept;
*floating-point-type*value_type operator-=(*floating-point-type*value_type) const noexcept;
void wait(*floating-point-type*value_type, memory_order = memory_order::seq_cst) const noexcept;
void notify_one() const noexcept;
void notify_all() const noexcept;
};
}
```

- Descriptions are provided below only for members that differ from the primary template.
- The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 148.

*floating-point-type*value_type fetch_key(*floating-point-type*value_type operand,
memory_order order = memory_order::seq_cst) const noexcept;

- Constraints: `is_const_v<`*floating-point-type*> is `false`.
- Effects: Atomically replaces the value referenced by `*ptr` with the result of the computation applied to the value referenced by `*ptr` and the given `operand`. Memory is affected according to the value of `order`. These operations are atomic read-modify-write operations ([intro.races]).
- Returns: Atomically, the value referenced by `*ptr` immediately before the effects.
- Remarks: If the result is not a representable value for its type ([expr.pre]), the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on *floating-point-type* should conform to the `std::numeric_limits<`*floating-point-type*value_type> traits associated with the floating-point type ([limits.syn]). The floating-point environment ([cfenv]) for atomic arithmetic operations on *floating-point-type* may be different than the calling thread's floating-point environment.

*floating-point-type*value_type operator op=(*floating-point-type*value_type operand) const noexcept;

- Constraints: `is_const_v<`*floating-point-type*> is `false`.
- Effects: Equivalent to: `return fetch_`*key*(operand) *op* operand;

Modify [atomics.ref.pointer]:

33.5.7.5 Partial specialization for pointers [atomics.ref.pointer]

- There are specializations of the `atomic_ref` class template for all pointer-to-object types. For each such possibly cv-qualified type *pointer-type*, the specialization `atomic_ref<`*pointer-type*> provides additional atomic operations appropriate to pointer types.
- The program is ill-formed if `is_always_lock_free` is `false` and `is_volatile_v<T>` is `true`.

```
namespace std {
template<~~class T~~> struct atomic_ref<~~T*~~*pointer-type*> {
private:
~~T*~~*pointer-type** ptr; // exposition only
public:
using value_type = ~~T*~~remove_cv_t<*pointer-type*>;
using difference_type = ptrdiff_t;
static constexpr size_t required_alignment = implementation-defined;
static constexpr bool is_always_lock_free = implementation-defined;
bool is_lock_free() const noexcept;
explicit atomic_ref(~~T*~~*pointer-type*&);
atomic_ref(const atomic_ref&) noexcept;
atomic_ref& operator=(const atomic_ref&) = delete;
void store(~~T*~~value_type, memory_order = memory_order::seq_cst) const noexcept;
~~T*~~value_type operator=(~~T*~~value_type) const noexcept;
~~T*~~value_type load(memory_order = memory_order::seq_cst) const noexcept;
operator ~~T*~~value_type() const noexcept;
~~T*~~value_type exchange(~~T*~~value_type, memory_order = memory_order::seq_cst) const noexcept;
bool compare_exchange_weak(~~T*~~value_type&, ~~T*~~value_type,
memory_order, memory_order) const noexcept;
bool compare_exchange_strong(~~T*~~value_type&, ~~T*~~value_type,
memory_order, memory_order) const noexcept;
bool compare_exchange_weak(~~T*~~value_type&, ~~T*~~value_type,
memory_order = memory_order::seq_cst) const noexcept;
bool compare_exchange_strong(~~T*~~value_type&, ~~T*~~value_type,
memory_order = memory_order::seq_cst) const noexcept;
~~T*~~value_type fetch_add(difference_type, memory_order = memory_order::seq_cst) const noexcept;
~~T*~~value_type fetch_sub(difference_type, memory_order = memory_order::seq_cst) const noexcept;
~~T*~~value_type fetch_max(~~T*~~value_type, memory_order = memory_order::seq_cst) const noexcept;
~~T*~~value_type fetch_min(~~T*~~value_type, memory_order = memory_order::seq_cst) const noexcept;
~~T*~~value_type operator++(int) const noexcept;
~~T*~~value_type operator--(int) const noexcept;
~~T*~~value_type operator++() const noexcept;
~~T*~~value_type operator--() const noexcept;
~~T*~~value_type operator+=(difference_type) const noexcept;
~~T*~~value_type operator-=(difference_type) const noexcept;
void wait(~~T*~~value_type, memory_order = memory_order::seq_cst) const noexcept;
void notify_one() const noexcept;
void notify_all() const noexcept;
};
}
```

- Descriptions are provided below only for members that differ from the primary template.
- The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 149.

~~T*~~value_type fetch_*key*(difference_type operand, memory_order order = memory_order::seq_cst) const noexcept;

- Constraints: `is_const_v<`*pointer-type*> is `false`.
- Mandates: ~~`T`~~ `remove_pointer_t<`*pointer-type*> is a complete object type.
- Effects: Atomically replaces the value referenced by `*ptr` with the result of the computation applied to the value referenced by `*ptr` and the given `operand`. Memory is affected according to the value of `order`. These operations are atomic read-modify-write operations ([intro.races]).
- Returns: Atomically, the value referenced by `*ptr` immediately before the effects.
- Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior.
- For `fetch_max` and `fetch_min`, the maximum and minimum computation is performed as if by the `max` and `min` algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.

[Note 1: If the pointers point to different complete objects (or subobjects thereof), the `<` operator does not establish a strict weak ordering (Table 29, [expr.rel]). — end note]

~~T*~~value_type operator *op*=(difference_type operand) const noexcept;

- Constraints: `is_const_v<`*pointer-type*> is `false`.
- Effects: Equivalent to: `return fetch_`*key*(operand) *op* operand;

Modify [atomics.ref.memop]:

33.5.7.6 Member operators common to integers and pointers to objects [atomics.ref.memop]

- Let *referred-type* be *pointer-type* for the specializations in [atomics.ref.pointer] and be *integral-type* for the specializations in [atomics.ref.int].

`value_type operator++(int) const noexcept;`

- Constraints: `is_const_v<`*referred-type*> is `false`.
- Effects: Equivalent to: `return fetch_add(1);`

`value_type operator--(int) const noexcept;`

- Constraints: `is_const_v<`*referred-type*> is `false`.
- Effects: Equivalent to: `return fetch_sub(1);`

`value_type operator++() const noexcept;`

- Constraints: `is_const_v<`*referred-type*> is `false`.
- Effects: Equivalent to: `return fetch_add(1) + 1;`

`value_type operator--() const noexcept;`

- Constraints: `is_const_v<`*referred-type*> is `false`.
- Effects: Equivalent to: `return fetch_sub(1) - 1;`

Document Number: P3323R0.

Date: 2024-06-10.

Reply to: Gonzalo Brito Gadeschi <gonzalob _at_ nvidia.com>.

Authors: Gonzalo Brito Gadeschi, Lewis Baker.

Audience: SG1.

## cv-qualified types in atomic and atomic_ref

## Summary

Addresses LWG#4069 and LWG#3508 by clarifying that cv-qualified types are not supported by

`std::atomic<T>`

and specifying how these are supported by`std::atomic_ref<T>`

.## Motivation

CWG#2094 made

`is_trivially_copyable_v<volatile ...-type>`

(integer, pointer, floating-point) true, leading to LWG#3508 and LWG#4069.Supporting

`atomic_ref<volatile T>`

can be useful for atomically accessing objects of type`T`

stored in shared-memory where the object was not created as an`atomic<T>`

.## Resolution for

`std::atomic`

`std::atomic<...-type>`

specializations only apply for cv-unqualified types.Proposed resolution: restrict`std::atomic<T>`

to types`T`

for which`same_as<T, remove_cv_t<T>>`

is true.Rationale:`atomic<volatile int>`

use case is served by`volatile atomic<int>`

, i.e., there is no need to support`atomic<volatile T>`

.Impact: libstdc++ and libc++ can't compile`atomic<volatile T>`

already. MSVC can, but usage is limited, e.g., because`fetch_add`

only exists on specialization, not primary template.Proposed wording:Modify [atomics.types.generic.general]:

The template argument for

`T`

shall meet theCpp17CopyConstructibleandCpp17CopyAssignablerequirements. The program is ill-formed if any of`is_trivially_copyable_v<T>`

,`is_copy_constructible_v<T>`

,`is_move_constructible_v<T>`

,`is_copy_assignable_v<T>`

,~~or~~`is_move_assignable_v<T>`

, or`same_as<T, remove_cv_t<T>>`

is false.

## Resolution for

`std::atomic_ref`

LWG#3508 also points out this problem, and indicates that for const-qualified types, it is not possible to implement atomic load or atomic read-modify-write operations.

`std::atomic_ref<...-type>`

specializations only apply for cv-unqualified types.Proposed resolution: specify`std::atomic_ref<T>`

for cv-qualified T by restricting support of`volatile`

-qualified types to lock-free atomics and restricting support of`const`

-qualified types to atomic read operations.Rationale:`atomic_ref`

goal of improving concurrency support when interfacing with third-party types, which may be using`volatile int`

for historical purposes, needs`std::atomic_ref<volatile int>`

: the`atomic_ref`

itself is not`volatile`

, the data it references is.Impact: libstdc++ and libc++ (among others) would need to implement it.Wording:Modify [atomics.ref.generic.general]:

`atomic_ref`

object applies atomic operations ([atomics.general]) to the object referenced by`*ptr`

such that, for the lifetime ([basic.life]) of the`atomic_ref`

object, the object referenced by`*ptr`

is an atomic object ([intro.races]).`is_trivially_copyable_v<T>`

is false.`*ptr`

shall exceed the lifetime of all`atomic_ref`

s that reference the object. While any`atomic_ref`

instances exist that reference the`*ptr`

object, all accesses to that object shall exclusively occur through those`atomic_ref`

instances. No subobject of the object referenced by`atomic_ref`

shall be concurrently referenced by any other`atomic_ref`

object.`atomic_ref`

are atomic with respect to atomic operations applied through any other`atomic_ref`

referencing the same object.[Note 1: Atomic operations or the

`atomic_ref`

constructor can acquire a shared resource, such as a lock associated with the referenced object, to enable atomic operations to be applied to the referenced object. — end note]`is_always_lock_free`

is`false`

and`is_volatile_v<T>`

is`true`

.Modify [atomics.ref.ops] as follows:

33.5.7.2 Operations [atomics.ref.ops]

`static constexpr size_t required_alignment;`

The alignment required for an object to be referenced by an atomic reference, which is at least `alignof(T)`.

[Note 1: Hardware could require an object referenced by an `atomic_ref` to have stricter alignment ([basic.align]) than other objects of type `T`. Further, whether operations on an `atomic_ref` are lock-free could depend on the alignment of the referenced object. For example, lock-free operations on `std::complex<double>` could be supported only if aligned to `2*alignof(double)`. — end note]

`static constexpr bool is_always_lock_free;`

The static data member `is_always_lock_free` is `true` if the `atomic_ref` type's operations are always lock-free, and `false` otherwise.

`bool is_lock_free() const noexcept;`

*Returns*: `true` if operations on all objects of the type `atomic_ref<T>` are lock-free, `false` otherwise.

`atomic_ref(T& obj);`

*Preconditions*: The referenced object is aligned to `required_alignment`.

*Postconditions*: `*this` references `obj`.

*Throws*: Nothing.

`atomic_ref(const atomic_ref& ref) noexcept;`

*Postconditions*: `*this` references the object referenced by `ref`.

`void store(value_type desired, memory_order order = memory_order::seq_cst) const noexcept;`

*Constraints*: `is_const_v<T>` is `false`.

*Preconditions*: `order` is `memory_order::relaxed`, `memory_order::release`, or `memory_order::seq_cst`.

*Effects*: Atomically replaces the value referenced by `*ptr` with the value of `desired`. Memory is affected according to the value of `order`.

`value_type operator=(value_type desired) const noexcept;`

*Constraints*: `is_const_v<T>` is `false`.

*Effects*: Equivalent to: `store(desired); return desired;`

`value_type load(memory_order order = memory_order::seq_cst) const noexcept;`

*Preconditions*: `order` is `memory_order::relaxed`, `memory_order::consume`, `memory_order::acquire`, or `memory_order::seq_cst`.

*Effects*: Memory is affected according to the value of `order`.

*Returns*: Atomically returns the value referenced by `*ptr`.

`operator value_type() const noexcept;`

*Effects*: Equivalent to: `return load();`

`value_type exchange(value_type desired, memory_order order = memory_order::seq_cst) const noexcept;`

*Constraints*: `is_const_v<T>` is `false`.

*Effects*: Atomically replaces the value referenced by `*ptr` with `desired`. Memory is affected according to the value of `order`. This operation is an atomic read-modify-write operation ([intro.multithread]).

*Returns*: Atomically returns the value referenced by `*ptr` immediately before the effects.

`bool compare_exchange_weak(value_type& expected, value_type desired, memory_order success, memory_order failure) const noexcept;`
`bool compare_exchange_strong(value_type& expected, value_type desired, memory_order success, memory_order failure) const noexcept;`
`bool compare_exchange_weak(value_type& expected, value_type desired, memory_order order = memory_order::seq_cst) const noexcept;`
`bool compare_exchange_strong(value_type& expected, value_type desired, memory_order order = memory_order::seq_cst) const noexcept;`

*Constraints*: `is_const_v<T>` is `false`.

*Preconditions*: `failure` is `memory_order::relaxed`, `memory_order::consume`, `memory_order::acquire`, or `memory_order::seq_cst`.

*Effects*: Retrieves the value in `expected`. It then atomically compares the value representation of the value referenced by `*ptr` for equality with that previously retrieved from `expected`, and if `true`, replaces the value referenced by `*ptr` with that in `desired`. If and only if the comparison is `true`, memory is affected according to the value of `success`, and if the comparison is `false`, memory is affected according to the value of `failure`. When only one `memory_order` argument is supplied, the value of `success` is `order`, and the value of `failure` is `order` except that a value of `memory_order::acq_rel` shall be replaced by the value `memory_order::acquire` and a value of `memory_order::release` shall be replaced by the value `memory_order::relaxed`. If and only if the comparison is `false` then, after the atomic operation, the value in `expected` is replaced by the value read from the value referenced by `*ptr` during the atomic comparison. If the operation returns `true`, these operations are atomic read-modify-write operations ([intro.races]) on the value referenced by `*ptr`. Otherwise, these operations are atomic load operations on that memory.

*Returns*: The result of the comparison.

*Remarks*: A weak compare-and-exchange operation may fail spuriously. That is, even when the contents of memory referred to by `expected` and `ptr` are equal, it may return `false` and store back to `expected` the same memory contents that were originally there.

[Note 2: This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop. When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable. — end note]

`void wait(value_type old, memory_order order = memory_order::seq_cst) const noexcept;`

*Preconditions*: `order` is `memory_order::relaxed`, `memory_order::consume`, `memory_order::acquire`, or `memory_order::seq_cst`.

*Effects*: Repeatedly performs the following steps, in order:

(23.1) Evaluates `load(order)` and compares its value representation for equality against that of `old`.

(23.2) If they compare unequal, returns.

(23.3) Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.

*Remarks*: This function is an atomic waiting operation ([atomics.wait]) on atomic object `*ptr`.

`void notify_one() const noexcept;`

*Effects*: Unblocks the execution of at least one atomic waiting operation on `*ptr` that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.

*Remarks*: This function is an atomic notifying operation ([atomics.wait]) on atomic object `*ptr`.

`void notify_all() const noexcept;`

*Effects*: Unblocks the execution of all atomic waiting operations on `*ptr` that are eligible to be unblocked ([atomics.wait]) by this call.

*Remarks*: This function is an atomic notifying operation ([atomics.wait]) on atomic object `*ptr`.

Modify [atomics.ref.int]:

33.5.7.3 Specializations for integral types [atomics.ref.int]

There are specializations of the `atomic_ref` class template for all integral types except cv `bool` ~~the integral types `char`, `signed char`, `unsigned char`, `short`, `unsigned short`, `int`, `unsigned int`, `long`, `unsigned long`, `long long`, `unsigned long long`, `char8_t`, `char16_t`, `char32_t`, `wchar_t`, and any other types needed by the typedefs in the header `<cstdint>`~~. For each such possibly cv-qualified type *integral-type*, the specialization `atomic_ref<`*integral-type*`>` provides additional atomic operations appropriate to integral types.

[Note 1: The specialization `atomic_ref<bool>` uses the primary template ([atomics.ref.generic]). — end note]

The program is ill-formed if `is_always_lock_free` is `false` and `is_volatile_v<T>` is `true`.

`value_type fetch_key(value_type operand, memory_order order = memory_order::seq_cst) const noexcept;`

*Constraints*: `is_const_v<`*integral-type*`>` is `false`.

*Effects*: Atomically replaces the value referenced by `*ptr` with the result of the computation applied to the value referenced by `*ptr` and the given `operand`. Memory is affected according to the value of `order`. These operations are atomic read-modify-write operations ([intro.races]).

*Returns*: Atomically, the value referenced by `*ptr` immediately before the effects.

*Remarks*: Except for `fetch_max` and `fetch_min`, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.

[Note 2: There are no undefined results arising from the computation. — end note]

For `fetch_max` and `fetch_min`, the maximum and minimum computation is performed as if by the `max` and `min` algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.

`value_type operator op=(value_type operand) const noexcept;`

*Constraints*: `is_const_v<`*integral-type*`>` is `false`.

*Effects*: Equivalent to: `return fetch_key(operand) op operand;`

Modify [atomics.ref.float]:

33.5.7.4 Specializations for floating-point types [atomics.ref.float]

There are specializations of the `atomic_ref` class template for all ~~cv-unqualified~~ floating-point types. For each such possibly cv-qualified type *floating-point-type*, the specialization `atomic_ref<`*floating-point-type*`>` provides additional atomic operations appropriate to floating-point types.

The program is ill-formed if `is_always_lock_free` is `false` and `is_volatile_v<T>` is `true`.

`value_type fetch_key(value_type operand, memory_order order = memory_order::seq_cst) const noexcept;`

*Constraints*: `is_const_v<`*floating-point-type*`>` is `false`.

*Effects*: Atomically replaces the value referenced by `*ptr` with the result of the computation applied to the value referenced by `*ptr` and the given `operand`. Memory is affected according to the value of `order`. These operations are atomic read-modify-write operations ([intro.races]).

*Returns*: Atomically, the value referenced by `*ptr` immediately before the effects.

*Remarks*: If the result is not a representable value for its type, the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on *floating-point-type* should conform to the `std::numeric_limits<value_type>` traits associated with the floating-point type ([limits.syn]). The floating-point environment ([cfenv]) for atomic arithmetic operations on *floating-point-type* may be different than the calling thread's floating-point environment.

`value_type operator op=(value_type operand) const noexcept;`

*Constraints*: `is_const_v<`*floating-point-type*`>` is `false`.

*Effects*: Equivalent to: `return fetch_key(operand) op operand;`

Modify [atomics.ref.pointer]:

33.5.7.5 Partial specialization for pointers [atomics.ref.pointer]

There are specializations of the `atomic_ref` class template for all pointer-to-object types. For each such possibly cv-qualified type *pointer-type*, the specialization `atomic_ref<`*pointer-type*`>` provides additional atomic operations appropriate to pointer types.

The program is ill-formed if `is_always_lock_free` is `false` and `is_volatile_v<T>` is `true`.

`value_type fetch_key(difference_type operand, memory_order order = memory_order::seq_cst) const noexcept;`

*Constraints*: `is_const_v<`*pointer-type*`>` is `false`.

*Mandates*: ~~`T`~~ `remove_pointer_t<`*pointer-type*`>` is a complete object type.

*Effects*: Atomically replaces the value referenced by `*ptr` with the result of the computation applied to the value referenced by `*ptr` and the given `operand`. Memory is affected according to the value of `order`. These operations are atomic read-modify-write operations ([intro.races]).

*Returns*: Atomically, the value referenced by `*ptr` immediately before the effects.

*Remarks*: For `fetch_max` and `fetch_min`, the maximum and minimum computation is performed as if by the `max` and `min` algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.

[Note 1: If the pointers point to different complete objects (or subobjects thereof), the `<` operator does not establish a strict weak ordering (Table 29, [expr.rel]). — end note]

`value_type operator op=(difference_type operand) const noexcept;`

*Constraints*: `is_const_v<`*pointer-type*`>` is `false`.

*Effects*: Equivalent to: `return fetch_key(operand) op operand;`

Modify [atomics.ref.memop]:

33.5.7.6 Member operators common to integers and pointers to objects [atomics.ref.memop]

Let *referred-type* be *pointer-type* for the specializations in [atomics.ref.pointer] and be *integral-type* for the specializations in [atomics.ref.int].

`value_type operator++(int) const noexcept;`

*Constraints*: `is_const_v<`*referred-type*`>` is `false`.

*Effects*: Equivalent to: `return fetch_add(1);`

`value_type operator--(int) const noexcept;`

*Constraints*: `is_const_v<`*referred-type*`>` is `false`.

*Effects*: Equivalent to: `return fetch_sub(1);`

`value_type operator++() const noexcept;`

*Constraints*: `is_const_v<`*referred-type*`>` is `false`.

*Effects*: Equivalent to: `return fetch_add(1) + 1;`

`value_type operator--() const noexcept;`

*Constraints*: `is_const_v<`*referred-type*`>` is `false`.

*Effects*: Equivalent to: `return fetch_sub(1) - 1;`