# constexpr atomic<T> and atomic_ref<T>

## Introduction and motivation

This paper proposes marking most of the member functions and associated non-member functions of `atomic<T>` (and `atomic_ref<T>`) `constexpr`, to allow usage of atomic code without changes in constexpr and consteval code.

The proposed changes will allow other types (`std::shared_ptr<T>`, persistent data structures with atomic pointers) and algorithms (thread-safe data processing, such as scanning data with an atomic counter) to be implemented by just sprinkling `constexpr` over their specifications.

## Changes

- R0 → R1: Made the `wait` and `notify` functions `constexpr` as requested by SG1; wording changed accordingly. Updated link to implementation on Compiler Explorer.

## Previous polls

SG1: Forward P3309 to LEWG with the following notes:

- Add constexpr to the wait and notify functions in the next revision of P3309.
- `atomic<shared_ptr>` should be supported in constexpr whenever `shared_ptr` is supported in constexpr (whichever paper lands second should have this change).
- `is_lock_free()` should not be made constexpr.

SF | F | N | A | SA
---|---|---|---|---
2 | 10 | 4 | 0 | 0

## Intention for wording changes

Mark all functions in [atomics] `constexpr`, excluding all `volatile` overloads. All of these can be implemented either directly in the constant-expression evaluator or by using `if consteval`:

```
template<class T>
constexpr T atomic_fetch_add(atomic<T>* target, typename atomic<T>::difference_type diff) noexcept {
  if consteval {
    const auto previous = target->value;
    target->value += diff;
    return previous;
  } else {
    return __c11_atomic_fetch_add(&target->value, diff);
  }
}
```

Synchronization functions and helpers (`std::kill_dependency`, `std::atomic_thread_fence`) can be implemented as no-ops. Memory-order parameters should simply be ignored, as constant-evaluated code doesn't have multiple threads.

An alternative implementation strategy is to allow the atomic builtins to work in the constant evaluator.

### Question answered by SG1

- Should we also make the `is_lock_free` functions constexpr? No, keep them non-constexpr, as the answer can differ depending on the running environment.

### Question for LEWG

- Should we make `atomic<shared_ptr<T>>` and `atomic<weak_ptr<T>>` constexpr? (The paper's wording contains this change.) There is an associated paper, P3037R1, making `shared_ptr<T>` constexpr.

## Example

This example shows how you can easily reuse code between runtime and constant-evaluated contexts without duplication. Without this paper, you need to duplicate multiple functions.

```
constexpr bool process_first_unprocessed(std::atomic<size_t> & counter, std::span<cell> subject) {
  // BEFORE: compile-time error when you try to evaluate this inside constant-evaluated code
  // AFTER: works sequentially in constant-evaluated code
  const size_t current = counter.fetch_add(1);
  if (current >= subject.size()) {
    return false;
  }
  process(subject[current]);
  return true;
}

constexpr void process_all(std::span<cell> subject, unsigned thread_count = 1) {
  // BEFORE: calling this function in constant-evaluated code always fails, for any number of requested threads
  // AFTER: calling it with thread_count == 1 succeeds in constant-evaluated code
  std::atomic<size_t> counter{0};
  auto threads = std::vector<std::jthread>{};
  assert(thread_count >= 1);
  for (unsigned i = 1; i < thread_count; ++i) {
    threads.emplace_back([&]{
      while (process_first_unprocessed(counter, subject));
    });
  }
  while (process_first_unprocessed(counter, subject));
}
```

Link to compiler-explorer.com

## Proposed changes to wording

# 33 Concurrency support library [thread]

## 33.5 Atomic operations [atomics]

### 33.5.1 General [atomics.general]

### 33.5.2 Header <atomic> synopsis [atomics.syn]

```
namespace std {
  // [atomics.order], order and consistency
  enum class memory_order : unspecified;                                        // freestanding
  inline constexpr memory_order memory_order_relaxed = memory_order::relaxed;   // freestanding
  inline constexpr memory_order memory_order_consume = memory_order::consume;   // freestanding
  inline constexpr memory_order memory_order_acquire = memory_order::acquire;   // freestanding
  inline constexpr memory_order memory_order_release = memory_order::release;   // freestanding
  inline constexpr memory_order memory_order_acq_rel = memory_order::acq_rel;   // freestanding
  inline constexpr memory_order memory_order_seq_cst = memory_order::seq_cst;   // freestanding

  template<class T>
    constexpr T kill_dependency(T y) noexcept;                                  // freestanding
}

// [atomics.lockfree], lock-free property
#define ATOMIC_BOOL_LOCK_FREE unspecified      // freestanding
#define ATOMIC_CHAR_LOCK_FREE unspecified      // freestanding
#define ATOMIC_CHAR8_T_LOCK_FREE unspecified   // freestanding
#define ATOMIC_CHAR16_T_LOCK_FREE unspecified  // freestanding
#define ATOMIC_CHAR32_T_LOCK_FREE unspecified  // freestanding
#define ATOMIC_WCHAR_T_LOCK_FREE unspecified   // freestanding
#define ATOMIC_SHORT_LOCK_FREE unspecified     // freestanding
#define ATOMIC_INT_LOCK_FREE unspecified       // freestanding
#define ATOMIC_LONG_LOCK_FREE unspecified      // freestanding
#define ATOMIC_LLONG_LOCK_FREE unspecified     // freestanding
#define ATOMIC_POINTER_LOCK_FREE unspecified   // freestanding

namespace std {
  // [atomics.ref.generic], class template atomic_ref
  template<class T> struct atomic_ref;                                          // freestanding
  // [atomics.ref.pointer], partial specialization for pointers
  template<class T> struct atomic_ref<T*>;                                      // freestanding

  // [atomics.types.generic], class template atomic
  template<class T> struct atomic;                                              // freestanding
  // [atomics.types.pointer], partial specialization for pointers
  template<class T> struct atomic<T*>;                                          // freestanding

  // [atomics.nonmembers], non-member functions
  template<class T> bool atomic_is_lock_free(const volatile atomic<T>*) noexcept;  // freestanding
  template<class T> bool atomic_is_lock_free(const atomic<T>*) noexcept;        // freestanding
  template<class T> void atomic_store(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> constexpr void atomic_store(atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> void atomic_store_explicit(volatile atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> constexpr void atomic_store_explicit(atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> T atomic_load(const volatile atomic<T>*) noexcept;          // freestanding
  template<class T> constexpr T atomic_load(const atomic<T>*) noexcept;         // freestanding
  template<class T> T atomic_load_explicit(const volatile atomic<T>*, memory_order) noexcept;  // freestanding
  template<class T> constexpr T atomic_load_explicit(const atomic<T>*, memory_order) noexcept;  // freestanding
  template<class T> T atomic_exchange(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> constexpr T atomic_exchange(atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> T atomic_exchange_explicit(volatile atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> constexpr T atomic_exchange_explicit(atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> bool atomic_compare_exchange_weak(volatile atomic<T>*, typename atomic<T>::value_type*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> constexpr bool atomic_compare_exchange_weak(atomic<T>*, typename atomic<T>::value_type*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> bool atomic_compare_exchange_strong(volatile atomic<T>*, typename atomic<T>::value_type*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> constexpr bool atomic_compare_exchange_strong(atomic<T>*, typename atomic<T>::value_type*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> bool atomic_compare_exchange_weak_explicit(volatile atomic<T>*, typename atomic<T>::value_type*, typename atomic<T>::value_type, memory_order, memory_order) noexcept;  // freestanding
  template<class T> constexpr bool atomic_compare_exchange_weak_explicit(atomic<T>*, typename atomic<T>::value_type*, typename atomic<T>::value_type, memory_order, memory_order) noexcept;  // freestanding
  template<class T> bool atomic_compare_exchange_strong_explicit(volatile atomic<T>*, typename atomic<T>::value_type*, typename atomic<T>::value_type, memory_order, memory_order) noexcept;  // freestanding
  template<class T> constexpr bool atomic_compare_exchange_strong_explicit(atomic<T>*, typename atomic<T>::value_type*, typename atomic<T>::value_type, memory_order, memory_order) noexcept;  // freestanding
  template<class T> T atomic_fetch_add(volatile atomic<T>*, typename atomic<T>::difference_type) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_add(atomic<T>*, typename atomic<T>::difference_type) noexcept;  // freestanding
  template<class T> T atomic_fetch_add_explicit(volatile atomic<T>*, typename atomic<T>::difference_type, memory_order) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_add_explicit(atomic<T>*, typename atomic<T>::difference_type, memory_order) noexcept;  // freestanding
  template<class T> T atomic_fetch_sub(volatile atomic<T>*, typename atomic<T>::difference_type) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_sub(atomic<T>*, typename atomic<T>::difference_type) noexcept;  // freestanding
  template<class T> T atomic_fetch_sub_explicit(volatile atomic<T>*, typename atomic<T>::difference_type, memory_order) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_sub_explicit(atomic<T>*, typename atomic<T>::difference_type, memory_order) noexcept;  // freestanding
  template<class T> T atomic_fetch_and(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_and(atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> T atomic_fetch_and_explicit(volatile atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_and_explicit(atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> T atomic_fetch_or(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_or(atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> T atomic_fetch_or_explicit(volatile atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_or_explicit(atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> T atomic_fetch_xor(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_xor(atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> T atomic_fetch_xor_explicit(volatile atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_xor_explicit(atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> T atomic_fetch_max(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_max(atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> T atomic_fetch_max_explicit(volatile atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_max_explicit(atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> T atomic_fetch_min(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_min(atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> T atomic_fetch_min_explicit(volatile atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> constexpr T atomic_fetch_min_explicit(atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> void atomic_wait(const volatile atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> constexpr void atomic_wait(const atomic<T>*, typename atomic<T>::value_type) noexcept;  // freestanding
  template<class T> void atomic_wait_explicit(const volatile atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> constexpr void atomic_wait_explicit(const atomic<T>*, typename atomic<T>::value_type, memory_order) noexcept;  // freestanding
  template<class T> void atomic_notify_one(volatile atomic<T>*) noexcept;       // freestanding
  template<class T> constexpr void atomic_notify_one(atomic<T>*) noexcept;      // freestanding
  template<class T> void atomic_notify_all(volatile atomic<T>*) noexcept;       // freestanding
  template<class T> constexpr void atomic_notify_all(atomic<T>*) noexcept;      // freestanding

  // [atomics.alias], type aliases
  using atomic_bool = atomic<bool>;                            // freestanding
  using atomic_char = atomic<char>;                            // freestanding
  using atomic_schar = atomic<signed char>;                    // freestanding
  using atomic_uchar = atomic<unsigned char>;                  // freestanding
  using atomic_short = atomic<short>;                          // freestanding
  using atomic_ushort = atomic<unsigned short>;                // freestanding
  using atomic_int = atomic<int>;                              // freestanding
  using atomic_uint = atomic<unsigned int>;                    // freestanding
  using atomic_long = atomic<long>;                            // freestanding
  using atomic_ulong = atomic<unsigned long>;                  // freestanding
  using atomic_llong = atomic<long long>;                      // freestanding
  using atomic_ullong = atomic<unsigned long long>;            // freestanding
  using atomic_char8_t = atomic<char8_t>;                      // freestanding
  using atomic_char16_t = atomic<char16_t>;                    // freestanding
  using atomic_char32_t = atomic<char32_t>;                    // freestanding
  using atomic_wchar_t = atomic<wchar_t>;                      // freestanding
  using atomic_int8_t = atomic<int8_t>;                        // freestanding
  using atomic_uint8_t = atomic<uint8_t>;                      // freestanding
  using atomic_int16_t = atomic<int16_t>;                      // freestanding
  using atomic_uint16_t = atomic<uint16_t>;                    // freestanding
  using atomic_int32_t = atomic<int32_t>;                      // freestanding
  using atomic_uint32_t = atomic<uint32_t>;                    // freestanding
  using atomic_int64_t = atomic<int64_t>;                      // freestanding
  using atomic_uint64_t = atomic<uint64_t>;                    // freestanding
  using atomic_int_least8_t = atomic<int_least8_t>;            // freestanding
  using atomic_uint_least8_t = atomic<uint_least8_t>;          // freestanding
  using atomic_int_least16_t = atomic<int_least16_t>;          // freestanding
  using atomic_uint_least16_t = atomic<uint_least16_t>;        // freestanding
  using atomic_int_least32_t = atomic<int_least32_t>;          // freestanding
  using atomic_uint_least32_t = atomic<uint_least32_t>;        // freestanding
  using atomic_int_least64_t = atomic<int_least64_t>;          // freestanding
  using atomic_uint_least64_t = atomic<uint_least64_t>;        // freestanding
  using atomic_int_fast8_t = atomic<int_fast8_t>;              // freestanding
  using atomic_uint_fast8_t = atomic<uint_fast8_t>;            // freestanding
  using atomic_int_fast16_t = atomic<int_fast16_t>;            // freestanding
  using atomic_uint_fast16_t = atomic<uint_fast16_t>;          // freestanding
  using atomic_int_fast32_t = atomic<int_fast32_t>;            // freestanding
  using atomic_uint_fast32_t = atomic<uint_fast32_t>;          // freestanding
  using atomic_int_fast64_t = atomic<int_fast64_t>;            // freestanding
  using atomic_uint_fast64_t = atomic<uint_fast64_t>;          // freestanding
  using atomic_intptr_t = atomic<intptr_t>;                    // freestanding
  using atomic_uintptr_t = atomic<uintptr_t>;                  // freestanding
  using atomic_size_t = atomic<size_t>;                        // freestanding
  using atomic_ptrdiff_t = atomic<ptrdiff_t>;                  // freestanding
  using atomic_intmax_t = atomic<intmax_t>;                    // freestanding
  using atomic_uintmax_t = atomic<uintmax_t>;                  // freestanding

  using atomic_signed_lock_free = see below;
  using atomic_unsigned_lock_free = see below;

  // [atomics.flag], flag type and operations
  struct atomic_flag;                                                           // freestanding
  bool atomic_flag_test(const volatile atomic_flag*) noexcept;                  // freestanding
  constexpr bool atomic_flag_test(const atomic_flag*) noexcept;                 // freestanding
  bool atomic_flag_test_explicit(const volatile atomic_flag*, memory_order) noexcept;  // freestanding
  constexpr bool atomic_flag_test_explicit(const atomic_flag*, memory_order) noexcept;  // freestanding
  bool atomic_flag_test_and_set(volatile atomic_flag*) noexcept;                // freestanding
  constexpr bool atomic_flag_test_and_set(atomic_flag*) noexcept;               // freestanding
  bool atomic_flag_test_and_set_explicit(volatile atomic_flag*, memory_order) noexcept;  // freestanding
  constexpr bool atomic_flag_test_and_set_explicit(atomic_flag*, memory_order) noexcept;  // freestanding
  void atomic_flag_clear(volatile atomic_flag*) noexcept;                       // freestanding
  constexpr void atomic_flag_clear(atomic_flag*) noexcept;                      // freestanding
  void atomic_flag_clear_explicit(volatile atomic_flag*, memory_order) noexcept;  // freestanding
  constexpr void atomic_flag_clear_explicit(atomic_flag*, memory_order) noexcept;  // freestanding
  void atomic_flag_wait(const volatile atomic_flag*, bool) noexcept;            // freestanding
  constexpr void atomic_flag_wait(const atomic_flag*, bool) noexcept;           // freestanding
  void atomic_flag_wait_explicit(const volatile atomic_flag*, bool, memory_order) noexcept;  // freestanding
  constexpr void atomic_flag_wait_explicit(const atomic_flag*, bool, memory_order) noexcept;  // freestanding
  void atomic_flag_notify_one(volatile atomic_flag*) noexcept;                  // freestanding
  constexpr void atomic_flag_notify_one(atomic_flag*) noexcept;                 // freestanding
  void atomic_flag_notify_all(volatile atomic_flag*) noexcept;                  // freestanding
  constexpr void atomic_flag_notify_all(atomic_flag*) noexcept;                 // freestanding
}

#define ATOMIC_FLAG_INIT see below  // freestanding

namespace std {
  // [atomics.fences], fences
  extern "C" constexpr void atomic_thread_fence(memory_order) noexcept;         // freestanding
  extern "C" constexpr void atomic_signal_fence(memory_order) noexcept;         // freestanding
}
```

### 33.5.3 Type aliases [atomics.alias]

### 33.5.4 Order and consistency [atomics.order]

```
namespace std {
  enum class memory_order : unspecified {
    relaxed, consume, acquire, release, acq_rel, seq_cst
  };
}
```

- memory_order::relaxed: no operation orders memory.
- memory_order::release, memory_order::acq_rel, and memory_order::seq_cst: a store operation performs a release operation on the affected memory location.
- memory_order::consume: a load operation performs a consume operation on the affected memory location.
- memory_order::acquire, memory_order::acq_rel, and memory_order::seq_cst: a load operation performs an acquire operation on the affected memory location.

An atomic operation A on some atomic object M is *coherence-ordered before* another atomic operation B on M if

- A is a modification, and B reads the value stored by A, or
- A precedes B in the modification order of M, or
- A and B are not the same atomic read-modify-write operation, and there exists an atomic modification X of M such that A reads the value stored by X and X precedes B in the modification order of M, or
- there exists an atomic modification X of M such that A is coherence-ordered before X and X is coherence-ordered before B.

- if A and B are both memory_order::seq_cst operations, then A precedes B in S; and
- if A is a memory_order::seq_cst operation and B happens before a memory_order::seq_cst fence Y, then A precedes Y in S; and
- if a memory_order::seq_cst fence X happens before A and B is a memory_order::seq_cst operation, then X precedes B in S; and
- if a memory_order::seq_cst fence X happens before A and B happens before a memory_order::seq_cst fence Y, then X precedes Y in S.


*Recommended practice*: The implementation should make atomic stores visible to atomic loads, and atomic loads should observe atomic stores, within a reasonable amount of time.

```
template<class T>
constexpr T kill_dependency(T y) noexcept;
```

### 33.5.5 Lock-free property [atomics.lockfree]

```
#define ATOMIC_BOOL_LOCK_FREE unspecified
#define ATOMIC_CHAR_LOCK_FREE unspecified
#define ATOMIC_CHAR8_T_LOCK_FREE unspecified
#define ATOMIC_CHAR16_T_LOCK_FREE unspecified
#define ATOMIC_CHAR32_T_LOCK_FREE unspecified
#define ATOMIC_WCHAR_T_LOCK_FREE unspecified
#define ATOMIC_SHORT_LOCK_FREE unspecified
#define ATOMIC_INT_LOCK_FREE unspecified
#define ATOMIC_LONG_LOCK_FREE unspecified
#define ATOMIC_LLONG_LOCK_FREE unspecified
#define ATOMIC_POINTER_LOCK_FREE unspecified
```

### 33.5.6 Waiting and notifying [atomics.wait]

*Atomic waiting operations* and *atomic notifying operations* provide a mechanism to wait for the value of an atomic object to change more efficiently than can be achieved with polling.

[*Note 3*: The following functions are atomic notifying operations:

- atomic<T>::notify_one and atomic<T>::notify_all,
- atomic_flag::notify_one and atomic_flag::notify_all,
- atomic_notify_one and atomic_notify_all,
- atomic_flag_notify_one and atomic_flag_notify_all, and
- atomic_ref<T>::notify_one and atomic_ref<T>::notify_all.

— *end note*]

A call to an atomic waiting operation on an atomic object M is *eligible to be unblocked* by a call to an atomic notifying operation on M if there exist side effects X and Y on M such that:

- the atomic waiting operation has blocked after observing the result of X,
- X precedes Y in the modification order of M, and
- Y happens before the call to the atomic notifying operation.

### 33.5.7 Class template atomic_ref [atomics.ref.generic]

#### 33.5.7.1 General [atomics.ref.generic.general]

```
namespace std {
  template<class T> struct atomic_ref {
  private:
    T* ptr;  // exposition only

  public:
    using value_type = T;

    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(T&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T operator=(T) const noexcept;
    constexpr T load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator T() const noexcept;

    constexpr T exchange(T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) const noexcept;

    constexpr void wait(T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
```

#### 33.5.7.2 Operations [atomics.ref.ops]

```
static constexpr size_t required_alignment;
```


```
static constexpr bool is_always_lock_free;
```

```
bool is_lock_free() const noexcept;
```

```
constexpr atomic_ref(T& obj);
```

```
constexpr atomic_ref(const atomic_ref& ref) noexcept;
```

```
constexpr void store(T desired, memory_order order = memory_order::seq_cst) const noexcept;
```

```
constexpr T operator=(T desired) const noexcept;
```

```
constexpr T load(memory_order order = memory_order::seq_cst) const noexcept;
```

```
constexpr operator T() const noexcept;
```

```
constexpr T exchange(T desired, memory_order order = memory_order::seq_cst) const noexcept;
```

```
constexpr bool compare_exchange_weak(T& expected, T desired,
memory_order success, memory_order failure) const noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
memory_order success, memory_order failure) const noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired,
memory_order order = memory_order::seq_cst) const noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
memory_order order = memory_order::seq_cst) const noexcept;
```

*Remarks*: A weak compare-and-exchange operation may fail spuriously.


```
constexpr void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
```

```
constexpr void notify_one() const noexcept;
```

*Effects*: Unblocks the execution of at least one atomic waiting operation on *ptr that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.

```
constexpr void notify_all() const noexcept;
```

*Effects*: Unblocks the execution of all atomic waiting operations on *ptr that are eligible to be unblocked ([atomics.wait]) by this call.

#### 33.5.7.3 Specializations for integral types [atomics.ref.int]

For each *integral-type*, the specialization atomic_ref<*integral-type*> provides additional atomic operations appropriate to integral types.


```
namespace std {
  template<> struct atomic_ref<integral-type> {
  private:
    integral-type* ptr;  // exposition only

  public:
    using value_type = integral-type;
    using difference_type = value_type;

    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(integral-type&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type operator=(integral-type) const noexcept;
    constexpr integral-type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator integral-type() const noexcept;

    constexpr integral-type exchange(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr integral-type fetch_add(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_sub(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_and(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_or(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_xor(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_max(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_min(integral-type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr integral-type operator++(int) const noexcept;
    constexpr integral-type operator--(int) const noexcept;
    constexpr integral-type operator++() const noexcept;
    constexpr integral-type operator--() const noexcept;
    constexpr integral-type operator+=(integral-type) const noexcept;
    constexpr integral-type operator-=(integral-type) const noexcept;
    constexpr integral-type operator&=(integral-type) const noexcept;
    constexpr integral-type operator|=(integral-type) const noexcept;
    constexpr integral-type operator^=(integral-type) const noexcept;

    constexpr void wait(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
```

`constexpr `*integral-type* fetch_*key*(*integral-type* operand,
memory_order order = memory_order::seq_cst) const noexcept;

*Effects*: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand.

*Remarks*: Except for fetch_max and fetch_min, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.

`constexpr `*integral-type* operator *op*=(*integral-type* operand) const noexcept;

#### 33.5.7.4 Specializations for floating-point types [atomics.ref.float]

For each *floating-point-type*, the specialization atomic_ref<*floating-point-type*> provides additional atomic operations appropriate to floating-point types.

```
namespace std {
  template<> struct atomic_ref<floating-point-type> {
  private:
    floating-point-type* ptr;  // exposition only

  public:
    using value_type = floating-point-type;
    using difference_type = value_type;

    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(floating-point-type&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr floating-point-type operator=(floating-point-type) const noexcept;
    constexpr floating-point-type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator floating-point-type() const noexcept;

    constexpr floating-point-type exchange(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr floating-point-type fetch_add(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr floating-point-type fetch_sub(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr floating-point-type operator+=(floating-point-type) const noexcept;
    constexpr floating-point-type operator-=(floating-point-type) const noexcept;

    constexpr void wait(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
```

`constexpr `*floating-point-type* fetch_*key*(*floating-point-type* operand,
memory_order order = memory_order::seq_cst) const noexcept;

*Effects*: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand.

*Remarks*: If the result is not a representable value for its type ([expr.pre]), the result is unspecified, but the operations otherwise have no undefined behavior.

Atomic arithmetic operations on *floating-point-type* should conform to the std::numeric_limits<*floating-point-type*> traits associated with the floating-point type ([limits.syn]).

`constexpr `*floating-point-type* operator *op*=(*floating-point-type* operand) const noexcept;

#### 33.5.7.5 Partial specialization for pointers [atomics.ref.pointer]

```
namespace std {
  template<class T> struct atomic_ref<T*> {
  private:
    T** ptr;  // exposition only

  public:
    using value_type = T*;
    using difference_type = ptrdiff_t;

    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(T*&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* operator=(T*) const noexcept;
    constexpr T* load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator T*() const noexcept;

    constexpr T* exchange(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order = memory_order::seq_cst) const noexcept;

    constexpr T* fetch_add(difference_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* fetch_sub(difference_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* fetch_max(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* fetch_min(T*, memory_order = memory_order::seq_cst) const noexcept;

    constexpr T* operator++(int) const noexcept;
    constexpr T* operator--(int) const noexcept;
    constexpr T* operator++() const noexcept;
    constexpr T* operator--() const noexcept;
    constexpr T* operator+=(difference_type) const noexcept;
    constexpr T* operator-=(difference_type) const noexcept;

    constexpr void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
```

`constexpr T* fetch_`*key*(difference_type operand, memory_order order = memory_order::seq_cst) const noexcept;

*Effects*: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand.


`constexpr T* operator `*op*=(difference_type operand) const noexcept;

#### 33.5.7.6 Member operators common to integers and pointers to objects [atomics.ref.memop]

```
constexpr value_type operator++(int) const noexcept;
```

```
constexpr value_type operator--(int) const noexcept;
```

```
constexpr value_type operator++() const noexcept;
```

```
constexpr value_type operator--() const noexcept;
```

### 33.5.8 Class template atomic [atomics.types.generic]

#### 33.5.8.1 General [atomics.types.generic.general]

```
namespace std {
  template<class T> struct atomic {
    using value_type = T;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    // [atomics.types.operations], operations on atomic types
    constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
    constexpr atomic(T) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    T load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr T load(memory_order = memory_order::seq_cst) const noexcept;
    operator T() const volatile noexcept;
    constexpr operator T() const noexcept;
    void store(T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(T, memory_order = memory_order::seq_cst) noexcept;
    T operator=(T) volatile noexcept;
    constexpr T operator=(T) noexcept;

    T exchange(T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T exchange(T, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T&, T, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T&, T, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) noexcept;

    void wait(T, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(T, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
```

#### 33.5.8.2 Operations on atomic types [atomics.types.operations]

```
constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
```

```
constexpr atomic(T desired) noexcept;
```


```
void store(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store(T desired, memory_order order = memory_order::seq_cst) noexcept;
```

```
T operator=(T desired) volatile noexcept;
constexpr T operator=(T desired) noexcept;
```

```
T load(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr T load(memory_order order = memory_order::seq_cst) const noexcept;
```

```
operator T() const volatile noexcept;
constexpr operator T() const noexcept;
```

```
T exchange(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T exchange(T desired, memory_order order = memory_order::seq_cst) noexcept;
```

```
bool compare_exchange_weak(T& expected, T desired,
memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired,
memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(T& expected, T desired,
memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
memory_order success, memory_order failure) noexcept;
bool compare_exchange_weak(T& expected, T desired,
memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired,
memory_order order = memory_order::seq_cst) noexcept;
bool compare_exchange_strong(T& expected, T desired,
memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
memory_order order = memory_order::seq_cst) noexcept;
```
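The compare-and-exchange signatures above support the usual retry loop: on failure, `expected` is updated with the value actually observed, so the next iteration works with fresh data. A small runtime sketch (the function name `atomic_double` is illustrative):

```cpp
#include <atomic>

// Atomically doubles the stored value using the classic CAS loop.
// compare_exchange_weak may fail spuriously, which the loop absorbs.
inline int atomic_double(std::atomic<int>& a) {
    int expected = a.load();
    while (!a.compare_exchange_weak(expected, expected * 2)) {
        // on failure, expected now holds the value the object actually had
    }
    return expected;  // the value that was successfully replaced
}
```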


*Remarks*: A weak compare-and-exchange operation may fail spuriously.


```
void wait(T old, memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
```

```
void notify_one() volatile noexcept;
constexpr void notify_one() noexcept;
```

*Effects*: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.

```
void notify_all() volatile noexcept;
constexpr void notify_all() noexcept;
```

*Effects*: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.

#### 33.5.8.3 Specializations for integers [atomics.types.int]

For each *integral-type*, the specialization atomic<*integral-type*> provides additional atomic operations appropriate to integral types.

```
namespace std {
  template<> struct atomic<integral-type> {
    using value_type = integral-type;
    using difference_type = value_type;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(integral-type) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type operator=(integral-type) volatile noexcept;
    constexpr integral-type operator=(integral-type) noexcept;
    integral-type load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr integral-type load(memory_order = memory_order::seq_cst) const noexcept;
    operator integral-type() const volatile noexcept;
    constexpr operator integral-type() const noexcept;

    integral-type exchange(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type exchange(integral-type, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(integral-type&, integral-type,
                               memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type,
                                         memory_order, memory_order) noexcept;
    bool compare_exchange_strong(integral-type&, integral-type,
                                 memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type,
                                           memory_order, memory_order) noexcept;
    bool compare_exchange_weak(integral-type&, integral-type,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type,
                                         memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(integral-type&, integral-type,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type,
                                           memory_order = memory_order::seq_cst) noexcept;

    integral-type fetch_add(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_add(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_sub(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_sub(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_and(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_and(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_or(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_or(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_xor(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_xor(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_max(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_max(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_min(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_min(integral-type, memory_order = memory_order::seq_cst) noexcept;

    integral-type operator++(int) volatile noexcept;
    constexpr integral-type operator++(int) noexcept;
    integral-type operator--(int) volatile noexcept;
    constexpr integral-type operator--(int) noexcept;
    integral-type operator++() volatile noexcept;
    constexpr integral-type operator++() noexcept;
    integral-type operator--() volatile noexcept;
    constexpr integral-type operator--() noexcept;
    integral-type operator+=(integral-type) volatile noexcept;
    constexpr integral-type operator+=(integral-type) noexcept;
    integral-type operator-=(integral-type) volatile noexcept;
    constexpr integral-type operator-=(integral-type) noexcept;
    integral-type operator&=(integral-type) volatile noexcept;
    constexpr integral-type operator&=(integral-type) noexcept;
    integral-type operator|=(integral-type) volatile noexcept;
    constexpr integral-type operator|=(integral-type) noexcept;
    integral-type operator^=(integral-type) volatile noexcept;
    constexpr integral-type operator^=(integral-type) noexcept;

    void wait(integral-type, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
```
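Each integral `fetch_`*key* operation returns the value held immediately before the modification, and the object ends up holding the result of the computation. A small runtime sketch (the helper name `apply_fetch_ops` is illustrative):

```cpp
#include <atomic>

// Applies add, or, and, xor in sequence and returns the final value.
inline unsigned apply_fetch_ops(std::atomic<unsigned>& a) {
    a.fetch_add(0b0101u);  // 0      -> 0b0101
    a.fetch_or(0b0010u);   // 0b0101 -> 0b0111
    a.fetch_and(0b0110u);  // 0b0111 -> 0b0110
    a.fetch_xor(0b0011u);  // 0b0110 -> 0b0101
    return a.load();
}
```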

`T fetch_`*key*`(T operand, memory_order order = memory_order::seq_cst) volatile noexcept;`
`constexpr T fetch_`*key*`(T operand, memory_order order = memory_order::seq_cst) noexcept;`

*Effects*: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.

*Remarks*: Except for fetch_max and fetch_min, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
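This remark is what makes signed overflow well defined for the atomic fetch operations: the computation behaves as if performed in the corresponding unsigned type, so it wraps instead of invoking undefined behavior. A sketch (the helper name `wrapping_increment` is illustrative):

```cpp
#include <atomic>
#include <climits>

// fetch_add on a signed atomic wraps on overflow per the remark above;
// incrementing INT_MAX yields INT_MIN rather than undefined behavior.
inline int wrapping_increment(std::atomic<int>& a) {
    a.fetch_add(1);
    return a.load();
}
```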

`T operator `*op*`=(T operand) volatile noexcept;`
`constexpr T operator `*op*`=(T operand) noexcept;`

#### 33.5.8.4 Specializations for floating-point types [atomics.types.float]

For each *floating-point-type*, the specialization atomic<*floating-point-type*> provides additional atomic operations appropriate to floating-point types.

```
namespace std {
  template<> struct atomic<floating-point-type> {
    using value_type = floating-point-type;
    using difference_type = value_type;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(floating-point-type) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type operator=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator=(floating-point-type) noexcept;
    floating-point-type load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr floating-point-type load(memory_order = memory_order::seq_cst) const noexcept;
    operator floating-point-type() const volatile noexcept;
    constexpr operator floating-point-type() const noexcept;

    floating-point-type exchange(floating-point-type,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type exchange(floating-point-type,
                                           memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(floating-point-type&, floating-point-type,
                               memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type,
                                         memory_order, memory_order) noexcept;
    bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                 memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                           memory_order, memory_order) noexcept;
    bool compare_exchange_weak(floating-point-type&, floating-point-type,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type,
                                         memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                           memory_order = memory_order::seq_cst) noexcept;

    floating-point-type fetch_add(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_add(floating-point-type,
                                            memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_sub(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_sub(floating-point-type,
                                            memory_order = memory_order::seq_cst) noexcept;

    floating-point-type operator+=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator+=(floating-point-type) noexcept;
    floating-point-type operator-=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator-=(floating-point-type) noexcept;

    void wait(floating-point-type, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
```

`T fetch_`*key*`(T operand, memory_order order = memory_order::seq_cst) volatile noexcept;`
`constexpr T fetch_`*key*`(T operand, memory_order order = memory_order::seq_cst) noexcept;`

*Effects*: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.

*Remarks*: If the result is not a representable value for its type ([expr.pre]) the result is unspecified, but the operations otherwise have no undefined behavior.

Atomic arithmetic operations on *floating-point-type* should conform to the std::numeric_limits<*floating-point-type*> traits associated with the floating-point type ([limits.syn]).

`T operator `*op*`=(T operand) volatile noexcept;`
`constexpr T operator `*op*`=(T operand) noexcept;`

*Remarks*: If the result is not a representable value for its type ([expr.pre]) the result is unspecified, but the operations otherwise have no undefined behavior.

Atomic arithmetic operations on *floating-point-type* should conform to the std::numeric_limits<*floating-point-type*> traits associated with the floating-point type ([limits.syn]).

#### 33.5.8.5 Partial specialization for pointers [atomics.types.pointer]

```
namespace std {
  template<class T> struct atomic<T*> {
    using value_type = T*;
    using difference_type = ptrdiff_t;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(T*) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(T*, memory_order = memory_order::seq_cst) noexcept;
    T* operator=(T*) volatile noexcept;
    constexpr T* operator=(T*) noexcept;
    T* load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr T* load(memory_order = memory_order::seq_cst) const noexcept;
    operator T*() const volatile noexcept;
    constexpr operator T*() const noexcept;

    T* exchange(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* exchange(T*, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order = memory_order::seq_cst) noexcept;

    T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_max(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_max(T*, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_min(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_min(T*, memory_order = memory_order::seq_cst) noexcept;

    T* operator++(int) volatile noexcept;
    constexpr T* operator++(int) noexcept;
    T* operator--(int) volatile noexcept;
    constexpr T* operator--(int) noexcept;
    T* operator++() volatile noexcept;
    constexpr T* operator++() noexcept;
    T* operator--() volatile noexcept;
    constexpr T* operator--() noexcept;
    T* operator+=(ptrdiff_t) volatile noexcept;
    constexpr T* operator+=(ptrdiff_t) noexcept;
    T* operator-=(ptrdiff_t) volatile noexcept;
    constexpr T* operator-=(ptrdiff_t) noexcept;

    void wait(T*, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
```

key | Op | Computation | key | Op | Computation
---|---|---|---|---|---
add | + | addition | sub | - | subtraction
max |  | maximum | min |  | minimum

`T* fetch_`*key*`(ptrdiff_t operand, memory_order order = memory_order::seq_cst) volatile noexcept;`
`constexpr T* fetch_`*key*`(ptrdiff_t operand, memory_order order = memory_order::seq_cst) noexcept;`

*Effects*: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.


`T* operator `*op*`=(ptrdiff_t operand) volatile noexcept;`
`constexpr T* operator `*op*`=(ptrdiff_t operand) noexcept;`
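Pointer `fetch_add`/`fetch_sub` count in elements of `T`, exactly like built-in pointer arithmetic, which makes an atomic bump allocator a one-liner. A runtime sketch (the helper name `claim_slots` is illustrative):

```cpp
#include <atomic>
#include <cstddef>

// Atomically claims n consecutive int slots and returns the start of
// the claimed range (the cursor's previous value).
inline int* claim_slots(std::atomic<int*>& cursor, std::ptrdiff_t n) {
    return cursor.fetch_add(n);
}
```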

#### 33.5.8.6 Member operators common to integers and pointers to objects [atomics.types.memop]

```
value_type operator++(int) volatile noexcept;
constexpr value_type operator++(int) noexcept;
```

```
value_type operator--(int) volatile noexcept;
constexpr value_type operator--(int) noexcept;
```

```
value_type operator++() volatile noexcept;
constexpr value_type operator++() noexcept;
```

```
value_type operator--() volatile noexcept;
constexpr value_type operator--() noexcept;
```

#### 33.5.8.7 Partial specializations for smart pointers [util.smartptr.atomic]

#### 33.5.8.7.1 General [util.smartptr.atomic.general]

*Example 1*:

```
template<typename T> class atomic_list {
  struct node {
    T t;
    shared_ptr<node> next;
  };
  atomic<shared_ptr<node>> head;

public:
  shared_ptr<node> find(T t) const {
    auto p = head.load();
    while (p && p->t != t)
      p = p->next;
    return p;
  }

  void push_front(T t) {
    auto p = make_shared<node>();
    p->t = t;
    p->next = head;
    while (!head.compare_exchange_weak(p->next, p)) {}
  }
};
```

— *end example*]

#### 33.5.8.7.3 Partial specialization for weak_ptr [util.smartptr.atomic.weak]

```
namespace std {
  template<class T> struct atomic<weak_ptr<T>> {
    using value_type = weak_ptr<T>;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(weak_ptr<T> desired) noexcept;
    atomic(const atomic&) = delete;
    void operator=(const atomic&) = delete;

    constexpr weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
    constexpr operator weak_ptr<T>() const noexcept;
    constexpr void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
    constexpr void operator=(weak_ptr<T> desired) noexcept;

    constexpr weak_ptr<T> exchange(weak_ptr<T> desired,
                                   memory_order order = memory_order::seq_cst) noexcept;
    constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                                         memory_order success, memory_order failure) noexcept;
    constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                           memory_order success, memory_order failure) noexcept;
    constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                                         memory_order order = memory_order::seq_cst) noexcept;
    constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                           memory_order order = memory_order::seq_cst) noexcept;

    constexpr void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() noexcept;
    constexpr void notify_all() noexcept;

  private:
    weak_ptr<T> p;   // exposition only
  };
}
```

```
constexpr atomic() noexcept;
```

```
constexpr atomic(weak_ptr<T> desired) noexcept;
```


```
constexpr void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
```

```
constexpr void operator=(weak_ptr<T> desired) noexcept;
```

```
constexpr weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
```

```
constexpr operator weak_ptr<T>() const noexcept;
```

```
constexpr weak_ptr<T> exchange(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
```

```
constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
memory_order success, memory_order failure) noexcept;
constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
memory_order success, memory_order failure) noexcept;
```

```
constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
memory_order order = memory_order::seq_cst) noexcept;
```

*Effects*: Equivalent to: return compare_exchange_weak(expected, desired, order, fail_order); where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.

```
constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
memory_order order = memory_order::seq_cst) noexcept;
```

*Effects*: Equivalent to: return compare_exchange_strong(expected, desired, order, fail_order); where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.

```
constexpr void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
```

*Remarks*: Two weak_ptr objects are equivalent if they store the same pointer and either share ownership or are both empty.

```
constexpr void notify_one() noexcept;
```

*Effects*: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.

```
constexpr void notify_all() noexcept;
```

*Effects*: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.

### 33.5.9 Non-member functions [atomics.nonmembers]

A non-member function whose name matches the pattern atomic_*f* or the pattern atomic_*f*_explicit invokes the member function *f*, with the value of the first parameter as the object expression and the values of the remaining parameters (if any) as the arguments of the member function call, in order.

### 33.5.10 Flag type and operations [atomics.flag]

```
constexpr atomic_flag::atomic_flag() noexcept;
```

```
bool atomic_flag_test(const volatile atomic_flag* object) noexcept;
constexpr bool atomic_flag_test(const atomic_flag* object) noexcept;
bool atomic_flag_test_explicit(const volatile atomic_flag* object,
memory_order order) noexcept;
constexpr bool atomic_flag_test_explicit(const atomic_flag* object,
memory_order order) noexcept;
bool atomic_flag::test(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr bool atomic_flag::test(memory_order order = memory_order::seq_cst) const noexcept;
```

```
bool atomic_flag_test_and_set(volatile atomic_flag* object) noexcept;
constexpr bool atomic_flag_test_and_set(atomic_flag* object) noexcept;
bool atomic_flag_test_and_set_explicit(volatile atomic_flag* object, memory_order order) noexcept;
constexpr bool atomic_flag_test_and_set_explicit(atomic_flag* object, memory_order order) noexcept;
bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) noexcept;
```

```
void atomic_flag_clear(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_clear(atomic_flag* object) noexcept;
void atomic_flag_clear_explicit(volatile atomic_flag* object, memory_order order) noexcept;
constexpr void atomic_flag_clear_explicit(atomic_flag* object, memory_order order) noexcept;
void atomic_flag::clear(memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void atomic_flag::clear(memory_order order = memory_order::seq_cst) noexcept;
```

```
void atomic_flag_wait(const volatile atomic_flag* object, bool old) noexcept;
constexpr void atomic_flag_wait(const atomic_flag* object, bool old) noexcept;
void atomic_flag_wait_explicit(const volatile atomic_flag* object,
bool old, memory_order order) noexcept;
constexpr void atomic_flag_wait_explicit(const atomic_flag* object,
bool old, memory_order order) noexcept;
void atomic_flag::wait(bool old, memory_order order =
memory_order::seq_cst) const volatile noexcept;
constexpr void atomic_flag::wait(bool old, memory_order order =
memory_order::seq_cst) const noexcept;
```

```
void atomic_flag_notify_one(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_notify_one(atomic_flag* object) noexcept;
void atomic_flag::notify_one() volatile noexcept;
constexpr void atomic_flag::notify_one() noexcept;
```

*Effects*: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.

```
void atomic_flag_notify_all(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_notify_all(atomic_flag* object) noexcept;
void atomic_flag::notify_all() volatile noexcept;
constexpr void atomic_flag::notify_all() noexcept;
```

*Effects*: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.

`#define ATOMIC_FLAG_INIT `*see below*

*Remarks*: The macro ATOMIC_FLAG_INIT is defined in such a way that it can be used to initialize an object of type atomic_flag to the clear state.

### 33.5.11 Fences [atomics.fences]

Fences can have acquire semantics, release semantics, or both. A fence with acquire semantics is called an *acquire fence*; a fence with release semantics is called a *release fence*.

```
extern "C" constexpr void atomic_thread_fence(memory_order order) noexcept;
```

*Effects*: Depending on the value of order, this operation:

- has no effects, if order == memory_order::relaxed;
- is an acquire fence, if order == memory_order::acquire or order == memory_order::consume;
- is a release fence, if order == memory_order::release;
- is both an acquire fence and a release fence, if order == memory_order::acq_rel;
- is a sequentially consistent acquire and release fence, if order == memory_order::seq_cst.
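The release/acquire bullets above pair up across threads: a release fence before a relaxed store synchronizes with an acquire fence after a relaxed load of the same object. A runtime sketch (the helper name `fence_handoff` is illustrative):

```cpp
#include <atomic>
#include <thread>

// The producer's release fence plus the consumer's acquire fence order
// the plain write to `data` before the main thread reads it.
inline int fence_handoff() {
    int data = 0;
    std::atomic<bool> ready{false};
    std::thread producer([&] {
        data = 42;
        std::atomic_thread_fence(std::memory_order_release);
        ready.store(true, std::memory_order_relaxed);
    });
    while (!ready.load(std::memory_order_relaxed)) {}
    std::atomic_thread_fence(std::memory_order_acquire);
    producer.join();
    return data;
}
```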

```
extern "C" constexpr void atomic_signal_fence(memory_order order) noexcept;
```


### 33.5.12 C compatibility [stdatomic.h.syn]

```
template<class T>
  using std-atomic = std::atomic<T>;            // exposition only

#define _Atomic(T) std-atomic<T>

#define ATOMIC_BOOL_LOCK_FREE see below
#define ATOMIC_CHAR_LOCK_FREE see below
#define ATOMIC_CHAR16_T_LOCK_FREE see below
#define ATOMIC_CHAR32_T_LOCK_FREE see below
#define ATOMIC_WCHAR_T_LOCK_FREE see below
#define ATOMIC_SHORT_LOCK_FREE see below
#define ATOMIC_INT_LOCK_FREE see below
#define ATOMIC_LONG_LOCK_FREE see below
#define ATOMIC_LLONG_LOCK_FREE see below
#define ATOMIC_POINTER_LOCK_FREE see below

using std::memory_order;                        // see below
using std::memory_order_relaxed;                // see below
using std::memory_order_consume;                // see below
using std::memory_order_acquire;                // see below
using std::memory_order_release;                // see below
using std::memory_order_acq_rel;                // see below
using std::memory_order_seq_cst;                // see below

using std::atomic_flag;                         // see below

using std::atomic_bool;                         // see below
using std::atomic_char;                         // see below
using std::atomic_schar;                        // see below
using std::atomic_uchar;                        // see below
using std::atomic_short;                        // see below
using std::atomic_ushort;                       // see below
using std::atomic_int;                          // see below
using std::atomic_uint;                         // see below
using std::atomic_long;                         // see below
using std::atomic_ulong;                        // see below
using std::atomic_llong;                        // see below
using std::atomic_ullong;                       // see below
using std::atomic_char8_t;                      // see below
using std::atomic_char16_t;                     // see below
using std::atomic_char32_t;                     // see below
using std::atomic_wchar_t;                      // see below
using std::atomic_int8_t;                       // see below
using std::atomic_uint8_t;                      // see below
using std::atomic_int16_t;                      // see below
using std::atomic_uint16_t;                     // see below
using std::atomic_int32_t;                      // see below
using std::atomic_uint32_t;                     // see below
using std::atomic_int64_t;                      // see below
using std::atomic_uint64_t;                     // see below
using std::atomic_int_least8_t;                 // see below
using std::atomic_uint_least8_t;                // see below
using std::atomic_int_least16_t;                // see below
using std::atomic_uint_least16_t;               // see below
using std::atomic_int_least32_t;                // see below
using std::atomic_uint_least32_t;               // see below
using std::atomic_int_least64_t;                // see below
using std::atomic_uint_least64_t;               // see below
using std::atomic_int_fast8_t;                  // see below
using std::atomic_uint_fast8_t;                 // see below
using std::atomic_int_fast16_t;                 // see below
using std::atomic_uint_fast16_t;                // see below
using std::atomic_int_fast32_t;                 // see below
using std::atomic_uint_fast32_t;                // see below
using std::atomic_int_fast64_t;                 // see below
using std::atomic_uint_fast64_t;                // see below
using std::atomic_intptr_t;                     // see below
using std::atomic_uintptr_t;                    // see below
using std::atomic_size_t;                       // see below
using std::atomic_ptrdiff_t;                    // see below
using std::atomic_intmax_t;                     // see below
using std::atomic_uintmax_t;                    // see below

using std::atomic_is_lock_free;                 // see below
using std::atomic_load;                         // see below
using std::atomic_load_explicit;                // see below
using std::atomic_store;                        // see below
using std::atomic_store_explicit;               // see below
using std::atomic_exchange;                     // see below
using std::atomic_exchange_explicit;            // see below
using std::atomic_compare_exchange_strong;      // see below
using std::atomic_compare_exchange_strong_explicit;  // see below
using std::atomic_compare_exchange_weak;        // see below
using std::atomic_compare_exchange_weak_explicit;    // see below
using std::atomic_fetch_add;                    // see below
using std::atomic_fetch_add_explicit;           // see below
using std::atomic_fetch_sub;                    // see below
using std::atomic_fetch_sub_explicit;           // see below
using std::atomic_fetch_and;                    // see below
using std::atomic_fetch_and_explicit;           // see below
using std::atomic_fetch_or;                     // see below
using std::atomic_fetch_or_explicit;            // see below
using std::atomic_fetch_xor;                    // see below
using std::atomic_fetch_xor_explicit;           // see below

using std::atomic_flag_test_and_set;            // see below
using std::atomic_flag_test_and_set_explicit;   // see below
using std::atomic_flag_clear;                   // see below
using std::atomic_flag_clear_explicit;          // see below
#define ATOMIC_FLAG_INIT see below

using std::atomic_thread_fence;                 // see below
using std::atomic_signal_fence;                 // see below
```

Each *using-declaration* for some name `A` in the synopsis above makes available the same entity as `std::A` declared in `<atomic>`.

Each of the *using-declaration*s for `intN_t`, `uintN_t`, `intptr_t`, and `uintptr_t` listed above is defined if and only if the implementation defines the corresponding *typedef-name* in [atomics.syn].

*Recommended practice*: Implementations should ensure that C and C++ representations of atomic objects are compatible, so that the same object can be accessed as both an `_Atomic(T)` from C code and an `atomic<T>` from C++ code.

### Feature test macro

## 17.3.2 Header `<version>` synopsis [version.syn]

`#define __cpp_lib_constexpr_atomic 2024??L`

## Implementation experience

This was implemented in libc++ and Clang by adding `constexpr` in the needed places and making the atomic builtins usable in the constant evaluator.

## Impact on existing code

None: currently `std::atomic` and `std::atomic_ref` cannot be used in constant-evaluated code at all, so no existing code changes meaning.